
The EU Web Accessibility Directive: A practical guide for the public sector 

Jan 14 2020

New legislation from the EU means that public sector websites must comply with the EU Web Accessibility Directive if they are launched after 23rd September 2019. For existing websites, public sector organisations have slightly longer to make their services accessible to everyone. With the next deadline for existing sites in September 2020, not too far away, this guide explores the practical steps organisations can take to comply. 

What is the EU Web Accessibility Directive?

An estimated 80 million people in the EU live with a disability, making it more necessary than ever to ensure everyone has equal access to digital products and services. The EU Web Accessibility Directive is a new piece of legislation which aims to consolidate accessibility standards, making web accessibility a legal requirement.

The Directive requires that member states have processes in place to “ensure that public sector bodies take the necessary measures to make their websites and mobile applications more accessible”. 

As with most legislation, there are some exclusions which apply. These include broadcasters, some schools and nurseries, and private organisations, along with non-government organisations (such as charities) which “provide services that are not essential to the public or services that do not specifically address the needs of persons with disabilities”.

Unlike the Web Content Accessibility Guidelines (WCAG), the Directive does not include rules about how to make websites and mobile applications accessible. However, the four WCAG principles which provide the foundation for accessibility (perceivable, operable, understandable and robust) are present throughout. This begins to unify the digital accessibility standards for EU member states, by putting WCAG at the core.

How does this affect public sector organisations?

Any new websites launched after 23rd September 2019 must meet accessibility standards and must have an accessibility statement.

If a public sector organisation launched a website before 23rd September 2019, the website must meet the accessibility standards by 23rd September 2020. Improving accessibility for an existing website is notoriously more difficult than building with accessibility in mind from the beginning. Organisations with older sites therefore have slightly longer to meet the required standards.

Accessibility standards also apply to mobile apps; however, organisations have until 23rd June 2021 to meet that deadline.

While organisations are progressing in their efforts to make services more accessible, it’s clear there is a lot more to do for existing websites and apps - and they are running out of time. With the next deadline approaching rapidly, what practical steps can public sector bodies take to raise their accessibility game?

Audit your websites

A recent survey revealed that 40% of local authority homepages aren’t accessible to people with disabilities. Organisations must therefore first understand how an existing website is performing in terms of accessibility.

Common issues include:

  • failing to provide a good heading structure and links with sufficient context
  • not using visible indicators to show where the keyboard focus is
  • not adding skip links to jump over repetitive page content
  • not having sufficient contrast between text and its background

Completing an accessibility audit will help find where there are barriers for people with disabilities and help plan any remediation work necessary to meet the standards.
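Automated checkers can provide a useful first pass before a full manual audit. As an illustrative sketch, using pa11y, one of several open-source accessibility-testing CLIs (the URL is a placeholder):

```shell
# Illustrative only: a first-pass automated accessibility check with pa11y.
# The URL is a placeholder. Automated tools surface only some WCAG failures
# (e.g. missing alt text, low contrast); a manual audit with assistive
# technology is still essential.
npx pa11y --standard WCAG2AA https://www.example.gov
```

Results like these can feed directly into the remediation plan and the accessibility statement discussed below.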

Add an accessibility statement

The Directive set a deadline of 23rd December 2018 for public sector organisations to add an accessibility statement to their websites. 

When an accessibility audit or accessibility evaluation has taken place, the results from the audit can be used to help write an accessibility statement. The accessibility statement should be regularly updated and the W3C suggests including the following as a minimum:

  • A commitment to accessibility for people with disabilities
  • The accessibility standard applied, such as WCAG 2.1
  • Contact information in case people encounter problems
  • Any known limitations of the website, to avoid frustrating your visitors
  • Measures taken by your organization to ensure accessibility

Make the content as accessible as possible

The most difficult stage for any organisation is making the content as accessible as possible. While building accessibility in from the start of a web development project is always the best route to ensure your services are accessible to everyone, it is vital the team responsible for planning, updating and writing content are also committed to high accessibility standards.  

Web accessibility isn’t a one-time task; it’s an ongoing commitment. Website content and development work need to be continually monitored and updated to maintain compliance. Performing regular audits or evaluations and asking people with disabilities to test products and services keeps websites and apps as accessible as possible.

This article originally appeared on Open Access Government published on the 10th December 2019. 

Can we help you on your accessibility journey with a healthcheck, audit or consultancy? Get in touch

Feb 22 2018

In this edition of E3 Takeaways, Business Development Strategist Zach talks Drupal 8 and why its flexible workflows, ability to integrate, and internationalization suite of modules work so well for the SaaS industry.

[embedded content]

Hello there, I'm Zach Ettelman, Business Development Strategist here at Elevated Third. Today I'm going to talk about why Drupal is a good fit for software as a service companies. 

Takeaway #1: The old adage "content is king" is not dead and that means companies are still having to create a ton of content. With so many content cooks in the kitchen that means there are many approval layers to even get the smallest piece of content onto your site. Thanks to Drupal's content authorization workflow, we can personalize and customize it to meet the needs of your team. 

Takeaway #2: In today's digital world, virtually every SaaS organization is serving an international audience. Thanks to Drupal 8's internationalization suite of modules, we can now translate and localize content based on where the product's services are available for your offerings. 

Takeaway #3: SaaS companies leverage a multitude of tools in their digital arsenal to get daily business operations done, including CRM, marketing automation tools, and inventory management tools. Drupal 8 specifically is built API first making it easy to handle all these third-party integrations. Thanks to contributed modules and custom modules, we can keep up with your daily business operations and connect them to your digital website. 

Oct 17 2017

In this edition of 3 Takeaways, our Business Development Strategist, Nelson Harris, reviews Drupal 8 and how the latest improvements help get more out of the box, leverage mobile, and upgrade smoothly.


[embedded content]


Hi, I’m Nelson Harris, Business Development Strategist at Elevated Third. A question I get a lot from people is “what’s new and interesting about Drupal 8, and why might I upgrade?” There are a lot of reasons why you might want to upgrade to Drupal 8, but I’m just going to list three of them.

Takeaway #1: First, you get more out of the box.

There are a lot of useful modules built into Drupal 8 core: things like Views, multilingual support, a WYSIWYG editor, and more field types. This means you can spend less time configuring and installing modules, and more time working on your site.

Takeaway #2: Second of all, mobile is in its DNA.

Built-in themes are all responsive and adapt well to different screen sizes. Tables will scale, and the new admin toolbar is really good on mobile devices. Chances are, you’re probably watching this video on the screen of your mobile device right now, so you can imagine why mobile might be important.

Takeaway #3: Finally, it’s built to be more future proof.

Where an upgrade from 7 to 8 or 6 to 7 requires scrapping your codebase and starting all over from scratch, Drupal 8 is designed to go from 8 to 9 and 9 to 10 more seamlessly, more like an update patch than starting over. An investment in Drupal 8 really means investing in your website, because it's going to be easier to upgrade in the future.

Jul 31 2017

Drupal is a comprehensive content management framework which offers a number of modules to extend its functionality, with frequent updates. If you want to know about ‘Drupal Maintenance’, then you need to know about the following things:

  • Security updates
  • Version upgrades (from Drupal 6 to Drupal 7 or Drupal 8)
  • Migration from a legacy system

Security updates

Updating refers to taking your site from one 'minor' version to another. For example, moving a Drupal 7 site from 7.1 to 7.2, or a Drupal 8 site from 8.1.0 to 8.1.3, is an update.

Version Upgrades

Upgrading refers to taking your site from one 'major' version to another. Switching from Drupal 6 to Drupal 7, from Drupal 6 to Drupal 8, or from Drupal 7 to Drupal 8 is an 'upgrade'.

Drupal Migration

You may want to 'migrate' your site from running locally on your computer to an online web host, or from one web host to another. If that is what you want, head to: Backing up and migrating a site.

Steps to update Drupal 7 core

  • Log in as a user with administrator privileges who can perform "Administer software updates".
  • Put your site into maintenance mode, i.e. if you are updating a live/production site, go to Administration > Configuration > Development > Maintenance mode, enable the "Put site into maintenance mode" checkbox, and save the configuration.
  • Back up your files and database. Clear the performance cache and delete any watchdog logs before taking the database backup.
  • Remove all old core files and directories, except for the 'sites' directory, the original install profile in the 'profiles' directory, and any custom files (like .htaccess, .gitignore, or other files) you added elsewhere.
  • To be more specific, in your Drupal root directory, delete all files and the following directories: includes, misc, modules, scripts, and themes. If you made a normal installation, also delete the profiles folder; if you used a custom profile, delete only the subfolders minimal, standard, and testing inside profiles.
  • Modifications to files like .htaccess or robots.txt need to be re-applied from your backup after the new files are in place.
  • Read the release announcement carefully before proceeding, as the update may also include changes to settings.php. If you have to use the new settings.php file, copy default.settings.php to settings.php and re-apply the site-specific entries (database name, user, and password) from the backup file.
  • Any custom files and directories outside the sites directory have to be restored manually. However, it is recommended not to keep custom files or directories outside the sites directory.
  • Download the latest Drupal 7.x core release from http://drupal.org/project/drupal to a directory outside of your web root. Extract the archive and copy the files into your Drupal directory.
  • After copying the new core files, run the update script by visiting http://YOURSITENAME/update.php. This updates the core database tables if any table structures have changed.
  • You must be logged in as an administrator to access update.php. If you logged in as an admin user before starting the update, you will have access to update.php automatically. If you were logged out or closed the browser by mistake, you can do the following to access update.php:

           Open settings.php with a text editor.
           Look for the $update_free_access variable:
           $update_free_access = FALSE;
           Change the access to TRUE:
           $update_free_access = TRUE;
           Now run update.php.

  • Once the update is complete, check for any errors. $update_free_access must be changed back to FALSE for security reasons.
  • Clear the site's performance cache after the update. You can do this by navigating to Configuration > Development > Performance and clicking "Clear all caches".
  • Check the status report and recent log messages to verify that everything is working as expected.
  • Once you are error free, take the site back online by unchecking the "Put site into maintenance mode" checkbox and saving the configuration.
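The manual steps above can also be driven from the command line. As a rough sketch, assuming the Drush command-line tool is installed and run from the site's docroot (commands shown are the Drupal 7-era Drush names):

```shell
# Sketch only: a Drush-based version of the manual core update steps above.
# Assumes Drush is installed and the commands run from the site's docroot.

drush vset maintenance_mode 1                # put the site into maintenance mode
drush sql-dump > ../backup-$(date +%F).sql   # back up the database first
drush pm-update drupal                       # fetch and apply the new core release
drush updatedb                               # run update.php's database updates
drush cache-clear all                        # clear the performance cache
drush vset maintenance_mode 0                # take the site back online
```

Back up the codebase separately before running `pm-update`, just as in the manual steps.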

Tips for Drupal site maintenance

  • Keep Drupal core up to date
  • Keep the contributed Drupal modules up to date
  • Check for security updates. The Update Manager module reports the latest available and recommended releases for Drupal core and its contributed modules
  • Configure a cron scheduler so that periodic site maintenance takes care of regular checkups and updates
  • Keep the admin user password secure
  • Use the check_plain() function when outputting values from form post fields to avoid cross-site scripting
  • Write-protect the settings.php file.
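Several of these routine checks can be scripted as well. A minimal sketch, again assuming the Drupal 7-era Drush tool is installed:

```shell
# Sketch only: routine maintenance checks with Drush (Drupal 7 era).
# Assumes Drush is installed and run from the site's docroot.

# List pending releases for core and contributed modules,
# restricted to security updates:
drush pm-updatestatus --security-only

# Run the periodic maintenance tasks that cron normally triggers:
drush core-cron
```

Wiring these into a scheduled job keeps the update checks from depending on someone remembering to log in.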
Jun 26 2017

Big congratulations to Tanner Langley, our Senior Drupal Developer turned Acquia Certified Grand Master. He has earned the highest ranking Drupal certification.

Elevated Third is lucky to now have 3 of the world’s 150 Acquia Certified Grand Masters. Our Grand Master Drupal developers, Nick Switzer, Michael Lander, and Tanner Langley are incredibly important to our implementation workflow. They work on every website we launch.

In order to help everyone understand the significance of this certification, we’ll discuss what a Grand Master is, what the certification process looks like, and the value a Grand Master provides.


What is an Acquia Certified Grand Master?

Let’s take a step back to understand Acquia’s role in the certification process. Acquia is the leading cloud platform company for Drupal websites, co-founded and led by Dries Buytaert, the creator of Drupal, and it serves as the administrator and regulator of the premier professional certification program for Drupal. Having the creator of Drupal behind the program lends it enormous credibility and demonstrates Acquia’s commitment not only to its platform but to the whole Drupal community. With more knowledgeable and engaged developers bettering the community, Drupal’s power increases, which in turn supports the companies that leverage Drupal for their digital experiences.

To become an Acquia Certified Grand Master a developer has a one-year timeframe to complete three certification exams: Certified Developer, Certified Front End Specialist, and Certified Back End Specialist. Developers interested in taking the exams and becoming Grand Master certified can take the test online at any time, at testing centers across the world, or during DrupalCon, where most developers take the tests. 


Why Does It Matter?

Passing three of the hardest Drupal certification exams is a daunting task, and those who pass should be applauded. But how do the time and effort put into passing these certification exams translate to project success?

When working with a Grand Master Drupal developer, you are getting one of the top developers in the world, someone who keeps Drupal best practices and efficiency in mind. Staying up to date on industry trends and best practices gives them a unique perspective on the next Drupal challenge.

Most importantly, our clients gain an incredible competitive advantage having access to Grand Master developers on their projects. Top Drupal talent in the industry gives our clients access to developers that are always looking for new ways to innovate and develop modules that deliver never before seen functionality within Drupal.

Our Grand Master certified developers recently used their knowledge to develop a fully decoupled Drupal project that has been highly successful for our client and serves as a benchmark in the industry.

Let's take a deeper look into what is required of an Acquia Certified Grand Master. 


Acquia Certified Developer Exam

The most general of the three exams focuses on fundamental web concepts, site building, front end development (theming), and back end development (coding). Specifically, the exam tests a developer’s level of knowledge and ability to:

  • Set up and configure new Drupal sites
  • Develop and implement new Drupal modules and themes
  • Customize and extend existing modules


Acquia Certified Front End Specialist Exam

This test specifically focuses on a developer’s skills and knowledge of Drupal front end theming, including:

  • Fundamental web development concepts, HTML, CSS, Javascript, PHP, jQuery
  • Theming concepts like custom regions, theme configuration, stylesheets, breakpoints, sub-themes
  • Templates and preprocess functions, Twig syntax, templating, form alters, and template suggestions
  • Layout configuration, Blocks, views, and the Responsive Image module
  • Performance/security, analyzing and resolving site performance/security issues from site configuration and custom themes

Acquia Certified Back End Specialist Exam

This exam validates the skills and knowledge of building and implementing Drupal solutions through module development. This test focuses on:

  • Fundamental web development concepts, HTML, CSS, Javascript, PHP programming, managing dependencies using Composer, Git for version control, and Automated Testing concepts
  • Drupal core APIs: registering paths for URL requests using the routing system and Menu API, building and validating forms using the Form API, interacting with the Entity system using the Entity API, and the ability to use core APIs to build and extend Drupal functionality
  • Debugging code and troubleshooting
  • Theme integration
  • Performance
  • Security
  • Leveraging the community by contributing modules back to the Drupal community, and the ability to write code using Drupal Coding Standards

To know there is a Grand Master on your project is to know your project will be influenced by someone with a well-rounded background in the entire Drupal development landscape, not just specialization in front end or back end development. This well-rounded background helps the developer better understand how the different pieces of a project come together to create a powerful, flexible, and scalable digital platform.

Quite simply, having Grand Masters on our team ensures our clients get the top talent in the industry to develop their sites and produce extraordinary digital experiences for their own customers. 

If you are looking for top Drupal talent for your project, let's talk

Jun 20 2017

We teamed up with Acquia to present “A Decoupled Drupal Story: Powdr Gives Developers Ultimate Flexibility To Build Best CX Possible.” The webinar aired in June but you can view the recording here anytime.

[embedded content]

As the internet and web-connected devices continue to evolve, so do the needs to develop and render content. Given today’s rate of change, organizations are using decoupled architecture to build applications - giving them flexibility to accommodate any device or experience.

In this session, we’ll cover Powdr, a ski resort holding company. To give its developers the freedom to use the right front-end tools needed for any given use case, Powdr built its 17 ski resort websites on one decoupled Drupal platform. Join Elevated Third, Hoorooh Digital and Acquia to learn:

  • How a custom Drupal 8 solution cut implementation time in half vs Drupal 7

  • The ease with which Drupal 8’s internal REST API can be extended to fit a client's needs

  • The details of handling non-Drupal application routing on Acquia's servers


If you are considering a decoupled Drupal implementation, let’s talk.

Jun 08 2017

Every day, cultural institutions face unique challenges while working towards a mission with a small technical staff and digital budget. Limited money, revolving staff, and regulatory pressures require visionaries to think ahead of their competition to build a digital presence that doesn’t tie their hands with expensive proprietary licenses and high maintenance code. 

So often, cultural nonprofits feel pain triggered by the decision of another department. When technology solutions like ticketing, donor and membership management, point-of-sale, email marketing, and content management are selected without cross department communication, they won’t integrate. This causes struggles big and small like: 

  • Extracting or inputting data 
  • Battling with vendors to get even the most innocuous tracking code installed
  • Making the public-facing user experience feel seamless
  • Customizing the look and feel of simple things, like forms and checkout screens
  • Keeping content (like event descriptions) consistent across systems

After more than 13 years working with nonprofit and cultural institutions, like the Denver Botanic Gardens, we’ve seen that these problems are epidemic. Drawing from experience, we have a few ideas about why that could be, and how Drupal can help. 

Mind Shift: Expense vs. Investment

The big problem is that most nonprofit web design projects are considered a one-time expense, instead of a long term investment. 

Expenses are a one-time cost with a start and end-date. Once a purchase has been made, it is scrutinized as an operating cost by the board and the finance committee, often dubbed a ‘necessary evil.’

Investments require long term, strategic thinking. They receive ongoing budget priority and dedicated resources. They, like an employee, are expected to make money and be accountable.

When a for-profit company spends money on the development of a new product or venture, they bank their business on it. They set goals and expect it will eventually enjoy returns that will help the company grow. 

Treating technology spend as an investment rather than an expense can position a nonprofit to be more strategic about its vendor selection, increase direct revenues from a nonprofit website design and generate longer term buy-in from leadership.

How can nonprofits make the shift?

1. Invest in open source. Open source software differs from platforms provided by Microsoft, Adobe, etc. in that it doesn't cost anything to license and use. It also means you can pick up your site and take it to any vendor. Because it's open source, Drupal is updated and maintained by a large global community of developers (a lot like Wikipedia). This means that when a new social media platform becomes popular, for example, the community can create an integration within a matter of days or even hours.

2. Make integration-focused software a priority. Own the technology. Don’t let the technology roadmap be dictated by whether another company thinks a feature is important. Pick vendors by their commitment to playing nicely with other tools, not by how many out-of-the-box features they have, and always, always make APIs a priority. Less flexible platforms have a hard time integrating with third-party software; Drupal can integrate with almost any platform, regardless of how old or specific, and works well with tools like Salesforce, HubSpot, Marketo, and countless more.

3. Learn how well Drupal works for nonprofits. It is a scalable content management and system integration platform of choice. Trusted by institutions like Greenpeace, LACMA, The Red Cross, and The White House, Drupal offers the ability to integrate with enterprise solutions like Blackbaud/Convio, Magento and other commerce platforms, ticketing systems like Galaxy and Tessitura, and tools that haven’t been invented yet. Integrations, scalability, and speed to market are all things to keep in mind when selecting digital tools.

4. Think in terms of conversions. Measure. Technology tools should save or make money, directly or indirectly. Have higher expectations of a ticketing system, a content management system, or a volunteer management system. Figure out which outcomes are valuable and can be tracked as “conversions”. Assign value to non-monetary outcomes so gain and ROI can be calculated.

For example: 

A volunteer may not be a revenue line, but recruiting someone takes valuable staff time. Calculate how the website can do some of that work for you. 

  • Volunteer value - $50 each 
  • New dynamic volunteer signup form - $1,200 
  • Result? 30 more recruits than usual 
  • ROI: ((30 x $50) - $1200) / $1200 = 25%

25% return? Not bad.

Managing and reconciling event information across all website platforms can be cumbersome and require tons of time by a content manager. 

  • Staff cost - $40/hour
  • Manual ticketing effort for event - 120 hours
  • Calendar API integration - $2000
  • Automated ticketing effort for event - 40 hours
  • Annual Savings: ((120 x $40) - ((40 x $40)+$2000)) = $1,200

$1,200 savings every year? Nice.
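The two worked examples above can be checked with quick shell arithmetic; the figures are taken straight from the bullets, and this is shown only to make the formulas explicit:

```shell
# ROI on the volunteer signup form: ((gain - cost) / cost) * 100
# gain = 30 recruits x $50 each; cost = $1,200 for the form
roi=$(( (30 * 50 - 1200) * 100 / 1200 ))
echo "ROI: ${roi}%"          # ROI: 25%

# Annual savings from the calendar API integration:
# manual effort cost - (automated effort cost + integration cost)
savings=$(( 120 * 40 - (40 * 40 + 2000) ))
echo "Savings: \$${savings}" # Savings: $1200
```

The same pattern, value per outcome times outcomes, minus cost, works for any trackable conversion.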

5. Keep your staff happy. Drupal is built to make sense to users of any technical skill level, and the admin interface can be optimized for any type of workflow. The interface can even be customized to look like other systems that users may be more familiar with. Content edits can be made easily, and Drupal can be configured to allow for revisions and approval from multiple content editors with various permission levels.

6. Don’t forget hosting. During a nonprofit web design project and throughout the life of a website, it is important to have the support of a reputable hosting company. A Drupal-specific hosting company, like Acquia, offers a comprehensive bundle of support and integrated hosting services, which, as a long-term investment, can save thousands. For a nonprofit, reliable maintenance and security is unmatched.

Drupal is a long term investment because it can be scaled as a nonprofit institution grows. It can save time, money, and hassle, especially when paired with a top-notch hosting platform, like Acquia. In our tenure working with nonprofits like the Denver Botanic Gardens, The NFPA, and the Colorado General Assembly, we’ve solved many problems using Drupal. If your nonprofit or cultural institution could use an overhaul, contact us

Apr 18 2017

Fairfax County Public Schools (FCPS) is the largest school system in Virginia and the 10th largest in the United States, with more than 200 schools and centers serving 186,000 students. To keep this large community of students, parents, teachers, employees, and the general public informed, FCPS is building out a network of 195 school websites.

Over time and without a unified and modern content management system, FCPS faced numerous obstacles to managing its content and effectively communicating with its audiences. The school system engaged Forum One to help realize its vision of a modern enterprise web platform that connected their sites centrally. Harnessing the content creation and syndication powers of Drupal with Pantheon Custom Upstreams, Forum One developed a platform that enables FCPS to deploy, manage, and update this network of school websites from a common codebase and to easily share news, events, and alerts from a central source.

I’m Brooke Heaton, a senior developer at Forum One, and I helped lead the development of our solution for the FCPS system. In this post, I’ll discuss how we worked with Pantheon to devise a powerful solution that met a number of critical needs for FCPS. I’ll outline the modules we used to scaffold each school site starting from a Pantheon Upstream, and I’ll also dig into the tools and practices we used to quickly deploy multiple sites.

One Codebase Upstream, Dozens of School Sites Downstream

Getting the solution right for FCPS required a long-term vision that would meet a range of needs in a sustainable and scalable way. Our solution needed to:

  • Provide a common CMS that is user friendly, highly efficient, and cost effective

  • Modernize the FCPS brand with an updated visual identity

  • Syndicate central communications for multiple, diverse audiences

  • Quickly scaffold and deploy numerous new sites with common menu items, homepage and landing pages, content importers, and a unified user interface

  • Add or remove users from a central source

While a Drupal “multisite” approach could have met many such needs, experienced Drupalists can attest that the multisite approach has been fraught with issues. We opted instead to harness Pantheon’s Custom Upstream workflow, which allowed us to unify all sites with a common codebase while also customizing each site with each school’s own configuration—allowing them to display their individual school name, logo, custom menu items, unique landing pages, and their own users.

Utilizing Upstreams, we are also able to continually develop core functionality and new features, and then propagate these updates to downstream school sites.


Code Propagation: Upstream code changes are merged to downstream school repositories.

Our solution also established a content strategy and governance model that allows FCPS administrators to syndicate content from a central point—the central FCPS site—to individual schools, harnessing Drupal 8’s core Views and Migrate modules along with the contributed Feeds module.


Content Propagation: Up-to-the minute News, Events, and Blog updates are syndicated by the central FCPS website to all school sites and imported by Feeds or Migrate.

mobile view of FCPS site

Creating Turn-Key Sites with Drupal 8

To get where we wanted to go, the Forum One dev team utilized a suite of powerful Drupal 8 core and contributed modules that generate default content, menus, taxonomy terms, and blocks—even images!—immediately after spinning up a new site from the Pantheon Upstream.

Key modules we used include:

  • Configuration Installer: An installation profile that imports Drupal 8 configuration from a directory, so that all of the core configuration of a standard FCPS school site is in place from the start.

  • Default Content: The ability to create generic and school system-wide content that cannot be saved in configuration and to export it in JSON format. Upon site deployment, the Default Content module scans the modules directory for any exported default content and adds it to the database. Brilliant!

  • Migrate: Now integrated into Drupal core, the Migrate module allows us to import complex content from a central source, whether CSV or XML. While the Drupal 8 Feeds module continues to mature, the Migrate module combined with Migrate Plus and Migrate Source CSV provides powerful tools to import content.

  • Views: Also part of Drupal core, the Views module provides powerful content syndication tools, including the ability to produce content “feeds” in JSON, CSV, or XML format. This allows content from the central FCPS site to be broadcast in a lightweight data format, then imported by the Feeds module on each individual school site. Triggered regularly by a cron job, Feeds instances import the latest system-wide News, Events, and Blog posts.

The FCPS Install Profile

With a solid foundation in a central FCPS site (hat tip to F1 Tech Lead Chaz Chumley), we were able to clone the central site and then ‘genericize’ it so that it could be customized by each school. We removed site.settings from config to prevent Upstream code from overwriting downstream site settings, such as the site name, URL, email, etc. We also developed a special solution in the theme settings to allow schools to adjust their site colors with a custom theme color selector. Bring on the team spirit!

With this genericized starting site in code and all pages, menus, taxonomy terms, image files, and blocks exported via Default Content, we were ready to deploy our code to the Upstream. To set up the Upstream, we placed a request to Pantheon with some basic information about our Upstream and within minutes, we were ready to deploy our first FCPS school site. Huzzah!

Spinning up a New FCPS School Site from Pantheon’s Upstream

With the installation profile complete and our codebase hosted on Pantheon’s Upstream, we could now create our first school site and install Drupal from the profile. While sites can be created via Pantheon’s UI, we sped up the process using Pantheon’s Terminus CLI tool and Drush to quickly spin up multiple sites while meticulously tracking site metadata and settings. Pantheon’s helpful documentation illustrates just how easy this is:

$ terminus site:create --org=1a2b3c4d5e6f7g8h9i10j11k12l13m14n15o our_new_site 'Our New Site Label' o15n14m13l12k11j10i9h8g7f6e5d4c3b2a1 

The above command uses the format site:create [--org [ORG]] [--] <site> <label> <upstream_id>

[notice] Creating a new site...

With a site created from the Upstream codebase, the next step was to update our local Drush aliases via Terminus so that we could install Drupal from our configuration_installer:

$ terminus sites aliases
[2017-04-05 00:00:00] [info] Pantheon aliases updated 

Then we run drush sa to identify the new DEV site alias: 

$ drush sa

@pantheon.our-new-site.dev  ← we will use this alias to install the site on DEV


With the alias for our newly created site we install Drupal from the Upstream using Drush:

drush @pantheon.our-new-site.dev si config_installer -y --notify --account-name=administrator --account-pass=password \
 config_installer_site_configure_form.account.name=admin \
 config_installer_site_configure_form.account.pass.pass1=admin \
 config_installer_site_configure_form.account.pass.pass2=admin \
 config_installer_site_configure_form.account.mail=[email protected]

The above command fully installs Drupal using our configuration and automatically adds the default content exported to our custom module.

Once Drupal was installed on Pantheon, we added site editor accounts for FCPS and trained school staff in customizing their site(s).


The initial batch of FCPS school sites after deployment

What about updates?

After we deployed the first fifteen FCPS school sites and made them live to the world, it was inevitable that changes would soon be needed. Beyond the normal module and core updates, we also faced new functional needs and requests for enhancements (and yes, maybe even a few bugs that crept in), so we needed to update our ‘downstream’ sites from the Pantheon Upstream. To do so, we merged our changes into our Upstream ‘master’ branch, and within a few minutes each site displayed a message showing that ‘Upstream updates are available’. This meant that we could merge Upstream changes into each school repository.

Upstream updates screenshot

To merge these changes into each site, we once again opted for the faster approach of using Pantheon’s Terminus CLI tool to quickly accept the Upstream changes and then run update.php on each of the school sites.

$ terminus site:list --format=list | terminus site:mass-update:apply --accept-upstream --updatedb --dry-run

$ terminus site:list --format=list | terminus site:mass-update:apply --accept-upstream --updatedb

To import the modified Drupal configuration, we also had to run a drush cim --partial on each site—a step that can be further automated in deployments by harnessing yet another brilliant Pantheon tool: Quicksilver Platform Hooks. With a Quicksilver hook to automatically run a Drupal partial configuration import after code deployments, we are able to remove another manual step from the process, further speeding up the efficiency of the platform.
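For reference, the per-site configuration import can be scripted in the same spirit as the mass update above. The loop below is a sketch rather than our exact invocation: terminus remote:drush is current Terminus syntax for running Drush remotely, and the .dev environment suffix is an assumption.

```shell
# Sketch: run a partial config import on every site's dev environment.
# Assumes the Terminus CLI is installed and authenticated.
if command -v terminus >/dev/null 2>&1; then
  for site in $(terminus site:list --format=list); do
    terminus remote:drush "${site}.dev" -- config:import --partial -y
  done
fi
```

With the Quicksilver hook in place, this loop becomes unnecessary, since the import runs automatically after each deploy.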

Is this for you?

Combining Pantheon’s Custom Upstreams with Drupal 8 and its contributed modules can unleash a very powerful solution for nearly any organization that includes sub-organizations, branches, departments or other entities. Universities, public schools, government agencies, and even private enterprises can leverage this powerful tool to maintain strict control over “core” functionality while allowing for flexibility with individual sites.


Apr 10 2017

We homed in on Drupal nearly a decade ago. It is built into everything we do, from design, to project management, to development (obviously). Which means that every year a group of us pack up and head to DrupalCon. This year, we are sending more E3ers to Baltimore than we’ve sent to any other DrupalCon, for two awesome reasons:

1. Backed by Drupal, we are growing quickly but carefully. More employees mean more tickets to DrupalCon 2017.

2. We’ve had FOUR sessions accepted to the DrupalCon 2017 lineup.


DrupalCon 2017 Baltimore Logo


We are thrilled! As a Drupal champion, Acquia Preferred Partner, and top Denver agency, we’ve felt huge momentum in the Drupal community this year. We are excited to share a few of our secrets with the DrupalCon 2017 attendees. Come see us!


A Photo of Melissa

Data Driven Design for Better Business

Melissa Dufresne, UX/Designer

April 26th at 2:15 pm

We do not create a design based on a client’s favorite color – and it is rare that we’d ever be persuaded to do so. At Elevated Third, all our design decisions are based on data. We run tests, do research, and generate numbers to support almost every decision our design team makes.

It is difficult, if not impossible, for clients to argue with cold, hard facts. So, we put in the leg work upfront instead of spending time making rounds of revisions for indecisive clients. We show, in quantifiable terms, what their customers respond to and build the design around those results. For this reason, our clients are happier, our relationships are smoother, and our business is more efficient.

Client decisions happen faster when we can offer them the best option possible, backed up by numbers. The largest Drupal 8 site we’ve built to date launched on time and under budget based on this approach. Our UX Designers and Content Strategists can set up quick tests to prove why one icon or label should be selected over another, so that our clients decide based on numbers, not preference. This tactic has made Elevated Third more profitable and efficient, helping our small agency grow.


Photos of Jeff, Nick, and Nelson

New Business is Everyone's Responsibility

Jeff Calderone, CEO/Founder

Nick Switzer, Development Director

Nelson Harris, Business Development Strategist

April 26th at 2:45 pm

Creating a sense of shared responsibility for new business, and developing an environment where it is a whole-agency discipline, is critical. We will talk about our approach over recent years and how we have shifted our process to reflect this idea. Our agency, Elevated Third, has learned a great deal about how to improve this aspect of our business through various experiences, which we will highlight to illustrate this point.

We will explain how both design and creative teams, as well as the technical team, are involved in evaluating opportunities, estimating, creating a proposal and presenting a final pitch. The group presenting includes our CEO, development director, and a member of our business development team. We want to provide useful information and highlight real examples that can help other organizations using Drupal in making sure that the sales team does not live in a silo and that everyone has a role in winning new clients.

Our goal is to show the details of our process to provide as much actionable information as possible. Our team will present the deliverables that we use to explain exactly how we connect the different groups within our agency to create a fluid and inclusive procedure. We want to show the “how” and provide attendees with the tools to execute within their own organizations.

Photos of Nick and Kylie

Agile with a Lowercase 'a': The Art of Collaborative Project Management

Nick Switzer, Development Director

Kylie Forcinito, Account Manager  

April 26th at 5:00 pm

As a small agency, we are always striving to be more efficient and maximize the tools we have. How can we work smarter not harder, and spend more time focused on our client’s business problem? We chose to develop and follow a process rooted in agile, but with a nimble approach that requires dev, design, UX, strategy, and account to individually contribute to the success of the project.

Join Kylie Forcinito, an accomplished account manager and veteran of the agency world, and Nick Switzer, development team lead with 7 years of Drupal and agency experience, to learn how they came together to produce a version of agile tailored to the world of budget constraints, short timelines, limited resources, and required deliverables. Come see how to collaboratively plan and execute a large development project with a small team - all disciplines are welcome!

Questions we’ll address in this talk include:

  • How can I involve my whole team in project planning and scoping without blowing my budget?

  • How do you successfully manage a project without a PMP certified scrum master?

  • What is the best way to keep the team focused on the bigger picture without losing track of project details?

  • How much documentation do we need to successfully provide working software?

  • How do I satisfy my client’s contract and budget needs, while keeping collaboration a priority?


Photo of Anthony

Atomic Design in Drupal 8: Isolating frontend workflow with Pattern Lab!

Anthony Simone, Drupal Developer

April 26th at 10:45 am

Drupal 8 has allowed for the integration of modern workflows into the Drupal community. The transition to Twig as the templating engine specifically provides the space to integrate new patterns and tools into your frontend workflow. Pattern Lab is a static site generator that provides a structure for developing a templating and theming framework based on atomic design. The Twig version of Pattern Lab, along with the Data Transform plugin written by Aleksi Peebles, creates the possibility to integrate Pattern Lab directly into your Drupal project.

This session will review the basic principles of Pattern Lab and atomic design but will focus on the practical implementation of Pattern Lab in YOUR next Drupal project. We will work toward the following goals:

  • Review the basic principles of Pattern Lab and how it can integrate directly with a Drupal 8 project, including specific issues that make Pattern Lab in Drupal different from a standalone Pattern Lab project

  • Discuss some challenges that you might encounter if you want to add Pattern Lab to your project and an example of one specific implementation

  • Consider a functioning example of a Drupal 8 site that has a well developed Pattern Lab backbone and discuss some potential benefits of this type of workflow

Mar 22 2017

There are plenty of website developers out there. Freelance dudes in their mom’s basement and large advertising agencies both can build you a website. The important thing to keep in mind when undergoing an enterprise web development project is that not all digital agencies are created equal, and this goes beyond the technology they use or the price and quality of work they produce. It can be extremely daunting to set off on the journey of evaluating vendors, especially for work on something so highly technical and nuanced. How do you make sense of what’s out there and pick the right enterprise web development solution?



The vendor checks off lists and passes the project off


1. The Vendor

First and most important, make sure you pick a partner, not a vendor. What’s the difference? In short, a vendor will take a list of tasks and accomplish them, while a partner will work with you to figure out what those tasks should be, and then help you work out how to best prioritize them within the constraints of your budget, time, and internal capabilities.

This is an extremely important distinction. Think about building a website as you would building a house. There are a million ways to do things, they vary greatly in price, and scope can change very quickly. If you aren’t working with a trustworthy and knowledgeable partner, you run the risk of getting a house that’s not built to code, or doesn’t solve your problems, or blows your timeline and budget. They may not use the right materials for the job, and they may up-charge you, or sell you on something costly you don’t need.

A partner will work backward from your dream house, account for the money you have to build that house, and will help you prioritize the most important parts of the house to suit your goals and lifestyle so your money is spent efficiently. Now, this doesn’t mean you’ll get everything you want. But a partner might help you realize you can make a concession on the marble countertops for a fireplace in the living room because you live in Denver, not Miami.



The One Stop Shop spreads itself thin


2. The One Stop Flop

You might notice some digital agency websites that have a services page describing how they’re the best at 30 different services. Those that can do it all are often masters of none. When you pick a shop that can build a website, handle hosting, set up and manage a paid search campaign, SEO, social media, email marketing, branding, mobile app development, content marketing, and whatever else, you’re getting a diluted product in all areas. Likely, their staff is no larger than that of more specialized firms, they don’t have deep knowledge in any one area, and they’re spread too thin to really focus on any of these things individually.

When you’re selecting an enterprise web development partner to help you with an undertaking of this size and scale, you may be tempted to work with one like this that can do it all - a one stop shop where you can funnel your entire marketing budget. Sure, there are plenty of agencies that can do it all - print, digital, paid search, social, SEO, AND build you a website. And sure, that keeps things simple, but does that mean that your result will also be simple? Yes. Dangerously so.

The Siloed Specialist lives in a bubble

3. The Siloed Specialist

Recently, I came across an agency that handles only religious websites for Catholic churches and archdioceses. There’s a common misconception that a focus like this on one particular vertical will make you the best at it. That’s likely how this religious agency stays in business: social proof in the form of a bandwagon effect. In reality, a focus on religious projects doesn’t make you good at coding, user experience, or staying on time and under budget. Sure, having experience within a very specific vertical means you’re familiar with the common challenges faced by all your clients, but in this situation, development experience trumps vertical focus.

Making the conscious decision to work on variations of different types of projects over a group of industries will ultimately produce the best product, because of the benefit of exposure to many types of problems, use cases, and ultimately, solutions. An enterprise web development partner that can build you any website and do it well is the true indication of mastery, skill, and experience.

The Low-Cost Sweatshop pumps websites out with little care.

4. The Low-cost Sweatshop

Many website development shops only do one thing: develop. They have a staff of single-minded development robots who crank out tasks and develop a website quickly and usually rather cheaply. Often these shops will outsource large chunks of work overseas because of the repetitiveness and very low level of big-picture knowledge required.

These firms may seem appealing due to their low costs and specialization, but don’t be fooled. What you may gain in cost savings, you lose in overall value. Your website will be developed by and for web developers, with very little thought given to user experience, design, and sometimes common sense. Think Windows Vista vs. Mac OS X. Developers at these firms and overseas don’t understand UX best practices, ignore the bigger picture, and don’t share/communicate information well internally and externally to you as a client. Because they crank out sites as quickly as possible, these firms will cut corners, and make sites as they were “told” to, rather than in a way that’s actually functional. Sites built in this way are often inconsistent, hard to scale, and unfriendly to users.


The Sweet Spot

The truth is that finding a strategic digital marketing partner is a challenging and nuanced undertaking. Because an enterprise web development project isn’t something that has a fixed cost or can be produced in bulk and marked up to be sold at a profit, it’s important that you find a partner you can trust, who is communicative and responsive, and who takes the time to do their due diligence before taking a project on. An ideal partner will be upfront with you about the uncertainty and risk associated with a project of this type, and will clearly explain potential landmines and concerns that could affect the project’s timeline and budget. If they work with outside subcontractors due to possible bandwidth issues, they’ll let you know. They’ll also keep you updated on the project as it progresses, showing you burndown on features and working with your team to remove any obstacles standing in the way of development.

The best value will ultimately come from a full-service partner who specializes in their core competency, but without putting blinders on.

Aug 21 2016
TL;DR The Google Summer of Code period ends, and I am glad that I was able to meet all the goals and develop something productive for the Drupal community. In this blog post, I will share the details of the project, the functionality of the module, and its current status.

I am glad that I was one of the lucky students selected to be a part of the Google Summer of Code 2016 program for the project “Integrate Google Cloud Vision API to Drupal 8”. The project was under the mentorship of Naveen Valecha, Christian López Espínola and Eugene Ilyin. Under their mentoring and guidance, I was able to meet all the goals and develop something productive for the Drupal community.

Let me first share why the Google Vision API module may be required.

Google Cloud Vision API brings automated content analysis of images into the picture. The API can not only detect objects ranging from animals to famous monuments, but also detect faces and their emotions. In addition, the API can help censor images, extract text from images, detect logos and landmarks, and even identify attributes of the image itself, for instance the dominant color in the image. Thus, it can serve as a powerful content analysis tool for images.

Now let us see how we can put the module to use, i.e. what its use cases are. To start with, the Google Vision API module allows Taxonomy tagging of image files using Label Detection. Label Detection classifies images into a number of general-purpose categories, for example classifying a war scenario as war, troop, soldiers, transport, etc. based on the surroundings in the images. This feature of the module is especially useful for filtering images by tag.

The second use case is Safe Search Detection. It quickly identifies the presence of any explicit or violent content in an image that is not fit for display. When this feature is enabled in the module, the Safe Search technique validates any image for explicit/violent content. If found, these images are held for moderation and are not allowed to be uploaded to the site, thus keeping the site clean.
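Both of those use cases ultimately map onto a single images:annotate call to the Vision API. The sketch below builds such a request body for label and safe-search detection; the image URL is a placeholder, and a real call needs a valid API key.

```shell
# Build a Vision API request asking for labels and a safe-search verdict.
# The image URI is a placeholder; supply your own, plus a key in $GOOGLE_API_KEY.
request='{
  "requests": [{
    "image": {"source": {"imageUri": "https://example.com/photo.jpg"}},
    "features": [
      {"type": "LABEL_DETECTION", "maxResults": 5},
      {"type": "SAFE_SEARCH_DETECTION"}
    ]
  }]
}'
echo "$request"

# Uncomment to send the request:
# curl -s -X POST -H 'Content-Type: application/json' -d "$request" \
#   "https://vision.googleapis.com/v1/images:annotate?key=${GOOGLE_API_KEY}"
```

The module performs the equivalent of this call server-side and then acts on the labels or safe-search verdict in the response.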

Please click here for a video demonstration of the two above-mentioned use cases.

Continuing with the other use cases, the third one is Filling the Alternate Text field of an image file. Label, Logo, Landmark and Optical Character Detection feature of the Google Cloud Vision API have been used to implement this use case. Based on the choice entered by the end user, he/she can have the Alternate Text for any image auto filled by one of the four above-mentioned options. The choice “Label Detection” would fill the field with the first value returned in the API response. “Logo Detection” identifies the logos of famous brands, and can be used to fill the field accordingly. Likewise, “Landmark Detection” identifies the monuments and structures, ranging from natural to man-made; and “Optical Character Detection” detects and identifies the texts within an image, and fills the Alternate Text field accordingly.

Next comes the User Emotion Detection feature. This feature is especially important in cases of new account creation. On enabling this feature, it would detect the emotion of the user in the profile picture and notify the new user if he/she seems to be unhappy in the image, prompting them to upload a happy one.

Lastly, the module also allows Displaying similar image files. Based on the dominant color component (Red, Green, or Blue), the module quickly groups all the images which share the same dominant component and displays them under the “Similar Content” tab in the form of a list. Each item links to the image file itself and is named as per the filename saved by the user. Users should note that by “similar contents” we do not mean that the images always resemble each other; we mean that they share the same dominant color component.
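The module reads the dominant colour from the API's image-properties response, but the idea is easy to illustrate offline. The function below is a hypothetical stand-in that picks the strongest of three RGB components; the commented ImageMagick line shows one way to obtain an average colour for a local file.

```shell
# Hypothetical helper: given average R, G, B values, name the dominant component.
dominant_component() {
  r=$1; g=$2; b=$3
  if [ "$r" -ge "$g" ] && [ "$r" -ge "$b" ]; then
    echo red
  elif [ "$g" -ge "$b" ]; then
    echo green
  else
    echo blue
  fi
}

# One way to get an average colour locally (assumes ImageMagick is installed):
# convert photo.jpg -resize 1x1\! -format '%[pixel:p{0,0}]' info:-

dominant_component 123 87 41   # prints: red
```

Grouping then amounts to bucketing files by the component this returns, which mirrors the keying used by the “Similar Content” tab.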

All the details of my work, the interesting facts and features have been shared on the Drupal Planet.

Please watch this video to know more on how to use the above-mentioned use cases in proper way.

[embedded content]

This is the complete picture of the Google Vision API module developed during the Google Summer of Code phase (May 23, 2016- August 23, 2016).

With this, the three wonderful months of the Google Summer of Code phase come to an end, enriching me with many experiences, meeting great people and working with them. In addition to being an asset, it also boosted and enhanced my skills. I learnt a lot of new techniques which, probably, I would not have learnt otherwise: the use of services and dependency injection, constraints and validators, controllers, automated tests, and the introduction to the concepts of entities and entity types, to name a few.
I will put these concepts to use in the best possible way, and try to contribute to the Drupal community with my best efforts.
Aug 16 2016
TL;DR Last week I worked on moving the helper functions for filling the Alt Text of an image file to a new service, and on moving the reused/supporting functions of the tests to an abstract parent class, GoogleVisionTestBase. This week I worked on improving the documentation of the module and making the label detection results configurable.

With all major issues and features committed to the module, this week I worked on a few minor issues, including documentation and cleanup in the project.

It is an immense pleasure for me that I am getting feedback from the community on the Google Vision API module. An issue, Improve documentation for helper functions, was created to expand the documentation and spell out the finer details of the code. I have worked on it, and added more documentation to the helper functions so that they can be understood better.

In addition, a need was felt to make the number of results obtained from the Vision API for each feature configurable, and to give the end user control over it. The corresponding issue is Make max results for Label Detection configurable. In my humble opinion, most of the feature implementations and requests to the Google Cloud Vision API have nothing to do with allowing the end user to configure the number of results. For instance, the Safe Search Detection feature detects and prevents explicit content from being uploaded, and does not need a configurable number of results. However, taxonomy tagging using Label Detection should be user dependent, and hence I worked on the issue to make the value configurable only for Label Detection. This value can be configured from the Google Vision settings page, where we set the API key. I have also developed simple web tests to verify that the value is configurable. Presently, the issue is under review.

I have also worked on standard coding fixes and PAReview fixes, and assisted my mentor, Naveen Valecha, in developing interfaces for the services. I assisted him with the access rights of the functions, and with fixing the documentation issues which clashed with the present one.

Lastly, I worked on improving the README and the module page to include all the new information and instructions implemented during the Google Summer of Code phase.

With all this work done, and all the minor issues resolved, I believe that the module is ready for use, with all the features and end-user cases implemented.
Next week, I’ll work on creating a video demonstration of how to use Google Vision API to fill the Alt Text attribute of an image file, detect the emotion in user profile pictures, and group similar images which share the same dominant color.
Aug 09 2016
TL;DR Last week I worked on modifying the tests for the “Fill Alt Text”, “Emotion Detection” and “Image Properties” features of the Google Vision API module. The only tasks left are moving the supporting functions to a separate service, and creating an abstract parent class for the tests and moving the shared functions there.

The issues Alt Text field gets properly filled using various detection features, Emotion Detection (Face Detection) feature and Implementation of Image Properties feature of the Google Vision API module are still under review by my mentors. Meanwhile, my mentors asked me to move the supporting functions of the “Fill Alt Text” issue to a separate service and use them from there. In addition, they suggested that I create an abstract parent class for the Google Vision simple tests and move the supporting functions to that parent class. Thus, this week I worked on implementing these suggestions.

There are a few supporting functions, namely google_vision_set_alt_text() and google_vision_edit_alt_text(), to fill the Alt Text in accordance with the feature requested from the Vision API, and also to manipulate the value if needed. I moved these functions to a separate service, namely FillAltText, and altered the code to use the functions from there instead of accessing them directly.

In addition, there are a number of supporting functions used in the simple web tests of the module, to create users, contents and fields, which were placed in the test file itself, which in one way, is a kind of redundancy. Hence, I moved all these supporting functions to abstract parent class named GoogleVisionTestBase, and altered the test classes to extend the parent class instead and in place of WebTestBase. This removed the redundant code, as well as, gave a proper structure and orientation to the web tests.
These minor changes will be committed to the module directly once the major issues are reviewed by my mentors and committed.
Aug 03 2016
Aug 03
TL;DR Last week, I worked on and developed tests to ensure that the Alt Text field of an image file gets filled according to the various detection features of the Vision API, namely Label Detection, Landmark Detection, Logo Detection and Optical Character Detection. This week I worked on modifying and adding tests for various features of the Google Vision module: filling the Alt Text field, detecting emotion in user pictures, and grouping image files by their dominant color component.

My mentors reviewed the code and the tests which I had put up for review to get them committed to the Google Vision API module. However, the code needed some amendments, pointed out by my mentors, before it could be committed. Hence, I spent this week resolving those flaws rather than starting on a new feature. Let me discuss my work in detail.

I had submitted the code and the tests which ensure that the Alt Text field gets properly filled using the various detection features, according to the end user's choice. However, as my mentor pointed out, it had one drawback: the user was not able to manipulate or change the value of the field afterwards. There was also a small bug among the options for filling the alt text field: once an option was selected, it was possible to switch between the options, but disabling the feature altogether did not work. After these were pointed out, I modified the feature to give the end user the ability to edit the field value as required, and I resolved the issue with disabling the feature.

Regarding the Emotion Detection (Face Detection) feature of the Vision API, I was guided to use dependency injection instead of calling static methods directly, and to modify some variables. For example, the container's get('entity_type.manager') is preferred over the static call \Drupal::entityTypeManager(). Apart from these minor changes, a major issue was that the feature ran whenever an image file was involved at all. It needs to run only when the user uploads an image, not when one is removed (both actions involve an image file, hence the bug).
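A minimal sketch of that injection pattern, with an illustrative class name (this is not the module's actual code):

```php
<?php
// Sketch: constructor injection via a create() factory, instead of a
// static \Drupal::entityTypeManager() call at the point of use.

use Drupal\Core\DependencyInjection\ContainerInjectionInterface;
use Drupal\Core\Entity\EntityTypeManagerInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

class EmotionNotifier implements ContainerInjectionInterface {

  protected $entityTypeManager;

  public function __construct(EntityTypeManagerInterface $entity_type_manager) {
    $this->entityTypeManager = $entity_type_manager;
  }

  public static function create(ContainerInterface $container) {
    // The service is resolved here, once, rather than statically everywhere.
    return new static($container->get('entity_type.manager'));
  }

}
```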

In the issue Implementation of Image Properties feature in the Vision API, I had queried the database multiple times in a loop to fetch results and build the routed page in the controller. My mentor pointed out that this is a really bad way of fetching results, so I modified the code to fetch them with a single query and use that to build the page. In addition, I was asked to build the list using the ‘item_list’ render element instead of the conventional ‘#prefix’ and ‘#suffix’ markup. Another important change in my approach was dropping db_query(), whose use is deprecated, in favour of the select query builder with addExpression().
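For reference, the ‘item_list’ approach replaces hand-written markup with a render element along these lines (a sketch; the variable names are illustrative):

```php
// Sketch: building the related-content list with the 'item_list' render
// element instead of '#prefix'/'#suffix' markup, inside the controller.
$build['similar_contents'] = [
  '#theme' => 'item_list',
  '#title' => t('Similar contents'),
  // An array of link render arrays built from the single query's result.
  '#items' => $links,
];
return $build;
```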
Presently, the code is under review by the mentors. I will work on it further once it is reviewed and I receive further instructions.
Jul 27 2016
Jul 27
TL;DR Last week, I worked on and developed tests to ensure that similar images are grouped according to the Image Properties feature of the Vision API. The code is under review by the mentors, and I will continue with it once the review is done. Meanwhile, they also reviewed the “Fill Alt Text” feature issue and approved it as good to go. This week, I worked on developing tests for this issue.

An important feature that I have implemented in the Google Vision API module is filling the Alt Text field of an image file entity using any of four choices: Label Detection, Landmark Detection, Logo Detection and Optical Character Detection. My mentor suggested that I check the availability of the response before filling the field, as we cannot fully rely on third-party responses. With this minor suggestion implemented, it was time to develop tests to verify the functionality of this feature.

I started developing simple web tests for this feature, to ensure that the Alt Text field is properly filled according to the user's choice. This requires selecting the four choices one by one and verifying that the field is filled correctly, so four tests cover the entire functionality. I added an extra test to ensure that if none of the options is selected, the field remains empty.

I created the image files using the images available in simpletest, which can be accessed through drupalGetTestFiles(). Filling the field, however, requires a call to the Google Cloud Vision API, introducing a dependency on the API key. To remove the dependency, I mocked the function in a test module, returning custom data instead.

The first test ensures that the Label Detection feature returns a correct response and the Alt Text field is filled correctly. Simpletest provides a list of assertions to verify this; I found assertFieldByName(), which asserts the value of a field based on the field name, most suitable for the purpose. The second test ensures that the Landmark Detection feature works correctly, and the third and fourth tests do the same for the Logo and Optical Character Detection features.

The fifth test covers the case when none of the options is selected. It ensures that the Alt Text field remains empty and does not contain any unwanted values.
I have posted the patch covering the suggestions and tests on the issue queue Fill the Alt Text of the Image File using Google Vision API to be reviewed by my mentors. Once they review it, I will work on it further if required.
Jul 26 2016
Jul 26
TL;DR In the past two weeks I worked on using the Image Properties feature offered by the Google Cloud Vision API to group image files together based on their dominant color components. In addition, I worked on filling the Alternate Text field of image files with the results of Label/Landmark/Logo/Optical Character Detection, as chosen by the end user. This week, I worked on and developed tests to ensure that similar images are grouped according to the Image Properties feature of the Vision API.

At present, the Google Vision API module supports the Label Detection feature to be used as taxonomy terms, the Safe Search Detection feature to avoid displaying any explicit contents or violence and the User Emotion detection to detect the emotions of the users in their profile pictures and notify them about it.

I had worked on grouping images based on the dominant color component (red, green or blue) of which they are comprised. My mentors reviewed the code and approved it with a minor suggestion to use constructor injection wherever possible. Following their suggestion, I injected the Connection object instead of accessing the database via \Drupal::database().

After making these changes, I started developing simple web tests for this feature, to ensure that similar images get displayed under the SimilarContents tab. This requires creating a new taxonomy vocabulary and adding an entity reference field to the image file entity. After creating the new vocabulary and adding the new field to the image file, I created the image files using the images available in simpletest, accessible through drupalGetTestFiles(). The first test ensures that when the vocabulary named ‘Dominant Color’ is selected, the similar images get displayed under the file/{file_id}/similarcontent link.
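The vocabulary setup in such a test can be sketched roughly like this (the machine name is an illustrative assumption, not the exact test code):

```php
use Drupal\taxonomy\Entity\Vocabulary;

// Sketch: create the vocabulary the test later selects as 'Dominant Color'.
$vocabulary = Vocabulary::create([
  'vid' => 'dominant_color',
  'name' => 'Dominant Color',
]);
$vocabulary->save();
```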

The grouping, however, requires a call to the Google Cloud Vision API, introducing a dependency on the API key. To remove the dependency, I mocked the function in a test module, returning custom data to implement the grouping.

To cover the negative case, i.e. when the Dominant Color option is not selected, I developed another test which creates a demo vocabulary that simply stores the labels instead of the dominant color component. In this case, the file/{file_id}/similarcontent link displays the message “No items found”.
I have posted the patch covering the suggestions and tests on the issue queue to be reviewed by my mentors. Once they review it, I will work on it further if required.
Jul 19 2016
Jul 19

On our first day as interns at Cheeky Monkey, we (Jared and Jordan) were given the task of exploring the somewhat uncharted waters of using Behat, an open source BDD (Behavior-driven development) testing framework, with Drupal 7.

Why BDD Testing?

We all know that testing is important, but why do we bother with “BDD” testing?

Behavior-driven development testing is exactly what it sounds like: testing the behavior of the site. This makes the tests very different from, say, unit tests.

Unit tests are often reliant on a small piece of code, such as an individual function, so if you change that function, you often have to change the test. With BDD tests, however, you write plain English “Scenarios” inside of specific “Features” or “Stories” to test how you expect the website to react in response to certain user actions. Having these tests available in your back pocket helps you catch bugs in unpredicted areas of your site when you’re implementing new features.

Now that we have the “why?” out of the way, it is time to get cracking on some serious detective work. We also needed a sandbox to play around in with these foreign concepts, so we set up a very basic Drupal 7 site on Pantheon and cloned it down to our local machines.


The Process:

Being relatively new to the world of development, and with Behat being fairly new to the world of Cheeky Monkey, we didn’t have many clues right off the bat. For the first few days of the project, we were on a quest to gather resources and knowledge. First stop? The wise sage, Google. We discovered that there was not a definitive Behat/Drupal tutorial out there, but there are plenty of little breadcrumbs to go off of. The most helpful resources for us were the Drupal Extension to Behat and Mink and the Behat Docs.

The first few days that we spent trying to piece everything together were filled with a constant flux of blind frustration, complete confusion and wonderful epiphanies. Over the course of around two weeks, we were able to put together a small set of features, or tests. Our intention was that they cover some basic Drupal 7 site functionality and can hopefully be implemented on most Drupal 7 projects going forward.


The following steps are what we ironed out to get Behat up and running on Drupal 7 sites locally.

  1. In your local project directory, create a folder called ‘behat’ inside of your sites folder: PROJECT/sites/behat

  2. In your new behat folder, create a composer.json file that looks like this:

[Screenshot: composer.json contents]
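The screenshot's contents aren't reproduced here, but a composer.json along these lines (the version constraint is illustrative for the Drupal 7 era) pulls in Behat, Mink and the Drupal Extension:

```json
{
    "require": {
        "drupal/drupal-extension": "~3.0"
    },
    "config": {
        "bin-dir": "bin/"
    }
}
```

Requiring drupal/drupal-extension brings in Behat and Mink as dependencies, and the bin-dir setting places the behat executable at bin/behat.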

  3. From your command line, in PROJECT/sites/behat, run $ composer install to get all of those dependencies installed. In order for this step to work, you will need Composer installed on your machine.

  4. You will also need to create a behat.yml file that looks something like this, to configure your testing environment:

[Screenshot: behat.yml contents]
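Again, the screenshot isn't reproduced, but a minimal behat.yml for a local Drupal 7 site might look something like this (the base URL and Drupal root are placeholders for your own environment):

```yaml
default:
  suites:
    default:
      contexts:
        - FeatureContext
        - Drupal\DrupalExtension\Context\DrupalContext
        - Drupal\DrupalExtension\Context\MinkContext
  extensions:
    Behat\MinkExtension:
      goutte: ~
      base_url: http://mysite.local
    Drupal\DrupalExtension:
      blackbox: ~
      api_driver: drupal
      drupal:
        # Relative path from PROJECT/sites/behat back to the Drupal root.
        drupal_root: ../..
```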

5. We now need to initialize Behat. To do this, run: $ bin/behat --init. This creates the features folder where you will write your tests, and your own FeatureContext.php file, where you can define custom steps. To learn more about this, visit the Behat and Drupal Extension documentation that we listed above.

Writing Tests

Now to actually writing the tests! This is the easy part. The tests are written using a language called Gherkin, in files with the extension ‘.feature’.

GitHub user mikecrittenden has a list of predefined Drupal Behat steps that are available if you'd like to look at them in a browser.

The quick and easy way to view these steps, in our opinion, is to run $ bin/behat -dl in your terminal from the PROJECT/sites/behat folder.

Here is an example of a small and simple test to get a sense of how the tests are structured:

[Screenshot: example .feature file]
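The screenshot isn't reproduced here, but a test of the shape being described, with placeholder paths and text, looks like:

```gherkin
Feature: Basic navigation
  In order to learn about the company
  As an anonymous visitor
  I need to be able to reach the About page

  @api
  Scenario: An anonymous user visits the About page
    Given I am an anonymous user
    When I visit "/about"
    Then I should see the heading "About Us"
```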

In the above test, the “Feature” declaration is not processed by Behat as it is there for humans to understand what this .feature file is testing. The @api tag before the “Scenario” calls the Drupal API Driver.

  • ‘Given’ is generally used to define some parameters of the environment of your website.
  • ‘When’ is usually the actions taken by the user.
  • ‘Then’ is usually saved for the expected behaviors of the site.

Executing Tests

Once the tests are written, you probably want to run them, right?

Luckily, once everything is correctly installed, running Behat tests is a breeze.

You just implemented a new feature on your website and now you need to run your tests to make sure it didn’t accidentally break a behavior.

In your command line, navigate to the PROJECT/sites/behat folder and run the simple command $ bin/behat. This tells Behat to find all of the *.feature files and test them against your website. Once it is done running, you should be able to see all of your passing tests and, more importantly, any failing scenarios, specifying the exact step that failed.

Now let’s say you have your core set of features and you have just written a new one. You don’t need to run all of the tests just to see if the new one works. In your command line, you start as you did before, just adding the path from your project’s behat folder to that specific .feature file. For example, if you made a new test and named it my_example.feature, you would simply run $ bin/behat features/my_example.feature.

In Conclusion

While this is still a work in progress for us interns, we have learned a lot about Behat and hope that our newfound knowledge will be of some help to the fine developers at Cheeky Monkey Media and anybody else who wishes to cut back on unpredicted bugs!

Jul 14 2016
Jul 14
TL;DR Last week I worked on detecting the emotion in users' profile pictures and notifying them to change the image if they do not look happy. The work is under review by the mentors; once it is reviewed, I will resume it if any changes are needed. This week I worked on filling the ‘Alt Text’ field of an image file based on one of four methods selected by the end user: Label Detection, Landmark Detection, Logo Detection and Optical Character Detection.

Last week, I worked on implementing the Face Detection feature in the Google Vision API module. The code is currently under review by the mentors. Once they review it, I will develop it further if it requires any changes.

The Google Cloud Vision API provides features to detect popular landmarks in an image (Landmark Detection), logos of popular brands (Logo Detection) and text within an image (Optical Character Detection), in addition to Label Detection. These features, though of less significance, are helpful in identifying an image. Hence, I started working on a new use case for them: filling the Alternate Text field of an image file.

The Alt Text field of the image file entity is modified to incorporate options to fill the field using these features. The user may select any one of the four options to fill the Alt Text field of the image.

Coming to the technical aspect, I have made use of hook_form_BASE_FORM_ID_alter() to alter the Alternate Text field of the image file entity. I modified the edit form of the Alt Text field to add four radio options, namely Label Detection, Landmark Detection, Logo Detection and Optical Character Detection. The user may select any of the options and save the configuration, and the Alternate Text field is filled accordingly.
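As a rough sketch (the hook target, element name and option keys are illustrative, not the module's exact code), the alteration looks like:

```php
<?php
// Sketch: add radio options for the Alt Text detection method to the
// image file's edit form via a form alter hook.

use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_BASE_FORM_ID_alter() for the file form.
 */
function google_vision_form_file_form_alter(array &$form, FormStateInterface $form_state) {
  $form['alt_text_detection'] = [
    '#type' => 'radios',
    '#title' => t('Fill Alt Text using'),
    '#options' => [
      'label' => t('Label Detection'),
      'landmark' => t('Landmark Detection'),
      'logo' => t('Logo Detection'),
      'ocr' => t('Optical Character Detection'),
    ],
  ];
}
```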
Presently, the code is under review by the mentors. Once it is reviewed, I will make the suggested changes, if any.
Jul 07 2016
Jul 07
TL;DR Last week I worked on grouping contents based on the dominant color component in their images, if present. The work is under review by the mentors, and once it is reviewed, I will work further on that issue. Meanwhile, I have started developing the Emotion Detection feature of the Google Cloud Vision API. It detects the emotion of the person in an uploaded profile picture, and if the person looks angry or unhappy, they are notified. This feature is especially useful when building sites for professional purposes, where facial expressions matter a lot.

Last week, I worked on implementing the Dominant Color Detection feature in the Google Vision API module. The code is currently under review by the mentors. Once they review it, I will develop it further if it requires any changes.

Meanwhile, I have started working on implementing a new feature, Face Detection, which gives us the location of a face in an image along with the emotions and expressions on the face.

I have used this feature to detect the emotion of the person in the profile picture they upload. If the person does not seem happy in the image, they are notified about their expression. This is especially useful when the end users are developing a site for professional purposes, where expressions matter a lot.

Coming to the technical aspect, I have made use of hook_entity_bundle_field_info_alter() to alter the image fields and check the emotions in the uploaded images. This hook has been used because we only want to implement the feature on image fields. If the image is not a happy one, an appropriate message is displayed using drupal_set_message(). This feature also makes use of Constraints and Validators, just like the Safe Search detection feature. Presently, the code is under review by the mentors.
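A sketch of the hook, with a hypothetical constraint plugin ID (not necessarily the module's actual identifier):

```php
<?php
// Sketch: attach an emotion-detection constraint to image fields only.

use Drupal\Core\Entity\EntityTypeInterface;

/**
 * Implements hook_entity_bundle_field_info_alter().
 */
function google_vision_entity_bundle_field_info_alter(array &$fields, EntityTypeInterface $entity_type, $bundle) {
  foreach ($fields as $field) {
    if ($field->getType() == 'image') {
      // 'UserEmotion' is a hypothetical constraint plugin ID.
      $field->addConstraint('UserEmotion');
    }
  }
}
```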

In addition to implementing Face Detection, I also worked on expanding the tests of the Safe Search Detection feature of the Google Vision API module to cover entities other than nodes. I expanded the tests to check the safe search constraint on the comment entity as well. This requires creating a dummy comment type, adding an image field to it, and attaching the comment type to a content type; the image field carries the safe search constraint. The test is basically similar to the existing tests for the node entity. The code is under review by the mentors and will soon be committed to the module. For reference on how to create dummy comment types and attach them to content types, the CommentTestBase class is very helpful.
Jun 29 2016
Jun 29
TL;DR The safe search constraint feature is now committed to the module along with proper web tests. So, this week I started on a new feature offered by the Google Cloud Vision API, Image Properties Detection, which detects various properties and attributes of an image, namely its RGB components, pixel fraction and score. I worked on detecting the dominant component in the image attached to any content and displaying all contents sharing a similar dominant color, much like what we see on e-commerce sites.

Last week I worked on writing web tests for the safe search constraint validation on image fields. This feature is now committed to the Google Vision API module.

This week I worked on implementing another feature provided by the Google Cloud Vision API, namely Image Properties Detection. It reports the red, green and blue color components of an image along with their pixel fractions and scores. I used this to determine the dominant color component (red, green or blue) in the image and to display all contents whose images share the same dominant color.

I developed code which creates a new route, /node/{nid}/relatedcontent, to display the related contents as a list. This makes use of the Controller and Routing systems of Drupal 8: the Controller class is extended to render the output of the page in the format we require. The contents are displayed as a list of links to their respective nodes, named by their titles.
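The route definition behind a path like this lives in the module's routing file; a sketch (the route name and controller class are illustrative, not the module's exact code):

```yaml
# google_vision.routing.yml (sketch)
google_vision.related_content:
  path: '/node/{nid}/relatedcontent'
  defaults:
    _controller: '\Drupal\google_vision\Controller\RelatedContentController::content'
    _title: 'Related content'
  requirements:
    _permission: 'access content'
```

The controller method receives the {nid} parameter and returns a render array listing the matching node links.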

In addition to grouping similar contents, the colors are also stored as taxonomy terms under a taxonomy vocabulary programmatically generated under the name Dominant Colors.

This issue is still in progress and requires a little modification. I need to add a link to the new route in each node, to give a better interface for accessing those contents. I will then put this patch up for review.
A very simple example of creating routes and controllers in your module can be found here.
Jun 29 2016
Jun 29
TL;DR It has been over a month since I started working on my Drupal project “Integrate Google Cloud Vision API to Drupal 8”, and I have gradually crossed the second stage towards completing the project, the first being selection into the Google Summer of Code 2016 programme. Here I would like to share my experiences and accomplishments during this one-month journey, and also summarize my further plans for the project and the features I will be implementing in the coming two months.

Let me first describe the significance of this post: what does “midterm submission” actually mean? The GSoC coding phase is divided into two halves, the midterm submission and the final submission. In the first half, students aim to accomplish around 50% of their project and submit their work to their mentors for evaluation. Those who pass the midterm evaluation proceed to complete the remaining portion of the project.

Coming back to my experiences: after successfully passing through the Community Bonding period of the GSoC 2016 programme, it was time to start coding the project proposal into reality. As I shared earlier, during the Community Bonding period I came to know that the project had already been initiated by Eugene Ilyin (who is now a part of my GSoC team). So we discussed the project and set a roadmap of the goals we need to achieve in the GSoC period. I started coding on the very first day of the coding phase, moving new as well as existing functions to services. My mentors Naveen Valecha, Christian López Espínola and Eugene Ilyin helped me a lot and guided me whenever and wherever I needed their support. They helped me get through new concepts and guided me to implement them in the most effective way, to make the module the best we could.

During this period, I also learnt a lot of new techniques and concepts which I had not used before. Right from the very first day of the coding period, I have been coming across new things every day, and it is really interesting and fun to learn all these techniques. In this one-month period, I learnt about services and containers and how to implement them; the post Services and dependency injection in Drupal 8 and the Drupalize.me videos were a great help in understanding services and implementing dependency injection. I learnt about validators and constraints and how they can be implemented, both in general and specifically on fields. I also learnt how to create test modules and override various classes and their functions in our tests, so as to remove the dependency on web access or on valid credentials for testing purposes. At present, the module supports the Label Detection feature of the Vision API, along with tests verifying whether the API key has been set by the end user. The Safe Search Detection feature is currently available as a patch, which can be found here, and will soon be committed to the module.
[Embedded video] I have shared all the details of my work on Drupal Planet. Please watch the video for detailed information on how to use the Google Vision API module on your Drupal site.
Jun 23 2016
Jun 23
TL;DR In my last post, Avoid Explicit Contents in the images using Google Vision module, I discussed the services which the “Safe Search” feature of the Vision API provides us, and how the Google Vision module uses it as a constraint on all image fields, validated when the option is enabled. This week I worked on developing simple web tests to check whether this feature gives us the results we expect.

Last week I worked on the code that uses the Safe Search detection feature as a constraint on image fields, validating images for the presence of explicit content, provided the user enables the configuration for the image field concerned.

Besides the code, testing the functionality with simple web tests is equally essential to ensure that the feature executes perfectly when the necessary steps are implemented. Hence, this week I worked on developing simple web tests to ensure we have a fully functional feature.

I tested both conditions, with safe search enabled and disabled, to verify the results. When safe search is enabled, any image containing explicit content is detected and held for moderation; if the image is not moderated, it is not saved. When the same image was passed through the second test, with safe search disabled, it was stored successfully, giving us the expected results.

To conduct the test, I had to create a demo content type in my tests using drupalCreateContentType(), with an image field carrying the ‘Enable Safe Search’ option. Adding an extra field to a content type in tests was new to me; the Drupal documentation on FieldConfig and FieldStorageConfig was a great help in understanding the underlying concepts and functions, and thus in creating custom fields programmatically. However, performing the test for real would call the API directly, requiring a valid API key and an image that actually contains explicit content. Hence, my mentors asked me to override the functions of the class (mocking the services) in a way that removes the dependency on both the key and the image. Thus, I created a test module inside the Google Vision module and overrode the function.
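The field creation can be sketched as follows (the field and bundle names, and the safe-search setting key, are illustrative assumptions, not the module's actual configuration):

```php
use Drupal\field\Entity\FieldConfig;
use Drupal\field\Entity\FieldStorageConfig;

// Sketch: a content type with an image field, created inside the test.
$this->drupalCreateContentType(['type' => 'test_images', 'name' => 'Test images']);

// Field storage defines the field at the entity-type level.
FieldStorageConfig::create([
  'field_name' => 'field_test_image',
  'entity_type' => 'node',
  'type' => 'image',
])->save();

// The field instance attaches it to the bundle.
FieldConfig::create([
  'field_name' => 'field_test_image',
  'entity_type' => 'node',
  'bundle' => 'test_images',
  'label' => 'Test image',
  // Hypothetical setting standing in for the 'Enable Safe Search' option.
  'third_party_settings' => ['google_vision' => ['safe_search' => TRUE]],
])->save();
```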

Summarizing the above, in addition to learning how to test constraints and validators, I picked up some really useful techniques, including creating custom fields in tests and mocking services.
The lingotek_test module of the Lingotek Translation project is a good reference for overriding services in web tests. Other useful references for mocking are ServiceProviderBase and Mocking for Unit Tests.
Jun 14 2016
Jun 14
TL;DR The Safe Search detection feature of the Google Cloud Vision API allows end users to keep images containing explicit content or violence off their site. I worked on integrating this feature into the module as a constraint on those image fields which have the “Safe Search” option enabled.

Let me first give a brief summary of the current status of the Google Vision module. In earlier weeks, I had implemented the services and written tests for the module, which are now committed.

Now, coming to the progress of this week.

I started by integrating the Safe Search detection feature into the module. Safe Search detection identifies explicit content within an image, and hence can be very useful for site administrators who do not want to display any explicit images on their sites.

This feature was initially integrated as a plain function call in the module. However, my mentors suggested that it should instead be a Constraint on image fields, validated when the feature is enabled for the field. It is up to the user whether to use safe search on their site, and they can toggle it at any time simply by enabling or disabling the checkbox on the image field. It is now the user's choice, rather than the developer's, whether to implement this feature.

Presently, the code is under review by my mentors to determine whether it needs changes or is ready for commit.

Constraints and Validators are wonderful features of Drupal 8. Constraints, as the name suggests, are restrictions which we place on various fields. Validators implement the logic that decides whether a constraint is violated. Some helpful examples of applying custom constraints and validators can be found on SitePoint.
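In outline, a custom constraint is a small annotated plugin class; this sketch mirrors the safe-search idea with illustrative names, not the module's actual code:

```php
<?php

namespace Drupal\google_vision\Plugin\Validation\Constraint;

use Symfony\Component\Validator\Constraint;

/**
 * Sketch of a constraint plugin; the ID and message are illustrative.
 *
 * @Constraint(
 *   id = "SafeSearch",
 *   label = @Translation("Safe search", context = "Validation")
 * )
 */
class SafeSearchConstraint extends Constraint {

  public $message = 'This image contains explicit content and will not be saved.';

}
```

A matching validator class then implements validate(), calling the API and raising a violation with this message when explicit content is found.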
This week had a lot of new things in store for me. I had no idea about constraints and validators when I was first asked to implement them. I spent hours learning about them and seeking guidance from my mentors on the issues I faced. I progressed gradually, and by the end of the week I was able to implement them for the safe search detection feature.
Jun 13 2016
Jun 13

As a developer or site builder, there will come a time when you have a lot of content to migrate from Drupal into your Pantheon site—a need that will go beyond Feeds and will perhaps have related content, entity references and the like—that will make importing content a challenge. This path will lead you to Drupal’s Migrate module. In Drupal 8, this module is in core.

Much can be said about Migrate and how it works, but to give you the short version: it allows a new Drupal site to connect to a different source (such as another copy of Drupal or even another platform such as WordPress). Once connected, it will pull content in, one piece at a time, and track it—in case it gets updated, or in case other content relies on it—so it can reconcile these complicated matters for you.

The Simplest Migration: Drupal to Drupal

To demonstrate the power of Migrate, we can do what is called a “drupal to drupal” migration on a site that is based on Drupal 6 or Drupal 7 and bring that content into a new Drupal 8 site.

Let’s look at the steps involved:

  1. Import our old Drupal site into Pantheon if it is not there already.

  2. Create a new Pantheon site to run a Drupal 8 site. Go through the install process.

  3. Enable the experimental migration modules.

  4. Run the “Migrate Upgrade” process.

After completing this process, you should have all of the configuration from the old site, such as the site name, content types, fields, and other settings. With all of these elements in place, the real magic can begin: the Migrate module will populate them with content from your old site. If you have used Migrate in Drupal 7, this may be a surprise; that older version did not set things up for you, and you had to manually map out all of the fields. The Drupal 8 version goes much further to make your life substantially easier.

Drupal Migration Upgrade, Step by Step

1. Import your existing site into Pantheon if it is not there already. A free sandbox account should be fine.

2. Create a new Drupal 8 site. Upon completion of the install process, remember to commit Pantheon’s settings.php file and switch to “Git” mode.

3. Go to the “Extend” page in Drupal 8, enable the Migrate, Migrate Drupal, and Migrate Drupal UI modules. See Figure 1.

4. After enabling the modules, you will see a green message notifying you that the modules have been installed; it also provides a link to the upgrade form. See Figure 2.

5. Go to the /upgrade page or follow the link that appeared after enabling the module. See Figure 3.

6. Input the database credentials for the old site. See Figure 4. You can get these credentials from the Pantheon dashboard of the old site under “Connection Info”. Note that these credentials can change as Pantheon containers update, so you may need to refresh this information if you intend to run migrations repeatedly over time. Over on the GitHub repo for the Pantheon documentation, there is a discussion about different ways to automate and simplify the database connection setup. Stay tuned for a future blog post on running migrations through Drush once those recommendations are finalized.

7. Congratulations, you upgraded Drupal! Now it is time to review.

Reviewing the Drupal Migration

After you run the migration, you will want to do a full review to see what worked and what didn’t. This is an important step both for the developer and the site owner.

Things you should verify:

  • Did site variables, such as site_name, and other site-wide settings come across?

  • Were all of the content types and fields created as you expected? If a field or type is missing, do you still need it in the new site?

  • What content came through? Did you get all of the items you were expecting? You will find occasional differences; for example, user 1 is not imported, because you created that user when you installed Drupal 8, not during the migration. You may also find that some nodes fail due to problems with a field.

Watch out for some of the following:

  • Missing filter formats can cause body text and other field data to go missing if the format was not available during the import. This could happen if you did not enable a contrib module, for example. If the filter does not yet exist for Drupal 8, you can work around this by implementing hook_migrate_prepare_row in a custom module to switch those items to a different filter. This hook runs for every single item that is imported, so you can update specific values as they are being imported.

  • User, node, entity, and file references may break, especially if a needed module was not enabled before you migrated. Depending on the version of Drupal you are migrating from, you may need to apply a patch (see the “migration system” component filter on the core issue queue) to enable support for these fields. If you have to do any patching, you will need to run the migration locally, for example on your laptop.

  • Some fields or other data may not be relevant to your new site. You may want to delete these after importing, or you may want to consider overriding things using Drush and the migrate-upgrade command (from contrib, 8.x-2.x branch) to generate migration YML files you can use to augment the migrations.
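
The filter-format workaround mentioned above can be sketched roughly as follows. This is a hypothetical example, not the module's actual code: the module name “example” and the target format 'basic_html' are illustrative, and depending on your setup this hook may be provided by the contrib migrate_plus module rather than core.

```php
<?php

use Drupal\migrate\Plugin\MigrateSourceInterface;
use Drupal\migrate\Plugin\MigrationInterface;
use Drupal\migrate\Row;

/**
 * Implements hook_migrate_prepare_row().
 *
 * Runs for every row being imported; remaps a Drupal 6/7 filter format
 * that has no Drupal 8 equivalent to one that does exist.
 */
function example_migrate_prepare_row(Row $row, MigrateSourceInterface $source, MigrationInterface $migration) {
  if ($row->getSourceProperty('format') === 'php_code') {
    // 'basic_html' is just an illustrative target format.
    $row->setSourceProperty('format', 'basic_html');
  }
}
```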

Once you are satisfied that you have the content you need, you can finish your site building tasks and re-run the migration if you have not changed any structure. Re-running the migration will bring in the new elements that have been created since your last import.


Other Migrations & Next Steps

If you use contrib modules, you can take migrations even further. For example, the Migrate Source CSV module lets you create migrations that import from a spreadsheet; all you need to do is create some Drupal 8 configuration files: https://www.drupal.org/project/migrate_source_csv
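
As an illustration, a minimal migration definition for Migrate Source CSV might look something like the sketch below. The file name, source path, field names, and plugin options are all hypothetical and vary between module versions.

```yaml
# e.g. migrate_plus.migration.example_articles.yml
id: example_articles
label: Import articles from a CSV file
source:
  plugin: csv
  path: /tmp/articles.csv
  header_row_count: 1
  keys:
    - id
process:
  title: title
  body: body
destination:
  plugin: entity:node
  default_bundle: article
```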

You can also import from other platforms such as WordPress. The wordpress_migrate module is currently in development, and contributions are welcome: https://www.drupal.org/project/wordpress_migrate

If the approach taken here does too much, you can create your migrations manually using YML files in which you map out the fields by hand. If you wish to build the entire site from scratch in Drupal 8—including creating the content types—this may be the best approach.


Topics Drupal, Drupal Planet
Jun 07 2016
TL;DR I have already created services for the functions which bridge the module to the API, implementing the features offered by the Google Cloud Vision API and thus completing my first step towards integrating it into Drupal 8. This week I worked on generating error reports when the API key is not set by the user, and on developing tests to verify the API key configuration and whether the key is stored successfully.

The first step towards the integration of the Google Cloud Vision API into Drupal 8 was completed when the functions were moved to services. I posted the patch for review by my mentors. They provided their suggestions on the patch, which I worked on, with every step resulting in better, cleaner code.

I would also like to share that this week our team expanded from three members to four. Yes! Eugene Ilyin, the original maintainer of the Google Vision API module, has joined us to mentor me through the project.

Now, coming to the progress of the project: the schedule says I need to configure the Google Cloud Vision API at the taxonomy field level, so that end users may use taxonomy terms to get the desired response from the API. However, the module already used this configuration for Label Detection, and in a discussion with my mentors we figured out that the current configuration does not need any changes, as the present behaviour is clear and obvious enough for developers to use easily. Instead, we decided to work on runtime verification and storage of the API key supplied by end users. I was asked to write code that would report an error if the API key was not saved prior to using the module, and also to write tests verifying the configuration and ensuring the storage of the key.

I created an issue for the new task, “Implement a runtime requirement checking if API key is not set”, in the module's issue queue and started coding the requirement. I created patches and posted them in the issue to get them reviewed by my mentors. I made the suggested changes and finally submitted a patch implementing the required functionality. Meanwhile, the previous issue, “Moving the common functions to services”, was also under review; I worked on solving and implementing minor suggestions there before it was ready to be accepted and committed. And finally, my first patch in this project has been accepted, and the changes are now reflected in the module.
At the end of these two weeks, I learnt about services and dependency injection, which prove to be very useful concepts in Drupal 8. I also gained experience writing tests that check the runtime functionality of the module.
May 31 2016
TL;DR I have started working on the project Integrate Google Cloud Vision API to Drupal 8. As this week's task, the common functions have been moved to services. The crucial concepts involved were dependency injection and the use of Guzzle over curl.

My mentors suggested that I create issues for my tasks in the issue queue of the Google Vision API module, where they would track my progress, discuss, and leave suggestions for improving the module. Thus, starting with my task for the first week, I created the issue “Moving the common functions to services” in the module's issue queue and began moving the functions into services, injecting them as needed. I started the week by learning the concepts of services and containers, and gradually learnt about dependency injection in Drupal 8. The post “Services and dependency injection in Drupal 8” and the Drupalize.me videos were a great help in understanding services and implementing dependency injection.

After completing this part, I put up the patch for review, and then came the next part: using Guzzle instead of curl in the services and injecting the httpClient service. I spent significant time learning Guzzle, which was quite new to me. My mentors Naveen Valecha and Christian López Espínola helped me a lot to understand it, and subsequently this task was completed, with Guzzle replacing curl and httpClient injected. In addition, the existing code used concatenated strings to send data during the API call; I changed it to use arrays and the Json utility class with its static functions instead. When the code seemed ready, my mentors suggested that I clean it up and add proper documentation.
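
To illustrate the pattern (not the module's actual code), replacing curl with an injected http_client service and sending structured data through the Json utility class looks roughly like this. The request body follows the Vision API's annotate format; $url and $encoded_image (a base64-encoded image) are hypothetical variables.

```php
use Drupal\Component\Serialization\Json;

// $this->httpClient is the injected 'http_client' (Guzzle) service.
$response = $this->httpClient->post($url, [
  'headers' => ['Content-Type' => 'application/json'],
  'body' => Json::encode([
    'requests' => [
      [
        'image' => ['content' => $encoded_image],
        'features' => [['type' => 'LABEL_DETECTION', 'maxResults' => 5]],
      ],
    ],
  ]),
]);
$data = Json::decode($response->getBody());
```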
At the end of the week, I successfully uploaded the patch with all the suggestions implemented, the clean-up done, and documentation added, thereby completing my first week's task.
May 23 2016
TL;DR I’ll be working on Integrating Google Cloud Vision API to Drupal 8 this summer as part of the Google Summer of Code programme. During the community bonding period I got the chance to work with many experienced members of the Drupal community.

I have been selected as one of the eleven students who will be coding their summer away for Drupal as part of Google Summer of Code. This summer I will be working on the project Integrate Google Cloud Vision API to Drupal 8. The API detects objects ranging from animals to famous monuments and landmarks, and it also supports the recognition of a wide range of famous logos and brands. Face detection, text detection within an image, and safe search are among the other features this API provides. It is really cool for a content management system to have such a powerful feature, so I will be working this summer to bring it to Drupal.

Before jumping directly into the coding phase, the students selected for Google Summer of Code were given a month to bond with the community, their mentors, and other friends who share the same platform. This phase, known as the “Community Bonding Period”, lasted from April 23, 2016 to May 22, 2016.

Every day brings new opportunities to do something new, meet like-minded people across the globe, and contribute to awesomeness!! This is the simplest way I can describe the "Community Bonding" period.

This is the ideal time for students to get closer to the community, interact with their mentors, learn more about the organization's codebase, and have a more detailed discussion of the project and its timeline before actually starting to code.

To mentor me through the project, I have been blessed with two experienced mentors: Naveen Valecha and Christian López Espínola. I have been discussing the project with them for the last few weeks, covering its execution and the tough areas. They also guided me to learn some basic methodologies, including services and configuration management, which will be of great help in my project. Meanwhile, I devoted my time to the community by contributing to core issues, providing suggestions, developing patches, and reviewing the project applications of newcomers to the Drupal world, including Field Group Table Component and AddressField Copy, highlighting how they could be developed better, and thus lightening the community's burden as far as I could.

In this one month, I met a lot of like-minded people across the globe who have gathered to take the community to greater heights, and I felt lucky to get the opportunity to work with them. I tried to develop a good bond with my mentors, staying in touch with them and bringing all my actions to their notice.
This, I suppose, is the most essential purpose of this one-month pre-coding phase.

Here, I would also like to share a strange experience from the community bonding period. During this phase, I came to know that the project I am working on was already initiated by Eugene Ilyin. Here lies the strength of the Drupal community: when I discussed this with Eugene, he not only agreed to let me work on the project, but also agreed to mentor me. It once again proved the awesomeness of the community, and how all of us come together, collaborate, and help each other grow while taking the community to greater heights. That was my month of community bonding.

The month ends, and now we jump into coding the summer away.
Apr 30 2016
TL;DR My first contribution to Drupal was a small contributed module named Image CircleSlider. It went through several stages of examination by different reviewers, and during this process I came to know the workflow that takes a sandbox project to a full project.

Since the release of my first module, Image Circle Slider, I have been asked several times what I did to get it accepted.

So here I am, discussing my experience with the first module I contributed. I was an intern at Innoraft Solutions Pvt. Ltd., where I came across this jQuery plugin and was asked to implement it in Drupal. Though it was a very simple plugin, it was an innovative one, presenting images in a modern way instead of the traditional sliding system. After writing the module's code, I created it as a sandbox project and queued it for review by the community.
P.S. A useful tip for getting to the top of the priority list: take part in the review process yourself and help others get their reviews done. It earns your project a special tag: "PAReview Bonus". Drupal is a friendly community, and we believe in joining hands and working together instead of the stiff competition that exists in today's world; hence this review process.
Once the reviewers start reviewing your project, they will provide their valuable suggestions, and each step will take your project closer to completion and acceptance. When sufficient reviews have been done and there are no application blockers, it will be marked "Reviewed and Tested by the Community", which is the penultimate step; finally, the Git administrator will give you access to release your project as a full project.
This is a one-time process; once you have been given Git access, you can release your projects directly and offer your service to the community.
Another important point: I have often seen contributors get restless during the process and want everything done within a single day. However, things do not work that way. Please be patient during the review process; these things do not happen overnight, and you need to coordinate with the community.

Hope this helps all the newcomers. Happy contributing!!

Mar 15 2016
TL;DR I'll be working on the project Integrate Google Cloud Vision API to Drupal 8 this summer as part of the Google Summer of Code programme. The API offers automated content analysis of images, including landmark detection, logo detection, face detection, and text detection, among others.

The Google Summer of Code results are out!! And I am one of the few students who made it onto the list of selected students, who will spend the next three months working on open source projects.

Now, let me describe here the project and its applications.

Well, the Google Cloud Vision API brings automated content analysis of images into the picture. The API can not only detect objects ranging from animals to famous monuments, but also detect faces and their emotional state. In addition, the API can help censor images, extract text from images, detect logos and landmarks, and even report attributes of the image itself, for instance its dominant color.

What is so special about this project idea? This is the first question that comes to mind about any GSoC project, so here is the answer. This feature has not been implemented in content management systems (CMSs) before, and its integration into Drupal will give users a powerful tool that carries out automated content analysis of images. Integrating this API will not only add to Drupal's characteristics, but also open up a new horizon for Drupal. Won't it be great if a CMS can detect the logos of famous brands? Or explicit content not fit to be displayed? Or natural as well as artificial structures in an image?
So, to summarize, the Vision API offers the following features:
  1. Label Detection: detects a broad set of categories in an image, ranging from living things to places.
  2. Explicit Content Detection: detects and flags adult or violent content within an image.
  3. Logo Detection: detects the logos of popular brands.
  4. Landmark Detection: detects natural as well as artificial structures within an image.
  5. Image Attributes: detects general attributes of an image, such as its dominant color and background shades.
  6. Optical Character Recognition (OCR): detects and extracts text within an image. The Vision API supports a broad range of languages, along with automatic language identification.
  7. Face Detection: detects multiple faces within an image, along with key facial attributes such as emotional state. However, it does not yet support facial recognition.
  8. Integrated REST API: access via a REST API to request one or more annotation types per image.
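
For a flavour of the integrated REST API, a single annotate request looks roughly like the sketch below. The endpoint and field names follow Google's public documentation; the API key and base64 payload here are placeholders.

```http
POST https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY
Content-Type: application/json

{
  "requests": [
    {
      "image": {"content": "BASE64_ENCODED_IMAGE"},
      "features": [
        {"type": "LABEL_DETECTION", "maxResults": 5},
        {"type": "SAFE_SEARCH_DETECTION"}
      ]
    }
  ]
}
```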

For more details on Google Cloud Vision API, please have a look at Vision API 
Mar 09 2016

We just finished covering how simple configuration is still easy in Drupal 8, but how is Drupal 8 making the hard things possible in a way that justifies changing the old variables API? Well, in Drupal 7, when you needed to handle complex configuration, the first step was ditching variable_get(), system_settings_form(), and related APIs. Drupal 8 has improved this situation in two ways. First, you don’t have to throw out basic configuration code to handle complex needs. Second, more things are possible.


The goal of the Configuration Management Initiative was to maintain the declarative nature of configuration even when side effects or cross-referenced validation are necessary. (Contrast this with the Drupal 7 trick of using hook_update_N() as an imperative method.) Specifically, Drupal 8’s configuration management system operates under the perturbation model of constraint programming. That is, modules and core ship with defaults that work with each other, and configuration changes by a site owner create perturbations that either can be satisfied trivially (like the site front page path) or through propagating changes (described below). Sometimes the constraints can’t all be satisfied, such as deleting a field while a view still uses it; Drupal 8 helps here by making dry-run configuration tests possible. At least you can know before deploying to production!

Let’s go on a tour of hard problems in Drupal configuration management by walking through the use cases.

Subscriptions and Side-Effects

Often, configuration takes effect simply by changing the value. One example is the front page path for a site; nothing else needs to respond to make that configuration effective. In Drupal 7, these basic cases typically (and happily) used variables. Things got messy when a module needed to alter the database schema or similar systems to activate the configuration. Drupal 7 didn’t have an answer for this in core, though you could build on top of the Features module.

Anyway, in Drupal 8, side effects happen two ways. You should use the first when possible.

Using Event Subscribers


This is the modern and clean method of responding to configuration changes, regardless of whether the changes are to your module’s configuration or not. There are a number of configuration events you can receive. The most basic is ConfigEvents::SAVE, which fires on any change, whether created using the admin GUI, Drush, or a configuration import.

A good example of this approach in core is the system module’s cache tag system. It invalidates rendered cache items when the site theme changes; it does more than that, but we’ll be pulling the examples from there. The foundation for Drupal 8-style event listening is Symfony’s EventSubscriberInterface, which provides an object-oriented way to list the events of interest and set a callback. Drupal 7 developers should think of it like a non-alter hook.

The first step is getting the right files in the right places for the autoloader. You will need ConfigExample.php (assuming you name the class ConfigExample) and an example.services.yml (assuming your module name is “example”). You should end up with something like this, starting from the module root:

  • example/

    • example.info.yml (required for any module)

    • example.services.yml (example based on system.services.yml)

    • src/

      • EventSubscriber/

        • ConfigExample.php
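
The example.services.yml file is what tags the class as an event subscriber so that Drupal's container discovers it. A minimal sketch (the service name is arbitrary; the class namespace follows the layout above):

```yaml
services:
  example.config_subscriber:
    class: Drupal\example\EventSubscriber\ConfigExample
    tags:
      - { name: event_subscriber }
```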

The second step is to register interest in the appropriate events from your class, which happens by implementing getSubscribedEvents() (which is the only member function required by the EventSubscriberInterface). The following code causes the member function onSave() to run whenever configuration gets saved:

public static function getSubscribedEvents() {
  $events[ConfigEvents::SAVE][] = ['onSave'];
  return $events;
}

Third, we need to implement the onSave() callback to invalidate the cache when the appropriate configuration keys change. If system.theme or system.theme.global changes, the code will call the appropriate function to invalidate the cache:

public function onSave(ConfigCrudEvent $event) {
  if (in_array($event->getConfig()->getName(), ['system.theme', 'system.theme.global'], TRUE)) {
    // Invalidate the cache here.
  }
}
The example above covers the intermediate “respond to a configuration change” use case. If you’d also like to validate configuration on import, you can see an example in SystemConfigSubscriber. It shows both subscribing to import events and stopping propagation on invalid input.

Using Hooks

This is where things get dirty; we’re now in hook_something_alter() territory. It’s hard to reason about code in this territory, but here we are anyway because it’s necessary for a handful of use cases. To be clear, you’re basically killing kittens when you re-jigger data the way an alter hook can. If you’re doing cleanup, I’d recommend queueing something into the batch system using the subscription method instead, even if your batch job has to attempt things and re-enqueue itself while other batch processing finishes first. Anyway, warning over. Here’s your example, shamelessly pulled from the API site. It gets the list of deleted configurations matching the key “field.storage.node.body”. If any exist, it adds a new function call to the end of the steps for propagating the configuration.

function example_config_import_steps_alter(&$sync_steps, \Drupal\Core\Config\ConfigImporter $config_importer) {
  $deletes = $config_importer->getUnprocessedConfiguration('delete');
  if (isset($deletes['field.storage.node.body'])) {
    $sync_steps[] = '_additional_configuration_step';
  }
}

Settings Forms

In Drupal 7, have you ever saved a settings form and then gotten a page back with the old setting still in place (usually because something messed with $conf in settings.php)? Never again in Drupal 8! Configuration forms save to the pre-alteration values, and modules can read those “raw” values themselves through Config::getRawData() or ConfigFormBaseTrait::config() to provide custom forms.

However, if you just need a basic form, you can use pre-alteration values automatically with Drupal 8’s ConfigFormBase, which replaces system_settings_form() and seamlessly integrates with everything in Drupal 8’s configuration world from events to imports to version control.

A great example of ConfigFormBase use is in the system module’s LoggingForm.php.
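
To make that concrete, a minimal settings form built on ConfigFormBase might look like the following sketch. The module name “example” and the config key example.settings are hypothetical.

```php
<?php

namespace Drupal\example\Form;

use Drupal\Core\Form\ConfigFormBase;
use Drupal\Core\Form\FormStateInterface;

class ExampleSettingsForm extends ConfigFormBase {

  public function getFormId() {
    return 'example_settings_form';
  }

  protected function getEditableConfigNames() {
    return ['example.settings'];
  }

  public function buildForm(array $form, FormStateInterface $form_state) {
    // config() returns the raw (pre-override) values for editing.
    $config = $this->config('example.settings');
    $form['message'] = [
      '#type' => 'textfield',
      '#title' => $this->t('Message'),
      '#default_value' => $config->get('message'),
    ];
    return parent::buildForm($form, $form_state);
  }

  public function submitForm(array &$form, FormStateInterface $form_state) {
    $this->config('example.settings')
      ->set('message', $form_state->getValue('message'))
      ->save();
    parent::submitForm($form, $form_state);
  }

}
```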

Discovery and Multiple Objects for a Module

Need more than a single (possibly nested) configuration file for your module? If you provide something like Views, where there are multiple independent configurations still owned by your module, you need configuration entities. These entities allow enforcing a schema and listing the individual configurations (which correspond one-to-one with YAML files). They are cleaner than the Drupal 7 approaches of spamming the variable namespace, creating a configuration table, or using some combination of ctools and Features.

Three-Way Merging

I don’t want to go into deep detail because there’s a great blog post on configuration merging already, but I will underscore the importance of three-way merging for configuration. Unless you have a completely linear flow with one development environment and no changes happening in test or live, there will be cases where configuration diverges both on the development branch and in production versus the last common revision. A three-way merge allows safely determining what changed on each side without the hazards of simply comparing development’s configuration to production’s (which can make additions on one side appear to be deletions on the other). You could kind of do this with Features, but the use of PHP arrays and other serialization made the committed configuration unfriendly to diff utilities. Drupal 8 uses canonicalized YAML output, which is both human- and diff-friendly.

Setting Dynamic Options with PHP

In the Style of settings.php

The biggest difference you’ll notice isn’t in settings.php but in the GUI. Values you coerce here will not appear in the GUI in Drupal 8 (for reasons mentioned in the Settings Forms section above). The following example is shamelessly pulled from the Amazee Labs post.

Drupal 7:

$conf['mail_system']['default-system'] = 'DevelMailLog';

Drupal 8:

$config['system.mail']['interface']['default'] = 'devel_mail_log';

In a Module

This is back in dirty territory, because modules ought to provide higher-level ways of altering their behavior rather than expecting other modules to hurriedly erase and change configuration values before the module reads them. If you must go down this path, you need to register the class performing the changes as a service tagged “config.factory.override” and implement ConfigFactoryOverrideInterface. This works much like the service entry and class for the event subscriber above.
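
For reference, a skeletal override class might look like the sketch below. The class name and overridden value are hypothetical; the class must also be registered in your module's services.yml with the config.factory.override tag.

```php
<?php

namespace Drupal\example;

use Drupal\Core\Cache\CacheableMetadata;
use Drupal\Core\Config\ConfigFactoryOverrideInterface;
use Drupal\Core\Config\StorageInterface;

class ExampleConfigOverride implements ConfigFactoryOverrideInterface {

  public function loadOverrides($names) {
    $overrides = [];
    if (in_array('system.site', $names)) {
      // Values returned here win over the stored configuration.
      $overrides['system.site'] = ['name' => 'Overridden site name'];
    }
    return $overrides;
  }

  public function getCacheSuffix() {
    return 'example_override';
  }

  public function createConfigObject($name, $collection = StorageInterface::DEFAULT_COLLECTION) {
    return NULL;
  }

  public function getCacheableMetadata($name) {
    return new CacheableMetadata();
  }

}
```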

Configuration Lockdown

The most you could do in Drupal 7 was hard-code many variables in settings.php and periodically audit the site’s configuration with Features. With the transparently named Configuration Read-Only Mode module, you can actually prevent ad hoc changes in, say, your test or live environment. On Pantheon, for example, you could do the following to prevent configuration changes in the production environment:

    if (isset($_ENV['PANTHEON_ENVIRONMENT']) && $_ENV['PANTHEON_ENVIRONMENT'] === 'live') {
      $settings['config_readonly'] = TRUE;
    }

Bundling Related Configuration

This is still Features territory, actually more than ever. Contrary to rumors, Features is alive and well for Drupal 8. Relieved of the burden of extracting and applying configuration, Features is back to its original role of bundling functionality, most often in the form of configuration YAML to be imported by Drupal 8. So, in short, Features is now just for features.


Topics Development, Drupal, Drupal Hosting, Drupal Planet
Nov 24 2015

Much of the conversation in the Drupal 8 development cycle has focused on “NIH vs. PIE.” In Drupal 8 we have replaced a fear of anything “Not-Invented-Here” with an embrace of tools that were “Proudly-Invented-Elsewhere.” In practice, this switch means removing “drupalisms,” sections of code created for Drupal that are understood only by (some) Drupal developers. In their place, we have added external libraries or conventions used by a much wider group of people. Drupal has become much clearer, much more explicit about its intentions by making this kind of change all over the codebase.

Replacing drupal_http_request with Guzzle

For instance, we have gotten rid of our own drupal_http_request() and replaced it with the invented-elsewhere Guzzle library. Guzzle is a stand-alone tool written in PHP for making HTTP requests. Here is the change record that describes how code written for Drupal 7 would be modified to use the new library in Drupal 8. Ultimately, it was Guzzle’s superior feature set that led it to replace drupal_http_request: Guzzle can do a lot more, and it does so in a much clearer fashion.

From the change record we see an example of Guzzle:

$client = \Drupal::httpClient();

$request = $client->createRequest('GET', $feed->url);

$request->addHeader('If-Modified-Since', gmdate(DATE_RFC1123, $last_fetched));


Compared to an example from Drupal 7:

$headers = array('If-Modified-Since' => gmdate(DATE_RFC1123, $last_fetched));

$result = drupal_http_request($feed->url, array('headers' => $headers));


The intentions of the code from Guzzle are much more explicit. The method “addHeader” is being called. Any developer could read that line and see what is happening. In the case of the Drupal 7 code the reader would be guessing. And sure, it might be easy enough to guess what drupal_http_request will do when it is passed multidimensional arrays. But it takes a lot of mental overhead for developers to think through the implication of each key within a multidimensional array.

It is not a coincidence that Guzzle, a library shared among many PHP projects, requires developers to be very clear about their intentions. Replacing drupal_http_request with Guzzle made Drupal’s code more explicit and comprehensible. There are numerous other examples where adopting or pursuing a concept from outside Drupal made Drupal itself clearer.

Classed Objects

Perhaps the clearest example of this shift is the switch to classed objects. Prior to Drupal 8, much of Drupal core still showed its roots in PHP 4, when support for object-oriented concepts was immature. Now, instead of Drupal entities being represented simply as a stdClass, each entity type has its own class with defined properties and methods. Writing actual classes, methods, and interfaces encourages the Drupal community to think harder about what each entity is meant to do.

Drupal has a history of taking our “node” concept and bending it mercilessly to replace less developed parts of core. Anyone else remember usernode, which made a node for every user? When users and nodes were both shoehorned into the same kind of generic class, it was easier to justify using one to make up for the shortcomings of the other, muddling our definitions in the process. Actual classes force us to think more clearly.

Interfaces


Along with the usage of classed objects has come the adoption of interfaces. One of the problems interfaces have helped solve is the pain of registering new functionality like field widgets. In Drupal 7 and prior, adding a new field formatter meant writing a set of hooks, some of which were absolutely required while others varied by use case. Writing hook_field_formatter_info implies writing hook_field_formatter_prepare_view, but it doesn’t require it. In Drupal 8, we have Drupal\Core\Field\FormatterInterface to tell us exactly what a formatter needs, and module writers can extend Drupal\Core\Field\FormatterBase to avoid rewriting boilerplate.

Interfaces also enable Drupal 8 to put services in a dependency injection container. Again, Drupal is replacing NIH with PIE. Previous versions of Drupal assumed that any portion of code could access the global state at any time for any reason. This assumption made isolating a section of code so that it could be replaced very difficult or impossible. In Drupal 8, many corners of Drupal, like breadcrumb handling, have been rewritten as a “service” carried in a “container”. The service declares exactly what it depends on and exactly what it will do (through an interface). This enables developers to replace one service with another version. Here is a detailed breakdown from Larry Garfield on how to do so with breadcrumbs. Again, for Drupal to use an outside concept, its own code must be more explicit.

Caching


Chasing an invented-elsewhere caching strategy has made Drupal’s internal caching system much clearer. Inspired by Facebook’s Big Pipe caching strategy, Drupal developers led by Fabian Franz and Wim Leers have made huge improvements to Drupal 8 caching. And of course they have made the caching system more explicit in doing so. The idea of Big Pipe is simple: make a page load fast by sending only a skeleton of markup and then filling in all of the separately-cached blocks of content. For that strategy to work, each block must be very explicit about how long it can be cached and which changes to context or content would invalidate it. Now, when rendering something, a developer is expected to explicitly state:

  1. The cache tags. These are data points like node ids. The idea is that if a node is resaved, every cache that declared the node’s id as a cache tag can be invalidated.

  2. The cache contexts. Some render elements vary based on more global concepts like the language of the current request/user or the role of the current user. This addition makes it much easier to do something like showing a full article to subscribers and a truncated article to anonymous users.

  3. The max-age of the cache. Some elements have to be regenerated every 10 minutes. Some can stay cached as long as their tags and contexts remain unchanged. Before Drupal 8, cache ages were much more of a guesswork operation.
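All three pieces of metadata live directly in the render array a developer returns. A minimal sketch (the node ID, context, and max-age are illustrative):

```php
$build = array(
  '#markup' => $teaser_markup,
  '#cache' => array(
    // 1. Tags: resaving node 42 invalidates anything tagged with it.
    'tags' => array('node:42'),
    // 2. Contexts: the rendered output varies by the current user's roles.
    'contexts' => array('user.roles'),
    // 3. Max-age: regenerate at least every ten minutes (600 seconds).
    'max-age' => 600,
  ),
);
```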

For more information, watch this talk from Drupalcon Barcelona from Wim Leers and Fabian Franz.

Configuration Management

Drupal 8’s configuration management strategy is perhaps the most touted new feature, and it brings plenty of PIE. The Configuration Management Initiative helped pave the way for standardized, exportable configuration; a concept developers from other systems expect as a matter of course. The discussions within the initiative considered a number of different export formats, like JSON and XML. YAML was ultimately chosen, and that format is now used throughout Drupal for different purposes, like the definition of services, libraries, CSS breakpoints and more. Even the drupalism of .info files in modules and themes has been replaced with this widely understood, invented-elsewhere format.

Additionally, core has taken concepts from CTools module for how configuration moves from the file system to the database to a cache. Now in Drupal 8, configuration can be exported to the file system (in .yml files) to be committed to git and moved across environments. The .yml files can then be imported into a database in a consistent fashion where they are stored and cached.
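An exported configuration file is plain, diff-able YAML. For example, the site information settings export to something like this (the values are illustrative):

```yaml
# system.site.yml
name: 'Example Site'
slogan: ''
page:
  front: /node
  403: ''
  404: ''
```

Commit the file to git, deploy, import: the same pipeline works for views, fields, text formats, and everything else.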

The main improvement over Drupal 7 is the consistency across Drupal subsystems. In Drupal 7, a developer had to remember that “overridden” meant one thing when Features module used that word in relation to Field configuration and “overridden” meant a slightly different thing when Views UI used it in relation to an exported View. By treating configuration in a consistent manner across subsystems the whole architecture becomes more cohesive.

Increased explicitness was often not the main goal of the above changes; the developers leading them just wanted a better system that was easier to work with. In doing so, we’ve made a version of Drupal that should be clearer and more understandable to developers new to Drupal. We do not have to explain the difference between Filter module storage and Panels. We can just say “our configurations are all stored in the same way”. Drupal becomes more approachable and will travel further because of it.

Jul 21 2014

Yes Way are a creative agency who connect businesses, brands and communities with the creative talent they need. They specialise in strategic planning for businesses and representation for creative individuals to engage their target audience through branding, events and marketing.

The brief

Godel were approached by Yes Way to help complete designs for their website update and to produce a custom responsive website: a Drupal 7 backend with a totally custom front-end built on a minimalist, modern Aurora subtheme, Singularity grids, and a lot of JavaScript via Drupal behaviors.

The landing page

The brief was to create a vibrant online presence to reflect the creatives that Yes Way represent; specialists in photography, street art, fine art, illustration and fashion styling. Yes Way wanted to stick with their existing branding, but give it new life through a new design. As such, the new site design that we created for Yes Way is not only clean and minimalist with a typographic focus, but also projects a vibrant persona, bringing creative talent to the forefront through their personal profiles and visual portfolios.

Working on projects like this is a great experience, as it allows us to work closely with the client to iteratively improve an existing product. Although this redesign and site build happened in a short period of time, the same iterative improvement process can work as an ongoing agreement as well, allowing us to build trust with our clients and gradually improve their product over time, keeping it up to current standards in design and development and giving the client freedom to make suggestions based on their changing needs.

The portfolio (in an overlay)

The site

Yes Way's new landing page features a full-length background image and a retractable navigation which engages as soon as the viewer starts scrolling. More information about Yes Way is revealed as you scroll down past each header, and when a navigation menu item is clicked the screen smoothly transitions to the appropriate area of the site using jQuery.

Godel wanted to bring the site up to date with dynamic and responsive features. Responsive design elements include the use of mmenu which creates a slick, user-friendly navigation pattern for mobile devices. The desktop functions as a "one pager" with some pop-up overlays. The navigation uses the scrollTo library to hijack the normal scrolling behaviour of the browser when the user clicks a menu item from the sticky header. The idea was to make site navigation as easy and fun as possible - the user never has to reload the page or follow a series of links, only interact with a single page.
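The hijacking itself is only a few lines. A simplified, browser-bound sketch of the pattern (the selectors are hypothetical, and $.scrollTo comes from the jQuery scrollTo plugin mentioned above):

```javascript
// When a sticky-header anchor link is clicked, animate to its target
// section instead of letting the browser jump there.
$('.sticky-header a[href^="#"]').on('click', function (e) {
  e.preventDefault();
  $.scrollTo($(this).attr('href'), 800); // 800ms scroll to the anchor target
});
```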

All of the second-level sections are built using a nice little technique of ours based on data-attributes. It allows us to put an immersive JavaScript-powered, app-style front end on a Drupal CMS backend, producing websites that don't necessarily have to look "like Drupal sites".

Data attributes and custom display suite fields

This section is a brief technical explanation of our technique, skip it if it's Greek to you!
The day we learnt about custom Display Suite fields from this PreviousNext blog post was a happy day for us. Although DS offers a lot of great tools for UI-focused node display building, it was starting to feel a bit limiting for devs who want more control. We didn't want to go down the PHP field route (shudder), so we were happy to be able to create fields with the full possibilities of PHP through this custom DS field technique.

One of the best things about the custom fields is the ability to generate fields that actually contain more data than the eye can see, stored in data attributes of HTML elements away from the visible part of the page. For example, we were able to store all of the data for an artist's portfolio popup in the teaser tile for that artist that appears on the initial page load. That means that when the user clicks on an artist's face to view their portfolio, it loads dynamically into the page via JavaScript, and the data it displays is already stored on the page, just hidden.

First, we define the info hook for our field:

/**
 * Implements hook_ds_fields_info().
 */
function gp_global_ds_fields_info($entity_type) {
  $fields = array();

  $fields['node']['body'] = array(
    'title' => t('Body data attribute'),
    'field_type' => DS_FIELD_TYPE_FUNCTION,
    'function' => 'gp_global_ds_field_body',
  );

  if (isset($fields[$entity_type])) {
    return array($entity_type => $fields[$entity_type]);
  }
}

Then we make the markup for the field itself, which is surprisingly simple:

/**
 * Return the body as a div with a data attribute.
 */
function gp_global_ds_field_body($field) {
  $entity = $field['entity'];
  if (isset($entity->body[LANGUAGE_NONE][0]['safe_value'])) {
    $data = $entity->body[LANGUAGE_NONE][0]['safe_value'];
    // Store the body text in the data attribute rather than rendering it visibly.
    $content = '<div class="body" data-body="' . check_plain($data) . '"></div>';
    return $content;
  }
}
The key is "data-body", a custom data attribute we create and then store the body text in. It doesn't get rendered on the page until we grab it with our Javascript, like this (abridged version):

(function($, undefined) {

  Drupal.behaviors.overlayAnimate = {

    // The blank element that will eventually display the body text.
    bodyEl: '.fullwidth .body',

    attach: function(context) {
      var _this = Drupal.behaviors.overlayAnimate;
      // We get the element that has the data-attribute on it and extract the data from the attribute.
      _this.bodytext = $(context).find('[data-body]').data('body');
    },

    bodyText: function bodyText() {
      var _this = Drupal.behaviors.overlayAnimate;
      // We replace the HTML of the blank element with the data we grabbed earlier.
      $(_this.bodyEl).html(_this.bodytext);
    }

  };

})(jQuery);

Why this technique is meaningful

We think this sort of technique pairs an organised CMS with a very shiny front end. We use techniques like this in combination with very bare themes to build up our own custom front-end markup, and it's a great way to create a really unique-looking site.

Mobile menu

You can see this technique in action in the unique hover-state overlays for each featured artist on the main page. The user can click through to more information about each person, including a written blurb, a gallery of images and even a video. For each of those things, the data is entered as a node in the Drupal backend, sent to the front of the site as a data attribute in a custom Display Suite field, and triggered into visibility via JavaScript.

The artist section

All in all, the user experience is intended to have an immersive web-app feeling, with content loading into the page quietly, displayed in seamless overlays rather than new page loads, and making the most of a one-page layout with some animated navigation styles. Because users aren't directed off site (not even off-page!), they stay longer and are more likely to click around and explore the single page they see. And because the data is loaded into the page before it is displayed, they get the added benefit of a fast-loading site as well.

We think the result is an engaging site that uses some cool techniques to satisfy a real business need. Check out the website here!

The footer


Oct 04 2012

Linux Journal just published a special Drupal edition featuring a great article on Trekk written by our own VP of Engineering, Tim Loudon!

Trekk is our Drupal distribution for Universities. It features content sharing across multiple sites (faculty, courses, news, etc.) and robust migration from legacy HTML. The Linux Journal article leads the reader through the process of migrating content by scraping legacy HTML and sharing it between sites.

Linux Journal is offering this special issue as a free download, so please download it and read it and let us know what you think!

May 20 2011

On a current project, I discovered the truth of the phrase in the Drupal community that "there's a module for that". As I initially looked at a requirement, I was thinking that I would need to write a small amount of custom code in a glue module, but upon doing more research, I discovered that there were multiple modules that, when pieced together, would do exactly what I needed. And of course, as always, I knew I had to blog about it. Please raise your hand if you have any questions.

First, the use case. I had two content types that needed to be linked together: Video and Transcript. As you can probably tell from the name, the Transcript node is a written transcript of the content of the Video node. For the URL of the Video node, I wanted to use the associated term from a specific taxonomy called Category as part of the URL so that the URL was in the form of


For the Transcript node, I wanted it to be obvious from the URL that it was associated with the specific Video node, so for the video above, it would take the form of


In addition to this, I wanted to make it easy for the user to add a transcript to a video node, and also for the Video and Transcript nodes to be linked to each other.

Here are the modules used for this recipe:

The first step is to create the URL alias for the Video nodes. Out of the box, Token provides a [term] (and corresponding [term-raw]) token, but the results of these are inconsistent (I never really did any research into how those tokens are filled). I chatted briefly with greggles in IRC, and his thought was that it would be easiest to write a custom token (and also better for performance), because that way it could be targeted to the specific vocabulary. Upon doing some research in the Token issue queue, I found the Taxonomy Token module. This allows you to create tokens for specific taxonomy terms in specific vocabularies.

Taxonomy Token Settings

In the picture above, I'm creating single top term tokens for the Category vocabulary. This then creates the [node:vocab:1:term:url] token, and I can then create my URL alias for Video nodes as


So, in the case of the video above with the term National from the Category vocabulary, the URL would be


I'll get back to creating the alias for the Transcript node, but we have to complete some other steps first.

Next, we need to link the Video and Transcript nodes. I don't need to go into great detail here, because Bob Christenson at Mustardseed Media has a great video that demonstrates how to use Nodereference URL and Views Attach to link the two nodes together. In my use case, I end up with a link on my Video node to add a new Transcript node. Clicking on that link takes me directly to the node/add/transcript page. The node title for the Transcript node is automatically generated, and when it is saved, there is a link on the Transcript node to the Video node and vice versa.

Next is creating the title of the Transcript node. What I wanted was for the title of the Transcript node to be the title of the Video node with ' - Transcript' appended (i.e. My Video Node Title - Transcript). As explained in the video, the two content types are linked together by a nodereference field (field_video_ref) in the Transcript node type that refers to the Video node. This creates a [field_video_ref-title-raw] token that I can use. I enable the Automatic Nodetitles module, which adds an "Automatic title generation" group to my content type settings form. Using my handy dandy token, I set the value in the "Pattern for the title" field to be

[field_video_ref-title-raw] - Transcript

and select the "Automatically generate the title and hide the title field" option. When I go to create the Transcript node, the Title field is hidden, but the node title is correctly created when it is saved.

The last requirement to take care of is the URL alias for the Transcript node. As I said above, I want the alias to be the same as the alias for the Video node with '/transcript' appended to the end. Once again, Token module comes to the rescue. Since I am using the field_video_ref field, there are a number of tokens generated for that field:

[field_video_ref-nid] Referenced node ID
[field_video_ref-title] Referenced node title
[field_video_ref-title-raw] Referenced node unfiltered title. WARNING - raw user input.
[field_video_ref-link] Formatted html link to the referenced node.
[field_video_ref-path] Relative path alias to the referenced node.

In my case, I want the [field_video_ref-path] token, since that gives me video/national/my-video-node-title, making my full Transcript alias [field_video_ref-path]/transcript.


Now that this is all in place, here's the functionality that I have:

  • The Video node URL alias is generated using a specific taxonomy vocabulary
  • A link to create a Transcript node from the Video node
  • A link to the finished Transcript node on the Video node and vice versa
  • The Transcript node title and URL alias are generated based on the title and URL of the Video node.

And all of this without writing one line of custom code in a module. It demonstrates the power of the Token module and how it gives you so many options to generate URLs, titles, and many other pieces of information automatically, based on specific criteria.

On a side note, the question could fairly be asked, "why not just put a text area in the Video node for the transcript?" However, don't let that distract from the point of this post, which is to demonstrate the power and flexibility that piecing a few modules together gives you out of the box (so to speak) with Drupal.

Oct 14 2010

The other day while hanging out in IRC, I was pinged by katbailey, the Lady of the Lovely Voice (I could listen to her talk for hours) with a question about sorting in Solr when a sort field doesn't contain a value.  In particular, how can you control whether nodes without a value in the sort field show up at the beginning or end of the search results?  In her particular case, there was a Price field that was being sorted on, but not all nodes had a Price value, and the ones without Price were showing up at the beginning of the list.

I hadn't dealt with that before, but Peter Wolanin (aka pwolanin), one of the Solr Gurus, piped up with the answer. It lies in schema.xml, one of the Solr configuration files. In this case, Katherine was using a field with the "fs_" prefix (see my previous post for more info on dynamic fields and how they work).  In the config file, it is configured like this:

<dynamicField name="fs_*" type="sfloat" indexed="true" stored="true" multiValued="false"/>

This is a field of type sfloat, so we need to look at the configuration for that field type. 

<fieldType name="sfloat" class="solr.SortableFloatField" sortMissingLast="true" omitNorms="true"/>

The secret sauce is the sortMissingLast attribute.  It is explained in more detail in comments just above it in the config file.

 <!-- The optional sortMissingLast and sortMissingFirst attributes are
      currently supported on types that are sorted internally as strings.
      - If sortMissingLast="true", then a sort on this field will cause documents
        without the field to come after documents with the field,
        regardless of the requested sort order (asc or desc).
      - If sortMissingFirst="true", then a sort on this field will cause documents
        without the field to come before documents with the field,
        regardless of the requested sort order.
      - If sortMissingLast="false" and sortMissingFirst="false" (the default),
        then default lucene sorting will be used which places docs without the
        field first in an ascending sort and last in a descending sort. -->

So in this particular case, it is set so that if Katherine was sorting by the Price field, any nodes that didn't have a Price value would be placed after all of the nodes that did. If she wanted those items to come before the nodes with a Price value, she would have replaced

sortMissingLast="true"

with

sortMissingFirst="true"

And if she wanted it to vary depending on whether the sort was ascending or descending, she would have added them both and set them both to false, falling back to default Lucene sorting.

Now, as a side note, the sharp-eyed among you might have noticed that in Katherine's case the settings were already correct for what she wanted: sortMissingLast was set to true for that field, so the items without a Price value should have been displayed at the end of the list, not the beginning. As it turned out, the problem was that the nodes she thought had no value actually had a value of 0, which put them at the top of the list. So, wielding her Solr fu, she added a line to her hook_apachesolr_update_index() implementation to only index that field if it has a non-zero value.

function mymodule_apachesolr_update_index(&$document, $node) {
  // ... $fields is built from the node's CCK field values (abridged) ...
  // The sale_price field will not have been set if the value was 0.
  if (isset($fields['sale_price'])) {
    $document->fs_cck_field_sale_price = $fields['sale_price'];
  }
}

This keeps the Price field from being indexed for nodes with a value of 0, so those nodes are placed at the end of the search results when sorting by Price.

So thus ends today's lesson in Solr. Even though the issue turned out to be something different, all was not lost, because we got to learn about sorting when documents are missing a value in the sort field. A win-win all the way around, don't you think?

Jun 23 2010

As some of you who follow Drupal Planet might have noticed, I have "taken over" the Acquia Drupal podcast. I've always followed Drupal podcasts, including the Acquia podcast, and I had noticed that no new episodes had been released in a while, so I approached Robert Douglass and Bryan House of Acquia at Drupalcon San Francisco back in April about continuing it. They agreed, and I took it from there. The first episode, an interview with Steven Merrill from Treehouse Agency on continuous integration with Hudson and Simpletest, was released yesterday, and more are in the works. In fact, I interviewed Robert Douglass from Acquia today for the next episode, and we discussed all things Solr in Drupal. We covered a lot of information, so it should be a very useful episode for anyone interested in Solr.

So keep your eyes (and ears) open for future podcasts, and contact me via our contact form if you or someone else you know are doing something cool in Drupal that you think would be a good podcast topic.
