Mar 03 2021

This post is an updated part of our Marketer's Guide to Drupal series. This guide will walk you through considerations for choosing an open source CMS, plus case studies and CMO advice to bring your site to the next level.

Supercharge SEO with Drupal

“Over the last 20 years, Drupal has grown into one of the largest enterprise content management systems in the world.” - Drupal Founder Dries Buytaert

Along with the growth of Drupal, the marketing landscape is vastly different now than it was 20 years ago. The techniques once used to reach the top of search engine results no longer work, so marketers need to understand the powerful role a CMS like Drupal can play in building a successful SEO strategy.

The release of Drupal 8 marked a significant stride in giving content authors more control, a focus that continues to evolve in Drupal 9. Notable editor-focused features include the Twig templating engine for component variety, an API-first foundation to “write once, publish everywhere”, and an editor-friendly content block authoring experience. The Layout Builder module additionally provides drag-and-drop page building, letting editors add components to pages and add pages to the site with no code. Now, Drupal 9 has even more upgrades for marketers, tech-savvy or not.

This shift places the marketing team in the driver’s seat more often and allows them to get involved in the CMS decision. In this post, we’ll outline some ways you can up your SEO game with Drupal.

Traditional SEO is Dead

No longer will well-placed keywords alone get you to the top of the SERP ranks. Content is still King in the world of marketing, and it’s what helps you improve your SEO.

Every algorithm change Google has made has one thing in common: it aims to provide the best content based on what it "thinks" the user is trying to find. In other words, the user’s intent. If you want your rankings to stick, don't try to cheat the system. Attract your prospects with informative, entertaining pieces that they can use to take action. And avoid no-value posts that are keyword-stuffed with your industry and the word "best" 100 times. Google can see through it and so can all of your users.

That said, there are a few other factors that are critical to keeping your rankings high that can’t be ignored, including quick load times and mobile-friendliness. Drupal 9 is built with several of these factors in mind to help us make needed improvements quickly and effectively.

Mobile-First Mentality

Drupal 9 is created with responsive design capabilities built-in, so you can begin to address many problems immediately. That’s not to say all of your responsive problems will be solved. Content editors still need to think through their content and imagery, and themers will still need to do configuration to establish things like breakpoints. But Drupal 9 will set you on the right path, giving you and your team many of the tools you need.

You’ll also have the option to choose different images and content for desktop and mobile versions right from the WYSIWYG editor, making it easier to see the differences for every piece of content when you add it and before you publish. This means a solid visual of both versions in real-time for faster publishing and peace of mind knowing exactly what your users experience on any device. 

The Need for Speed

Another big factor that can affect your rankings is speed, on both desktop and mobile. Google places such high importance on it that they provide a PageSpeed Insights test to show where and how your website is slowing visitors down. Drupal 9 is “smart” in that it caches all entities and doesn’t load JavaScript unless it has to. This means the same content won’t be reloaded over and over and instead can be served quickly from the cache. It also uses a feature called velocity, which makes creating and publishing new dynamic content experiences significantly faster than in Drupal 7 and older versions.

Responsive design is a must-have in today’s digital landscape and speeding up your website on both desktop and mobile is a surprisingly effective way to contribute to your SEO efforts. In short, if your marketing team is focused (as you should be) on top rankings, Drupal 9 provides many of the tools to make that happen. 

Accessibility = Key for Search

Drupal 8 spurred an overall commitment to accessibility from the community, and with the release of Drupal 9 came another big push toward improving web accessibility.

This is important because, as we know, web accessibility and SEO are closely intertwined. Although accessibility is not actually a Google ranking factor (yet), improving your website’s accessibility has a strong chance of improving your SEO rank.

SEO Friendly Modules for Drupal 9

There are thousands of modules available for Drupal 9, many of which are perfect for marketers. Whether you’re looking to try out something new or to find something that fits what you already know, you have your pick. Here are our favorite SEO modules to use when optimizing your site:

  1. Metatag - allows you to automatically provide metadata, aka "meta tags", that help search engines understand your content. This must-have module offers over 300 different meta tags for different purposes, so take a look and find the right ones for your site.
  2. Two Metatag submodules that we highly recommend are Twitter Cards and Open Graph. Connect your site to Facebook, LinkedIn, Slack, Twitter, and other social platforms and control how links will look when shared on them.
  3. Schema.org Metatag - provides a way of adding some of the hundreds of standardized metadata structures from the international schema.org project on a site, making it easier to clearly define metadata that Google et al can use to more accurately understand your site’s unique information.
  4. Pathauto - helps save you time from manually having to create URL path/aliases as new content is created.
  5. Sitemap - provides a site map that gives visitors an overview of your site. It can also display the RSS feeds for all blogs and categories.
  6. Redirect - Almost every new site needs to incorporate 301 redirects for old page URLs. This gives site admins an easy interface for creating those redirects in Drupal.
  7. Google Analytics - this simple module allows site admins the ability to easily configure Google Analytics in Drupal.
  8. Easy Breadcrumbs - uses the current URL (path alias) and the current page's title to automatically extract the breadcrumb segments and their respective links.
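If you want to try these out, most can be pulled in with Composer and enabled with Drush. A quick sketch (the machine names below are taken from the modules' project pages, so double-check them against your Drupal version):

$ composer require drupal/metatag drupal/schema_metatag drupal/pathauto drupal/redirect drupal/google_analytics drupal/easy_breadcrumb drupal/sitemap
$ drush en -y metatag metatag_twitter_cards metatag_open_graph schema_metatag pathauto redirect google_analytics easy_breadcrumb sitemap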

Thankfully, because Drupal is open source, you’re not out of luck if you can’t find a module that works for you. There are many options for creating a new one, from building it yourself to enlisting help from a Drupal team like Mediacurrent.

Visualize SEO Success with Siteimprove

In addition to Drupal's SEO-friendly modules, an SEO optimization tool like Siteimprove can…

  • Give content editors information about their technical SEO so they can make more informed decisions
  • Help you understand how SEO and content intersect in your overall strategy
  • Flag potential issues before your content is published
  • Provide insights about the SEO impact of unpublishing a page

The Siteimprove module works directly in Drupal, giving editors access to these insights while they’re adding content. This means no more waiting to fix it post-publish. It is important to correctly set up Siteimprove in order to get the most out of it and effectively transform your strategy into a workable roadmap for your site.

SEO and Beyond

Drupal’s content management system is perfectly structured for search optimization and its core features support many of the critical SEO elements. But features only take you so far on their own. To manage and continuously improve your SEO, consider a dashboard like Siteimprove that you can customize to show you just how the data is processed and how it impacts your business's goals.

Setting those custom data points and interpreting the data that comes in can be time-consuming and difficult, so if you need any help, our team of Siteimprove certified experts can apply our knowledge to configuring your dashboards and making sense of the data. Get started with a Siteimprove tune up.

Feb 11 2021

Last week one of our clients was asking me about how they should think about the myriad of options for website hosting, and it inspired me to share a few thoughts. 

The different kinds of hosting

I think about hosting for WordPress and Drupal websites as falling into one of three groups. We’re going to compare the options using an example of a fairly common size of website — one with traffic (as reported by Google Analytics) in the range of 50,000–100,000 visitors per month. Adjust accordingly for your situation. 

  • “Low cost/low frills” hosting — Inexpensive website hosting would cost in the range of $50–$1,000/yr for a site with our example amount of traffic. Examples of lower cost hosts include GoDaddy, Bluehost, etc.  Though inexpensive, these kinds of hosts have none of the infrastructure that’s needed to do ongoing web development in a safe/controlled way such as the ability to spin up a copy of the website at the click of a button, make a change, get approval from stakeholders, then deploy to the live site. Also, if you get a traffic spike, you will likely see much slower page loads. 
  • “Unmanaged”, “Bare metal”, or “DIY” hosting — Our example website will likely cost in the range of $500–$2,500/yr. Examples of this type of hosting include: AWS, Rackspace, etc. or just a computer in your closet. Here you get a server, but that’s it. You have to set up all the software, put security measures in place, and set up the workflow so that you can get stuff done. Then it’s your responsibility to keep that all maintained year over year, perhaps even to install and maintain firewalls for security purposes. 
  • “Serverless” hosting¹ — It’s not that there aren’t servers; they’re just transparent to you. Our example website would likely cost in the range of $2,500–$5,000/yr. Examples of this kind of hosting: Pantheon, WP Engine, Acquia, Platform.sh. These hosts are very specialized for WordPress and/or Drupal websites. You just plug in your code and database, and you’re off. Because they’re highly specialized, they have all the security/performance/workflow/operations in place that 90% of Drupal/WordPress websites need.

How to decide?

I recommend two guiding principles when it comes to these kinds of decisions:

  1. The cost of services (like hosting) is much cheaper than the cost of people, whether that’s the time your staff spends maintaining a server or, if you’re working with an agency like Advomatic, your monthly subscription with us. People can easily cost 10x what services do. So saving $1,000/yr on hosting is only worth it if it costs less than a handful of hours of someone’s time per year.
  2. Prioritize putting as much of your budget towards advancing your organization’s mission as possible. If two options have a similar cost, we should go with the option that will burn fewer brain cells doing “maintenance” and other manual tasks, and instead choose the option where we can spend more of our time thinking strategically and advancing the mission.

This means that you should probably disregard the “unmanaged/bare/DIY” group. Whoever manages the site will spend too much time running security updates, and doing other maintenance and monitoring tasks. 

We also encourage you to disregard the “low cost” group. Your team will waste too much time tripping over the limitations, and cleaning up mistakes that could be prevented on a more robust platform.

So that leaves the “serverless” group. With these, you’ll get the tools that will help streamline every change made to your website. Many of the rote tasks are also taken care of as part of the package. 

Doing vs. Thinking

It’s easy to get caught up in doing stuff. And it’s easy to make little decisions over time that mean you spend all your days just trying to keep up with the doing. The decision that you make about hosting is one way that you can get things back on track to be more focused on the strategy of how to make your website better. 

¹ The more technical members of the audience will know that “serverless” is technically a bit different.  You’d instead call this “platform-as-a-service” or “infrastructure-as-a-service”. But we said we’d avoid buzzwords.

Oct 06 2019
hw

One of my sites has a listing of content shown as teaser. The teaser for this content type is defined to show the title, an image, and a few other fields. In the listing, the image is linked to the content so that the visitor may click on the image (or the title) to open the content. All this is easily achievable through regular Drupal site building.

[Screenshot: the Manage Fields page in Drupal]

I wanted to change the functionality so that clicking on the image would open the content in a new tab. This is easy for the title, as the title is linked right from within the template (node--content-type--teaser.html.twig). I just have to add the target="_blank" attribute to the <a> tag for the title and I am done. Doing this for the image is not so easy.
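For the title, that is a one-line template change. A minimal sketch, assuming the teaser template prints the title link the way Drupal core’s stock node template does (markup simplified):

{# node--content-type--teaser.html.twig #}
<h2{{ title_attributes }}>
  {# target="_blank" opens the node in a new tab when the title is clicked. #}
  <a href="{{ url }}" target="_blank" rel="noopener">{{ label }}</a>
</h2>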

Challenges with the Image Formatter

The reason it isn’t easy for the image is the image field formatter provided by the Drupal core. It provides an option to render the image as a link and link it to either the content or the file itself, but no option to open it in a new tab.

[Screenshot: image formatter options]

Digging through the code for the image formatter, I found the template that drives it at image-formatter.html.twig. This template also does not help us much directly, as we cannot conditionally modify it only for teasers of certain content types, like I wanted. If I override this file, it will affect the image formatter everywhere.

I stumbled upon an approach while searching for solutions, hoping I would run across an issue in Drupal core that added this feature. I would simply apply the patch and voila: I would solve my problem, give feedback if it worked, and open source wins. Unfortunately, there was no such issue, but I found something related: Image formatter does not support URL/Link options.

Sidenote: I could have created a feature request to add this feature and maybe I’ll do it when I get a chance. Today, I was just eager to get this working.

Back to the issue. I see there is a change in the template to use the link function rather than just writing an <a> tag. This is not very helpful to me. But the issue summary described my problem, and it is strange that the patch (which was committed) does not fix it. So, I dug in more. I read the test in the patch, which showed me how I could implement it.

Solution

Since my solution depends on the above patch, it would work only with Drupal 8.7.4 or later. The test in the patch added attributes to the link by accessing the build array and directly setting the attribute on the URL object that is passed to the template. I realised I could do the same with template_preprocess_node.

Usage of template_preprocess_node
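The embedded code sample doesn’t survive here, so below is a minimal sketch of the hook based on the description that follows; the theme, content type, and field names (mytheme, article, field_image) are placeholders for the real ones:

<?php

use Drupal\Core\Render\Element;

/**
 * Implements hook_preprocess_HOOK() for node templates.
 */
function mytheme_preprocess_node(array &$variables) {
  $node = $variables['node'];

  // Only touch teasers of the one content type we care about.
  if ($node->bundle() !== 'article' || $variables['view_mode'] !== 'teaser') {
    return;
  }

  // Loop through the rendered image field items and grab the Url object
  // that the image formatter passes on to image-formatter.html.twig.
  foreach (Element::children($variables['content']['field_image']) as $delta) {
    if (isset($variables['content']['field_image'][$delta]['#url'])) {
      // Set the attribute directly on the Url object, as the core test does.
      $variables['content']['field_image'][$delta]['#url']
        ->setOption('attributes', ['target' => '_blank']);
    }
  }
}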

In the code sample above, I am targeting the specific content type and display mode (teaser) I want to modify. I use Element::children to loop through all the items in the field and access the URL object. I directly call setOption on the URL object to set the attribute target="_blank".

It might seem a lot of work but this is the fastest way I know (and minimal code change). If you know of a better way, or of the Drupal core issue that would have given this option, please let me know in the comments. I hope this post was useful. Thanks for reading.

Jul 26 2018

Intro

In this post, I’m going to run through how I set up visual regression testing on sites. Visual regression testing is essentially the act of taking a screenshot of a web page (whether the whole page or just a specific element) and comparing that against an existing screenshot of the same page to see if there are any differences.

There’s nothing worse than adding a new component, tweaking styles, or pushing a config update, only to have the client tell you two months later that some other part of the site is now broken, and you discover it’s because of the change that you pushed… now it’s been two months, and reverting that change has significant implications.

That’s the worst. Literally the worst.

All kinds of testing can help improve the stability and integrity of a site. There’s Functional, Unit, Integration, Stress, Performance, Usability, and Regression, just to name a few. What’s most important to you will change depending on the project requirements, but in my experience, Functional and Regression are the most common, and in my opinion are a good baseline if you don’t have the capacity to write all the tests.

If you’re reading this, you probably fall into one of two categories:

  1. You’re already familiar with Visual Regression testing, and just want to know how to do it
  2. You’re just trying to get info on why Visual Regression testing is important, and how it can help your project.

In either case, it makes the most sense to dive right in, so let’s do it.

Tools

I’m going to be using WebdriverIO to do the heavy lifting. According to the website:

WebdriverIO is an open source testing utility for nodejs. It makes it possible to write super easy selenium tests with Javascript in your favorite BDD or TDD test framework.

It basically sends requests to a Selenium server via the WebDriver Protocol and handles its response. These requests are wrapped in useful commands and can be used to test several aspects of your site in an automated way.

I’m also going to run my tests on Browserstack so that I can test IE/Edge without having to install a VM or anything like that on my mac.

Process

Let’s get everything setup. I’m going to start with a Drupal 8 site that I have running locally. I’ve already installed that, and a custom theme with Pattern Lab integration based on Emulsify.

We’re going to install the visual regression tools with npm.

If you already have a project running that uses npm, you can skip this step. But, since this is a brand new project, I don’t have anything using npm, so I’ll create an initial package.json file using npm init.

  • npm init -y
    • Update the name, description, etc. and remove anything you don’t need.
    • My updated file looks like this:
{ "name": "visreg", "version": "1.0.0", "description": "Website with visual regression testing", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" } }   "name": "visreg",  "version": "1.0.0",  "description": "Website with visual regression testing",  "scripts": {    "test": "echo \"Error: no test specified\" && exit 1"

Now, we’ll install the npm packages we’ll use for visual regression testing.

  • npm install --save-dev webdriverio chai wdio-mocha-framework wdio-browserstack-service wdio-visual-regression-service node-notifier
    • This will install:
      • WebdriverIO: The main tool we’ll use
      • Chai syntax support: “Chai is an assertion library, similar to Node’s built-in assert. It makes testing much easier by giving you lots of assertions you can run against your code.”
      • Mocha syntax support: “Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.”
      • The Browserstack wdio package: so that we can run our tests against Browserstack, instead of locally (where browser/OS differences across developers can cause false-negative failures)
      • Visual regression service: this is what provides the screenshot capturing and comparison functionality
      • Node notifier: this is totally optional but supports native notifications for Mac, Linux, and Windows. We’ll use these to be notified when a test fails.

Now that all of the tools are in place, we need to configure our visual regression preferences.

You can run the configuration wizard by typing ./node_modules/webdriverio/bin/wdio, but I’ve created a git repository with not only the webdriver config file but an entire set of files that scaffold a complete project. You can get them here.

Follow the instructions in the README of that repo to install them in your project.

These files will get you set up with a fairly sophisticated, but completely manageable visual regression testing configuration. There are some tweaks you’ll need to make to fit your project that are outlined in the README and the individual markdown files, but I’ll run through what each of the files does at a high level to acquaint you with each.

  • .gitignore
    • The lines in this file should be added to your existing .gitignore file. It’ll make sure your diffs and latest images are not committed to the repo, but allow your baselines to be committed so that everyone is comparing against the same baseline images.
  • VISREG-README.md
    • This is an example readme you can include to instruct other/future developers on how to run visual regression tests once you have it set up
  • package.json
    • This just has the example test scripts. One for running the full suite of tests, and one for running a quick test, handy for active development. Add these to your existing package.json
  • wdio.conf.js
    • This is the main configuration file for WebdriverIO and your visual regression tests.
    • You must update this file based on the documentation in wdio.conf.md
  • wdio.conf.quick.js
    • This is a file you can use to run a quick test (e.g. against a single browser instead of the full suite defined in the main config file). It’s useful when you’re doing something like refactoring an existing component, and/or want to make sure changes in one place don’t affect other sections of the site.
  • tests/config/globalHides.js
    • This file defines elements that should be hidden in ALL screenshots by default. Individual tests can use this, or define their own set of elements to hide. Update these to fit your actual needs.
  • tests/config/viewports.js
    • This file defines what viewports your tests should run against by default. Individual tests can use these, or define their own set of viewports to test against. Update these to the screen sizes you want to check; one possible shape is sketched just below this list.
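For reference, here is one way such a viewports file could look; the sizes below are placeholders, not the repo’s actual defaults:

// tests/config/viewports.js — viewport sizes screenshots are captured at.
module.exports = [
  { width: 320,  height: 568  }, // small phone
  { width: 768,  height: 1024 }, // tablet
  { width: 1280, height: 800  }, // desktop
];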

Running the Test Suite

I’ll copy the example homepage test from the example-tests.md file into a new file /web/themes/custom/visual_regression_testing/components/_patterns/05-pages/home/home.test.js. (I’m putting it here because my wdio.conf.js file is looking for test files in the _patterns directory, and I like to keep test files next to the file they’re testing.)

The only thing you’ll need to update in this file is the relative path to the globalHides.js file. It should be relative from the current file. So, mine will be:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

With that done, I can simply run npm test and the tests will run on BrowserStack against the three OS/browser configurations I’ve specified. While they’re running, we can head over to https://automate.browserstack.com/ and watch the tests being run against Chrome, Firefox, and IE 11.

Once tests are complete, we can view the screenshots in the /tests/screenshots directory. Right now, the baseline shots and the latest shots will be identical because we’ve only run the test once, and the first time you run a test, it creates the baseline from whatever it sees. Future tests will compare the most recent “latest” shot to the existing baseline, and will only update/create images in the latest directory.

At this point, I’ll commit the baselines to the git repo so that they can be shared around the team, and used as baselines by everyone running visual regression tests.

If I run npm test again, the tests will all pass because I haven’t changed anything. I’ll make a small change to the button background color which might not be picked up by a human eye but will cause a regression that our tests will pick up with no problem.

In the _buttons.scss file, I’m going to change the default button background color from $black (#000) to $gray-darker (#333). I’ll run the style script to update the compiled css and then clear the site cache to make sure the change is implemented. (When actively developing, I suggest disabling cache and keeping the watch task running. It just makes things easier and more efficient.)

This time all the tests fail, and if we look at the images in the diff folder, we can clearly see that the “search” button is different as indicated by the bright pink/purple coloring.

If I open up one of the “baseline” images, and the associated “latest” image, I can view them side-by-side, or toggle back and forth. The change is so subtle that a human eye might not have noticed the difference, but the computer easily identifies a regression. This shows how useful visual regression testing can be!

Let’s pretend this is actually a desired change. The original component was created before the color was finalized, black was used as a temporary color, and now we want to capture the update as the official baseline. Simply move the “latest” image into the “baselines” folder, replacing the old baseline, and commit that to your repo. Easy peasy.

Running an Individual Test

If you’re creating a new component, or you run the suite and find a regression in just one image, it is useful to be able to run a single test instead of the entire suite. This is especially true once you have a large suite of test files that cover dozens of aspects of your site. Let’s take a look at how this is done.

I’ll create a new test in the organisms folder of my theme at /search/search.test.js. There’s an example of an element test in the example-tests.md file, but I’m going to do a much more basic test, so I’ll actually start out by copying the homepage test and then modify that.

The first thing I’ll change is the describe section. This is used to group and name the screenshots, so I’ll update it to make sense for this test. I’ll just replace “Home Page” with “Search Block”.

Then, the only other thing I’m going to change is what is to be captured. I don’t want the entire page, in this case. I just want the search block. So, I’ll update checkDocument (used for full-page screenshots) to checkElement (used for single element shots). Then, I need to tell it what element to capture. This can be any css selector, like an id or a class. I’ll just inspect the element I want to capture, and I know that this is the only element with the search-block-form class, so I’ll just use that.

I’ll also remove the timeout since we’re just taking a screenshot of a single element, we don’t need to worry about the page taking longer to load than the default of 60 seconds. This really wasn’t necessary on the page either, but whatever.

My final test file looks like this:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

describe('Search Block', function () {
  it('should look good', function () {
    browser
      .url('./')
      .checkElement('.search-block-form', {hide: visreg.hide, remove: visreg.remove})
      .forEach((item) => {
        expect(item.isWithinMisMatchTolerance).to.be.true;
      });
  });
});

With that in place, this test will also run when I use npm test, because the suite globs for every file that ends in .test.js anywhere in the _patterns directory. The problem is this also runs the homepage test. If I just want to update the baselines of a single test, or I’m actively developing a component and don’t want to run the entire suite every time I make a locally scoped change, I want to be able to run just the relevant test so that I don’t waste time waiting for all of the irrelevant tests to pass.

We can do that by passing the --spec flag.

I’ll commit the new test file and baselines before I continue.

Now I’ll re-run just the search test, without the homepage test.

npm test -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

We have to add the first set of -- because we’re using custom npm scripts to make this work. Basically, it passes anything that follows directly to the custom script (in our case test is a custom script that calls ./node_modules/webdriverio/bin/wdio). More info on the run-script documentation page.

If I scroll up a bit, you’ll see that when I ran npm test there were six passing tests. That is one test for each browser for each test file. We have two tests, and we’re checking against three browsers, so that’s a total of six tests that were run.

This time, we have three passing tests because we’re only running one test against three browsers. That cut our test run time by more than half (from 106 seconds to 46 seconds). If you’re actively developing or refactoring something that already has test coverage, even that can seem like an eternity if you’re running it every few minutes. So let’s take this one step further and run a single test against a single browser. That’s where the wdio.conf.quick.js file comes into play.

Running Test Against a Subset of Browsers

The wdio.conf.quick.js file will, by default, run test(s) against only Chrome. You can, of course, change this to whatever you want (for example if you’re only having an issue in a specific version of IE, you could set that here), but I’m just going to leave it alone and show you how to use it.

You can use this to run the entire suite of tests or just a single test. First, I’ll show you how to run the entire suite against only the browser defined here, then I’ll show you how to run a single test against this browser.

In the package.json file, you’ll see the test:quick script. You could pass the config file directly to the first script by typing npm test -- wdio.conf.quick.js, but that’s a lot more typing than npm run test:quick and you (as well as the rest of your team) have to remember the file name. Capturing the file name in a second custom script simplifies things.
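For reference, a sketch of how those two scripts could be wired up in package.json (the wdio binary path is the one mentioned earlier; wdio falls back to wdio.conf.js when no config file is passed):

"scripts": {
  "test": "./node_modules/webdriverio/bin/wdio",
  "test:quick": "./node_modules/webdriverio/bin/wdio wdio.conf.quick.js"
}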

When I run npm run test:quick, you’ll see that two tests were run. We have two tests, and they’re run against one browser, so that simplifies things quite a bit. And you can see it ran in only 31 seconds. That’s definitely better than the 100 seconds the full test suite takes.

Let’s go ahead and combine this with the technique for running a single test to cut that time down even further.

npm run test:quick -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

This time you’ll see that it only ran one test against one browser and took 28 seconds. There’s actually not a huge difference between this and the last run because we can run three tests in parallel. And since we only have two tests, we’re not hitting the queue which would add significantly to the entire test suite run time. If we had two dozen tests, and each ran against three browsers, that’s a lot of queue time, whereas even running the entire suite against one browser would be a significant savings. And obviously, one test against one browser will be faster than the full suite of tests and browsers.

So this is super useful for active development of a specific component or element that has issues in one browser as well as when you’re refactoring code to make it more performant, and want to make sure your changes don’t break anything significant (or if they do, alert you sooner than later). Once you’re done with your work, I’d still recommend running the full suite to make sure your changes didn’t inadvertently affect another random part of the site.

So, those are the basics of how to set up and run visual regression tests. In the next post, I’ll dive into our philosophy of what we test, when we test, and how it fits into our everyday development workflow.

Jan 06 2017
JK

In this article, we will present how we built a simple Twitter feed in Drupal 8 with a custom block and without any custom module. This block will display a list of tweets pulled from a custom list, as in the example shown in the sidebar.

As a prerequisite, you need to have a Twitter account and a custom list in your feeds.

1) Get code from "twitter publish"

First, we need to get the code that is generated by Twitter's publish tool.

On the Twitter Publish page (publish.twitter.com), enter the URL for the list of tweets you want to display, as in the example below:

[Screenshot: embed URL form]

The code will be generated from the URL, and you can apply some custom formatting:

[Screenshot: format options]

Once you have updated your options, you can just copy the custom code:

[Screenshot: generated code]

We will use this code to create the block.
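For reference, the generated snippet is roughly of this shape (the account and list names are placeholders):

<a class="twitter-timeline" href="https://twitter.com/YourAccount/lists/your-list">
  A Twitter List by YourAccount
</a>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>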

2) Custom block

In custom block library (/admin/structure/block/block-content), click on "+ Add custom block" button.

In the custom block body (Full HTML), paste the previously generated code while in "<> Source" mode, then click Save:

[Screenshot: block edit form]

You now have a custom block that you can place anywhere on your site. To do so, go to /admin/structure/block and click on the "Place block" button in the region where you want to display your block.

The result:

[Screenshot: the Twitter block displayed on the site]

For more advance block creation, see also this article.

Dec 15 2015

Tom Friedhof

Senior Software Engineer

Tom has been designing and developing for the web since 2002 and got involved with Drupal in 2006. Previously he worked as a systems administrator for a large mortgage bank, managing servers and workstations, which is where he discovered his passion for automation and scripting. In his free time he enjoys camping with his wife and three kids.

Dec 02 2015

Now that the release of Drupal 8 is finally here, it is time to adapt our Drupal 7 build process to Drupal 8, while utilizing Docker. This post will take you through how we construct sites on Drupal 8 using dependency managers on top of Docker with Vagrant.

Keep a clean upstream repo

Over the past 3 or 4 years, developing websites has changed dramatically with the increasing popularity of dependency managers such as Composer, Bundler, npm, and Bower. Drupal even has its own tool that can handle dependencies, Drush, albeit it is more than just a dependency manager for Drupal.

With all of these tools at our disposal, it is very easy to include code from other projects in our application while not storing any of that code in the application code repository. This concept dramatically changes how you would typically maintain a Drupal site, since the typical way to manage a Drupal codebase is to have the entire Drupal docroot, including all dependencies, in the application code repository. Having everything in the docroot is fine, but you gain so much more power using dependency managers. You also lighten the actual application codebase, because your repo only contains code that you wrote. There are tons of advantages to building applications this way, but I have digressed; this post is about how we utilize these tools to build Drupal sites, not an exhaustive list of why this is a good idea. Leave a comment if you want to discuss the advantages / disadvantages of this approach.

Application Code Repository

We've got a lot going on in this repository. We won't dive too deep into the weeds looking at every single file, but I will give a high level overview of how things are put together.

Installation Automation (begin with the end in mind)

The simplicity in this process is that when a new developer needs to get a local development environment for this project, they only have to execute two commands:

$ vagrant up --no-parallel
$ make install

Within minutes a new development environment is constructed with VirtualBox and Docker on the developer’s machine, so that they can immediately start contributing to the project. The first command boots up three Docker containers: a web server, a MySQL server, and a Jenkins server. The second command invokes Drush to build the document root within the webserver container and then installs Drupal.

We also keep one more command running in a separate terminal window, to keep files synced from our host machine to the drupal8 container.

$ vagrant rsync-auto drupal8

Breaking down the two installation commands

vagrant up --no-parallel

If you've read any of my previous posts, I'm a fan of using Vagrant with Docker. I won't go into detail about how the environment is getting set up. You can read my previous posts on how we used Docker with Vagrant. For completeness, here is the Vagrantfile and Dockerfile that vagrant up reads to setup the environment.

Vagrantfile





require 'fileutils'

MYSQL_ROOT_PASSWORD="root"

unless File.exists?("keys")
Dir.mkdir("keys")
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
File.open("keys/id_rsa.pub", 'w') { |file| file.write(ssh_pub_key) }
end

unless File.exists?("Dockerfiles/jenkins/keys")
Dir.mkdir("Dockerfiles/jenkins/keys")
FileUtils.copy("#{Dir.home}/.ssh/id_rsa", "Dockerfiles/jenkins/keys/id_rsa")
end

Vagrant.configure("2") do |config|

config.vm.define "mysql" do |v|

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.image = "mysql:5.7.9"
      d.env = { :MYSQL_ROOT_PASSWORD => MYSQL_ROOT_PASSWORD }
      d.name = "mysql-container"
      d.remains_running = true
      d.ports = [
        "3306:3306"
      ]
    end

end

config.vm.define "jenkins" do |v|

    v.vm.synced_folder ".", "/srv", type: "rsync",
        rsync__exclude: get_ignored_files(),
        rsync__args: ["--verbose", "--archive", "--delete", "--copy-links"]

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "./Dockerfiles/jenkins"
      d.name = "jenkins-container"
      
      d.volumes = [
          "/home/rancher/.composer:/root/.composer",
          "/home/rancher/.drush:/root/.drush"
      ]
      d.remains_running = true
      d.ports = [
          "8080:8080"
      ]
    end

end

config.vm.define "drupal8" do |v|

    v.vm.synced_folder ".", "/srv/app", type: "rsync",
      rsync__exclude: get_ignored_files(),
      rsync__args: ["--verbose", "--archive", "--delete", "--copy-links"],
      rsync__chown: false

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "."
      d.name = "drupal8-container"
      d.remains_running = true
      
      d.volumes = [
        "/home/rancher/.composer:/root/.composer",
        "/home/rancher/.drush:/root/.drush"
      ]
      d.ports = [
        "80:80",
        "2222:22"
      ]
      d.link("mysql-container:mysql")
    end

end

end

def get_ignored_files()
  ignore_file = ".rsyncignore"
  ignore_array = []

  if File.exists? ignore_file and File.readable? ignore_file
    File.read(ignore_file).each_line do |line|
      ignore_array << line.chomp
    end
  end

  ignore_array
end

One of the cool things we are doing in this Vagrantfile is setting up a VOLUME for the Composer and Drush caches that persists beyond the life of the container. When our application container is rebuilt, we don’t want to download 100MB of Composer dependencies every time. By utilizing a Docker VOLUME, that folder is mounted from the actual Docker host.

Dockerfile (drupal8-container)

FROM ubuntu:trusty


ENV PROJECT_ROOT /srv/app
ENV DOCUMENT_ROOT /var/www/html
ENV DRUPAL_PROFILE=apa_profile


RUN apt-get update
RUN apt-get install -y \
	vim \
	git \
	apache2 \
	php-apc \
	php5-fpm \
	php5-cli \
	php5-mysql \
	php5-gd \
	php5-curl \
	libapache2-mod-php5 \
	curl \
	mysql-client \
	openssh-server \
	phpmyadmin \
	wget \
	unzip \
	supervisor
RUN apt-get clean


RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer


RUN mkdir /root/.ssh && chmod 700 /root/.ssh && touch /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys
RUN echo 'root:root' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN mkdir /var/run/sshd && chmod 0755 /var/run/sshd
RUN mkdir -p /root/.ssh
COPY keys/id_rsa.pub /root/.ssh/authorized_keys
RUN chmod 600 /root/.ssh/authorized_keys
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd


RUN composer global require drush/drush:8.0.0-rc3
RUN ln -nsf /root/.composer/vendor/bin/drush /usr/local/bin/drush


RUN mv /root/.composer /tmp/


RUN sed -i 's/display_errors = Off/display_errors = On/' /etc/php5/apache2/php.ini
RUN sed -i 's/display_errors = Off/display_errors = On/' /etc/php5/cli/php.ini


RUN sed -i 's/AllowOverride None/AllowOverride All/' /etc/apache2/apache2.conf
RUN a2enmod rewrite


RUN echo '[program:apache2]\ncommand=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"\nautorestart=true\n\n' >> /etc/supervisor/supervisord.conf
RUN echo '[program:sshd]\ncommand=/usr/sbin/sshd -D\n\n' >> /etc/supervisor/supervisord.conf





WORKDIR $PROJECT_ROOT
EXPOSE 80 22
CMD exec supervisord -n

We have xdebug commented out in the Dockerfile, but it can easily be enabled if you need to step through code. Simply uncomment the two RUN commands and run vagrant reload drupal8.
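Those two commented-out lines were lost in the embed above; they would look something like this (the exact package name and enable step are assumptions on my part):

# RUN apt-get install -y php5-xdebug
# RUN php5enmod xdebug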

make install

We utilize a Makefile in all of our projects, whether it be Drupal, nodejs, or Laravel, so that we have a similar way to install applications regardless of the underlying technology being executed. In this case make install executes a drush command. Below are the contents of our Makefile for this project:

all: init install

init:
	vagrant up --no-parallel

install:
	bin/drush @dev install.sh

rebuild:
	bin/drush @dev rebuild.sh

clean:
	vagrant destroy drupal8
	vagrant destroy mysql

mnt:
	sshfs -C -p 2222 root@<docker-host>:/var/www/html docroot

What this command does is ssh into the drupal8-container, utilizing Drush aliases and Drush shell aliases.

install.sh

The make install command executes a file, within the drupal8-container, that looks like this:

#!/usr/bin/env bash

echo "Moving the contents of composer cache into place..."
mv /tmp/.composer/* /root/.composer/

PROJECT_ROOT=$PROJECT_ROOT DOCUMENT_ROOT=$DOCUMENT_ROOT $PROJECT_ROOT/bin/rebuild.sh

echo "Installing Drupal..."
cd $DOCUMENT_ROOT && drush si $DRUPAL_PROFILE --account-pass=admin -y
chgrp -R www-data sites/default/files
rm -rf ~/.drush/files && cp -R sites/default/files ~/.drush/

echo "Importing config from sync directory"
drush cim -y

You can see on line 6 of the install.sh file that it executes a rebuild.sh file to actually build the Drupal document root utilizing Drush make. The reason for separating the build from the install is so that you can run make rebuild without completely reinstalling the Drupal database. After the document root is built, the drush site-install apa_profile command is run to actually install the site. Notice that we are utilizing installation profiles for Drupal.

We utilize installation profiles so that we can define modules available for the site, as well as specify default configuration to be installed with the site.
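In Drupal 8, a profile is just a directory containing an info file (plus any default config and code). A minimal sketch of what such a profile's info file could look like; the module list here is illustrative, not our actual one:

# drupal/profiles/apa_profile/apa_profile.info.yml
name: "APA Profile"
type: profile
description: "Installs the site with its modules and default configuration."
core: 8.x
dependencies:
  - views
  - pathauto
  - features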

We work hard to achieve the ability to have Drupal install with all the necessary configuration in place out of the gate. We don't want to be passing around a database to get up and running with a new site.

We utilize the Devel Generate module to create the initial content for sites while developing.

rebuild.sh

The rebuild.sh file is responsible for building the Drupal docroot:

#!/usr/bin/env bash

if [ -d "$DOCUMENT_ROOT/sites/default/files" ]
then
  echo "Moving files to ~/.drush/..."
  mv $DOCUMENT_ROOT/sites/default/files /root/.drush/
fi

echo "Deleting Drupal and rebuilding..."
rm -rf $DOCUMENT_ROOT

echo "Downloading contributed modules..."
drush make -y $PROJECT_ROOT/drupal/make/dev.make $DOCUMENT_ROOT

echo "Symlink profile..."
ln -nsf $PROJECT_ROOT/drupal/profiles/apa_profile $DOCUMENT_ROOT/profiles/apa_profile

echo "Downloading Composer Dependencies..."
cd $DOCUMENT_ROOT && php $DOCUMENT_ROOT/modules/contrib/composer_manager/scripts/init.php && composer drupal-update

echo "Moving settings.php file to $DOCUMENT_ROOT/sites/default/..."
rm -f $DOCUMENT_ROOT/sites/default/settings*
cp $PROJECT_ROOT/drupal/config/settings.php $DOCUMENT_ROOT/sites/default/
cp $PROJECT_ROOT/drupal/config/settings.local.php $DOCUMENT_ROOT/sites/default/
ln -nsf $PROJECT_ROOT/drupal/config/sync $DOCUMENT_ROOT/sites/default/config
chown -R www-data $PROJECT_ROOT/drupal/config/sync

if [ -d "/root/.drush/files" ]
then
  cp -Rf /root/.drush/files $DOCUMENT_ROOT/sites/default/
  chmod -R g+w $DOCUMENT_ROOT/sites/default/files
  chgrp -R www-data sites/default/files
fi

This file essentially downloads Drupal using the dev.make drush make file. It then runs composer drupal-update to download any composer dependencies in any of the modules. We use the composer manager module to help with composer dependencies within the Drupal application.

Running drush make with dev.make includes two other Drush make files: apa-cms.make (the application make file) and drupal-core.make. Only dev dependencies should go in dev.make. Application dependencies go into apa-cms.make. Any core patches that need to be applied go into drupal-core.make.

Our Jenkins server builds the prod.make file instead of dev.make. Any production-specific modules would go in the prod.make file.

Our make files for this project look like this so far:

dev.make

core: "8.x"

api: 2

defaults:
  projects:
    subdir: "contrib"

includes:
  - "apa-cms.make"

projects:
  devel:
    version: "1.x-dev"

apa-cms.make

core: "8.x"

api: 2

defaults:
projects:
subdir: "contrib"

includes:

- drupal-core.make

projects:
address:
version: "1.0-beta2"

composer_manager:
version: "1.0-rc1"

config_update:
version: "1.x-dev"

ctools:
version: "3.0-alpha17"

draggableviews:
version: "1.x-dev"

ds:
version: "2.0"

features:
version: "3.0-alpha4"

field_collection:
version: "1.x-dev"

field_group:
version: "1.0-rc3"

juicebox:
version: "2.0-beta1"

layout_plugin:
version: "1.0-alpha19"

libraries:
version: "3.x-dev"

menu_link_attributes:
version: "1.0-beta1"

page_manager:
version: "1.0-alpha19"

pathauto:
type: "module"
download:
branch: "8.x-1.x"
type: "git"
url: "http://github.com/md-systems/pathauto.git"

panels:
version: "3.0-alpha19"

token:
version: "1.x-dev"

zurb_foundation:
version: "5.0-beta1"
type: "theme"

libraries:
juicebox:
download:
type: "file"
url: "https://www.dropbox.com/s/hrthl8t1r9cei5k/juicebox.zip?dl=1"

(once this project goes live, we will pin the version numbers)

drupal-core.make

core: "8.x"

api: 2

projects:
  drupal:
    version: 8.0.0
    patch:
      - https://www.drupal.org/files/issues/2611758-2.patch

prod.make

core: "8.x"

api: 2

includes:

- "apa-cms.make"

projects:
apa_profile:
type: "profile"
subdir: "."
download:
type: "copy"
url: "file://./drupal/profiles/apa_profile"

At the root of our project we also have a Gemfile, specifically to install the Compass compiler along with various Sass libraries. We install these tools on the host machine and “watch” those directories from the host; vagrant rsync-auto picks up any changed files and rsyncs them to the drupal8-container.

bundler

From the project root, installing these dependencies and running a compass watch is simple:

$ bundle
$ bundle exec compass watch path/to/theme

bower

We pull in any 3rd party front-end libraries such as Foundation, Font Awesome, etc... using Bower. From within the theme directory:

$ bower install

There are a few things we do not commit to the application repo, as a result of the above commands.

  • The CSS directory
  • Bower Components directory

Deploy process

As I stated earlier, we utilize Jenkins CI to build an artifact that we can deploy. Within the Jenkins job that handles deploys, each of the above steps is executed to create a document root that can be deployed. Projects that we build to run on Acquia or Pantheon also have a build step that pushes the updated artifact to their respective repositories at the host, to take advantage of the automation that Pantheon and Acquia provide.

Conclusion

Although this wasn't an exhaustive walk thru of how we structure and build sites using Drupal, it should give you a general idea of how we do it. If you have specific questions as to why we go through this entire build process just to setup Drupal, please leave a comment. I would love to continue the conversation.

Look out for a video on this topic in the coming weeks. I covered a lot in this post without going into much detail. The intent of this post was to give a 10,000-foot view of the process. The upcoming video on this process will get much closer to the tarmac!

As an aside, one caveat that we ran into with setting up default configuration in our installation profile was with Configuration Management UUIDs. You can only sync configuration between sites that are clones. We have overcome this limitation with a workaround in our installation profile. I’ll leave that topic for my next blog post in a few weeks.

About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web