Enabling the Media Library in Drupal 8.7

Mar 02 2020

As of Drupal 8.7, the Media and Media Library modules can be enabled and used out-of-box. Below, you'll find a quick tutorial on enabling and using these features.

Out-of-the-box before Media and Media Library

In the past, there were two different ways to add an image to a page.

  1. An image could be added via a field, with the developer given control over its size and placement:

    Image field before media library
  2. An image could be added via the WYSIWYG editor, with the editor given some control over its size and placement:

    Image field upload choices screen

A very straightforward process, but these images could not be reused, as they were not part of a reusable media library.

Reusing uploaded media before Drupal 8.7

Overcoming image placement limitations in prior versions of Drupal required the use of several modules, a lot of configuration, and time. Sites could be set up to reference a media library that allowed editors to select and reuse images that had previously been uploaded, which we explained here.

This was a great time to be alive.

What is available with Media Library

Enabling the Media and Media Library modules extends a site's image functionality. First, ensure that the Media and Media Library core modules are enabled. 
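
If Drush is available, the two modules can also be enabled from the command line (a sketch using Drush 9+ command names):

```shell
# Enable the Media and Media Library core modules, then rebuild caches.
drush pm:enable media media_library -y
drush cache:rebuild
```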

Enable media library in drupal

A media entity reference field must be used with the Media Library. It will not work with a regular image field out-of-box.

Image field on manage display page

On the Manage form display page, select the "Media library" widget.

Media library widget on manage display page

On the "Node Add" and "Node Edit" forms, you’ll see the below difference between a regular image field and a field connected to the media library.

Media library field on node edit

Click on “Add media” and you’ll see a popup with the ability to add a new image to the library or to select an image that is already in the library.

Media field grid

If the field is configured to allow multiple media types, you'll see vertical tabs for each media type.

Media grid with multiple media types

WYSIWYG configuration

Configuring the media library for a specific text format takes a few steps in the WYSIWYG editor settings. A new icon appears showing a musical note overlapping the image icon. Add it to the active toolbar and move the regular image icon to the available buttons.

wysiwyg toolbar configuration

Under "Enabled filters," enable "Embed media." In the filter settings, vertical tabs let you choose the allowed media types and view modes. Once that configuration is saved, the WYSIWYG editor offers the same popup dialog for adding a new image to the media library or selecting an already-uploaded one.

wysiwyg media configuration

Once you are on a "Node Add" or "Node Edit" page with a WYSIWYG element, you'll see the media button (image icon plus musical note).

Media button on wysiwyg editor

Clicking on the media button brings up the same, familiar popup that we saw earlier from the image field:

media library grid

This article is an update to a previous explainer from last year. 

Dec 09 2019

If I said that the web isn't the same as it was a decade ago, would you agree?

You might, or you might not.

But when we examine the statistics, one conclusion stands out: the web is changing all the time, and testing has changed along with it.


Testing is one of the critical processes in application development; the success or failure of an application depends on it.

Cypress is one framework that helps you take on almost any type of web testing, and it may be the best option for your website.

You might be wondering why, out of all the testing tools on the market, I'm highlighting Cypress.

Well, let’s find out why. 

Why Cypress?

Cypress is a JavaScript-based end-to-end testing framework that does not use Selenium at all.

Now, what is Selenium?

Well, Selenium automates browsers. What the user does with that power is entirely up to them. Primarily, it is used for automating web applications for testing purposes. It is the core technology in countless other browser automation tools, APIs and frameworks.

So, coming back to Cypress: the tool is a fast, integrated test runner that handles everything from writing tests to running them in the browser. It can be set up swiftly and requires little or no user training.

Cypress comes with many handy advantages that make it an easy choice:

  • Automatic waiting: Cypress automatically waits for the DOM to load, for elements to become visible, for animations to complete, and more.
  • Real-time reloads: Cypress knows that after saving a test file you are going to run it again, so it automatically triggers a run in the browser as soon as you save the file.
  • Debuggability: The framework lets you debug the app under test directly from Chrome DevTools. It presents straightforward error messages and recommends how to approach them.
  • Architecture: Many testing tools run outside the browser and execute remote commands across the network; Cypress is the exact opposite. It executes in the same run loop as the application.
  • Works on the network layer: Cypress operates at the network layer by reading and altering web traffic. This lets it change everything coming in and out of the browser, and also change code that might interfere with its ability to automate the browser.
  • A new kind of testing: Cypress has control over the application, the network traffic, and native access to every host object, unlocking a way of testing that has never been possible before.

How is Cypress different from Selenium?

  • Installation: Cypress needs no configuration; all dependencies and drivers are installed automatically with the executable. Selenium requires installing language bindings and configuring drivers.
  • Browsers: Cypress only supports Chrome, while Selenium can run your tests against any browser.
  • Architecture: Cypress runs inside the browser and executes in the same run loop; Selenium runs outside the browser and executes remote commands.
  • Speed: Cypress test code runs alongside application code, producing extremely fast tests; Selenium automation scripts are slower.
  • Waiting for elements: Cypress runs in the browser and knows what is happening, so you don't have to add explicit waits; in Selenium, waiting for elements is an important task for effective automation.
  • Documentation: The Cypress team has invested heavily in documentation, so it is seamless and complete; Selenium's documentation is incomplete and harder to understand.

Limitations and challenges faced in Cypress 

While Cypress does a great job of giving developers and QA engineers the things they want in an automation tool, it does have some limitations.

  • Since its structure is very different from a Selenium-based end-to-end tool, you first need to understand the structure and then find the best way to create your scripts.
  • Because the framework is comparatively new, the community is small, and it can be challenging to find answers to problems.
  • File upload is not supported, and Cypress does not support cross-browser testing either. Nobody knows when these gaps will be closed, and for big projects these features are really important.
  • Cypress does not follow the Page Object Model approach that has been proven over time, so teams used to it will have to adjust.
  • The framework is available for only one language: JavaScript, so to work with it you need to know the language.

Can end to end testing deliver quality benefits?

Yes. End-to-end testing is important because it helps ensure the application functions correctly by testing it at every layer, right from the front end. Other benefits of performing end-to-end testing include:

  • It ensures the correctness and health of an application: the application is tested and validated at every layer, including the data, business, integration, and presentation layers.
  • It increases the reliability of an application: before release, the application is tested across different endpoints and from different devices.
  • It decreases future risks: end-to-end testing brings rigor to every iteration and sprint, so there are fewer chances of risks and failures down the road.
  • It decreases repetitive effort: the application is tested thoroughly, which reduces the chances of frequent breakdowns and repeated testing effort.

End-to-end testing with Drupal

Cypress makes it easy to add new tests to the website as you iterate on the code. Here are a few concepts that can help you with your Drupal website, starting with:

Setting up 

The site can be installed from the standard installation profile of a Drupal 8 distribution, along with the JSON API module. Drupal 8 also ships with RESTful Web Services, which can serve many purposes, such as querying nodes by field.

There are a few options for installing Cypress; the preferred one is through an NPM package.json. The first step is to create the file in the root of the project. Once the file is in place, install Cypress by running npm i from the project root.
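
A minimal package.json sketch of the kind described above (the version pin is illustrative; adjust to the current Cypress release):

```json
{
  "scripts": {
    "cy:open": "cypress open",
    "cy:run": "cypress run"
  },
  "devDependencies": {
    "cypress": "^3.8.0"
  }
}
```

With this in place, npm i installs Cypress locally and npm run cy:open launches the interactive runner.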

The first Test 

After installing Cypress via the NPM package.json, it is time to check that it works properly.

The test does two things:

  • It visits the website's root address (configured by an NPM script)
  • It verifies that the page has an element with “Cypress Testing” in it.
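
The two steps above might look like this in a spec file (the file name and the "Cypress Testing" text are illustrative, matching the example in the text; this runs inside the Cypress runner):

```javascript
// cypress/integration/first_test.spec.js
describe('first test', () => {
  it('loads the front page', () => {
    cy.visit('/');                  // root address; the base URL is set by the NPM script
    cy.contains('Cypress Testing'); // asserts an element containing this text exists
  });
});
```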

Creating the account 

The next step is to create user accounts. Depending on the environment, some options are more feasible than others. To do anything useful, the tests need to create Drupal entities, which requires access to an administrator account. You can create the account manually in the database and pass the credentials to Cypress through an environment variable, or you can let Cypress create the account every time it runs the tests, which reduces the chance of issues during the run.

Cypress's cy.exec() command gives tests access to system commands, notably Drush. Decide on credentials for the test user and add an object whose key values are passed to the test as environment variables, including the username and password for the admin account. With the credentials available, the tests can create the user.
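
As a sketch of that flow, a before hook can shell out to Drush to create the admin account; the command names are Drush 9's, and the environment variable names are placeholders:

```javascript
const user = {
  name: Cypress.env('adminUser'), // set via CYPRESS_adminUser or cypress.json "env"
  pass: Cypress.env('adminPass'),
};

before(() => {
  // cy.exec() runs a system command on the machine hosting Cypress; Drush must be on PATH.
  // failOnNonZeroExit lets the run continue if the account already exists.
  cy.exec(`drush user:create ${user.name} --password="${user.pass}"`, { failOnNonZeroExit: false });
  cy.exec(`drush user:role:add administrator ${user.name}`);
});
```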

Logging in 

To test as restricted or authenticated users, it is important to log in first. The most obvious way to do this is the same way a user would: through the UI. In fact, you should ensure that logging in through the UI is possible.

After each test, Cypress leaves the browser in the state it was in when it finished running the test. This is useful because it leaves you in a great position to discover the next steps. In this case, Cypress will leave the browser with the admin user logged in.

To keep tests independent from each other, Cypress removes the browser cookies before each test runs. This prevents side effects between tests, but it also means you need to log in again for every test that requires authentication.

Now that we need login code, we have to write it. You could reuse the log-in-via-UI test code, but if the same code has to run before every test, there wouldn't be much point in having the test to begin with. More importantly, logging in through the UI is slow; if you have to log in before every test, a lot of time is wasted. Fortunately, Drupal logs in simply by posting form data to the login URL.
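
A custom command along those lines can post directly to Drupal's login form (the field names are Drupal core's; this is a sketch, not the article's exact code):

```javascript
Cypress.Commands.add('login', (name, pass) => {
  cy.request({
    method: 'POST',
    url: '/user/login',
    form: true, // encode the body as application/x-www-form-urlencoded
    body: {
      name,
      pass,
      form_id: 'user_login_form', // Drupal needs the form id to process the submission
      op: 'Log in',
    },
  });
});
```

Tests can then call cy.login(name, pass) in a beforeEach hook, skipping the slow UI flow.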

Seed the data 

It is important to look at how JSON API is used to seed the data under test, and to understand how the API authenticates requests. By default, for unsafe (non-read) requests, JSON API and the standard REST module require a token in the request header. The token can then be used to create and delete data by posting to the endpoints exposed by the JSON API module.
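
A sketch of seeding an article this way, fetching the CSRF token from Drupal's /session/token endpoint first (the paths follow core defaults; the title is illustrative):

```javascript
let articleId; // keep the id so the test (or an after hook) can clean up

before(() => {
  cy.request('/session/token').then(({ body: token }) => {
    cy.request({
      method: 'POST',
      url: '/jsonapi/node/article',
      headers: {
        'X-CSRF-Token': token,
        'Content-Type': 'application/vnd.api+json',
      },
      body: {
        data: {
          type: 'node--article',
          attributes: { title: 'Seeded by Cypress' },
        },
      },
    }).then(({ body }) => {
      articleId = body.data.id; // JSON API returns the new node's uuid
    });
  });
});
```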

Note that Cypress provides an after hook. It is tempting to delete the test nodes in the after hook since, at that point, you have access to the test node's id and can delete the test content without having to query by title.

However, this approach is troublesome if the test runner quits or refreshes before running the after block. In that case, the test content never gets cleaned up, since you won't have access to the node's id in future test runs. Once the test articles are seeded, the "displays published articles" test visits the node's page and confirms that the fields render as expected.

Debugging using DevTools

Cypress has grown into an excellent Test Runner that helps you understand what is happening in an application and in the tests, but there is simply no substitute for all the amazing work browsers have done on their built-in development tools.

Your Cypress test code runs in the same run loop as your application. This means you have access to the code running on the page, as well as the things the browser makes available to you, like document, window, and, of course, debugger.

Running Cypress in continuous integration

If you want automated testing and continuous integration to work together, you need some sort of CI/CD server. These are hosted servers, and to implement them with Drupal 8, the tools must work together.

Developers must ensure that all tests pass on the local workstation. The Drupal configuration is exported, and the CI system spins up a fresh installation from it.


End-to-end testing shouldn't be hard. Cypress makes integration testing pleasant and enjoyable. You can write end-to-end tests without worrying about browsers, Selenium, and other scary stuff.

You may well agree that the framework is so nice that planning to use nothing but Cypress for integration testing would be fruitful. Plus, the documentation is pure gold: the Cypress docs are filled with best practices and examples.

At OpenSense Labs, we have quality Drupal experts who enable digital transformation for enterprises with our services and assistance. Contact us at [email protected]

Dec 09 2019

With Drupal 9 set to be released next year, upgrading to Drupal 8 may seem like a lost cause. However, beyond the fact that Drupal 8 is superior to its predecessors, it will also make the inevitable upgrade to Drupal 9, and future releases, much easier.

Acquia puts it best in this eBook, where they cover common hangups that may prevent migration to Drupal 8 and the numerous reasons to push past them.

The Benefits of Drupal 8

To put it plainly, Drupal 8 is better. Upon its release, the upgrade shifted the way Drupal operates, and it has only improved through subsequent patches and iterations, most recently with the release of Drupal 8.8.0.

Some new features of Drupal 8 that surpass those of Drupal 7 include improved page-building tools and content authoring, multilingual support, and the inclusion of JSON:API as part of Drupal core. We discussed some of these additions in a previous blog post.

Remaining on Drupal 7 means hanging on to a less capable CMS. Drupal 8 is simply more secure with better features.

What Does Any of This Have to Do With Drupal 9?

With an anticipated release date of June 3, 2020, Drupal 9 will see the CMS pivot to an iterative release model, moving away from the incremental releases that have made upgrading necessary in the past. That means that migrating to Drupal 8 is the last major migration Drupal sites will have to undertake. As Acquia points out, one might think “Why can’t I just wait to upgrade to Drupal 9?” 

While migration from Drupal 7 or Drupal 8 to Drupal 9 would be essentially the same process, Drupal 7 goes out of support in November 2021. As that deadline approaches, upgrading will only become an increasingly pressing necessity. By migrating to Drupal 8 now, you avoid the complications that come with a hurried migration and can take on the process incrementally. 

So why wait? 

To get started with Drupal migration, be sure to check out our Drupal Development Services, and come back to our blog for more updates and other business insights. 

Mar 13 2019

Note: This post refers to Drupal 8, but is very applicable to Drupal 7 sites as well.

Most Drupal developers are experienced in building sitewide search with Search API and Views. But it's easy to learn and harder to master. These are the most common mistakes I see made on this task:

Not reviewing Analytics

Before you start, make sure you have access to analytics if relevant. You want to get an idea of how much sitewide search is being used and what the top searches are. On many sites, sitewide search usage is extremely low and you may need to explain this statistic to stakeholders asking for any time-consuming search features (and yourself before you start going down rabbit holes of refinements).

Take a look for yourself at how the sitewide search is currently performing for the top keywords users are giving it. Do the relevant pages come up first? You’ll take this into account when configuring boosts.

Using Solr for small sites

Drupal 8 Search API comes with database search included. Search API DB has come a long way over the years and is likely to have the features you need for smaller sites. Using a Solr backend is going to add complexity that may not be worth it for the amount of value your sitewide search is giving. Remember, if you use a Solr backend you have to have Solr running on all environments used in the project and you’ll have to reindex when you sync databases.

Not configuring all environments for working Solr

Which takes us to this one. If you do use Solr (or another server-side index) you need to also make sure your team has Solr running on their local environments and has an index for the site. 

Your settings.php needs to be configured to connect to the right index on each environment. We use Probo for review sandboxes so we need to configure our Probo builds to use the right search index and to index it on build.

Missing fields in index or wrong type

Always include the 'Rendered HTML' field in your search index rather than trying to capture every text field on all your content types and then having to come back to add more every time you add a field. Include the title field as well, but don't forget to use 'Fulltext' as its field type. Only 'Fulltext' text fields are searchable by word.

Not configuring boosts

In your Processor settings, use Type-specific boosting and Tag-boosting via HTML filter. Tag boosting is straightforward: boost headers. For type-specific boosting you’re not necessarily just boosting the most important content types, but also thinking about what’s in the index and what people are likely looking for. Go back to your analytics for this. 

For example, when someone searches for a person’s name, are they likely wanting the top result to be the bio and contact info, a news posting mentioning that person, or a white paper authored by the person? So, even if staff bios are not the most important content on the site, perhaps they will need to be boosted high in search, where they are very relevant.

Not ordering by relevance

Whoops. This is a very common and devastating mistake. All your boost work be damned if you forget this. The View you make for search results needs to order results by Relevance: Descending.

Using AJAX

Don’t use the setting to ‘Use AJAX’ on your search results View. Doing so would mean that search results don’t have unique URLs, which is bad for user experience and analytics. It’s all about the URLs not about the whizzbang.

Not customizing the query string

Any time you configure a View with an exposed filter, take the extra second to customize the query string it is going to use. ‘search’ is a better query string than ‘search_api_fulltext’ for the search filter. URLs are part of your user interface.

No empty text

Similarly, when you add an exposed filter to a search you should also almost always be adding empty text. “No results match your search” is usually appropriate.

Facets that don’t speak to the audience

Facets can be useful for large search indexes and certain types of sites. But too many or too complex facets just create confusion. ‘Content-type’ is a very common facet, but if you use it, make sure you only include in its options the names of content types that are likely to make sense to visitors. For example, I don’t expect my visitors to understand the technical distinction between a ‘page’ and a ‘landing page’ so I don’t include facet links for these.

A screenshot of facets in Drupal. You can exclude confusing facet options.

Making search results page a node

I tell my team to make just about every page a visitor sees a node. This simplifies things for both editors and developers. It also ensures every page is in the search index: If you make key landing pages like ‘Events Calendar’ as Views pages or as custom routes these key pages will not be found in your search results. 

One important exception is the Search Results page itself. You don’t want your search results page in the search index: this can actually make an infinite loop when you search. Let this one be a Views page, not a Views block you embed into a node.

Important page content not in the ‘content’

Speaking of blocks and nodes, the way you architect your site will determine how well your search works. If you build your pages by placing blocks via core Block Layout, these blocks are not part of the page ‘content’ that gets indexed in the ‘Rendered HTML.’ Anything you want to be searchable needs to be part of the content. 

You can embed blocks in node templates with Twig Tweak, or you can reference blocks as part of the content (I use Paragraphs and Block Field.)

Not focusing on accessibility

The most accessible way to handle facets is to use ‘List of Links’ widget. You can also add some visually hidden help text just above your facet links. A common mistake is to hide the ‘Search’ label on the form. Instead of display: none, use the ‘visually-hidden’ class.

Dec 10 2018

Zivtech is happy to be offering a series of public Drupal 8 trainings at our office in downtown Philadelphia in January 2019. 

Whether you consider yourself a beginner or expert Drupal developer, our training workshops have everything you need to take your Drupal skills to the next level. 

Our experience

The Zivtech team has many years of combined expertise in training and community involvement. We have traveled all over the world conducting training sessions for a diverse range of clients including the United States Department of Justice, the Government of Canada, CERN, Howard Hughes Medical Institute, Harvard University, and more.

We pride ourselves in educating others about open source, and attendees will leave our trainings with the knowledge to build custom Drupal sites, solve technical issues, make design changes, and perform security updates all on their own. We also offer private, onsite trainings that are tailored to your organization's specific needs. 

Our public Drupal trainings for January 2019 include:

Interested in learning more about our upcoming trainings? Click here. You can also reach out to us regarding multi-training and nonprofit discounts, or personalized trainings. 

We hope to see you in January!

Nov 09 2018

Last week, the Children’s Hospital of Philadelphia (CHOP) Vaccine Makers Project (VMP) won a PR News Digital Award in the category “Redesign/Relaunch of Site.” The awards gala honors the year’s best and brightest campaigns across a variety of media. 

PR News Award on a table.

Our CEO, Alex, and our Director of Client Engagement, Aaron, along with members of the Vaccine Makers team attended the event at the Yale Club in New York City.

Screenshot of a Tweet posted by the PR News. Source

The Vaccine Makers Project (VMP) is a subset of CHOP’s Vaccine Education Center (VEC). It’s a public education portal for students and teachers that features resources such as lesson plans, downloadable worksheets, and videos. 

The Vaccine Makers team first approached us in need of a site that aligned with the branding of CHOP’s existing site. They also wanted a better strategy for site organization and resource classification. Our team collaborated with theirs to build a new site that’s easy to navigate for all users. You can learn more about the project here.

Screenshot of a Tweet from Vaccine Makers team. Source

We’d like to thank CHOP and the Vaccine Makers team for giving us the opportunity to work on this project. We’d also like to thank PR News for recognizing our work and hosting such a wonderful event. 

Finally, we’d like to congratulate our incredible team for their endless effort and dedication to this project. 

Nov 06 2018
Jody's desk


After a long run on MacBook Pros, I switched to an LG Gram laptop running Debian this year. It’s faster, lighter, and less expensive. 

If your development workflow now depends on Docker containers running Linux, the performance benefits you’ll get with a native Linux OS are huge. I wish I could go back in time and ditch Mac earlier.


For almost ten years I was doing local development in Linux virtual machines, but in the past year, I’ve moved to containers as these tools have matured. The change has also come with us doing less of our own hosting. My Zivtech engineering team has always held the philosophy that you need your local environment to match the production environment as closely as possible. 

But in order to work on many different projects and accomplish this in a virtual machine, we had to standardize our production environments by doing our own hosting. A project that ran on a different stack or just different versions could require us to run a separate virtual machine, slowing down our work. 

As the Drupal hosting ecosystem has matured (Pantheon, Platform.sh, Acquia, etc.), doing our own hosting began to make less sense. As we diversified our production environments more, container-based local development became more attractive, allowing us to have a more light-weight individualized stack for each project.

I’ve been happy using the Lando project, a Docker-based local web development system. It integrates well with Pantheon hosting, automatically making my local environment very close to the Pantheon environments and making it simple to refresh my local database from a Pantheon environment. 
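
For context, a minimal .lando.yml sketch for the Pantheon recipe mentioned above (the site name and id are placeholders for your own Pantheon site):

```yaml
name: mysite
recipe: pantheon
config:
  framework: drupal8
  site: mysite                                # Pantheon site machine name
  id: 00000000-0000-0000-0000-000000000000    # Pantheon site UUID
```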

Once I fully embraced containers and switched to a Linux host machine, I was in Docker paradise. Note: you do not need a new machine to free yourself from OSX. You can run Linux on your Mac hardware, and if you don’t want to cut the cord you could try a double boot.

Philadelphia City Hall outside Jody's office
A cool office view (like mine of Philly’s City Hall) is essential for development mojo


In terms of editors/IDEs I’m still using Sublime Text and vim, as I have for many years. I like Sublime for its performance, especially its ability to quickly search projects with 100,000 files. I search entire projects constantly. It’s an approach that has always served me well. 

I also recommend using a large font size. I’m at 14px. With a larger font size, I make fewer mistakes and read more easily. I’m not sure why most programmers use dark backgrounds and small fonts when it’s obvious that this decreases readability. I’m guessing it’s an ego thing.


In browser news, I’m back to Chrome after a time on Firefox, mainly because the LastPass plugin in Firefox didn’t let me copy passwords. But I have plenty of LastPass problems in any browser. When working on multiple projects with multiple people, a password manager is essential, but LastPass’s overall crappiness makes me miserable.

Wired: Linux, git, Docker, Lando
Tired: OSX, Virtual machines, small fonts
Undesired: LastPass, egos


I typically only run the browser, the text editor, and the terminal, a few windows of each. In the terminal, I’m up to 16px font size. Recommend! A lot of the work I do in the terminal is running git commands. I also work in the MySQL CLI a good deal. I don’t run a lot of custom configuration in my shell – I like to keep it pretty vanilla so that when I work on various production servers I’m right at home.

Terminal screenshot


I get a lot of value out of my git mastery. If you’re using git but don’t feel like a master, I recommend investing time into that. With basic git skills you can quickly uncover the history of code to better understand it, never lose any work in progress, and safely deploy exactly what you want to.

Once I mastered git I started finding all kinds of other uses for it. For example, I was recently working on a project in which I was scraping a thousand pages in order to migrate them to a new CMS. At the beginning of the project, I scraped the pages and stored them in JSON files, which I added to git.  At the end of the project, I re-scraped the pages and used git to tell me which pages had been updated and to show me which words had changed. 
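
The re-scrape comparison boils down to two git commands (the paths and file name are illustrative):

```shell
git diff --stat -- scraped/                 # which page files changed since the first scrape
git diff --word-diff -- scraped/page.json   # which words changed within one page
```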

On another project, I cut a daily import process from hours to seconds by using git to determine what had changed in a large inventory file. On a third, I used multiple remotes with Jenkins jobs to create a network of sites that run a shared codebase while allowing individual variations. Git is a good friend to have.

Hope you found something useful in my setup. Have any suggestions on taking it to the next level?

Oct 29 2018

At this year's BADCamp, our Senior Web Architect Nick Lewis led a session on Gatsby and the JAMstack. The JAMstack is a web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup. Gatsby is one of the leading JAMstack-based static site generators, and this session primarily covers how to integrate it with Drupal.

Our team has been developing a "Gatsby Drupal Kit" over the past few months to help jump start Gatsby-Drupal integrations. This kit is designed to work with a minimal Drupal install as a jumping off point, and give a structure that can be extended to much larger, more complicated sites.

This session will leave you with: 

1. A base Drupal 8 site that is connected with Gatsby.  

2. Best practices for making Gatsby work for real sites in production.

3. Sane patterns for translating Drupal's structure into Gatsby components, templates, and pages.

This is not an advanced session aimed at those already familiar with React and Gatsby. Recommended prerequisites are a basic knowledge of npm package management, git, CSS, Drupal, web services, and JavaScript. Watch the full session below. 

[embedded content]

Sep 25 2018
Sep 25

With phone in hand, laptop in bag and earbuds in place, the typical user quickly scans multiple sites. If your site takes too long to load, your visitor is gone. If your site isn’t mobile friendly, you’ve lost precious traffic. That’s why it’s essential to build well organized, mobile ready sites.

But how do you get good results?

  • Understand whom you’re building for
  • Employ the right frameworks
  • Organize your codebase
  • Make your life a lot easier with a CSS preprocessor

Let’s look at each of these points.

Design For Mobile

When you look at usage statistics, the trend is clear. This chart shows how mobile device usage has increased each year. 

Mobile device usage graph (source)

A vast array of mobile devices accomplish a variety of tasks while running tons of applications. This plethora of device options means that you need to account for a wide assortment of display sizes in the design process.

As a front end developer, it’s vital to consider all possible end users when creating a web experience. Keeping so many display sizes in mind can be a challenge, and responsive design methodologies are useful to tackle that problem.

Frameworks that Work

Bootstrap, Zurb, and Jeet are among the frameworks that developers use to give websites a responsive layout. Responsive web design provides for optimal viewing and interaction across many devices. Media queries are rules that developers write to adapt designs to specific screen widths or heights.

Writing these from scratch can be time consuming and repetitive, so frameworks prepackage media queries using common screen size rules. They are worth a try even just as a starting point in a project.
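As a sketch of what such a rule looks like written by hand (the selector and the 768px breakpoint are arbitrary choices, roughly tablet width):

```css
/* Sidebar floats next to the main content on wide screens... */
.sidebar {
  float: right;
  width: 30%;
}

/* ...and stacks below it on narrow screens. */
@media (max-width: 768px) {
  .sidebar {
    float: none;
    width: 100%;
  }
}
```

Frameworks ship collections of breakpoints like this so you don't have to pick and maintain the pixel values yourself.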

Organizing A Large Code Base

Depending on the size of a web project, just the front end code can be difficult to organize. Creating an organizational standard that all developers on a team should follow can be a challenge. Here at Zivtech, we are moving toward the atomic design methodology pioneered by Brad Frost. Taking cues from chemistry, this design paradigm suggests that developers organize code into 5 categories:

  1. Atoms
  2. Molecules
  3. Organisms
  4. Templates
  5. Pages

Basic HTML tags like inputs, labels, and buttons would be considered atoms. Styling atoms can be done in one or more appropriate files. A search form, for example, is considered a molecule composed of a label atom, input atom, and button atom. The search form is styled around its atomic components, which can be tied in as partials or includes. The search form molecule is placed in the context of the header organism, which also contains the logo atom and the primary navigation molecule.

Now Add CSS Preprocessors

Although atomic design structure is a great start to organizing code, CSS preprocessors such as Sass are useful tools to streamline the development process. One cool feature of Sass is that it allows developers to define variables so that repetitive code can be defined once and reused throughout.

Here’s an example. If a project uses a specific shade of mint blue (#37FDFC), it can be defined in a Sass file as $mint-blue: #37FDFC;. When styling, instead of typing the hex code every time, you can simply use $mint-blue. It makes the code easier for the team to read and understand. 

Let’s say the client rebrands and wants that blue changed to a slightly lighter shade (#97FFFF). Instead of manually hunting down every hex code across multiple files, a developer can simply update the variable ($mint-blue: #97FFFF;). The change then automatically applies everywhere $mint-blue is used.
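A minimal sketch of the idea (the selectors are illustrative):

```scss
// Define the brand color once...
$mint-blue: #97FFFF;

// ...and every rule that uses it updates on the next compile.
.button {
  background-color: $mint-blue;
}
a:hover {
  color: $mint-blue;
}
```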

Another useful feature of Sass is the ability to nest style rules. With plain CSS, a developer has to repeat the parent selector to target each child component. With Sass, you can nest styles within the parent selector instead; the nested version compiles to the same CSS, so it’s a kind of shorthand that automates the repetition.
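A sketch of the equivalence (selectors are illustrative):

```scss
// Nested Sass:
.site-header {
  .logo { float: left; }
  .menu { float: right; }
}

// ...compiles to the traditional CSS, with the parent repeated each time:
// .site-header .logo { float: left; }
// .site-header .menu { float: right; }
```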


Although there are a lot of challenges organizing code and designing for a wide variety of screen sizes, keep in mind that there are excellent tools available to automate the development process, gracefully solve all your front end problems and keep your site traffic healthy.

This post was originally published on July 1, 2016 and has been updated for accuracy.

Sep 18 2018
Sep 18

A slick new feature was added to Drupal core starting with the 8.5 release: out-of-the-box off-canvas dialog support.

Off-canvas dialogs are those which slide in from the edge of the page. They push existing content over to make space for themselves while keeping that content unobstructed, unlike a traditional dialog popup. These dialogs are often used for menus on smaller screens. Most Drupal 8 users are familiar with Admin Toolbar's use of an off-canvas style menu tray, which is automatically enabled on smaller screens.

Admin toolbar off-canvas

Drupal founder Dries posted a tutorial and I finally got a chance to try it myself.

In my case, I was creating a form for reviewers to submit reviews of long and complicated application submissions. Reviewers needed to be able to easily access the entire application while entering their review. A form at the bottom of the screen would have meant too much scrolling, and a traditional popup would have blocked much of the content they needed to see. Therefore, an off-canvas style dialog was the perfect solution. 

Build your own

With the latest updates to Drupal core, you can now easily add your own off-canvas dialogs.

Create a page for Your off-canvas content 

The built-in off-canvas integration is designed to load Drupal pages into the dialog window (and, as far as I can tell, only pages). So you will need either an existing page, such as a node edit form, or you'll need to create your own custom page through Drupal's routing system, which will contain your custom form or other content. In my case, I created a custom page with a custom form.

Create a Link

Once you have a page that you would like to render inside the dialog, you'll need to create a link to that page. This will function as the triggering element to load the dialog.

In my case, I wanted to render the review form dialog from the application full node display itself. I created an "extra field" using hook_entity_extra_field_info(), built the link in hook_ENTITY_TYPE_view(), and then configured the new link field using the Manage Display tab for my application entity. 

/**
 * Implements hook_entity_extra_field_info().
 */
function custom_entity_extra_field_info() {
  $extra['node']['application']['display']['review_form_link'] = array(
    'label' => t('Review Application'),
    'description' => t('Displays a link to the review form.'),
    'weight' => 0,
  );
  return $extra;
}

/**
 * Implements hook_ENTITY_TYPE_view().
 */
function custom_node_view(array &$build, Drupal\Core\Entity\EntityInterface $entity, Drupal\Core\Entity\Display\EntityViewDisplayInterface $display, $view_mode) {
  if ($display->getComponent('review_form_link')) {
    $build['review_link'] = array(
      '#title' => t('Review Application'),
      '#type' => 'link',
      '#url' => \Drupal\Core\Url::fromRoute('custom.review_form', ['application' => $entity->id()]),
    );
  }
}

Add off-canvas to the link

Next you just need to set the link to open using off-canvas instead of as a new page.

There are four attributes to add to your link array in order to do this:

      '#attributes' => array(
        'class' => ['use-ajax'],
        'data-dialog-renderer' => 'off_canvas',
        'data-dialog-type' => 'dialog',
        'data-dialog-options' => '{"width":"30%"}',
      ),
      '#attached' => [
        'library' => [
          'core/drupal.dialog.ajax',
        ],
      ],

The first three attributes are required to get your dialog working and the last is recommended, as it will let you control the size of the dialog.

Additionally, you'll need to attach the Drupal ajax dialog library. Before I added the library to my implementation, I was running into an issue where some user roles could access the dialog and others could not. It turned out this was because the library was being loaded for roles with access to the Admin Toolbar.

The rendered link will end up looking like:

<a href="https://www.zivtech.com/review-form/12345" class="use-ajax" data-dialog-options="{&quot;width&quot;:&quot;30%&quot;}" data-dialog-renderer="off_canvas" data-dialog-type="dialog">Review Application</a>

And that's it! Off-canvas dialog is done and ready for action.

May 18 2018
May 18

The Content Moderation core module was marked stable in Drupal 8.5. Think of it like the contributed module Workbench Moderation in Drupal 7, but without all the Workbench editor Views that never seemed to completely make sense. The Drupal.org documentation gives a good overview.

Content Moderation requires the Workflows core module, allowing you to set up custom editorial workflows. I've been doing some work with this for a new site for a large organization, and have some tips and tricks.

Less Is More

Resist increases in roles, workflows, and workflow states and make sure they are justified by a business need. Stakeholders may ask for many roles and many workflow states without knowing the increased complexity and likelihood of editorial confusion that results.

If you create an editorial workflow that is too strict and complex, editors will tend to find ways to work around the system. A good compromise is to ask that the team tries something simple first and adds complexity down the line if needed.

Try to use the same workflow on all content types if you can. It makes a much simpler mental model for everyone.

Transitions are Key

Transitions between workflow states will be what you assign as permissions to roles. Typically, you'll want to lock down who can publish content, allowing content contributors to create new drafts only.

Transitions between workflow states must be thought through (image from Drupal.org)

You might want some paper to map out all the paths between workflow states that content might go through. The transitions should be named as verbs. If you can't think of a clear, descriptive verb that applies, you can go with "Set state to %your_state" or "Mark as %your_state." Don't sweat the names of transitions too much, though; they don't seem to ever appear in an editor-facing way anyway.

Don't forget to allow editors to undo transitions. If they can change the state from "Needs Work" to "Needs Review," make sure they can change it back to "Needs Work."

You Must Allow Non-Transitions

Make sure the transitions include non-transitions. The transitions represent which options will be available for the state when you edit content. In the above (default core) example, it is not possible to edit archived content and maintain the same state of archived. You'd have to change the status to published and then back to archived. In fact, it would be very easy to accidentally publish what you had archived, because editing the content will set it back to published as the default setting. Therefore, make sure that draft content can stay as draft when edited, etc. 

Transition Ordering is Crucial

Ordering of the transitions here is very important because the state options on the content editing form will appear as a select list of states ordered by the transition order, and it will default to the first available one.

If an editor misses setting this option correctly, they will simply get the first transition, so make sure that first transition is a good default. To set the right order, you have to map each state to what should be its default value when editing. You may have to add additional transitions to make this all make sense.

As for the ordering of workflow states themselves, this will only affect ordering when states are listed, for example in a Views exposed filter of workflow states or within the workflows administration.

Minimize Accidental Transitions

But why wouldn't my content's workflow state stay the same by default when editing the content (assuming the user has access to a transition that keeps it the same)? I have to set an order correctly to keep a default value from being lost?

Well, that's a bug as of 8.5.3 that will be fixed in the next 8.5 bugfix release. You can add the patch to your composer.json file if you're tired of your workflow states getting accidentally changed.

Test your Workflow

With all the states, transitions, transition ordering, roles, and permissions, there are plenty of opportunities for misconfiguration even for a total pro with great attention to detail like yourself. Make sure you run through each scenario using each role. Then document the setup in your site's editor documentation while it's all fresh and clear in your mind.


"Published" Has Two Meanings

With Content Moderation, the term "published" now has two meanings. Both content and content revisions can be published (but only content can be unpublished).

For content, publishing status is a boolean, as it has always been. When you view published content, you will be viewing the latest revision, which is in a published workflow state.

For a content revision, "published" is a workflow state.

Therefore, when you view the content administration page, which shows you content, not content revisions, status refers to the publishing status of the content, and does not give you any information on whether there are unpublished new revisions.

Where's my Moderation Dashboard?

From the content administration page, there is a tab for "moderated content." This is where you can send your editors to see if there is content with drafts they need to review. Unfortunately, it's not a very useful report since it has neither filtering nor sorting. Luckily work has been done recently to make the Views integration for Content Moderation/Workflows decent, so I was able to replace this dashboard with a View and shared the config.

My Views-based Content Moderation dashboard

Reviewer Access

In a typical editorial workflow, content editors create draft edits and then need to solicit feedback and approval from stakeholders or even a legal team. To use content moderation, these stakeholders need to have Drupal accounts and log in to look at the "Latest Revision" tab on the content. This is an obstacle for many organizations because the stakeholders are either very busy, not very web-savvy, or both.

You may get requests for a workflow in which content creation and review takes place on a non-live environment and then require some sort of automated content deployment process. Content deployment across environments is possible using the Deploy module, but there is a lot of inherent complexity involved that you'll want to avoid if you can.

I created an Access Latest module that allows editors to share links with an access token that lets reviewers see the latest revision without logging in.

Access Latest lets reviewers see drafts without logging in

Log Messages BUG

As of 8.5.3, you may run into a bug in which users without the "administer content" permission cannot add a revision log message when they edit content. There are a few issues related to this, and the fix should be out in the next bugfix release. I had success with this patch and then re-saving all my content types.

May 01 2018
May 01

When coming up with a security plan for your Drupal website, or any website for that matter, you need to take several key factors into account. These key factors include your server host, server configuration, and authorized users. Typically, the weakest link in that chain is how your authorized users access the server, so first we want to secure access to allow your admins and developers in, but keep hackers out.

Hosting Provider

Choosing your hosting provider is one of the most important decisions to make when it comes to site security. Your server is your first line of defense. Not all hosts have the options that you need to implement best practices for securing the server itself, let alone websites or other services that will be running on it too. 

At Zivtech, we use VPS servers for some hosting solutions for our clients, but we also use specialized hosting solutions such as Pantheon and Acquia when it makes sense. Taking the time to figure out which services your site(s) need prior to moving to a host will save time later; you won’t need to move to another host when you realize they don’t provide the services you really need. It’s the concept of “measure twice and cut once.”

Authorized Users

Many shared hosting solutions are set up with cPanel, which typically gives users FTP access to their web server environment by default. FTP is not encrypted the way SSH communications are, so configuring SFTP is recommended if that’s all your host allows. 

The most secure way to connect to your server is through SSH, which is encrypted from end to end. Most VPS hosting companies give users access to their server through SSH by default, unless you install cPanel or other tools later. When using SSH, it’s much more secure to connect using an SSH key to authenticate with the server instead of a password. Typically, VPS hosts give you access to the root user to start with, so we need to stop authentication with that user as soon as possible.

Forcing SSH key authentication and configuring an authorized_keys file for each authorized user on the server is the best way to keep your server locked down from unauthorized access by malicious users. 

Get started by generating an SSH keypair on your local machine. I’m a security geek, so I use 4096-bit keys for added security, but 2048-bit keys are still considered secure at this point.

Below is an example of how you can generate an SSH key if you don’t have one already. The output files will be written to ~/.ssh. Without changing their name, they’ll be ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub. Let’s go ahead and generate our key for our root user now.

ssh-keygen -t rsa -b 4096

After generating your SSH keypair, you’ll want to copy the contents of the ~/.ssh/id_rsa.pub file, as that’s the SSH public key we will be adding to that authorized user’s authorized_keys file. It’s okay to pass this key around in emails, chat programs, and other unencrypted methods of communication; it’s the public key, which can and should be seen by others. The ~/.ssh/id_rsa file, on the other hand, is the private key, which should never be shared through unencrypted methods, and typically should not be seen by anyone but the person who owns it.
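In practice, `ssh-copy-id user@host` installs your public key on the server for you. As a sketch of what that does under the hood (using an illustrative local directory in place of the remote user's home directory):

```shell
# /tmp/demo-home stands in for the remote user's home directory.
rm -rf /tmp/demo-home && mkdir -p /tmp/demo-home/.ssh
chmod 700 /tmp/demo-home/.ssh
# Append the contents of your local ~/.ssh/id_rsa.pub (a fake key here):
echo "ssh-rsa AAAAB3NzaC1yc2E...example user@laptop" >> /tmp/demo-home/.ssh/authorized_keys
# SSH refuses keys in files with loose permissions, so lock the file down:
chmod 600 /tmp/demo-home/.ssh/authorized_keys
```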

Server Configuration 

File Permissions

File permissions are a huge hole for hackers to gain access if they’re not configured properly. Hosting platforms like Acquia and Pantheon typically handle all these permissions, so there isn’t much to worry about in those environments. 

Those running on their own VPS will want to ensure they have the permissions for their Drupal codebase, public files, and private files directories locked down. I’m not going to get into the specifics of file permissions in this post, but there are some great resources on Drupal.org on how to lock down your site’s files.

There are also some great modules out there to help facilitate secure files on your Drupal site. The File permissions module uses Drush to handle setting your site’s permissions, which could be helpful for those not comfortable with command line. The Security Review module allows you to take a look at various security related settings on your site including file permissions, so that is something I highly recommend running every so often on your sites to make sure everything is still locked down.

Apr 03 2018
Apr 03

In part one of this post, I went over how Drupal Security Advisories, SSL/TLS certificates, and thorough user account security help lay the foundation for keeping your Drupal site secure. In part two, we’ll take a look at user roles and permissions, input filters and text formats, and third party libraries.

User Roles and Permissions

To keep your site secure, always make sure that your user roles and permissions are configured properly. Depending on the modules installed and third party integrations, there could be additional permissions and/or roles to configure to ensure the site is still secure after installing a particular module. It’s important to read the full module README and/or module documentation to verify that all configuration options and permissions have been set up securely. In many cases, modules with very important security related permissions will either set them to a sane default configuration, or put up a notice on the modules page within the admin UI. Some will do both. Some will do neither, so that’s why you need to be aware. 

For each module you enable, there can be optional or required permissions that need to be configured. This is one of the easiest things to overlook as a Drupal beginner, so keep an eye on which modules you’re enabling, and if you have permissions set for all your roles before launching the site. 

User 1, the superadmin account, has full access to a Drupal site without needing any additional roles or permissions assigned, so testing with that user is not a good approach. Create some test users within each role and test how the site works both logged in and logged out to be certain that your roles and permissions are configured properly. You should only grant your most trusted admins access to the User 1 account. It’s also a great idea to rename the User 1 username from admin to something custom so the superadmin username can’t be easily guessed or enumerated.

Obscurity is not security, but in some cases it does help to further secure the User 1 and other admin accounts on the site when using other security best practices. Disabling the User 1 user completely through MySQL is a more heavy-handed approach to locking down the superadmin user. 

There are also contrib modules that help facilitate additional obscurity on top of your user security, such as Rename Admin Paths and Username Enumeration Prevention. These modules can help keep potential hackers, script kiddies, and bots guessing instead of providing them with easy to use information in their hacking attempts.

Input Filters and Text Formats

Input filters and text formats are a huge part of ensuring a Drupal site doesn’t get hacked. The wrong settings on an input filter or the text format for a text field, such as a comment field, can be an easy way in for potential hackers. 

Content editors should be able to add the content they need without compromising security. Certain input filters should be enabled in the text formats for every format to ensure that HTML can’t be malformed and that markup can’t be added to exploit the site. 

The most common of these exploits is a Cross Site Scripting (XSS) attack, in which malicious markup submitted through a form is stored and later served back to other users, where it executes in their browsers. These attacks can escalate to a hacker taking over your database or even your whole server, but they can be prevented with the proper text formats and WYSIWYG settings. Let’s take a look at some of the most important settings to configure to ensure your fields sanitize all their data properly before it is saved to the database. 

The HTML filter is probably the most important input filter provided by a text format. It allows for specific tags, and even specific attributes of a given tag, in the actual HTML. Without the HTML filter, your site is open season for hackers, script kiddies, and bots to muscle their way in through possible exploits. 

It’s important to ensure that embedded objects, such as <script> and <iframe> tags, are not allowed, or only allowed by trusted users. They can easily be used to run malicious code on a site from external locations. There’s also a newer filter available that disallows usage of <img> tag sources that are not hosted on the site by replacing them with a placeholder image. Be aware that when you create a new custom text format, no input filters are chosen by default. It is highly recommended to use the “Limit allowed HTML tags and correct faulty HTML” HTML filter as a minimum.
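For reference, that filter's configuration is a whitespace-separated list of allowed tags (with optional allowed attributes). A conservative starting point, similar in spirit to core's Basic HTML defaults, might look like:

```text
<a href hreflang> <em> <strong> <cite> <blockquote cite> <code> <ul type> <ol start type> <li> <dl> <dt> <dd> <h2 id> <h3 id> <p> <br>
```

Note what is absent: no <script>, <iframe>, or event-handler attributes, and no <img> unless you deliberately allow it.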

There are four text formats available in Drupal core out of the box: Basic HTML, Restricted HTML, Full HTML, and Plain Text. In general, you probably want to give most users Basic HTML, but you might want to give your content editors more control over the markup they can add. Adding an editor role to the Restricted HTML format with some customized settings may be a good idea for those users. 

Text formats

You want to give the least permissive text formats to untrusted users, so Plain Text is a great choice for comments on blog posts to avoid spam links and malicious markup that isn’t filtered properly. Text formats can also be associated with a WYSIWYG, so be sure to enable or disable that for roles you trust or don’t trust as much.

Note: Keep in mind that new text formats and WYSIWYG settings can create new permissions on the site, so be sure to review all roles and permissions again after getting these configured.

Third Party Libraries

One of Drupal’s strengths is how easy it is to integrate with third party JavaScript libraries through contributed themes and modules. The problem is that these third party libraries are not covered by the Drupal Security Team, so they don’t get their own security advisories like Drupal core and Drupal contrib projects do. 

There have been a few cases when very popular libraries like CKEditor have received a Public Service Announcement on the Drupal Security Advisories page. PSAs like these are not very common unless there is a major security advisory for a third party library, or possibly a major incoming security issue for Drupal core that developers should be ready for. This is something that’s often overlooked, even by seasoned Drupal developers.

The lack of a true security advisory built into Drupal for these third party libraries means that it is up to you and/or your team to ensure that any libraries you have installed are still secure. This typically means that you need to sign up for security alerts for that project, if they have them, or keep an eye out for project releases that are security releases. Again, there are times that you might get a warning from the Drupal Security Team in a PSA about a library’s upcoming security release, but don’t count on this as a reliable way to prevent an attack; it’s not the Drupal Security Team’s job to monitor every third party library out there. Luckily, many third party libraries are hosted on GitHub and have an easy to follow releases page, although some do not and require a bit more due diligence to follow the latest security releases. Wouldn’t it be great if every open source project had a security team as amazing as Drupal’s?


A comprehensive Drupal security plan usually requires team effort. There are plenty of options out there to ensure that every Drupal site can stay secure, so don’t leave your site out in the cold for hackers to compromise it, or, even worse, steal user information and post it publicly for other hackers to use. This has become a common trend with sites that are hacked, which creates a bad situation for both the company that was hacked as well as for the users who had their data stolen. 

If you don’t have the time or resources to keep your site secure, seek the help of a team of experts. Keep calm and stay secure out there, fellow Drupalers!

Mar 27 2018
Mar 27

There’s no foolproof way to get an unhackable Drupal site; there could always be an exploit that we don’t know about yet. But there are quite a few ways you can help reduce the risk of getting hacked. Let’s dive into some of the most common ways Drupal sites get hacked, and how to secure your site against these points of entry.

Drupal Security Advisories

One of the most common ways to get hacked is to fall behind on known Drupal Security Advisories. Keeping up to date on the latest advisories is ultimately your first line of defense. 

There are security advisories for both Drupal core and contributed projects with varying levels of security risk for the exploits found. You can sign up for the security email list to make things easier to keep track of. You can do this by logging into your Drupal.org account and editing your user profile. From there you can subscribe to the security newsletter on the newsletters tab. This list will email you soon after Drupal Security Advisories are released to the public, which helps quickly notify you or your team of possible exploits that need to be fixed. 

You can use the Update Status module in Drupal core to see which sites are affected by these advisories, and add an email address to receive security alerts daily or weekly. In a few cases, these alerts have even reached us faster than the security email list when the list was backed up. 

Typically, Drupal core security advisories come out on the third Wednesday of the month unless it’s a highly critical update that needs to be patched sooner. Contrib project security advisories can come out on any given Wednesday. 

Drupal 8 has a new release schedule that schedules upcoming minor releases. It’s important to note that with a new Drupal 8 minor version release, the previous minor version becomes unsupported and no longer gets security advisories or updates. It will be important for development teams to plan for these updates moving forward with Drupal 8, as these updates are not typical to the release schedules of Drupal 6 or Drupal 7 core. 

The Drupal core release schedule for both Drupal 7.x and 8.x can be found on the Drupal core release cycle page on Drupal.org. The overview page outlines the upcoming Drupal 8 core releases and major project milestones. Get familiar with them if you don’t want to be blindsided by some updates you weren’t planning on doing that quarter.

Specialized Drupal hosting companies, such as Pantheon and Acquia, have started applying security updates upstream to your codebase, which can be merged with the click of a button in most cases, or even automatically with some additional configuration. This allows Drupal core security updates to be applied with less effort to sites that are hosted on their platforms. Drupal contrib is still up to you to support though. Other companies like Drop Guard and myDropWizard provide various service plans to keep your whole site up to date and not locked into a specific hosting platform. 

Whether you’re looking to do your Drupal security in-house, host on a specialized hosting service, or pay for a support service provider to handle it for you, Drupal Security Updates are by far the most important aspect of your Drupal site’s security. Instead of derailing this whole post about why Drupal Security Advisories are so important, Google “Drupalgeddon” if you are unfamiliar with the term, and you’ll quickly see why they’re at the top of my list.


Use SSL/TLS for All the Things!

Unencrypted login forms are an easy way for hackers to gain access to a Drupal site. All websites, especially those that have any sort of authentication process, should be using SSL/TLS encryption by default by now. 

SSL/TLS encrypts the communications between a client browser and the server to ensure that all data being transferred back and forth is encrypted from end to end. This stops hackers and malicious users from eavesdropping on data that you’re sending or receiving from the web server. 

This encryption is extremely important for any sites that require authentication, such as Drupal or WordPress sites, as they both have administrative users that can potentially be hacked if the login form is not secured. It’s imperative that sites that handle financial information and other personal identification information always use SSL/TLS; otherwise that data should be considered compromised. You’ll know when your browser is in a secure SSL/TLS session: you’ll see a green lock with “Secure” next to it and https:// at the start of the URL in your browser address bar. 


Sites that are not secured by SSL/TLS are vulnerable to malicious users being able to “listen in” on the unencrypted communications between the client browser and the web server through various methods. This information could be login credentials, personal identification information, or something like payment information, which should always be encrypted when sending electronically. 

Encrypting these communications with SSL/TLS stops this from happening, unless an attacker mounts a “man-in-the-middle” attack by spoofing the SSL certificate itself and posing as the real, secure site. It’s good practice to verify that the certificate actually belongs to the site by clicking the lock icon and viewing the certificate information.

Any site can get an SSL certificate now through the Let’s Encrypt service, so cost and availability are no longer excuses for running non-SSL websites. Pantheon’s hosting service even provides SSL certificates powered by Let’s Encrypt for all domains on your account as part of their hosting plans.

Extended Validation (EV) certificates provide additional information about the certificate’s owner directly in the address bar, such as the company name and/or the country the certificate was issued for. You’ll generally see EV certificates used by financial institutions or larger corporations, but any website that completes the extended validation process can use one. At the time of this writing, Let’s Encrypt does not support EV or wildcard certificates, so there are still some cases where it isn’t fully applicable, but it’s a great, free service for non-EV and non-wildcard certs.

Drupal User Account Security

We now know that our communications between the browser and the server are encrypted through SSL/TLS, but do we know how secure our Drupal users’ passwords are? To keep your site secure, ensure that all users, especially your content editors and admins, are not your weakest link. There are multiple levels of user security we need to be aware of when setting up our Drupal website. We’ll take a look at these levels and make sure we have all our users connecting as securely as possible.

Let’s start by taking a look at our most common weakest link, the user password. While there is a nice password strength suggestion tool in Drupal core, there’s nothing in core to force users to use secure passwords out of the box. 


At Zivtech, we use the Password Policy contrib module on most Drupal site builds to ensure that all users have password requirements that make them more difficult to crack. We typically require several elements to the password, such as numbers, capitalization, and punctuation, to add to the complexity of each user’s password. This is important to ensure that privileged users and those with the ability to add content to the site do not have their accounts compromised by a weak password. 

Drupal does have some mechanisms built in that will throttle failed login attempts, but these reset over time and the process can be repeated unless additional steps are taken to block the IP. Ultimately, weak passwords can be an easy point of entry for skilled hackers.
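To make the idea concrete, here is a minimal sketch (in JavaScript) of the kind of complexity check a tool like Password Policy enforces. The specific rules below (length, digits, capitals, punctuation) are illustrative assumptions, not the module's actual defaults.

```javascript
// Sketch of a password complexity policy: every rule must pass.
// The rule set here is hypothetical; real policies are configurable.
function meetsPolicy(password) {
  const rules = [
    (p) => p.length >= 12,          // minimum length
    (p) => /[0-9]/.test(p),         // at least one digit
    (p) => /[A-Z]/.test(p),         // at least one capital letter
    (p) => /[^A-Za-z0-9]/.test(p),  // at least one punctuation/symbol
  ];
  return rules.every((rule) => rule(password));
}
```

Requiring several independent character classes like this dramatically increases the search space a brute-force attacker has to cover.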

Proper administrator and content editor training is also important to ensure that these users don’t unintentionally negate other steps that were taken to secure the site. This means that site administrators and content editors should be informed that sharing login credentials over insecure communications like email should never happen. Credentials shared across email and other unencrypted communications should be considered compromised as soon as you hit the send button. There are some great password managers out there (we use LastPass) that store encrypted credentials and even allow sharing of credentials securely between team members. Always use one of these tools instead. 

PSA: If you are still sending any passwords or login credentials over email, please stop before you cause a major security breach in your organization or a client’s!

Check back next week for part two! We’ll go over security for user roles and permissions, input filters and text formats, and third party libraries.

Jan 18 2018

During the redesign process of a website, many small changes can ultimately affect the traffic of the new site. The key is to identify ahead of time any changes that might break SEO or alter how the site appears to search engine spiders, so you can avoid traffic drops. In the end, we want the site to look fresh and new while still getting the same traffic as the old design, or more.

At Zivtech, we look at many factors in the planning phase of a website redesign project and try to identify those that could cause drops in traffic after the new design is launched. Once these have been identified, we ensure all of these tasks have been completed before launch. Let’s take a look at some of these factors and how to avoid traffic drops on your next website redesign project.

Meta Tags

We typically build sites with Drupal, so the Metatag module handles much of the meta tag configuration and display on the site. If you aren’t using Drupal, though, changes to your front-end design could affect your meta tags and confuse search engine spiders. You’ll need to make sure that all of your pages have meta tags and that there aren’t any duplicates.

Broken Links

Broken links are a huge problem during website redesigns. This could be a result of changes in the menu structure or in path structures for content types. Broken links mean that users and search engines can’t find the pages they’re looking for, which can really wreak havoc on your site traffic statistics. 

To avoid broken links in Drupal, we can use the Link checker module, but there are also third party tools that can be used for non-Drupal sites. Google Search Console provides some additional tools to identify broken links and 404 pages too.


URL Redirects

Broken redirects or missing redirects to new URLs are also a big problem on site redesigns. These typically happen due to changes in URL patterns or menu structures. The Redirect module in Drupal provides an interface to add redirects for your pages without any coding experience. Non-Drupal sites can use .htaccess files or redirect statements in their web server configuration to ensure that all URLs that are changing have proper 301 redirects.
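For non-Drupal sites on Apache, a redirect statement in an .htaccess file can look like the following. The paths here are placeholders, not real URLs; substitute your own old and new paths:

```apacheconf
# Hypothetical 301 redirects from old paths to their new locations.
Redirect 301 /old-about-page /about
Redirect 301 /news/2017/launch /blog/launch
```

A 301 (permanent) redirect, as opposed to a 302, tells search engines to transfer the old URL's ranking signals to the new one.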

XML Sitemap

As URLs are changed on your site during a redesign, you’ll want to ensure that the XML sitemap has updated URLs that match the new ones. The XML sitemap module handles this for us on a Drupal site. 

If you aren’t running Drupal, a plugin for your CMS may handle this, or you’ll need to generate a new sitemap using third party tools. Once this has been completed, you can log in to Google Search Console and resubmit your sitemap for indexing.

Google Analytics

If you forget to place your Google Analytics tracking code in your new site’s markup before launch, you can end up in the dark when it comes to traffic fluctuations. The Google Analytics module handles the placement of this tracking code on a Drupal website, and even provides a status warning on the Drupal status page if the tracking ID has not been configured yet. 

Those who aren’t using Drupal should follow the instructions provided by Google Analytics to place the code snippet in their site’s markup, or use a plugin provided by their CMS of choice. With the Google Analytics tracking code in place, your organization can get a much better overview of how your site performs after the redesign launches. It’s much easier to track your successes or failures in your redesign if you were already running Google Analytics, but a relaunch is a great time to start using it too. 

While the factors in this post are some of the most important that we look at during a site redesign project at Zivtech, each project is unique and could require additional changes to your site to ensure you avoid traffic drops after launching your new design. 

Overall, you want to identify any changes that could affect URLs, meta data, and even content structure that search engine spiders or your visitors might be confused about. Even small changes or a missing meta tag can affect your search engine rankings, which can lead to traffic drops. Do your future self a favor and make a list of the individual factors that could affect your site. Then ensure that list is completed before calling your next website redesign a success.

Jan 09 2018

Why we abandoned SASS and switched to PostCSS in our Drupal theme

A few months ago at Zivtech, our team started to look at the best ways to improve the performance of our theme and take full advantage of controlling our markup with Twig. We had ported our Drupal 7 theme to Drupal 8 and added a lot of great functionality, such as Pattern Lab, CSS regression tests, etc. We wrote our own utility classes, mixins, and SASS functions, integrated flexboxgrid, and used Gutenberg as a responsive typography system. While we had all that we needed, we still ended up with too much convolution in our process and bloated CSS bundles at the end of our projects.

While SASS has helped us tremendously and allowed fast paced development in the past few years, we lost track of our CSS output. It’s a common complaint about preprocessors, and if we take a closer look at the important CSS conventions we need to watch for (DRY, reusable, low specificity, no deep nesting, etc.), I can see how we slowly drifted away from a rigorous implementation. There are several reasons for it, SASS not necessarily being the culprit. It is a very versatile tool, and as such, the responsibility falls on its user. The real question is how to implement a proper workflow that we can enforce as a team in order to:

  • Deliver a consistent product
  • Improve performance and quality
  • Facilitate development among developers

The answer may be…write less CSS! Not LESS, less. Leverage Twig to write or generate dynamic classes based on a solid set of utility classes. The concept is not new, but front-end Drupal developers have been burned by the lack of control of the markup for a long time. We don’t have excuses now, so let’s change our ways and enter the postmodern era.
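As a hypothetical illustration of that approach, layout and spacing in a Twig template can come from utility classes rather than new CSS rules (the class names below are from Basscss-style utility sets, and the variable is made up for the example):

```twig
{# Build layout and spacing from utility classes instead of new rules. #}
<div class="flex flex-wrap p2 {{ is_featured ? 'bg-silver' : '' }}">
  {{ content }}
</div>
```

No stylesheet change is needed when the markup changes; the classes compose.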


What are the benefits of abandoning SASS in favor of writing vanilla CSS and post processing it?

I think there are good answers all over the web, but I’ll throw in my two cents. The most compelling argument for me resides in the limitations imposed by using vanilla CSS. By that I mean that the best solution will often reside in adding a utility class to your markup. Moreover, if no shortcut is available (a @mixin, or worse, an @extend, in which case you’ll most likely repeat yourself), you might actually LEARN something by having to create a template or write a preprocess function. You’ll eventually benefit from having this exposed for further modifications. You also won’t add unnecessary CSS, obviously.

You will also be more mindful of your specificity and probably the nesting of your rules. Long, extraneous selectors are bad for performance, therefore utility classes will come in handy again and be the preferable solution. 

I think a clean looking SASS stylesheet can be misleading. I know I am very particular about certain aspects of my code writing (indentation, line breaks, spacing, readability in general), but SASS is very deceptive in that respect: the rendered code is a bundle that you rarely get to take a close look at. In other words, one clean @mixin line may result in 10 lines of CSS repeated all over your stylesheet. Or the five @extend statements you wrote will result in a giant multi-line selector that’s rather heavy on your browser (remember, CSS parsers read selectors right to left).

You get the idea. We were seeking a process that actually reinforces good practice. 

There are many PostCSS NPM modules available to port the same types of functions SASS handles natively. But apart from the mandatory @import we needed to be able to get a modular organization of our files, we didn’t want PostCSS to make the same mistakes that our SASS implementation was doing in the past. 

By using a smart utility class system (we chose Basscss) and reworking our templates, we managed to reduce our theme CSS bundle file from 54k to 25k. That’s an improvement in itself for sure, but not as much as the exponential effect it will have on a finished website. By laying down a solid foundation and workflow we can ensure:

  • A drastic reduction of the bundle size
  • A clear, well defined system with understandable rules
  • A way to allow our site builders to implement minimal styling (layout, spacing)
  • A more manageable stylesheet structure for teams (by reducing the amount of tools)
  • A thorough template based system in which developers have more control over markup and dynamic classes with native Twig functions

Again, this is not a bashing of SASS, which is a powerful and very useful system of writing front end code. This is merely a recalibrating of our priorities, and a way to streamline our code and processes.

But truly, if I need functions and more complicated dynamic handling of front-end code, I’ll admit that writing them in SASS is not for me; I’d rather use the language that was built for this purpose: JavaScript. Plus, in a Drupal context, we can bind our back end to JS and use drupalSettings to leverage variables from our website config. Seems like a no-brainer in the long term.

What we used

We stuck with gulp for our setup, and here’s the list of the plugins:

  • BassCSS - a succinct, concise, and easy to work with collection of CSS utility classes
  • gulp-postcss - self explanatory
  • postcss-cssnext - a great postcss plugin that bundles all of your extra PostCSS needs
  • gulp-sourcemaps - we still want to map our CSS files for faster debugging
  • postcss-flexibility - a polyfill for flexbox
  • gulp-concat-css - bundle all your CSS files into one
  • gulp-cssnano - for a clean, compressed bundle
  • css-mqpacker - group your media queries together for better performance
  • browser-sync - just because it’s an amazing piece of software
  • gulp-css-info - super useful plugin to parse the CSS and create a searchable style reference doc - we turned this one into a CSS Info Drupal 8 plugin
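Roughly, those plugins wire together like this. This is a build-configuration sketch rather than our exact gulpfile; the source and destination paths are placeholders, and it assumes the packages above are installed via npm:

```javascript
// Sketch of a gulp task chaining the PostCSS pipeline described above.
const gulp = require('gulp');
const postcss = require('gulp-postcss');
const sourcemaps = require('gulp-sourcemaps');
const concat = require('gulp-concat-css');
const cssnano = require('gulp-cssnano');

gulp.task('css', function () {
  return gulp.src('css/src/**/*.css')       // vanilla CSS sources
    .pipe(sourcemaps.init())
    .pipe(postcss([
      require('postcss-cssnext'),           // future CSS features + autoprefixing
      require('postcss-flexibility'),       // flexbox polyfill
      require('css-mqpacker'),              // group media queries together
    ]))
    .pipe(concat('bundle.css'))             // one bundle file
    .pipe(cssnano())                        // compress
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest('css/dist'));
});
```

The ordering matters: media queries are packed and the bundle concatenated before cssnano compresses the final output.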

A few other tools are being used to tie all these together, but you can find the full setup by downloading the theme on Drupal.org at https://www.drupal.org/project/bear_skin.

Watch a video of the module in action.


Questions? Comments? Ask below!

Dec 21 2017

As a content management system, Drupal is designed to simplify the process for adding, modifying, and removing content, even for users without much technical expertise. 

Beyond its core functionality, Drupal has a number of modules that make life even easier for content writers and editors. Some of these modules, like Views and CKEditor, were added to core when Drupal 8 was released. 

These are some of our other favorite modules that can further simplify workflows for content editors. 

Real-time SEO for Drupal

Content writers always need to strike the right balance between user friendliness and search engine optimization in their work. Content should incorporate SEO strategies in order to appear in relevant searches while also remaining relevant and appealing to site users. 

Real-time SEO for Drupal promises to help “optimize content around keywords in a fast, natural, non-spam way.” The module analyzes elements of your content like page length, meta descriptions, keywords, and subheadings. This helps boost SEO without sacrificing readability, striking that careful balance. This module also requires the Metatag module.


Pathauto

Drupal identifies every piece of content with a node ID, which is displayed in the URL. The Pathauto module uses tokens to automatically create URL aliases based on patterns that you establish (for example, blog/[node:title]).

These URLs are more user friendly than the standard node identifiers. They’re also beneficial for site structure and linking because they’re easier to recall. 


Link checker

Link checker helps detect broken links in your content. The module checks remote sites and evaluates HTTP response codes, then displays broken links in the reports/logs section as well as on the content edit page. 


Scheduler

Scheduler allows you to save a post as a draft and automatically publish it at a later date. This module is great for those who work with an editorial calendar. Content writers can prepare posts ahead of time, schedule them to publish, and not worry about needing to set a reminder to publish on the desired day.


Diff

Diff is another useful module for teams with a number of content contributors or editors. It adds a tab showing all of the changes made to each piece of content. Revisions allow you to do this as well, but Diff goes a step further by showing when an individual word is added, changed, or deleted.

If you’re a content editor, which modules do you find most useful?

Oct 23 2017

What is Decoupling?

Decoupling has been gaining momentum in the past couple of years. An increasing number of websites and applications combine their content management system’s backend and editorial capabilities with a separate framework that renders the front end.

The idea is to make data available in a different format (usually JSON) so the framework can parse it, and so the developer can take full control of the markup, UI, routing, etc. While it’s not ideal for certain types of sites (if you have a lot of pages for instance), it becomes very handy when dealing with single page applications or projects that require a lot of user interaction.
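For instance, a decoupled endpoint might return content shaped something like the following. The field names and values here are purely illustrative, not the output of any specific Drupal module:

```json
{
  "id": "42",
  "type": "node--article",
  "attributes": {
    "title": "Hello, decoupled world",
    "created": "2017-10-23T12:00:00+00:00",
    "body": "<p>Rendered by the framework, not by Drupal.</p>"
  }
}
```

The front-end framework fetches this data, decides how to render it, and owns the markup from there.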

I recently attended Decoupled Dev Days in New York City. This two-day event gathered a small portion of the Drupal community (and others) for an in-depth look at the work many people are putting toward making Drupal an attractive backend for a decoupled app. The guest speakers also included major contributors to Angular.js and Ember.js, which was beneficial; the goal was not to make another Drupal-centric conference, but rather to attract a broader audience within the tech community.

It was a great opportunity to see the community at work and to get insights about implementation, performance, tools, and more while working on a decoupled app myself.

Two sessions presented what we would call Drupal install profiles: basically starter kits for a decoupled Drupal 8 project. Contenta and Reservoir both aim to get your Drupal install ready for beheading, and both are quite opinionated about what they keep and remove from a regular install. I’d recommend reading this comparison article to learn more. Keep in mind that these projects are in constant development, and early writing about them may not reflect some of the latest development phases.

Now An Then

Our team started working on a decoupled project for NowAnThen.com a few months ago. This site, a product of Wonderful Machine, is aimed at engaging photographers, genealogists (amateur and expert!), and educators in creating timelines of their own images or of photos uploaded throughout the site. Users add the photos to their personalized timelines to create another medium for visual storytelling. Some timelines have only a few photos while others (such as the Global Timeline found on the home page) have a vast number of images attached. No matter what, the images should load smoothly onto the timeline to create a seamless history-viewing experience.


When we first heard the pitch for this site, we were immediately intrigued and up for the challenge. The site is essentially based on a scrollable, searchable timeline of images with metadata (caption, linked author, etc.). We inherited a prototype that was buggy and not scalable. The client wanted a robust solution in order to handle high traffic and offer more features while keeping the performance optimal. A user’s focus should be on the images and content of the site, rather than the site’s shortcomings.


We chose to decouple our own install profile with React for this project. It made perfect sense to use Drupal only for some basic functionality (paging, login and account creation, and obviously all the good backend stuff), and keeping that functionality as simple as possible was essential. We chose to keep the routing in Drupal, since handling it in React seemed to add a lot of extra work for no legitimate reason. We essentially decoupled just the main content region of our theme layer.

I had worked with Angular and Drupal in the past for a simple project (bearangular.zivtech.com) as a way to get some chops, and I found React’s learning curve to be gentler. It’s quite subjective, but I felt the level of abstraction was higher with Angular, making it much more difficult for me to understand what the JavaScript was doing behind the scenes.

My co-worker had already set up the app and made the Drupal data available to React through a custom module that passed the data through the Drupal settings API. I didn’t have much to do to get up to speed with the syntax and JSX custom elements; everything was pretty straightforward. I had to brush up on my ES6 skills, and after doing a lot of jQuery, I actually really enjoyed jumping back into vanilla JS.

We used webpack to compile the JS and went through the usual npm workflow. Module installations were super simple, which was a nice change from the Drupal module world. It was all fun and games until we had to optimize for performance. We were building something that had not been done before, and it required some serious brain twisting.

The homepage is a timeline of all images on the site, and this is where things got interesting. We needed to imagine a scenario with, say, a timeline of one million images. Since we are dealing with overflowing content, it seemed clear we could only load what’s in the viewport, as with an infinite scroll. But it’s not that simple.

We wanted to provide a smooth experience while scrolling, so a regular infinite scroll was not an ideal solution. Instead, we decided to work with subsets: 15 images are loaded initially, the next subset is loaded once the user scrolls halfway through the current one in either direction, and the subset furthest in the opposite direction is unmounted. We also had to have the date scroller respond to the timeline when dragging it, when clicking anywhere on its axis, and when scrolling through the timeline.
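The subset-windowing idea above can be sketched as a pure function. The subset size of 15 comes from the project; the function itself is an illustration of the windowing logic, not the production code:

```javascript
// Only a window of images is mounted at a time: the subset containing the
// current image plus the adjacent subset in the scroll direction.
const SUBSET_SIZE = 15;

// Returns the [start, end) range of image indexes to keep mounted, given
// the index currently in the viewport and the scroll direction.
function mountedRange(currentIndex, totalImages, scrollingForward) {
  const subset = Math.floor(currentIndex / SUBSET_SIZE);
  const neighbor = scrollingForward ? subset + 1 : subset - 1;
  const first = Math.max(0, Math.min(subset, neighbor) * SUBSET_SIZE);
  const last = Math.min(totalImages, (Math.max(subset, neighbor) + 1) * SUBSET_SIZE);
  return [first, last];
}
```

Because the window never exceeds two subsets, the DOM stays small no matter how many images the timeline holds.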

In the end, the challenges in this project had less to do with React itself and more to do with problem solving and implementation. React’s abilities excelled for the purposes of this project, allowing a complex solution to be implemented simply within the existing build. We believe decoupling will become an essential part of our development process for projects with needs similar to NowAnThen’s.

Oct 18 2017

Over the years, Zivtech has worked on many different types of existing Drupal websites and web applications. These projects have ranged from sites that were built with Drupal’s best practices to those built by developers with little to no Drupal experience. At Zivtech, we typically call a website that follows little to none of Drupal’s best practices a “lemon.” Our CTO, Jody Hamilton, did a great two part blog series called Lemon: Drupal Diseases and Cures if you would like to know more about what a Drupal lemon is.

If your site fits into the category of a lemon, it likely requires too much work to fix and should probably be rebuilt. In many cases though, our developers find that we can “rescue” the site to get it back into a secure and maintainable state after a site audit. We perform a site audit to identify the issues that can be fixed, and then provide an informative report about what problems have been found and how we can resolve them.

Our extensive experience with site audits has helped us identify common mistakes that Drupal sites are often guilty of. In this post we’ll outline the common mistakes and configuration issues that we find on most lemons. Some of these issues are even common on sites that have mostly followed Drupal’s best practices.

Drupal Core and Contrib Security Advisories

Falling behind on Security Advisories for Drupal core and Drupal contrib projects is the most common issue we encounter during our site audit process. In many cases, the sites we audit have not been maintained properly for an extended period of time, leaving Drupal core and contrib projects behind on Security Advisories of varying severity.

The worst case scenario is a Drupal site that has not been updated to at least Drupal Core 7.32. These sites are most likely already compromised by the Drupalgeddon Security Advisory. A much more in-depth security audit will typically be done in this case to ensure that the site is secured and cleaned up from the Drupalgeddon exploits.

Overall, we look to lock down any security holes from Drupal Security Advisories as a top priority for client sites, especially sites with user data. These are the first tasks that we perform after the audit when we begin to fix the identified issues.

JS/CSS Aggregation

Improperly configured JS/CSS aggregation settings are another common mistake we find during our site audit process. When configured properly, these settings let Drupal combine, and even compress, the JavaScript and CSS files that modules and themes add to each page.

When aggregation is not configured, the visitor’s browser must perform many more requests to render the page content, slowing down page load times. Configured properly, this setting speeds up a site’s page load times and overall performance.

Drupal Core’s PHP Module Enabled

The PHP Filter module has thankfully been removed from Drupal 8 core, but we see it enabled on Drupal 7 sites more often than we would like. Not only is it a huge security hole, since it allows malicious users to run PHP code directly on your Drupal site, but it is also difficult to disable safely.

Without a thorough review of the content tables in the database, disabling the module could expose PHP code in plain text on the site, which could become an even worse security hole if not identified beforehand.

Poorly Written Custom Code

Many of our site audits include reviewing and fixing poorly written or poorly optimized custom module or theme code written by previous developers or development shops.

PHP developers who are just getting started with Drupal often make the mistake of using plain PHP instead of Drupal’s API. For instance, a common mistake we find is printing user-supplied text without sanitizing it through check_plain(), or using the raw ! placeholder in the t() function instead of the @ placeholder, which runs values through check_plain() automatically. Without that sanitization, the output is vulnerable to an XSS (cross-site scripting) attack.

This is obviously a major security concern, so we notify the client immediately and start making the changes needed to lock down their custom code. Identifying and resolving these types of issues is of the highest priority during our site audit process.
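To illustrate the principle, here is a sketch, in JavaScript rather than PHP, of the escaping that Drupal's check_plain() performs (Drupal's real implementation also escapes single quotes and validates UTF-8):

```javascript
// Convert HTML special characters to entities so user input is rendered
// as text rather than interpreted as markup or script.
function checkPlain(text) {
  return text
    .replace(/&/g, '&amp;')   // must run first, or entities get double-escaped
    .replace(/"/g, '&quot;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}
```

Escaping on output like this is what turns an injected `<script>` tag into harmless visible text.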

Full HTML Input Filters

Drupal’s input filters are overlooked by many who are inexperienced with Drupal, but they can be a huge security hole if not configured properly. On most problem sites, we find either that the Full HTML input format is enabled for all users, or that the Filtered HTML input format has had its HTML filtering disabled.

Both of these configurations can potentially allow malicious users to embed code on your site that you probably don’t want your users accessing. To resolve this issue, we ensure that all default and custom Input Filters on the client site are configured securely. We also look through the database for potentially malicious code that has already been injected into nodes or comments.

Unused Modules or Themes Installed

During different stages of a site’s development, there may be changes made to use one module over another for similar functionality. This is a common but low risk issue that we find.

Leaving unused modules and themes on the site can lead to security issues if no one keeps up with their Security Advisories in a timely manner. They also lead to much larger databases. Uninstalling modules and themes that are no longer in use is one of the final steps in our site audit as we start to look at ways to optimize the file system and performance of the site.

Unused or Too Many Content Types

Another common but low risk mistake is the overuse of content types. We often find content types that exist but aren’t even being used.

Keep in mind that each field added to a content type adds about three database tables. Minimizing content types to only those that are necessary helps keep your database size down. It also helps content editor usability by removing confusing content types.

Once we identify all of the content types that are needed, we verify which ones can be removed and start cleaning them up. You might be surprised how much space in your database is wasted space.

Unused or Too Many User Roles

User roles are one of the most overused systems we find during site audits. Overusing roles not only makes your site’s permission structure harder to manage, it can also slow down your site considerably, especially when managing permissions across many roles.

Misconfigured role weights can also lead to major security holes. By default, Drupal provides some roles to get a site started, and in some cases additional roles must be created to provide the correct permission level for a given user. But sometimes roles are used where something like Organic Groups would be a better choice, and in other cases the permissions page becomes impossible to manage without editing each role individually, which is very tedious for site administrators.

To remedy this sort of issue, we typically discuss with the client how their users should be able to interact with the site through user stories. Then we determine how the existing user roles and permissions can be improved or simplified for their use cases.

Outdated Staff and Admin Users

Many of the sites that we audit have been online for several years. This usually means there are at least a handful of outdated staff and admin users that should be blocked, removed, or at the very least downgraded.

During an audit, we look to identify any weak links in the user roles, permissions, and user credentials to ensure that the site is as secure as possible on the user side. This may include adding a password policy, removing or blocking old users, and changing old admin or staff roles to less permissive ones.

In the end, we want to ensure that the users who can access the site through logging in are only those that the client wants to connect to their site. This makes for better security and user experience.

Server Configuration and Permissions

In addition to the overview that we provide, we also look at the server configuration, file permissions, and other aspects of the site’s hosting infrastructure.

There are a few common server and file permission issues we run into during audits. We typically find either that the server is configured to allow improper access to the site’s files, or that the web server itself is giving away information, like its version number, to anyone who asks. A version number could be the only information an attacker needs to perform a zero-day or other unpatched exploit.

The sites that have these issues are also the ones that we find with many Drupal Core and Contrib Security Advisories, which increases the chance of serious security issues. Server configuration and permission issues are another high priority issue that we work with our clients to resolve as soon as we find them.

The First Step to a Better Site

A site audit is often the first step in improving your website. Our audit process has made our team familiar with common issues like these and how to fix them. If your site is outdated or was built by inexperienced developers, engage a team of experts for an audit to ensure that your site isn’t vulnerable to attack and to improve overall performance and user experience.

Oct 10 2017

As a Drupal expert, many of the projects I’ve done over the years have been marketing websites. Drupal is widely understood as a content management system that’s used to power sites like ours, but this is actually only the tip of the iceberg of what Drupal can do for an organization. Our team has used Drupal to build a variety of complex custom web applications that help companies work more efficiently.

Do you need an intranet?

We’ve used Drupal to build intranets that securely keep internal content and documents for staff eyes only. Drupal has an abundance of community features that make it easy to have wikis, commenting, user profiles, and messaging. Many organizations we’ve worked with integrate their intranet with their LDAP or other Single Sign On system. 

Radial's intranet allows team members to quickly locate information about co-workers

We’ve also used Drupal for our own intranet for the past eight years. Our intranet helps keep our internal knowledge base easy to access and organizes information like our servers, sites, clients, and projects.

Do you run on spreadsheets and email?

Some of the projects I’ve really enjoyed developing have used Drupal as a tool to increase the efficiency of critical business processes. Organizations tend to rely heavily on Excel or Google spreadsheets and email to manage information and communications. When your needs outgrow those tools, it’s time for a web application.

Conant's SmartDeps web application improved their workflow and allowed them to stop relying on email

Have you outgrown Excel?  

Data organization needs often outgrow the spreadsheet sweet spot. Typically, you’ll see some of the following problems:

  • Versions of a spreadsheet are getting emailed around.
  • Mistakes are being made in a spreadsheet, causing serious problems.
  • Data has been deleted and lost.
  • To minimize mistakes, one person has been made the editor of the spreadsheet, bottlenecking the process.
  • Manual work is being done where it doesn’t need to be. For example, does someone check the spreadsheet every week and then manually send out emails based on information there?

A web application like Drupal stores data safely in a database and provides an interface for people to access and update the data in a much more controlled way than a spreadsheet can. You can decide who should be able to see the data (which is always up to date) and who should be allowed to edit it or delete it. You can also control data validation, greatly reducing mistakes in data entry. You can track changes to the data. You can make easy to read reports. You can create automated workflows, like sending automated emails for example.

The best part of all this? This type of development work typically costs much less than what you spent on your marketing website, mainly because this kind of tool doesn’t require custom design work and implementation.

Have you outgrown Email?

I have to admit, email is not my friend. I’m a project-based worker, and I need my written communication organized by project and by task. When I’m cc’ed on an email with ten people on it and the conversation veers from one topic to another while the subject line stays the same, I really can’t follow what’s going on. It seems as though the only way to keep up with these emails is to do nothing but tend to one’s email. And what happens when a new person is hired and all the history of your organization can be found only in old emails that they don’t have?

No organization can completely remove the reliance on email these days. But if you are running your business on email, a web application may be able to help. 

Often a Software as a Service solution (SaaS) can help organize communication. For communications surrounding sales leads, Salesforce can help. For discussions, Slack. For project-based organization, Jira or Basecamp. But if you have a very specific process around some of your communications, a custom Drupal-based application can be a great fit.

Here are a few examples our team has worked on: 

  • A legal printing company was relying on email to get work requests from customers, then emailing back and forth with estimates and questions. We built them a custom web application in which customers enter the request and the system organizes the process, greatly speeding up the work and automating many aspects.
  • A foundation was receiving grant applications by email and organizing the applicants and review process in Excel. We built a system to manage the entire process online.
The Aaron Copland Fund for Music can now manage their grant cycles entirely from the web app

Need help?

If you have a pain point at work that revolves around organization or access of information, an efficient solution that saves you time and money might be easier to come by than you think. Search for a SaaS product geared toward your needs, and if you find your needs are too unique for what’s out there, let’s talk about a custom web application.

Jul 11 2017

You don't need to go fully 'headless' to use React for parts of your Drupal site. In this tutorial, we'll use the Drupal 8 JsonAPI module to communicate between a React component and the Drupal framework.  

As a simple example, we'll build a 'Favorite' feature, which allows users to favorite (or bookmark) nodes. This type of feature is typically handled on Drupal sites with Flag module, but let's say (hypothetically...) that you have come down with Drupal-module-itis. You're sick of messing around with modules and trying to customize them to do exactly what you want while also dealing with their bugs and updates. You're going custom today.

Follow along with my Github repo.

Configure a custom Drupal module

First things first: data storage. How will we store which nodes a user has favorited? A nice and easy method is to add an entity reference field to the user entity. We can simply hide this field on the 'Manage Form Display' and/or 'Manage Display' settings, since we'll be creating a custom user interface for favoriting.

When you install the Favorite module in the repo, it will add the field 'field_favorites' for you to the user entity: see the config/install directory.

Next up: define the part of the page that React will replace. Typically, you will replace an HTML element defined by an id with a React component. If this is your first time hearing this, you should do some basic React tutorials. I started with one on Codecademy.

Look at favorite.module. I use template_preprocess_node to add a div with id 'favorite' to every node on full view mode. This will be the div that I will replace with a React component.

Note that I also attach a 'library' called 'favorite/favorite' to the render array. This means my 'favorite' module needs to define a library called 'favorite' in a 'favorite.libraries.yml' file. This library defines which JavaScript file to include on the page.
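A minimal 'favorite.libraries.yml' along these lines might look like the following sketch. The library and file names follow the module described here, but the version key is an illustrative assumption:

```yaml
favorite:
  version: 1.x
  js:
    js/favorite.bundle.js: {}
```

With a definition like this in place, attaching 'favorite/favorite' to a render array tells Drupal to include js/favorite.bundle.js on the page.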

Integrating React in a Drupal module

The JavaScript file we're attaching with a Drupal library is 'favorite.bundle.js.' This is actually a generated file that includes React and other dependencies as well as our custom JavaScript all in one. The tool we're using to do this is Webpack. Take a look inside the js directory of favorite module.

The JavaScript dependencies are being managed via npm as defined in the package.json file. To develop locally, run `npm install` within the js directory, which will download packages into a node_modules subdirectory. Note that there is a .gitignore file to prevent node_modules from being added to the repo. Everything the live site will need will be included in the favorite.bundle.js file, and the other files are only needed to regenerate that file with changes via webpack.

Webpack configuration is in the webpack.config.js file. If you add more js files (typically you add a new file for every React component), add them to the 'entry' array. In order to run webpack to generate the bundled js file, you should install it globally on your machine `sudo npm install webpack -g`. Now you should be able to run `webpack` in the js directory and have it recreate the bundled js file. It also catches syntax errors and gives helpful feedback, so pay attention if the command fails.

There is also a config file .babelrc in the js directory. Babel is a JS compiler which will convert code written with ES2015 syntax to be ES5 compatible. It also handles JSX, which is a syntax used to define React components. The webpack process will use Babel.

Creating the React component for Drupal

Finally, we have the custom React code in favorite.js. If you edit this, remember to run `webpack` to update the bundled js file. I typically look at React files starting at the bottom.

ReactDOM.render(<Favorite />, document.getElementById('favorite'));

The last line is replacing the HTML element with id of 'favorite' with a 'Favorite' React component. The rest of the file defines that Favorite component.

The render function outputs a link which will say either 'Favorite' or 'Unfavorite.' 

  render() {
    // Anonymous users can't favorite, so render nothing.
    if (this.state.user_uid == "0") {
      return null;
    }
    var linkClass = 'unfavorited';
    var text = 'Favorite';
    if (this.state.favorited) {
      linkClass = 'favorited';
      text = 'Unfavorite';
    }
    return (
      <a href="#" className={linkClass} onClick={this.toggleFavorite}>{text}</a>
    );
  }

In order for this logic to work, we need to have the current user's uid stored in the state (this.state.user_uid) as well as whether or not the user has already favorited this node (this.state.favorited). We also need to have a function 'toggleFavorite' in the component to handle when a user clicks the favorite/unfavorite link to update the database and change state.favorited.

Let's look at toggleFavorite() first and then how that initial state is set.

  toggleFavorite() {
    var favorited = !this.state.favorited;
    this.saveFavorite(favorited);
  }

  saveFavorite(favorited) {
    var endpoint = '/jsonapi/user/user/' + this.state.user_uuid + '/relationships/field_favorites';
    var method = 'POST';
    if (!favorited) {
      method = 'DELETE';
    }
    fetch(endpoint, {
      method: method,
      credentials: 'include',
      headers: {
        'Accept': 'application/vnd.api+json',
        'Content-Type': 'application/vnd.api+json'
      },
      body: JSON.stringify({
        "data": [
          {"type": 'node--' + this.state.node_type, "id": this.state.node_uuid}
        ]
      })
    }).then((response) => {
      if (response.ok) {
        response.json().then((data) => {
          this.setState({
            favorited: favorited
          });
        });
      }
      else {
        console.log('error favoriting node');
      }
    });
  }

The behavior depends on the current state.favorited. If the user had already favorited this node, we'll be unfavoriting it on click. Otherwise, we'll be favoriting it. To favorite it, we'll POST data to the JsonAPI endpoint, and to unfavorite it, we'll DELETE the data. The JsonAPI Drupal module handles relationships, including entity references, according to the JSON API spec, so there are special endpoints for adding or removing data from a relationship.

We're using 'fetch' to make the request. By specifying `credentials: 'include'` we'll pass along the user's cookie. Provided that the user has access to add and delete field_favorites on their own user account, the request should succeed.

Note that there is some more state data that we need here. We need to have the node type in this.state.node_type and the node's uuid in this.state.node_uuid. After the request is made, we update state.favorited via this.setState(). Changing the state causes a re-render so you should see the text of the favorite link update after click.

Other considerations for using React with Drupal

Finally, how did we get all the values we rely on related to the current user and current node into the state? We could have done this a few different ways. One approach would be to use the drupalSettings system to pass data from Drupal to the DOM on initial page load. In this example though, I created a custom JSON endpoint using Drupal routing/controller to provide the data that I want and call it from the React component.

Take a look at the constructor() and getData() methods in the Favorite react component. I use 'fetch' again, this time doing a GET request on my custom path, and then setting the state with the data I get back. The custom path is defined in favorite.routing.yml to use the controller FavoriteController.php, which returns Json with the desired data.
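For illustration, here's a sketch of how the controller's response might be folded into component state. The payload keys and the 'buildInitialState' helper are hypothetical, not the repo's actual code; the real shape is whatever FavoriteController.php returns:

```javascript
// Hypothetical sketch: fold the controller's JSON response into the initial
// state the Favorite component needs. The payload keys here are assumptions
// for illustration; see FavoriteController.php for the real shape.
function buildInitialState(payload) {
  return {
    user_uid: payload.user_uid,    // "0" means an anonymous user
    user_uuid: payload.user_uuid,  // used to build the JsonAPI relationship URL
    node_uuid: payload.node_uuid,
    node_type: payload.node_type,
    // The node is favorited if its uuid appears in the user's favorites list.
    favorited: payload.favorites.indexOf(payload.node_uuid) !== -1
  };
}

// Example: an authenticated user who has already favorited this node.
const state = buildInitialState({
  user_uid: '42',
  user_uuid: 'aaaa-bbbb',
  node_uuid: 'cccc-dddd',
  node_type: 'article',
  favorites: ['cccc-dddd']
});
// state.favorited is true for this sample payload.
```

In the component itself, the result of a mapping like this would be passed to this.setState() once the fetch resolves.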

Jul 06 2017

You’re about to begin a huge overhaul of your higher education website and one of the first steps is choosing a content management system. It’s likely that Drupal and WordPress have come up in your research, and you may be trying to decide between the two.

Drupal and WordPress are often compared to one another because they’re both open source content management systems. Both are capable of creating clean, responsive websites that are easy to manage for content editors. The functionality of both can be extended using third party code. And the code for both is openly available for anyone to use, change, and distribute, meaning there are no licensing fees like those required by closed source options. 

There are a significant number of higher education websites on Drupal; Harvard, Brown, and Oxford University all use the CMS, to name a few. According to Drupal.org, 71% of the top 100 universities use Drupal. And there’s some sound reasoning behind that.

Both WordPress and Drupal have strengths and are well suited for a diverse range of projects. WordPress was primarily built for standalone websites or blogs with minimal variation in content types. Drupal was built for more complex, feature rich websites that require significant interconnectivity between pages or site sections, like those required by higher education. 

Here are some factors to consider when choosing between the two content management systems. 

Complex Architecture

If you’re setting out to redesign a higher ed website, you’re likely looking at a fairly complex endeavor. Your website probably requires more complicated architecture than most; you’ll need various sections that are targeted toward different groups of users, such as prospective students, current students, alumni, faculty, and staff. 

Drupal’s ecosystem was built around cases like these. It can handle thousands of users and different content types. Upgrades in Drupal 8 have also resulted in better caching features that make for improved page load times. 

WordPress works well for general, standalone marketing sites, but it will struggle with aspects like multiple install profiles, content sharing networks, consistency, maintainability, and connectivity with other parts of the site.

Users and Permissions

Your website also most likely has extensive user and permission requirements. Different groups will need to perform different tasks and interact with the site in a variety of ways. You may also have different departmental sites that will need to be managed by different teams while staying consistent with branding guidelines.  

Drupal allows for multi-site functionality that can also be centrally managed. Different users and departments can be given diverse permissions and roles so that you can limit their capabilities to just what they need and nothing more.


Security

No CMS is completely immune to security vulnerabilities. It’s possible that WordPress has had more security issues in the past simply because it’s a more widely used CMS. WordPress relies heavily on plugins when used for more complex websites, and these plugins are often susceptible to security issues.

Drupal is well known as a very secure content management system and is trusted by WhiteHouse.gov and other federal government sites. Drupal has a dedicated security team that receives security issues from the general public and coordinates responses. Issues are resolved as quickly as possible and users are alerted to vulnerabilities through regular announcements. The security team also provides documentation on how to write secure code and how to secure your site. With these practices, you can rest assured that all of your student and faculty data would be protected. 

Ease of Use

For simpler sites, WordPress beats Drupal when it comes to ease of use. Because it was developed for less complex, standalone websites, it’s very easy to get it up and running, even for those who aren’t very tech savvy. Drupal’s complexity means it has a steep learning curve and takes longer to build. 

Drupal is a feature-rich CMS that can build more advanced sites, but it also requires more technical experience. You need a team of experts with ample experience to get your project accomplished, and this is likely to be more expensive than a team of WordPress developers. 

But the extra price that you pay for a team of experts will pay off in the end when you have a website that is capable of doing everything you need it to. Drupal's high barrier to entry with respect to module development also means the quality of modules available is higher, and the choices are fewer but more obvious. 

Which Should You Choose?

When it comes down to it, Drupal is likely the better choice between the two. It’s clear that while WordPress has its strengths, Drupal is a better choice for more advanced sites, like those required by higher education.

Drupal provides a strong base to begin rapidly building a complex system. It’s often the CMS of choice for large websites that require significant interconnectivity between different sections. It also allows for a wide range of user roles and permissions, and security is a priority for the entire community. All of these aspects make it a great CMS choice for higher education websites.

Jun 13 2017

Computers are finicky. As stable and reliable as we would like to believe they have become, the average server can cease to function for hundreds of different reasons. Some of the common problems that cause websites or services to crash can’t really be avoided. If you suddenly find your site suffering from a DDoS attack or a hardware failure, all you can do is react to the situation.

But there are many simple, totally preventable problems that can be addressed proactively to ensure optimal uptime. To keep an eye on these, it helps to set up monitoring for your entire stack, both the server and the individual applications. At Zivtech, we use a tool called Sensu to monitor potential issues on everything we host and run.

Sensu is a Ruby project that operates by running small scripts to determine the health of a particular application or server metric. The core project contains a number of such scripts, called “checks.” It’s also easy to write custom checks, and they can be written in any language, allowing developers to easily monitor new services or applications. Sensu can also be run via a client-server model and alert members of the team when things aren’t behaving properly.
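Sensu checks follow the Nagios-style exit-code convention (0 = OK, 1 = warning, 2 = critical). As a minimal sketch, assuming placeholder thresholds and a metric value gathered elsewhere, the classification logic at the heart of a check might look like:

```javascript
// Sketch of a Sensu-style check: classify a metric against thresholds.
// Exit codes follow the Nagios/Sensu convention: 0 = OK, 1 = WARNING, 2 = CRITICAL.
function classify(usedPercent, warnAt, critAt) {
  if (usedPercent >= critAt) {
    return { code: 2, status: 'CRITICAL' };
  }
  if (usedPercent >= warnAt) {
    return { code: 1, status: 'WARNING' };
  }
  return { code: 0, status: 'OK' };
}

// Example: 85% RAM used, with warn/crit thresholds of 80%/95%.
const result = classify(85, 80, 95);
console.log('CheckRAM ' + result.status + ': 85% used');
// A real check would finish with: process.exit(result.code);
```

Sensu's server reads the exit code and message to decide whether to fire an alert, which is why the same tiny pattern works for every metric below.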

Server checks

As a general place to start, you should set up basic health checks for the server itself. The following list gives you a good set of metrics to keep an eye on and why it is in your best interest to do so.


RAM usage

What to check

Monitor the RAM usage of the server versus the total amount of RAM on the server.

Potential problem monitored

Running out of RAM indicates that the server is under severe load, and the performance degradation will almost certainly be noticeable to end users.

Actions to take

Running low on RAM may not be a problem if it happens once or twice for a short time. Sometimes there are tasks that require more resources and this may not cause problems, but if the RAM is perpetually running at maximum capacity, then your server is probably going to be moving data to swap space (see swap usage below) which is much slower than RAM. 

Running near the limits of RAM constantly is also a sign that crashes are imminent, since a spike in traffic or usage will require allocating resources that the server simply doesn’t have. Additionally, spikes in RAM usage may indicate that a rogue process or poorly optimized code is running; catching these helps developers address problems before your users become aware of them.

Linux swap usage

What to check

Check swap usage as a percentage of the total swap space available on a given server.

Potential problem monitored

When the amount of available RAM is running short or the RAM is totally maxed out, Linux moves data from RAM to the hard drive (usually in a dedicated partition). This hard drive space is known as swap space.

Generally, you don’t want to see too much swap space being used, because it means that the available RAM isn’t enough to handle all the tasks the server needs to perform. If the swap space is filled up completely, then RAM is totally allocated and there isn’t even a place on disk to dump extra data that the system needs. When this happens, the system is probably close to a crash and some services are probably unresponsive. It can also be very hard to even connect to a server that is out of swap space, as all memory is completely in use and new tasks must wait to run.

Actions to take

If swap is continually running at near 100% allocation, it probably means the system needs more RAM, and you’d want to increase the swap storage space as part of this maintenance. Keeping an eye on this will help ensure you aren’t under-allocating resources for certain machines or tasks.

Disk space

What to check

Track current disk space used versus the total disk space on the server’s hard drives, as well as the total inodes available on the drives.

Potential problem monitored

Running out of disk space is a quick way to kill an application. Unless you have painstakingly designed your partitions to prevent such problems (and even then you may not be totally safe), when a disk fills up, some things will cease working.

Many applications write files to disk and use the drive to store temporary data. Backup tasks rely on disk space, as do logs. Many tasks will cease functioning properly when a drive or partition is full. On a website running Drupal, a full drive will prevent file uploads, can cause aggregated CSS and JavaScript to stop working properly, and can keep data from being persisted to the database.

Actions to take

If a server is running low on space, it is relatively easy to add more. Cloud hosting providers usually allow you to attach large storage volumes to your running instance, and if you use traditional hardware, drives are easy to upgrade.

You might also discover that you’ve been storing data you don’t need, or forgot to rotate some logs which are now filling up the drive. More often than not, a server running out of space is not due to the application actually requiring that space, but to an error or rogue backup job that can be easily rectified.


CPU usage

What to check

Track the CPU usage across all cores on the server.

Potential problem monitored

If the CPU usage goes to 100% for all cores, then your server is thinking too hard about something. Usually when this happens for an extended period of time, your end users will notice poor performance and response times. Sites hosted on the server might become unresponsive or extremely slow.

Action to take

In some cases, an over-allocated CPU may be caused by a runaway process, but if your application does a lot of heavy data manipulation or cryptography, it might be an indication that you need more processing power.

When sites are being scraped by search providers or attacked by bots in some coordinated way, you might also see associated CPU spikes, so this metric can tip you off to a host of issues, including the early stages of a DDoS attack on the server. You can respond by quickly restarting processes that are misbehaving, blocking potentially harmful IPs, or identifying other performance bottlenecks.

Reboot required

What to check

Linux servers often provide an indication that they should be rebooted, usually related to security upgrades.

Potential problem monitored

Often after updating software, a server requires a reboot to ensure critical services are reloaded. Until this is done, the security updates are often not in full effect.

Action to take

Knowing that a server requires a reboot allows your team to schedule downtime and reduce problems for your end users.

Drupal specific checks

Zivtech runs many Drupal websites. Over time we have identified some metrics to help us ensure that we are always in the best state for security, performance, search indexes, and content caching. Like most Drupal developers, we rely on drush to help us keep our sites running. We have taken this further and integrated drush commands with our Sensu checks to provide Drupal specific monitoring.

Drupal cron

What to check

Drupal’s cron function is essential to the health of a site. It performs cleanup, starts long-running tasks, processes data, and handles many other jobs. Countless contributed modules rely on cron as well.

Potential problem monitored

When a single cron job fails, it may not be a huge problem. But the longer a site exists without a successful cron run, the more problems you are likely to encounter. Services start failing without cron. Garbage cleanup, email notifications, content updates, and indexing of search content all need cron runs to complete reliably.

Action to take

When a cron job fails, you’ll want to find out if it is caused by bad data, a poorly developed module, permissions issues, or some other issue. Having a notification about these problems will ensure you can take proactive measures to keep your site running smoothly.

Drupal security

What to check

Drupal has an excellent security team and security processes in place. Drush can be used to get a list of modules or themes that require updates for security reasons. Generally, you want to deploy updates as soon as you find out about them.
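As a hedged sketch of how such a check might process drush's output (in drush 8, something like `drush pm-updatestatus` can report modules needing updates; exact commands and flags vary by drush version, and the 'securityUpdatesNeeded' helper and payload shape below are hypothetical):

```javascript
// Sketch: given update-status data (shape assumed for illustration; the real
// fields depend on your drush version's JSON output), pick out the modules
// that need a security update so the check can alert on them.
function securityUpdatesNeeded(modules) {
  return Object.keys(modules).filter(function (name) {
    return modules[name].security_update === true;
  });
}

const sample = {
  views: { installed: '8.x-1.0', recommended: '8.x-1.2', security_update: true },
  token: { installed: '8.x-1.0', recommended: '8.x-1.0', security_update: false }
};

const needed = securityUpdatesNeeded(sample);
// needed is ['views'] for the sample data above.
```

A check built on this would exit non-zero whenever the list is non-empty, feeding the security alerts into the same Sensu pipeline as the server metrics.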

Potential problem monitored

By the time you’ve been hacked, it’s too late for preventative maintenance. You need to take the security of your site’s core and contributed modules seriously. Drupal can alert site administrators via email about security releases, but moving these checks into an overarching alerting system, with company-wide guidelines and a playbook for how to handle them, will shorten the window during which a given site is vulnerable.

Action to take

Test your updates and deploy them as quickly as you can.

Be Proactive

It’s difficult to overstate the value of knowing about problems before they occur. As your organization grows, it becomes more and more disruptive to be dealing with emergency issues on servers. The stress of getting a site back online is exponentially more than the stress of planned downtime.

With a little bit of effort, you can detect issues before they become problems and allocate time to address these without risking the deadlines for your other projects. You may not be able to avoid every crash, but monitoring will enable you to tackle certain issues before they disrupt your day and will help you keep your clients happy.

Jun 08 2017

It’s no secret that Drupal is incredibly powerful when it comes to its capabilities as a content management system. Many businesses choose Drupal not only because of the lower cost that comes with it being open source, but also because it can be customized to build out the exact features that they need. But how does it stack up for the content editors?

At Zivtech, we recently launched our new Drupal 8 site. As a member of the marketing team, I spend a significant amount of time writing and editing our site’s content, whether that’s static services page content, upcoming events, or new blog posts. Needless to say, I want the experience to be intuitive, very user-friendly, and headache free. And I definitely don’t want to have to bother our developers with questions about where to find things or how to make edits.

Drupal 8 has seen improvements in a lot of areas when compared to Drupal 7, and content editing is no exception. Here are some of my favorite content editing improvements in Drupal 8.

A WYSIWYG is Included in Core

Drupal consists of its core codebase plus thousands of modules that can be added for custom functionality. A WYSIWYG (meaning “what you see is what you get”) editor provides standard text editing options when adding content across an entire site; in Drupal 7, it required installing a separate module.


WYSIWYGs are useful for a number of reasons. A key objective when using a CMS is that the content will always come out looking uniform. Fonts, sizes, headings, and colors should always be consistent. A uniform site looks cohesive and makes sense. If you have the option to choose a size, font, and color for each specific post, each individual post may look fine, but the site as a whole will look chaotic. A WYSIWYG can provide users with all the options they need in about fifteen buttons.

The WYSIWYG CKEditor was finally added to Drupal core with the release of Drupal 8, which is a huge improvement. You no longer need to install a separate module for this functionality.

Resizing Images is as Easy as Pie

Adding images to blog posts used to be a bit of a struggle for me because I couldn’t resize them once I uploaded them into the body of the post. Our new D8 site allows me to easily drag the corner of an image to resize. This is a major time saver and also no longer makes me want to pull my hair out.

I resized this image of an image. It was super easy.

Quicker Edits with Quick Edits

Drupal 8 also offers the option for quick editing. This means that content editors no longer have to navigate to the node edit page to make a quick change. Edits can be made directly from the view of the content.

User Roles and Permissions (Still)

While it’s not new with Drupal 8, user roles and permissions are still worth mentioning. Drupal offers the ability to pick and choose which site users get which types of permissions. This means you can restrict access to less tech savvy people so that they don’t accidentally break something. Content editors can just worry about editing content. 

Frustrated by your own content editing experience? Get in touch with us about a switch to Drupal 8.

Jun 01 2017

You have a great idea for a new project. Maybe you want to redesign your site to improve your conversions and SEO. Perhaps your team wants to build a shiny new intranet that solves all of your employees’ needs. Or you want to develop a new app to solve a problem in a new, innovative way. You know what you want to achieve, but do you know how to get there?

Without a well-thought-out and detailed strategy, achieving your goals might feel impossible. Here are 7 signs that your project needs a proper discovery phase before you invest in any design and development work.

You’re Not Sure Which Technology is Best for Your Project

No matter what kind of site or application you’re setting out to build, there are many technologies to choose from, and picking the right technology out of the gate is imperative for long-term success.

During a discovery, you’ll break down all of your project goals and features and the right tech partner will work with you to determine the best technology to get the job done.

You Haven’t Defined Your Technical Requirements

You may know what platform you’d like to build on - say Drupal, for example - but what other technical requirements are involved in building your project?

Concepts for projects are often very high level and business-oriented. For example, you know that you want an eCommerce website to sell your products, but you have no idea which software would best support your custom ordering and fulfillment integrations.

Determining technical requirements during a discovery helps to better define how your project will go from a great idea to a fully functioning site or application to run your business. Every page and feature will be determined along with the tools that are necessary to build them. Your technical requirements will also help guide your overall project roadmap and the estimate for your investment.

You Haven’t Set a Budget Yet

Speaking of estimates, have you determined a budget? It can be difficult to come up with a number before you know what will really be involved in fleshing out your idea.

Once your technical requirements have been defined, your technology partner will be able to provide an estimate for all of your must-have features. From there, you can set your budget for the first phase and decide what can be left out and completed in a second phase, and so on.

You Don’t Know Your User Personas

Well-defined user personas are critical for a website or web application’s success. You can’t build a successful digital product if you haven’t carefully thought about who exactly you’re building it for.

The right discovery will include thorough research and user testing to determine what exactly your users are looking for. It will also explore the paths they take and establish journey maps for the best possible user experience. Well developed user acceptance criteria helps further validate decisions for the product to ensure every decision is made with your users’ best interests in mind.

Your Stakeholders Can’t Come to a Consensus

When there are a lot of cooks in the kitchen, it can be difficult to get everyone to agree. Many digital projects span multiple divisions of organizations, resulting in a large number of stakeholders and opinions. Naturally, each department and stakeholder has different priorities and goals that will help them do their jobs better, which can make it difficult to nail down a project’s requirements and overarching approach.

It’s critical to get all of the stakeholders and department representatives together with a third party technology partner to build consensus and analyze all of the edge cases. Ultimately, after in-person meetings and strategy sessions, you and your tech team will be able to establish project goals and a plan that takes everyone’s needs and priorities into account without blowing up the budget.

Your Existing Site or Application Isn’t Up to Snuff

If your current website or web application isn’t working as well as you’d like it to, it’s important to figure out exactly why that is before you begin a new project to make improvements.

A site audit will provide an in-depth look at any existing problems and provide insight into how to fix them. Whether there are technical issues and bugs, a lack of conversions on your site, or poor adoption of your new application, the right discovery will dig into the issues and shortcomings to come up with solutions that will achieve your goals.

You Can’t See the Forest for the Trees

When you’re deeply involved with a project, it’s easy to get caught up in the small details and intricacies. This can lead to a lack of innovation and fresh ideas that may not align with your end users’ needs and experiences. An unbiased third party can help you take a step back and offer a fresh, expert perspective that will help your site or application thrive.

Leverage a technology team’s tools, experience, and expert staff to help you and your team innovate and achieve your goals. Looking for the right partner to get you there? Let us know. We'd be happy to help!

Apr 20 2017

Entity Views Attachment, or EVA, is a Drupal module that allows you to attach view displays to entities of your choosing. We used it recently on a project and loved it. You know it’s good because it has a no-nonsense name and an even better acronym. (Plus, the maintainers have taken full advantage of the acronym and placed a spaceman on the project page. Nice!)

Since the now-ubiquitous Paragraphs module provides the “paragraph” entity type, I figured these two will make good dancing partners.

Getting them to tango is simple enough. You create a paragraph bundle, target that bundle in the settings on an EVA view display, then arrange the view in the paragraph’s display settings. Voila – your view display shows up wherever you add this paragraph!

By attaching a view display to a paragraph entity and enabling that paragraph on a node’s paragraph reference field, you give your content editors the ability to place a view wherever they want within their page content. Better still, they can contextualize what they are doing since this all happens in the edit form where the rest of the node content lives. As far as I can tell, no other approach in the Drupal ecosystem (I’m looking at you Blocks and Panels) makes adding views to content this easy for your editors.

Case Study

The concept is pretty straightforward, but with a little cleverness it allows you to build some complex page elements. Let’s walk through an example. Consider the following design:

This mockup represents Section nodes and lists of Subpage nodes that reference them. In addition, the button links should point to the parent Section node. With a little elbow grease, we can build a system to output this with our friends EVA and Paragraphs.

Here’s how I’m breaking this down conceptually:

We have three things to build:

  1. A container paragraph bundle

  2. A child paragraph bundle with a Section entity reference field

  3. An EVA of subpages to attach to the child paragraph bundle

Building the Subpage EVA

As I mentioned before, Subpage nodes will reference Section nodes. With this in mind, we can build the EVA that lists subpages and expects a section node ID to contextually filter to subpages that reference that node.

Building the Section paragraph type

Next, we’ll create the Section paragraph type that will handle each grouping of a section node with its related subpages. The Section paragraph will have one field, an entity reference field limited to Section nodes, that gives us all the data we need from our section.

We’ll attach our EVA to this paragraph type and configure it to pass the referenced node’s ID as the contextual filter using tokens in the EVA settings. You will need to install the Token module to do this. Go to /admin/help/token to see all available tokens once installed. You need to grab the node ID through your entity reference field, so your token argument should look something like this:
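For illustration, assuming the paragraph's entity reference field has the machine name field_section (a hypothetical name; substitute your own field's machine name), the token argument takes roughly this shape:

```
# hypothetical field name: field_section
[paragraph:field_section:entity:nid]
```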


We pass that token to our contextual filter, and we can tell our view to use that argument to create a link to the section node for our “View All Subpages” link. To do this, we’ll add a global text area to the view footer and check the “Use replacement tokens from the first row” checkbox. Then we’ll write some HTML to create a link. It’ll look something like this:

<a href="/node/{{ raw_arguments.nid }}">View all Subpages</a>

Building the Section List paragraph type

Lastly, we’ll create the Section List paragraph type. This only really needs a paragraph reference field that only allows the user to add Section paragraphs, but I also added a title field that will act as a header for the whole list.

Tip: Install Fences module to control your field’s wrapper markup. I used this here to wrap the title in <h2> tags.

We’re finished!

Now that everything is built, we can allow users to select the Section List paragraph type in a paragraph reference field of our choosing. A user adds a Section List, then adds Sections via the entity reference. It looks like this in the node edit form:

Do you have any cool ways you use the EVA module in your builds? Let us know in the comments section below.

Mar 14 2017

With roughly 1.2 million websites using Drupal across the world, including marquee sites such as NBCUniversal and pharmaceutical giant Novartis, it’s clear that it’s a powerful content management system capable of supporting large organizations.

Drupal is a go-to choice for these institutions in part because of its reliance on open source software (OSS). Its source code is openly available for anyone to use and contribute to, which is actually one of its greatest strengths.

When making your organization’s software choices in the past, you may have glazed over any mentions of whether the code was open or closed source. But specifically choosing open source can have a number of benefits for your business, and beyond that, choosing Drupal can provide your business with the powerful platform it needs to succeed.

Why Open Source?

Lower Cost

The majority of open source software is freely distributed, meaning there’s a huge cost benefit when you choose open over closed software. Open source tools also don’t restrict the number of users due to licensing. If your business chooses to use open source, you’ll never have to pay for additional licenses as your company grows. Just add user accounts and go.


Security

Security is one of open source software’s greatest strengths. OSS is constantly under peer review by a community of experts, all of whom count on the same source code to keep their businesses running securely and efficiently. With a multitude of eyes on every project, open source tools are always checked and rechecked for security vulnerabilities. Problems often surface immediately thanks to the large number of users and contributors who maintain the code. You can sleep well at night knowing that your site is safe and secure.


Flexibility

There’s no barrier to entry with open source software. Anyone who wants to use it can get started for free. As a result, there’s an incredibly diverse population of individuals and businesses who use it. Open source is the foundation for all kinds of digital projects, making it more likely that someone has already created a tool for exactly what you want to accomplish.

This means more flexibility in the tools you use, and in the ability to add more tools. OSS grows and changes rapidly as people use it to accomplish all sorts of different goals.

No Vendor Lock-in

The accessible nature of OSS generally means that there are a large number of vendors that work with it. As a business, you have more options when looking for a partner for a digital project. If you need a new vendor for any reason, you’ll be able to find one who already knows the ins and outs of the software that you’re building with.

Why Drupal?


Community

The Drupal community has more than one million members, and more than 100,000 of these members actively contribute to its code. A passionate and active community aims to ensure that Drupal and its code base are up to the highest standards. A dedicated security team has steps in place to ensure that insecure code isn’t distributed to the public. You want passionate people as the brains behind your website, and Drupal has thousands of them.


Modules

Drupal consists of its core code, which includes basic features and functionality, and thousands of additional modules. Modules are blocks of code that extend the functionality of Drupal’s core. Developers can add functionality to a site by installing an existing module, or by creating a customized one to accomplish what they need.

Modules add a lot of power to the development process. You can customize your website to your specific needs, something that isn’t possible with other content management systems. Among others, there are modules that allow you to quickly navigate to specific administrative pages, create slideshows, and add web forms. Check out our list of some of our favorites.

Third Party Integration

Drupal gets along really well with third party applications. These integration capabilities allow for less complicated workflows and more flexibility. A single solution that integrates the tools you already use, like MailChimp and Salesforce, greatly improves productivity and reduces headaches.

Open Source Optimized

Drupal boasts all of the benefits of open source software and amplifies them. The contributing community is one of the strongest open source communities. Contributors follow incredibly stringent coding standards to ensure that the code works, and that it works well. Security vulnerabilities are stopped before they happen. Plus, with such a wide variety of organizations already running their sites on Drupal, it’s hard to argue that it’s not a great choice for your business too.

Feb 16 2017

There's no shortage of generic web performance optimization advice out there. You probably have a checklist of things to do to speed up your site. Maybe your hosting company sent you a list of performance best practices. Or maybe you use an automated recommendation service.

You've gone through all the checklist compliance work, but haven't seen any change in your site's speed. What's going on here? You added a CDN. You optimized your image sizes. You removed unused code. You even turned off database logging on your Drupal site and no one can read the logs anymore! But it should be worth it, right? You followed best practice recommendations, so why don't you see an improvement?

Here are three reasons why generic performance checklists don't usually help, and what really does.

1. Micro-Optimizations

Generic performance recommendations don't provide you with a sense of scale for how effective they'll be, so how do you know if they're worth doing?

People will say, "Well, if it's an improvement, it's an improvement, and we should do it. You're not against improvements, are you?" This logic only works if you have infinite time or money. You wouldn't spend $2,000 to make changes you knew would shave only 1ms off a 10s page load.

Long performance checklists are usually full of well-meaning suggestions that turn out to be micro-optimizations on your specific site. It makes for an impressive list. We fall for it because it plays into our desire for completion. We think, "ROI be damned! There's an item on this list and we have got to do it."

Just try to remember: ABP. Always Be Prioritizing.

You don't have to tackle every item on the list just for completion's sake. You need to measure optimizations to determine whether you're adding a micro-optimization or slaying a serious bottleneck.

2. Redundant Caching

In security, the advice is to add more layers of protection. In caching, not so much. Adding redundant caching will often have little to no effect on your page load time.

Caching lets your process take a shortcut the majority of the time. Imagine a kid who takes a shortcut on her walk to school. She cuts through her neighbor's backyard instead of going around the block. One in 10,000 times, there's a rabid squirrel in the yard, so she takes the long way. Her entrepreneurial friend offers to sell her a map to a new shortcut. It's a best practice! It cuts off time from the original full route that she almost never uses but it's longer than her usual shortcut. It will save her a little time on rabid squirrel days. Is it worth the price?

The benefit of a redundancy like this is marginal, but if there's a significant time or cost investment it’s probably not worth it. It's better to focus on getting the most bang for your buck. Keep in mind that the time involved to add caching includes making sure that new caches invalidate properly so that your site does not show stale content (and leave your editors calling support to report a problem when their new post does not appear promptly.)

3. Bottlenecks or Bust

Simply speaking, page load time consists of two main layers. First there is the server-side (back-end) which creates the HTML for a web page. Next, the client-side (front-end) renders it, adding the images, CSS, and JavaScript in your web browser.

The first step to effective performance optimization is to determine which layer is slow. It may be both. Developers tend to optimize the layer of their expertise and ignore the other one. It's common to focus efforts on the wrong layer.

Now on the back-end, a lot of the process occurs in series. One thing happens, and then another. First the web server routes a request. Then a PHP function runs. And another. It calls the database with a query. One task completes and then the next one begins. If you decrease the time of any of the tasks, the total time will decrease. If you do enough micro-optimizations, they can actually add up to something perceptible.

But the front-end, generally speaking, is different. The browser tries to do many things all at the same time (in parallel). This changes the game completely.

Imagine you and your brother start a company called Speedy Car Cleaning. You want your customers to get the fastest car cleaning possible, so you decide that while you wash a car, your brother will vacuum at the same time. One step doesn't rely on the other to be completed first, so you'll work in parallel. It takes you five minutes to wash a car, and it takes your brother two minutes to vacuum it. Total time to clean a car? Five minutes. You want to speed things up even more, so your brother buys a more powerful vacuum and now it only takes him one minute. What's the new total time to clean a car?

If you said five minutes, you are correct. When processes run in parallel, the slowest process is the only one that impacts total time.

This is a lesson of factory optimization as well. A factory has many machines and stations running in parallel, so if you speed up the process at any point that is not the bottleneck, you'll have no impact on the throughput. Not a small impact - no impact.
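The same arithmetic can be sketched in a few lines of JavaScript (the task timings below are made up for illustration):

```javascript
// Hypothetical task timings in milliseconds.
var backendTasks = [120, 80, 300, 50];   // these run in series
var frontendTasks = [400, 90, 150];      // these run in parallel

// In series, every task adds to the total.
var serialTotal = backendTasks.reduce(function (sum, t) { return sum + t; }, 0);

// In parallel, only the slowest task determines the total.
var parallelTotal = Math.max.apply(null, frontendTasks);

console.log(serialTotal);   // 550 — shaving any serial task helps
console.log(parallelTotal); // 400 — only speeding up the 400ms task helps
```

Speeding up the 90ms or 150ms parallel task changes nothing; only the 400ms bottleneck matters.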

Ok, then what can we do?

So is it worthless to follow best practices to optimize your speed? No. You might get lucky, and those optimizations will make a big impact. They have their place, especially if you have a fairly typical site.

But if you don't see results from following guesses about why your site is slow, there's only one sure way to speed things up.

You have to find out what's really going on.

Establish a benchmark of current performance. Determine which layer contributes the most to the total time. Use profiling tools to find where the big costs are on the back-end and the bottlenecks on the front-end. Measure the effects of your improvements.
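As a rough sketch of that first step, the browser's Navigation Timing marks can be used to attribute load time to each layer (the helper function and the timestamp values here are illustrative, not a complete profiling setup):

```javascript
// Illustrative helper: given Navigation Timing marks (ms since
// navigation start), attribute load time to each layer.
function splitLoadTime(timing) {
  return {
    // time for the server to build and deliver the HTML
    backend: timing.responseEnd - timing.requestStart,
    // time for the browser to parse, fetch assets, and render
    frontend: timing.loadEventEnd - timing.responseEnd
  };
}

// In a browser you would pass window.performance.timing;
// here we use hypothetical numbers.
var result = splitLoadTime({ requestStart: 5, responseEnd: 905, loadEventEnd: 2405 });
console.log(result.backend);  // 900
console.log(result.frontend); // 1500
```

A split like this tells you where to point your profiling tools before optimizing anything.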

If your site's performance is an issue, ask an expert for a performance audit. Make sure they have expertise with your server infrastructure, your CMS or framework, and front-end performance. With methodical analysis and profiling measurements, they'll get to the bottom of it. And don't sweat those 'best practice' checklists.

Jan 17 2017

Drupal has a powerful suite of tools and features for just about any type of website, and eCommerce websites are no exception. Drupal has a very active and healthy eCommerce community that creates some amazing features.

The Drupal eCommerce market mainly consists of Ubercart and Drupal Commerce, but there are also some impressive emerging technologies to keep an eye on, such as the Stripe payment API, that allow for more customized solutions.

The eCommerce solution that you choose will depend upon the particular Drupal site you’re developing. Is it a simple online store with just a few products, or a much more robust site with a wide variety of products? This is one of the strengths of eCommerce with Drupal: you can build the kind of solution you need without the extra, unnecessary features and functionalities.

Ubercart vs Drupal Commerce

Drupal Commerce is by far the most popular Drupal eCommerce solution these days. It’s the successor to the original eCommerce solution, Ubercart. Drupal Commerce was written for Drupal 7 by the same developer that created Ubercart, Ryan Szrama, using the latest APIs and features for Drupal 7. Drupal Commerce is supported by Ryan Szrama’s company, Commerce Guys, and the Drupal community as a whole. Developers are more likely to add new features to Commerce because it is more widely used in Drupal 7 and beyond. The Drupal 8 version is still in beta, so more work needs to be done to get it to a full release. Check out the Drupal Commerce issue queue to see where you might be able to help.

Drupal Commerce learned from Ubercart's primary shortcoming: it was difficult to configure and not very modular. Where Ubercart was one module that was difficult to build upon, Drupal Commerce has a small set of features in the main module and a large suite of available contributed modules. This allows for easier configuration and more customization.

Kickstart Your Online Store

One of the most useful features available for Drupal Commerce is the Commerce Kickstart Drupal distribution. This is a great way for non-technical store owners to get a Drupal Commerce store up and running quickly and easily. It comes with an impressive installer that allows you to install an example store to see how everything can be configured. It then allows you to wipe the site clean and start a fresh build of your own custom store.

Commerce Kickstart comes with some additional built-in modules and configuration that help get a Drupal Commerce site up and running quickly. This is a more efficient solution than building from scratch with the Drupal Commerce module and any contributed modules that are necessary to achieve the desired functionality. The Commerce Kickstart distribution shows off the power of Drupal distributions; it’s a turnkey solution for Drupal eCommerce websites.

Stripe API

One of Drupal’s greatest advantages over its competitors is how well it integrates with third party APIs. This allows for integration with many different payment gateways, one being Stripe API. Drupal developers can integrate Stripe with a Drupal site through a custom module and create highly customized payment solutions. This type of customization allows for a variety of solutions for selling and accepting payments that would be more challenging to implement with Drupal Commerce.

Which Solution Should I Choose?

The solution you choose depends on the site’s needs. A small online store that only needs a simple shopping cart might be best suited for Ubercart. At its core, Ubercart is still the easiest to set up without using a Drupal distribution like Commerce Kickstart.

Drupal Commerce is a much more robust eCommerce solution for Drupal, with enterprise-level features, such as product displays, that large online stores rely on to sell their products. On top of that, you get all the features of a normal Drupal website, like content types, taxonomies, and user permissions.

If you are looking to build a very customized payment solution that doesn’t really fit into either of these categories, perhaps a custom solution with the Stripe API module is best.

In the end, the Drupal eCommerce solution you choose should be easy to use for your store administrators and easy for your customers to buy your products online.

Nov 15 2016

This post is the second in a series covering Zivtech's usage of Gulp for front-end development in Drupal 8.

In the last post, I covered how to setup Gulp for teamwork on Drupal 8 projects. In this post, I'll go over how to get started with writing Gulp tasks. I'll also break down a specific task for Sass linting to ensure good code quality.

Maintainable and Readable Gulp tasks

With any mid-to-large sized Drupal 8 theme, it's really easy for the main Gulp file (gulpfile.js) to become unwieldy and complex. With dozens of tasks doing all kinds of automated work, before too long, gulpfile.js becomes a soup of illegible code.

Additionally, members of your team might have different ways of naming Gulp tasks. One person might write a Sass building task called "buildSass" and another might create an identical task called "css."

It'd be nice to strip down gulpfile.js, make it readable, and somehow compartmentalize each task separately. Also, we want to cut down on task naming variations and create a unified system for structuring our tasks.

My current favorite way to handle these wishes is gulp-require-tasks. Basically, each task is written as an individual, CommonJS-style module. Then the tasks are arranged in directories, and that directory structure defines the task name. It is a very simple and predictable way to set up Gulp tasks.

Structuring Gulp tasks

Start off by creating the file tree structure below:

├── project/
│   ├── .gitignore (ignore node_modules, gulpfile.yml)
│   ├── package.json
│   ├── gulpfile.js
│   ├── default.gulpfile.yml
│   ├── sass
│   │   ├── styles.scss
│   ├── js
│   │   ├── scripts.js
│   ├── gulp-tasks
│   │   ├── styles
│   │   │   ├── lint.js
│   │   │   ├── build.js
│   │   ├── scripts
│   │   │   ├── lint.js
│   │   │   ├── build.js

The YAML settings file, default.gulpfile.yml, was discussed in the last post of this series, if you need a refresher.

gulp-require-tasks lets these tasks be accessible according to their structure. For example, to build the styles, you'll run "gulp styles:build" and to lint the JavaScript, you'll run "gulp scripts:lint." If you don't like the colon delimiter, you can change that too.

Update Gulp settings

In the last post we started the default.gulpfile.yml, and now we'll edit that same file to add in settings for the Gulp tasks we'll create in this project.

Open the file: it should look like this:

themeName: "myTheme"
themeDescription: "myTheme description"

Expand on that by adding settings for source and destination paths of Sass and JS:

themeName: "myTheme"
themeDescription: "myTheme description"
styles:
  src: "sass/**/*.scss"
  dest: "css"
  lint:
    enabled: true
    failOnError: false
scripts:
  src: "js/**/*.js"
  lint:
    enabled: true
    failOnError: false

Under the "styles" and "scripts" sections of the YAML, you can see I added some linting options too. From within the YAML settings, people can enable or disable linting, and also decide if they want the Gulp process to stop when linting errors are detected.

Pulling these settings out of the Gulp tasks themselves and into this YAML file means that developers don't have to search through the tasks looking for settings to change. Instead, they have every setting exposed to them in this one, concise file.

Importing tasks for Gulp

We haven't written any Gulp tasks yet, but we can go ahead and set up importing them so they can be used.

Open up the gulpfile.js we started in the last post. It should look like this:

(function () {
  'use strict';

  var gulp = require('gulp');
  var yaml = require('js-yaml');
  var fs = require('fs');
  var assign = require('lodash.assign');

  // read default config settings
  var config = yaml.safeLoad(fs.readFileSync('default.gulpfile.yml', 'utf8'), {json: true});

  try {
    // override default config settings
    var customConfig = yaml.safeLoad(fs.readFileSync('gulpfile.yml', 'utf8'), {json: true});
    config = assign(config, customConfig);
  } catch (e) {
    console.log('No custom config found! Proceeding with default config only.');
  }
})();

If you recall, we loaded default.gulpfile.yml and overrode it with any settings from gulpfile.yml if it exists. The gulpfile.yml file has the exact same structure as default.gulpfile.yml, but settings can have different values. This lets other developers on the team override some settings if needed.
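For example, a developer's local gulpfile.yml might look like this (a sketch; note that lodash.assign merges shallowly, so you should override whole top-level keys rather than single nested values):

```yaml
# gulpfile.yml: local overrides, ignored by Git.
# lodash.assign is a shallow merge, so replace entire top-level keys.
themeName: "myLocalTheme"
```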

At this point in gulpfile.js, the config is loaded and ready to be used. Next, we integrate gulp-require-tasks.

(function () {
  'use strict';

  var gulp = require('gulp');
  var yaml = require('js-yaml');
  var fs = require('fs');
  var assign = require('lodash.assign');
  var gulpRequireTasks = require('gulp-require-tasks');

  // read default config settings
  var config = yaml.safeLoad(fs.readFileSync('default.gulpfile.yml', 'utf8'), {json: true});

  try {
    // override default config settings
    var customConfig = yaml.safeLoad(fs.readFileSync('gulpfile.yml', 'utf8'), {json: true});
    config = assign(config, customConfig);
  } catch (e) {
    console.log('No custom config found! Proceeding with default config only.');
  }

  gulpRequireTasks({
    path: process.cwd() + '/gulp-tasks',
    arguments: [config]
  });
})();

Setting up gulp-require-tasks is super easy. We tell it where our gulp tasks are located, in the "gulp-tasks" directory.

Then, to each module (i.e. 1 module will be 1 Gulp task) in the directory, gulp-require-tasks passes arguments to each task. The first argument is always gulp itself. The "arguments" setting for gulp-require-tasks is an array of other things you want to pass to each module. I've opted to pass in "config," which is the object representing the settings merge in the YAML files.

This is essentially all you need in gulpfile.js. However, I also like to add shortcut tasks that combine other tasks for quicker use. For example, general "build" and "lint" tasks might look like this:

gulp.task('build', ['styles:build', 'scripts:build']);
gulp.task('lint', ['styles:lint', 'scripts:lint']);

Modular Gulp tasks

Let's start off by creating the Sass linting task. To help with this, I recommend using gulp-sass-lint. You'll want to read over how to set up sass-lint, which I won't cover in detail here. Essentially, you create a .sass-lint.yml file in the root of the project. That file contains all the rules you want to validate; for example, whether developers should avoid styling with IDs, or whether they should use RGB rather than HEX values for colors.
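A minimal .sass-lint.yml might look something like this (the rule names come from sass-lint; the chosen severities are just an example, where 0 disables a rule, 1 warns, and 2 errors):

```yaml
# .sass-lint.yml: example subset of rules
rules:
  no-ids: 2                # error when styling with IDs
  no-color-keywords: 1     # warn on named colors like "red"
  property-sort-order: 0   # disabled
```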

After sass-lint rules are in place, open up the styles linting file. Here you'll see the guts of the linting task:

'use strict';

var cached = require('gulp-cached');
var sassLint = require('gulp-sass-lint');
var gulpif = require('gulp-if');

module.exports = function (gulp, options) {
  if (options.styles.lint.enabled) {
    return gulp.src(options.styles.src)
      .pipe(cached('styles:lint'))
      .pipe(sassLint())
      .pipe(sassLint.format())
      .pipe(gulpif(options.styles.lint.failOnError, sassLint.failOnError()));
  } else {
    return console.log('css linting not enabled');
  }
};

For the three required packages, you'll want to run "npm install" for them, of course. Don't forget the "--save-dev" flag to get those packages stored in package.json!

The bulk of the code exists within the standard, CommonJS "module.exports" directive. A Gulp process is passed into the task as well as the set of options from default.gulpfile.yml.

We start off by running a quick if/else check so that we short-circuit out of this task if the user disabled Sass linting. Then, we pipe in the files that we selected in the Gulp settings' "styles.src" section. Files are then piped through gulp-cached, which keeps a list of the source files (and contents!) in memory. This makes the task faster.

Next, the styles are linted and the results are formatted and reported to the console. Finally, we use gulp-if to determine whether the Gulp process should be terminated if there are linting errors.

The sky's the limit

I leave it as an exercise for the reader to go about developing the other Gulp tasks. In the next post, I'll go over some other, more complicated Gulp tasks to show more advanced usage. Until then, you're more than welcome to look over and reference our own Gulp tasks we publish for Bear Skin.

Posts in this series

  1. Use Gulp for Drupal 8 with Teams, Part 1: Gulp Setup
  2. Use Gulp for Drupal 8 with Teams, Part 2: Creating tasks
Nov 07 2016

The Bear Skin theme is built for Drupal 8 to streamline front-end development and add value for clients in the process.

Because Drupal 8 is brand new to everyone, we learn as we go, and implement best practices as they’re created. We’ve covered our bases with the available resources online (drupal.org, blog posts, code forums, internal discussions and code sharing), not to mention attending and presenting sessions at Drupal events.

With new systems, and specifically open source ones, best practices generally evolve rapidly, from experiment to consensus; patch to accepted contribution and solution to standard.

New does not have to be scary. It can also be exciting, and sometimes the hurdle is more about spreading this feeling, especially to clients.

But let’s be honest, Drupal for a front-end developer is far from a joy ride. Markup is barely accessible without knowing your preprocess functions, or by adding contributed modules that will help you do that through the UI. You add more weight to the code base and often have to export settings through features. That’s just the beginning. You see where I am going with this? Configuration management clusters, database synchronisation requirements, code maintainability conflicts across teams, and so on.

The good news is that Drupal 8 enables a front-end developer to separate the theme layer in a way that makes sense, maintaining autonomy while leveraging parts of site building that a front end developer should control.

Hell, we may see the emergence of a new breed of Drupalers. We can call them the Templaters.

Site builders out there: dive into TWIG if you have not done so already! Need convincing? Here are highlights:

  1. Component-based approach to templating through specific inheritance functions (includes, extends, and embeds)
  2. Increased security (secure HTML output by default)
  3. Powerful, expressive (semantic) template language; easy syntax coupled with great features
  4. Front-end devs in charge of markup without having to dive into PHP preprocess functions or rely on back-end devs
  5. Integration with Pattern Lab
  6. Easy to debug (Devel, Kint) and well documented
  7. Avoids Panels; build and register your own layouts, using Twig for the logic and Display Suite for UI management
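As a quick sketch of those inheritance functions (the template and block names here are made up):

```twig
{# teaser-card.twig builds on a base card and overrides one block #}
{% extends "card.twig" %}

{% block footer %}
  {% include "button.twig" with { "text": "Read more" } %}
{% endblock %}
```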

How can we be sure that the path to develop Bear Skin was the right one?

We developed our early version of the Bear Skin theme for D8 prior to the official release. For that first attempt, we basically just ported our existing D7 theme layer.

While it worked, we quickly realized that we were not taking advantage of the new configuration under the hood in D8. In parallel we were experiencing difficulties in streamlining our design/wireframe/prototype/development process.

A robust website is composed (or at least should be, by modern standards) of more than 70% reusable components. The code base is smaller and more flexible, because the architecture relies on elements that can be reused throughout instead of being replicated in various contexts.

We had integrated an atomic design approach into our flow and defined a comprehensive hierarchy of the web components we usually use, but we weren’t quite sure how it could effectively translate to a Drupal build. Welcome, Pattern Lab! Originally written for Mustache, it was quickly adapted for Twig, and guess what: Twig is our new friend.

We studied, asked questions, researched, stole, shared, listened, and were able to narrow down our conception of what was right (for us, that is). Many shops and people developed this concept early on and helped confirm the approach, each with their own variations (Phase2, Forum One, Aleksi Peebles, and John Albin with Zen).

How about a living styleguide that serves as the source for our Drupal theme layer?

We built a styleguide, defined our atoms, created the templates (TWIG), wrote our styles and eventually told Drupal to use (and reuse) them. It’s not sorcery, and seeing many Drupal shops and developers working in this direction made us feel comfortable investing the time to go further.

Afraid of templates? Don’t be. D7 gave them a bad reputation, but let’s move on! The problem is in defining a solid front-end architecture so that a site does not get over-templated. And let’s remember that everything is already templated by default. So why not have these templates at hand, living comfortably in a style guide where you have access to all of them at a glance, organized within a hierarchy you or your team have defined?

This is not only an improvement in output and scalability; the workflow also forces us to implement good practices, and Pattern Lab has become our safeguard during front-end planning and implementation.

Let’s Dive in, Shall we?


Here is what our root looks like:

|-- README.md
|-- backstop.json
|-- bear_skin.breakpoints.yml
|-- bear_skin.info.yml
|-- bear_skin.layouts.yml
|-- bear_skin.libraries.yml
|-- bear_skin.theme
|-- bin
|-- bower.json
|-- components
|-- config
|-- css
|-- default.gulpfile.yml
|-- docs
|-- favicon.ico
|-- fonts
|-- gulp-tasks
|-- gulpfile.js
|-- gulpfile.yml
|-- images
|-- js
|-- logo.png
|-- logo.svg
|-- node_modules
|-- out.txt
|-- package.json
|-- pattern-lab
|-- screenshot.png
|-- templates
|-- theme-settings.php

For those familiar with a D8 theme, Bear Skin includes the following:

  • a .info file for meta-data about your theme (bear_skin.info.yml)
  • a libraries file for defining all of your asset libraries (bear_skin.libraries.yml)
  • a breakpoints config file (bear_skin.breakpoints.yml)
  • a .theme file for conditional logic, preprocessing, and basic theme settings (bear_skin.theme)
  • a theme-settings.php file for modifying the theme settings form (theme-settings.php)
  • a logo file (logo.png)
  • a favicon file (favicon.ico)
  • a screenshot file that is shown on the Appearance page (screenshot.png)
  • a typical theme folder structure that includes directories for css, .js, templates, images, and fonts

In addition, our setup also includes:

  • a bin directory for shell scripts run on post-install
  • a components dir (we will go into detail about this one later in this post)
  • a config dir that sets up default settings when installing the theme, such as block placement and theme settings
  • a docs dir that details the steps to install and use the theme
  • a gulp-tasks dir (in place of one monolithic gulpfile) that componentizes and separates tasks for better (re)usability; our gulpfile.js loads all of these submodules
  • a pattern-lab dir that contains config files for pattern lab
  • a .bowerrc file that contains a path for bower to install dependencies
  • .sass-lint and .eslintrc contain the default settings for our javascript and sass linting tasks
  • a backstop.json file that contains the config for our css regression tests
  • a layouts.yml file that registers templates used by the layouts modules and display suite
  • a default.gulpfile.yml file (to be copied as a gulpfile.yml) that configures the various options for browsersync, pattern-lab, paths and more
  • a package.json file that contains our NPM package dependencies

Inside the components directory, things are organized like this:
|-- _annotations
|-- _data
|-- _layouts
|-- _macros
|-- _meta
|-- _patterns
|  |-- 0-Atomic-Design-Plan
|  |-- 00-utilities
|  |-- 01-atoms
|  |-- 02-molecules
|  |  |-- banner
|  |  |-- banner-with-page-title
|  |  |-- blocks
|  |  |-- comments
|  |  |-- messages
|  |  |  |-- _messages.scss
|  |  |  |-- messages.json
|  |  |  |-- messages.twig
|  |  |  |-- messages~error.json
|  |  |  |-- messages~warning.json
|  |  |-- navigation
|  |-- 03-organisms
|  |-- 04-layouts
|  |-- 05-pages
|-- _twig-components
|-- bear_skin.scss

What we did here is organize our components using the atomic design concept, and each of them has a directory containing a .twig, .json and .scss file. We also included a yeoman script to facilitate generating components.

The .twig file is our reusable template. It gets picked up by Pattern Lab (Drupal itself only reads *.html.twig files).

The .json will add static data with either strings or includes from other patterns.

The .scss file will be the stylesheet for this component (only). Additional .yml or .md files can be added to display different types of information about the pattern.

The additional .json files with the ~ symbol (Pattern Lab calls them pseudo-patterns) render the same component with different data. This is useful when you don’t want to create many Twig files that serve the same purpose but carry different data attributes.
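For instance, a hypothetical messages~error.json could contain just the data that differs from the base messages.json (these field names are assumptions, not the actual Bear Skin schema):

```json
{
  "message_type": "error",
  "messages": [
    "The page you requested could not be saved."
  ]
}
```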

The next step is to include, embed or extend this template on the Drupal side with a .html.twig file, which will get picked up by the Drupal twig engine system. This is also the place to add some Drupal specific data attributes (if needed) or specify which part of the twig component you wish to override.
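On the Drupal side, such an override can be as small as a single include. A sketch (the @molecules namespace assumes something like the Components module registering the Pattern Lab paths, and the variable names are hypothetical):

```twig
{# templates/02-molecules/status-messages.html.twig #}
{% include "@molecules/messages/messages.twig" with {
  "message_type": type,
  "messages": messages
} %}
```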

We place these in the “templates” directory, following the same atomic structure.

|-- 01-atoms
|-- 02-molecules
|  |-- block--main-search.html.twig
|  |-- block--system-branding-block.html.twig
|  |-- block--system-menu-block.html.twig
|  |-- block.html.twig
|  |-- breadcrumb.html.twig
|  |-- details.html.twig
|  |-- form.html.twig
|  |-- menu-local-tasks.html.twig
|  |-- menu.html.twig
|  |-- pager.html.twig
|  |-- status-messages.html.twig
|-- 03-organisms
|-- 04-layouts
|-- 05-pages
|-- html.html.twig

This model basically follows the MVP (Model-View-Presenter) architectural pattern, which helps separate concerns between logic and presentation. In our case the data model is Drupal, the presenter is our native Twig files in Pattern Lab, and the passive view is the Drupal templates.

Let’s discuss the massive advantages of doing things this way.

  • A style guide organized with these principles essentially becomes a comprehensive map for all developers working on the project, and clients alike.
  • Front end developers have much more control over the markup (structure, classes, other data attributes etc).
  • The front end developer does not need to wait to be handed the site once back end devs are done with a build. Development can be conducted in a parallel process.
  • Prototype: the living styleguide can easily become a way for you to prototype components or pages for your clients.
  • The site is prepared for scalability: a living styleguide approach enables us (and forces us) to consider how reusable each new component may be, and where it falls in our overall front end architecture.
  • We style once, we build once: CSS and templates are shared between Pattern Lab and Drupal.
  • Components: when styling, separating components helps avoid over-nesting the Sass. We don’t dump CSS into large files that already have one, two, or three parent selectors, so we end up with more manageable, cleaner code.
  • We minimize the code output. One template can serve many pieces of content when reused, with or without a modifier.

Nothing is set in stone and we are likely to see this process evolve further, but these are great improvements over D7 and reasons to get excited about D8 builds. As a team, this approach lets our vision of front-end architecture and development come to life, and it expands that vision as we keep discovering ways to streamline a shared development environment and offer our clients solutions that fit them best.

Get started: DOWNLOAD the theme, follow the docs and see for yourself!

Read more about Pattern Lab, Drupal, Browsersync and Regression Testing.

Special thanks to James Cole, Sean Wolfe and Stephanie Semerville for building this along with me. It’s a labor of love that accepts your contributions as well :) Found a bug? send your pull requests over here.

Oct 26 2016
Oct 26

Gulp is a mainstay of front-end development nowadays. Of course, like all front-end development tools, there is a massive proliferation of build systems, from Webpack to SystemJS and Grunt to Gulp. Yet, we at Zivtech find ourselves using mostly Gulp, particularly when dealing with Drupal 8 projects.

This article is the first of a series of posts where I outline how Zivtech uses Gulp. In this first part, I'll talk about our reasoning and setup process.

Why does Zivtech use Gulp for Drupal 8?

The choice of Gulp over other front end tools is due to how Drupal utilizes front-end assets. It's perfectly fine to use something like Webpack or Browserify with Drupal, but those all-encompassing, "build and combine all the things!" systems are best used for projects that don't have a built-in asset pipeline. For example, Drupal concatenates and minifies CSS and JS for us, and it's really just over-compiling (is that a word?) to use something that Drupal obviates.

Also, we use Gulp over Grunt or even Broccoli (because yes, that's a thing too) strictly because Zivtech does a lot of node.js development as well. The concept of streams and buffers in Gulp are used throughout node.js, and it makes sense that we'd align with our other development.

Many projects and distributed teams

As a client services company, Zivtech has many projects and several teams working on projects. Thus, our building tasks have to be somewhat abstract so as to apply to most situations. So the first step to conquering the Gulp pipeline is figuring out a way to make the tasks themselves static, but let the configuration remain changeable.

Some examples of these changeable settings: the website address that Browsersync should proxy while watching your development (which could also change on a per-user basis), and the website name, which changes on a per-site basis.

Within each project, we could just alter the Gulp tasks directly to account for these differences. Yet some people on the team may not be too familiar with Gulp and you might be sending them into the weeds trying to suss out "that one weird setting" they should change.

At this point you might be thinking we should make a settings file for each project's Gulp tasks, and you'd be correct if so! The Gulp tasks remain the same, but the settings always change.

As it turns out, Drupal 8 has a preferred method for settings files: the YAML format. Being a flexible guy, I vote for just sticking with what the system wants. Thus, our new settings files will be written in YAML.

Using YAML for Gulp settings

First, let's think about how we're going to implement settings from a big picture perspective. We've already determined that we'll work in YAML and we'll have a default group of configuration settings available. We also want each member of the team to be able to override some settings to fit their situations.

It makes sense that we'll have a file called default.gulpfile.yml for the default settings. Gulp should merge another file, we'll call it gulpfile.yml, on top of the default. The default settings get tracked in Git or your chosen version control system, but the other one should not. This allows for complete flexibility of any setting you or one of your teammates might want.
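Keeping the per-developer file untracked is a one-line addition to .gitignore:

```
# gulpfile.yml holds per-developer overrides; default.gulpfile.yml stays in Git
gulpfile.yml
```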

In default.gulpfile.yml, start off by creating some basic settings:

themeName: "myTheme"
themeDescription: "myTheme description"

Next, create a gulpfile.yml to contain your customized settings:

themeName: "myRenamedTheme"

When Gulp runs, the themeDescription setting should match default, but the themeName setting should be overridden.
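A minimal sketch of that merge behavior, using Object.assign in place of lodash.assign (the two behave the same for this shallow merge):

```javascript
// Later keys win: custom settings override defaults, untouched keys pass through.
const defaults = { themeName: 'myTheme', themeDescription: 'myTheme description' };
const custom = { themeName: 'myRenamedTheme' };

const config = Object.assign({}, defaults, custom);

console.log(config.themeName);        // 'myRenamedTheme' (overridden)
console.log(config.themeDescription); // 'myTheme description' (default kept)
```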

Finally, in your gulpfile.js:

(function () {
  'use strict';

  var gulp = require('gulp');
  var yaml = require('js-yaml');
  var fs = require('fs');
  var assign = require('lodash.assign');

  // read default config settings
  var config = yaml.safeLoad(fs.readFileSync('default.gulpfile.yml', 'utf8'), {json: true});

  try {
    // override default config settings
    var customConfig = yaml.safeLoad(fs.readFileSync('gulpfile.yml', 'utf8'), {json: true});
    config = assign(config, customConfig);
  } catch (e) {
    console.log('No custom config found! Proceeding with default config only.');
  }
})();

Now, when you run any Gulp task, your config files will get merged by lodash. One day, Object.assign will be more widely available, and lodash won't be needed any longer. For now, things work fine this way.

You'll notice that loading the custom config is in a try ... catch block. We do that so there are no show-stopping errors if the custom config is not found. Additionally, if it's not found we can let the user know that only default settings are in use.

Wrapping up

Well, this has been a high-level explanation of how and why we use Gulp at Zivtech for D8 projects.

In the coming articles in this series, I'll expand on the simple gulpfile.js and default.gulpfile.yml files started. I plan to outline our process for linting and compiling CSS, linting and compiling JavaScript, and a couple extra tasks too, like integrating Bower and favicon generation. Until then!

Posts in this series

  1. Use Gulp for Drupal 8 with Teams, Part 1: Gulp Setup
  2. Use Gulp for Drupal 8 with Teams, Part 2: Creating tasks
Oct 14 2016
Oct 14

Websites have a shelf life of about five years, give or take. Once a site gets stale, it’s time to update. You may be going from one CMS to another (e.g., WordPress to Drupal), or you may be moving from Drupal 6 to Drupal 8. Perhaps the legacy site was handcrafted, or it may have been built on Squarespace or Wix.

Content is the lifeblood of a site. A developer may be able to automate the migration, but in many cases, content migration from an older site may be a manual process. Indeed, the development of a custom tool to automate a migration can take weeks to create, and end up being far costlier than a manual effort.

Before setting out, determine if the process is best accomplished manually or automatically. Let’s look at the most common concerns for developers charged with migrating content from old to new.

1. It’s All About Data Quality

Old data might not be very structured, or even structured at all. A common bad scenario occurs when you try to take something that was handcrafted and unstructured and turn it into a structured system. Case in point would be an event system managed through HTML dumped into pages.

There's tabular data, there are dates, and there are sessions; these structured things represent times and days, and the presenters who took part. There could also be assets like video, audio, the slides from the presentation, and an accompanying paper.

What if all that data is in handcrafted HTML in one big blob with links? If the HTML was created using a template, you might be able to parse it and figure out which fields represent what, and you can synthesize structured data from it. If not, and it's all in a slightly different format that's almost impossible to synthesize, it just has to be done manually.

2. Secret Data Relationships

Another big concern is a system that doesn't expose how data is related.

You could be working on a system that seems to manage data in a reasonable way, but it's very hard to figure out what’s going on behind the scenes. Data may be broken into components, but then it does something confusing.

A previous developer may have used a system that's structured, but used a page builder tool that inserted a text blob in the top right corner and other content in the bottom left corner. In that scenario, you can't even fetch a single record that has all the information in it because it's split up, and those pieces might not semantically describe what they are.

3. Bad Architecture

Another top concern is a poorly architected database.

A site can be deceptive because it has structured data that describes itself. The system may be able to fetch data as each element is requested, but make it really hard to find the full list of elements and load all of the data in a coordinated way.

It's just a matter of your architecture. It’s important to have a clearly structured, normalized database with descriptively named columns. And you need consistency, with all the required fields actually in all the records.

4. Automated Vs. Manual Data Migration

Your migration needs to make some assumptions about what data it’s going to find and how it can use that to connect to other data.

Whether there are 6 or 600,000 records of 6 different varieties, it's the same amount of effort to automate a migration. So how do you know if you should be automating, or just cutting and pasting?

Use a benchmark. Migrate five pieces of content and time out how long that takes. Multiply by the number of pieces of content in the entire project to try to get a baseline of what it would take to do it manually. Then estimate the effort to migrate in an automated fashion. Then double it. Go with the number that’s lower.
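That back-of-the-envelope math, with hypothetical numbers, looks like:

```javascript
// Benchmark: migrating 5 pieces of content took 60 minutes => 12 minutes each.
const minutesPerItem = 60 / 5;
const totalItems = 1200;

const manualHours = (minutesPerItem * totalItems) / 60; // 240 hours

// Engineering estimate for the automated migration, then double it.
const automatedHours = 40 * 2; // 80 hours

const approach = automatedHours < manualHours ? 'automated' : 'manual';
console.log(approach); // 'automated'
```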

One of the reasons to pick a system like Drupal is that the data is yours. It's an open platform. You can read the code and look at the database. You can easily extract all of the data and take it wherever you want.

If you’re with a hosted platform, that may not be the case. It's not in the hosted platform’s best interest to give you a really easy way to extract everything so you can migrate it somewhere else.

If you're not careful and you pick something because it seems like an easy choice now, you run the risk of getting locked in. That can be really painful because the only way to get everything out is to cut and paste. It’s still technically a migration. It's just not an automated one.

Oct 07 2016
Oct 07
A recent report from Sucuri found that the vast majority of hacked websites are hosted on the WordPress CMS (content management system). Nearly 16,000 sites have been hacked in 2016. According to the report, “the three CMS platforms most being affected are WordPress, Joomla! and Magento.” But, the findings go on to say that these platforms are no more or less secure than Drupal, even though Drupal doesn’t even make the list.

That’s because security has more to do with humans than code. “In most instances, the compromises analyzed had little, if anything, to do with the core of the CMS application itself, but more with improper deployment, configuration, and overall maintenance by the webmasters and their hosts,” explains Sucuri.

The Password is...

The ways that people get hacked are, for the most part, straightforward. The worst offender is a bad password. The best passwords can’t be guessed and are a mix of letters, numbers and characters. But people's memory being what it is, most passwords are easy to remember, and as a result, easy to hack. Even if a user has a secure password, he might repeat it on a number of sites. As soon as one site loses its data security, hackers will gain entry all over the web with that one frequently used password.

Another common problem is passwords that are shared across an organization, but remain unchanged when an employee leaves. If the former staffer was fired, or has had a negative experience with the company, there’s a chance that the password will fall into enemy hands.

A site that stores valuable user information (such as credit cards or personal data) is especially at risk. While the employee herself may not pose a security threat, a bad actor such as a relative or neighbor could gain access to credentials and wreak havoc.

Give permissions only to trusted users, and have protocols in place for removing access for ex-employees. It’s a good idea to set up password constraints (must contain certain characters). Some companies set up automatic expiration, in which employees are required to reset their passwords every 60 days, but this is a debated idea. Many argue that forcing password changes is not a great plan since change is hard on the memory, so people tend to use easier passwords when forced to switch frequently. If a password is good, then changing it only mitigates issues but doesn’t completely eliminate them.

Plugins, Modules and Hosts

The code underlying WordPress core gets a lot of attention and is patched quickly, so vulnerabilities more often turn up in plugins. The Slider Revolution (AKA RevSlider) and GravityForms plugins have provided opportunities for hackers to get into a site and facilitate the installation of malware on visitors’ computers. While fixes for these gaps have been put in place, there will always be another vulnerability around the corner. It’s a game of cat and mouse.

Then there are other ways to hack into an account that have nothing to do with the CMS. Was the site’s host account hacked? Historically, it’s been too easy to call a provider’s customer service, provide the bare minimum to the customer service rep, and get into the backend of the site. That’s not technical, and there’s no need to be a skilled hacker. Fortunately, service providers are getting smarter about these schemes.

Drupal vs. WordPress?

Drupal’s security relies upon a strong, coordinated effort. In general, Drupal is more secure overall, with a dedicated security team that operates using a series of protocols and a chain of responsibility for handling issues. As a Drupal shop, Zivtech receives weekly emails with alerts about security updates. Your CMS may do the same. Be sure to check.

Drupal is built upon rigorous coding standards, with tools to ensure that strict security practices are followed. The entire system is designed to make sure that all code that accesses the database is sanitized.

Best Practices for Drupal Security

There are ways that you can audit your site to check that you are being cautious. Drupal has specific protocols, such as ensuring that the files on the file system are safe and set up properly and that an outside system can’t connect to the database.

Certain modules should never be turned on, like the PHP module. The PHP module enables an outsider to hack into your site if you’re not extremely careful. There are a number of security updates incorporated into the latest version, Drupal 8, including the removal of the PHP filter.

First, make sure you have an SSL certificate. You can get them for free at Let’s Encrypt.

Next, if you’ve already taken all the standard steps to secure your site but still want to go a little further, you can also delete all readme text files that come with your CMS. This will reduce the surface area for an attack. By default, the readme files are accessible by anyone who visits your site. This could be a problem if an issue was discovered in a specific version of Drupal or a Drupal module. You can imagine that if there was a hack against Drupal version 7.10, hackers would scan sites for the 7.10 CHANGELOG.txt file to create a list of targets. Reduce that risk by deleting those files, or make them impossible to read over the internet.
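If deleting the files isn’t practical, blocking them at the web server works too. A sketch for Apache (the file list is illustrative; adjust it for your setup):

```apache
# Deny web access to Drupal's top-level .txt files (CHANGELOG.txt, README.txt, etc.)
<FilesMatch "^(CHANGELOG|COPYRIGHT|INSTALL.*|LICENSE|MAINTAINERS|README.*|UPGRADE)\.txt$">
  Require all denied
</FilesMatch>
```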

Fending off security attacks is like playing hide and seek with frequently shifting rules. The developers behind the most popular CMS platforms work tirelessly to keep up. The primary reason that WordPress sites are attacked more frequently is actually all about the numbers: it's the most popular CMS, and therefore the most targeted.


Sep 20 2016

As a junior developer ramping up to learning Drupal, I spent a lot of time clicking through the UI. After getting familiar with it, I wanted to take a look behind the scenes at Drupal’s codebase. Writing code for a Drupal site can be an overwhelming experience because, even though it’s written in PHP, there’s a dense API behind it. One of the biggest parts of that API is the hook system. The first exposure I had to writing PHP with Drupal was through update hooks. So I wanted to review how hooks work, and how cool they are to use!

What is a Hook? 

Drupal has a lot of excellent Community Documentation, and the page on hooks is thorough. It says: 

“Hooks are how modules can interact with the core code of Drupal. They make it possible for a module to define new urls and pages within the site (hook_menu), to add content to pages (hook_block, hook_footer, etc.), to set up custom database tables (hook_schema), and more. 

Hooks occur at various points in the thread of execution, where Drupal seeks contributions from all the enabled modules. For example, when a user visits a help page on a Drupal site, as Drupal builds the help page it will give each module a chance to contribute documentation about itself. It does this by scanning all the module code for functions that have the name mymodule_help($path, $arg), where "mymodule" is the module's name, e.g., the block module's help hook is called block_help and the node module's help hook is called node_help. The hook may provide parameters; hook_help's parameters $path and $arg allow the developer to determine what page or pages the help messages will appear on.
A hook can be thought of as an event listener in the sense that an event triggers an action.”
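Following the quoted example, a minimal Drupal 7 implementation of hook_help for a module named mymodule (a placeholder name) could look like this. Drupal discovers the function purely by its name, which is what makes the hook system feel magical at first:

```php
/**
 * Implements hook_help().
 */
function mymodule_help($path, $arg) {
  switch ($path) {
    // Shown on the module's entry at admin/help#mymodule.
    case 'admin/help#mymodule':
      return '<p>' . t('This module does one small thing well.') . '</p>';
  }
}
```

No registration step is needed: clearing the cache is enough for Drupal to pick the function up during its scan of enabled modules.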

Sep 13 2016

PHP can be challenging to learn, especially if you’re learning Drupal at the same time. Three things stood out to me while I learned how to write PHP within Drupal, and I’m hoping by highlighting them, it might help other junior developers. 

I’m going to use a snippet from a basic Drupal form with a submit handler (see code for the entire form here). It’s an example of code you will see often, since Drupal reuses its Form API for consistency in form processing and presentation. This particular code snippet adds a fieldset called ‘name’ to the form.

/**
 * Returns the render array for the form.
 */
function my_module_my_form($form, &$form_state) {
  $form['name'] = array(
    '#type' => 'fieldset',
    '#title' => t('Name'),
    '#collapsible' => TRUE,
    '#collapsed' => FALSE,
  );
  return $form;
}
Three items within just these few lines seemed strange to me starting out.

Function t 

  '#title' => t('Name'),

You’ll see this bugger everywhere! Of course I know how to write a string, but what is the t for? Straight from the Drupal 7.x API: 

“Translates a string to the current language or to a given language.”

As explained here, each string passed through the t function can be translated through the Drupal UI, which helps make your site available in multiple languages. All user-facing strings in code should be passed through this function, but it’s worth noting that content translation is handled elsewhere (see the Internationalization module for more information). 
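For dynamic strings, t() takes placeholders rather than concatenated variables, so the translatable string stays intact and the values are sanitized. A small sketch (the variables here are illustrative, not Drupal globals):

```php
// '@name' and '@count' are check_plain-escaped before insertion
// into the translated string.
$output = t('Welcome, @name! You have @count new messages.', array(
  '@name' => $account->name,
  '@count' => $count,
));
```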

Aug 18 2016
The goal of any company is to reduce costs and increase profit, especially when it comes to online and IT projects. When an IT undertaking is a transitional effort, it makes sense to consider staff augmentation and outsourcing.

Consider the marketing efforts of one worldwide corporation. Until recently, each brand and global region built and hosted its own websites independently, often without a unified coding and branding standard. The result was a disparate collection of high maintenance, costly brand websites.

A Thousand Sites: One Goal

The organization has created nearly a thousand sites in total, but those sites were not developed at the same time or with the same goals. That’s a pain point. To solve this problem, the company decided to standardize all of its websites onto a single reference architecture, built on Drupal.

The objectives of the new proprietary platform include universal standards, a single platform that can accommodate regional feature sets, automated testing, and a feature set that covers 95% of use cases for the company’s websites globally.

While building a custom platform is a great step forward, it must then be implemented, and staff needs to be brought up to speed. To train staff on technical skills and platforms, often the best solution is to outsource the training to experts who step in, take over training, and propel the effort forward quickly.

As part of an embedded team, an outsourced trainer is an adjunct team member, attending all of the scrum meetings, with a hand in the future development of the training materials.

Train Diverse Audiences

A company may invest a lot of money into developing custom features, and trainers become a voice for the company, showing people how easy it is to implement, how much it is going to help, and how to achieve complex tasks such as activation processes. The goal is to get people to adopt the features and platform. Classroom style training allows for exercises on live sites and familiarity with specific features.

The Training Workflow

Trainers work closely with the business or feature owner to build a curriculum. It’s important to determine the business needs that inspired the change or addition.

Starting with an initial outline, trainers and owners work together. Following feedback, more information gets added to flesh it out. This first phase can take four to five sessions to get the training exactly right for the business owner. For features that follow, the process becomes streamlined. It's more intuitive because the trainer has gotten through all the steps and heard the pain points, but it’s important to always consult the product owner. Once there is a plan, the trainers rehearse the curriculum to see what works, what doesn’t work, what’s too long, and where they need to cut things.

Training Now & Future

Training sessions may be onsite or remote. It is up to the business to decide if attendance is mandatory. Some staffers may wish to attend just to keep up with where the business is going.

Sessions are usually two hours with a lot of time for Q&A. With trainings that are hands-on, it’s important to factor in time for technical difficulties and different levels of digital competence.

Remote trainings resemble webinars. Trainers also create videos to enable on demand trainings. They may be as simple as screencasts with a voiceover, but others have a little more work involved. Some include animations to demo tasks in a friendlier way before introducing a more static backend form. It is the job of the trainer to tease out what’s relevant to a wide net of audiences.

The training becomes its own product that can live on. The recorded sessions are valuable to onboard and train up future employees. Trainers add more value to existing products and satisfy management goals.
Aug 16 2016
Podcasts are a great way to get intel on the go. Listen while you walk, drive, exercise, or unwind. According to recent research in The Infinite Dial 2016, podcast listening has seen sharp gains, with listeners consuming an average of five podcasts per week.

If you can think of a topic, there’s probably a podcast about it. Web development is no different. Here are three podcasts to listen to if you’re a developer.

Code Newbies

For new developers, it’s enlightening to listen to others talk about how they started coding and the struggles that they faced along the way. Saron always interviews interesting people (including many women in tech) and has a segment where she asks about the worst advice the interviewee has ever received. With over one hundred episodes, there’s a lot of content to leave you feeling inspired.

JavaScript Jabber

This podcast has an incredible amount of content at over two hundred episodes and counting. Rather than a single interviewee, the episodes include panels of individuals, all with different but equally insightful perspectives. As a newer developer, you’ll get exposure to new concepts and resources, which is extremely valuable considering JavaScript is all the rage these days.

PhillyDev Podcast

This podcast is fairly new and was created by a NYCDA grad. It’s fun and informative and provides a closer look at the Philly tech scene: it highlights the movers and shakers, how they got into the field, and the technologies they use. Steven always includes his “sweet nug” segment where he asks the interviewee to share a piece of wisdom or a recommendation for his listeners.

Do you have a favorite podcast about development? Let us know in the comments below!


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
