Aug 06 2019

We were honored to have two members of our team present sessions at Drupal North this past June in the beautiful and vibrant city of Montreal, Canada. Drupal North is an annual, free, three-day conference focusing on Drupal-related topics and the community that drives the Drupal Project forward.

We were pleased to make new connections while also reconnecting with friends and peers. As always, it was also a privilege to be given the opportunity to share our expertise.

Crispin Bailey
Director of UX + Design

[embedded content]

The American Foundation for the Blind (AFB), established in 1921, is an amazing organization that advocates on behalf of visually impaired persons, providing community, resources and opportunities.

This talk covered the process we went through to overhaul the site's messaging, content architecture, and visual design, culminating in a fully-responsive HTML prototype and style guide that was used to implement a brand new, fully-accessible, Drupal 8 website.

Andrew Mallis
CEO

[embedded content]

 
Our client, the deYoung Museum, had an ambitious design project scope with tight timelines on delivery that required us to be nimble and innovative. This talk covered the team’s journey, how we repurposed GatherContent as a CMS, and how we automated deployments via its workflow states using Circle CI and Netlify.

It also looked at the component architecture that empowered our client to own their digital stories, and to author them to the point where insights.famsf.org/gauguin received a Webby Honoree award.

Andrew Mallis
CEO

[embedded content]

Google Analytics provides a wealth of data, giving insight into users’ needs and behaviors once they’re on your site. However, the amount of data it presents can be disorienting, leaving users unable to focus on key metrics, or misreading the implications of critical indicators in relation to their goals.

In this talk, Andrew Mallis presented best practices for accessing the clearest and most useful analytics data to better tell your story.
 
 
We hope you find these talks helpful and insightful, and we look forward to seeing you at future events.

Aug 06 2019

Author’s Note: No rabbits were harmed in the writing of this post.

This summer, I had the opportunity to attend Decoupled Days 2019 in New York City. The organizers did a fabulous job of putting together an extremely insightful, yet approachable, program for the third year of the conference to date.

In case you haven’t heard of the event before, Decoupled Days is somewhat of a boutique conference that focuses solely on decoupled CMS architectures, which combine a CMS like Drupal with the latest front-end web apps, native mobile and desktop apps, or even IoT devices. Given the contemporary popularity of universal JavaScript (used to develop both the front and back ends of apps), the conference also has a strong focus on the latest JavaScript technologies and frameworks.

If you weren’t able to attend this year, and have the opportunity in the future, I would highly recommend the event to anyone interested in decoupled architectures, whether you’re a beginner or an expert in the area. With that in mind, here are a few of the sessions I was able to attend this year that might give you a sense of what to expect.

Christopher Bloom (Phase2) delivered an excellent session, giving all of us a crash course in TypeScript. He provided a very helpful live demo of how TypeScript can make it much easier and safer to write apps in frameworks like Vue.js, by allowing for real-time error-checking within your IDE (as opposed to at runtime in your browser) and by letting you leverage ES6 class syntax and decorators together seamlessly.

Jamie Hollern (Amazee Labs) showed off how it’s possible to have a streamlined integration between your Drupal site and a fully-fledged UI pattern library like Storybook. By using the Component Libraries, GraphQL, and GraphQL Twig contributed modules in concert with a Storybook-based style guide, Jamie gave a fabulous demonstration of how you can finally stick to the classic DRY (Don’t Repeat Yourself) principle when it comes to a Drupal design system. Because Storybook supports front-end components built using Twig markup, and allows for populating those components using mock data in a GraphQL format, we can use those same Twig templates and GraphQL queries within Drupal with almost no refactoring whatsoever. If you’re interested in learning more about building sites with Drupal, GraphQL, and Twig, Amazee Labs has also published an “Amazing Apps” repo that serves as a sample GraphQL/Twig project.

For developers looking to step up their local development capabilities, Decoupled Days featured two sessions on invaluable tools for doing decoupled work on your local machine.

Kevin Bridges (Drud) showed how simple and straightforward it can be to use the DDEV command-line tool to quickly spin up a Drupal 8 instance (using the Umami demo site profile, for example), enable the JSON:API contributed module in order to expose Drupal’s data, and then install (using gatsby-cli) and use a local Gatsby.js site to ingest and display that Drupal data.

Matt Biilman (Netlify) also demonstrated the newly launched Netlify Dev command-line tool for developers who use Netlify to host their static site projects. With Netlify Dev, you could spawn an entire local environment for the Gatsby.js site you just installed using DDEV, which will be able to locally run all routing rules, edge logic, and cloud functions. Netlify Dev will even allow you to stream that local Gatsby.js site to a live URL (i.e., on Netlify) with hot-reloading, so that your remote teammates can view your site as you build it.

As usual, Jesús Manuel Olivas (WeKnow) is pushing the Drupal community further and further, this time by demonstrating his own in-house, Git-based CMS called Blaze, a platform that could potentially replace the need for Drupal altogether. Blaze promises to provide the lightest-weight back end possible for your already lightweight, static-site front end. In lieu of a relatively heavyweight back end like Drupal, Blaze would provide a familiar WYSIWYG editor and easy content modeling with a few clicks. The biggest wins from using any Git-based CMS, though, are the dramatically lowered costs and lightning-fast change deployments (with immediate updates to your code repo and CI/CD pipeline).

Aug 06 2019

Drupal is certainly not the only open-source CMS game in town, but when consulting with clients about the best solution for the full range of their needs, it tends to be my go-to.

Here’s why: 

  • Architecture
  • Scalability
  • Database Views
  • Flexibility
  • Security
  • Modules
  • Search
  • Migration  

Architecture

Drupal 8 is built on modern programming practices, and of course, the same will be true for the June 2020 release of Drupal 9. 

A Development – Test – Production environment is the default assumption with a Drupal 8 site. Too often, other CMS sites are managed as a single instance, which is a single point of failure. 

Also, Drupal comes with a built-in automated testing framework. Drupal 8 supports unit, integration, and system/functional testing using the PHPUnit framework. Drupal is built to be inherently extensible through configuration, so every data type can be templated without touching code to achieve fully customized, structured data collection.
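
To give a flavor of that testing support, here is a minimal sketch of a functional test (the module and test names are purely illustrative) built on Drupal core's BrowserTestBase, which installs a throwaway site and makes assertions against it:

<?php

namespace Drupal\Tests\my_module\Functional;

use Drupal\Tests\BrowserTestBase;

/**
 * Verifies that the front page responds for anonymous users.
 *
 * @group my_module
 */
class FrontPageTest extends BrowserTestBase {

  /**
   * Tests that the front page returns a 200 response.
   */
  public function testFrontPage() {
    $this->drupalGet('<front>');
    $this->assertSession()->statusCodeEquals(200);
  }

}

Once the core PHPUnit configuration (core/phpunit.xml.dist) and SIMPLETEST_BASE_URL are set up, this runs like any other PHPUnit test.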

Scalability

Drupal has proven to be scalable at the most extreme traffic levels. Weather.com, as just one example, is a Drupal site. Many of the Federal cabinet-level agencies using open source have built their web infrastructures on Drupal. Drupal has built-in functionality, such as a robust caching API and JS/CSS minification/aggregation to optimize page load speed.
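
As a small illustration of that caching API (the cache ID, tag, and helper function below are hypothetical), expensive results can be stored in and retrieved from a cache bin in a few lines:

use Drupal\Core\Cache\Cache;

// Try the default cache bin first; rebuild and store the value on a miss.
$cid = 'my_module:report_data';
if ($cache = \Drupal::cache()->get($cid)) {
  $data = $cache->data;
}
else {
  // Hypothetical expensive calculation.
  $data = my_module_build_report_data();
  // Cache permanently, invalidated whenever the list of nodes changes.
  \Drupal::cache()->set($cid, $data, Cache::PERMANENT, ['node_list']);
}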

Database Views

Drupal Views, which is in Drupal 8 core, is a powerful tool that allows you to quickly construct database views, with AJAX filtering and sorting included. This allows you to quickly construct and publish lists of any data in your Drupal site, without needing a developer to do it for you.

Flexibility

There are several components of flexibility. 

Drupal 8 was built as an API-first CMS, explicitly supporting the idea that the display layer for content stored in a Drupal CMS may not be Drupal. The API-first design of Drupal 8 also means that it is easier to integrate Drupal with third-party applications, as the API framework is already in place.
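
As a rough sketch of what that looks like from the consuming side (the site URL is hypothetical, and any language with an HTTP client would do), a third-party script can read article nodes through the JSON:API module that ships with Drupal core as of 8.7:

<?php

// Fetch the collection of article nodes exposed by JSON:API.
$response = file_get_contents('https://example.com/jsonapi/node/article');
$articles = json_decode($response, TRUE);

// Each resource object carries its field values under "attributes".
foreach ($articles['data'] as $article) {
  echo $article['attributes']['title'] . PHP_EOL;
}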

Customers vary widely in the ways in which they currently consume content. We assume that new ways will emerge for consuming content in the future, and even though we may not be in a position to predict right now what that will look like, Drupal is well poised to support what comes next.

Security

The only totally secure CMS is the one installed on a server that is sitting at the bottom of the Mariana Trench, with no connectivity to anything! However, Drupal has been tested in the most rigorous and security-conscious environments across government and industry. With a dedicated security team managing not just Drupal core but also many popular modules, and the openness inherent in open source, Drupal is a solid, secure platform for any website.

Modules / Extensions

Drupal modules are created and contributed to the community because they solve a problem. If you have the same or a similar problem to solve, you may be a simple module install away from solving it. Also, all Drupal modules are managed and accessible through a single repository at Drupal.org, providing a critical layer of vetting and security.

Search

Current versions of Drupal come with powerful, unparalleled out-of-the-box search functionality. Also, Solr integration is plug-and-play with Drupal, allowing you to extend search to index documents, search across multiple domains, or build faceted search results to improve the user experience.

Migration

Face it: migration is not fun with any CMS. However, the Drupal 8 Migration API (and the same will be true for Drupal 9) is highly capable of importing complex data from other systems. Simpler CMS platforms tend to offer simple migration for out-of-the-box content types (posts and pages), but not so much for complex data or custom content types.

Summary

Settling on the right CMS platform is often not an obvious choice. Weighing the relative benefits of every option can take time and calls for expert consultation. In instances where complexity increases and there’s a need to integrate the CMS with outside data sources, I’ll admit to a Drupal bias. This is based on my experience of Drupal as a CMS framework that was designed specifically for the challenges of a mobile-first, API-driven, integrated digital environment.

Looking for further exploration into the relative merits of your open source CMS options? Contact us today for an insightful, informative and fully transparent conversation.


 

Aug 06 2019

Long articles with many sections often discourage users from reading. They start reading and usually leave before getting halfway through.

To avoid this kind of user experience, I recommend grouping each section of your article into a collapsible tab. Readers will then be able to digest the text in smaller pieces.

The Collapse Text Drupal 8 module adds a filter plugin to your text formats. You will then be able to create collapsible text tabs with a tag system similar to HTML.

Read on to learn how to use this module!

Step #1. Install the Required Module

  • Open the terminal application of your computer
  • Go to the root of your Drupal installation (the composer.json file is located inside this directory)
  • Type the following command:

composer require drupal/collapse_text

  • Click Extend
  • Scroll down until you find the Collapse Text module and enable it
  • Click Install

Step #2. Create an Editor Role

  • Click People > Roles > Add role

  • Enter the Role Name Editor and click Save
  • Click the dropdown beside Editor and select Edit permissions

  • Check these permissions:
    • Comment
      • Edit own comments
      • Post comments
      • View comments
    • Contact
      • Use the site-wide contact form
    • Filter
      • Use the Full HTML text format
    • Node
      • Article: Create new content
      • Article: Delete own content
      • Article: Delete revisions
      • Article: Edit own content
      • Article: Revert revisions
      • Article: View revisions
      • Access the Content overview page
      • View published content
      • View own unpublished content
    • System
      • Use the administration pages and help
      • View the administration theme
    • Taxonomy
      • Tags: Create terms
      • Access the taxonomy vocabulary overview page
    • Toolbar
      • Use the toolbar
    • User
      • Cancel own user account
      • View user information
  • Click Save permissions

Step #3. Create a User with the New Editor Role

  • Click People > Add user
  • Create a user with the Editor role
  • Click Create new account

Step #4. Add the Plugin to the Text Format

  • Click Configuration > Text formats and editors

  • Click the Configure button for the Full HTML format

  • Enable the Collapsible text blocks filter and check that it comes after the other two filters specified in the description

The Full HTML format has these two filters disabled by default, so we are good to go.

  • Click Save configuration

Step #5. Create Content

  • Log out and log back in as the user with the Editor role

  • Click Content > Add content
  • Write a proper title for the node

The Tabs Structure

Each tab is declared between a pair of tags.

To show an opened tab (not collapsed at all) you put the text between the [collapse] and [/collapse] tags.

To show a collapsed tab you put the text between the [collapsed] and [/collapsed] tags.

The opening [collapse] and [collapsed] tags support two attributes:

  • title
  • class

If you don’t specify a title attribute, the module will take the first title available between the [collapse]/[collapsed] tags.

It is possible to nest collapsible tabs.
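
For example, the body of a node using these tags could look like this (the titles and text are purely illustrative):

[collapse title="Features"]
A paragraph describing the main features, visible because this tab starts out open.

[collapsed title="Technical details"]
This nested tab starts out collapsed and only expands when the reader clicks its title.
[/collapsed]
[/collapse]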

  • Finish editing the node form and click Save

The image is floated; that is a Bartik-specific style. Let’s apply some CSS.

Step #6. Basic Styling

Hint: I’m going to edit the original core theme files because I’m working on a sandbox environment. That is not recommended on a production server. As a matter of fact, it is not a good practice at all. If you want to improve your Drupal theming skills, take a look at this OSTraining class.

  • Open the file core/themes/bartik/css/components/field.css
  • Add this code to the end of the file:
@media all and (min-width: 560px) {
  .node .field--type-image {
    float: none;
  }
}
  • Open the file core/themes/bartik/css/components/node.css
  • Add this code to the end of the file:
/* Collapse Text Styles */
.open,
.shut {
  font-family: sans-serif;
}

.open {
  background: black;
  color: white;
}

.shut {
  background: #444;
  color: #CCC;
}

summary {
  background-color: red;
  color: transparent;
}

.nested1 {
  background-color: rgba(224, 110, 108, 0.25);
}
  • Save both files
  • Click Configuration > Performance > Clear all caches
  • Refresh the site

I hope you liked this tutorial. Thanks for reading!


About the author

Jorge has lived in Ecuador and Germany. Now he is back in his homeland, Colombia. He spends his time translating from English and German into Spanish. He enjoys playing with Drupal and other open source content management systems and technologies.
Aug 06 2019

Did you know that the term “One-Stop-Shop” is one of the most clichéd marketing taglines to use, according to Hubspot? Thankfully, I came across that article before I sat down to write this one. So, I’m NOT going to say that a Drupal distribution is a “one-stop-shop” for a quick and easy way to launch your website. Let’s put it this way instead – if you want to build a Drupal website and are eager to see it go live quickly, while also saving time on maintenance, Drupal distributions are meant for you.

What is a Drupal distribution?

A Drupal distribution is an all-inclusive package to get your website up and running quickly. This package consists of Drupal core (basic features), installation profiles, themes (for customized designs), libraries (assets like CSS or JavaScript) and modules specific to an industry. For example, if you run a publishing company, a distribution like Thunder can help you speed up your development process. Here you can find modules like Paragraphs, Media Entity, Entity Browser and features like the Thunder admin theme, scheduled publishing and much more – all in one place.

Why should you use a Drupal 8 distribution?

Let me give you a few reasons for that -

  • You don’t have to scramble your way through thousands of Drupal modules only to find the few that you really need.
  • Configuring Drupal core is easier too, as most of it comes preconfigured.
  • The features and modules included in a Drupal distribution are time-tested, optimized and proven for quality.
  • Maintenance of a Drupal distribution is simpler because updates for all modules and features can be performed in one shot!
  • Since you don’t have to reinvent the wheel every time, you save time. You save precious resource time, which also means you save money!
  • Now that you have saved some time, you can spend more time on customizing and personalizing these components to tailor-fit your business needs.

Top 15 Drupal Distributions (alphabetically sorted)

1. CiviCRM Starter Kit

The CiviCRM Starter Kit brings together the power of Drupal and the open-source CRM tool – CiviCRM. The popular CRM is used by more than 8000 organizations to centralize constituent communications. Along with core Drupal and CiviCRM, the distribution also packs in CiviCRM related modules like CiviCRM Cron, Webform CiviCRM, CiviCRM Clear All Caches, etc.

2. Commerce Kickstart

If you are looking to quickly get your e-commerce store up and running on Drupal Commerce framework, this one’s for you. Commerce Kickstart is a Drupal distribution made for both Drupal 7 and Drupal 8 and is maintained by Centarro (previously Commerce Guys). The Commerce Kickstart 2.x version comes loaded with beautiful themes, catalog, promotion engines, variety of payment tools, utility tools, shipping and fulfilment tools, analytics and reporting tools, marketing tools, search configuration, custom back office interface and much more.

Source - https://www.drupal.org/project/commerce_kickstart

3. Conference Organizing Distribution

Creating a website for events and conferences gets easier with this Drupal distribution. Conference Organizing Distribution (COD) was made for Drupal 7 but is being actively ported to Drupal 8. With COD, you can:

  • Create/manage tickets for event registrations
  • Create announcements for paper submissions
  • Moderate session selections
  • Provide an option for attendees to vote for their favourite sessions
  • Schedule sessions on any day and place
  • Easily manage sponsorships
  • Manage events easily with a powerful event management dashboard
  • Keep track of multiple events and sessions
  • Sell tickets with Drupal commerce

4. Contenta

This API-first Drupal distribution provides you with a framework that is API ready. It reduces the complexity and pain of using or trying decoupled/headless Drupal. Contenta also comes pre-installed with code and demo content along with front-end application examples. Even if you are new to Drupal, Contenta offers simple and quick ways to get the Drupal CMS part ready so you can then focus on the front-end frameworks you intend to use. If you’re looking for a complete solution for a headless Drupal project, ContentaJS is your best bet: ContentaJS integrates Contenta CMS with Node.js for a powerful, high-performing digital experience.

5. Drupal Government Distributions (federal, regional, local)

The aGov Drupal 8 distribution was developed to meet the guidelines of the Australian government. It allows government bodies to follow standards like WCAG 2.0 AA, the Australian Government Web Guide, AGLS metadata and the Digital Service Standard. However, the developers of aGov, PreviousNext, no longer develop or support this distribution as they are now focused on the GovCMS Drupal distribution. GovCMS was built on the foundation of aGov to build more secure, compliant and adaptable government websites. 
The deGov Drupal 8 distribution was built for German government websites and uses Acquia Lightning to offer more valuable features and functionalities. Some features common to all the Drupal government distributions:

  • Meeting all government standards
  • Workbench moderation
  • Citizen engagement portals
  • Responsive design
  • Example content
  • Intranet/Extranet

6. Acquia Lightning

True to its name, Acquia Lightning is a lightweight Drupal 8 distribution that you can use to develop and deploy a website at lightning speed (up to 30% less development time!). Developed by Acquia, Lightning aims to provide full flexibility and a great authoring experience to editorial teams and content authors. Built on Drupal 8, it offers powerful features like page layouts, drag and drop of assets using Panels, rich text, media, slideshows, Google Maps, content scheduling and much more. You can also streamline the workflow process of publishing, reviewing, approving and scheduling content.

7. Open Atrium

Open Atrium is a Drupal distribution built specifically for organizations to create a collaborative intranet solution for social collaboration and knowledge management. It offers features like a drag and drop layout, events management (Events), document management (Files), issue tracking, granular access controls, media management, a worktracker (to monitor tasks and maintain transparency), and much more. It also offers responsive layouts and themes.

8. Open Academy

Built on the Panopoly base distribution, this Drupal distribution is tailor-made for higher education websites and can be further extended and customized. It is an easy-to-use tool that does not need users to be technical. You can have a great website without any customizations too! The Open Academy distribution consists of a Drupal 7 installation profile and features meant for managing courses, departments, faculty, presentations, news, events, publications and more. The themes provided are optimized and mobile ready.

9. Open Social

Open Social is a Drupal 8 distribution that allows organizations to create intranets, online communities and other social portals easily. It is being used by hundreds of organizations including NGOs and government bodies to facilitate communication and connection with their volunteers, employees, members and customers. It also has features like multi-lingual support, private file system, social login, Geo-location maps, etc.

Source: https://www.drupal.org/project/social

10. Opigno LMS

The Opigno distribution is a Learning Management System built on Drupal. It is an easily scalable solution built not just for universities but also for organizations looking to create e-learning solutions. It allows you to manage training paths organized into courses, activities and modules. It also provides features like adaptive learning paths, management of skill acquisition, quizzes, blended learning (online modules + in-house sessions + virtual classrooms), award certificates, forums, live meetings and more.

Source: https://www.drupal.org/project/opigno_lms

11. Panopoly

This is a base Drupal distribution – which basically means it also acts like a foundation or a base framework for many other distros to be built upon. Panopoly Distribution is powered by the magic of the Panels module and its features like In-place editor, Panelizer, Fieldable Panel Panes, etc. The Panopoly package consists of contributed modules and libraries. It offers cross-browser and responsive layouts, drag and drop page customizations, a powerful easy-to-use Admin interface, etc. It can also be extended through many Panopoly apps. 

12. Presto!

Want a Drupal 8 starter kit that can meet all your content management needs and get you up and running, presto?! Count on Presto! What’s better, you can start using Presto right out of the box! It is power-packed with some great content features like intelligent content editing, a Promo bar (inline alerts for news/announcements), Divider (adding space), Carousel (interactive images), Blocks, etc. It also comes shipped with a responsive theme based on the Bootstrap framework that can be further customized to add more layouts. It also lets you easily integrate with Drupal Commerce to make selling on your website easier. With Presto, you can reduce development time by 20%!

13. Reservoir

Like Contenta, Reservoir too is an API-first Drupal distribution for decoupling Drupal. With this tool, you can build content repositories that are ready to be consumed by front-end applications. It is packed with all the web service APIs necessary to create decoupled websites. Reservoir was developed with the objective of making Drupal more accessible to developers, providing best practices for building decoupled applications, and providing a starting point for developers (with little or no Drupal experience) to build a content repository. Reservoir uses JSON API (a specification used for APIs in JSON) to interact with the back-end content. It also ships with API documentation, OpenAPI format export (compatible with a plethora of tools) and a huge set of libraries, SDKs and references.

14. Thunder

This Drupal 8 distribution is designed exclusively for professional publishing. Thunder was originally designed for and by Hubert Burda Media. The Drupal distribution is loaded with features meant for the publishing sector like the Paragraph module, drag and drop of content, Media Entity, Entity browser, Content lock, Video embed field, Facebook Instant articles, Google AMP, LiveBlog, Nexx.tv video player and much more. All of this along with Drupal core features and responsive themes. 

Source: https://www.drupal.org/project/thunder

15. Varbase

Are you lost in a mountain of Drupal modules and wondering which one to pick? Looking for a package that can jumpstart your web development process right away? Varbase is your go-to Drupal distribution then! Varbase provides you with all the necessities and essential modules, features and configurations to speed up your time to market. 

Aug 06 2019

At the start of every month, we gather all the Drupal blog posts from the previous month that we’ve enjoyed the most. Here’s an overview of our favorite posts from July related to Drupal - enjoy the read!

5 Reasons to Upgrade Your Site to Drupal 8, then 9

Our July selection begins with a blog post by Third & Grove titled “5 Reasons to Upgrade Your Site to Drupal 8, then 9”. Since the upgrade from Drupal 8 to Drupal 9 will be a smooth and simple one, the best thing to do is to make the move to D8 now and start benefiting from its superior capabilities, as the author of this blog post, Curtis Ogle, also emphasizes. 

In this post, Curtis thus presents his top 5 features of Drupal 8 that make a very strong case for the upgrade. These are: configuration management right in the core; RESTful APIs; Twig templates (his personal favorite); all the contrib modules from D7; and, lastly, the fact that D8 is future-proof, with all future upgrade paths considerably smoother than with previous versions.

Read more

The Top Four Benefits of Building a Site on Drupal 8 

Still very much in line with the previous post, this next one was written by Bounteous’ Chris Greatens and outlines the main benefits of choosing to build a website in Drupal 8. With an abundance of different CMS solutions, the ones that hold the obvious advantage are those that offer excellent authoring and administrative features as well as development capabilities. 

According to Chris, there are 4 main features that make Drupal stand out among other CMSs: flexibility, scalability, security and, exactly as in the previously mentioned blog post, future-proofing. All of Drupal’s additional capabilities only add to this, making it a viable platform for various use cases.

Read more

Prepare for Drupal 9: stop using drupal_set_message()!

Next up, we have a blog post by Gábor Hojtsy reporting on the most recent state of deprecated code in preparation for Drupal 9, which contains two important findings.

The first one is that as much as 29% of all analyzed instances of deprecated API uses can be attributed to drupal_set_message() - so, basically, no longer using this API means you’ll already be 29% on your way towards Drupal 9 readiness.

Gábor’s second finding is that 76% of deprecated API use (47% from other APIs besides drupal_set_message()’s 29%) can in fact already be resolved now, 10 months before the release of Drupal 9. This gives project maintainers and contributors plenty of time to work towards D9 compatibility. 

Read more

5 Reasons to Attend and Sponsor Open Source Events

A really great post from July that had us recall the awesome Drupal community is “5 Reasons to Attend and Sponsor Open Source Events”, written by Promet Source’s Chris O’Donnell. He answers the question “Is it worth it to keep sponsoring DrupalCamps and other events?” with a hard “Yes” and five (well, six, actually) supporting reasons.

These reasons are: it’s good for business; you (as a company) owe it to the community; you’re able to find new talented developers at these events; you learn a lot; there are various fun activities; and, the sixth bonus reason, you meet many amazing Drupalists and forge new friendships. This last reason alone is actually enough to justify going to at least one or two ‘Camps a year.

Read more

Drupal + Javascript: Exploring the Possibilities

Another post from July that we enjoyed is Emanuel London’s (Hook 42) introduction to exploring the possibilities of Drupal in combination with JavaScript. Excited as he was about the plethora of emerging JavaScript frameworks and the flexibility they offer, Emanuel was a bit disappointed by the fact that the Drupal community hasn’t kept up to date with all these technologies, and thus decided to remedy this in a series of blog posts. 

Future posts in the series will explore some of the tools for native mobile app development, e.g. React Native, as well as some Drupal tools, modules and distributions, such as ContentCMS. By the end of the series, we’ll hopefully be better prepared for Drupal-powered mobile app development and maybe even compete with WordPress in that area.

Read more

Eight reasons why Drupal should be every government’s CMS

It is a well-known fact in the community that Drupal is the go-to choice for government websites, thanks in large part to its security and multisite capabilities. Anne Stefanyk of Kanopi Studios further underlines this with six additional reasons why governments should choose Drupal as their preferred CMS.

Besides security and multisite/multilingual support, Drupal’s advantage also lies in: its mobility, accessibility, easy content management, ability to handle large amounts of traffic and data, flexibility, and affordability.

These are all aspects crucial to the experience of a government website. As such, Drupal truly is best suited for this role, as is also evidenced by the over 150 countries relying on Drupal to power their websites.

Read more

Getting Started with Layout Builder in Drupal 8

Nearing the end of July’s list, we have a post by Ivan Zugec of WebWash, essentially a tutorial on using Drupal’s recently stable Layout Builder. It contains all the basics you need to get started with this powerful new functionality. 

The first part of the post covers using the Layout Builder to customize content types, with Ivan working on the Article content type as an example. It details how to create a default layout for articles, as well as how to override it for a single article.

The second part then deals with using the module as a page builder, customizing the layout of an individual piece of content, from creating a custom block to embedding images. The post concludes with links to some additional modules and a FAQ section. 
 
Read more

An Open Letter to the Drupal Community

We round off July’s list with J.D. Flynn’s open letter to the Drupal community. This is a very interesting post which deals with a recent positive addition to drupal.org and how it can be exploited to “game the system” - namely, issue credits. 

The problem with the issue credit system is that it can be used to amass hundreds of credits with fixes for simple novice issues, which leaves fewer of these novice issues to fledgling developers trying to get their foot in the door, as well as gives unjustified credibility to the person or company in question and demoralizes other developers. 

J.D. presents four possible solutions to this: weighted credits; mandatory difficulty tagging of issues; credit limits; and a redistribution of credits. He finishes with a call to action to new developers to seek out help and to seasoned developers to offer mentorship to newcomers.

Read more

We hope you enjoyed our selection of Drupal blog posts from July and perhaps even found some thoughts that inspired ideas of your own. Don’t forget to visit our blog from time to time so you don’t miss any of our upcoming posts! 
 

Aug 06 2019

So far we have learned how to write basic Drupal migrations and use process plugins to transform data to meet the format expected by the destination. In the previous entry we learned one of many approaches to migrating images. In today’s example, we will change it a bit to introduce two new migration concepts: constants and pseudofields. Both can be used as data placeholders in the migration timeline. Along with other process plugins, they allow you to build dynamic values that can be used as part of the migrate process pipeline.

Syntax for constants and pseudofields in the Drupal process migration pipeline

Setting and using constants

In the Migrate API, a constant is an arbitrary value that can be used later in the process pipeline. They are set as direct children of the source section. You write a constants key whose value is a list of name-value pairs. Even though they are defined in the source section, they are independent of the particular source plugin in use. The following code snippet shows a generalization for setting and using constants:

source:
  constants:
    MY_STRING: 'http://understanddrupal.com'
    MY_INTEGER: 31
    MY_DECIMAL: 3.1415927
    MY_ARRAY:
      - 'dinarcon'
      - 'dinartecc'
  plugin: source_plugin_name
  source_plugin_config_1: source_config_value_1
  source_plugin_config_2: source_config_value_2
process:
  process_destination_1: constants/MY_INTEGER
  process_destination_2:
    plugin: concat
    source: constants/MY_ARRAY
    delimiter: ' '

You can set as many constants as you need. Although not required by the API, it is a common convention to write the constant names in all uppercase and using underscores (_) to separate words. The value can be set to anything you need to use later. In the example above, there are strings, integers, decimals, and arrays. To use a constant in the process section you type its name, just like any other column provided by the source plugin. Note that to use a constant you need to name the full hierarchy under the source section. That is, the word constants and the name itself separated by a slash (/) symbol. They can be used to copy their value directly to the destination or as part of any process plugin configuration.

Technical note: The word constants for storing the values in the source section is not special. You can use any word you want as long as it does not collide with another configuration key of your particular source plugin. A reason to use a different name is that your source actually contains a column named constants. In that case you could use defaults or something else. The one restriction is that whatever value you use, you have to use it in the process section to refer to any constant. For example:

source:
  defaults:
    MY_VALUE: 'http://understanddrupal.com'
  plugin: source_plugin_name
  source_plugin_config: source_config_value
process:
  process_destination: defaults/MY_VALUE

Setting and using pseudofields

Similar to constants, pseudofields store arbitrary values for use later in the process pipeline. There are some key differences. Pseudofields are set in the process section. The name is arbitrary as long as it does not conflict with a property name or field name of the destination. The value can be set to a verbatim copy from the source (a column or a constant) or it can use process plugins for data transformations. The following code snippet shows a generalization for setting and using pseudofields:

source:
  constants:
    MY_BASE_URL: 'http://understanddrupal.com'
  plugin: source_plugin_name
  source_plugin_config_1: source_config_value_1
  source_plugin_config_2: source_config_value_2
process:
  title: source_column_title
  my_pseudofield_1:
    plugin: concat
    source:
      - constants/MY_BASE_URL
      - source_column_relative_url
    delimiter: '/'
  my_pseudofield_2:
    plugin: urlencode
    source: '@my_pseudofield_1'
  field_link/uri: '@my_pseudofield_2'
  field_link/title: '@title'

In the above example, my_pseudofield_1 is set to the result of a concat process transformation that joins a constant and a column from the source section. The resulting value is later used as part of a urlencode process transformation. Note that to use the value from my_pseudofield_1 you have to enclose it in quotes (') and prepend an at sign (@) to the name. The new value obtained from the URL encode operation is stored in my_pseudofield_2. This last pseudofield is used to set the value of the URI subfield for field_link. The example could be simplified, for example, by using a single pseudofield and chaining process plugins. It is presented that way to demonstrate that a pseudofield can be used in direct assignments or as part of process plugin configuration values.

Technical note: If the name of the pseudofield can be arbitrary, how can you prevent name clashes with destination property names and field names? You might have to look at the source code for the entity and the configuration of the bundle. In the case of a node migration, look at the baseFieldDefinitions() method of the Node class for a list of property names. Be mindful of class inheritance and method overriding. For a list of fields and their machine names, look at the “Manage fields” section of the content type you are migrating into. The Field API prefixes any field created via the administration interface with the string field_. This reduces the likelihood of name clashes. Other than these two name restrictions, anything else can be used. In this case, the Migrate API will eventually perform an entity save operation which will discard the pseudofields.

Understanding Drupal Migrate API process pipeline

The migrate process pipeline is a mechanism by which the value of any destination property, field, or pseudofield that has been set can be used by anything defined later in the process section. The fact that using a pseudofield requires enclosing its name in quotes and prepending an at sign is actually a requirement of the process pipeline. Let’s see some examples using a node migration:

  • To use the title property of the node entity, you would write @title
  • To use the field_body field of the Basic page content type, you would write @field_body
  • To use the my_temp_value pseudofield, you would write @my_temp_value

In the process pipeline, these values can be used just like constants and columns from the source. The only restriction is that they need to be set before being used. For those familiar with the rewrite results feature of Views, it follows the same idea: you have access to everything defined previously. Anytime you enclose a name in quotes and prepend it with an at sign, you are telling the Migrate API to look for that element in the process section instead of the source section.

Migrating images using the image_import plugin

Let’s practice the concepts of constants, pseudofields, and the migrate process pipeline by modifying the example of the previous entry. The Migrate Files module provides another process plugin named image_import that allows you to directly set all the subfield values in the plugin configuration itself.

As in previous examples, we will create a new module and write a migration definition file to perform the migration. It is assumed that Drupal was installed using the standard installation profile. The code snippets will be compact to focus on particular elements of the migration. The full code is available at https://github.com/dinarcon/ud_migrations. The module name is UD Migration constants and pseudofields and its machine name is ud_migrations_constants_pseudofields. The id of the example migration is udm_constants_pseudofields. Refer to this article for instructions on how to enable the module and run the migration. Make sure to download and enable the Migrate Files module. Otherwise, you will get an error like: “In DiscoveryTrait.php line 53: The "file_import" plugin does not exist. Valid plugin IDs for Drupal\migrate\Plugin\MigratePluginManager are:...”. Let’s see part of the source definition:

source:
  constants:
    BASE_URL: 'https://agaric.coop'
    PHOTO_DESCRIPTION_PREFIX: 'Photo of'
  plugin: embedded_data
  data_rows:
    -
      unique_id: 1
      name: 'Michele Metts'
      photo_url: 'sites/default/files/2018-12/micky-cropped.jpg'
      photo_width: '587'
      photo_height: '657'

Only one record is presented to keep the snippet short, but more exist. In addition to having a unique identifier, each record includes a name and details about the image. Note that this time, the photo_url does not provide an absolute URL. Instead, it is a relative path from the domain hosting the images. In this example, the domain is https://agaric.coop so that value is stored in the BASE_URL constant which is later used to assemble a valid absolute URL to the image. Also, there is no photo description, but one can be created by concatenating some strings. The PHOTO_DESCRIPTION_PREFIX constant stores the prefix to add to the name to create a photo description. Now, let’s see the process definition:

process:
  title: name
  psf_image_url:
    plugin: concat
    source:
      - constants/BASE_URL
      - photo_url
    delimiter: '/'
  psf_image_description:
    plugin: concat
    source:
      - constants/PHOTO_DESCRIPTION_PREFIX
      - name
    delimiter: ' '
  field_image:
    plugin: image_import
    source: '@psf_image_url'
    reuse: TRUE
    alt: '@psf_image_description'
    title: '@title'
    width: photo_width
    height: photo_height

The title node property is set directly to the value of the name column from the source. Then, two pseudofields are created: psf_image_url stores a valid absolute URL to the image using the BASE_URL constant and the photo_url column from the source, and psf_image_description uses the PHOTO_DESCRIPTION_PREFIX constant and the name column from the source to store a description for the image.

For the field_image field, the image_import plugin is used. This time, the subfields are not set manually. Instead, they are assigned using plugin configuration keys. The absence of the id_only configuration allows for this. The URL to the image is set in the source key and uses the psf_image_url pseudofield. The alt key allows you to set the alternative text attribute for the image, and in this case the psf_image_description pseudofield is used. The title key sets the text of the subfield with the same name; in this case it is assigned the value of the title node property, which was set at the beginning of the process pipeline. Remember that not only pseudofields are available. Finally, the width and height configurations use the columns from the source to set the values of the corresponding subfields.

What did you learn in today’s blog post? Did you know you can define constants in your source as data placeholders for use in the process section? Were you aware that pseudofields can be created in the process section to store intermediary data for process definitions that come next? Have you ever wondered what is the migration process pipeline and how it works? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 05 2019

When deploying changes to a Drupal environment, you should be running database updates (e.g. drush updb, or through update.php) first, and only then import new configuration. The reason for this is that update hooks may want to update configuration, which will fail if that configuration is already structured in the new format (you can't do without updates either; update hooks don't just deal with configuration, but may also need to change the database schema and do associated data migrations). So, there really is no discussion; updates first, config import second. But sometimes, you need some configuration to be available before executing an update hook.

For example, you may want to configure a pathauto pattern, and then generate all aliases for the affected content. Or, you need to do some content restructuring, for which you need to add a new field, and then migrate data from an old field into the new field (bonus tip: you should be using post update hooks for such changes). So, that's a catch-22, right?

Well, no. The answer is actually pretty simple, at least in principle: make sure you import that particular configuration you need within your update hook. For importing some configuration from your configuration sync directory, you can add this function to your module's .install file:

use Drupal\Core\Config\FileStorage;

/**
 * Synchronize a configuration entity.
 * 
 * Don't use this to create a new field, use 
 * my_custom_module_create_field_from_sync().
 *
 * @param string $id
 *   The config ID.
 *
 * @see https://blt.readthedocs.io/en/9.x/readme/configuration-management/#using-update-hooks-to-importing-individual-config-files
 */
function my_custom_module_read_config_from_sync($id) {
  // Statically cache storage objects.
  static $fileStorage, $activeStorage;

  if (empty($fileStorage)) {
    global $config_directories;
    $fileStorage = new FileStorage($config_directories[CONFIG_SYNC_DIRECTORY]);
  }
  if (empty($activeStorage)) {
    $activeStorage = \Drupal::service('config.storage');
  }

  $config_data = $fileStorage->read($id);
  $activeStorage->write($id, $config_data);
}

Use it like this:

my_custom_module_read_config_from_sync('pathauto.pattern.landing_page_url_alias');
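
Putting that together, a minimal sketch of an update hook in the same .install file (the hook number and configuration name are illustrative) could import a pathauto pattern so that code running later in the update can rely on it:

/**
 * Import the landing page alias pattern before generating aliases.
 */
function my_custom_module_update_8001() {
  // Pull just this piece of configuration in from the sync directory.
  my_custom_module_read_config_from_sync('pathauto.pattern.landing_page_url_alias');
  // Bulk alias generation could then follow here or in a later hook.
}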

As you might have seen in the docblock above that function, it is not actually suitable for creating fields. This is because just importing the configuration will not create the field storage in the database. When you need to create a field, use the following code:

/**
 * Creates a field from configuration in the sync directory.
 *
 * For fields, the method used in my_custom_module_read_config_from_sync() does
 * not work properly.
 *
 * @param string $entityTypeId
 *   The ID of the entity type the field should be created for.
 * @param string[] $bundles
 *   An array of IDs of the bundles the field should be added to.
 * @param string $field
 *   The name of the field to add.
 *
 * @throws \Drupal\Core\Entity\EntityStorageException
 */
function my_custom_module_create_field_from_sync($entityTypeId, array $bundles, $field) {
  // Statically cache storage objects.
  static $fileStorage;

  // Create the file storage to read from.
  if (empty($fileStorage)) {
    global $config_directories;
    $fileStorage = new FileStorage($config_directories[CONFIG_SYNC_DIRECTORY]);
  }

  /** @var \Drupal\Core\Entity\EntityStorageInterface $fieldStorage */
  $fieldStorage = \Drupal::service('entity_type.manager')
    ->getStorage('field_storage_config');

  // If the field storage does not yet exist, create it first.
  if (empty($fieldStorage->load("$entityTypeId.$field"))) {
    $fieldStorage->create($fileStorage->read("field.storage.$entityTypeId.$field"))
      ->save();
  }

  /** @var \Drupal\Core\Entity\EntityStorageInterface $fieldConfigStorage */
  $fieldConfigStorage = \Drupal::service('entity_type.manager')
    ->getStorage('field_config');

  // Create the field instances.
  foreach ($bundles as $bundleId) {
    $config = $fieldConfigStorage->load("$entityTypeId.$bundleId.$field");
    if (empty($config)) {
      $fieldConfigStorage->create($fileStorage->read("field.field.$entityTypeId.$bundleId.$field"))
        ->save();
    }
  }
}

And, once again, a usage example:

my_custom_module_create_field_from_sync('node', ['basic_page', 'article'], 'field_category');

The function will check whether the field already exists, so it is safe to run again, or to run it for a field that already exists on another bundle of the same entity type.

Note that when using post update hooks, it is important to create a single hook implementation that applies all required actions for what should be considered a single change, because there are no guarantees about the order in which post update hooks run. Such a single change could, for example, consist of the following steps (see the sketch after this list):

  1. Create a new field.
  2. Migrate data from the old field to the new field.
  3. Remove the old field.
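
A minimal sketch of such a post update hook (the field and module names are illustrative, and a real implementation would batch the node updates using the $sandbox parameter) could look like this:

/**
 * Create the category field and migrate data from the old field into it.
 */
function my_custom_module_post_update_migrate_category_field() {
  // Post update hooks live in my_custom_module.post_update.php, so make sure
  // the helpers defined in the .install file are loaded.
  module_load_include('install', 'my_custom_module');

  // 1. Create the new field from the configuration in the sync directory.
  my_custom_module_create_field_from_sync('node', ['basic_page', 'article'], 'field_category');

  // 2. Copy data from the old field to the new field.
  $nodes = \Drupal::entityTypeManager()->getStorage('node')->loadMultiple();
  foreach ($nodes as $node) {
    if ($node->hasField('field_old_category') && !$node->get('field_old_category')->isEmpty()) {
      $node->set('field_category', $node->get('field_old_category')->getValue());
      $node->save();
    }
  }

  // 3. Removing the old field and its storage would follow here.
}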

Hopefully, this helps someone get out of that catch-22. Whatever you do, don't run your config import before your database updates.

Aug 05 2019

I've been putting off learning how to build sites in Drupal 8, and migrating my existing Drupal 7 sites over to Drupal 8. Why? Drupal 8 uses a lot of new tools. I want to learn how to set up a Drupal 8 site in the "right" (optimal) way so that I don't incur technical debt for myself later on. That means I have a lot of tools to learn. That takes time, which I don't have a lot of. So I've procrastinated.

Now, two deadlines are creeping up on me. The first is the release of Drupal 9, planned for June 3, 2020. At that point, there's a bit longer before Drupal 7 goes unsupported (November 2021 is the current plan), but it's time to move.

The second is the requirement for Strong Customer Authentication (SCA) on European payments in the middle of this coming September. One of my D7 sites runs Ubercart and uses Ubercart Stripe, which won't support SCA. The only alternative to upgrading is to disable Stripe as a payment method. Actually, AndraeRay has taken maintainer status of Ubercart Stripe and nearly has an SCA-compliant 7.x-3.x branch ready. Many thanks!

So it's time to start. But how do I set up my development workflow? Having done a lot of reading, here's what I propose. I'd really value input from the Drupal community here; I'm bound to have missed something important, and I've one big question I can't answer (see below). Please pile in at the Comments.

My Proposed Approach

  • Set up a development server, separate from the server that live sites will be deployed on. This will have plenty of RAM, the same version of PHP as the deployment environment, and Composer installed.
  • I plan to use a private git repository to store each site's code and configuration.
  • On the development server, use Composer to install drupal-composer/drupal-project.
  • Use .gitignore to exclude all core and contributed modules and themes from the git repository.
  • I'll also move environment-specific settings (such as database credentials) to a settings.local.php file, which will also be excluded using .gitignore (see the include snippet after this list).
  • Add everything else to git.
  • As I work on the site, I'll commit, add and push changes to the git repository.
  • I'll also set up a sync directory using $config_directories['sync'], so that I can use Drupal console to export the site's configuration and include that in git as well.
  • On the production server, I can periodically pull everything from the git repository. When I do so, I'd use composer update (never composer install). This removes the need for the production server to have a large amount of RAM (since generating composer.lock is done on the development server). I'd then use Drupal console to import the synchronised configuration, and clear all caches.
  • For theming, on Drupal 7 I built my own responsive themes by subtheming Zen. Zen for Drupal 8 is in Alpha only; the last (alpha) release was 3 years ago and the last commit was 20 months ago. So it looks like the best approach is to subtheme Classy directly.
  • Previously, I've found Sass and Compass to be a great help with building a custom theme, so I'd plan to use them again. I'd include one simple .scss file that is excluded from git to override the base hue of the theme; this allows my development site to have a different colour scheme to the production one, to help me always remember which site I'm on.
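
For the settings.local.php item above, the standard approach is the conditional include that ships (commented out) in core's default.settings.php, enabled at the bottom of settings.php:

if (file_exists($app_root . '/' . $site_path . '/settings.local.php')) {
  include $app_root . '/' . $site_path . '/settings.local.php';
}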

All of the above can be modified to allow a staging copy of the site as well, if that becomes helpful.

Questions

I have one big question.

This workflow allows me to make configuration changes on the development site, and then push them up to the deployment site when ready.

It seems to me that, whilst configuration needs pushing from development to production, content needs pushing the other way. That is to say, I'll regularly want to make sure my development site contains the latest content from the production site. New and altered pages need pushing down, as do nodes that use the new content types I've developed in development.

Configuration: Development => Staging => Production
Content: Production => Staging => Development

How do you do this?

Specifically, I don't think I want to dump the entire database from production, and replace the development database contents with the sqldump. Doing that would also override configuration changes on the development site, and that's not the way this whole workflow is designed.

So is there a generally adopted method to use, to dump the site content from production and import it to development, but without making any changes to the configuration that is stored in the development database?

Over to you

So over to you, reader. I've learnt a lot, but I have a lot to learn still.

Can you help answer that question?

Have I missed anything really important? Would anything in the approach above be unwise, and if so what and why? Am I right that Classy is the best base theme, or is another preferable? (It would need to be actively developed, likely to stick around as Drupal moves through 8.8, 8.9, 9.x and so on, and not slow the site down significantly or make it harder to maintain by adding lots of preprocessing or complex div structures / css libraries). Am I using git and composer in the right way? Any other advice?

Thanks in advance!

Aug 05 2019

Creating a faceted search in Drupal involves quite a few configuration steps, which can be overwhelming to people new to Drupal.

The MixItUp Views Drupal 8 module allows you to create a simplified version of a faceted search based on the taxonomies of a content type. It also provides a nice animation that makes the user experience even better.

This could be, for example, a good alternative for small commerce sites or other types of online catalogs in their initial state.

Screenshot of an e-commerce catalogue with a filtered search of products

Let’s start!

Step #1. Install the required modules

  • Open the terminal application of your computer
  • Go to the root of your Drupal installation (the composer.json file is located inside this directory)
  • Type the following command:

composer require drupal/mixitup_views


  • Go to the vendor folder of your Drupal installation
  • Locate the /patrickkunka directory
  • Move its whole content to the /libraries directory (you will have to create a libraries directory if you do not have one already). A command-line sketch of this step appears right after this list.


  • Leave the empty /patrickkunka directory inside the /vendor folder
  • Make sure that mixitup.js and mixitup.min.js files are inside /libraries/mixitup/dist
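A rough command-line equivalent of the moves above, run from the directory that contains both the vendor and libraries folders (the exact paths are assumptions; adjust them if your docroot sits in a web/ subdirectory):

$ mkdir -p libraries
$ mv vendor/patrickkunka/* libraries/
$ ls libraries/mixitup/dist/mixitup.min.js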


  • Click Extend
  • Scroll down until you find the MixItUp Views module and check it
  • Click Install


Step #2. Create the Taxonomy Terms

For this example, it is necessary to create 3 vocabularies and their respective terms according to this structure.

  • Color
    • Black
    • Red
    • Blue
    • Grey
    • Lightblue
    • Orange
    • White
    • Yellow
  • Brand
    • Brand A
    • Brand B
  • Type
    • Long Sleeve
    • Short sleeve
  • Click Structure > Taxonomy > Add vocabulary


Once you have created the first vocabulary, you have to add terms to it.

  • Click Add term


  • Enter each one of the Color terms and click Save each time


  • Once you have added all Color terms, repeat the process for the other 2 vocabularies and their terms


Step #3. Add the Taxonomy Fields to the Article Content Type

  • Click Structure > Content types > Article > Add field


  • Select "Taxonomy term" from the dropdown and give this field a proper name. It makes sense to use the same vocabulary name.
  • Click Save and continue


  • Set the Allowed number of values to Unlimited (only for the Color taxonomy term)
  • Click Save field settings


Note: This is because a shirt can have more than one color, but it cannot have long and short sleeves at the same time.

  • For each Reference type, select the corresponding vocabulary and click Save settings


  • Repeat this process for the other 2 taxonomy fields

This is just an example. For a real product node, you would need other fields like a price field and a link to the cart.


  • Click the Manage form display tab
  • Locate the Color field and change the field widget to Check boxes/radio buttons
  • Click Save


  • Click the Manage display tab
  • Rearrange the taxonomy fields and hide their labels
  • Disable the Tags field by dragging it to the Disabled section
  • Click Save


Step #4. Create Content

  • Click Content > Add content > Article
  • Enter proper text and images. Make sure you select a value for each taxonomy field. This makes sense since we want to filter the results of the view according to the taxonomy terms (you could have made these fields required to avoid empty fields here)
  • Click Save


  • Repeat the process for each one of the nodes


Step # 5. Create the View

  • Click Structure > Views > Add view


  • Show Content of Type Article
  • Check Create a page
  • Under Page Display Settings select MixItUp of Fields (you can use Teasers as well)
  • Click Save and edit


  • Click the Add button in the Field section of the Views UI


  • In the search box type: "appears in: article" (without the double quotes)
  • Check the fields Image, Color, Brand, and Type
  • Click Add and configure fields


  • For each one of the taxonomy terms click Apply
  • Change the dimensions of the image field and link it to the Content
  • Click Apply


  • In the Format section, click the MixItup Settings


This is where you can configure settings, such as the animation of the elements of the view, the aggregation type, and other additional sorting and filtering options. You should take a look at each one of these configurations and read their description.

  • Leave the defaults and click Apply
  • Save the view

Go to the page and test the filtered search


Congratulations! You have just added another practical module to your site-builder knowledge base. Thanks for reading!


About the author

Jorge lived in Ecuador and Germany. Now he is back to his homeland Colombia. He spends his time translating from English and German to Spanish. He enjoys playing with Drupal and other Open Source Content Management Systems and technologies.
Aug 05 2019
Aug 05

I catch up with Brendan Blaine, a developer for the Drupal Association, to find out what it takes to run events.drupal.org, why the conferences run so smoothly, and always remember to use a coaster.

It's really exciting to be a part of the team that puts on DrupalCon.

Aug 05 2019
Aug 05

Using the cloud is about leveraging its agility, among other benefits. For a Drupal-powered website, the right service provider can impact how well the website performs and, in turn, the business revenue.

When a robust server infrastructure such as AWS backs the most advanced CMS, Drupal, it accelerates the website’s performance, security, and availability.

But why AWS and what benefits does it offer over others? Let’s deep dive to understand how it proves to be the best solution for hosting your Drupal websites.

Points To Consider For Hosting Drupal Websites

The following are the points to keep in mind while considering providers for hosting your pure or headless Drupal website.

Better Server Infrastructure: A Drupal-specialised cloud hosting provider should offer a server infrastructure that is specifically optimized for running Drupal websites the way they were designed to run.

Better Speed: It should help optimise the Drupal website to run faster and should have the ability to use caching tools such as memcache, varnish, etc.

Better Support: The provider should offer better hosting support with the right knowledge of a Drupal website.

Better Security and Compatibility: The hosting provider should be able to provide security notifications, server-wide security patches, and even pre-emptive server upgrades to handle nuances in upcoming Drupal versions.

Why not a traditional server method?

There are two ways of hosting a Drupal website via traditional server setups:

  • a shared hosting server, where multiple websites run on the same server
  • or a dedicated Virtual Private Server (VPS) per website.

However, there are disadvantages to this approach, which are:

  1. With a lot of non-redundant single-instance services running on the same server, there is a chance that if any component crashes, the entire site goes offline.
  2. Being non-scalable, this kind of server does not scale up or down automatically; it requires manual intervention to change the hardware configuration, and an unexpected traffic boost may bring the server down.
  3. The setup constantly runs at full power, irrespective of usage, wasting resources and money.

Hosting Drupal on AWS

Amazon Web Services (AWS) is a pioneer of cloud hosting industry providing hi-tech server infrastructure and is proved to be highly secure and reliable.
With serverless computing, developers can focus on their core product instead of worrying about managing and operating servers or runtimes, either in the cloud or on-premises. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning. It enables you to build modern applications with increased agility and lower total cost of ownership and time-to-market.

With serverless computing being the fastest-growing trend, at an annual growth rate of 75% and foreseen to be adopted at an even higher rate, let’s understand the significance of the AWS components in the Virtual Private Cloud (VPC). Each of these components shows why AWS is the right choice for hosting pure or headless websites.

Architecture diagram showcasing Drupal hosting on AWS

  •  Restrict connection: NAT Gateway

Network Address Translation (NAT) gateway enables instances in a private subnet to connect to the internet or other AWS services. The private instances in the private subnet are therefore not exposed via the Internet gateway; instead, all traffic is routed via the NAT gateway.

The gateway ensures that the site will always remain up and running. AWS takes over the responsibility of its maintenance.

  • Restrict access: Bastion Host
A bastion host protects the system by restricting access to backend systems in protected or sensitive network segments. Its benefit is that it minimises the chances of a potential security attack.
  • Database: AWS Aurora

The Aurora database provides invaluable reliability and scalability, better performance and response times. With fast failover capabilities and storage durability, it minimizes technical obstacles.

  • Upload content: Amazon S3

With Amazon S3, you can store, retrieve, and protect any amount of data at any time in a scalable storage bucket. Recover lost data easily, pay only for the storage you actually use, protect data from unauthorized use, and easily upload and download your data with SSL encryption.

  • Memcached/Redis: AWS ElastiCache
ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store in the cloud.
  • Edge Caching: AWS CloudFront
CloudFront is an AWS content delivery network which provides a globally-distributed network of proxy servers which cache content locally to consumers, to improve access speed for downloading the content.
  • Web servers: Amazon EC2

Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud.

Amazon Route 53 effectively connects user requests to infrastructure running in AWS and can also be used to route users to infrastructure outside of AWS.

Benefits of Hosting Drupal Website on AWS

Let’s look at the advantages of AWS for hosting pure or headless Drupal websites.

High Performing Hosting Environment

The kind of performance you want from your server depends upon the type of Drupal website you are building. A simple website with a decent amount of traffic can work well on a limited shared host platform. However, for a fairly complex interactive Drupal site, a typical shared hosting solution might not be feasible. 

Instead, opt for AWS, which provides a server facility and you get billed as per your usage.

Improved Access To Server Environment 

A shared hosting environment restricts its users from gaining full control and limits their ability to change configurations for Apache or PHP, and there might be caps on bandwidth and file storage. These limitations get removed when you're willing to pay a higher premium for advanced-level access and hosting services.

This is not true with AWS, which gives you direct control over your server instances, with permissions to SSH or use its interface control panel to adjust settings.

Control over the infrastructure 

Infrastructure needs might not remain constant and are bound to change with time. Adding or removing hosting resources might prove difficult or not even possible, and you would end up paying for the unused resources.

However, opting for AWS lets you pay only for the services you use, and you can shut them off easily if you don’t need them anymore. On-demand virtual hosting and a wide variety of services and hardware types make AWS convenient for anyone and everyone.

No Long-term Commitments

If you are hosting a website to gauge the performance and responsiveness, you probably would not want to allocate a whole bunch of machines and resources for a testing project which might be over within a week or so.

The convenience of AWS on-demand instances means that you can spin up a new server in a matter of minutes, and shut it down (without any further financial cost) in just as much time.

Avoid Physical Hardware Maintenance

The advantage of using virtual resources is to avoid having to buy and maintain physical hardware. 

Going with virtually hosted servers with AWS helps you focus on your core competency - creating Drupal websites and frees you up from dealing with data center operations.

Why Choose Srijan?

Srijan’s team of AWS professionals can help you migrate your website on AWS cloud. With an expertise in enhancing Drupal-optimised hosting environment utilising AWS for a reliable enterprise-level hosting, we can help you implement various AWS capabilities as per your enterprises’ requirements. Drop us a line and let our experts explore how you can get the best of AWS.

Aug 05 2019
Aug 05

In the previous entry, we learned how to use process plugins to transform data between source and destination. Some Drupal fields have multiple components. For example, formatted text fields store the text to display and the text format to apply. Image fields store a reference to the file, the alternative and title texts, and the width and height. The Migrate API refers to a field’s components as subfields. Today we will learn how to migrate into them and know which subfields are available.

Examples of migrating into subfields

Getting the example code

Today’s example will consist of migrating data into the `Body` and `Image` fields of the `Article` content type that are available out of the box. This assumes that Drupal was installed using the `standard` installation profile. As in previous examples, we will create a new module and write a migration definition file to perform the migration. The code snippets will be compact to focus on particular elements of the migration. The full code snippet is available at https://github.com/dinarcon/ud_migrations The module name is `UD Migration Subfields` and its machine name is `ud_migrations_subfields`. The `id` of the example migration is `udm_subfields`. Refer to this article for instructions on how to enable the module and run the migration.
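To run it the same way as the other examples in this series, the commands would be along these lines (a sketch, assuming Migrate Run is providing the Drush commands):

$ drush pm:enable -y migrate migrate_run ud_migrations_subfields
$ drush migrate:import udm_subfields

The source section used by the example migration follows: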

source:
  plugin: embedded_data
  data_rows:
    -
      unique_id: 1
      name: 'Michele Metts'
      profile: 'freescholar on Drupal.org'
      photo_url: 'https://agaric.coop/sites/default/files/2018-12/micky-cropped.jpg'
      photo_description: 'Photo of Michele Metts'
      photo_width: '587'
      photo_height: '657'

Only one record is presented to keep the snippet short, but more exist. In addition to having a unique identifier, each record includes a name, a short profile, and details about the image.

Migrating formatted text

The `Body` field is of type `Text (formatted, long, with summary)`. This type of field has three components: the full text (value) to present, a summary text, and a text format. The Migrate API allows you to write to each component separately by defining subfield targets. The next code snippet shows how to do it:

process:
  field_text_with_summary/value: source_value
  field_text_with_summary/summary: source_summary
  field_text_with_summary/format: source_format

The syntax to migrate into subfields is the machine name of the field and the subfield name separated by a slash (/). Then, a colon (:), a space, and the value. You can set the value to a source column name for a verbatim copy or use any combination of process plugins. It is not required to migrate into all subfields. Each field determines what components are required, so it is possible that not all subfields are set. In this example, only the value and text format will be set.

process:
  body/value: profile
  body/format:
    plugin: default_value
    default_value: restricted_html

The `value` subfield is set to the `profile` source column. As you can see in the first snippet, it contains HTML markup. An `a` tag to be precise. Because we want the tag to be rendered as a link, a text format that allows such tag needs to be specified. There is no information about text formats in the source, but Drupal comes with a couple we can choose from. In this case, we use the `Restricted HTML` text format. Note that the `default_value` plugin is used and set to `restricted_html`. When setting text formats, it is necessary to use its machine name. You can find them in the configuration page for each text format. For `Restricted HTML` that is /admin/config/content/formats/manage/restricted_html.

Note: Text formats are a whole different subject that even has security implications. To keep the discussion on topic, we will only give some recommendations. When you need to migrate HTML markup, you need to know which tags appear in your source, which ones you want to allow in Drupal, and select a text format that accepts what you have whitelisted and filter out any dangerous tags like `script`. As a general rule, you should avoid setting the `format` subfield to use the `Full HTML` text format.

Migrating images

There are different approaches to migrating images. Today, we are going to use the Migrate Files module. It is important to note that Drupal treats images as files with extra properties and behavior. Any approach used to migrate files can be adapted to migrate images.

process:
  field_image/target_id:
    plugin: file_import
    source: photo_url
    reuse: TRUE
    id_only: TRUE
  field_image/alt: photo_description
  field_image/title: photo_description
  field_image/width: photo_width
  field_image/height: photo_height

When migrating any field, you have to use its machine name in the mapping section. For the `Image` field, the machine name is `field_image`. Knowing that, you set each of its subfields:

  • `target_id` stores an integer number which Drupal uses as a reference to the file.
  • `alt` stores a string that represents the alternative text. Always set one for better accessibility.
  • `title` stores a string that represents the title attribute.
  • `width` stores an integer number which represents the width in pixels.
  • `height` stores an integer number which represents the height in pixels.

For the `target_id`, the plugin `file_import` is used. This plugin requires a `source` configuration value with a URL to the file. In this case, the `photo_url` column from the source section is used. The `reuse` flag indicates that if a file with the same location and name exists, it should be used instead of downloading a new copy. When working on migrations, it is common to run them over and over until you get the expected results. Using the `reuse` flag will avoid creating multiple references or copies of the image file, depending on the plugin configuration. The `id_only` flag is set so that the plugin only returns that file identifier used by Drupal instead of an entity reference array. This is done because each subfield is being set manually. For the rest of the subfields (`alt`, `title`, `width`, and `height`) the value is a verbatim copy from the source.

Note: The Migrate Files module offers another plugin named `image_import`. That one allows you to set all the subfields as part of the plugin configuration. An example of its use will be shown in the next article. This example uses the `file_import` plugin to emphasize the configuration of the image subfields.

Which subfields are available?

Some fields have many subfields. Address fields, for example, have 13 subfields. How can you know which ones are available? The answer is found in the class that provides the field type. Once you find the class, look for the `schema` method. The subfields are contained in the `columns` array of the value returned by the `schema` method. Let’s see some examples:

  • The `Text (plain)` field is provided by the StringItem class.
  • The `Number (integer)` field is provided by the IntegerItem class.
  • The `Text (formatted, long, with summary)` is provided by the TextWithSummaryItem class.
  • The `Image` field is provided by the ImageItem class.

The `schema` method defines the database columns used by the field to store its data. When migrating into subfields, you are actually migrating into those particular database columns. Any restriction set by the database schema needs to be respected. That is why you do not use units when migrating width and height for images. The database only expects an integer number representing the corresponding values in pixels. Because of object-oriented practices, sometimes you need to look at the parent class to know all the subfields that are available.

Another option is to connect to the database and check the table structures. For example, the `Image` field stores its data in the `node__field_image` table. Among others, this table has five columns named after the field’s machine name and the subfield:

  • field_image_target_id
  • field_image_alt
  • field_image_title
  • field_image_width
  • field_image_height
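If the site runs on MySQL and Drush is available, a quick way to confirm those columns is to describe the table directly (an illustration, not a required step):

$ drush sql:query "DESCRIBE node__field_image"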

Looking at the source code or the database schema is arguably not straightforward. This information is included for reference to those who want to explore the Migrate API in more detail. You can look for migrations examples to see what subfields are available. I might even provide a list in a future blog post. ;-)

Tip: You can use Drupal Console for code introspection and analysis of database table structure. Also, many plugins are defined by classes whose names end with the string `Item`. You can use your IDE’s search feature to find the class, using the name of the field as a hint.

Default subfields

Every Drupal field has at least one subfield. For example, `Text (plain)` and `Number (integer)` define only the `value` subfield. The following code snippets are equivalent:

process:
  field_string/value: source_value_string
  field_integer/value: source_value_integer

process:
  field_string: source_value_string
  field_integer: source_value_integer

In examples from previous days, no subfield has been manually set, but Drupal knows what to do. As we have mentioned, the Migrate API offers syntactic sugar to write shorter migration definition files. This is another example. You can safely skip the default subfield and manually set the others as needed. For `File` and `Image` fields, the default subfield is `target_id`. How does the Migrate API know what subfield is the default? You need to check the code again.

The default subfield is determined by the return value of the `mainPropertyName` method of the class providing the field type. Again, object-oriented practices might require looking at the parent classes to find this method. In the case of the `Image` field, it is provided by ImageItem which extends FileItem which extends EntityReferenceItem. It is the latter that contains the `mainPropertyName` method, returning the string `target_id`.

What did you learn in today’s blog post? Were you aware of the concept of subfields? Did you ever wonder what are the possible destination targets (subfields) for each field type? Did you know that the Migrate API finds the default subfield for you? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 03 2019
Aug 03

In the previous entry, we wrote our first Drupal migration. In that example, we copied verbatim values from the source to the destination. More often than not, the data needs to be transformed in some way or another to match the format expected by the destination or to meet business requirements. Today we will learn more about process plugins and how they work as part of the Drupal migration pipeline.

Syntax for process plugin definition and chaining

Syntactic sugar

The Migrate API offers a lot of syntactic sugar to make it easier to write migration definition files. Field mappings in the process section are an example of this. Each of them requires a process plugin to be defined. If none is manually set, then the get plugin is assumed. The following two code snippets are equivalent in functionality.

process:
  title: creative_title

process:
  title:
    plugin: get
    source: creative_title

The get process plugin simply copies a value from the source to the destination without making any changes. Because this is a common operation, get is considered the default. There are many process plugins provided by Drupal core and contributed modules. Their configuration can be generalized as follows:

process:
  destination_field:
    plugin: plugin_name
    config_1: value_1
    config_2: value_2
    config_3: value_3

The process plugin is configured within an extra level of indentation under the destination field. The plugin key is required and determines which plugin to use. Then, a list of configuration options follows. Refer to the documentation of each plugin to know what options are available. Some configuration options will be required while others will be optional. For example, the concat plugin requires a source, but the delimiter is optional. An example of its use appears later in this entry.

Providing default values

Sometimes, the destination requires a property or field to be set, but that information is not present in the source. Imagine you are migrating nodes. As we have mentioned, it is recommended to write one migration file per content type. If you know in advance that for a particular migration you will always create nodes of type Basic page, then it would be redundant to have a column in the source with the same value for every row. The data might not be needed. Or it might not exist. In any case, the default_value plugin can be used to provide a value when the data is not available in the source.

source: ...
process:
  type:
    plugin: default_value
    default_value: page
destination:
  plugin: 'entity:node'

The above example sets the type property for all nodes in this migration to page, which is the machine name of the Basic page content type. Do not confuse the name of the plugin with the name of its configuration property as they happen to be the same: default_value. Also note that because a (content) type is manually set in the process section, the default_bundle key in the destination section is no longer required. You can see the latter being used in the example of writing your Drupal migration blog post.

Concatenating values

Consider the following migration request: you have a source listing people with first and last name in separate columns. Both are capitalized. The two values need to be put together (concatenated) and used as the title of nodes of type Basic page. The character casing needs to be changed so that only the first letter of each word is capitalized. If there is a need to display them in all caps, CSS can be used for presentation. For example: FELIX DELATTRE would be transformed to Felix Delattre.

Tip: Question business requirements when they might produce undesired results. For instance, if you were to implement this feature as requested DAMIEN MCKENNA would be transformed to Damien Mckenna. That is not the correct capitalization for the last name McKenna. If automatic transformation is not possible or feasible for all variations of the source data, take notes and perform manual updates after the initial migration. Evaluate as many use cases as possible and bring them to the client’s attention.

To implement this feature, let’s create a new module ud_migrations_process_intro, create a migrations folder, and write a migration definition file called udm_process_intro.yml inside it. Follow the instructions in this entry to find the proper location and folder structure or download the sample module from https://github.com/dinarcon/ud_migrations It is the one named UD Process Plugins Introduction and machine name udm_process_intro. For this example, we assume a Drupal installation using the standard installation profile which comes with the Basic Page content type. Let’s see how to handle the concatenation of first and last name.

id: udm_process_intro
label: 'UD Process Plugins Introduction'
source:
  plugin: embedded_data
  data_rows:
    -
      unique_id: 1
      first_name: 'FELIX'
      last_name: 'DELATTRE'
    -
      unique_id: 2
      first_name: 'BENJAMIN'
      last_name: 'MELANÇON'
    -
      unique_id: 3
      first_name: 'STEFAN'
      last_name: 'FREUDENBERG'
  ids:
    unique_id:
      type: integer
process:
  type:
    plugin: default_value
    default_value: page
  title:
    plugin: concat
    source:
      - first_name
      - last_name
    delimiter: ' '
destination:
  plugin: 'entity:node'

The concat plugin can be used to glue together an arbitrary number of strings. Its source property contains an array of all the values that you want put together. The delimiter is an optional parameter that defines a string to add between the elements as they are concatenated. If not set, there will be no separation between the elements in the concatenated result. This plugin has an important limitation. You cannot use string literals as part of what you want to concatenate. For example, joining the string Hello with the value of the first_name column. All the values to concatenate need to be columns in the source or fields already available in the process pipeline. We will talk about the latter in a future blog post.

To execute the above migration, you need to enable the ud_migrations_process_intro module. Assuming you have Migrate Run installed, open a terminal, switch directories to your Drupal docroot, and execute the following command: drush migrate:import udm_process_intro Refer to this entry if the migration fails. If it works, you will see three basic pages whose title contains the names of some of my Drupal mentors. #DrupalThanks

Chaining process plugins

Good progress so far, but the feature has not been fully implemented. You still need to change the capitalization so that only the first letter of each word in the resulting title is uppercase. Thankfully, the Migrate API allows chaining of process plugins. This works similarly to unix pipelines in that the output of one process plugin becomes the input of the next one in the chain. When the last plugin in the chain completes its transformation, the return value is assigned to the destination field. Let’s see this in action:

id: udm_process_intro
label: 'UD Process Plugins Introduction'
source: ...
process:
  type: ...
  title:
    -
      plugin: concat
      source:
        - first_name
        - last_name
      delimiter: ' '
    -
      plugin: callback
      callable: mb_strtolower
    -
      plugin: callback
      callable: ucwords
destination: ...

The callback process plugin passes a value to a PHP function and returns its result. The function to call is specified in the callable configuration option. Note that this plugin expects a source option containing a column from the source or a value from the process pipeline. That value is sent as the first argument to the function. Because we are using the callback plugin as part of a chain, the source is assumed to be the last output of the previous plugin. Hence, there is no need to define a source. So, we concatenate the columns, make them all lowercase, and then capitalize each word.

Relying on direct PHP function calls should be a last resort. Better alternatives include writing your own process plugins, which encapsulate your business logic separately from the migration definition. The callback plugin comes with its own limitation. For example, you cannot pass extra parameters to the callable function. It will receive the specified value as its first argument and nothing else. In the above example, we could combine the calls to mb_strtolower() and ucwords() into a single call to mb_convert_case($source, MB_CASE_TITLE) if passing extra parameters were allowed.

Tip: You should have a good understanding of your source and destination formats. In this example, one of the values we want to transform is MELANÇON. Because of the cedilla (ç) using strtolower() is not adequate in this case since it would leave that character uppercase (melanÇon). Multibyte string functions (mb_*) are required for proper transformation. ucwords() is not one of them and would present similar issues if the first letter of the words are special characters. Attention should be given to the character encoding of the tables in your destination database.

Technical note: mb_strtolower is a function provided by the mbstring PHP extension. It does not come enabled by default or you might not have it installed altogether. In those cases, the function would not be available when Drupal tries to call it. The following error is produced when trying to call a function that is not available: The "callable" must be a valid function or method. For Drupal and this particular function that error would never be triggered, even if the extension is missing. That is because Drupal core depends on some Symfony packages which in turn depend on the symfony/polyfill-mbstring package. The latter provides a polyfill for mb_* functions that has been leveraged since version 8.6.x of Drupal.

What did you learn in today’s blog post? Did you know that syntactic sugar allows you to write shorter plugin definitions? Were you aware of process plugin chaining to perform multiple transformations over the same data? Had you considered character encoding on the source and destination when planning your migrations? Are you making your best effort to avoid the callback process plugin? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 03 2019
Aug 03

Approaching 20 years old, the Drupal Community must prioritize recruiting the next generation of Drupal Professionals

Kaleem Clarkson

Ferris Wheel in Centennial Olympic Park in Atlanta, Georgia

Time flies when you are having fun. One of those sayings I remember my parents saying that turned out to be quite true. My first Drupal experience was nearly 10 years ago and within a blink of an eye, we have seen enormous organizations adopt and commit to Drupal such as Turner, the Weather Channel, The Grammys, and Georgia.gov.

Throughout the years, I have been very fortunate to meet a lot of Drupal community members in person but one thing I have noticed lately is that nearly everyone’s usernames can be anywhere between 10–15 years old. What does that mean? As my dad would say, it means we are getting O — L — D, old.

For any thriving community, family business, organization, or even your favorite band for that matter, all of these entities must think about succession planning. Succession what, what is succession planning?

Succession planning is a process for identifying and developing new leaders who can replace old leaders when they leave, retire or die. -Wikipedia

That's right, we need to start planning a process for identifying who can take over the leadership roles that continue to push Drupal forward. If we intend to keep Drupal a viable solution for large and small enterprises, then we should market ourselves to the talent pool as a viable career option in an effort to lure talent to our community.

There are many different ways to promote our community and develop new leaders. Mentorship. Mentorship helps ease the barrier to entry into our community by providing guidance on how our community operates. Our community does have some great efforts taking place in the form of mentoring, such as the Drupal Diversity & Inclusion (DDI) initiative, the core mentoring initiative and of course the code and mentoring sprints at DrupalCon and DrupalCamps. These efforts are awesome and should be recognized as part of a larger strategic initiative to recruit the next generation of Drupal professionals.

Companies spend billions of dollars a year in recruiting but as an open-source community, we don’t have billions so

… what else can we do to attract new Drupal career professionals?

This year, the Atlanta Drupal Users Group (ADUG) decided to develop the Drupal Career Summit, all in an effort to recruit more professionals into the Drupal community. Participants will explore career opportunities, career development, and how open source solutions are changing the way we buy, build, and use technology.

  • Learn about job opportunities and training.
  • Hear how local leaders progressed through their careers and the change open source creates their clients and business.
  • Connect one-on-one with professionals in the career you want and learn about their progression, opportunities, challenges, and wins.

On Saturday, September 14 from 1pm -4:30pm. Hilton Garden Inn Atlanta-Buckhead 3342 Peachtree Rd., NE | Atlanta, GA 30326 | LEARN MORE

Students and job seekers can attend for FREE! The Summit will allow you to meet with potential employers and industry leaders. We’ll begin the summit with a panel of marketers, developers, designers, and managers that have extensive experience in the tech industry and, more specifically, the Drupal community. You’ll get a chance to learn about career opportunities and connect with peers with similar interests.

We’re looking for companies that want to hire and educate. You can get involved with the summit by becoming a sponsor for DrupalCamp Atlanta. Sponsors of the event will have the opportunity to engage with potential candidates through sponsored discussion tables and branded booths. With your sponsorship, you’ll get a booth, a discussion table, and 2 passes! At your booth, you’ll get plenty of foot traffic and a fantastic chance to network with attendees.

If you can’t physically attend our first Career Summit, you can still donate to our fundraising goals. And if you are not in a position to donate, invite your employer, friends, and colleagues to participate in the Drupal Career Summit.

Aug 03 2019
Aug 03

Decoupled Days 2019 last month, the third edition of the conference, was fantastic. I had the privilege to speak and attend last year as well. The conference has quickly risen to be one of my favorite conferences of the year.

Not familiar with Decoupled Days? Spawned in the “Hallway Track” of DrupalCon between its founders, the conference originated as Decoupled Drupal Days in 2017. Last year saw the phasing out of the word “Drupal” as the conference became focused on decoupling in general, not just Drupal. That is one reason it has quickly become a favorite event. It is an engineering and design conference. The act of decoupling in a system requires specific system design and presents engineering challenges. The organizers identify it as:

The only conference on the future of CMS, headless CMS, and decoupled CMS.

It’s not just about Drupal. Moreover, it’s not about abandoning Drupal. It is about using the correct tooling for the exact problem, and about how our previous paradigms fit into this new architecture. One of the most exciting concepts discussed throughout the sessions was the shift of “where Drupal lives in the architecture stack.” Oh, and of course the exciting “battle” between GraphQL and RESTful APIs; more precisely for the event, the JSON API specification.

GraphQL vs JSON API

One identifiable topic at the conference was GraphQL. Facebook created the GraphQL specification, and the GraphQL Foundation now manages the specification. It has also grown in popularity thanks to venture-capital based companies like Apollo which have built server and client tooling around GraphQL. 

There were no JSON API specific sessions at the conference, but the JSON API maintainers were in attendance to listen and take in information rather than speak.

One of the most significant conversation pieces about GraphQL is that it isn’t a RESTful API interface. GraphQL is a query-like language that allows you to pass queries or mutations (non-read operations) as a GET or POST to the GraphQL endpoint. Each GraphQL server has its schema that maps to the backend data storage.

I enjoyed having many conversations about the different use cases for GraphQL and JSON API. I feel that GraphQL is a perfect fit to get up and running with a read-only API that requires a specially designed schema created by the consumer side. After that, when an API requires manipulation of data, or you don’t want to define the API schema, JSON API comes out on top. Again, it comes down to preference and proper tooling for the project.

After my Delivering Headless Commerce session, I was able to sit down with Mateu Aguiló Bosch, Wim Leers, Gabe Sullice, and others to discuss the Drupal JSON API implementation. One of the biggest takeaways from that conversation is the plan forward to provide cross-bundle entity collections. The JSON API Cross Bundles module is being developed to sandbox feature development and eventually move support into Drupal core once evaluated.

Decoupled architectures in practice

I made it to two sessions which covered decoupled architectures that placed Drupal in a different position of the architecture stack.

Dominic Laylock from Mentrix discussed how the Economist scaled their content platform using an event-driven decoupled architecture. Drupal had become the monolithic centerpiece of the platform. With growing multichannel requirements, the Economist moved to an event-driven, microservice-based architecture. Drupal moved to the background, and its content is pushed out, treating it as a content source like the other items in their stack. Data is taken from Drupal, processed by workers into a canonical data source based on Schema.org, and pushed into their content store. All consumers access content through a GraphQL endpoint that only accesses the content store. The session page: https://decoupleddays.com/session/content-systems-architecture-approaches-decoupled-world

Mateu Aguiló Bosch’s session focused on the importance of using a proxy in front of your Drupal API server. Mateu’s main argument was that your API proxy should be a Node.js server. In all reality, the proxy could be replaced by any service which handles concurrent requests and allows routing between different backend services. Many cloud providers offer API gateways, or you could write one in Golang or ReactPHP. The benefit of using Node.js? A Node.js server can execute server-side rendering for modern JavaScript frameworks. Server-side rendering improves your JavaScript application’s SEO and accessibility by rendering HTML on the server and not the client — like a standard server-side application. Here is a link to the session for when videos are available: https://decoupleddays.com/session/nodejs-proxy-between-drupal-and-your-consumers

Decoupled: 3 years of hosting and building

Michael Schmid of Amazee discussed their history of building and hosting decoupled architectures — which spawned the creation of the amazee.io hosting company. Amazee Labs is the agency leading the development of the GraphQL server spec in Drupal. It was interesting to learn that the development and training were leveraged through client work, much like Centarro with Drupal Commerce. Here is a link to the session for when videos are available: https://decoupleddays.com/session/3-years-decoupled-website-building-and-hosting-our-learnings

Delivering Headless Commerce

On Thursday, I gave my Delivering Headless Commerce session. Centarro has been developing API-First improvements for Drupal Commerce, alongside features such as Addressbook, Pricelists, and Invoices. The presentation covered the available API integration options in Drupal and how to build consumers for a headless Drupal Commerce site. Both GraphQL and JSON:API have their benefits and challenges when implementing headless eCommerce. The slides go into depth on using either technology for the various components that go into a headless Drupal Commerce build. I discuss improvements we have made and our current development status to improve the API capabilities of Drupal Commerce.

You can also catch my talk again at the Drupal Chicago meetup on August 7th: https://www.meetup.com/drupalchicago/events/262875406/

Aug 02 2019
Aug 02

Our lead community developer, Alona Oneill, has been sitting in on the latest Drupal Core Initiative meetings and putting together meeting recaps outlining key talking points from each discussion. This article breaks down highlights from meetings this past July. You'll find that the meetings, while providing updates on completed tasks, are also conversations looking for community member involvement. There are many moving pieces as things are getting ramped up for Drupal 9, so if you see something you think you can provide insights on, we encourage you to get involved.

Out of the box meeting  07/30/19

Issues were being worked on by Ofer Shaal, Keith Jay, and Mark Conroy.

Still needs work:

Needs to be reviewed:

Admin UI meeting! 07/31/19

Meetings are for core and contributed project developers as well as people who have integrations and services related to core. 

  • Usually happens every other Wednesday at 2:30pm UTC.
  • Is done over chat.
  • Happens in threads, which you can follow to be notified of new replies even if you don’t comment in the thread. You may also join the meeting later and participate asynchronously!
  • There are roughly 5-10 minutes between topics for those who are multitasking to follow along.

Beta & Stable issue blockers that are not components and need design:

  • Refer to the spreadsheet of Claro components to find other items that need to be worked on. These are found in the "other design issues" tab.
  • Also, there are several issues that are being marked as beta & stable blockers.
    • If you catch any issue that you think needs design work, please add it there so the design team is aware of it.
    • If you want to go through any of those and add screenshots or other info, we'll really appreciate that!

Editor role next steps:

Tables on mobile:

  • Table overflow on mobile currently needs to be reworked.
  • There is a design proposal currently under review.
    • Andrew Nevins (anevins) recommends looking into accessible-only solutions to this problem, as making a table responsive (apart from simply adding a horizontal scrollbar) can disrupt the semantics of the table for the people who need it the most.
Aug 02 2019
Aug 02

In the previous entry, we learned that the Migrate API is an implementation of an ETL framework. We also talked about the steps involved in writing and running migrations. Now, let’s write our first Drupal migration. We are going to start with a very basic example: creating nodes out of hardcoded data. For this, we assume a Drupal installation using the standard installation profile which comes with the Basic Page content type.

As we progress through the series, the migrations will become more complete and more complex. Ideally, only one concept will be introduced at a time. When that is not possible, we will explain how different parts work together. The focus of today's lesson is learning the structure of a migration definition file and how to run it.

Writing the migration definition file

The migration definition file needs to live in a module. So, let’s create a custom one named ud_migrations_first and set Drupal core’s migrate module as a dependency in the *.info.yml file:

type: module
name: UD First Migration
description: 'Example of basic Drupal migration. Learn more at https://understanddrupal.com/migrations.'
package: Understand Drupal
core: 8.x
dependencies:
  - drupal:migrate

Now, let’s create a folder called migrations and inside it a file called udm_first.yml. Note that the extension is yml, not yaml. The content of the file will be:

id: udm_first
label: 'UD First migration'
source:
  plugin: embedded_data
  data_rows:
    -
      unique_id: 1
      creative_title: 'The versatility of Drupal fields'
      engaging_content: 'Fields are Drupal''s atomic data storage mechanism...'
    -
      unique_id: 2
      creative_title: 'What is a view in Drupal? How do they work?'
      engaging_content: 'In Drupal, a view is a listing of information. It can be a list of nodes, users, comments, taxonomy terms, files, etc...'
  ids:
    unique_id:
      type: integer
process:
  title: creative_title
  body: engaging_content
destination:
  plugin: 'entity:node'
  default_bundle: page

The final folder structure will look like:

ud_migrations_first/
|-- ud_migrations_first.info.yml
|-- migrations/
|   |-- udm_first.yml
YAML is a key-value format with optional nesting of elements. It is very sensitive to white space and indentation. For example, it requires at least one space character after the colon symbol (:) that separates the key from the value. Also, note that each level in the hierarchy is indented by two spaces exactly. A common source of errors when writing migrations is improper spacing or indentation of the YAML files.

A quick glimpse at the file reveals the three major parts: source, process, and destination. Other keys provide extra information about the migration. There are more keys than the ones shown above. For example, it is possible to define dependencies among migrations. Another option is to tag migrations so they can be executed together. We are going to learn more about these options in future entries.

Let’s review each key-value pair in the file. For id, it is customary to set its value to match the filename containing the migration definition, but without the .yml extension. This key serves as an internal identifier that Drupal and the Migrate API use to execute and keep track of the migration. The id value should be alphanumeric characters, optionally using underscores to separate words. As for the label key, it is a human readable string used to name the migration in various interfaces.

In this example we are using the embedded_data source plugin. It allows you to define the data to migrate right inside the definition file. To configure it, you define a data_rows key whose value is an array of all the elements you want to migrate. Each element might contain an arbitrary number of key-value pairs representing “columns” of data to be imported.

A common use case for the embedded_data plugin is testing of the Migrate API itself. Another valid one is to create default content when the data is known in advance. I often present introduction to Drupal workshops. To save time, I use this plugin to create nodes which are later used in the views creation explanation. Check this repository for an example of this. Note that it uses a different directory structure to define the migrations. That will be explained in future blog posts.

For the destination we are using the entity:node plugin which allows you to create nodes of any content type. The default_bundle key indicates that all nodes to be created will be of type “Basic page”, by default. It is important to note that the value of the default_bundle key is the machine name of the content type. You can find it at /admin/structure/types/manage/page In general, the Migrate API uses machine names for the values. As we explore the system, we will point out when they are used and where to find the right ones.

In the process section you map columns from the source to node properties and fields. The keys are entity property names or the field machine names. In this case, we are setting values for the title of the node and its body field. You can find the field machine names in the content type configuration page: /admin/structure/types/manage/page/fields. Values can be copied directly from the source or transformed via process plugins. This example makes a verbatim copy of the values from the source to the destination. The column names in the source are not required to match the destination property or field name. In this example they are purposely different to make them easier to identify.

You can download the example code from https://github.com/dinarcon/ud_migrations The example above is actually in a submodule in that repository. The same repository will be used for many examples throughout series. Download the whole repository into the ./modules/custom directory of the Drupal installation and enable the “UD First Migration” module.

Running the migration

Let’s use Drush to run the migrations with the commands provided by Migrate Run. Open a terminal, switch directories to Drupal’s webroot, and execute the following commands.


$ drush pm:enable -y migrate migrate_run ud_migrations_first
$ drush migrate:status
$ drush migrate:import udm_first

The first command enables the core migrate module, the runner, and the custom module holding the migration definition file. The second command shows a list of all migrations available in the system. Only one should be listed with the migration ID udm_first. The third command executes the migration. If all goes well, you can visit the content overview page at /admin/content and see two basic pages created. Congratulations, you have successfully run your first Drupal migration!!!

Or maybe not? Drupal migrations can fail in many ways and sometimes the error messages are not very descriptive. In upcoming blog posts we will talk about recommended workflows and strategies for debugging migrations. For now, let’s mention a couple of things that could go wrong with this example. If after running the drush migrate:status command you do not see the udm_first migration, make sure that the ud_migrations_first module is enabled. If it is enabled, and you do not see it, rebuild the cache by running drush cache:rebuild.

If you see the migration, but you get a YAML parse error when running the migrate:import command, check your indentation. Copying and pasting from GitHub to your IDE/editor might change the spacing. An extraneous space can break the whole migration, so pay close attention. If the command reports that it created the nodes, but you get a fatal error when trying to view one, it is because the content type was not set properly. Remember that the machine name of the “Basic page” content type is page, not basic_page. This error cannot be fixed from the administration interface. What you have to do is roll back the migration by issuing the following command: drush migrate:rollback udm_first, then fix the default_bundle value, rebuild the cache, and import again.
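Put together, that recovery cycle looks roughly like this (a sketch; it assumes the default_bundle fix has already been made in udm_first.yml before re-importing):

$ drush migrate:rollback udm_first
$ drush cache:rebuild
$ drush migrate:import udm_first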

Note: Migrate Tools could be used for running the migration. This module depends on Migrate Plus. For now, let’s keep module dependencies to a minimum to focus on core Migrate functionality. Also, skipping them demonstrates that these modules, although quite useful, are not hard requirements for running migration projects. If you decide to use Migrate Tools make sure to uninstall Migrate Run. Both provide the same Drush commands and conflict with each other if the two are enabled.

What did you learn in today’s blog post? Did you know that Migrate Plus and Migrate Tools are not hard requirements for Drupal migrations projects? Did you know you can place your YAML files in a migrations directory? What advice would you give to someone writing their first migration? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your friends and colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 02 2019
Aug 02

A few days ago, on Wednesday, July 31st, Acquia held a webinar on digital experience titled “Think Bigger: Your Digital Experience is More Than Your Website”. 

The two presenters were Justin Emond, CEO of Third & Grove, and Tom Wentworth, SVP of Product Marketing at Acquia. 

They talked more generally about the experience economy and the recent important changes in digital experiences, and more specifically about digital experience platforms (DXP); namely, why an open DXP is the best solution and how Acquia’s services can serve as the foundation for an open DXP.

As with all Acquia webinars, a recording will be publicly available soon for anyone who wasn’t able to attend it or who wants to revisit certain points. In the meantime, we hope this recap will fill in enough gaps to make the wait easier or maybe even compel you to start rethinking your digital strategy today in preparation for the future.

Experience is everywhere

As Tom states, we are now in the “experience economy”, with 1:1 personalization a necessity for brands that plan to win in this economy. 

Today, everything is essentially an experience; we’re surrounded and bombarded by them. Competition among brands, too, works mostly on the basis of customer experience, which means brands need to constantly focus on delivering the best possible experience if they want to stand out. 

The physical world is full of amazing, memorable experiences (Disney, for example, has decades of them under its belt and is hence able to focus on all their minor details). But - what about the digital? What are our most memorable experiences in the digital sphere?

For both, it holds true that it takes a lifetime of great experiences to create an iconic brand. In the digital, however, you can undo a lot of positive experiences and even destroy a brand with a single bad experience, from which it is extremely difficult to come back. 

Why is it so hard to create great digital experiences?

The recent explosion of channels has made user journeys hard to predict: users interact with brands through various channels, some of which didn’t even exist a few years ago, and channels that haven’t yet been invented will become touchpoints with brands as well.

Current martech systems are siloed. They each focus on different parts of the customer journey and, by consequence, each have their own view of this journey. But, not only are the tools siloed - the very organization of the teams is siloed as well.

This kind of organization sometimes makes it impossible to deliver an integrated customer experience. And the problem becomes even worse at scale, with even greater technological and organizational limitations to delivering a great, 1:1 customer experience.

So, how can you tackle this and win out in the experience economy?

Well, the most important thing is breaking down the silos, on both the technological and the organizational level. In order to deliver an integrated digital experience, you need one common view of the customer that is consistent across all channels.

This brings about obvious advantages: the ability to come to market and take advantage of new channels faster, more consistent user experiences, reusable content, automated decision making, more governance, etc.

In the “old” internet, every brand needed a website - this is also the reason why the CMS was created, as a better way to manage these websites. But, today, a website alone isn’t enough; today, every brand needs a digital experience platform - an open DXP.

Planning your optimal DXP

But isn’t a DXP essentially the same thing as a CMS? It’s true that a DXP is a product, a platform, a solution - but, ultimately, it’s a strategy for how you’re going to interact with your customers to achieve your desired goals.

So, a DXP is a strategic perspective on how to approach this problem, whereas a CMS is a tactical solution. 

Web content management

The web CMS is still the basis for any DXP (“content is king”). The focus, then, should be on specific use cases from which you can work. Some of the most common of these are:

  • Multichannel delivery: this use case rests on the perception of content as a service, content in the sense of enabling people and making their lives easier. An API-first strategy is vital for this, as you need to be open with distributing and sharing content with other platforms.
  • Cross-channel strategy: a bit more complex than the previous point, here the focus is more on mapping the customer journey and figuring out how the customer moves through multiple touchpoints of interaction and what the entire integrated story then is.
  • Campaign management: the most important thing here is to be aware of how the CMS, personalization and marketing tools all interact. They need to work really well together in order to get the most out of the campaign.
  • Commerce: the recent emergence of cloud commerce platforms, such as BigCommerce and Shopify Plus, has made it possible to invest less into the backend (since it’s in the cloud) and allocate a bigger part of your budget to other areas, such as marketing. 
  • Customer data: what you do with data is more important than how you collect it or store it. The question here is: how are you going to extrapolate the insights and how can you best leverage them?
  • Work backwards: the future is uncertain and unpredictable. If you acknowledge that, you can work backwards from it, starting with the realization that your DXP will have to be adaptive to change and new tools; we are in an era of unprecedentedly fast digital innovation, after all.

Trends

1. If you want agile marketing, you need high developer velocity.

In software development, agile has completely replaced the waterfall approach. Now we’re starting to see this as a marketing trend as well: small releases, continuous iteration, better insights on the performance of a campaign and consequently the ability to adapt faster. But the catch is - successful agile marketing demands high developer velocity.

2. If you need cutting-edge commerce, you need to be disruption-ready.

With e-commerce becoming the most popular form of shopping, innovations in this sphere will be particularly important for brands, hence they will have to be especially adaptive in this area. Commerce cloud applications mentioned earlier are an example of these very recent breakthrough technologies.

3. If you need a decoupled or headless approach, don’t go with a technology that wants to do several different things at the same time. 

Very likely, such a tool won’t do any of those things as well as you need it to. Because of this, a microservices approach is becoming more and more popular, using, for example, a JavaScript framework on the front end in combination with one (or more) CMSs.

Open DXP is the only DXP that has it all

Because of all the considerations and trends just discussed, you need to embrace an open architecture for your DXP, one without the restrictions of a lock-in.

Unified content and data create a seamless 1:1 customer experience. Acquia is helping its clients bring together all the data obtained from their customers, connecting that data into a single, unified view of the customer, and getting content to the customer through whichever channels they use to interact with the brand.

Acquia Open Experience Platform

The Acquia Open Experience Platform consists of two parts: the marketing hub and the experience factory. The latter is built on the Drupal CMS and then extended with preconfigured features that are ideal for mid-market organizations. 

So, with all the advanced integrations such as Mautic or Acquia Lift, how can you achieve better business outcomes? In what way do they empower you? The answer is: they enable you to connect the right person at the right time with the right content on the right channel.

The “open” refers to more than just open source; it’s about being an open platform. In this context, this means utilizing Acquia’s open DXP alongside competitive products; whatever technology their clients need, Acquia wants to make all these different technologies work better together. 

In this sense, Acquia’s DXP is positioned as an open alternative to proprietary platforms such as Adobe Experience Manager or Salesforce’s Lightning Platform.

Some additional resources

Q&A session

Q: Can an organization get started with only Acquia Lightning and then add on other services later?
A: Absolutely; there are some foundational investments you really need, such as Lightning. Then you can add on Lift to extend your Drupal site with personalization, then Mautic for marketing, etc. Think of your DXP as a journey, not just as a touchpoint on that journey.


Q: Can Acquia Lift be integrated with other CMS platforms or does it only work with Drupal?
A: Yes, it does work with other platforms; it was designed as CMS-neutral.


Q: What’s the biggest mistake you’ve encountered when helping your customers move to a DXP?
A: There were two crucial mistakes, actually. Firstly - not accepting that the future is unknowable and that things change; and, secondly - a lack of discovery (the discovery checklist linked above is an excellent starting point).


Q: What does a digital experience look like in 2025?
A (Justin): It’s going to be similar, in the sense that there will still be a website, but also different in terms of the way people will interact. There will be an even greater focus on mobile experience, but voice is more limited in its use cases, so it likely won’t be as important as the hype predicts.
A (Tom): Because technology has never advanced at a faster pace, it’s hard to predict what the digital experience will look like even next year. New platforms are emerging every day and we’ll likely continue to see this; the winners will be the organizations that are able to successfully reach their customers with personalized content across all channels. The most important thing will be constant innovation; it will need to happen on a monthly basis. This is true both for the platforms themselves and for the organizational aspect.

Conclusion

We hope this recap has given you a better understanding of what an (open) DXP is and why a focus on the digital experience will continue to be more and more important thanks to technological advancements. 

A lot of brands already demand a multichannel and cross-channel experience for their customers, but the only integrated solutions are expensive and limited proprietary tools. 

Now, Acquia’s positioning itself as the only open provider of these services has the potential to completely change the name of the DXP game. We’re excited to see how their upcoming tools, e.g. Content Cloud, will act as further disruptors of the industry.

We conclude with the one major takeaway from all this: because the future is uncertain, you need to set a strategy that will allow you to adapt to any new technologies in order to stay in the game.
 

Aug 01 2019
Aug 01

Palantir defined personas for the various site audiences. The site needed to be able to surface relevant content for teens, parents, physicians, and people in crisis situations. We tested wireframes in-depth and performed chalkmark tests around the menu structure, both of which helped make sure pathways were simple and straightforward for all audiences.

For general site search, we implemented Solr-based Acquia Search, which provided more advanced capabilities than the standard Drupal search functionality. Palantir added recommended results, so if there is something our client wants to bump to the top of search results based on a specific keyword, they now have that ability. For example, if a user were to search for the term “cancer,” our client can now make sure that results for the oncology department get bumped to the top of the results list.

Aug 01 2019
Aug 01

Written by Sven Berg Ryen, Leader of the GDPR audit team at Ramsalt Lab

EU Cookie Compliance, one of the top 100 Drupal modules, offers a cookie consent banner with various features, making it more convenient for your site to become GDPR compliant. GDPR is the new data privacy regulation that came into effect on 25 May 2018, and it sets out to bolster the rights EU citizens have over their data held by companies. Ramsalt Lab is currently supporting the module’s development as part of our GDPR audit services.

According to GDPR, if you have any traffic from EU citizens on your site, you need to ask for consent before you, or third party scripts, process any of their personal data.

This is all very well: you can ask for consent first, and then only use the visitor’s private data if they consent, but under GDPR you’re required to do so only when the visitor is an EU or EEA citizen. That still leaves billions of residents outside of the EU where privacy laws may not require consent (one could argue whether this is good or bad) for storing cookies that identify individuals. Wouldn’t it be nice if you could comply with EU regulations and at the same time not pester those outside of the area where GDPR is enforced?

Luckily, EU Cookie Compliance has a feature that comes to the rescue. It can first check whether the user resides in a country that GDPR affects, and then display the banner only when applicable.

So the technical parts

To achieve this, you need an additional add-on: either the Smart IP or GeoIP module, or the GeoIP PHP library. It may be easiest to go the module route, since adding the PHP library may not be feasible on your hosted server or cloud solution.

Here we will use Smart IP, since that’s the only module that the Drupal 8 version of EU Cookie Compliance supports. There is now also a beta version of GeoIP available for Drupal 8, so at some point EU Cookie Compliance may support GeoIP in the 8.x module version as well. You can follow this issue for the progress.

The option to show the banner only to EU countries can be found near the bottom of the module settings page. A notification can be seen when the Smart IP module is not enabled.

EU Countries configuration section when no geolocation utility has been selected.

Enabling and setting up the Smart IP module

Install and enable the Smart IP module, using your preferred technique (such as composer/drush or direct download from drupal.org). In Drupal 8, you also have to enable a Smart IP data source module.

The Drupal 8 module gives you the following geolocation lookup options:

  • Free and licensed geolocation files from ip2location.com (signup required to get access to the free version). For the purposes of this module, you only need the DB1 database, with coverage of countries. Attribution is required when you use the free database.
  • Geolocation service from ipinfodb. A free API is available. You are however limited to 2 requests per second, and will be blacklisted if you exceed that limit. Also, the service limits you to lookup requests from one server IP only, which may not be ideal if you’re planning to test the service from your localhost. Note that the module utilizes the ip-city endpoint, and not the faster ip-country one. Sign up to get an API key.
  • MaxMind GeoIP2. A free database is available, updated on the first Tuesday of each month. No signup is required to use the free version, though attribution is required.
  • MaxMind GeoIP2 Precision API service offering lookup at the country level at $0.0001 per request. A free trial is available.

Some fallback options are available, and will be accessible if the headers exist in the web page query when you open the configuration page (which means they may not be available on your localhost, but could be available on your server).

  • Cloudflare headers
  • The mod_geoip module in Apache
  • Nginx headers

The Drupal 7 version of Smart IP offers all of the above and in addition some legacy lookup services.

We will be using the Smart IP MaxMind GeoIP2 binary database, because it has a free version of the database that will automatically be updated once a month on cron run. In other words, you need to enable the smart_ip_maxmind_geoip2_bin_db submodule (part of smart_ip).

Configuring Smart IP for GDPR

After having enabled the required modules, head over to /admin/config/people/smart_ip.

Select the “Use MaxMind GeoIP2 binary database” option to see the configuration for the service. Choose the Lite database version, the Country level edition and make sure that Yes is chosen under Automatic updates.

Further down, in the second pane, configure your settings to allow geolocation lookup for all desired user roles. Then, since I guess you care about privacy, either opt to not save the user’s geolocation on account creation, or enable the feature to prevent storing location details from GDPR countries.

Scroll all the way to the bottom and press “Save configuration”. If you get an error at this point, you need to set a private file path in settings.php.

Smart IP configuration section with recommended configuration highlighted.
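
In case you run into that, a minimal sketch of the settings.php addition looks like the following; the directory path is just an assumption and should point to a writable folder outside the webroot:

// settings.php: define a private file system path so modules can store
// non-public files (adjust the path to match your environment).
$settings['file_private_path'] = '../private';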

After having configured Smart IP, you need to head over to MaxMind’s website and download the GeoLite2 Country file in DB format. Then, expand the archive, grab just the file labeled “GeoLite2-Country.mmdb” and drop it into “[PATH_TO_PRIVATE_FOLDER]/smart_ip”. After you add this file manually once, the Smart IP module will take care of the automatic monthly updates.

Note: In Drupal 7, the GeoLite 2 country database is downloaded automatically when configuring the module, so there’s no need for a manual download.

Configuring EU Cookie Compliance

Next, head back to the settings for EU Cookie Compliance at admin/config/system/eu-cookie-compliance and enable the “Only display banner in EU countries” option. If your site uses any caching at all, you’ll want to enable the Javascript based option.

EU Cookie Compliance configuration section for limiting the display of the banner to only show up in EU countries.

After enabling this feature, you will need to rebuild Drupal cache, in order for Drupal to pick up the new path that is used to determine if the user is in the EU.

Testing

Note: If you’re on an EU Cookie Compliance version prior to 8.x-1.7, you need the patch from this EU Cookie Compliance issue in order for the debug feature in Smart IP to work. The Drupal 7 version of EU Cookie Compliance doesn’t have this problem (though you should always make sure that your version is up-to-date to get the latest bug fixes and features).

This feature involves a few moving parts, so to ensure everything has been set up correctly, there’s a handy debug feature in Smart IP that can be used. This way, you can check that you are indeed displaying the banner only to European countries where GDPR legislation applies. The easiest way to check if the settings are correct is to temporarily set up debugging in Smart IP for the Anonymous user and open an Incognito window. That ensures no existing cookies are giving false assurance that the feature is working.

Try using a value such as 151.101.2.217 (which at the time of this article is one of the IPs for the drupal.org server, situated in the US). Notice that no banner is shown when you debug Smart IP with this value.

Try 185.91.65.150 (the IP for the server where drupalnorge.no is hosted, which is in Norway) and the banner should appear.

Section of Smart IP configuration showing an IP number has been configured for debugging purposes.

After testing is completed, remember to disable debugging for the anonymous user by clearing the value on the Smart IP configuration page.

Conclusion

A little work is required to set up EU Cookie Compliance so that it displays the GDPR cookie banner only in countries and territories where the law requires one. As a result, you will hopefully have happier users.

If you need help setting up your GDPR cookie banner, or have questions about how your site can become GDPR compliant, you can always get in touch with us at Ramsalt Lab through our contact page.

Written by Sven Berg Ryen
Developer and Leader of the GDPR audit team at Ramsalt Lab


Aug 01 2019
Aug 01

For the sixth year in a row, Acquia has been recognized as a leader in the Gartner Magic Quadrant for Web Content Management. Acquia first entered the Web Content Management Magic Quadrant back in 2012 as a Visionary, and since then we've moved further than any other vendor to cement our leadership position.

As I've written before, analyst reports like the Gartner Magic Quadrant are important because they introduce organizations to Acquia and Drupal. As I've put it before: if you want to find a good coffee place, you use Yelp. If you want to find a nice hotel in New York, you use TripAdvisor. Similarly, if a CIO or CMO wants to spend $250,000 or more on enterprise software, they often consult an analyst firm like Gartner.

In 2012, Gartner didn't fully understand the benefits of Acquia being the only WCM company that embraced both Open Source and cloud. Just seven years later, our unique approach has forever changed web content management. This year, Acquia moved up again in both of the dimensions that Gartner uses to rank vendors: Completeness of Vision and Ability to Execute. You'll see in the Magic Quadrant graphic that Acquia has tied Sitecore for the first time:

Acquia recognized as a leader, next to Adobe, Sitecore and Episerver, in the 2019 Gartner Magic Quadrant for Web Content Management.

In mature markets like Web Content Management, there is almost always a single proprietary leader and a single Open Source leader. There is Oracle and MongoDB. Splunk and Elastic. VMWare and Docker. Gitlab and Github. That is why I believe that next year it will be Acquia and Adobe at the very top of the WCM Magic Quadrant. Sitecore and Episerver will continue to fight for third place among companies who prefer a Microsoft-centric approach. I was not surprised to see Sitecore move down this year as they work to overcome technical product debt and cloud transition, leading to strange decisions like acquiring a services company.

You can read the complete report on Acquia.com. Thank you to everyone who contributed to this result!

Aug 01 2019
Aug 01

Time to Vote graphic

Voting is now open for the 2019 At-Large Board positions for the Drupal Association! If you haven't yet, check out the candidates’ profiles. Get to know your candidates, and then go vote.

Cast Your Vote!

Voting is open to all individuals who had a Drupal.org account by the time nominations opened and who have logged in at least once in the past year. While a Drupal Association membership is not currently required, it is strongly encouraged.

To vote, you will rank candidates in order of your preference (1st, 2nd, 3rd, etc.). The results will be calculated using an "instant runoff" method. For an accessible explanation of how instant runoff vote tabulation works, see videos linked in this discussion.

Election voting is from 1 August, 2019 through 16 August, 2019. During this period, you can continue to review and comment on the candidate profiles.

Have questions? Please contact me: Rachel Lawson.

Aug 01 2019
Aug 01

The Migrate API is a very flexible and powerful system that allows you to collect data from different locations and store it in Drupal. It is in fact a full-blown extract, transform, and load (ETL) framework. For instance, it could produce CSV files. Its primary use, though, is to create Drupal content entities: nodes, users, files, comments, etc. The API is thoroughly documented and its maintainers are very active in the #migration Slack channel for those needing assistance. The use cases for the Migrate API are numerous and vary greatly. Today we are starting a blog post series that will cover different migrate concepts so that you can apply them to your particular project.

Understanding the ETL process

Extract, transform, and load (ETL) is a procedure where data is collected from multiple sources, processed according to business needs, and its result stored for later use. This paradigm is not specific to Drupal. Books and frameworks abound on the topic. Let’s try to understand the general idea by following a real-life analogy: baking bread. To make some bread you need to obtain various ingredients: wheat flour, salt, yeast, etc. (extracting). Then, you need to combine them in a process that involves mixing and baking (transforming). Finally, when the bread is ready you put it onto shelves for display in the bakery (loading). In Drupal, each step is performed by a Migrate plugin:

The extract step is provided by source plugins.
The transform step is provided by process plugins.
The load step is provided by destination plugins.

As it is the case with other systems, Drupal core offers some base functionality which can be extended by contributed modules or custom code. Out of the box, Drupal can connect to SQL databases including previous versions of Drupal. There are contributed modules to read from CSV files, XML documents, JSON and SOAP feeds, WordPress sites, LibreOffice Calc and Microsoft Office Excel files, Google Sheets, and much more.

The list of core process plugins is impressive. You can concatenate strings, explode or implode arrays, format dates, encode URLs, look up already migrated data, among other transform operations. Migrate Plus offers more process plugins for DOM manipulation, string replacement, transliteration, etc.
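
To make the transform step a bit more concrete, here is a minimal sketch of a custom process plugin. The module name, plugin ID, and trimming logic are made up for illustration; the base class and the transform() signature are the ones the Migrate API provides:

<?php

namespace Drupal\my_migration_module\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Trims whitespace from an incoming value.
 *
 * @MigrateProcessPlugin(
 *   id = "example_trim"
 * )
 */
class ExampleTrim extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Receive the value from the previous step of the pipeline, adjust it,
    // and return it so the next process plugin (or the destination) gets it.
    return is_string($value) ? trim($value) : $value;
  }

}

In a migration definition, such a plugin would be referenced by its ID in the process section, just like the core plugins mentioned above.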

Drupal core provides destination plugins for content and configuration entities. Most of the time targets are content entities like nodes, users, taxonomy terms, comments, files, etc. It is also possible to import configuration entities like field and content type definitions. This is often used when upgrading sites from Drupal 6 or 7 to Drupal 8. Via a combination of source, process, and destination plugins it is possible to write Commerce Product Variations, Paragraphs, and more.

Technical note: The Migrate API defines another plugin type: `id_map`. They are used to map source IDs to destination IDs. This allows the system to keep track of records that have been imported and roll them back if needed.

Drupal migrations: a two-step process

Performing a Drupal migration is a two-step process: writing the migration definitions and executing them. Migration definitions are written in YAML format. These files contain information about how to fetch data from the source, how to process the data, and how to store it in the destination. It is important to note that each migration file can only specify one source and one destination. That is, you cannot read from a CSV file and a JSON feed using the same migration definition file. Similarly, you cannot write to nodes and users from the same file. However, you can use as many process plugins as needed to convert your data from the format defined in the source to the format expected in the destination.

A typical migration project consists of several migration definition files. Although not required, it is recommended to write one migration file per entity bundle. If you are migrating nodes, that means writing one migration file per content type. The reason is that different content types will have different field configurations. It is easier to write and manage migrations when the destination is homogeneous. In this case, a single content type will have the same fields for all the elements to process in a particular migration.

Once all the migration definitions have been written, you need to execute the migrations. The most common way to do this is using the Migrate Tools module, which provides Drush commands and a user interface (UI) to run migrations. Note that the UI for running migrations only detects those that have been defined as configuration entities using the Migrate Plus module. This is a topic we will cover in the future. For now, we are going to stick to Drupal core’s mechanisms for defining migrations. Contributed modules like Migrate Scheduler, Migrate Manifest, and Migrate Run offer alternatives for executing migrations.

 

This blog post series is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 01 2019
Aug 01

We’re thrilled to be attending Drupal Camp Pannonia from the 1st to 3rd August!

Ratomir and Dejan from the CTI Serbia office will be attending Drupal Camp to discover the biggest breakthroughs of the Open Source platform in 2019.


The Grand Terrace in Palić, where Drupal Camp Pannonia is held.

 

Held in Palić, Serbia, near the Hungarian border, Drupal Camp Pannonia brings together some of Europe's top Drupal experts for a weekend of knowledge sharing and Open Source collaboration.

CTI Digital has long worked with a global roster of clients serving international customers. In early 2019, we made a permanent investment in Europe and opened a Serbia office. 

We selected Serbia as the Drupal community is growing quickly and filled with world-class talent. Supporting Drupal Camp Pannonia is a crucial part of our investment in the European Drupal community. If you’d like to chat with Dejan or Ratomir about CTI Digital or Drupal, please email [email protected] to set up a meeting.


Dejan (Left) and Ratomir (Right)

We're pleased to announce that Dejan (@dekisha007) is also conducting a session on Day 1, Friday 1st August at 15:45. He'll be delving into a new front-end practice we have developed at CTI Digital using Pattern Lab.

Here's what to expect:

  • Pattern Lab and Atomic Design in Drupal
  • Tailwind CSS
  • Advantages of Vue.js over React in Drupal
  • Wrapping all of the above up with CTI’s custom base theme

Be sure to catch Dejan’s talk on day 1, along with a host of brilliant sessions and workshops by checking out the online schedule.

Or follow @dcpannonia and @CTIDigitalUK for all the action on Twitter.

Aug 01 2019
Aug 01

What is a land acknowledgement?

Here’s the gist: in America (North and South), as well as Australia and other colonized nations, if you’re not an indigenous person, you live on stolen land. We start our meetings by, among other things, asking where people are from, and asking them to acknowledge the indigenous history of the land they live on. 

Why do we do them?

In order to remember, and recognize that the land we live on has been colonized. However, simply stating this isn’t enough - let’s do more!

What are we doing?

A blog post, where members of our community will each share a couple of paragraphs of research that they have done on the land where they live (and please share your sources).

Alanna Burke, Lenni Lenape Land | Pottstown, Pennsylvania, USA

The Lenni Lenape people lived in an area that covered Delaware, Eastern Pennsylvania, Southeastern New York, and New Jersey when colonizers came in the 1600s. While most Lenape now reside in Oklahoma [1], there is still a tribe in New Jersey - the Nanticoke Lenni-Lenapes. The Nanticokes are ancestors of the Lenni-Lenapes, and the tribes are interconnected. “Our compound tribal name, a practice not uncommon among modern tribes, honors our ancestors from the two dominant ancient tribes which comprise our tribal nation [2].”

Despite living in their homeland, they are not federally-recognized. Federal recognition gives them tribal sovereignty - the ability to govern themselves. (In my research, there appeared to be some dispute over whether they were even state-recognized or eligible for any federal monies [3]).

There are still Lenape in Pennsylvania, too, though I struggled to find an organized presence online. One local woman, Uhma Ruth Py, speaks regularly at events and works with local museums to help preserve her heritage [4]. The Lenape Nation of Pennsylvania is a non-profit dedicated to increasing awareness of Lenape culture and history, and it recently helped to move the Lenape artifacts from The University of Pennsylvania Museum of Archaeology and Anthropology to the Lenape Nation’s Cultural Center in Easton, PA [5].

[1] https://en.wikipedia.org/wiki/Lenape#Oklahoma

[2] https://nanticoke-lenape.info

[3] https://www.gao.gov/products/GAO-12-348

[4] https://www.readingeagle.com/life/article/lenni-lenape-woman-keeps-her-native-american-heritage-alive

[5] https://www.lenape-nation.org/2nd-project

Alex Laughnan, Ohlone Lands | San Francisco, CA, USA

As a resident of the Bay Area, specifically San Francisco, I wanted to better understand the indigenous people who were disrupted, forced to give up the original lands that they had lived on during the colonization of America, and have been marginalized historically.

San Francisco and most of the Bay Area is on the land of the Ohlone people. “Today’s Ohlone/Costanoan people are the descendants of speakers of six related Costanoan languages that were spoken in west central California, from San Francisco Bay to Monterey Bay, when Spanish missionaries and settlers arrived in the 1770s.” (Milliken) While many thousands of Ohlone people were present upon the initial settlement from Spanish explorers, they were either converted via the missions or exiled and pushed into southern California.

In 1769, Spanish explorers entered the present-day San Francisco peninsula. “By 1801 all of the native San Francisco Peninsula people had joined Mission Dolores. Over the next few years, speakers of other languages — Bay Miwoks from east of San Francisco Bay and Coast Miwoks, Patwins and Wappos from north of the bay — joined Mission Dolores, swelling its population to over 1,200 people. They intermarried with its San Francisco Bay Costanoan speakers and with one another. Although most of the northerners returned home when missions San Rafael and San Francisco Solano were opened in the northern part of the San Francisco Bay Area, some remained at Mission Dolores. When the process of closing the missions began in 1834, the 190 members of the Mission Dolores Indian community included only 37 descendants of the original San Francisco Peninsula local groups.” (Milliken) Unfortunately, due to disease and colonization, many members of the Ohlone/Costanoan tribes were wiped out. Historical policies and acts, as recent as the 1950s, from the United States federal government continued to disenfranchise the tribes through insufficient recognition and compensation for the land taken from them.

In present time, the term “Ohlone” is not unilaterally accepted in all indigenous communities on the San Francisco peninsula. Many, such as the Ramaytush, continue to push for better clarity around the different groups of people that existed in the area before Spanish settlement. “Today’s Ohlone/Costanoans are not a single community in either the social sense or the political sense. They do not gather as a united body for holidays or traditional ceremonies. They do not recognize a single Ohlone/Costanoan leadership or corporate organization.” (Milliken) Instead, the Ohlone people are often organized into groups each with a sense of community that their ancestors developed from shared experiences in the various Bay Area Spanish missions. “Most of the tribes continue to preserve and revitalize their cultural history through education, restoration of their native languages, and the practice of cultural storytelling.” (Cogswell)

References

Alex McCabe, Seminole lands | Orlando, FL, USA

The Seminole people are the result of unbowed resistance to hundreds of years of murder, war, and theft. Once part of the Mississippian culture that spanned much of the eastern portion of what is today known as the United States, invasions by Spain, England, and the United States killed or forcibly removed tribes from their ancestral homelands. The survivors fled south into Florida’s swamps, and mingled together with other tribes, eventually undergoing the process of ethnogenesis to become one tribe that the United States would come to know as the Seminoles [1][2]. It was not until the middle of the 19th century that the government of the United States ceased its efforts to remove the Seminole people from Florida [3].

At this point, Seminoles began to trade with white people. In 1907, the first Seminole reservation was created [3], but it wasn’t until 1957 that the Seminole tribe was officially recognized by the federal government, and even then it was at gunpoint - the Seminoles were forced to organize politically along the lines set forth in the 1934 Indian Reorganization Act [4].

Today, the Seminole tribe is very much alive, well organized, and financially successful. Tobacco and gaming bring in income [5], and in 2007, they closed on a deal that sold the international Hard Rock brand to the Seminole tribe [6]. They also operate museums, such as the Ah-Tah-Thi-Ki museum [7], in order to preserve Seminole culture.

[1] https://www.semtribe.com/STOF/history/indian-resistance-and-removal
[2] https://en.wikipedia.org/wiki/Seminole#History
[3] https://www.semtribe.com/STOF/history/timeline
[4] https://www.semtribe.com/STOF/history/survival-in-the-swamp
[5] https://www.semtribe.com/STOF/history/seminoles-today
[6] https://en.wikipedia.org/wiki/Hard_Rock_Cafe#Acquisition_by_the_Seminole...
[7] https://www.ahtahthiki.com/

Tara King, Pueblo Lands | Albuquerque, New Mexico

Albuquerque is on the land of several Pueblo peoples. There are 19 sovereign Pueblo nations in New Mexico, as well as three Apache tribes and the Navajo Nation.  

New Mexico was first colonized by the Spanish in 1540, when Coronado claimed the land on behalf of Spain.  Centuries of colonization, slavery, and genocide followed--everything from forced labor to "boarding" schools where children were separated from their families and cultural traditions destroyed.  Juan de Oñate led the Acoma Massacre of 1599 in response to rebellion at Acoma. During the Massacre, over 800 Acoma people died, many others were enslaved, and men over the age of 25 had one foot amputated. 

In 1680, the Pueblo Revolt succeeded against the Spanish, killing 400 Spanish and driving 2,000 settlers from the land. The revolt was coordinated by Popé (Ohkay Ohwingeh) from Taos Pueblo, though people from most Pueblos participated.  Pueblo control of New Mexico continued until at least 1692, and some Pueblos (such as Hopi Pueblo) never re-entered Spanish control. In 1706, Albuquerque was founded as a trading post between the Pueblo peoples and the Hispanos (descendants of the Spanish settlers).

In 1848, New Mexico became part of the United States after the Mexican-American War, though New Mexico was not granted statehood until 1912.  Between 1965-1975, the US Government built Cochiti Dam, an engineering project to control the Rio Grande river.  The dam not only altered the natural flow of the river, but it also destroyed sacred lands for the Cochiti people and flooded (then salinated) the historic fields that are central to Cochiti culture and survival. 

Thousands of Pueblo people live and thrive in New Mexico today.  To name just a few, Deb Haaland (Laguna) is one of the first two Native women elected to the US Congress.  Rebecca Roanhorse (Ohkay Ohwingeh) is the author of a very fun series of fantasy novels called The Sixth World.   If you visit Albuquerque, you can shop at Red Planet, possibly the world's only indigenous comic book store: https://redplanetbooksabq.com/ or visit the Indian Pueblo Cultural Center, a museum dedicated to allowing Pueblo people to tell their own story: https://www.indianpueblo.org. Albuquerque is also home to the Gathering of Nations, North America's largest pow-wow: https://www.gatheringofnations.com/.

I work toward dismantling colonialism in Albuquerque and New Mexico by learning about Native history and sharing it widely, by respecting, learning about, and advocating for Native land-use practices, and by supporting Native-owned businesses.

Lauren Maffeo | Bethesda, Maryland, USA

Paleo-Indians inhabited the Chesapeake Bay region as early as 9500 BC. In 1608, John Smith sailed up the Potomac River after co-founding the Jamestown settlement in Virginia. After sailing to the Little Falls of the Potomac (north of the present-day Chain Bridge), Smith and his peers met the first inhabitants of what's known today as Montgomery County, Maryland.

Members of the Piscataway and Nacotchtank tribes arrived in the region up to 10,000 years before Smith. Members of those tribes had crossed the Alleghenies before reaching the Potomac River Valley.

Due to an abundance of game, fruit, nuts, and other natural resources, members of both tribes traversed the area. And due to an abundance of fish, they were able to net thousands of shad at once.

Debris like broken arrowheads have been found on modern sites in Bethesda, including the National Institutes of Health. This suggests that the tribes built hunting camps throughout the modern Bethesda region.

2,000 years before Smith's arrival, the Piscataway and Nacotchtank tribes settled in small agricultural villages near the Potomac River. Choosing to live communally, they harvested the original succotash. (Native American for “broken corn kernels.”) They also buried their dead in the area.

I live in the area of modern-day Bethesda where Western Avenue and River Road meet. Native Americans and Europeans both used trails across this area in the 17th century. Although trade between the two groups was prosperous at first due to European demand for fox fur, the Piscataway and Nacotchtank tribes were eventually pushed back across the Alleghenies.

A few decades after John Smith arrived in Bethesda, Henry Fleet sailed up the Potomac River and stayed with the Piscataway tribe from 1623 to 1627. After returning to England and sharing the resources he had found, Fleet won funding for another expedition. At that time, members of tribes living on the land known as Bethesda were forced into reservations and died due to lack of immunity to communicable illness.

References

Want to contribute?

This post is intended as a living resource. If you would like to contribute information or have feedback, please let us know on the drupal.org issue so we can update and improve. 

If you’d like to learn more about our efforts to support the Drupal community by dismantling racism, please join our meetings on Drupal slack in #diversity-inclusion. 

Jul 31 2019
Jul 31

Pantheon is an excellent hosting service for both Drupal and WordPress sites. But to make their platform work and scale well they have built a number of limits into the platform, including process time limits and memory limits that are large enough for the vast majority of projects but that, from time to time, run you into trouble on large jobs.

For data loading and updates their official answer is typically to copy the database to another server, run your job there, and copy the database back onto their server. That’s fine if you can afford to freeze updates to your production site, set up a process to mirror changes into your temporary copy, or accept some other project overhead that can be limiting and challenging. But sometimes that’s not an option, or the data load takes too long for that to be practical on a regular basis.

I recently needed to do a very large import of records into a Drupal database and so started to play around with solutions that would allow me to ignore those time limits. We were looking at needing to do about 50 million data writes, and the running time was initially over a week to complete the job.

Since Drupal’s batch system was created to solve this exact problem, it seemed like a good place to start. For this solution you need a file you can load and parse in segments, like a CSV file, which you can read one line at a time. It does not have to represent the final state: you can use this to actually load data if the process is quick, or you can serialize each record into a table or a queue job to actually process later.

One quick note about the code samples, I wrote these based on the service-based approach outlined in my post about batch services and the batch service module I discussed there. It could be adapted to a more traditional batch job, but I like the clarity the wrapper provides for breaking this back down for discussion.

The general concept here is that we upload the file and then progressively process it from within a batch job. The code samples below provide two classes to achieve this. First is a form that provides a managed file field, which creates a file entity that can be reliably passed to the batch processor. From there the batch service takes over and, using a bit of basic PHP file handling, loads the file into a database table. If you need to do more than load the data into the database directly (say create complex entities or other tasks) you can set up a second phase to run through the values to do that heavier lifting.

To get us started the form includes this managed file:

   $form['file'] = [
     '#type' => 'managed_file',
     '#name' => 'data_file',
     '#title' => $this->t('Data file'),
     '#description' => $this->t('CSV format for this example.'),
     '#upload_location' => 'private://example_pantheon_loader_data/',
     '#upload_validators' => [
       'file_validate_extensions' => ['csv'],
     ],
   ];

The managed file form element automagically gives you a file entity, and the value in the form state is the ID of that entity. This file will be temporary and have no references once the process is complete, so depending on your site setup the file will eventually be purged. All of which means we can pass the values straight through to our batch processor:

$batch = $this->dataLoaderBatchService->generateBatchJob($form_state->getValues());

When the data file is small enough, a few thousand rows at most, you can load them all right away without the need for a batch job. But that runs into both time and memory concerns, and the whole point of this is to avoid those. With this approach we can ignore those limits, and we’re only limited by Pantheon’s upload file size. If the file size is too large you can upload the file via SFTP and read directly from there, so while this is an easy way to load the file you have other options.

As we set up the file for processing in the batch job, we really need the file path, not the ID. The main reason to use the managed file is that we can reliably get the file path on a Pantheon server without really needing to know anything about where they have things stashed. Since we’re about to use generic PHP functions for file processing we need to know that path reliably:

$fid = array_pop($data['file']);
$fileEntity = File::load($fid);
$ops = [];

if (empty($fileEntity)) {
  $this->logger->error('Unable to load file data for processing.');
  return [];
}
$filePath = $this->fileSystem->realpath($fileEntity->getFileUri());
$ops = ['processData' => [$filePath]];

Now we have a file, and since it’s a CSV we can load a few rows at a time, process them, and then start again.

Our batch processing function needs to track two things in addition to the file: the header values and the current file position. So in the first pass we initialize the position to zero and then load the first row as the header. For every pass after that we need to find the point where we left off. For this we use generic PHP file functions for loading and seeking the current location:

// Old-school file handling.
$path = array_pop($data);
$file = fopen($path, "r");
...
fseek($file, $filePos);

// Each pass we process 100 lines; if you have to do something complex
// you might want to reduce the number per run.
for ($i = 0; $i < 100; $i++) {
  $row = fgetcsv($file);
  if (!empty($row)) {
    $data = array_combine($header, $row);
    $rowData = [
      'col_one' => $data['field_name'],
      'data' => serialize($data),
      'timestamp' => time(),
    ];
    $row_id = $this->database->insert('example_pantheon_loader_tracker')
      ->fields($rowData)
      ->execute();

    // If you're setting up for a queue you include something like this.
    // $queue = $this->queueFactory->get('example_pantheon_loader_remap');
    // $queue->createItem($row_id);
  }
  else {
    break;
  }
}
$filePos = (float) ftell($file);
$context['finished'] = $filePos / filesize($path);

The example code just dumps this all into a database table. This can be useful as a raw data loader if you need to add a large data set to an existing site that’s used for reference data or something similar.  It can also be used as the base to create more complex objects. The example code includes comments about generating a queue worker that could then run over time on cron or as another batch job; the Queue UI module provides a simple interface to run those on a batch job.
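
If you go the queue route, a worker plugin picks those staged rows back up later, either on cron or via the Queue UI batch runner. Here is a rough sketch of what such a worker could look like; the plugin ID matches the queue name used in the commented-out code above, while the class name, the id column on the tracking table, and the entity-creation step are assumptions for illustration:

<?php

namespace Drupal\example_pantheon_loader\Plugin\QueueWorker;

use Drupal\Core\Database\Connection;
use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Drupal\Core\Queue\QueueWorkerBase;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Processes one staged row per queue item.
 *
 * @QueueWorker(
 *   id = "example_pantheon_loader_remap",
 *   title = @Translation("Example Pantheon loader remap"),
 *   cron = {"time" = 60}
 * )
 */
class ExampleLoaderRemap extends QueueWorkerBase implements ContainerFactoryPluginInterface {

  /**
   * The database connection.
   *
   * @var \Drupal\Core\Database\Connection
   */
  protected $database;

  public function __construct(array $configuration, $plugin_id, $plugin_definition, Connection $database) {
    parent::__construct($configuration, $plugin_id, $plugin_definition);
    $this->database = $database;
  }

  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    return new static($configuration, $plugin_id, $plugin_definition, $container->get('database'));
  }

  /**
   * {@inheritdoc}
   */
  public function processItem($data) {
    // $data is the row ID queued during the file load; pull the serialized
    // record back out of the staging table (assuming a serial "id" column).
    $record = $this->database->select('example_pantheon_loader_tracker', 't')
      ->fields('t')
      ->condition('id', $data)
      ->execute()
      ->fetchAssoc();
    if ($record) {
      $values = unserialize($record['data']);
      // Create or update the heavier entities from $values here.
    }
  }

}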

I’ve run this process for several hours at a stretch. Pantheon does have issues with system errors if left to run a batch job for extreme runs (I ran into problems on some runs after 6-8 hours of run time), so loading into the database first and then processing via a queue, or something else that’s easier to restart, has been more reliable.

View the code on Gist.

Jul 31 2019
Jul 31

It's that time of the month again! The time when we express our thanks to those Drupal teams who've generously (and altruistically) shared valuable free content with us, the rest of the community. Content with an impact on our own workflow and our Drupal development process. In this respect, as usual, we've handpicked 5 Drupal blog posts, from all those that we've bookmarked this month, that we've found most “enlightening”.

From:
 

  • Drupal 8 site-building best practices
  • to valuable built-in tools for image optimization in Drupal
  • to altruistically shared tips on how to enforce coding standards in Drupal
  • to best modules for implementing social login functionality into one's Drupal website
  • to... specific metrics that one should prioritize when evaluating his/her website's marketing efforts
     

… the July edition of our “OPTASY favorites” series is, again, packed with high quality, and (most of all) useful content.

But, let's get straight to our selection: here are the 5 blog posts that we've added to our list of resources this month.

Integrating social login functionality into Drupal websites must be one of the most common tasks that we deal with as a development team.

So, having a list of modules designed precisely for this is so reassuring:
 

  • we save precious time
  • we know that it's a Drupal solution (not a different, third-party software component) that we integrate into our projects
     

So, the InternetDevel have shared their list of Drupal 8 modules for social login just in time to... add to our bookmarks and resources & tools list.

Their blog post highlights 5 modules to evaluate first — to see if they fit a specific Drupal 8 web project's particularities — whenever we need to enable social login on the websites that we work on.

Our list of best practices for developing websites in Drupal 8 remains an open one. We keep on adding valuable suggestions on:
 

  • life-saving modules that we should be using, that we might have overlooked or underrated
  • new approaches to Drupal development that help us save valuable time and avoid embarrassing mistakes
     

That's why the ADCI Solutions team's blog post on the best practices to adopt and stick to when building a website in Drupal 8 couldn't have escaped our “radar”.

From:
 

  • module recommendations for SEO, security, maintenance and performance issues
  • to useful tips on how to address the editorial experience on the websites that we build
  • to what modules we should remove once we move a website from the dev environment to the live environment
     

.. we've found a whole bunch of reasons why this post deserves its place in our top 5 Drupal blog posts from July.
 

Another one of those articles that we've “bumped into” precisely when we were looking for the solution that it presents.

In this case, we were looking to set up, document and enforce a coding policy and procedures to ensure that our developers comply with the Drupal coding standards.

In their “revelatory” blog post, the Lullabot team puts the spotlight on a tool — GrumPHP — that checks your Drupal code before you commit it. One that detects any violation of the well-known Drupal coding standards.

Then, they go on anticipating and detailing each possible challenge that you might face when using this tool and they even share their solutions for overcoming them.

Overall, GrumPHP is such an interesting discovery that we can't wait to leverage it in our next Drupal project.
 

Disclaimer: this is not a Drupal blog post, yet the tips included there are highly applicable to websites running on Drupal, as well.

For us, it's been more of a recap of all those key metrics to use for evaluating our marketing strategies.

And it's convenient to have all 6 of them listed in one post that we can keep at hand and use whenever we need to check whether our marketing efforts do have an impact.

From:
 

  • time on site
  • to number of pages visited
  • to traffic sources
     

… plus 3 other crucial metrics for us to turn into the main criteria for assessing our marketing strategy's efficiency, the article is such a welcome “reminder”.

A handy resource that we've added to our list of 5 favorite Drupal blog posts from July.
 

Optimizing images is at the top of our list of performance-boosting techniques. Therefore, WishDesk's post on the Drupal 8 built-in tools designed precisely for this purpose came in more than handy.

And not only do they outline all the image optimization features that Drupal 8 provides us with, right out of the box, but they:
 

  • go into detail about the whole process and its particularities in this version of Drupal
  • stress the crucial importance of optimizing one's images for SEO and performance purposes
     

The END!

These are the 5 Drupal blog posts of the month which, in our opinion, shared the most useful and usable content.

Do you have your own list of favorites, too?

Image by Gerd Altmann from Pixabay

Jul 31 2019
Jul 31

Drupal 8 is built on PHP, but using new architecture paradigms that can be difficult to grasp for developers coming from a Drupal 7 background. The Typed Data API lies at the core of Drupal 8, and provides building blocks used throughout the Drupal 8 architecture. In this presentation, Jay Friendly, Morpht's Technical Director, dives into the Typed Data API, what it is, how it works, and why it is so awesome!
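
As a small taste of what the API looks like in practice, here is a hedged sketch using core's typed data manager; the data definition and the value are arbitrary examples:

use Drupal\Core\TypedData\DataDefinition;

// Describe a piece of data: a required string with a label.
$definition = DataDefinition::create('string')
  ->setLabel('Nickname')
  ->setRequired(TRUE);

// Wrap an actual value in a typed data object based on that definition.
$typed_data = \Drupal::typedDataManager()->create($definition, 'Jay');

// The same object exposes the value, its metadata, and validation.
$value = $typed_data->getValue();
$violations = $typed_data->validate();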

31 July 2019

Jul 30 2019
Jul 30

Testing integrations with external systems can sometimes prove tricky. Services like Acquia Lift & Content Hub need to make connections back to your server in order to pull content. Testing this requires that your environment be publicly accessible, which often precludes testing on your local development environment.

Enter ngrok

As mentioned in Acquia’s documentation, ngrok can be used to facilitate local development with Content Hub. Once you install ngrok on your development environment, you’ll be able to use the ngrok client to connect and create an instant, secure URL to your local development environment that will allow traffic to connect from the public internet. This can be used for integrations such as Content Hub for testing, or even for allowing someone remote to view in-progress work on your local environment from anywhere in the world without the need for a screen share. You can also send this URL to mobile devices like your phone or tablet and test your local development work easily on other devices.

After starting the client, you’ll be provided the public URL you can plug into your integration for testing. You’ll also see a console where you can observe incoming connections.

Resources

Jul 30 2019
Jul 30

I posted my analysis of top uses of deprecated code in Drupal contributed projects in May 2019 two months ago. There are some great news since then. First of all, Matt Glaman of Centarro implemented support for deprecation messages directly in reports, so we can analyse reports much better in terms of actionability. Second, Ryan Aslett at the Drupal Association implemented running all-contrib deprecation testing on drupalci.org (Drupal's official CI environment). While showing that data on projects themselves is in the works, I took the data to crunch some numbers again and the top deprecated API uses are largely the same as two months ago. However, we have full analysis of all deprecation messages and their fixability which gets us two powerful conclusions.

Stop using drupal_set_message() now!

If there is one thing you do for Drupal 9 compatibility, make it getting rid of drupal_set_message(). As the API documentation for drupal_set_message() explains, you should use the messenger service and invoke the addMessage() method on it.
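
A minimal before-and-after sketch (the message strings here are arbitrary examples):

// Deprecated: this function is removed in Drupal 9.
drupal_set_message(t('Settings saved.'));
drupal_set_message(t('Something went wrong.'), 'error');

// Replacement: the messenger service.
\Drupal::messenger()->addMessage(t('Settings saved.'));
\Drupal::messenger()->addError(t('Something went wrong.'));

// In classes, prefer injecting the 'messenger' service and calling
// $this->messenger->addMessage() rather than using the static \Drupal call.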

Of the 23,750 deprecated API use instances found in 7,561 projects, 29% were instances of drupal_set_message(). So statistically speaking, if you stop using this single function, you are already 29% of the way to Drupal 9 compatibility (likely more for most projects).

Dezső Biczó already built automated deprecation fixers covering drupal_set_message() and more, based on rector.

Figure visualising the data explained in the text.

76% of deprecated API use can already be resolved

On top of the 29% of drupal_set_message(), there is another 47% of various other API uses that can be fixed now. Those APIs were marked deprecated before Drupal 8.6.0 was released, so their replacements exist in all currently supported Drupal core versions; stopping the use of them would still keep your code compatible with all of them. In other words, as of today, drupal.org project maintainers can resolve three quarters of the outstanding code changes for Drupal 9 compatibility. Considering we are still ten months away from Drupal 9's planned release date, this is quite good news!

Upgrade Status is a nice visual tool to explicitly list all the errors in the projects you scan. It provides instructions on how to fix them and immediately fixable issues are highlighted.

Work with project maintainers

You are not a drupal.org project maintainer but want to help? Yay! Based on the above, it may be tempting to run to the issue queue and submit issues for fixing all the things. Good plan! One thing to keep in mind though is to work with the maintainers of projects so your help benefits the project most effectively. Drupal.org project owners may specify Drupal 9 plan information that should help you engage with them the best way (use the right meta issue, know of their timeline plans, and so on). Check the project page of the projects you are interested in contributing to, to make sure you contribute in the best way.

https://t.co/hf2ENvlZSo projects can now specify Drupal 9 porting information, so *you* can direct *your* contributors to provide the most valuable help on the way to Drupal 9, fund the process or just step back (for now). Edit your project to help your contributors help you! pic.twitter.com/l1OWwOllBK

— The Drop is Always Moving (@DropIsMoving) May 21, 2019

More to come

I think the data is super-interesting and I plan to do more with it, for example cross-referencing with project usage. Stay tuned for more information in the future. In the meantime all the source data is available for your mining as well.

Correction: An earlier version of this post said there were 43% additional fixable deprecations for a total of 72%. Thanks to Ryan Aslett for corrections, the correct numbers are 47% and 76% respectively. Images and text fixed.

Disclaimer: The data is based on the state of contributed projects on July 29, 2019 based on Drupal core's 8.8.x development branch on July 29, 2019. As contributed module maintainers commit fixes, the data will get better. As core introduces more deprecations (until Drupal 8.8.0-alpha1), the data could get worse. There may also be phpstan Drupal integration bugs or missing features, such as not finding uses of deprecated global constants yet. This is a snapshot as the tools and state of code on all sides evolve, the resulting data will be different.

Jul 30 2019
Jul 30

Today, user experience (UX) is not just about how a user feels when interacting with your website. In this world of rapidly growing interfaces and APIs, content plays a supreme role in offering your users exceptional UX. To keep up the pace, you need to adopt fast-moving front-end technologies like Angular JS, React JS, etc. that can deliver your content at application-like speed. Headless Drupal (or decoupled Drupal) is one such approach that is gaining much popularity because of its ability to deliver outstanding digital experiences. Bigwigs like Weather.com, The Tonight Show, Great Wolf Resorts, Warner Music Group and many more have taken the headless Drupal route, offering their customers interactive and unique front-end designs and fast-loading websites.

What is Headless Drupal?


To go headless or not is a rather tricky decision to make in this digital world. So what’s the whole buzz about going Headless? Simply put, in a headless Drupal architecture, the front-end (consumers of content) of the CMS is detached from the back-end (provider of content). 

Conventionally, Drupal websites are meant to multi-task, which means Drupal manages both the back-end content management and the front-end rendering of content. There is no doubt that Drupal CMS on its own can deliver a rich user experience to the end user, but when it comes down to instantaneous responses for a request and delivering content seamlessly across different interfaces, it does fall short. In a decoupled Drupal architecture, instead of Drupal's theme layer, a client-side framework like AngularJS, React or Backbone.js is used. A user request does not have to be processed by the server all the time, which can drastically improve the speed and UX of your Drupal website.

Technically speaking, a headless Drupal website sends out data in JSON format instead of HTML. A front-end UI framework then renders this JSON data and delivers the web page.
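To make that concrete, here is a simplified, illustrative sketch of the kind of response Drupal can return, for example via the JSON:API module, for an article node (the field names and values below are made up for this example):

GET /jsonapi/node/article/{uuid}

{
  "data": {
    "type": "node--article",
    "id": "uuid-of-the-article",
    "attributes": {
      "title": "An example article"
    }
  }
}

The front-end framework consumes this structured data and decides how to render it.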

Headless Drupal Architecture

Categorizing Decoupled Drupal

In a traditional Drupal CMS architecture, the browser invokes a request that is processed by PHP logic, which then renders the HTML and sends it back to the browser. Of course, the developer can embed JavaScript for some client-side improvements, but this can result in a situation where different client-side frameworks are being used for different modules, making the system extremely complex.

Progressive Decoupling

If you are looking to preserve your Drupal Theme layer and yet be able to provide immediate responses to the browser, the Progressive Decoupling approach is your best move. Here you can have your cake and eat it too! The initial application state is rendered by Drupal which can be then manipulated by client-side coding. Modules can be written in PHP or Javascript while you can avail the powerful performance benefits of Drupal.

This version of decoupled Drupal allows for contextualized interfaces, content workflow, site preview, and other features to remain usable and integrated with Drupal as a whole. While content editors and site assemblers feel at home with this decoupled Drupal version, it also allows front-end developers to pursue their own velocity while keeping site assemblers unblocked, by dedicating a portion of the page to a JavaScript framework.

In short, a progressively decoupled Drupal offers an approach that does a great job in striking the perfect balance between editorial needs like layout management and developer desires to use more JavaScript.

A graph illustrating the progressive decoupling spectrum for these examples | Source: Acquia
 

Fully Decoupled Architecture

And then there’s the Full decoupling – where Drupal’s presentation layer is completely replaced with a client-side framework. This version of the decoupled CMS allows an uninterrupted workflow as the client-side framework also acts as a server-side pre-renderer. Drupal CMS is purely used as a content repository that takes care of all the back-stage jazz. When you completely ignore Drupal’s theme functionality you are also letting go of some effective performance benefits that Drupal provides. Also a lot of rebuilding would need to be done to fully decouple the administrative interface and front-end of a Drupal website. Using Javascript on the server-side also complicates the infrastructure requirements.

While a fully decoupled Drupal architecture has gained more attention in recent years, with the growth of JavaScript showing no signs of slowing down, it involves a separation of concerns between your content structure and its presentation. In a nutshell, creating a fully decoupled Drupal system is like treating your web experience as a separate application that needs to be served content.

Is it a good idea?

Traditionally, Drupal CMS is meant to do both: content management and rendering the front-end for the whole website. A lot of pressure, don't you think? Drupal experts believe that Drupal's strength lies in the power and flexibility of its back-end and that it needs to be service-oriented first instead of HTML-oriented. Decoupling Drupal means letting some other system manage the front-end while Drupal takes care of the back-end system. Why is it a good idea to decouple Drupal, you ask?

If you want to adopt cutting-edge front-end technologies that Drupal cannot provide, you will need a powerful front-end framework like React JS or Angular JS. With a headless Drupal approach, you can have all of this and still maintain your robust back-end Drupal CMS.

  • With the Headless Drupal architecture, you can “Write once and publish everywhere”. This system allows content editors, marketers and business owners to create content once and deliver it to multiple interfaces seamlessly.
  • With a decoupled CMS, detaching the front-end from the back-end content management system will allow for more flexibility and efficiency of the Drupal content model. Just like how delegating work decreases your workload and increases productivity.
  • A layered architecture promotes a more secure system. Site admins can restrict access to different areas of the infrastructure. 
  • Headless Drupal allows front-end developers to have full control over the presentation, UI design and UX of the website. The combination of a great client-side framework and a seasoned front-end developer can get you a website with a rich, faster, application-like user-experience, and seamless interactivity.
  • Integrating with third party applications is easier and more efficient with a headless Drupal model.
  • Both the front-end and back-end developers can work independently which can lead to efficient and speedy delivery of a project.
  • If you want to redesign your website, you won't have to re-implement your Drupal CMS. Likewise, revamping your back-end system can be accomplished without having to alter your front-end.

Is headless Drupal for everybody?

Although decoupling Drupal can help you achieve your goals of an uninterrupted, application-like user experience, it might not be a good fit for everyone. Here’s why –

  • Websites like News sites or Blogs, that don’t really need much user interactivity, will not benefit from decoupling their Drupal website.
  • When you opt for a fully decoupled Drupal architecture for your website, you are letting go of some of the top (and free) functionalities that come with the Drupal theme layer, like block placements, layout and display management, content previews, UI localization, and security features like cross-site scripting (XSS) protection. Some of them cannot be replicated by a client-side framework.
  • If budget is a concern, keep in mind the price you will have to pay for experienced front-end developers, as well as the cost of rebuilding a missing (otherwise freely available) Drupal feature from scratch.

Who uses Headless Drupal?

Many top enterprises have taken the headless Drupal approach, and successfully so! Their websites are fast to load and offer interactive experiences to their users on all devices and interfaces. Some examples are –

  • The Tonight Show with Jimmy Fallon – uses Backbone.js and Node.js for the front-end
  • Weather[dot]com – uses Angular.js for the front-end
  • Great Wolf Resorts – uses CoffeeScript and Spine framework
  • EC Red Bull Salzburg – uses Angular.js for the front-end
  • Warner Music Group – uses Angular.js for the front-end

…And many more on this list here.

Jul 30 2019
Jul 30

Ever since cloud computing proliferated enterprise digital transformation, new cloud platform services have started thronging the scene. The cloud ride is burgeoning even faster in 2019, and the pace of cloud vendor innovation is sky-high. The revenue is soaring. Amazon Web Services (AWS), one of the giants in this space, has witnessed a 45% rise in revenue year over year, reaching $7.43 billion for the fourth quarter of 2018.



AWS has been a force to reckon with when it comes to buying storage space for holding a colossal database, provisioning bandwidth for hosting a website, or buying processing power to run intricate software remotely. With AWS, the necessity of buying and running your own hardware is eliminated, and organisations or individuals pay only for what they actually use. Netflix, a leading media services provider and a go-to option for streaming movies and web series, leverages AWS for almost all of its backend infrastructure, storing and streaming its online content.

Netflix, from being a DVD-by-mail service to one of the most sought-after media streaming services in the world, has come a long way. The number of Netflix subscribers has grown multifold (approximately 150 million in 2019). With 37% of internet users around the globe binging movies and web series on Netflix, the power of AWS has massively helped them keep up with the growing customer base and scale on demand. If AWS can play such an influential role for a big enterprise like Netflix, you can hope for wondrous things to transpire when another magic pearl is added. Drupal can do miracles alongside different AWS products, and there are various ways to leverage the products of Amazon Web Services with Drupal for your web development solution. But first, let’s take a quick look at AWS and the plenitude of products that it offers.

AWS in a nutshell

It is imperative to understand where AWS stands in the market today before getting acquainted with its various offerings. In comparison to other big cloud service providers such as Microsoft Azure, AWS maintains a clear lead in the market.

Amazon Web Services (AWS) market share | Source: Canalys

Amazon Web Services is definitely one of the most sought after cloud solutions in the market. So what is it? It is a comprehensive cloud platform by e-commerce giant Amazon that provides software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) offerings. AWS offers cloud services from multiple data centres and availability zones that are spread across different regions of the world.

AWS also offers a Well-Architected Framework, built to assist cloud architects in developing safe, high-performing, resilient and efficient infrastructure for their applications. This framework is based on five pillars: operational excellence, security, reliability, performance efficiency and cost optimisation.



AWS provides a huge set of cloud-based services comprising categories like analytics (Amazon CloudSearch, Amazon Athena etc.), application integration (Amazon Simple Notification Service, Amazon MQ etc.), robotics (AWS RoboMaker), compute (AWS Elastic Beanstalk, AWS Lambda etc.), database (Amazon Aurora, Amazon Redshift etc.), and satellite (AWS Ground Station) among others for helping organisations move rapidly, lessen IT costs, and be highly scalable.

AWS service categories | Source: AWS

Different ways to leverage AWS services with Drupal

Drupal, an open-source content management framework, is an astounding digital experience platform that helps you disseminate the right content to the right person at the right time on the right devices. Its great content authoring capabilities, provision of stupendous web performance, multilingual features, high scalability, easy integration with the best tools that are available outside of its periphery, mobile-first approach, multisite offering, and immense security make it one of the leaders in the content management system (CMS) market. No wonder its usage has continuously risen to new heights.

Drupal usage statistics | Source: BuiltWith

Whether you need to deploy a production-ready Drupal website or build innovative solutions with Drupal 8, many of the products from AWS can be of magnificent use in Drupal development. Let’s take a look:

Production grade Drupal configuration

You can deploy a highly available Drupal architecture on the AWS cloud using a quick start guide. This allows you to leverage AWS services and further improve the performance and extend the functionality of your CMS. AWS’ flexible compute, storage and database services make it a top-notch platform for running Drupal workloads.

The core AWS components that are used for this implementation involve the AWS services like EC2 (Elastic Compute Cloud), EFS (Elastic File System), RDS (Relational Database Service), VPC (Virtual Private Cloud), Auto Scaling, CloudFormation, Elastic Load Balancing, IAM (Identity and Access Management), ElastiCache, CloudFront and Route 53.

The AWS Regions assist in governing network latency and regulatory compliance. Regions are designed by taking availability into consideration and comprise at least two availability zones. Regional endpoints are supported by most AWS services thereby minimising data latency as they provide an entry point for service requests in that region. 

Drupal deployment architecture on AWS | Source: AWS

For the deployment of a production-grade Drupal configuration, AWS CloudFormation gives you an automated, simple way of creating and handling a collection of related AWS resources. The main template builds the network-related resources first and then launches separate templates for Drupal and Amazon Aurora. Other templates modularise the CloudFormation code, and an additional template that uses AWS Lambda creates an Amazon Machine Image (AMI) for Drupal. The AMI is used to install Drupal on all the instances in the Auto Scaling group, which avoids repeated downloads.

There are optional templates that can be leveraged like deploying an ElastiCache cluster, building CloudFront web distribution and creating DNS (Domain Name System) records in Route 53 public hosted zone. If you use ElastiCache or CloudFront, the configuration of Drupal with requisite default settings is done. Optimisation of Drupal’s caching and content delivery network settings can be done once Drupal stack is deployed. And when you delete the main template, it deletes the entire stack.

This quick start’s highly available reference architecture for Drupal deployment requires an HTTP(S) load balancer, two or more Drupal servers on Apache web server, shared file storage, shared ElastiCache for Memcache cluster, CloudFront distribution, and Route 53. Deployment of Drupal can be done into a new Virtual Private Cloud (VPC) which involves building a new AWS environment comprising VPC, subnets, NAT gateways, security groups, bastion host and a lot of other infrastructure components. Or, the deployment of Drupal can also be done into an existing VPC that enables Drupal in your existing AWS infrastructure.

Alternative to deploying and hosting production-ready Drupal

For deploying a high-availability Drupal website, the AWS documentation describes another approach to deploying and hosting Drupal. In this case, the architecture that hosts Drupal for a production workload requires minimal management responsibilities from you.

AWS Elastic Beanstalk, Amazon RDS and Amazon EFS can be leveraged. Once the Drupal files are uploaded, Elastic Beanstalk governs the deployment process automatically, which involves application health monitoring, load balancing, capacity provisioning and auto-scaling, among others. RDS offers cost-effective, resizable capacity while managing time-consuming database administration tasks for you.

Serverless implementation using Lambda@Edge

Serverless implementation using Drupal and AWS | Source: AWS

Drupal can also be a fantastic solution for implementing a serverless architecture. The union of Amazon CloudFront, Lambda@Edge and headless Drupal can offer the lowest latency and a personalised experience to users. Deploying CloudFront allows you to cache and accelerate your Drupal content with the assistance of a globally distributed set of CloudFront nodes. Every CloudFront distribution comprises one or more origin locations; an origin is where the Drupal content resides. Drupal 8 is deployed by running the supplied AWS CloudFormation stacks, and AWS services like EC2, EFS, RDS and Aurora are of great use here as well. It is all wrapped in a highly available design with the help of multiple Availability Zones, and the configuration is done in such a manner that auto-scaling can be achieved using EC2 Auto Scaling groups.

URL aliases for the content are created using the Path module that is available in Drupal 8. Within the Drupal 8 administration, ‘Aggregate CSS Files’ and ‘Aggregate JavaScript Files’ are enabled by default, which reduces the bandwidth needed between the origin AWS infrastructure and the CloudFront edge nodes. Internal Drupal caching, which controls the maximum amount of time a page can be cached by browsers and proxies, is disabled by default. To alter file URLs and easily cache CSS, JavaScript, images, audio and video within CloudFront, it is also suggested to enable Drupal’s CDN module. Subsequently, the CloudFront distribution is created with the help of the CloudFront console, which involves configuring the origin, the default cache behaviour settings and the distribution settings.

Interactive screens using AWS IOT

Drupal is an incredible option for building a scalable digital signage solution for a variety of organisations; it can reduce costs, speed up time to market and help create engaging experiences for people. The Metropolitan Transportation Authority (MTA), the largest public transit system in the United States of America, has benefitted from leveraging Drupal and AWS IoT services.

A digital signage board at a railway station | Source: Acquia

Drupal, which powers MTA’s website, also helped them serve content and data to thousands of digital signs in hundreds of stations in New York City. Using digital signage for station countdown clocks has allowed the MTA to offer a great customer experience.

The content is built inside Drupal, and data is pulled from external feeds so that the countdown clocks can be supplied with data. Data can be pulled from transit information, weather and message providers because Drupal is equipped with provider APIs; once the data is given context via the Drupal content model, it is pushed to the digital signs. This is done with the assistance of a data pipeline implemented using the AWS IoT service.

Cross-channel experience with Amazon Alexa

Combining Amazon Alexa and Drupal can be great for allowing content to be accessed both via the web and via voice assistants. The Alexa Drupal module helps with the integration; for this, the Drupal website must be available online over HTTPS. To begin, the Alexa module is installed and enabled on the Drupal site, followed by the creation of a new skill in the Alexa Skills Kit. Subsequently, the application ID provided by Amazon under ‘Skill Information’ is copied and submitted to the Drupal site’s configuration. The Alexa skill can then be configured in the Alexa Skills Kit, and a customised handler module can be built for handling custom Alexa skills.

[embedded content]


A digital agency used this process to build a solution that leveraged both Alexa and Drupal, which they demonstrated through a fictional grocery store called Freshland Market. A user picks a food recipe from Freshland Market’s Drupal site and gets all the ingredients required to cook the dish. In the demo, the recipe the user asked for was for 8 people, but the site only has the information for 4 people, so the Freshland Market Alexa skill adjusts the quantity of ingredients for 8 people by itself. Between the questions, ingredients and cooking steps the user goes through, preparing the food turns out to be very simple, and it doesn’t require the user to look at a laptop or mobile phone at any stage.

Open source photo gallery using Amazon Rekognition and Amazon S3

Amazon Rekognition’s powerful face and object recognition capabilities can be leveraged with Drupal to a great extent. Its deep learning feature assesses a plethora of images and then utilises all of that data to label objects and detect faces in separate photos. Amazon S3 can help in storing all the photos on a website in one S3 bucket.
 
In a bid to create an open and powerful solution for building galleries and sharing images, a digital agency integrated S3, Rekognition and AWS Lambda with Drupal 8. The main objectives behind this implementation of an open source photo gallery were that it should be self-hosted, make it easy to upload a large number of photos, use Drupal as a content store, leverage S3 for file storage and utilise Rekognition for automatic face and object recognition. The expected outcome was to make Drupal even better for photo sharing.

Automated image processing workflow with Drupal 8 and AWS | Source: Acquia

They succeeded by developing an automated image processing workflow. A user uploads a single picture or a set of pictures to Drupal 8 with the help of the Entity Browser Drupal module. With the help of the S3 File System module, Drupal then stores each of the pictures in an Amazon S3 bucket. For every new picture that gets copied into the S3 bucket, an AWS Lambda function is triggered, and the Lambda function sends the image to Rekognition. The function then receives back facial and object recognition data and calls a REST API resource on the Drupal 8 site to deliver the data as JSON. The Rekognition API Drupal module helps parse the data, store labels and recognised faces in Drupal taxonomies, and relate the labels and faces to the Media Image entity for each of the uploaded pictures.

Conclusion

Drupal 8 keeps setting the bar higher when it comes to ease of use, offering limitless new ways to tailor and deploy your content to the web, easily customise data structures, listings and pages, reap the benefits of new capabilities for exhibiting data on mobile devices, build APIs and adapt to multilingual needs. Digital innovation is the forte of Drupal 8, and when AWS services are used along with Drupal, there is no limit to the exciting solutions you can build.
 
We believe in open source innovation and are committed to offering great digital experiences with our expertise in Drupal development. Talk to our Drupal experts at [email protected] and let us know how you want us to be a part of your digital transformation endeavours.

Jul 30 2019
Jul 30

We are well aware of the fact that the Drupal Cache API is a remarkable feature introduced in Drupal 8. Still, this topic remains unexplored by many developers, as they consider caching to be a critical aspect of a website. In one of our earlier posts, we explained cache tags: https://www.innoraft.com/blogs/how-does-entity-cache-work-drupal-8. Here is a guide that helps you easily grasp some basic concepts of cache contexts.

Cache Context is basically a service that helps in creating multiple cached versions of something depending upon the context/request; be it a view, block or any other section on the page. 

For instance, let us consider a block displaying a list of tutorial links on a D8 instance. Authenticated users will be given access to all the links, while anonymous ones will be provided only with the free tutorials. This data completely depends upon the role of the user, hence ‘user.roles’ can be used as a cache context in such a scenario. For simplicity, let us assume that there exist only two roles: authenticated and anonymous. When an authenticated user hits the page, the version of the block with access to all links will be displayed. If another authenticated user then visits the page, the cached version of the block will be served, thereby enhancing the site performance. When an anonymous user comes to the same page, the entire request is carried out and the display with limited access to links is shown. In this way, we can explicitly decide which cached version of an element is served, based on the context.
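In code, a cache context is simply part of the cacheability metadata on whatever is being rendered. A minimal sketch for the scenario above, using the core-provided user.roles context on a render array, would look like this:

<?php
// Cached copies of this render array will vary by the current user's roles.
$build = [
  '#markup' => 'Tutorial links appropriate for your role go here.',
  '#cache' => [
    'contexts' => ['user.roles'],
  ],
];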

D8 core provides a few predefined cache contexts, which are listed at https://www.drupal.org/docs/8/api/cache-api/cache-contexts .

Our main focus here would be how to define and use a custom cache_context according to our requirement.

Let us consider a simple example to invalidate the cache of a block that displays a personalised message based on the summary that the user has provided in the user edit page.

Prerequisite: Add a field for filling in a summary on the user edit form ([base_url]/user/[user_id]/edit) provided by default in Drupal.

A cache context can be registered like any other service, in the module's services.yml file:

services:
  cache_context.user_summary:
    class: Drupal\example_cache_context\CacheContext\UserSummaryCacheContext
    arguments: ['@current_user']
    tags:
      - { name: cache_context }

   example_cache_context.services.yml

The trick here is to understand the naming convention for a new cache context. The name of the service should be of the format cache_context.*, i.e. it should start with ‘cache_context.’ followed by an appropriate name, hence the name cache_context.user_summary. Similarly, we can define a further level of hierarchy as well. As per the above snippet, the code for the cache context goes into src/CacheContext/UserSummaryCacheContext.php. The service takes the current_user service as an argument; the details on how to pass a service as an argument to another are available at https://www.drupal.org/docs/8/api/services-and-dependency-injection/structure-of-a-service-file. We need to tag this service with cache_context as well.

The summary field should be used to create this cache_context logic as per the following code snippet:


<?php

namespace Drupal\example_cache_context\CacheContext;

use Drupal\Core\Cache\CacheableMetadata;
use Drupal\Core\Cache\Context\CacheContextInterface;
use Drupal\Core\Session\AccountProxy;
use Drupal\user\Entity\User;

/**
 * Defines a cache context that varies by the current user's summary field.
 */
class UserSummaryCacheContext implements CacheContextInterface {

  /**
   * The current user.
   *
   * @var \Drupal\Core\Session\AccountProxyInterface
   */
  protected $user_current;

  /**
   * Constructs the cache context, receiving the current_user service.
   */
  public function __construct(AccountProxy $user_current) {
    $this->user_current = $user_current;
  }

  /**
   * {@inheritdoc}
   */
  public static function getLabel() {
    return t('User Summary cache context');
  }

  /**
   * {@inheritdoc}
   */
  public function getContext() {
    // The context value is the summary the user entered on their profile.
    $id = $this->user_current->id();
    $user_details = User::load($id);
    $values = $user_details->get('field_summary')->getValue();
    // Fall back to an empty string when the summary field is empty.
    return $values[0]['value'] ?? '';
  }

  /**
   * {@inheritdoc}
   */
  public function getCacheableMetadata() {
    return new CacheableMetadata();
  }

}



UserSummaryCacheContext.php

Here the UserSummaryCacheContext class implements the CacheContextInterface interface. The variable $user_current, which is an instance of AccountProxyInterface, is declared protected and set from the argument mentioned in the services.yml file. The getContext() function contains the code that determines the context value used for cache variation; we can implement any other logic here according to our requirements.

The cache context code is now ready to use. In order to test the code, create a block within src/Plugin/Block and place it on any page:


<?php

namespace Drupal\example_cache_context\Plugin\Block;

use Drupal\Core\Block\BlockBase;
use Drupal\Core\Cache\Cache;
use Drupal\user\Entity\User;

/**
 * Provides a block that displays the current user's summary.
 *
 * @Block(
 *   id = "user_summary_block",
 *   admin_label = @Translation("User Summary Block")
 * )
 */
class UserSummary extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    $build = [];
    $user_id = \Drupal::currentUser()->id();
    $user = User::load($user_id);
    // Fall back to an empty string when the summary field is empty.
    $summary = $user->get('field_summary')->getValue()[0]['value'] ?? '';
    $build['user_summary'] = [
      '#markup' => $summary,
    ];
    return $build;
  }

  /**
   * {@inheritdoc}
   */
  public function getCacheContexts() {
    // Attach the custom context so the block cache varies per summary value.
    return Cache::mergeContexts(
      parent::getCacheContexts(),
      ['user_summary']
    );
  }

}

 UserSummary.php

To verify that the context has been added properly, use the Chrome dev tools:


If these headers are not visible, you might need to configure your settings to enable them, as described at https://www.drupal.org/docs/8/api/responses/cacheableresponseinterface#debugging
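On a local environment, one way to enable them (a sketch of the relevant snippet; adjust to your own setup and use it for development only) is to add the following parameter to sites/development.services.yml and rebuild the caches:

parameters:
  http.response.debug_cacheability_headers: true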

This eliminates the need to clear the cache of the entire site after any data update, which slows down the website. Hopefully this blog gave you a better insight into the usage of cache contexts in Drupal 8. You can now vary the cache per context, keeping the site's performance in mind while still showing users every new piece of information appearing on the website.

Jul 30 2019
Jul 30

Agiledrop is highlighting active Drupal community members through a series of interviews. Now you get a chance to learn more about the people behind Drupal projects.

We had an amazing talk with the super friendly Maria Totova, a driving force behind the Bulgarian Drupal community, organizer of various educational events, avid speaker and co-founder of Drupal Girls. Have a read and learn more about her numerous interesting projects and her love for Drupal. 

1. Please tell us a little about yourself. How do you participate in the Drupal community and what do you do professionally?

My name is Maria Totova and I am a back-end developer from Bulgaria. I have been using Drupal for the last 4 years and I absolutely love it! I work as a Drupal developer at trio-group communication & marketing gmbh, a leading German brand and communication agency, where we create individualized marketing and business solutions.

I am also a board member at Drupal Bulgaria, a non-profit NGO and the official Drupal foundation in my country, as well as an education manager, community leader & instructor at Coding Girls, a non-profit NGO and an international movement. Last, but not least, I am very happy to be a co-founder of Drupal Girls, a subdivision of Coding Girls, devoted especially to raising the interest towards Drupal and growing a strong and diverse community.

Being part of all these amazing institutions, I have the great pleasure to organize and conduct different kinds of events: meetups, workshops, courses and camps. I do my best to spread some Drupal love in high-schools and universities as well by teaching and mentoring students there. I especially love being a speaker at Drupal conferences and I always try to contribute and share what I have learnt. 

2. When did you first come across Drupal? What convinced you to stay, the software or the community, and why?

When I came across Drupal, I was a freelancer using WordPress to build rather tiny websites for small companies. So, I can say that I discovered Drupal at a stage of my life when I was searching for a change, for something more. And I found it. I started working with big brands on larger, more complex projects for great companies.

What I particularly like about Drupal is that it brings many challenges and opportunities. It's never boring. I learn a lot and do different things every day, develop all kinds of various functionalities all the time.

But most of all, thanks to Drupal I have met and continue meeting so many exciting people! I've got amazing colleagues, so smart and really crazy! :) I have found great mentors who have been helping me grow as a developer and I have made friends for life.

Indeed, the Drupal community is full of awesome people, inspiring folks, so open-minded and always ready to help. I love that! I have found the place where I fit in and feel safe and comfortable.

3. What impact has Drupal made on you? Is there a particular moment you remember?

I remember my first encounter with Drupal. :) It was not really love at first sight... When a friend of mine, who, funny enough, hates Drupal, mentioned it, I decided to take a look. I visited drupal.org, went briefly through the quite strange D7 docs full of unknown terminology, thought the themes were not so appealing, but still decided to install it and dive in a little deeper.

Then, I encountered the content types and modules, and I was like: “Gee, I want to use this!” :D Of course, I went on learning Drupal, built my portfolio website in the process and a few months later I applied for a Drupal job.

Guess what? They called me and hired me on the very next day! I was over the moon! Since then, I have been absolutely enjoying my work every day at every company! How has Drupal changed my life? Phew, it has turned it upside down and inside out but in a very, very good way. I love it and I am happy. Thank you, Drupal! :)

4. How do you explain what Drupal is to other, non-Drupal people?

I always enjoy explaining what Drupal is to my friends and students. I start by underlining the fact that Drupal is not only a CMS but also a powerful framework. On one hand, you have the full capacity to structure your data and become a great content modeler without even realizing it.

On the other hand, you can build various complex custom solutions via Drupal APIs. I tell them how easy it is to install it for less than 10 min. Then, you receive a solid base that you can build on with only the functionalities you need, depending on the type of your project and without any unnecessary stuff. I describe what an impressive technology Drupal is and focus on its main features: modularity, security, performance, reliability, flexibility, multilingual support, mobile-first approach and so on.

Of course, I don’t forget to highlight the significance of the Drupal community: all the contributions, support and the amazing events that it brings along. In the end, what persuades them best is simply seeing my enthusiasm and understanding that Drupal brings real fun. :)

5. How did you see Drupal evolving over the years? What do you think the future will bring?

When I started with Drupal, it was version 7. Previously, I had experience in writing object-oriented PHP using CodeIgniter (they have the best docs ever!) and I loved the MVC pattern. It took me some time to understand the drupalisms but soon I grew fond of the hook system and everything.

However, the changes in Drupal 8 brought pure delight. The OOP paradigm and Symfony have made a huge difference. I am eager to see what the future brings, especially in terms of decoupled Drupal and consumer applications. Having in mind our great community, I am pretty sure that Drupal will continue evolving and shining!

6. What are some of the contributions to open source code or to the community that you are most proud of?

The Drupal Girls project is one of the things I am quite proud of. The idea behind it is to promote Drupal among ladies and bring more diversity to the Drupal community. We do this by organizing workshops, events & courses and inviting girls to join us in a safe, supportive and inclusive environment. Our main target group is high-school & university students, but we are also happy to work with teachers, instructors and developers using other technologies.

Since our vision is based on integration, we are always happy to have men at our events as well. We all know that men and women think in a different way and this is actually a very good thing! We find out different aspects while working on projects and complement each other.

In fact, more and more companies are starting to realize how important diversity is and how beneficial it is to their organizations. I am very happy that Trio, the company I work for, supports our mission and provides us with the space and everything we need for our events. I hope that more people and organizations will consider joining our initiative by establishing a local community in their city.

Since we are part of the Coding Girls family, a non-profit & non-government organization, all our work is completely volunteer. Thus, we are constantly looking for more mentors and instructors willing to educate and encourage girls to get started with Drupal.
 
The Drupal 8 Companion Guide is another project that is part of Drupal Girls, and which I presented at Drupal Europe in Darmstadt last year. It is still a work in progress, but I will do my best to publish it soon. It is a structured and portable reference manual to various Drupal materials which both learners and instructors can adopt anytime, anywhere.

It aims to help beginners focus on the important concepts without losing too much time in a prolonged research and before they give up. I have been using it for conducting our workshops and courses as well as for building a curriculum for our Trio internship programs for university students. We are planning to provide it to high schools this autumn, too. Of course, the guide can also be used in a self-paced & self-study manner by newcomers on their journey through the Drupal realm.
 
In the meantime, I enjoy being a speaker at Drupal conferences and sharing my knowledge, experience or lessons learnt with the folks there. I particularly like the lively discussions at the end of the sessions, and I am always looking forward to them. One of the local camps that is especially important for me is Drupal Bootcamp Plovdiv and I am very proud to be among its organizers.

It is a two-day conference for total beginners that consists of various presentations, discussions, quizzes and workshops. At the end of the conference, every participant has their own project and a good basic understanding of Drupal.

We have been doing it for the third year in a row and I absolutely love to see new eager-to-learn eyes every time! In addition, thanks to the latest changes to community projects on drupal.org, I am also happy to give credit to our great speakers and mentors!

7. Is there an initiative or a project in Drupal space that you would like to promote or highlight?

Ah, there are so many great Drupal initiatives and projects that I simply cannot list them all. Of course, the first three that come to my mind are the Drupal 9, the Admin UI & JavaScript Modernisation and the Documentation strategic initiatives. These folks are doing a wonderful job and they deserve our respect.

As a developer, I am deeply interested in the D8DX: Improving the D8 developer experience community initiative. Since I come from a Drupal 7 world, and I remember the multi-language combinations and struggles there, I cannot forget to mention how impressed and grateful I am to the Multilingual initiative!

Finally, the Promote Drupal initiative is really important to us all and should definitely be highlighted!

As for projects, I am particularly fond of Thunder: we have been using it as a foundation for developing our own distribution and I enjoy being one of the devs working on it. I also like Drupal Commerce and I am always happy to see new e-shops built on it. Most definitely, every project on drupal.org deserves a recognition for all the efforts of their maintainers and contributors!

8. Is there anything else that excites you beyond Drupal? Either a new technology or a personal endeavor. 

When I am not busy with Drupal, I volunteer the rest of my time to Coding Girls as an education manager, instructor and mentor. Coding Girls is an international organization promoting an increased presence of girls and women in technology, leadership and entrepreneurship. We have communities in different cities around the world and are constantly growing.

I am the community leader of Coding Girls Plovdiv in my hometown, where we have been organizing free meetups, courses, workshops and all kinds of tech events for more than two years. Apart from the summer break, we are quite busy as we have an event almost every Thursday. This is how I have gained solid experience in organizing events and I enjoy it a lot!

Besides, now I have the chance to do the thing I love as much as programming – teaching. I know how important mentorship is and I am happy to do it for other people, to pay it forward. :)

Jul 30 2019
Jul 30

The Intense Drupal 8 module provides a nice full-screen zoom of the images on your site. Keep reading if you want to learn how to install and use this module with a practical example.

Step #1. Download and Install the Required Modules

  • Open the terminal application on your PC
  • Go to the root of your Drupal installation (the composer.json file is inside this directory)
  • Type the following command:

composer require drupal/intense


  • Click Extend
  • Scroll down, search and check the following modules:
    • Blazy
    • Blazy UI
    • Intense images
  • Click Install
  • The System will install the core Media module, which is required
  • Click Continue


After installing the modules, it is necessary to download, unzip and place the required libraries in place.

  • Create a libraries directory on the root of your Drupal installation (the core directory is there)
  • Download the Intense library from this GitHub page.


  • Place the zip file inside the libraries folder
  • Extract it
  • Rename the extracted directory to intense
  • Repeat the process with the Blazy library
  • Rename it to blazy
  • At the end you should have the following file structure:

libraries/blazy
libraries/intense

Step #2. Add a Field to the Article Content Type

Our site is promoting fishing trips, so a price field is required.

  • Click Structure > Content types > Article
  • Click Manage fields


  • Click Add field
  • Select Number (decimal)
  • Give the field a proper label
  • Click Save and continue


  • Click Save field settings

  • Check this field as Required
  • Add the prefix ‘$ ‘ (don’t forget the quotation marks)
  • Click Save settings


Step #3. The Intense Image Configuration

  • Click Structure > Content types > Article > Manage display
  • Look for the Image field Format and select Blazy from the dropdown
  • Click the cogwheel on the right
  • Select the Media Switcher option and choose Image to Intense
  • Click Update


  • Place the Price field right over the Comments field
  • Click Save


If you want to have this effect on the teasers of your articles, just edit the Teaser view mode with the same configuration options.


Step #4. Create Content

  • Click Content > Add content > Article
  • Write proper content
  • Upload an image
  • Click Save


You should see a cross over the image if you hover with your cursor over it. The cursor will turn itself into a cross too.


Click the image, it will zoom and cover the whole screen.


You can pan over the image by moving your mouse. The image closes when clicking once again.

I hope you liked reading this tutorial and putting it in practice (of course)!

Thank you and stay tuned for more Drupal content.


About the author

Jorge lived in Ecuador and Germany. Now he is back to his homeland Colombia. He spends his time translating from English and German to Spanish. He enjoys playing with Drupal and other Open Source Content Management Systems and technologies.
Jul 30 2019
Jul 30

Sites with long pieces of content or with a long landing page often have a little arrow at the bottom, which helps you get back to the top of the site instead of scrolling the whole way back.

The Back To Top Drupal 8 module helps site-builders who are not yet ready to work with templates or JavaScript to place this kind of button on their sites.

Keep reading to find out how.

Step #1. Download and Install the Required Module

  • Open the terminal application on your PC
  • Go to the root of your Drupal installation (the composer.json file is inside this directory)
  • Type the following command:

composer require drupal/back_to_top


  • Click Extend
  • Scroll down, search for the Back to Top module and check it
  • Click Install


Step #2. The Module Configuration

  • Click Configuration > User Interface > Back to Top
  • Check Prevent on administration pages and node edit
  • Change the Button text
  • Leave the default PNG 24 image button
  • Click Save configuration


Step #3. Replace the Button Image

The image file is called backtotop.png and is located at /modules/contrib/back_to_top.

  • Rename the file backtotop.png to backtotop1.png
  • Paste a new image file (70px wide, 70px high) called backtotop.png into this directory
  • Paste another image file with the same dimensions called backtotop3.png.

This file will be used to achieve a hover effect.


The backtoto2x.png file is for retina displays. You can replace this file with the same method; this time, make sure the file is 140px wide and 140px high.

Step #4. Edit the CSS Code

To display the green arrow when hovering over the yellow arrow you have to edit the CSS code of the module.

The CSS file is located in /modules/contrib/back_to_top/css.

  • Open this file in the code editor of your liking
  • Edit the #backtotop:hover selector
  • Add the following code:
#backtotop:hover {
  background: url(../backtotop3.png) no-repeat center center;
  bottom: 20px;
  cursor: pointer;
  display: none;
  height: 70px;
  position: fixed;
  right: 20px;
  text-indent: -9999px;
  width: 70px;
  z-index: 300;
}


  • Click Configuration > Performance > Clear all caches
  • Important! Clear also the cache of your browser

Head over to a long article on your site and scroll down. The yellow arrow should appear.


Hover over it to see how the other image gets pulled.


If you click on this button the page will scroll back to the top.

The Text/CSS Button

I was not able to edit the colors of the button through the user interface. However, you can tweak the look and feel of it by editing the CSS file located at /modules/contrib/back_to_top/css/back_to_top_text.css.


I hope you liked reading this tutorial. Stay tuned for more Drupal content. Thanks!


About the author

Jorge lived in Ecuador and Germany. Now he is back to his homeland Colombia. He spends his time translating from English and German to Spanish. He enjoys playing with Drupal and other Open Source Content Management Systems and technologies.
Jul 29 2019
Jul 29

In the coming weeks, you can expect a series of changes going into the development pipeline to support the CiviCRM-Drupal 8 integration. Individually, these will seem unrelated and disjoint - they may not explicitly reference “D8”. I wanted to spend a moment to discuss the concept which ties them together: the clean install process, which will make Civi-D8 an equal member of the Civi CMS club and a good base for continued development and maintenance.

This work on D8 stabilization has been made possible by the generous funders of the Civi-D8 Official Release MIH. If you’d like to see more topics addressed, please consider contributing to the MIH.

What do you mean by "clean" install process?

A "clean" install process is a set of steps for building a site in which CiviCRM comes in a direct fashion from source-code with a bare minimum of intermediate steps.

To see this concept in action, we can compare CiviCRM's integrations with Drupal 7 and Joomla:

  • CiviCRM-D7 is amenable to a (comparatively) clean install process. There are three Github projects (“civicrm-core”, “civicrm-packages”, and “civicrm-drupal”) which correspond directly to folders in the web-site. If you copy these projects to the right locations, then you have an (almost) valid source-tree.

  • CiviCRM-Joomla is not amenable to a clean install process. You might think it's similar -- it uses a comparable list of three projects (“civicrm-core”, “civicrm-packages”, “civicrm-joomla”). The problem is that “civicrm-joomla” does not correspond to a singular folder -- the install process requires a diasporadic distribution of files. The install script which handles this is tuned to work from the “civicrm-X.Y.Z-joomla.zip” file, and producing that file requires a tool called ”distmaker”. “distmaker” is fairly heavy - it requires more upfront configuration, is brittle about your git status, runs slower, and produces 200+mb worth of zipfiles. In short, building a CiviCRM-Joomla site from clean source is more difficult.

Why does a "clean" process matter?

It's easier to develop and maintain software when the build is clean and intuitive. Specifically:

  • It's easier to climb the ladder of engagement from user/administrator to contributor/developer.

  • It's easier to lend a hand - when someone submits a proposed patch, it's easier to try it out and leave a friendly review.

  • It's easier to setup automated QA processes for evaluating proposals and upcoming releases.

  • It's easier to setup sites for RC testing, issue triage, pre-release demos, and so on.

  • It's easier to pre-deploy a bugfix that hasn't been officially released yet.

Anecdotally, more experts with stronger collaborations have grown-up and stayed around in the Civi-D7 and Civi-WP realms than the Civi-Joomla realm. And that does not feel like a coincidence: as a developer who watches the queue, I'm generally intimidated by a Civi-Joomla patch -- even if it looks simple -- purely on account of the difficult workflow. I believe that a reasonably clean/intuitive build process is prerequisite to a healthy application and ecosystem.

Moreover, a clean build of Civi-D8 is important for Civi's future. Civi-D7 cannot be the reference platform forever - if we expect Civi-D8 to take that mantle, then it needs to be on good footing.

What kind of changes should we expect?

From a civicrm.org infrastructure perspective: Expect automatic setup of D8 test/demo sites - in the same fashion as D7, WordPress, and Backdrop. This means PR testing for "civicrm-drupal-8". For bug triage, it means normalized and current test builds. For release-planning and QA, the test matrices will provide test-coverage for D8 (similar to the other CMS integration tests). These services are blocked on the need for a clean process.

From a site-builder perspective: Expect the recommended template for `composer.json` to be revised. This should improve support for backports and extended security releases. Early adopters may eventually want to update their `composer.json` after this settles down; however, the details are not set in stone yet.

From a developer perspective: Expect the process for setting up `git` repos to become simpler. Instead of using the bespoke `gitify`, you'll be able to use the more common `composer install --prefer-source`.

From a code perspective: Expect changes to code/use-cases which (directly or indirectly) require auto-generated files. For example, when initializing Civi's database, the current Civi-D8 installer relies on “.mysql” files (which have been pre-generated via “distmaker”); we can replace this with newer function calls which don't require pre-generated files -- and therefore don't depend on “distmaker” or “setup.sh”.
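
For orientation only: the `cv` command-line tool already exposes a scriptable installer, which is roughly the kind of function-call-driven entry point this change points toward. Treat the example as an assumption about direction, not a statement of the final mechanism:

    cd /var/www/drupal8/web   # assumed docroot of a Civi-D8 site
    cv core:install           # scriptable installer; see `cv core:install --help` for options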

Jul 29 2019
Jul 29

I joined Liip for an internship in POing. To make the most of this opportunity, I prepared for the Scrum Product Owner certification, which I passed. Here are my key takeaways.

Why a Scrum Product Owner certification?

The first step to getting certified, I would say, is being ready to dedicate time to preparation and training, and to commit to it. This wasn't a problem for me, as I was highly motivated to improve my skills and knowledge about Scrum. And here is why.

I worked more than 10 years in international corporations, using traditional models for product development. And I reached the point where I sensed that something wasn’t quite right.

As part of the product development team, we were asked to deliver a set of features based on specifications defined at the beginning of a project. All the specifications had to be delivered by a mid- to long-term deadline. Needless to say, it wasn't possible to modify them. Sign-off was only possible if the defined specifications were delivered. I often experienced a big rush at the end of the project because neither the scope nor the deadline could be changed. The team members suffered, sometimes coming close to burnout.

During the latest projects I worked on, feedback from end-users only arrived during the testing phase. By then it was already too late. End-users requested features that were not part of the specifications defined at the outset. As the project team, we were told: “end-users can make an enhancement request later on, after the go-live”. Like my former colleagues, I was left with a sad feeling. We had worked really hard on delivering these features, but we weren't proud of them because end-users were complaining.

There had to be something better. A way where users' needs are at the core of the product. A way where input from the team members really counts. That's when I came across the Scrum Guide. It became clear to me that I wanted to be part of this. That I wanted to be part of a Scrum team.

Given my experience and skills, Product Owner was the role that appealed to me. To get there, I set myself two objectives: gaining experience and getting certified.

Collaboration with Liipers

I had the chance to join Liip for a three-month training in POing. Witnessing practical POing and being immersed in the Scrum philosophy was part of the deal. I was also on-boarded onto different projects, working with different teams and clients. This helped me internalize how the Scrum Guide should be applied in practice. I got a deeper understanding of the Scrum events, the Scrum artifacts, as well as the roles of the Scrum Master, the Product Owner and the development team.

Yes, self-organized teams work! I was strongly motivated by this environment where all team members are equally responsible for the success or failure of the product. It really makes everyone commit to the work to be done, and brings up new ideas and solutions.

What about the Product Owner?

This is the role that always keeps the end-user in mind, on behalf of both the team and the client. In my opinion, one of its biggest challenges is to convince the team and the client that real end-user feedback is a must. The PO is the one prioritizing the features to be developed, expressing user stories in the most intuitive manner and identifying the needs of the client and end-users.

I believe that as a Product Owner you need to be empathetic, able to synthesize, a good listener and a good communicator.

During my training, I was inspired and empowered by my fellow PO colleagues. I loved their responsiveness and the way they reorganized work to seize each business opportunity. User needs evolve, and as the Scrum team we have to adapt our developments to them. The Scrum framework allows us to do so.

Sprint after sprint, I was amazed how the product was developed, refined and improved to perfectly meet the evolving user needs.

Becoming a Product Owner myself was definitely the right choice for me.

Training with Léo – a certified Scrum Master and Agile Coach

At Liip, I had the chance to meet Léo – a Scrum Master and Agile coach. He guided me through the different steps, recommending several really interesting readings such as Scrum, A Smart Travel Companion by Gunther Verheyen. Thanks to his coaching, I gained a deeper understanding of Scrum’s essence. He challenged me and made me think about Scrum principles and why Scrum has to be fully implemented – not just bits and pieces of this amazing framework.

Getting certified, what’s next?

At the beginning of July, I felt ready for the Scrum certification. And I nailed it!

Actually, I applied the Scrum principles to my own training. The vision I have for my product is “to become an efficient Product Owner.” My first iteration was “to understand the role of a Scrum Product Owner”, and the certification was the test. I gave myself three weeks to reach that goal (sprint goal set, sprint duration set, commitment of the team, self-organization). At the same time, I was open to failure. After all, we never fail, we only learn.

This first iteration was a team effort too. I even had a Scrum Master – you know, Léo – on my team ;-). I built my knowledge on my colleagues’ experience (empiricism). My minimum viable product evolved from “to understand the role of a Scrum Product Owner” to “being a certified Product Owner”.

I am proud to announce that my first product works! And I’m already working on the next, improved iteration, so that my (own) product fits the market’s needs.

I feel fully equipped to embrace a new work life as a Scrum Product Owner. I want to help people who are used to working with traditional models evolve toward the Scrum framework.

Last but not least, I will carry on learning and adapting fast. I will be Agile and help others to achieve this goal as well.

Jul 29 2019
Jul 29

Conversations concerning accessibility of digital assets tend to fall into one of two categories: the carrot or the stick. 

The carrot is the host of business benefits associated with ensuring a great and accessible experience for all users. All too often, though, it’s the stick -- the threat of a lawsuit, or actual legal action filed in federal court under Title III of the ADA -- that drives organizations to begrudgingly take steps toward bringing their digital assets into compliance with WCAG 2.1 and Section 508.
 

Accessibility Claims Climb

Let there be no doubt: the stick is real, and gaining momentum at a rapid pace. Lawsuits based on claims that a disabled person could not access a website because it was not coded to work with screen readers or other assistive technologies continue on a sharp upward trajectory. In 2018, we witnessed a threefold increase in accessibility lawsuits over 2017 -- from 814 to 2,285. The number of accessibility lawsuits filed during the first quarter of 2019 was more than 30 percent higher than in the first quarter of 2018.

The Southern and Eastern Districts of New York are battling the majority of these claims, followed by Pennsylvania and Florida, but the current geographic concentration cannot be viewed as any indicator of what’s next. The fact is, any organization that has a consumer-facing website that is not accessibility compliant risks legal action.

There is no shortage of statistics such as these pertaining to the “stick” -- the need for urgent and in-depth action to ensure accessibility compliance. I find conversations concerning the carrot to be far more fruitful, though.
 

Accessibility is Good for Business

It should come as no surprise that the recent surge of accessibility-related lawsuits parallels the radical shift toward ecommerce. People of all physical and cognitive abilities are now counting on websites to buy what they need online, and retailers who are proactive about the accessibility of their sites are at a significant advantage for more reasons than simply staying out of court.

Any time a client is unable to complete a transaction or has a sub-par online shopping experience, that’s a lost opportunity. Chances are slim that a frustrated shopping experience will lead to a lawsuit, but there’s a significant likelihood that the client will look elsewhere -- possibly never returning to the site that was perceived to be problematic. This is among the reasons why it is so essential that we take a holistic view of online accessibility -- looking beyond the legal mandates and sharpening the focus on the needs and expectations of the users of your site. 

Vast and Varied Market

Recent Census Bureau statistics reveal that 56.7 million Americans, close to 20 percent of the population, have some form of disability.

Specifically:  

  • 3.3 percent of the population are visually impaired, which can include color blindness, or may require the use of a screen magnifier or a screen reader.
  • 3.1 percent have a hearing impairment for which they need to rely on captions for audio and visual media. 
  • Another 6.3 percent of the population have some sort of cognitive, mental, or emotional impairment which could impede their ability to complete a transaction without clear and consistent navigation and prompts.
  • And 8.2 percent have difficulty lifting or grasping, which could challenge their ability to use a mouse or navigate a keyboard. 

Accommodations and Awareness

Remediating a site to ensure accessibility will enhance the experience and enable commerce for differently abled users, while also attracting more users to the site. A huge and seldom-discussed advantage of online accessibility is its impact on SEO. Consider the analogy of how difficult it can be to find what you need on a cluttered desk stacked up with unmarked files. A well-ordered site that is tagged appropriately, with images that are accurately described, is not only key to accessibility compliance, it’s essential for modern search engines. And don’t overlook PDFs. Information that is locked in an inaccessible PDF won’t be found by search engines -- a critical competitive disadvantage in the current market. 
 
A holistic and heartfelt approach to accessibility results in a blurring of the lines between user experience and ADA mandates. Much of this is accomplished during Promet Source Human-Centered Design workshops, which take a deep, consensus-building inquiry into user needs and opportunities for growth. A holistic approach to accessibility is about ensuring a great user experience. It’s an investment in your brand and a profound opportunity for growth.

Looking for help with defining your audience and ensuring an accessible user experience that exceeds expectations? Contact us today.

