Aug 24 2018

Image showing the words Drupal Module Spotlight: Reroute Email

“Excuse me, Mr./Mrs. Client, I’m so sorry but I accidentally just sent your 3000 users a fake purchase email receipt when I was testing.”

Ugh.

Big, complex systems are a lot to keep track of, especially when it comes to email. There are emails for resetting passwords, emails for new users, email receipts, email notifications for workflows, etc., and it’s frankly a little terrifying to rely on yourself to remember all the implications of what’s going on when working on local environments, test servers, or anywhere but production. All it takes to experience this pain is to trigger an accidental FAKE email send to REAL people.

That’s where modules like today’s spotlight come in.

We’re talking about the Reroute Email module, a simple module that intercepts all outbound emails and reroutes them to a configured address, letting you test that email content is working without having to apologize to all of your users.

Installation is as simple as turning the module on and then going to the configuration page at /admin/config/development/reroute_email. From there, you can set the rerouting email addresses and even whitelist addresses that are permitted to pass through the reroute.

Reroute email module screenshot

This is helpful for situations where you want to use test emails for specific purposes and you want the emails to go out as they will in production without having all email functionality enabled on your site. Pretty great. Lastly, you can even enter module key patterns so that emails coming from particular modules are the only ones being rerouted. Very flexible.

And a special note: On many of our sites, we set the config for the module in the local settings files for each environment so we can ensure, for example, that local or test environments never send real emails even if we pull code or refresh the database from production. It’s as simple as the following lines of code:

$config['reroute_email.settings']['reroute_email_enable'] = TRUE; 
$config['reroute_email.settings']['reroute_email_address'] = '[email protected]';
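
If you keep all environments in one settings file instead, you can guard those lines with an environment check so production is never rerouted. This is just a sketch: the PANTHEON_ENVIRONMENT variable below is specific to Pantheon hosting, so substitute whatever environment flag your own setup provides.

// Hypothetical example: only reroute email on non-production environments.
if (!isset($_ENV['PANTHEON_ENVIRONMENT']) || $_ENV['PANTHEON_ENVIRONMENT'] !== 'live') {
  $config['reroute_email.settings']['reroute_email_enable'] = TRUE;
  $config['reroute_email.settings']['reroute_email_address'] = '[email protected]';
}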

So there you have it. A great module that trades embarrassment for confidence. (They really should put that phrase on their module page!)

Aug 03 2018

If you haven’t heard the phrase “Natural Language Processing” by now, you soon will. Natural Language Processing is an expanding and innovative use of technology to analyze large amounts of data or content and derive meaning from it, short-cutting a tremendous amount of manual effort that would otherwise be needed to do that ourselves. It’s been around in some form for quite a while, but it was often relegated to complex enterprise systems or large corporations with a vested interest in automating the mining of huge amounts of data to figure out the patterns in, for example, consumer purchasing trends or social media behavior. It’s a cool idea (it’s a form of artificial intelligence, after all) and it fuels a lot of our online experience now, whether it’s product recommendations, content recommendations, targeted ads, or interactive listening services like Siri or Alexa. What’s even better is that this sort of thing is becoming more and more accessible to use in our own software solutions, as many of these systems now provide services with APIs. This allows us to provide a more personalized or meaningful experience for site visitors on web projects that likely don’t have the budget or requirements to justify tackling natural language processing itself and can instead find accessible ways to benefit from the technology.

One such use case that we’re talking about today is using the Google Natural Language Processing APIs on our own Drupal sites. We can use it to analyze our own site content and even autotag based on a common taxonomy. We really dig integrations here at Ashday, so we’ve just released two new Drupal modules to help you get hooked up with Google’s service. They are the Google NL API and Google NL Autotag modules.

The Google NL API Module

This module is intended to be your starter module to get things going. It provides functionality to connect to Google's Natural Language API and run analysis on text, including sentiment, entities, syntax, entity sentiment and content classification. It doesn’t decide what to do with this analysis, but it provides a service with a number of methods to analyze your content and then you can decide what to do with the information. All you need to get going is a Google NL API account. Full details of installation and usage can be found here.
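
For illustration, using it from code might look something like the following. Note that the service name and method names here are our own guesses for the sake of the sketch, not taken from the module itself, so check the module’s documentation for the actual service ID and signatures.

// Hypothetical usage sketch; the service ID and method names are assumptions.
$text = 'Drupal 8 makes building ambitious websites a lot more fun.';
$nl = \Drupal::service('google_nl_api');
$sentiment = $nl->analyzeSentiment($text);
$categories = $nl->classifyContent($text);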

Here is a brief outline of what each method provides, as described by Google.

Sentiment Analysis

“Sentiment Analysis inspects the given text and identifies the prevailing emotional opinion within the text, especially to determine a writer's attitude as positive, negative, or neutral. Sentiment analysis is performed through the analyzeSentiment method.” (from Analyzing Sentiment)

Entity Analysis

“Entity Analysis inspects the given text for known entities (proper nouns such as public figures, landmarks, etc.), and returns information about those entities. Entity analysis is performed with the analyzeEntities method.” (from Analyzing Entities)

Syntax Analysis

“While most Natural Language API methods analyze what a given text is about, the analyzeSyntax method inspects the structure of the language itself. Syntactic Analysis breaks up the given text into a series of sentences and tokens (generally, words) and provides linguistic information about those tokens.” (from Analyzing Syntax)

Entity Sentiment Analysis 

“Entity Sentiment Analysis combines both entity analysis and sentiment analysis and attempts to determine the sentiment (positive or negative) expressed about entities within the text. Entity sentiment is represented by numerical score and magnitude values and is determined for each mention of an entity. Those scores are then aggregated into an overall sentiment score and magnitude for an entity.” (from Analyzing Entity Sentiment)

Content Classification 

“Content Classification analyzes a document and returns a list of content categories that apply to the text found in the document.” (from Classifying Content)

The Google NL Autotag Module

This module is the first step in actually doing something with the natural language analysis results provided by the API module. It provides a Google NL Autotag taxonomy that you can attach to whichever content types you choose, and it will then automatically create the relevant taxonomy terms and relate the content whenever that content is saved. So a few clicks are all you need to have a nice auto-classification system in use on your site. You can even configure which text-based fields on your content should be used for the analysis, as well as specify the confidence threshold, which determines at what confidence level you consider a Google classification valid. So Google may say that the content matches the category /Home & Garden/Bed & Bath/Bathroom with a confidence of .4 (on a scale of 0 to 1). You can decide what confidence level is good enough to categorize your content, since different use cases may justify different approaches. A full list of Google’s content categories can be found here.

That’s essentially all this module does, but it’s meant to be a simple solution for sites to easily start benefiting from Google’s natural language services. You can, of course, extend the service or add your own functionality to use these APIs however you find beneficial, because it’s Drupal 8, and Drupal 8 rocks when it comes to flexibility. So install these modules, start tinkering, and don’t hesitate to ask if you have any questions or suggestions for future functionality.

Jul 13 2018

Omeda and Drupal are a perfect match for managing customer relationships

As you may have figured out by now, Drupal is a great platform for 3rd party integrations. Whether it’s eSignatures with Hellosign, more sophisticated search with Solr, or a host of other options, Drupal works best when it’s not trying to reinvent every wheel and is instead used to leverage existing business tools by tying them all together into a robust and useful package. Today, we’re going to take a look at a new set of integration modules that Ashday has just contributed back to the Drupal community: Omeda, Omeda Subscriptions and Omeda Customers.

The Omeda Solution

If you haven’t heard of Omeda by now, let’s take care of that. They are a family-founded company that has been around for over 30 years and offers a host of Audience Relationship Management tools meant to leverage modern technology to properly segment and target your customer base. They are on top of their game and really know their stuff (they even collaborated on the brainstorming of these new modules). And best of all, they offer a very complete and well-documented API, which is key to any good integration.

While their API is extensive, much of it can be very tailored to a particular customer’s needs, so we set out to build a contributable integration that covers some great typical use cases while still allowing for custom development where necessary. So far, this has taken the shape of three modules that can potentially be extended later on.

Omeda Base Module

The Omeda base module is meant to be a simple core module to get you wired up to Omeda so you can start doing things. It has settings to configure your connection to the API as well as a setting to work in testing mode. At its heart is a Drupal service that is meant to be injected into other contrib modules that can tackle specific Drupal solutions. This service provides the Comprehensive Brand Lookup, which the module uses to cache your Omeda brand config daily, as well as a generic utility for making API calls, which has the benefit of consolidating some basic error handling and formulation of HTTP requests. Here’s a little example of how simple it is to call the API using this service:

 

$brand_lookup = \Drupal::service('omeda')->brandComprehensiveLookup();

 

So that’s pretty easy, right? If anything is wrong with the API config, or the service is down, or something along those lines, the service throws an exception with a helpful message. Otherwise, it returns the Omeda API response to the caller. Again, it’s simple, but it provides the base functionality you need every time you wish to connect with Omeda.
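
In practice, that just means wrapping the call. Here’s a minimal sketch (the logger channel name is arbitrary):

try {
  $brand_lookup = \Drupal::service('omeda')->brandComprehensiveLookup();
  // Do something useful with the brand data here.
}
catch (\Exception $e) {
  // The service throws with a helpful message if the API config is wrong
  // or the Omeda service is unavailable.
  \Drupal::logger('my_module')->error($e->getMessage());
}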

Omeda Subscriptions Module

The first module to leverage the Omeda base module is the Omeda Subscriptions module. This module adds a My Subscriptions tab to the user profile, for the roles you select, where users can manage their deployment subscriptions. You can also configure which Omeda deployments you wish to allow your users to manage. The available deployments come from the stored Comprehensive Brand Lookup data cached by the base Omeda module. This module includes a new Omeda Subscriptions service that adds functions to fetch a logged-in user’s opt-ins and opt-outs, as well as to let the user opt in or opt out of a deployment. It’s a simple, specialized module (which is the best kind) that provides an out-of-the-box solution for the most common Omeda user subscription management needs.
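
As a rough illustration, using that service might look like the following. The service ID and method names are assumptions made for this sketch, so check the module’s .services.yml and service class for the real ones.

// Hypothetical sketch of the Omeda Subscriptions service.
$subscriptions = \Drupal::service('omeda_subscriptions');
$account = \Drupal::currentUser();

// Fetch the current user's opt-ins and opt-outs.
$status = $subscriptions->fetchSubscriptions($account);

// Opt the user out of a specific deployment (placeholder deployment name).
$subscriptions->optOut($account, 'EXAMPLE_DEPLOYMENT');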

Omeda Customers Module

The Omeda Customers module is another extension to the base Omeda module that allows you to map user fields to your Omeda customer entities and sync them on user updates. You can choose which roles will sync and which user fields will sync. Since field mapping can get quite hairy, we provide a solution for simple use cases where you need to simply tell Omeda that “this field = that field”. If it’s a standard base Omeda field like “First Name”, those are called base fields and are meant to be a simple hand off of the Drupal field value to Omeda. If it’s an email, phone number, or address field, you can choose that type and we will then ask you to determine which contact type it represents so that it gets into Omeda properly. It should be noted that for addresses, we only support Address fields from the Address module since mapping a bunch of individual Drupal fields to a single Address entity in Omeda is more complicated and likely needs a custom solution.

This mapping config also provides a simple solution for Omeda Demographic fields, which are more complex and dynamic fields that store IDs instead of literal values. It allows you to choose which demographic field a Drupal user field maps to and then create a mapping of possible Drupal field values to available Omeda field values. So if you have a field on the Drupal user called “Primary business role”, but you want to map it to the Omeda “Job Title” demographic field, you can do that. You would then hand-enter a mapping that indicates that a Drupal field value of “President” maps to the Omeda value of “President / CEO” so that we can send Omeda its desired ID value of 5******* instead of the literal text of “President / CEO”, which would be invalid. Again, this is for simpler use cases; if your Omeda fields don’t map 1-to-1 to Drupal fields, the necessary business logic is wide-ranging and you will likely need custom programming. The great thing, though, is that we’ve included support for a custom hook (hook_omeda_customer_data_alter) to inject your own adjustments into the data mapping process.
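
A bare-bones implementation of that hook might look like this. We’re assuming here that it receives the mapped customer data array and the account being synced; treat the signature and field key as illustrative rather than definitive.

/**
 * Implements hook_omeda_customer_data_alter().
 */
function mymodule_omeda_customer_data_alter(array &$data, $account) {
  // Illustrative only: drop an empty value so it isn't sent to Omeda.
  if (isset($data['JobTitle']) && $data['JobTitle'] === '') {
    unset($data['JobTitle']);
  }
}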

In Conclusion

Hopefully these modules prove useful to you - especially if you’re already an Omeda client with a Drupal site. The goal, again, is to provide some basic out-of-the-box functionality that doesn’t require developer resources to integrate with Omeda. We ourselves have custom needs beyond what the modules offer and find that it’s quite easy to extend them to do what we need for particular customer solutions, largely thanks to Drupal 8’s architecture. As always, if you need assistance please feel free to contact us, and if you’d like to offer any module patches or report a bug, feel free to use the issue queues for each project and we’ll check it out!

If you need to integrate Drupal with anything, talk to us! We can integrate just about anything with Drupal.

Jul 11 2018

Coffee is a magical thing. It gets you going, clarifies your thoughts, makes you feel all warm inside. I don’t know what I’d do without it. So when we consider installing the Drupal module named after this irreplaceable daily beverage, we see that it has a similar effect. It just makes things better. Am I overstating things? Probably. But I haven’t had enough coffee yet today and I need to get this blog going with some pizzazz.

So simply put, the Coffee module gives you a Mac Spotlight-style live search experience for the Drupal admin menu. It is triggered by the shortcut Alt+D on the Mac, or Alt+Ctrl+D in Windows IE. When the search bar pops up, just start typing and you can do an Ajax live search of that deeply nested admin menu and get to things in a hurry!

Screenshot of Coffee Drupal Module

Now, it’s not perfect. It can’t dig beyond certain levels, such as actually finding the settings of individual contact forms, but it gets nearly everywhere in the first couple of layers and can save you a lot of clicks when you’re in development and site-building mode. Even cooler - you can add your own custom Coffee commands in a custom module to make your actions discoverable (a quick sketch follows below). That’s great flexibility.
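
Here’s what that can look like. This is based on our recollection of the Coffee module’s API (see coffee.api.php in the module), so the exact hook name and array keys may differ between versions, and the route here is a placeholder.

use Drupal\Core\Url;

/**
 * Implements hook_coffee_commands().
 */
function mymodule_coffee_commands() {
  return [
    [
      // Where the command should take the user.
      'value' => Url::fromRoute('mymodule.reports')->toString(),
      'label' => 'My custom reports',
      // What the user types in the Coffee search bar.
      'command' => ':reports',
    ],
  ];
}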

So there you have it! A simple module that shaves off a few seconds of time all day long. Pretty awesome!

Time to refill my mug.

Jun 15 2018

As you may or may not have noticed, we’re having a lot of fun over here at Ashday building Drupal sites with React. Check out our own site, for example. We are really digging this new direction for the front-end, and you can learn more about why we did it and how we approached it in other articles, but here we are going to talk about how we approached the Drupal editorial experience, because honestly - we just didn’t find a lot of great resources out there discussing how this might be done well in a decoupled setup.

Something about Gift-Horses

While we can’t speak with authority on the potential detriment of literally looking gift-horses in the mouth, we can speak to the idea that we should be grateful for what we’ve been given, and Drupal has given us a lot! If you were in this industry when most of us built everything from scratch, you’ll know that it’s a crazy pile of work to do the most basic things and nothing is taken for granted. Need a login form? Sure thing. Oh, now you need flood control? Um, ok. Captcha? Boy. Password reset? Ugh. Ok, I need a week just to get user authentication in place.

Drupal does so much out of the box, and we aren’t about to throw it all away just so we can call ourselves fully decoupled, which is the idea that Drupal isn’t providing any of the front-end at all. It’s worth noting that some are pursuing the concept of “Progressively Decoupled,” where only select components of the front-end site are managed with React, but we don’t prefer that approach in most cases because we don’t want the overhead of taking on both traditional Drupal theming and a React build-out - with less obvious design constraints, potentially extra hosting, multiple development workflows, duplicated styles, etc. - leaving us short of many of the benefits of going decoupled at all.

We prefer an approach that we’ve, for the moment, dubbed “Deliberately Decoupled.”

Deliberately Decoupled

What we mean by “Deliberately Decoupled” is that we aren’t decoupling purists, who see themselves as evangelists of a particular approach at least as much as software engineers, and we also aren’t operating out of FOMO (fear of missing out), where anxiety about not being on a bandwagon drives our decisions. We prefer to leverage what we think is beneficial for our clients and the site, and secondarily abide by our preferences. A good example is the open source philosophy. We love open source! But we aren’t for a minute going to bypass a proprietary library that does something really cool just because it’s not open source. It’s the same with decoupling. We want whatever gives us good bang for the buck - whether in the deliverable, the cost, the UX, or anything else impactful - and helps far more than it hurts. So, for the largely public-facing ends of our sites, we hands-down love what React is giving us and rarely find Drupal’s out-of-the-box front-end solution to be easier or more flexible. On the back-end? For the editorial and admin experience, we really have no interest in trying to replace everything Drupal provides unless there is a major overarching requirement or clear benefit that can justify hundreds or thousands of hours of additional work. There are some projects that do merit it, such as when you have highly dynamic and interactive forms, but if that’s not the case, Drupal can do the job on its own.

Considering Outsourcing? Consider Ashday!  Request your free consultation today. 

One prime example of leveraging cool stuff in Drupal is the use of Paragraphs for content. If you haven’t heard of Paragraphs, you really must check it out. It’s been our favorite way to give editors the ability to create interesting and dynamic content without the cringe-worthy experience of complex WYSIWYG and HTML source editing, especially when it comes to the mobile experience. Nearly all of the Ashday.com pages are built with Paragraphs, so that means an editor can do parallax, add related content boxes, throw in some CTAs, etc. Pretty cool! And as you’ll see later, we leverage React for our pages but Drupal’s admin for editing the paragraphs, and it creates a clean and intuitive editorial experience.

So does all this mean that, for editing, we just hand over Drupal admin accounts to our users so they can swim through the Sea Nettle-like admin experience and say good luck? Hardly. Let’s get into the nitty gritty.

Our Philosophy - Less is More

Generally speaking, Drupal doesn’t matter much to the user. Of course Drupal matters, but not really to the end user just trying to do their editorial work - not as far as they are aware. And they shouldn’t have to be “aware”. Do you remember when you first looked into Drupal? Do you remember how weird the world was as you learned that a “node” is “content”, but then so is a “block” (kinda), and a “view” is a query but maybe with a page attached, and a “taxonomy” is hierarchical data, but with display pages? All frameworks have to be abstracted sufficiently, which means making up generic terms and concepts that only make sense once you’ve spent time with them. Well, guess what? Most users would really rather not learn all of those concepts and would instead like to just easily write content and update their site. The key to that is to reduce, simplify and obfuscate.

Drupal’s admin is very friendly to developers, but unnecessarily verbose when it comes to editors. They don’t need half the contextual menus, vertical tabs, sidebars, etc. most of the time. And the stuff they do need should be stripped down to the obvious and helpful. So let’s hide the piles of help text and multiple text format options on a WYSIWYG and use Paragraphs + a simple WYSIWYG instead. Let’s rename any buttons or links that have the word “node” in them because really, who cares if it’s a node? Let’s put our field labels inline to save vertical space and use something like Field Group to organize them cleanly. And let’s take away that rat’s nest of a default Drupal menu and create an alternative that just gives them what they care about. Here is an example of what we’re talking about.

From this…

Example of Drupal admin showing contextual menus, vertical tabs, and sidebars.

To this. 

Example of streamlined Drupal Menu. 
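
To give a concrete sense of the kind of trimming involved, here’s a minimal form_alter sketch along those lines; the module name is a placeholder and which elements you hide will vary per project.

use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_alter().
 */
function mymodule_form_alter(&$form, FormStateInterface $form_state, $form_id) {
  // Only touch node add/edit forms.
  if (strpos($form_id, 'node_') === 0 && substr($form_id, -5) === '_form') {
    // Hide the "advanced" sidebar (revision info, authoring info, etc.) from
    // anyone who can't administer nodes.
    if (isset($form['advanced'])) {
      $form['advanced']['#access'] = \Drupal::currentUser()->hasPermission('administer nodes');
    }
    // Nobody needs to know it's a "node."
    if (isset($form['actions']['submit']['#value'])) {
      $form['actions']['submit']['#value'] = t('Save page');
    }
  }
}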

In addition, let’s take the coolness of contextual admin, like Drupal provides, and make it more intuitive as well. If you hover over a block you can edit it or configure it. If you’re on a node page, you get view and edit links in the tab bar. And you can even sometimes use the new inline editing features in Drupal 8. The problem, though, is that once again the user has to understand what each of these elements is and why they’re all different in terms of the triggering UX. We are having a lot of fun in React finding more consistent and creative ways to let people manage content without having to understand all of that.

So here’s a rudimentary example using Ashday.com.

Example of Drupal contextual admin, showing node page.

You see some sorting icons and some edit icons. That’s it. Technically, those sortable sections are paragraphs, which - thanks to React - can be sorted inline, but are also editable individually apart from the others. The “Free Consultation” button in the header is a React button tied to a few settings stored in some custom Drupal configuration in the back-end. Further, that edit icon in the page title area can be used to edit the entire page at once so you can change the page title and things like meta tags, although the next step is probably to provide more direct access to edit those things so that you don’t have to know what all is buried in a node form. So the goal here is for the user to have just some basic concepts to think about, like edit and sort, instead of Edit Node vs Configure Block vs Edit Block vs Sort Paragraph vs Some Other Drupal Configuration Buried Deep in the Admin or Form.

And by the way, here’s what happens when you click the edit link on the header button, vs the first paragraph. 

 

Edit link on header button

Intuitive Drupal 8 admin

  

Edit link on first paragraph

Screen shot of admin interface for Drupal page built with React.

 

So despite varied back-end architectural implementations, the user has a very similar experience contextually editing something: a simple page mask + offcanvas + clean form. It’s cool stuff, all made easier by the ability to re-use display elements in React, while still getting a ton out of Drupal’s back-end for content management and forms. Simple, kinda easy, and totally user friendly.

Looking to use React for your next digital project? Find out how Ashday can use React to make your project a success.

Jun 08 2018

Ashday Interactive Systems logo and React logo

Decoupled Deschmupled

Like many folks in the Drupal space, we’ve been closely following the Decoupled Drupal conversation for the past few years, and have occasionally been a part of it. And, again like many folks in the Drupal space, until recently we had felt that it was somewhat in its infancy and a lot of tough questions remained as to whether it was even a good idea. The SEO implications have not been entirely clear, the impact on estimation has been very hard to nail down, there has been nothing close to consensus on which decoupled framework to go with, and Drupal’s exact role in a decoupled site has not been clear either.

Choosing a JavaScript Framework: React vs Angular vs Ember

So like Strider shuffling through the grass and reading the signs of what had happened to two of his favorite hobbits, we started to sense in the past year that maybe Decoupled Drupal was suddenly becoming not an odd-ball proposal but possibly a truly beneficial solution with a backbone. A key component for us? React. Heard of it? Probably. It’s been around for a few years now. The critical difference for us was momentum. After all, if you want to train up your staff on a whole new technology, you’d better be certain it’s not dead in 12 months.

Let’s take a look at some Google Trends.

These are the Google Trends results for the last two quarters of 2017 for the terms Angular, React and Ember, which were the three most talked-about decoupled JS libraries/frameworks at DrupalCon Baltimore last year.

Graph showing interest in Angular, React, and Ember in late 2017. (blue = Angular, red = React, orange = Ember)

Result? Angular is solidly in first, React reasonably behind in second, and Ember a distant third. It was hard to know where things were headed exactly. Couple that with the theme in Baltimore: Angular had the most attention, but people seemed divided about whether it was the long-term solution because React and Ember (and others) were growing trends. Three distinct trending choices? Not sure which was best? And many were unsure this was even a good move, as were we. Better hold off a bit.

Now let’s look at the trends for 2018.

Google Trends graph demonstrating increased interest in React in 2018. (blue = Angular, red = React, orange = Ember)

Notice a difference? Look at that little red bar now. That’s React. So even though just a year ago Angular held a reasonably strong lead, it definitely held true that newer frameworks were gaining momentum, and React has already caught up with Angular. Further, Ember has fallen even more behind. When you factor in how much Google searching is people looking for support for something they’ve already built, which heavily favors the five-years-older Angular, that makes the React climb even more impressive. Does it mean Angular is dead? Far from it. But React is exploding.

Then there’s this: https://dri.es/drupal-looking-to-adopt-react. Yep, those chiefly interested in and responsible for Drupal are favoring React as well. Further, the general feel on the interwebs was that across all web technologies, decoupling was growing in leaps and bounds as a plausible solution. Great ideas were being kicked around about specific solutions to the hard decoupling problems. And above all, we’d been looking for an excuse to give it a whirl. Well, that just about settles it. If we’re going to take the plunge on this decoupling adventure, let’s just roll up our sleeves, install React, aggregate and distill our two years of thoughts on the matter, and cannonball into the deep end of Decoupled Drupal.

Brainstorm your next development project with  an Ashday expert!  Request your free session today. 

React and SEO

When you decide to decouple - even if you’ve already picked a JS foundation to build on - you quickly realize that the solutions are as varied and complex as they are on the back-end of sites. If you go with React like we did, things fortunately become much simpler, but it still requires a lot of investigation, learning, experimenting and decision making. In order to not be plagued with decision paralysis, we decided our top priority was ensuring that none of what we were about to do was going to hurt our SEO. The flare and flash could come later. This was probably the one real deal-breaker for us in this endeavor.

SEO these days is rather complex and somewhat arbitrary, but there are some easy-to-overlook SEO principles when it comes to page rendering that must be addressed in a decoupled architecture. No one wants to tweak the heck out of their site and push on content editors to make sure everything is SEO friendly, only to implement an architecture that keeps people from finding you. While a bit oversimplified, here are three key principles to follow when delivering page content if you don’t want to be punished:

  1. The content needs to be delivered to search engines and users even without JavaScript.
  2. The content needs to be delivered to search engines that do run JavaScript (i.e., Google), but won’t wait for asynchronous APIs to load.
  3. The content delivered to search engines needs to match closely what site visitors receive so that we don’t get punished for cloaking.

There are a few approaches to this problem. And while “I quit” was certainly on the table for us, we pushed through and are glad we did. So let’s take a look at our more courageous options.

Client-side Rendering

This is effectively the coolest part of React, but you have to be careful of the implications, especially when content is dynamic. So for example, you can load your whole front end “shell” in React nearly instantly and then let things like dynamic header menus, page content, footer content, etc all load asynchronously as soon as it’s available. This creates some neat opportunities to improve user experience. The problem is the SEO implications. As stated above, you really can’t do asynchronous rendering and have it delivered that way to search engines, so now you’ve got a problem. For us, this meant choosing to use client-side rendering approaches where it benefitted actual user interaction and leave the rest to the server.

Pre-rendering

In this approach, your site is rendered statically on a Node.js server somewhere and the pages are served to the end user with no JavaScript required for initial page load. This leads to blazing fast pages as it’s just raw HTML/CSS/JS and no API calls on the fly, but it also leads to some significant downsides depending on your requirements. For example, in pre-rendering you are caching entire pages so if you have thousands of them, perhaps with varied caching control on various pieces of the pages to optimize performance, you now have a very heavy pre-rendering load. This means the task of updating your pre-rendered pages whenever any element of that page might change can become substantial and also complicate your CMS. For us, this wasn’t an option because even in building Ashday.com, we wanted to work with an approach that we felt could be adapted to our biggest clients.

Server-side Rendering

This is ultimately where we landed for ashday.com and we’ve been quite happy with it (so far). With server-side rendering, your pages are still “live”, but they are generated on the Node.js server when requested so that search engines and users get the same experience. It also allows us to handle the caching of various API calls separately so that we can get great performance. It’s not going to be quite as fast as pre-rendered, but then again we also don’t have to tackle the complex task of figuring out how any element of the site might affect a rendered page and be sure to appropriately - and in timely fashion - re-render all of those pages. So on a site with 50,000 articles, a change to a menu link in the header would mean re-rendering every page when using pre-rendering, but with server-side rendering, it just means a cache reset on a single API call to get menu links. It’s a fair trade off, we think.

So with all of that, we decided to build a React app using Next.js. It’s not all that different from straight React, but it offers a few bells and whistles, notably the added getInitialProps lifecycle method, which makes server-side rendering a snap. Other approaches, such as using the default Facebook React app, seem much more suited for static non-API-based websites, because you really end up needing to implement your own server-side solution where you separate your server configuration from your client configuration. Given that there are already so many new problems to solve and new concepts to learn, we settled on Next.js for now so that we could move forward and get our hands dirty without killing our Google juice.

As a side note: We’re pretty stubborn about wanting the best experience here though so we’re actually hoping that either the upcoming version of Next.js that includes React Router or else another evolving React-based solution will make it much easier out-of-the-box to build a React-solution that both renders server-side for initial page loads and lets the client run the show after that. As of yet, it’s a bumpy road to get there with existing tech and you have to evaluate the cost and overhead of trying to make that work.

Looking to use React for your next digital project? Find out how Ashday can use React to make your project a success.

Jan 31 2018

Illustration of person scratching head at fork in the road

This is the first part in a series on how not to ruin your life on your next Drupal project. Sound extreme? Well, if you’ve ever suffered the crushing defeat of working your tail off on a lengthy project only to sit there at the end, after launch, feeling like you just came out of the opening night of Star Wars: The Phantom Menace (i.e., severely disappointed and a bit confused), then you know that it is indeed extreme. We spend a majority of our day at work, and when it’s not rewarding or energy-giving, it’s a real drag.

So what is the formula? Well, a blog post isn’t going to solve all your problems - but - there are certainly key approaches that we have taken that have helped us avoid catastrophe time and time again. Translation? We’ve managed an extremely high customer satisfaction rate for over two decades. What’s been happening here seems to be working, so we pay a lot of attention to what it is exactly that we are doing and assess why we think it’s working. If you want a high-level bird's-eye view, check out our process page. We are going to get a bit more down and dirty here though.

Ultimately, we want you to go home to your family at the end of the day saying “GUESS WHAT I DID AT WORK TODAY EVERYONE!!” (like we do) instead of “Can we just order pizza and go to bed at 7?”.

 We’ve identified 3 essential components to kicking a project off right, the first of which will be covered in this post. They are the following:

  1. Aggressive and Invested Requirements Gathering
  2. Relentless Ideation
  3. Atomic Preparation

So let’s start with Aggressive and Invested Requirements Gathering. We spent a lot of time thinking about this and I realized it comes down to the adjectives. Everyone knows (mostly) about requirements gathering, but it’s a minefield of unasked questions, unanswered questions, misconceptions, forgetfulness, and chaos. The solution? Take ownership of this baby from the beginning and treat it like it’s your project - it’s your passion - and do what it takes to nail it down. Getting answers that make your life easier, despite your suspicions that the client is maybe not thinking it through, doesn’t help anyone. Take no shortcuts and care about everything.

“Take ownership of this baby from the beginning.”

Here are 3 specific goals:

Assess priorities (theirs and yours!)

Priorities are key because we can easily get hung up on things that ultimately aren’t that important. On the flip side, there are things that are tremendously important to one of the two parties, and hence, it must be important to both. So the client says I care most about X, then Y, then Z. In your head you’re thinking “Yikes, Z has a huge unknown element that I’d like to solve quickly to understand the implications.” So talk about it. Repeat their priorities back to them and state your own and find that happy middle ground where you can pursue the project in an efficient and effective way while also focusing on what matters. It sounds simple, but unspoken expectations or concerns are a plague in project management.

Determining constraints (time, money, features, personnel)

I still love the age-old project management triangle that says that for any given project, you can choose 1 of the 3 key priorities in a project: time, money or features. This means that you can’t simply dictate the budget and the schedule and also expect a very rigid set of requirements. The problem is that despite even stating this, there is a lot of pressure from the client to set the expectation on all three and that simply isn’t possible. So it’s critical early on to sort out what the real constraints are. Ok, you would like this to stay under $50k. Is that a hard cap or could you go over if you felt it was worth it? So you want this launched by January 1st. Is that more of a clean-sounding date or is this tied to a fiscal year, or some other real deadline? Ok, so you want features X, Y and Z. Which of those would be deal breakers to not have? This kind of questioning is very helpful because early on in the build phase, you can make intelligent decisions about how and when to collaborate with the client since you know the significance of obstacles or changes of directions that impact these things.

The last thing I’m throwing on top of this triangle is the concept of personnel. We’ve found that knowing who your stakeholders are, who your end users are, and who your editors and admins are - early on - is critical. I’ve literally had meetings where we’re deep into requirements and then I meet the person who has veto power over everything and the thing goes sideways. We’ve learned as well that there is a repeating sales cycle when new stakeholders arrive, because convincing the last three people doesn’t mean you’ve convinced the next three. I’ve also had times where a stakeholder makes some critical decisions, but then after talking to the people “on the ground”, I find that he was simply wrong on some of the day-to-day operations. It’s good to talk to everyone, but also find out each person’s role in the big picture. Oftentimes we’ve found ourselves advocating on behalf of lower-level employees who bring up important and practical issues that decision-makers are overlooking. It’s a delicate balance, but if the system isn’t welcomed and adopted well by its primary users, the project will sink even if the ones writing the checks are getting what they think they want.

Reading between the lines

This is tied to the item above in a lot of ways, but stands on its own as an important point. When you’ve done this long enough, you learn that most of what is asked for by a potential client is not always really the point. Often there is a hidden goal or motivation that has led to the formation of a feature request. Even if that request perfectly solves the need, it’s still important to discover that need because it can affect the implementation and guide the specifics. For example, if a request is made to let users download an export of tracking data, but you dig and find out that they’re actually just using this tool to turn around and upload it into a remote system and it’s a bit of a pain, maybe building a web service is better, where their system can talk directly to ours and users can step out of the daily grind.

Conclusion

So in summary - gather requirements the same way you date someone you’re thinking of marrying. Care about it and pursue it as if it’s the most important thing you’ve got going, with an end goal of a lifetime of happiness.

Up Next: Running a Drupal project the right way: Part 2 - Relentless Ideation 

Free offer, talk to a seasoned Drupal expert.


May 24 2017
Drupal logo used in Ashday Blog

In Drupal 7, site deployments could be rather difficult on ambitious sites. Some database-level elements were worth programming out in hook_updates (turning on modules, reverting views, etc.) and some usually weren't (block placement, contrib module configuration). I remember days when a deployment involved following a three-page-long Google doc of clicks that had to be carefully replicated. Ugh.

A New Hope

So if you've taken the dive into Drupal 8, you'll quickly discover one of its most prominent features - Configuration Management. Drupal 8's ability to manage configuration with yml files is absolutely amazing! It's nearly akin to watching Star Wars and thinking "Hey, I can do anything with a lightsaber! Fight bad guys, cut holes in doors, remove my handcuffs. Sweet!"

The Empire Strikes Back

Here's the rub. Managing Drupal 8 configuration in complex real-world apps is akin to building a real-world laser sword after watching Star Wars, only to promptly burn your face off and lose two limbs as soon as you try to fight with it. "Ambitious digital experiences" essentially equates to "arduous development concerns," and even config management can't save the day simply by existing. You must use it for good. You must unlearn what you have learned. I blogged a bit on this shortly after Drupal 8 released, but oh how much I have learned since then!

We've been doing Drupal 8 pretty heavily for about a year and a half here at Ashday and have had both the fortune and misfortune of needing to manage a more complex setup, which quickly revealed our deficiencies in understanding how to properly manage config.

Here's the scenario: A client needs a site that will become the model for many sites, but they don't want them to be a single site with multiple domains and they also don't want it to be costly or complicated to keep them mostly similar from a functional perspective. Given that our preferred hosting solution is Pantheon, this quickly turned into an obvious Upstream project. And that means figuring out a new way to manage D8 config other than just import/export of the whole site.

If you aren't familiar, a Pantheon Upstream works nearly identically to their core updates - you have a remote repository that, upon code getting pushed to it, notifies you through the dashboard of your updates, where you can apply them in the same way you do Pantheon core updates. It's pretty slick because it provides an easy way to have a big shared chunk of code and apply updates to many sites with a few clicks (well, except when nearly every update is major and requires hands-on management - but I'm not bitter).

The Phantom Menace

Our first try at this was to give the Features module a go, but at the time the interface was just too buggy to give us enough confidence to rely on it: it auto-selected what we didn't want, didn't select what we did, and didn't support some key things we needed, like permissions. As a result we decided to home-brew our own solution. We knew these sites were going to have a lot of config in common and a lot of config unique to each, and we needed to deploy to many of them, all in different states, without tragedy striking. So to accomplish this, we concocted the following procedure that we would run at deployment time, all from a single drush command (a rough sketch of the copy-and-strip step follows the list).

  • Export the current site's live config (using drush) to the config sync folder
  • Copy all config files (with uuids removed) in our cross-site custom module over top of the config sync directory
  • Copy all config files (with uuids removed) in our site-specific custom module over top of the config sync directory
  • Import all config.
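
Here's a rough sketch of what the copy-and-strip step looked like conceptually. The function name and directory variables are placeholders; the real thing lived inside our custom drush command.

use Drupal\Core\Serialization\Yaml;

/**
 * Copies a module's config files over the sync directory, minus uuids.
 */
function mymodule_overlay_config($source_dir, $sync_dir) {
  foreach (glob($source_dir . '/*.yml') as $file) {
    $data = Yaml::decode(file_get_contents($file));
    // Drop the uuid so the target site's existing config entity is reused.
    unset($data['uuid']);
    file_put_contents($sync_dir . '/' . basename($file), Yaml::encode($data));
  }
}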

What this gave us was the ability to let each upstream site stray a bit as it needed to, while being assured that the config we cared about was prioritized in the proper stacking order. The approach ultimately wasn't that different from the goal of Features, but we were in control of the process, it was all live, and it was relatively quick. And you know what? It worked! For a while...

And then it didn't. You see, the method we used caused Drupal to see every config file we were tracking (upwards of 300) as changed, simply because of the missing uuid. So if only 8 config files changed in a deployment, Drupal was attempting to import hundreds of config files every time. This meant that deployments started to slow significantly over time as the site grew in complexity, and eventually we started having timeout issues and long deployments. We also started to run into issues when there was a significant core update (i.e. 8.3) because so much config was being imported unnecessarily that wasn't compatible at that moment with the new code, since db updates hadn't run yet. Not good. It was time for something else.

Return of the Jedi

The Jedi in question here is again the Features module. Or maybe it's Mike Potter. At least it's not me anyways. At DrupalCon Baltimore, I was set on speaking with Mike about how we were handling config because I simply knew there was a better way. If you don't know, Mike is one of the founders of the Features module and ran a great BOF on config management in Baltimore.

So I found this delightful man and laid out what we were doing and he reacted exactly as I had hoped. He didn't say that what we were doing was terribly wrong, but it made him visibly uneasy. After a chat, I discovered that Features had come a long way since we initially tried to use it and we should really give it another shot. He also explained some of the configuration of Features to help me better understand how to use it.

So we returned to Features and are much happier for it. The thing is, though, that I don't think I would have really known how to manage it if we hadn't taken the deep dive into config and figured out how it needed to work. It all helped us decide how to incorporate Features properly for this particular situation, so that I actually feel good about relying on it again. And that's how most good Drupal development goes. You really should know how something works before simply relying on a contrib module or someone else's code to take care of everything, because otherwise you won't really know how to deal with problems - heck, you might not even know you have a problem! I personally don't prefer spending weeks writing code and then depending at a critical moment on a mysterious piece to make it all successfully roll out to production.

So as it all played out, we now understand what Drupal puts in config, what we care about and what we don't, what belongs in the upstream vs. our site-specific modules vs. nowhere, etc. Here is our current process after this six-month-long journey:

  • Revert the global base feature
  • If needed, revert the site specific feature
  • Run our previous script outlined above, but now on only the 5 or 6 role config files so we handle the permissions in the same fashion

So there you have it! For how long-winded this turned out to be, I'm glossing over a lot of details that are pretty critical to understanding Drupal 8 configuration (e.g., blocks are a mix of config and content), but I recommend you do the same thing we did: really get your hands dirty and understand what's going on so that you don't get bit at rollout. After all of this, we feel even more strongly that Configuration Management is an astoundingly useful component of Drupal 8, and now we find ourselves a bit sad when we update our Drupal 7 sites (a version we absolutely loved!) where we don't have this amazing tool.

So good luck and don't hesitate to drop us a note if you have any questions or thoughts on this stuff. I'll probably change my mind on all of it anyways tomorrow. That's why this job is awesome.

P.S. I apologize that I didn't find room to incorporate Attack of the Clones, Revenge of the Sith, The Force Awakens or Rogue One, but the reality is that I just didn't have time to modify our whole approach to configuration in order to make this blog post more cohesive.

Offer for a free consultation with an Ashday expert
