Jul 31 2018

Topic clusters have been a hot topic in the SEO community lately. They shift the emphasis in SEO away from individual keywords and toward broader categories. Instead of optimizing a page for a keyword like “reduced fat mozzarella cheese”, the goal is to create valuable content for a strategic category such as “cheese”. By covering multiple topics within a category and linking those pages to the main topic page, businesses gain authority and search performance for the entire topic cluster.

I agree that it’s a great idea, I’m just not so sure that it’s a “new” one. Organizing by topic clusters is old news for Drupal; it has had this capability for years. If you have a Drupal website, you may be ahead of the trend and well positioned for changing SEO strategies. Even if you haven’t designed your website around content categories, your Drupal website already has the tools you need to organize around topic clusters.

What are Topic Clusters for SEO?

Topic clusters for SEO really got their start with the Hummingbird update to Google search in 2013. In this release, Google began to pay more attention to the context of content. Instead of focusing on the individual words in a search, Google began to weigh the intent behind what the person is trying to find out. This pushed the focus of SEO strategy toward content, not just keywords.

The first article I remember being written on the subject was at moz.com: "Building SEO-Focused Pages to Serve Topics & People Rather than Keywords & Rankings." In this Whiteboard Friday blog, Rand Fishkin wrote about how SEO is more effective when focused on a topic.

"With updates like Hummingbird, Google is getting better and better at determining what's relevant to you and what you're looking for. This can actually help our work in SEO, as it means we don't have to focus quite so intently on specific keywords.... And this is why we're seeing this big shift to this new model, this more modern model, where SEO is really about the broad performance of search traffic across a website, and about the broad performance of the pages receiving search visits.”

As a result of Hummingbird and other updates, experienced SEO providers began changing their tactics. Even so, HubSpot's May 2017 article, "Topic Clusters: The Next Evolution of SEO," generated a lot of attention in the SEO world. This blog clearly defined the topic cluster strategy. Mimi An states:

“SEO is now shifting to a topic cluster model, where a single 'pillar' page acts as the main hub of content for an overarching topic and multiple content pages that are related to that same topic link back to the pillar page and to each other. This linking action signals to search engines that the pillar page is an authority on the topic, and over time, the page may rank higher and higher for the topic it covers. The topic cluster model, at its very essence, is a way of organizing a site’s content pages using a cleaner and more deliberate site architecture.”

Because these articles have done a great job of explaining why we should be using topic clusters, I’m not going to expand on this.

What I bring to the table is years of experience in specialized Drupal SEO, including helping clients organize their websites into topic clusters. And one thing is sure: Drupal 8 is great at organizing content around topic clusters. That isn't new at all.

How to Organize Around Topic Clusters in Drupal 8

One of Drupal’s greatest features is the way it manages, stores, and displays content, which is one reason why it is such a great CMS for marketers. Drupal gives you the control and flexibility necessary to customize how and where your content is displayed. This flexibility sets Drupal apart from other CMSes.

Drupal 8 has two core modules that allow you to organize content around topic clusters: Taxonomy and Views. (Incidentally, these features are available or can be added to older versions of Drupal, too.)

The Taxonomy system controls the categorization of your content. Other CMSes call these "categories", but in Drupal it's a little different. And, in a way, "taxonomy" is a more accurate description anyway.

The first step to any topic cluster strategy is to choose your topics. The topics should be driven by your business strategy and the Taxonomy should be determined by the business requirements of your website.

To choose topic clusters, think about how your customers categorize what you do or sell. Consider what key issues your customers are dealing with when they are searching for your website. Once you’ve chosen one or more high-level topics, think about how the topic cluster can be broken down into subcategories.

[Image: a grocery store organized by product taxonomy]

Your local grocery store is a good example of how this should work. Grocers divide the store into aisles of related products. In the dairy aisle, you will find products further segmented into cheese, sour cream, yogurt, etc. Think about how crazy your shopping trip would be if the grocers just put the items in the store without categorization.

Instead of simple trips up and down aisles, you would probably have to navigate back and forth across the store. (Insert sound of shopping carts crashing here!) Would you go back? When it is easy and quick to find what you want, you are more likely to go back for more.

Your website should work the same way. The product segments, like cheese, make up topic clusters; Taxonomy is like the refrigerator aisles and shelves that neatly hold the products.

Real-Life Drupal Examples of Topic Clusters

Now let’s get into how this really works in Drupal. Once you have chosen your topic clusters, you can use Taxonomy to organize and display your content.

(Really quick aside: the Taxonomy system contains multiple Vocabularies. Each Vocabulary contains many Terms. It's confusing at first but makes a lot of sense once you understand it. Just remember: Taxonomy > Vocabulary > Term.)

An often-used Vocabulary is called "Tags". The Tags vocabulary holds all of the terms that you've used to tag content on your site. I think everyone knows what tagging an article on your website means, but just in case: a tag is a small snippet of text that describes what an article is all about. For example, this article is tagged with "Drupal 8", "Drupal Tips", and "SEO".

When you tag content as it’s created, Drupal’s taxonomy system creates a term in the Tags vocabulary. By tagging each piece of content, you connect, relate, and classify your content. Each term can be reused for other content as well.

Taxonomy keeps track of all the content that has a particular term applied to it and provides menu and navigation schemes to view and display that content together on a single page, called a "Term page". Hmmm...that sounds a lot like a content cluster, doesn’t it?

If a user selects any category term, Drupal will display the content links tagged with that term. Drupal automatically creates searchable topic cluster pages with links to relevant content.

Here's an example from the Volacci.com website. Our whole website is about Drupal SEO, but we have topics within that subject, such as Drupal 8 SEO and Drupal News. On this website, Drupal SEO is the main topic cluster. Within that topic, we have detailed content, such as Drupal 8 SEO lead generation, that adds up to a broad range of knowledge about Drupal SEO.

[Screenshot: "Why Amazon Bought Whole Foods" article on volacci.com]

Under the title and subtitle, you’ll see the tags I chose for that article — some already existed, some were new. Once I put the tags on this blog, Drupal automatically created a searchable page for these categories. When the reader clicks on the “Planet Drupal” tag, they see this:

[Screenshot: the taxonomy topic cluster page for "Planet Drupal" on volacci.com]

I don’t have to do anything to create this page; Drupal will just show all the content that is published and tagged. The module that does this for you is Views. Views is a core module in Drupal 8 but is also available as an add-on in previous versions.

The Views module allows you to take the default taxonomy term view and present it differently without any additional coding. For example, you can filter the results or sort them alphabetically. A great example of this is customer information: you can create customer views that show current orders sorted by product or date.

Once you've selected the topic cluster you want to view, Views allows you to display the content in multiple forms. For this particular view, I added the author's name and picture. Now the user can click on one of the links to read more about the topic category, or even explore a new category.

This is a very simple example of how Drupal creates Google-searchable pages around topic clusters. You can even try it for yourself with this post. Take a look at the tags under the title of this post right now and click on the "Drupal 8" tag. Go ahead. I'll wait.

You should have gotten a page that looks something like this:

[Screenshot: the "Drupal 8" taxonomy topic cluster page on volacci.com]

Implement Topic Clusters on Your Drupal Website

Would you like your Drupal website to be easy to explore? If you need some help setting up your website for topic clusters, contact Volacci. Using our proven processes and strategies for Drupal SEO, we can work with you to choose topic clusters and implement them on your website.

Jul 31 2018
[Image: Day Orange, 04.08.2018. Source: Facebook]

On August 4th, 2018, the so-called Day Orange took place. The day was organized by Seebrücke, an organization that expresses solidarity with all people seeking refuge and demands safe escape routes, the decriminalization of sea rescue, and a humane welcome for people who have had to flee or are still on the run.

We wanted to show our support and transformed our usual green logo icon into an orange heart for Day Orange.

On Day Orange, everyone who wished to support the organisation and its goals was invited to show solidarity with the refugees and the many volunteer rescue workers. It was not even necessary to go to a demonstration; the point was to stand up for something together, and the colour orange could be used in many different ways to show that solidarity.

[Image: Our Seebrücke logo]

We very much welcome this commitment and show our support with our logo. We show our colours: we are orange.

Jul 31 2018

Everything was working great… and then all the tests broke.

This is the story of how adding a single feature to an app can break all of your tests, and the lessons that can be learned from it.

The Feature that Introduced the Chaos

We are working on a Drupal site that makes use of a multisite approach. In this case, that means different domains are pointed at the same web server and the site reacts differently depending on which domain you are referencing.

We have a lot of features covered by automatic tests in WebdriverIO – an end-to-end framework that tests things using a real browser. Everything was working great, but then we added a new feature: a content moderation system defined by the Workflow module recently introduced in Drupal 8.

The Problem

When you add the Workflow module to a site – depending on the configuration you choose – each node is no longer published by default; it stays unpublished until a moderator decides to publish it.

So as you can imagine, all of the tests that were expecting to see a node published after clicking the save button stopped working.

A Hacky Fix

To fix the failing test using WebdriverIO you could:

  1. Log in as user A.
  2. Fill in all the fields on your form.
  3. Submit the node form.
  4. Log out as user A.
  5. Log in as user B.
  6. Visit the node page.
  7. Publish the node.
  8. Log out as user B.
  9. Log back in as user A.
  10. Make the final assertions.

Here's a simpler way to fix the failing test:

Keep your current test that fills in the node form and saves it. Then, before checking whether the result is published, open another browser, log in with a user that can publish the node, publish it, and then continue the rest of the test with the first browser.

Multiremote Approach

To achieve this, Webdriver IO has a special mode called multiremote:

WebdriverIO allows you to run multiple Selenium sessions in a single test. This becomes handy when you need to test application features where multiple users are required (e.g. chat or WebRTC applications). Instead of creating a couple of remote instances where you need to execute common commands like init or url on each of those instances, you can simply create a multiremote instance and control all browsers at the same time.

The first thing you need to do is change the configuration of your wdio.conf.js to use multiple browsers.

exports.config = {
    // ...
    capabilities: {
        myChromeBrowser: {
            desiredCapabilities: {
                browserName: 'chrome'
            }
        },
        myFirefoxBrowser: {
            desiredCapabilities: {
                browserName: 'firefox'
            }
        }
    }
    // ...
};

With this config, every time you use the browser variable, the actions are repeated in each browser.

So, for example, this test:

    var assert = require('assert');

    describe('create article', function() {
        it('should be possible to create articles.', function() {
            browser.login('some user', 'password');

            browser.url('http://example.com/node/add/article');
            browser.setValueSafe('#edit-title-0-value', 'My new article');
            browser.setWysiwygValue('edit-body-0-value', 'My new article body text');

            browser.click('#edit-submit');
            browser.waitForVisible('.node-published');
        });
    });

will be executed multiple times with different browsers.

Each step of the test is executed for all the browsers defined.

Instead of using browser you can make use of the keys defined in the capabilities section of the wdio.conf.js file. Replacing browser with myFirefoxBrowser will execute the test only in the Firefox instance, allowing you to use the other browser for other types of actions.

Using the browser name, you can specify where to run each step of the test.
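Building on that, here's a minimal sketch of mixing global and per-browser steps, using the capability keys defined in the config above (the URLs are placeholders):

    // Runs in every session defined in the capabilities.
    browser.url('http://example.com/node/add/article');

    // Runs only in the Firefox session, e.g. for a second user's actions.
    myFirefoxBrowser.url('http://example.com/admin/content');

    // Back to all sessions.
    browser.waitForVisible('.node-published');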

The Custom Command Problem

If you take a deeper look at the previous code, you will notice three special commands that are not part of the WebdriverIO API. login, setValueSafe and setWysiwygValue are custom commands that we attach to the browser object.

You can see the code of some of those commands in the drupal-elm-starter code.

The problem is – as @amitai realized some time ago – that custom commands don't play really well with the multiremote approach. A possible solution to keep the custom commands available in all of the browsers is to use a class that wraps the browser object, similar to the PageObject pattern.

An example of the code is below:

    class Page {

        constructor(browser = null) {
            this._browser = browser;
        }

        get browser() {
            if (this._browser) {
                return this._browser;
            }
            // Fall back to one of the global browsers created from the capabilities.
            return myChromeBrowser;
        }

        visit(path) {
            this.browser.url(path);
        }

        setWysiwygValue(field_name, text) {
            this.browser.execute(
                'CKEDITOR.instances["' + field_name + '"].insertText("' + text + '");'
            );
        }

        login(user, password) {
            this.visit('/user/login');
            this.browser.waitForVisible('#user-login-form');
            this.browser.setValue('#edit-name', user);
            this.browser.setValue('#edit-pass', password);
            this.browser.submitForm('#user-login-form');
            this.browser.waitForVisible('body.user-logged-in');
        }

    }

    module.exports = Page;

So now, you have a wrapper class that you can use in your tests. You can create multiple instances of this class to access the different browsers while you are running a test.

    var assert = require('assert');
    var Page = require('../page_objects/page');

    describe('create article', function() {
        it('should be possible to create articles.', function() {
            let chrome = new Page(myChromeBrowser);
            let firefox = new Page(myFirefoxBrowser);

            chrome.login('some user', 'password');
            firefox.login('admin', 'admin');

            chrome.visit('http://example.com/node/add/article');
            chrome.setValueSafe('#edit-title-0-value', 'My new article');
            chrome.setWysiwygValue('edit-body-0-value', 'My new article body text');
            chrome.browser.click('#edit-submit');

            // Here is where the second browser starts to work.
            // This clicks the publish button of the workflow module.
            firefox.visit('/my-new-article');
            firefox.browser.click('#edit-submit');

            // Once the node was published by another user in another browser
            // you can run the final assertions.
            chrome.browser.waitForVisible('.node-published');
        });
    });

What About Automated Tests?

You may also be wondering: does this work seamlessly for automated tests? And the answer is: yes. We have only tried it using the same browser version in different instances. This means that we trigger several Chrome browser instances that act as independent browsers.

If you have limitations in how many cores are available to run tests, that should not limit how many browsers you can spawn. They will just wait their turn until a core becomes available. You can read more on how we configure Travis to optimize resources.

As you can see, having multiple browsers available to run tests simplifies their structure. Even if you know that you will not need a multiremote approach at first, it may be a good idea to structure your tests using this browser wrapper, as you never know whether you will need to refactor all of your tests to run things differently in the future.

This approach can also help refactor the ideas provided by one of our prior posts, Using JSON API with WebdriverIO Tests, so you don't need to worry about logging in with the right user to make the JSON requests.

Jul 30 2018

The Drupal Security Team will be coordinating a security release for Drupal 8 this week, on Wednesday, August 1, 2018. (We are issuing this PSA in advance because, in the regular security release window schedule, August 1 would not typically be a core security window.)

The Drupal 8 core release will be made between 16:00 – 21:00 UTC (noon – 5:00pm EDT). It is rated as moderately critical and will be an update to a vendor library only.

August 1 also remains a normal security release window for contributed projects.

Updates

  • 2018-07-31 — Made the time window consistent with the normal security release window.
  • 2018-08-01 — Added UTC times in addition to EDT.
Jul 30 2018
Comparing Drupal POS, Shopify POS and Square POS


If you need to accept card payments in a physical location, you need a point of sale (POS) system. There are many different POS systems out there, so knowing how to choose the right one for your business can be challenging. All systems claim to be everything you need; however, this might not be the case for all businesses. Most POS systems are designed around "industry best practices," meaning that they try to serve the majority of businesses based on the most common needs. Many systems start to fail when the requirements of the business break away from the norm.

How do you choose the right point of sale for your business? The best way I've found is to look at three or four different examples and do a direct comparison. Today I'll compare three different web-based point of sale systems: Drupal POS, Shopify POS, and Square POS. I'll look at features, costs, usability, integrations, and more. In the end, I'll try to understand the strengths and weaknesses of each and ultimately determine what business types they work best with.

All of the POS systems I examine today are web-based (or cloud-based). This means that these systems are connected to the internet and all of the data is kept online. Web-based systems are becoming increasingly popular because they are generally easier to set up and require less time and knowledge to maintain. They can also integrate with your eCommerce store. You can read more benefits here.

The point of sale systems

Here is an introduction to the three POS systems I’ll be comparing.

Drupal POS

Drupal POS is a free add-on to the popular Drupal content management system. Drupal is open-source and completely free to use. It’s known as a very developer-friendly platform to build a website on and has a massive community, over a million strong, helping to advance the software and keep it secure. The open-source eCommerce component for Drupal is called Drupal Commerce. While Drupal Commerce has a relatively small market share, the platform is very powerful and can be a very good choice for businesses that have demanding requirements or unique product offerings.

Shopify POS

Shopify POS integrates with the popular Shopify SaaS eCommerce platform. Unlike Drupal Commerce, Shopify is a standalone product and stores running on the platform pay a monthly subscription fee to use it. With that said, business owners are given a well developed tool out-of-the-box that has all of the bells and whistles most stores require to get up and running fast. Shopify aims to serve the common needs of most businesses, so very unique business requirements can be hard to achieve.

Square POS

Square POS is an add-on point of sale service for your business and is not really a platform for running your entire store, although it does now offer a basic eCommerce component. It can also integrate with many eCommerce platforms, including Drupal Commerce. Square aims to make the process of accepting card payment easy to do, without bulky equipment.

Service comparison

Below is a side-by-side comparison of each service (as of July 2018). Note that some of the information below applies to stores that also have an eCommerce component. If you don't need eCommerce, you can ignore those items.


 

| Feature | Drupal POS | Shopify POS | Square POS |
| --- | --- | --- | --- |
| Service philosophy | Open-source | Proprietary | Proprietary |
| Service support | Yes (via Drupal Commerce, in-house IT, or third-party support) | Yes (via Shopify or third-party support) | Yes (via Square) |
| Setup costs for basic service | $0 (the software costs nothing to use, but you may need to pay someone to set it up for you) | $29 USD (basic package pricing) | $0 |
| Ongoing costs for basic service | $0 (the software costs nothing to use, but you may need to pay someone to apply occasional software updates; third-party transaction fees may apply; website domain and hosting also required) | $29/mth plus transaction fees and add-on product fees; monthly fee increases with package | Transaction fees and add-on product fees |
| Payment gateways | Third-party | Shopify or third-party | Square |
| Accept cash payments | Yes | Yes | Yes |
| Accept card payments | Yes | Yes | Yes |
| Save cards (card on file) | Yes | Yes | Yes |
| Process recurring payments (i.e. subscriptions) | Yes | Yes (third-party add-on required, with separate monthly fees) | Yes |
| Accept mobile payments | Yes (third-party hardware required) | Yes (monthly fee for service hardware) | Yes ($59 USD one-time price for service hardware) |
| Built-in invoicing | Yes (using free add-on) | Yes | Yes |
| Apply discounts and promotions | Yes | Yes | Yes |
| Use with gift cards & coupon codes | Yes | Yes (not available for basic plan) | Yes |
| Printed gift cards provided by service | No (an add-on could be created to allow this functionality, but it does not currently exist) | Yes (additional fee for printing) | Yes (additional fee for printing) |
| Integrated taxes | Yes (advanced taxes can be handled via third-party add-ons or configured directly within the platform) | Yes | Yes (third-party add-ons required) |
| Apply additional custom fees (i.e. environment fees, tipping, donations, etc.) | Yes | Yes | Yes (limited to tipping) |
| Built-in eCommerce shop | Yes (Drupal POS is an add-on for Drupal Commerce) | Yes (Shopify POS is an add-on for Shopify) | Yes (basic Square store, or integrate with third-party platforms) |
| Built-in website and blog | Yes | Yes | Yes |
| Multi-business (separate businesses using the same platform or account) | Yes | No (separate account required for each business) | No (separate account required for each business/bank account) |
| Multi-store (multiple locations or stores of the same business) | Yes | Yes | Yes |
| Number of products allowed | Unlimited | 2,000-7,000 (depends on the device used to manage inventory) | Unlimited (the Square eCommerce store only displays 1,000 products; a third-party platform is needed to run a larger store) |
| Number of product variations allowed | Unlimited | 4,000-10,000 (depends on the device used to manage inventory) | Unlimited (same eCommerce display limit as above) |
| Number of registers allowed | Unlimited | Unlimited | Unlimited |
| Number of cashier accounts allowed | Unlimited | 2 (number of accounts increases with service plan) | Unlimited |
| Access controls | Yes | Yes | Yes (additional fee of $6/employee) |
| Create new user roles for advanced access controls | Yes | No | Yes (grouped with the additional fee above) |
| Mobile POS (i.e. use at trade shows, markets, etc.) | Yes | Yes | Yes |
| Sync inventory between online and offline stores | Yes | Yes | Yes (third-party platforms may not be able to sync inventory) |
| Sync user accounts between online and offline stores | Yes | Yes | Yes |
| Sync orders between online and offline stores | Yes | Yes | Yes |
| Park & retrieve orders | Yes | Yes | Yes |
| Abandoned cart recovery (eCommerce) | Yes (using free add-on or third-party solutions) | Yes | Yes (requires third-party solutions) |
| Generate product labels | Yes | Yes | Yes |
| Print receipt | Yes | Yes | Yes |
| Email receipt | Yes | Yes | Yes |
| Customize receipt information | Yes | Yes (no layout customization, only the information shown) | Yes (no layout customization, only the information shown) |
| Process returns | Yes | Yes | Yes |
| Basic reporting | Yes | Yes (not available for basic plan) | Yes |
| Advanced reporting | Yes (using free add-on) | Yes (not available for basic or mid-tier plans) | Yes |
| Supported operating systems | Any (requires only a web browser) | Android, iOS (requires app; iPad recommended, with limited support for iPhone and Android) | Android, iOS (requires app) |
| Themable (i.e. brand the POS interface) | Yes | No | No |
| Customer-facing display | Yes | No | No |
| Integrate with accounting/bookkeeping services | Yes | Yes | Yes |
| Integrate with other eCommerce sales platforms (Amazon, eBay, etc.) | Yes | Yes | Yes (only if using a third-party eCommerce platform that supports this) |
| Integrate with marketing services (MailChimp, HubSpot, etc.) | Yes | Yes | Yes (only if using a third-party eCommerce platform that supports this) |
| Integrate with shipping providers (FedEx, UPS, etc.) | Yes | Yes | Yes |
| Third-party calculated shipping rates | Yes | Yes (not available for basic or mid-tier plans) | No |
| Generate shipping labels | Yes | Yes | Yes (integration with ShipStation adds this functionality for an extra monthly cost) |
| Custom integrations with third-party services | Yes | Yes | Yes |
| Use offline (and have your transactions sync once back online) | No (a requested feature currently in discussion) | Yes (can only accept cash or other manual payments) | Yes |
| Personalized customer feedback/support | Yes | Yes | Yes |

Hardware Requirements

| Hardware | Drupal POS | Shopify POS | Square POS |
| --- | --- | --- | --- |
| Cashier terminal | Third-party (anything that runs a web browser: computer, tablet, phone, etc.) | Third-party (iPad recommended, with limited support for iPhone and Android) | Third-party (any device running Android or iOS) |
| Card reader | Third-party | Provided | Provided |
| Contactless payment | Third-party | Third-party | Proprietary only |
| Cash drawer | Third-party | Third-party | Third-party |
| Barcode scanner | Third-party (a traditional barcode scanner or anything with a camera, i.e. phone, tablet, webcam, etc.) | Third-party | Third-party |
| Receipt printer | Third-party | Third-party | Third-party |
| Barcode printer | Third-party | Third-party | None |
| Customer-facing display | Third-party (anything that runs a web browser: computer, tablet, phone, etc.) | None | None |
| Custom/DIY hardware | Yes | No | No |

What business is best suited for each POS?

As you can see, all three options have most of the same features. Most businesses would probably be fine with any of them, but let's see if we can distill where each system fits best.

Drupal POS

Who’s it for?

If you have a medium to large business with unique business requirements, Drupal POS could be the ideal platform for you to work with. For a small business, Drupal POS and Drupal Commerce might not be the right fit: the initial cost to get a site built might be too high for your budget. However, if you look at the long-term fees charged month by month by the other vendors, this upfront cost pays for itself in a matter of time. Also, if you have a really obscure need that no other platform will accommodate, Drupal Commerce can.

If you're already running a Drupal Commerce store and now want to add point of sale to your physical locations, Drupal POS is probably a no-brainer. It's built on top of the existing Commerce architecture, so you know it will integrate properly in every way, and you can utilize your existing web development service provider to help you set it up.

Additional details:

If you're not already using Drupal, then you have some larger questions to consider. Do you already have an eCommerce website? Would you be willing to invest in replatforming? Since Drupal Commerce is an eCommerce platform, you would ideally be running your whole operation from it. That's not necessarily a bad thing, though. Drupal can readily handle any business case you throw at it. It can integrate with virtually any third-party service, it provides a single location to manage all of your products, orders, customer accounts, etc., it's built to scale with your business, and on top of all that it's a powerful content management system that will run your blog and any other content need you might have.

From a support point of view, because Drupal is open-source, you don't have a single source of support to contact. Instead, you would need to utilize your current web development service provider (if you have one), or work with one of the many Drupal agencies out there that specialize in Drupal development. This means you can shop around and find the company that will work best with you.

Another advantage of Drupal POS (and Drupal as a whole) is that because it's free, open-source software, you don't pay any fee to use it. Not one cent. You can have as many stores, products, staff accounts, transactions, registers, etc. as you need, and the price is still $0. Instead of spending your hard-earned money on platform fees, you can redirect those funds toward developing your website and POS to do whatever you need, or toward marketing, staffing, or growing your business.

Shopify POS

Who’s it for?

If you're a small to medium-sized business that is just getting started, you don't have a large budget, and you want the best eCommerce site with POS capabilities, Shopify and Shopify POS are probably your best bet. Also, if you're already running a Shopify site and are happy with it, Shopify POS is probably ideal for you.

If your business is growing, or you run a large, enterprise-level company, Shopify and Shopify POS probably won't cut it. For one, the fees associated with this level of company can be significant. If you're at that point, replatforming to something like Drupal Commerce can recoup a lot of lost earnings and give you full control of your development path, without restrictions.

Additional details:

Shopify has built their business around being easy. Whether it’s opening up a new store or managing your inventory and customers, the Shopify interface is clean and straightforward. As mentioned earlier, it’s ideal for small and medium sized companies just getting started.

However, where Shopify starts to fail is when your business growth is strong and your requirements become more complicated. With Shopify, the number of products and product variations you're allowed can limit your growth. As you start adding more staff, your costs go up. You can go from a $29/mth plan to a $300+/mth plan in short order.

Another possible deal-breaker is if your product offerings have very unique requirements. Shopify is built to work around the most common business requirements; when your business breaks out of this mold, the platform isn't designed to accommodate it. However, if you can stay within the "typical" business requirements, Shopify probably has everything you need, as long as you're willing to pay for it.

Square POS

Who’s it for?

Square POS is great for small businesses and food service businesses. It’s an easy to use, low-cost option that doesn’t really require anything more than your phone and the provided card reader. Their software interface is clean and easy to understand.

If you’re a medium to large business, or you have very high traffic, Square POS might not be for you. Square is mainly an add-on service to existing businesses, so don’t expect much from an eCommerce perspective. 

Additional details:

Square has become a pretty common sight around town these days, especially at small businesses such as cafes, or when walking around a farmers/artisan market. Square provides a very good product that allows people to jump into card transactions easily. It fills this need.

When your business grows and you start having multiple stores and an eCommerce component, you may quickly grow beyond Square's capabilities. Drupal POS and Shopify POS both have native eCommerce platforms that they work with. This is important when you're talking about inventory management and other integrations. While Square does have a basic eCommerce component and can integrate with various eCommerce platforms (Drupal Commerce being one of them), you may struggle to get some of the features that Drupal Commerce and Shopify have by default.

Your point of sale integrator

Acro Media is an open-source eCommerce development agency. Our experience in this area is vast, and we would love to share it with you. If you have a project that you'd like to discuss, one of our friendly business developers is always available to have that discussion at no cost to you.

Contact Acro Media Today!

Jul 28 2018

This month's Drupal meetup was held at 91Springboard in Koramangala. We are back after a long time, and that's thanks to 91Springboard for providing us with the venue. Snacks at the meetup and lunch afterwards were courtesy of Axelerant.

We had a total of 36 attendees from various companies in Bangalore. Here is a chart that shows the distribution of attendees by company. Notice that SpecBee had the largest participation, with 14 of their team members attending.

[Chart: attendees by company, Drupal Bangalore Meetup, July 2018]

We started the day at 10 AM with Taher introducing the meetup and the schedule, and talking about some of the happenings in the Drupal community. He talked about some of the new features in Drupal 8.6, initiatives that are going to be stable soon, and some of the events like Drupal Europe and BADCamp.

[Photo: Drupal Bangalore Meetup, July 2018]

Sessions

The first session of the day, presented by Malabya, was on improving the developer workflow. Malabya talked about various aspects of development, including setting up the development environment with DrupalVM (Vagrant) or Lando (Docker), managing the codebase, version control, dependency management with Composer, deployment, and many other best practices around development (not just Drupal development, but general programming).

[Photo: Drupal Bangalore Meetup, July 2018]

This was followed by a talk by Parvateesam about how contributing to Drupal improves your career. Parvateesam talked about various kinds of contribution and the benefits of contributing, particularly to your career, and shared his own journey contributing to Drupal and speaking at various events. Everyone was impressed with how he started off as a speaker at DrupalCamp Hyderabad a couple of years back and has now been selected to speak at Drupal Dev Days and to volunteer at Drupal Europe. After this talk, several contributors present spoke about their own journeys.

[Photo: Drupal Bangalore Meetup, July 2018]

We took a break after this session for coffee and snacks courtesy of 91Springboard and Axelerant. This also included a brief opportunity to network.

[Photo: Drupal Bangalore Meetup, July 2018]

The third talk of the day was a lightning talk on Rollout by Napoleon Arouldas. Napoleon described the typical problems faced when deploying code and how we can make the whole process better by using a tool like Rollout. He also handed out coupons so that attendees could try out Rollout for free.

[Photo: Drupal Bangalore Meetup, July 2018]

The last topic of the day was a discussion, facilitated by myself, on the Drupal Governance initiative. After a very fruitful discussion and a walkthrough of a group interview, we ended the day with pizzas, courtesy of Axelerant.

[Photo: Drupal Bangalore Meetup, July 2018]

Drupal Meetup

This was one of the better-organised meetups, thanks to the efforts of the organising team, especially Taher Jodhpurwala. It was made even better by the generous support of 91Springboard for the venue and Axelerant for snacks and lunch. You can find more photos from the venue below, or just watch the video to fly through the photos.

[Embedded video: a fly-through of the meetup photos]

If you prefer just the photos, here they are:

[Photos: Drupal Meetup Bangalore, July 2018]

I’d like to thank the organisers, sponsors, and attendees for making this meetup a success. See you all at the end of August for our next meetup.

Jul 28 2018

You know the scenario: you want to list nodes that have the same taxonomy term(s) as the node you are currently viewing. Easy, but you also want to exclude the currently-being-viewed node from the list. That part always trips me up.

Each time I have to do this, I read a blog post or two, or a Drupal issue or two, and still I always end up with a quirk. Here's what I normally do:

  1. Create the view.
  2. Add a contextual filter for the taxonomy field you want to filter by.
  3. Choose "Provide default value".
  4. Select "Taxonomy term ID from URL".
  5. Check "Load default filter from node page, that's good for related taxonomy blocks".
  6. Check "Limit terms by vocabulary".
  7. Click Apply.

Now I'm Stuck

This gives you a list of nodes related to the current one, but the current node will always show up in your list. If you edit that contextual filter, expand the 'More' tab at the end, and then choose 'Exclude: If selected, the numbers entered for the filter will be excluded rather than limiting the view', you will be forgiven for thinking this will exclude the current node. IT WON'T. In this case, it will exclude the currently selected taxonomy term, which is the opposite of what you want to do.

The Solution? Another Contextual Filter

  1. Create another contextual filter for 'ID', as in, the node ID.
  2. Choose "Provide default value".
  3. Select "Content ID from URL".
  4. Scroll to the bottom of the page and expand the 'More' tab.
  5. Check 'Exclude: If selected, the numbers entered for the filter will be excluded rather than limiting the view'.

Now, the second filter will exclude the currently-being-viewed node, while the first filter will do the related-node-taxonomy-magic-dance.

Jul 26 2018

Since Layout Builder was added to Drupal core in 8.5, Lightning has had plans to adopt it and retire Panels and Panelizer. We've been working hard at closing the feature gap between the out-of-the-box Layout Builder and what Lightning Layout currently provides. At the same time, we've added some significant new features to Layout and made massive architectural changes to the storage of blocks created as part of a layout.

Here's a quick peek at creating a landing page using Lightning Layout.

Note that this branch of Lightning and Lightning Layout are both at alpha stability. We're hoping for beta stability this winter and a full release with a migration path early next year. This graphic shows the high-level features of each branch.

Resources

# Create a new project based on the Lightning project template.
$ composer create-project acquia/lightning-project MYPROJECT
$ cd MYPROJECT
# Require the alpha release of Lightning without updating yet, then update.
$ composer require acquia/lightning:4.0.0-alpha2 --no-update
$ composer update
# Install Drupal as you would normally.
Jul 26 2018

Intro

In this post, I’m going to run through how I set up visual regression testing on sites. Visual regression testing is essentially the act of taking a screenshot of a web page (whether the whole page or just a specific element) and comparing that against an existing screenshot of the same page to see if there are any differences.

There’s nothing worse than adding a new component, tweaking styles, or pushing a config update, only to have the client tell you two months later that some other part of the site is now broken, and you discover it’s because of the change that you pushed… now it’s been two months, and reverting that change has significant implications.

That’s the worst. Literally the worst.

All kinds of testing can help improve the stability and integrity of a site. There’s Functional, Unit, Integration, Stress, Performance, Usability, and Regression, just to name a few. What’s most important to you will change depending on the project requirements, but in my experience, Functional and Regression are the most common, and in my opinion are a good baseline if you don’t have the capacity to write all the tests.

If you’re reading this, you probably fall into one of two categories:

  1. You’re already familiar with Visual Regression testing, and just want to know how to do it
  2. You’re just trying to get info on why Visual Regression testing is important, and how it can help your project.

In either case, it makes the most sense to dive right in, so let’s do it.

Tools

I’m going to be using WebdriverIO to do the heavy lifting. According to the website:

WebdriverIO is an open source testing utility for nodejs. It makes it possible to write super easy selenium tests with Javascript in your favorite BDD or TDD test framework.

It basically sends requests to a Selenium server via the WebDriver Protocol and handles its response. These requests are wrapped in useful commands and can be used to test several aspects of your site in an automated way.
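For a flavor of what those wrapped commands look like in a test, here's a trivial sketch using WebdriverIO's synchronous API (the URL is a placeholder):

// Navigate the real browser and read the page title.
browser.url('https://example.com/');
console.log(browser.getTitle());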

I'm also going to run my tests on BrowserStack so that I can test IE/Edge without having to install a VM or anything like that on my Mac.

Process

Let's get everything set up. I'm going to start with a Drupal 8 site that I have running locally. I've already installed that, along with a custom theme with Pattern Lab integration based on Emulsify.

We’re going to install the visual regression tools with npm.

If you already have a project running that uses npm, you can skip this step. But, since this is a brand new project, I don’t have anything using npm, so I’ll create an initial package.json file using npm init.

  • npm init -y
    • Update the name, description, etc. and remove anything you don’t need.
    • My updated file looks like this:

    {
      "name": "visreg",
      "version": "1.0.0",
      "description": "Website with visual regression testing",
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1"
      }
    }

Now, we’ll install the npm packages we’ll use for visual regression testing.

  • npm install --save-dev webdriverio chai wdio-mocha-framework wdio-browserstack-service wdio-visual-regression-service node-notifier
    • This will install:
      • WebdriverIO: The main tool we’ll use
      • Chai syntax support: “Chai is an assertion library, similar to Node’s built-in assert. It makes testing much easier by giving you lots of assertions you can run against your code.”
      • Mocha syntax support: “Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.”
      • The Browserstack wdio package: So that we can run our tests against Browserstack, instead of locally (where browser/OS differences across developers can cause false-negative failures)
      • Visual regression service: This is what provides the screenshot capturing and comparison functionality
      • Node notifier: This is totally optional but supports native notifications for Mac, Linux, and Windows. We’ll use these to be notified when a test fails.

Now that all of the tools are in place, we need to configure our visual regression preferences.

You can run the configuration wizard by typing ./node_modules/webdriverio/bin/wdio, but I’ve created a git repository with not only the webdriver config file but an entire set of files that scaffold a complete project. You can get them here.

Follow the instructions in the README of that repo to install them in your project.

These files will get you set up with a fairly sophisticated, but completely manageable visual regression testing configuration. There are some tweaks you’ll need to make to fit your project that are outlined in the README and the individual markdown files, but I’ll run through what each of the files does at a high level to acquaint you with each.

  • .gitignore
    • The lines in this file should be added to your existing .gitignore file. It’ll make sure your diffs and latest images are not committed to the repo, but allow your baselines to be committed so that everyone is comparing against the same baseline images.
  • VISREG-README.md
    • This is an example readme you can include to instruct other/future developers on how to run visual regression tests once you have it set up
  • package.json
    • This just has the example test scripts. One for running the full suite of tests, and one for running a quick test, handy for active development. Add these to your existing package.json
  • wdio.conf.js
    • This is the main configuration file for WebdriverIO and your visual regression tests.
    • You must update this file based on the documentation in wdio.conf.md
  • wdio.conf.quick.js
    • This is a file you can use to run a quick test (e.g. against a single browser instead of the full suite defined in the main config file). It’s useful when you’re doing something like refactoring an existing component, and/or want to make sure changes in one place don’t affect other sections of the site.
  • tests/config/globalHides.js
    • This file defines elements that should be hidden in ALL screenshots by default. Individual tests can use this, or define their own set of elements to hide. Update these to fit your actual needs.
  • tests/config/viewports.js
    • This file defines what viewports your tests should run against by default. Individual tests can use these, or define their own set of viewports to test against. Update these to the screen sizes you want to check. (A hypothetical sketch of this file and globalHides.js follows this list.)
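To make those two config files concrete, here's a hypothetical sketch of what they might contain. The actual files in the scaffold repo are the source of truth; the selectors and sizes below are made-up examples.

// tests/config/globalHides.js (hypothetical)
// Selectors hidden (kept in the layout) or removed (taken out of the DOM)
// before each screenshot, so dynamic elements don't cause false diffs.
module.exports = {
  hide: ['.toolbar-bar', '.cookie-consent'],
  remove: ['.ad-slot'],
};

// tests/config/viewports.js (hypothetical)
// The default screen sizes each test captures.
module.exports = [
  { width: 375, height: 667 },   // phone
  { width: 768, height: 1024 },  // tablet
  { width: 1280, height: 800 },  // desktop
];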

Running the Test Suite

I'll copy the example homepage test from the example-tests.md file into a new file: /web/themes/custom/visual_regression_testing/components/_patterns/05-pages/home/home.test.js. (I'm putting it here because my wdio.conf.js file looks for test files in the _patterns directory, and I like to keep test files next to the files they're testing.)

The only thing you’ll need to update in this file is the relative path to the globalHides.js file. It should be relative from the current file. So, mine will be:

const visreg = require('../../../../../../../../tests/config/globalHides.js');
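Continuing from that require, the rest of the homepage test looks roughly like this. This is a sketch reconstructed from the search test shown later in this post (which is copied from this one); the original also sets a timeout, omitted here:

describe('Home Page', function () {
  it('should look good', function () {
    browser
      .url('./')
      // Full-page screenshot, hiding/removing the globally configured elements.
      .checkDocument({hide: visreg.hide, remove: visreg.remove})
      .forEach((item) => {
        expect(item.isWithinMisMatchTolerance).to.be.true;
      });
  });
});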

With that done, I can simply run npm test and the tests will run on BrowserStack against the three OS/browser configurations I've specified. While they're running, we can head over to https://automate.browserstack.com/ and watch the tests being run against Chrome, Firefox, and IE 11.

Once tests are complete, we can view the screenshots in the /tests/screenshots directory. Right now, the baseline shots and the latest shots will be identical because we’ve only run the test once, and the first time you run a test, it creates the baseline from whatever it sees. Future tests will compare the most recent “latest” shot to the existing baseline, and will only update/create images in the latest directory.

At this point, I’ll commit the baselines to the git repo so that they can be shared around the team, and used as baselines by everyone running visual regression tests.

If I run npm test again, the tests will all pass because I haven't changed anything. So I'll make a small change to the button background color, one that might not be picked up by a human eye but will cause a regression that our tests will catch with no problem.

In the _buttons.scss file, I’m going to change the default button background color from $black (#000) to $gray-darker (#333). I’ll run the style script to update the compiled css and then clear the site cache to make sure the change is implemented. (When actively developing, I suggest disabling cache and keeping the watch task running. It just makes things easier and more efficient.)
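In diff terms, the tweak amounts to something like this (the selector is illustrative; the variables come from the theme):

.button {
  background-color: $gray-darker; // was: $black
}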

This time all the tests fail, and if we look at the images in the diff folder, we can clearly see that the “search” button is different as indicated by the bright pink/purple coloring.

If I open up one of the “baseline” images, and the associated “latest” image, I can view them side-by-side, or toggle back and forth. The change is so subtle that a human eye might not have noticed the difference, but the computer easily identifies a regression. This shows how useful visual regression testing can be!

Let's pretend this is actually a desired change: the original component was created before the color was finalized, black was used as a temporary color, and now we want to capture the update as the official baseline. Simply move the "latest" image into the "baselines" folder, replacing the old baseline, and commit that to your repo. Easy peasy.

Running an Individual Test

If you're creating a new component, or you run a test and find a regression in one image, it is useful to be able to run just a single test instead of the entire suite. This is especially true once you have a large suite of test files covering dozens of aspects of your site. Let's take a look at how this is done.

I'll create a new test in the organisms folder of my theme at /search/search.test.js. There's an example of an element test in the example-tests.md file, but I'm going to write a much more basic test, so I'll actually start by copying the homepage test and then modifying it.

The first thing I’ll change is the describe section. This is used to group and name the screenshots, so I’ll update it to make sense for this test. I’ll just replace “Home Page” with “Search Block”.

Then, the only other thing I'm going to change is what gets captured. I don't want the entire page in this case; I just want the search block. So, I'll change checkDocument (used for full-page screenshots) to checkElement (used for single-element shots). Then, I need to tell it which element to capture. This can be any CSS selector, like an id or a class. I'll inspect the element I want to capture, and since this is the only element with the search-block-form class, I'll just use that.

I'll also remove the timeout. Since we're just taking a screenshot of a single element, we don't need to worry about the page taking longer to load than the default of 60 seconds. (This really wasn't necessary on the full page either, but whatever.)

My final test file looks like this:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

describe('Search Block', function () {
  it('should look good', function () {
    browser
      .url('./')
      .checkElement('.search-block-form', {hide: visreg.hide, remove: visreg.remove})
      .forEach((item) => {
        expect(item.isWithinMisMatchTolerance).to.be.true;
      });
  });
});

With that in place, this test will run when I use npm test because the config globs for, and runs, every file that ends in .test.js anywhere in the _patterns directory. The problem is that this also runs the homepage test. If I just want to update the baselines of a single test, or I'm actively developing a component and don't want to run the entire suite every time I make a locally scoped change, I want to be able to run just the relevant test so that I don't waste time waiting for all of the irrelevant tests to pass.

We can do that by passing the --spec flag.

I’ll commit the new test file and baselines before I continue.

Now I’ll re-run just the search test, without the homepage test.

npm test -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

We have to add the first set of -- because we’re using custom npm scripts to make this work. Basically, it passes anything that follows directly to the custom script (in our case test is a custom script that calls ./node_modules/webdriverio/bin/wdio). More info on the run-script documentation page.

If I scroll up a bit, you'll see that when I ran npm test there were six passing tests: one for each browser, for each test file. We have two tests, and we're checking against three browsers, so that's a total of six tests that were run.

This time, we have three passing tests because we’re only running one test against three browsers. That cut our test run time by more than half (from 106 seconds to 46 seconds). If you’re actively developing or refactoring something that already has test coverage, even that can seem like an eternity if you’re running it every few minutes. So let’s take this one step further and run a single test against a single browser. That’s where the wdio.conf.quick.js file comes into play.

Running Test Against a Subset of Browsers

The wdio.conf.quick.js file will, by default, run test(s) against only Chrome. You can, of course, change this to whatever you want (for example if you’re only having an issue in a specific version of IE, you could set that here), but I’m just going to leave it alone and show you how to use it.

You can use this to run the entire suite of tests or just a single test. First, I’ll show you how to run the entire suite against only the browser defined here, then I’ll show you how to run a single test against this browser.

In the package.json file, you'll see the test:quick script. You could pass the config file directly to the first script by typing npm test -- wdio.conf.quick.js, but that's a lot more typing than npm run test:quick, and you (as well as the rest of your team) would have to remember the file name. Capturing the file name in a second custom script simplifies things.
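For illustration, the two scripts might look something like this in package.json. This is a sketch: npm resolves wdio from node_modules/.bin, so it's equivalent to the full ./node_modules/webdriverio/bin/wdio path mentioned earlier.

{
  "scripts": {
    "test": "wdio wdio.conf.js",
    "test:quick": "wdio wdio.conf.quick.js"
  }
}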

When I run npm run test:quick, you'll see that two tests were run: our two tests, each against one browser. And you can see it ran in only 31 seconds. That's definitely better than the 100 seconds the full test suite takes.

Let’s go ahead and combine this with the technique for running a single test to cut that time down even further.

npm run test:quick -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

This time, you'll see that it only ran one test against one browser and took 28 seconds. There's actually not a huge difference between this and the last run, because we can run three tests in parallel, and since we only have two tests we're not hitting the queue, which would add significantly to the total run time. If we had two dozen tests, each running against three browsers, that's a lot of queue time, whereas even running the entire suite against one browser would be a significant savings. And obviously, one test against one browser will be faster than the full suite of tests and browsers.

So this is super useful for active development of a specific component or element that has issues in one browser, as well as when you’re refactoring code to make it more performant and want to make sure your changes don’t break anything significant (or if they do, alert you sooner rather than later). Once you’re done with your work, I’d still recommend running the full suite to make sure your changes didn’t inadvertently affect another random part of the site.

So, those are the basics of how to set up and run visual regression tests. In the next post, I’ll dive into our philosophy of what we test, when we test, and how it fits into our everyday development workflow.

Jul 26 2018
Jul 26

Encryption is an important part of any website that needs to store sensitive information. Encryption takes sensitive data that is in a readable form and encodes it, making it unreadable. This essentially hides the information from anyone who might try to access it without permission to do so. The encoded information can only be decoded by an entity that has a paired decryption key.

Our requirements for this particular Drupal website build included:

  • Acquia Cloud - One of the leading Drupal hosting providers.
  • Libsodium - Because of Acquia Cloud, we needed a custom-compiled PHP extension.
  • Encrypt - A Drupal module that exposes encryption APIs to other modules.
  • Key and Lockr.io - Drupal modules for managing the encryption key.
  • Sodium - A Drupal module to provide libsodium to the encrypt module.

Why use libsodium instead of mcrypt?

Libsodium is a portable, cross-platform implementation of NaCl. Experts recommend libsodium for its simple interface and strong cryptography. The sodium Drupal module takes an easier approach, which is to use a high-level package, paragonie/halite, to work with libsodium.
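To give a feel for why that interface is considered simple, here is a minimal sketch of symmetric encryption with paragonie/halite. The class names follow Halite's documentation at the time of writing; treat the exact API as an assumption:

<?php

use ParagonIE\Halite\HiddenString;
use ParagonIE\Halite\KeyFactory;
use ParagonIE\Halite\Symmetric\Crypto;

// Generate (or load from storage) a symmetric encryption key.
$key = KeyFactory::generateEncryptionKey();

// Encrypt and decrypt a value. HiddenString keeps the plaintext
// out of stack traces and var_dump() output.
$ciphertext = Crypto::encrypt(new HiddenString('sensitive data'), $key);
$plaintext = Crypto::decrypt($ciphertext, $key)->getString();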

The other choice for encryption in PHP is mcrypt. It's the default method in the Drupal 7 version of the encrypt module. Despite that, it's a bad choice because it's difficult to use correctly. Mcrypt is deprecated in PHP 7.1 and removed in PHP 7.2.

Contact us and learn more about our custom ecommerce solutions

Installing Libsodium on Acquia's PHP 7.1

PHP 7.2 has libsodium built in and if you're on 7.1 or below you can install it from PECL. We're going to be using Acquia Cloud, so we can't yet run PHP 7.2 and we can't install any PHP extension we want - not as easily as we'd like to.

Acquia requires that extensions be compiled including their dependencies. The php-libsodium extension depends on libsodium itself, and we have to produce one binary for both libraries. We'll compile libsodium (the crypto library) as a static library, and php-libsodium (the PHP extension that provides libsodium bindings to PHP applications) as a dynamically linked library, so it can be loaded by a regular PHP install.

Let's get started!

  1. Download the latest libsodium from https://github.com/jedisct1/libsodium/releases.
  2. Compile libsodium so it's static, not shared. Put it in a directory we'll use later.

    $ ./configure --libdir=/home/me/sodium/library --disable-shared --enable-static

    --enable-static makes it static, not shared. It'll be a part of the php extension when we build it instead of a separate dependency.

    --disable-shared prevents creating a shared library version of the library.

    --libdir puts it in a directory where we'll use it later.

  3. Compile with PIC (Position Independent Code).

    $ make CFLAGS='-g -O2 -fPIC'
    $ sudo make install
    Here's our sodium library and a pkgconfig directory we'll need to point the php extension at.

    $ ls /home/me/sodium/library
    libsodium.a libsodium.la pkgconfig

  4. Download the latest version 1 release of the libsodium php extension from https://github.com/jedisct1/libsodium-php/releases.

    Use phpize to get the extension ready to compile. Normally a PHP extension is compiled as part of PHP; this script sets things up so it's as if we're doing that. You need the -dev version of PHP to get phpize, so install php7.1-dev or the equivalent for your situation.

    $ phpize7.1
    Configuring for:
    PHP Api Version: 20160303
    Zend Module Api No: 20160303
    Now you'll notice a lot more files in the directory, like the configure script.

  5. Set the package config directory to the one where we installed libsodium.

    $ export PKG_CONFIG_DIR=/home/me/sodium/library/pkgconfig

  6. Configure libsodium-php with the path to libsodium.

    $ ./configure --with-libsodium=/home/me/sodium/library --libdir=/home/me/sodium/library

    --with-libsodium tells it where to find the dependency we just created.

  7. Check that libsodium.so is not looking for a shared libsodium library.

    $ ldd modules/libsodium.so
    linux-vdso.so.1 => (0x00007ffcdd68e000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f71f26eb000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f71f2d0f000)
    There's no libsodium dependency there, so we're good to use our libsodium.so PHP extension! Deploy the file and configure PHP to load the extension. Since we're on Acquia Cloud, Acquia does that after we provide the file.

Get encrypted!

If you're running Drupal and need encryption setup, or if you're looking to start a new project and exploring options and requirements, get in touch with us! One of our business developers will be happy to help.

Jul 23 2018
Jul 23
Moshe Weitzman

I recently worked with the Mass.gov team to transition its development environment from Vagrant to Docker. We went with “vanilla Docker,” as opposed to one of the fine tools like DDev, Drupal VM, Docker4Drupal, etc. We are thankful to those teams for educating and showing us how to do Docker right. A big benefit of vanilla Docker is that skills learned there are generally applicable to any stack, not just LAMP+Drupal. We are super happy with how this environment turned out. We are especially proud of our MySQL Content Sync image — read on for details!

Pretty docks at Boston Harbor. Photo credit.

The heart of our environment is the docker-compose.yml, which we discuss below.
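A heavily trimmed sketch of its shape (service and image names here are illustrative, not the real Mass.gov configuration):

version: '3'
services:
  mysql:
    image: massgov/mysql-sanitized:latest  # hypothetical nightly image, see below
    ports:
      - "3306:3306"
  drupal:
    build: .
    depends_on:
      - mysql
    volumes:
      - .:/var/www/html${VOLUME_FLAGS:-}   # flags come from each developer's .env
    labels:
      - "traefik.frontend.rule=Host:www.mass.local"
  traefik:
    image: traefik
    ports:
      - "80:80"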

Developers use .env files to customize aspects of their containers (e.g. VOLUME_FLAGS, PRIVATE_KEY, etc.). This built-in feature of Docker is very convenient. See our .env.example file in the repository.

The most innovative part of our stack is the mysql container. The Mass.gov Drupal database is gigantic. We have tens of thousands of nodes and 500,000 revisions, each with an unholy number of paragraphs, reference fields, etc. Developers used to drush sql:sync the database from Prod as needed. The transfer and import took many minutes, and had some security risk in the event that sanitization failed on the developer’s machine. The question soon became, “how can we distribute a mysql database that’s already imported and sanitized?” It turns out that Docker is a great way to do just this.

Today, our mysql container builds on CircleCI every night. The build fetches, imports, and sanitizes our Prod database, then packages the result into a fresh image.

That is, we commit and push the refreshed image to a private repository on Docker Cloud. Our mysql image is 9GB uncompressed but thanks to Docker, it compresses to 1GB. This image is really convenient to use. Developers fetch a newer image with docker-compose pull mysql. Developers can work on a PR and then when switching to a new PR, do a simple ahoy down && ahoy up. This quickly restores the local Drupal database to a pristine state.
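Sketched out, those two steps look something like this (the container, image, and repository names are hypothetical):

# Commit the running mysql container, with its imported data, as a new image.
docker commit mysql_container massgov/mysql-sanitized:latest

# Push the image to the private repository on Docker Cloud.
docker push massgov/mysql-sanitized:latest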

In order for this to work, you have to store MySQL data *inside* the container, instead of using a Docker Volume. Here is the Dockerfile for the mysql image.
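The essential trick is sidestepping the VOLUME declared by the official mysql image, so the imported data gets baked into the image layers. A minimal sketch, with the base image and datadir path assumed:

FROM mysql:5.7

# The official image declares VOLUME /var/lib/mysql, and anything written to a
# volume is discarded at image build time. Point mysqld at a non-volume datadir
# so the imported, sanitized database ships inside the image itself.
RUN mkdir -p /var/lib/mysql-baked && chown -R mysql:mysql /var/lib/mysql-baked
CMD ["mysqld", "--datadir=/var/lib/mysql-baked"]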

Our Drupal container is open source — you can see exactly how it’s built. We start from the official PHP image, then add PHP extensions, Apache config, etc.

An interesting innovation in this container is the use of Docker Secrets in order to safely share an SSH key from host to the container. See this answer and mass_id_rsa in the docker-compose.yml above. Also note the two files below which are mounted into the container:

  • Configure SSH to use the secrets file as private key
  • Automatically run ssh-add when logging into the container

Traefik is a “cloud edge router” that integrates really well with docker-compose. Just add one or two labels to a service and its web site is served through Traefik. We use Traefik to provide nice local URLs for each of our services (www.mass.local, portainer.mass.local, mailhog.mass.local, …). Without Traefik, all these services would usually live at the same URL with differing ports.

In the future, we hope to upgrade our local sites to SSL. Traefik makes this easy as it can terminate SSL. No web server fiddling required.

Our repository features a .ahoy.yml file that defines helpful aliases (see below). In order to use these aliases, developers download Ahoy to their host machine. This helps us match one of the main attractions of tools like DDev/Lando — their brief and useful CLI commands. Ahoy is a convenience feature and developers who prefer to use docker-compose (or their own bash aliases) are free to do so.

Our development environment comes with 3 fine extras:

  • Blackfire is ready to go — just run ahoy blackfire [URL|DrushCommand] and you’ll get back a URL for the profiling report
  • Xdebug is easily enabled by setting the XDEBUG_ENABLE environment variable in a developer’s .env file. Once that’s in place, the PHP in the container will automatically connect to the host’s PHPStorm or other Xdebug client
  • A chrome-headless container is used by our suite which incorporates Drupal Test Traits — a new open source project we published. We will blog about DTT soon

Of course, we are never satisfied. Here are a couple issues to tackle:

Jul 21 2018
Jul 21

Unicode characters encoded using UTF8 can technically use 1 to 4 bytes to represent a single character. However, older versions of MySQL only provided support for storing UTF8 encoded characters that used 1 to 3 bytes. This was enough to cover the most commonly used characters, but is not suitable for applications that accept user input where any character can be submitted (like emojis, which use 4 bytes). Newer versions of MySQL provide a character encoding called utf8mb4 to fix this issue. Drupal 7 supports this, but requires some special configuration. Drupal 8 is configured this way by default.

Existing Drupal 7 sites that were set up with MySQL's old 3-byte-max UTF8 encoding must undergo a conversion process to change the character set on tables and text columns from utf8 to utf8mb4. The collation value (what MySQL uses to determine how text fields are sorted) also needs to be changed to the newer utf8mb4 variant. Thankfully, there's already a drush command you can download that does this conversion for you on a single database. Before running it, you should ensure that your MySQL server is properly set up to use the utf8mb4 character encoding. There's a helpful guide on this available on Drupal.org. After the conversion is run, you still must configure Drupal to communicate with MySQL using this new encoding, as described in the guide I linked to.
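The server-side prerequisites from that guide boil down to a few my.cnf settings along these lines (a sketch; the exact values depend on your MySQL version):

[mysqld]
innodb_large_prefix=true
innodb_file_format=barracuda
innodb_file_per_table=true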

Part of my job is to help maintain hundreds of sites running as multi-site in a single codebase. So, same codebase, but hundreds of databases, each of which needed to have its database tables converted over to the new encoding. Converting a single database is not such a big deal, because it only takes a few minutes to run, but since I was dealing with hundreds, I wanted to make sure I had a good process laid out with plenty of logging. I created the below bash script which placed each site in maintenance mode (if it wasn't already), ran the drush command to convert the database, then took the site out of maintenance mode.

All in all, it took about 10 hours to do this for ~250 websites. While the script was running, I was monitoring for errors or other issues, ready to kill the script off if needed. I added a 3 second sleep at the end of each conversion to allow me time to cleanly kill the script.

After the script completed, I pushed up new code for the common settings.php file (each site is configured to load a common settings file that they all share) which configured Drupal to connect to MySQL using the proper character set. In between the time that a database was converted and the settings.php was updated for that site, there still should not have been any issues, because MySQL's UTF8MB4 character encoding should be backwards compatible with the original encoding that only supports 3 byte characters.
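For a Drupal 7 site, that settings.php change looks roughly like this (connection details elided; the charset and collation keys follow the Drupal.org guide):

$databases['default']['default'] = array(
  // ... existing connection details ...
  'charset' => 'utf8mb4',
  'collation' => 'utf8mb4_general_ci',
);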

Here's the script for anyone who may be interested:

#!/usr/bin/env bash

#
# Usage:
# Alter this script to specify the proper Drupal docroot.
# 
# Run this command and pass to it a filename which contains a list of
# multisite directory names, one per line.
#
# For each site listed in the file, this script will first put the site in
# maintenance mode (if it's not already in that state), then run the
# utf8mb4 conversion script. Afterwards it will disable maintenance mode if
# it was previously disabled.
#

### Set to Drupal docroot
docroot="/var/www/html/"

script_begin=$(date +"%s")

count=0
total="$(wc -l $1 | awk '{ print $1 }')"
while read -r site || [[ -n "$site" ]]; do
    start_time=$(date +"%s")
    count=$((count+1))
    echo "--- Processing site #${count}/${total}: $site ---"
    mm="$(drush --root=${docroot} -l ${site} vget --exact maintenance_mode)"
    if [ $? -ne 0 ]; then
        echo "Drush command to check maintenance mode failed, skipping site"
        continue
    fi

    # If maintenance mode is not enabled, enable it.
    if [ -z $mm ] || [ $mm = '0' ]; then
        echo "Enabling maintenance mode."
        drush --root=${docroot} -l ${site} vset maintenance_mode 1
    else
        echo "Maintenance mode already enabled."
    fi

    drush --root=${docroot} -l ${site} utf8mb4-convert-databases -y $site

    # Now disable maintenance mode, as long as it was already disabled before.
    if [ -z $mm ] || [ $mm = '0' ]; then
        echo "Disabling maintenance mode."
        drush --root=${docroot} -l ${site} vset maintenance_mode 0
    else
        echo "Maintenance mode will remain on, it was already on before update."
    fi

    echo "Clearing cache"
    drush --root=${docroot} -l ${site} cc all

    end_time=$(date +"%s")
    echo "Completed in $(($end_time - $start_time)) seconds"
    echo "Done, sleeping 3 seconds before next site"
    sleep 3
done < "$1"

script_end=$(date +"%s")

echo "Ended: $script_end ; Total of $(($script_end - $script_begin)) seconds."
Jul 18 2018
Jul 18

Droptica helps clients from all over the world to complete and implement their projects. Each of these clients has already developed their way of working. Everyone is different. In this article, I have collected the most common ways and systems of cooperation between Droptica and our clients.

Why do we work a little differently with every client?

We are Agile. We always want to maximise the results of our work. Our development team always adjusts and adapts their way of working to the client’s needs.
The elements that are adapted and changed the most often include:

  • project implementation methods (SCRUM, Kanban, etc.);
  • number of people in the team;
  • roles in the team (backend developers, frontend developers, QA, UX/UI, etc.);
  • the method of communication and tools: JIRA, Slack, telephone or video calls, meetings;
  • frequency of communications;
  • communication channels (who, with whom);
  • implementation standards (some clients consider application performance to be the most important, others focus on implementing and providing new functionalities on a regular basis, while another group focuses on aesthetics and wants their application to look good).

On the basis of these factors, I have identified several models of cooperation with clients, which are used the most often at Droptica.

Model 1: Product Owner at the client, with the rest of the team at Droptica

This is probably the most popular model employed at Droptica. We use it mainly when the end client comes to us. In most cases, the client already has a web system based on Drupal, Symfony or React and needs developers to develop the system further. The Product Owner has a vision of application development and looks for a team that can efficiently perform the envisioned tasks.

In this model, we have a great impact on the development of the system. Our team not only performs assigned programming tasks but also proposes directions of development of the system and suggests improvements. In addition to developing basic functionalities, we also design user interfaces (UX/UI) and often carry out A/B tests that show us the best solutions for the client.

We use this model to develop WydawnictwoWAM.pl website. This is what the client has to say about us and about working in this model: 

"We established cooperation with Droptica around two years ago to develop our online store available at http://www.wydawnictwowam.pl. Both the quality of all the works carried out, as well as our cooperation were stellar. The technical solutions suggested and implemented by Droptica were a great help and often improved the value of our system, often exceeding our initial expectations. Cooperation with Droptica is characterised by very friendly, direct and precise communication on their part. Thanks to that, we were – and constantly are – able to define and detail all the tasks related to the development of our sales platform. We also appreciate their very clear settlement system, which allows us to better plan and allocate funds for development. In other words, we definitely recommend working with Droptica".

Model 2: Product Owner, QA, PM on the client’s side, software developers provided by Droptica

In this model, we provide our customers with solid development support. Most of the project planning and management process is carried out by the client, while our experts carry out specific development tasks.
It is a kind of cooperation that we usually go for with large companies and corporations, expanding their Drupal, PHP and ReactJS teams.
As a rule, in such a model we work on servers and project management systems provided by the client. We adapt to their processes.

Mixed models

Other models are usually combinations of the two models presented above. For example, Droptica provides not only the development team but also testers, while the entire project is managed by the client. We also sometimes work on projects where we collaborate with other software developers from the client's company, working not as an independent development team but as part of a larger team.

We are Agile

We are flexible regarding the form of cooperation with our clients; however, we like the first model the most. In that model, we take on a great deal of responsibility for the project and we are able to influence the direction of development together with the client. This gives us great satisfaction, and we offer numerous ideas for improving the system, which allows our clients to better achieve their business goals.

Would you like to learn more about our work models? Contact us at [email protected] and we'll be happy to talk to you.

Jul 17 2018
Jul 17

To the future or to the past, to a time when thought is free, to the next time when I need to get the value of a file field to use as a variable in Drupal 8 with Twig.

Working my way down through one of Drupal's render arrays of doom to try to get the URI of a file in a media field (in a paragraph type), I came up with this. If you can improve it, feel free to drop a note in the comments:

{% set slide_url = file_url(content.field_p_ei_speaker_slides[0]['#media'].field_m_file_file.entity.uri.value) %}

In steps:

  1. Get the {{ content }} variable
  2. Drill down into the media field (Speaker Slides - pdf, ppt, etc)
  3. Get the first element (0 - it's not a multi-value field in this case)
  4. Load up the #media object
  5. Interrogate the field on the media entity that has the file attached (the File field)
  6. Load this entity (the entity is not presented as an item in the {{ dpm() }} output, but it's very handy to know it's there)
  7. Get the uri.value from here
  8. Wrap it all in a file_url() function

For clarity, here's what I had in PatternLab:

  {# Begin Slides Download #}
  {% if event_slide_download %}
    
  {% endif %}
  {# End Slides Download #}

And here's what I have in the corresponding Drupal paragraph.html.twig template:

{% if paragraph.field_p_ei_speaker_slides.value %}
  {% set event_slide_download = true %}
  {% set slide_url = file_url(content.field_p_ei_speaker_slides[0]['#media'].field_m_file_file.entity.uri.value) %}
  {% set event_slide_download_link = slide_url %}
{% endif %}

{% include "@building-blocks/event-section/event-item.twig" %}

So now, my future self, you will know where to find this next time.

For posterity, here's a blog by Norman Kämper-Leymann on how to create a custom field formatter, written as a response to this post; the code is available on GitHub. Thanks Norman.

Mark, I finally found the time to write things down. Published it on Medium as I don't find the time to relaunch my own site to publish it there (a common web developer's disease I guess ?): https://t.co/pXdS7OtYe1

— Norman Kämper-Leymann (@leymannx) July 30, 2018
Jul 02 2018
Jul 02

In Drupal, you can write automated tests with different levels of complexity. If you need to test a single function, or method of a class, probably you will be fine with a unit test. When you need to interact with the database, you can create kernel tests. And finally, if you need access to the final HTML rendered by the browser, or play with some javascript, you can use functional tests or Javascript tests. You can read more about this in the Drupal.org documentation.

So far this is what Drupal provides out of the box. On top of that, you can use Behat or WebDriver tests. These types of tests are usually easier to write and are closer to the user's needs. As a side point, they are usually slower than the previous methods.

The Problem.

In Gizra, we use WebdriverIO for most of our tests. This allows us to test useful things that add value for our clients. But these sorts of tests, where you only interact with the browser output, have some disadvantages.

Imagine you want to create an article and check that this node is unpublished by default. How do you check this? Remember, you only have the browser output…

One possible way could be this: log in, visit the Article creation form, fill in the fields, click submit, and then… maybe search for some unpublished class in the HTML:

    var assert = require('assert');

    describe('create article', function() {
        it('should be possible to create articles, unpublished by default', function() {
            browser.loginAs('some user');

            browser.url('http://example.com/node/add/article')
            browser.setValueSafe('#edit-title-0-value', 'My new article');
            browser.setWysiwygValue('edit-body-0-value', 'My new article body text');

            browser.click('#edit-submit');

            browser.waitForVisible('.node-unpublished');
        });
    });

This is quite simple to understand, but it has some drawbacks.

For one, it depends on the theme to get the status of the node. You could take another approach: instead of looking for a .node-unpublished class, you could log out of the current session and then try to visit the URL, looking for an access denied message.

Getting Low-Level Information from a Browser Test

So the problem boils down to this:

How can I get information about internal properties from a browser test?

The new age of decoupled Drupal brings an answer to this question. It could be a bit counterintuitive at first, so just try it and see if it fits your project.

The idea is to use the new modules that expose Drupal internals through JSON endpoints, and use javascript together with a high-level testing framework to get the info you need.

In Gizra we use WDIO to write end-to-end tests. We have some articles about this topic. We also wrote about a new module called JsonAPI that exposes all the information you need to enrich your tests.

The previous test could be rewritten into a different test. By making use of the JsonAPI module, you can get the status of a specific node by parsing a JSON document:

var assert = require('assert');

describe('create article', function() {
    it('should be possible to create articles, unpublished by default', function() {
        browser.loginAs('some user');

        browser.url('http://example.com/node/add/article')
        browser.setValueSafe('#edit-title-0-value', 'My unique title');
        browser.setWysiwygValue('edit-body-0-value', 'My new article body text');

        browser.click('#edit-submit');

        // Use JSON api to get the internal data of a node.
        let query = '/jsonapi/node/article'
                  + '?fields[node--article]=status'
                  + '&filter[status]=0'
                  + '&filter[node-title][condition][path]=title'
                  + '&filter[node-title][condition][value]=My unique title'
                  + '&filter[node-title][condition][operator]=CONTAINS';

        browser.url(query);
        browser.waitForVisible('body pre');
        let json = JSON.parse(browser.getHTML('body pre', false));

        assert.ok(json[0].id);
        assert.equal(json[0].attributes.status, false);
    });
});

In case you skipped the code, don’t worry, it’s quite simple to understand, let’s analyze it:

1. Create the node as usual:

This is the same as before:

browser.url('http://example.com/node/add/article')
browser.setValueSafe('#edit-title-0-value', 'My unique title');
browser.setWysiwygValue('edit-body-0-value', 'My new article body text');

browser.click('#edit-submit');

2. Ask JsonAPI for the status of an article with a specific title:

Here you see the two parts of the request and the parsing of the data.

let query = '/jsonapi/node/article'
          + '?fields[node--article]=status'
          + '&filter[status]=0'
          + '&filter[node-title][condition][path]=title'
          + '&filter[node-title][condition][value]=My unique title'
          + '&filter[node-title][condition][operator]=CONTAINS';

browser.url(query);

3. Make assertions based on the data:

Since JsonAPI exposes, well, JSON data, you can convert the response into a javascript object and then use dot notation to access a specific level.

This is how you can identify a section of a json document:

browser.waitForVisible('body pre');
let json = JSON.parse(browser.getHTML('body pre', false));
assert.ok(json[0].id);
assert.equal(json[0].attributes.status, false);

A Few Enhancements

As you can see, you can parse the output of a json request directly from the browser.

browser.url('/jsonapi/node/article');
browser.waitForVisible('body pre');
let json = JSON.parse(browser.getHTML('body pre', false));

The json object now contains the entire response from JsonAPI that you can use as part of your test.

There are some drawbacks to the previous approach. First, this only works for Chrome, which wraps the JSON response inside an HTML document. This is the reason why you need to get the HTML from body pre.

The other problem is this somewhat cryptic section:

let query = '/jsonapi/node/article'
          + '?fields[node--article]=status'
          + '&filter[status]=0'
          + '&filter[node-title][condition][path]=title'
          + '&filter[node-title][condition][value]=My unique title'
          + '&filter[node-title][condition][operator]=CONTAINS';

The first problem can be fixed using a conditional to check which type of browser you are using to run the tests.

The second problem can be addressed using the d8-jsonapi-querystring package, which allows you to write an object that is automatically converted into a query string.

Other Use Cases

So far, we have used JsonAPI to get information about a node. But there are other things that you can get from this API. Since all configuration is exposed, you could check whether some role has a specific permission. To keep the tests shorter, we skipped the describe and it sections.

browser.loginAs('some user');

let query = '/jsonapi/user_role/user_role'
          + '?filter[is_admin]=null';

browser.url(query);
browser.waitForVisible('body pre');
let json = JSON.parse(browser.getHTML('body pre', false));

json.forEach(function(role) {
    assert.ok(role.attributes.permissions.indexOf("bypass node access") == -1);
});

Or if a field is available in some content type, but hidden from the end user:

browser.loginAs('some user');

let query = '/jsonapi/entity_form_display/entity_form_display?filter[bundle]=article'

browser.url(query);
browser.waitForVisible('body pre');
let json = JSON.parse(browser.getHTML('body pre', false));

assert.ok(json[0].attributes.hidden.field_country);

Or if some specific HTML tag is allowed in an input format:

let query = '/jsonapi/filter_format/filter_format?filter[format]=filtered_html'

browser.url(query);
browser.waitForVisible('body pre');
let json = JSON.parse(browser.getHTML('body pre', false));

let tag = '';

assert.ok(json[0].attributes.filters.filter_html.settings.allowed_html.indexOf(tag) > -1);

As you can see, there are several use cases. The benefits of being able to explore the API by just clicking the different links sometimes make this much easier to write than a kernel test.

Just remember that these types of tests are a bit slower to run, since they require a full Drupal instance running. But if you have some continuous integration in place, it could be an interesting approach to try, at least for some specific tests.

We have found this quite useful, for example, to check that a node can be referenced by another in a reference field. To check this, you need the node ids of all the nodes created by the tests.

A tweet by @skyredwang is a fitting way to close this post.

Remember how cool Views have been since Drupal 4.6? #JSONAPI module by @e0ipso is the new "Views".

— Jingsheng Wang (@skyredwang) January 9, 2018
Jun 28 2018
Jun 28

The majority of Drupal's underlying code is PHP. As a Drupal developer, the better you know PHP, the better your code will be. In this Acro Media Tech Talk video, Drupal developer Rob Thornton discusses code nesting and how you can optimize your code in order to reduce unnecessary nesting. 

[embedded content]

Code nesting can basically be described as a block of code contained within another block of code. If your code isn't well thought out, you can end up with deep nesting that is both hard to read and difficult to maintain. Aside from making your code easier to read and maintain, reducing the amount of nesting helps you find bugs and lets other developers contribute to your code more easily. Rob uses a number of examples of common nesting scenarios, walking you through how to find and fix them.
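As a small illustration of the kind of cleanup involved (the function and field names below are invented for the example), compare a deeply nested check with an early-return version:

<?php

// Deeply nested: every condition adds a level of indentation.
function mymodule_process_node($node) {
  if ($node) {
    if ($node->status) {
      if (!empty($node->field_data)) {
        // do_something() is a stand-in for the real processing.
        return do_something($node->field_data);
      }
    }
  }
  return NULL;
}

// Flattened with guard clauses: same behavior, one level deep.
function mymodule_process_node_flat($node) {
  if (!$node || !$node->status || empty($node->field_data)) {
    return NULL;
  }
  return do_something($node->field_data);
}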

If you liked this video, you might also like these posts.

Contact us and learn more about our custom ecommerce solutions

Jun 28 2018
Jun 28
Drupal Europe

Distributed systems face incredible challenges — Photo by Dennis van Zuijlekom

With Drupal 8 reaching its maturity and coupling/decoupling from other services — including itself — we have an increasing demand for Drupal sites to shine and make engaged teams thrive with good DevOps practices and resilient Infrastructure. All that done in the biggest Distributed System ever created by humans: the Internet. The biggest challenges of any distributed system are heterogeneity of systems and clients, transparency to the end user, openness to other systems, concurrency to support many users simultaneously, security, scalability on the fly and failure handling in a graceful way. Are we there yet?

In the DevOps + Infrastructure track, we hope to see everything from the smallest containers that can grow to millions of services, to DevOps best practices that accomplish very specific tasks to support Drupal and the teams working on it, saving precious human time by reducing repetitive, automatable tasks.

Questions about container orchestration, virtualization and cloud infrastructure arise every day and we expect answers to come in the track sessions to deal with automation and scaling faster — maybe using applied machine learning or some other forms of prediction or self management. See? We’re really into saving time, by using technology to assist us.

We clearly don’t manage our sites in the same way we did years ago, due to increased complexity of what we manage and how we are managing change in process and culture, therefore it’s our goal at Drupal Europe to bring the best ideas, stories and lessons learned from each industry into the room and share them with the community.

How is your platform scaling? How do you solve automated testing and continuous integration? How do you keep your team happy with feature velocity and still maintain a healthy platform? How do you make your website's perceived performance even faster? What chain of tooling is running behind the scenes and what is controlling this chain? Are you using agentless configuration management or are you resorting to an agent? Are you triggering events based on system changes or do you work with command and control?

Be ready to raise, receive and answer some hard questions, but most of all, inspire people to think from a different angle. What works for a high-traffic website might not be applicable for maintaining a massive number of smaller sites. We want operations to inspire development on reliability, and development to inspire operations on any kind of automation. We want security to always be top of mind while still having an impact on business value rapidly and efficiently. And that is just the beginning…

Drupal Europe’s 2018 program is focused on industry verticals, which means there are tons of subjects to discuss therefore when you submit your session be sure to choose the correct industry track in order to increase the chance of your session being selected.

Please help us to spread the word about this awesome conference. Our hashtag is #drupaleurope.

To recommend speakers or topics please get in touch at [email protected].

Drupal is one of the leading open source technologies empowering digital solutions in the government space around the world.

Drupal Europe 2018 brings over 2,000 creators, innovators, and users of digital technologies from all over Europe and the rest of the world together for three days of intense and inspiring interaction.

Drupal Europe will be held in Darmstadtium in Darmstadt, Germany — which has a direct connection to Frankfurt International Airport. Drupal Europe will take place 10–14 September 2018 with Drupal contribution opportunities every day. Keynotes, sessions, workshops and BoFs will be from Tuesday to Thursday.

Drupalcon Nashville — Photo by Amazee Labs

Jun 27 2018
Jun 27
Drupal Europe

Community. Sharing. Helping. This is the spirit of Drupal. These things bind us all together. Be a part of it by joining us during Drupal Europe between 10–14 September 2018 in Darmstadt, Germany.

photo credit Susanne Coates @flickr

The track dedicated to Social + Non-Profit will gather ambitious life stories about helping others and projects whose purpose is to invest everything in making the world a better place. You will have the opportunity to meet colleagues from your field of interest and join forces, learn how to use pre-configured Drupal distributions and get inspired by ambitious social impact projects built with Drupal. Also learn how Drupal can be used to ensure accountability, trustworthiness, honesty, and openness to every person who has invested time, money, and faith into a non-profit organization. Talk and share ideas, learn from each other, improve, innovate … and take a leap forward. There are a lot of things you will learn, no matter your technical skill level. From developers to people with a big heart, you will for sure find something that inspires you.

Interested in attending? Buy your ticket now at https://www.drupaleurope.org/tickets.

We are looking for submissions in various topics. Here are some ideas to share your experience on with the rest of the world.

  1. Every nonprofit organization must apply the 3 E's: Economy, Efficiency, Effectiveness. Economy forces you to handle your project with a low budget, which is almost always the case with non-profit organizations. Efficiency is also required due to the limited resources available to most non-profit organizations. Effectiveness ensures you get the job done and complete your targets. How are you doing that? What tools and practices ensure this?

We look forward to your submission sharing your experience with the other attendees.

See you in Darmstadt!

As you’ve probably read in one of our previous blog posts, industry verticals are a new concept being introduced at Drupal Europe and replace the summits, which typically took place on Monday. At Drupal Europe these industry verticals are integrated with the rest of the conference — same location, same ticket and provide more opportunities to learn and exchange within the industry verticals throughout three days.

Now is the perfect time to buy your ticket for Drupal Europe. Session submission is only open for a few more days so please submit your sessions and encourage others who have great ideas.

Please help us to spread the word about this awesome conference. Our hashtag is #drupaleurope.

To recommend speakers or topics please get in touch at [email protected].

Drupal is one of the leading open source technologies empowering digital solutions in the government space around the world.

Drupal Europe 2018 brings over 2,000 creators, innovators, and users of digital technologies from all over Europe and the rest of the world together for three days of intense and inspiring interaction.

Drupal Europe will be held in Darmstadtium in Darmstadt, Germany — which has a direct connection to Frankfurt International Airport. Drupal Europe will take place 10–14 September 2018 with Drupal contribution opportunities every day. Keynotes, sessions, workshops and BoFs will be from Tuesday to Thursday.

Jun 27 2018
Jun 27
Drupal EuropePhoto by Floriane Vita on Unsplash

It is 2018 and we are still talking about digital transformation? Wasn’t that finished and done ten or fifteen years ago? Not completely. Based on the study from Grand View Research the global digital transformation market size was valued at $177.27 billion in 2017 and is expected to reach $798.44 billion by 2025. It seems like we have just started and a business that does not join the movement will be left behind.

But what is digital transformation? We see it as the integration of digital technology into all areas of a business, resulting in fundamental changes to how businesses operate and how they deliver value to customers. This new approach to customer experience through digital experience is where a platform like Drupal fits in perfectly.

To build connected, omnichannel customer experiences, the technology must have a built in way to support communication between channels, such as physical locations, ecommerce, mobile applications, and social media. Drupal 8 provides APIs for creating solutions and is definitely not limited to being a website platform. With this approach, the ability to engage customers through multiple channels at the same time has become a reality. Enterprises like Bayer, who evaluated and chose Drupal as their preferred platform in November 2017, have embraced the idea of embarking on the digitalization journey with an open source software that has been around for almost two decades and has a clear vision to become the world’s leading omnichannel customer experience solution.

Drupal Europe will be the largest conference in Europe happening in 2018. Drupal Europe organizes the program and session selection process around industry verticals. These focus on usage of Drupal in real life scenarios, in specific target industries, alongside space to cover cutting edge technologies. Digital transformation has become an important movement and the Drupal community has recognized that and dedicated a track to it.

The track provides unique networking opportunities with — and expert advice by — award-winning vendors, with sessions and break out groups focusing on digital strategies, digital transformation, innovation management, hybrid systems and ambitious digital experiences, showcasing large-scale implementations of Drupal platforms and solutions integrating Drupal for global corporations.

Join us on September 10–14, 2018 in Darmstadt, Germany to learn first hand how Drupal enables digital transformation. You can register for the event at https://drupaleurope.org/tickets.

Drupal Europe is organized for the community by the community. This means everyone is invited to participate in the program and share their ideas with us. We are currently looking for submissions for sessions, panels, and workshops. To create an excellent submission, you should write a good abstract that helps track chairs and conference visitors to understand how and why you approach your topic, what will be the benefits and learnings gained by attending your session, and what is the expected experience level of the audience.

Main topics we are looking for:

  • Digital transformation with Drupal (case studies)
    What was your process of digital transformation, what were the business goals, what part does Drupal play in the solution, and how did you measure success?
  • Enterprise products made for or made with Drupal
    What can enterprises use to complement Drupal to support their requirements? Are there reusable solutions out there that can serve as an enterprise platform?
  • Technical solutions provided with Drupal
    Having Drupal as the chosen technology for digitalization, what does Drupal offer out of the box, and what did your organization develop on top of the framework?

You will speak in front of digital leaders like CTOs, CIOs and CMOs of businesses who will be there to evaluate Drupal on a strategic level. Sessions will attract people looking to gain tactical advice on how to tackle the challenges of digitalization of their organizations or their clients.

We are looking to provide value to our track’s attendees, to empower them with insights and give them information that will enable them to make better decisions when choosing Drupal as their platform of choice.

We are looking forward to great content submitted, please go to https://drupaleurope.org/speakers and propose a session at Drupal Europe before 30 June 2018.

Jun 25 2018
Jun 25
Drupal Europe

Drupal Europe is both a technology conference and a family reunion for the Drupal community. Bringing together 1600+ attendees, it is the largest community driven Drupal event taking place on the European continent this year. For anyone connected with Drupal this is a unique opportunity to share your experience, learn, discuss, connect and contribute back to the community.

Being a community driven conference, we wanted to focus on real life case studies and not the usual technology driven structure. So we’ve introduced industry tracks which focus on specific industry sectors.

Photo with CCO licence via Pexels.com from StartupStockPhotos

The Higher Education track is for anyone using Drupal or thinking of migrating to Drupal at a college or university who is looking to connect with other Higher-Ed Drupal users.

If you have experience of delivering Drupal solutions in the higher education sector or are looking for inspiration on how you continue to develop your CMS further, this is the right track for you.

Drupal is a popular choice in higher education, and many of us are using it in creative and inventive ways. With Drupal 8, the opportunities for exploration and experimentation expand even further — from headless Drupal to top-tier configuration management. Let’s showcase our successes and best-practices with Drupal 8!

We know many universities are still on Drupal 7 and are keen to migrate to Drupal 8, so come to share what works for you and see wins from your peers.

Photo with CCO licence via Pexels.com from StatusStockphoto

Have you launched a Drupal 8 project recently that you are proud of? Started a campus Drupal users group and have tips for others looking to create their own? Developed a great user support model for your content editors? Conquered decoupled Drupal with your frontend stack? Share your awesome projects and lessons learned with your peers.

  • Education sector
  • Drupal in a Day (how Global Training Days got to be a localized event)
  • From CMS to LMS
  • Web accessibility in higher education
  • GDPR and children's information
  • Javascript for higher education
  • Migration from Drupal 7 to 8
  • How Drupal 8 API-first helps to integrate with existing IT infrastructure
  • Build your own Drupal Community

Photo with CC0 licence via Pexels.com from Pixabay

Session submission is open and we ask you to submit interesting session proposals to create an awesome conference. Session proposals are not limited to Drupal; all topics related to Higher Education are welcome.

Please also help us to spread the word about this awesome conference. Our hashtag is #drupaleurope.

If you want to participate in the organisation or want to recommend speakers or topics please get in touch at [email protected].

Drupal is one of the leading open source technologies empowering digital solutions around the world.

Drupal Europe 2018 brings over 2,000 creators, innovators, and users of digital technologies from all over Europe and the rest of the world together for three days of intense and inspiring interaction.

Drupal Europe will be held in Darmstadtium in Darmstadt, Germany — with a direct connection to Frankfurt International Airport. Drupal Europe will take place 10–14 September 2018 with Drupal contribution opportunities every day. Keynotes, sessions, workshops and BoFs will be from Tuesday to Thursday.

Jun 22 2018
Jun 22
Drupal Europe

The e-commerce industry continues to grow rapidly year over year, bringing more merchants online and driving larger profits. With that growth comes the increased need for rich content, innovative product merchandising, and integration into an ever increasing number of third party sales, marketing, and fulfillment tools. Drupal has always excelled as a platform for building unique customer experiences, and it continues to come into its own as an adaptive sales platform via projects like Drupal Commerce.

Photo by Mike Petrucci on Unsplash

This track includes content that helps merchants understand how to start and grow their online businesses, demonstrates to developers how to build ambitious e-commerce sites, and incorporates solution providers who improve the whole process via integrations.

In the e-commerce track you will learn how to start to sell online, how to grow your existing business and reach a wider audience, and the best tools to use for developing your platform.

The track is focused on the following topics:

  • Drupal vs other e-commerce solutions: comparison, the cost of entry and scale
  • What competitive advantages does Drupal bring to online merchants?
  • What are the benefits of Drupal-native eCommerce solutions vs. integrating external systems?
  • Case studies for unique or ambitious implementations of Drupal for e-commerce
  • Latest trends in eCommerce (e.g. payment, fulfillment, security, taxes, etc.)
  • Latest trends in building eCommerce websites (e.g. headless, multichannel, AI, etc.)

As you’ve probably read in one of our previous blog posts, industry verticals are a new concept being introduced at Drupal Europe and replace the summits, which typically took place on Monday. At Drupal Europe these industry verticals are integrated with the rest of the conference — same location, same ticket and provide more opportunities to learn and exchange within the industry verticals throughout three days.

Now is the perfect time to buy your ticket for Drupal Europe. Session submission is only open for a few more days, so please submit your sessions and encourage others who have great ideas.

Please help us to spread the word about this awesome conference. Our hashtag is #drupaleurope.

To recommend speakers or topics please get in touch at [email protected].

Drupal is one of the leading open source technologies empowering digital solutions in the government space around the world.

Drupal Europe 2018 brings over 2,000 creators, innovators, and users of digital technologies from all over Europe and the rest of the world together for three days of intense and inspiring interaction.

Drupal Europe will be held in Darmstadtium in Darmstadt, Germany — which has a direct connection to Frankfurt International Airport. Drupal Europe will take place 10–14 September 2018 with Drupal contribution opportunities every day. Keynotes, sessions, workshops and BoFs will be from Tuesday to Thursday.

Jun 21 2018
Jun 21

Drupal is built on PHP so any developer working with Drupal needs some PHP knowledge. PHP memory management is something that can initially be a difficult concept to grasp.

In this Acro Media Tech Talk video, Rob Thornton covers PHP arrays and how they use memory. He goes over various examples, helping to shed some light on how to use arrays effectively. Along the way, Rob discusses passing arrays by value vs. by reference and shares some tips about each.
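As a quick illustration of the distinction Rob covers (the variable and function names here are invented for the example), PHP arrays passed by value are copied on write, while a reference lets the callee modify the caller's array:

<?php

// Passed by value: the function works on a copy; the caller's array is untouched.
function add_item_by_value(array $items) {
  $items[] = 'new item';
  return $items;
}

// Passed by reference: the function modifies the caller's array in place.
function add_item_by_reference(array &$items) {
  $items[] = 'new item';
}

$list = ['a', 'b'];
add_item_by_value($list);      // $list is still ['a', 'b'].
add_item_by_reference($list);  // $list is now ['a', 'b', 'new item'].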

[embedded content]

If you find this video helpful, you may also be interested in these related topics:

Contact us and learn more about our custom ecommerce solutions

Jun 21 2018
Jun 21

Here's a very short video demo of editing a menu using Drupal's Settings Tray module. Things like this will be what drives Drupal adoption.

I'm a big fan of the quick edit module for Drupal. If it could work better with paragraphs module, it'd be a knockout feature. Aligned with that, I'm really impressed with the settings tray module and can see so many uses for it in the future - sidemenus, shopping cart slideouts, node editing, etc. Here's a very short video of using it to edit a menu, which should make many content editors' lives easier.

Jun 19 2018
Jun 19

Omnichannel generally means the shopping experience is unified and seamless whether you do it on your laptop, in store, through your phone, etc. The team at Acro Media set out to demonstrate just how easy it is to give your customers a true omnichannel experience using Drupal and Drupal Commerce.

[embedded content]

The omnichannel setup

As part of our demo at DrupalCon in Nashville, we did a pseudo T-shirt pre-order. Before the conference, attendees could use our Urban Hipster eCommerce demo site to pre-order a Drupal Commerce shirt in their size. When they completed their pre-order, they got an order number to bring with them to our booth. 

People who didn't pre-order could also come to our booth and "purchase" (for free) a T-shirt using a self-serve kiosk running the same demo site.

So one side of the booth was set up as the cashier/fulfillment area. The other side had the self-serve kiosk. We also had other laptops available so that we could bring up the admin interface as if we were a customer support person assisting a customer over the phone. The "support person" could find the customer's order number or email address and fulfill the order. Easy peasy.

The whole time, our inventory of shirt sizes was counting down until the stock count hit 0. When our inventory reached 0 for a certain size, orders for that size could no longer be placed.

Why is this so amazing?

Some people were impressed but also a little puzzled, thinking that this sort of setup should just exist everywhere. Which it should, but it doesn't. With most retail stores, the online and in-store experiences are completely separate. They might as well be two different companies. If you buy something online and try to return it in store, it often can't happen. Loyalty points often don't transfer. The list goes on. Some places will let you buy online and pick up in store, but there might be a delay. They might say sure, you can pick it up in store, but not for 24 hours. In that case, you might as well just go to the store and find it yourself. Even knowing if an item is in stock can be tricky. The website might say there are three left, but that's just a snapshot from a certain point in time, and you don't know how often that gets updated. Maybe that was valid six hours ago, but that item has since sold out.

Why Drupal rocks

What makes Drupal so cool is that the point of sale and the Commerce module both use the same orders. A point of sale order is just a Drupal Commerce order. It has some specifics to the point of sale, but it can be loaded up in a regular interface. They use the same stock, the same products, everything. This is surprisingly rare. A lot of POS systems in particular are very antiquated. They date from pre-Internet times and have no concept of syncing up with things.

But we've created a true omnichannel experience. We've done it, and implemented it, and it's all open source and freely available. Anyone else could set up the same omnichannel setup that we did. We used a laptop, a cash drawer, a couple of iPads, nothing too fancy.

What's more, as the software matures, we're working on an even better demo with more smoothed out features, better integration, nicer interface, etc. Stay tuned.

Demo Drupal Commerce today! View our demo site.

More from Acro Media

Let's talk omnichannel!

We're always happy to help you understand how you can deliver a true omnichannel experience for your customers. Contact us today to talk to one of our business development experts.

Contact Us

Jun 18 2018
Jun 18

Let's revisit my recent post and see if we can come up with more user-friendly names for PatternLab items.

My recent post, My Approach to PatternLab, generated quite an amount of discussion on Slack and other places about PatternLab and naming conventions, especially the line "Clients do not want a science lesson". In that post, I set out my current naming convention like so:

  • Basic Elements
  • Site Blocks
  • Building Blocks
  • Content
  • Sample Pages

While generally appreciated, some people criticised it for being too Drupal-centred. What happens if your client doesn't want to use Drupal? What happens if you want to use the same PatternLab instance for an app on Android or iOS? Good questions, and they got me thinking more. A number of people on Slack recently have been asking about what naming conventions besides the atoms > molecules > organisms one people have been using.

I had a verrrrry long chat (over 3 hours) with some developers from outside of my workplace to see what naming convention(s) might make sense, be easy for clients to understand, and allow enough scale to be used outside of Drupal. Here's what we came up with (a folder-structure sketch follows the list):

  • Utilities
    • Items such as utility classes like .visually-hidden or .padding-top
  • Base
    • Items such as colours and fonts
  • Elements
    • Low level elements such as headings, paragraphs, basic lists
  • Components
    • High definition components such as a teaser view mode, an embedded video component, a list of teasers
  • Layouts
    • General layout classes for the different page designs - with sidebar, without sidebar, etc
  • Mock-ups
    • Rendered 'pages' or other UI interfaces
    • We shied away from 'Pages' here because not everything might be a page, such as a login screen on an iPhone app
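Mapped onto PatternLab's numbered-folder convention, that could look something like this (the numeric prefixes are illustrative, following the same pattern as the _patterns/03-organisms path seen earlier in this series):

_patterns/
├── 00-utilities/
├── 01-base/
├── 02-elements/
├── 03-components/
├── 04-layouts/
└── 05-mockups/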

I'm quite happy with those naming conventions and think I might start porting some of them to my work at Annertech. (Oh, and by the way, if you want to get really good Drupal developers to work on your website, we're available for hire - contact us!)

Jun 14 2018
Jun 14
Empower your customers to customize products.


There is a high likelihood that the t-shirt on your back or in your closet started life as someone's idea uploaded to an online tool. The idea that a person could not only buy t-shirts, but design them in a tool and approve the proof before payment, seems almost commonplace. Why aren't more people talking about this? Your customers are expecting more tailored experiences when buying decorated apparel, signage and personalized promotional products from small to medium web storefronts. Getting the "Web to Print" toolset just right on Drupal is not easy.

Here are just a few of the expectations for ordering printed materials from the web on Drupal:

  • Drupal integration: Full integration with existing Drupal website
  • Intuitive editor experience: Drag and drop toolset, uploading of files (jpg, png, tiff, pdf, eps, ai, psd), cropping and quick fixes to pictures, lots of fonts, pop-over text formatting, white labelled branding with plenty of customizations, low resolution upload warnings, and mobile friendly web to print tool.
  • Proof and checkout workflow: Print-quality PDF proof, edit before purchase, edit after purchase, CMYK color space, super large files that need processing

Getting off the bespoke product editor island

An example of a bespoke web to print tool Acro Media built with Drupal and jQuery UI.


Like many Drupal agencies, there’s rarely a problem we face that can’t be solved with in-house open source tools. And before we decry the problems, let's be clear: we are very proud of what we accomplished in the past given the budgets and tools available. With jQuery UI and html-to-pdf experience, we’ve built these kinds of tools before, to varying degrees of success. But every time we tackled a project like web-to-print, the struggle became very real. On minimal hours, the tools we knew and loved produced a functional experience that was hard to maintain and very error prone.

More often than not, we had trouble converting HTML to PDF reliably enough for high-resolution print quality, especially with customer-supplied imagery and layout. Offering fonts in a customized product builder is challenging to get right, especially when you’re creating a PDF that has to have the font attached. The RGB colorspace doesn’t translate easily to CMYK, the most common four-color process for printing. And all of our experience in software revolved around pixels, not these things called picas. In this crazy world, resolution could go as high as 3200 dpi on standard printers, and dimensions suddenly couldn’t be determined based on pixels.

When one of our clients, who had a tool we had built with those existing technologies, asked for some (not all) of the features mentioned at the beginning of this article, we wanted to finally solve the technical challenges we had grappled with over a year earlier. But as the planning stage was coming to an end, it was clear the budget wasn’t going to support such a complicated software build.

Product Customization is not the right phrase

Example screenshot of keditor in action.


We started to look for product customization tools and found nada. Then we looked for web layout tools which might at least give Drupal a better HTML editing experience, but found a disappointing lack of online web to print solutions. We did find grapejs, innovastudio, and keditor.

But, almost universally, these javascript-based libraries were focused on content, not on editing products that would be printed. We needed something built around the goal of producing a printable image or PDF, with tight integration around the editor experience. We had nearly convinced ourselves there wasn’t a vertical for this concept; it seemed like nearly all product builders in the wild were powered by one-off conglomerations of toolsets.

Web to Print using Customer’s Canvas works with Drupal, right?

Finally, via a project manager, an industry phrase was discovered that opened the floodgates: web to print. After a bit of sifting through the sales pitches of all the technologies, almost all tools were found to be cumbersome and hard to integrate into an existing Drupal website, save one. Customer’s Canvas checked all the boxes and then some:

  • SaaS (so we don’t have to host customers’ images or maintain the technology)
  • White label
  • More than fully featured
  • Completely customizable
  • Iframe-friendly. Meaning we could seamlessly plop the product customization tool into an existing or new layout.

Example of Customers Canvas running in Drupal Commerce.


To make an even longer story short, we jumped on board with Customer’s Canvas and built the first (to our knowledge) third party web to print Drupal 7 module. We might make a Tech Talk regarding the installation and feature set of the module. Until then, here’s what you can do:

  1. Download and install the module
  2. Provide some API credentials in the form of a javascript link
  3. Turn on the Drupal Commerce integration
  4. Provide some JSON configuration for a product via a field that gets added to your choice of product types.
  5. Click on Add to Cart for a Customer’s Canvas product
  6. Get redirected to a beautiful tool
  7. Click “Finish” and be redirected to a cart that can send you back to edit, or let you download, your product.
  8. As a store administrator, you can also edit the product from the order view page.


Drupal 8 and Web to Print and the Future

Currently, the module is built for Drupal 7. Upgrading to Drupal 8 Commerce 2 is definitely on our roadmap and should be a straightforward upgrade. Other things on the roadmap:

  • Better B2B features
    You can imagine a company that needs signs for all of its franchisee partners and would want the ability to create stores of customizable signage. With Commerce on Drupal 8, that would be pretty straightforward to build.
  • More download options
    Customer’s Canvas supports lower-res watermarked downloads for customers as well as high-res PDF downloads. Currently the module displays the high resolution version for all parties.
  • Better administrative interface
    If you’re using Drupal 7, the integration for this module is pretty easy, but the technical experience required to create the JSON formatting for each product is pretty cumbersome. So it would be awesome (and very possible) to build out the most common customizations in an administration interface so you wouldn’t have to manage the JSON formatting for most situations.
  • Improve the architecture
    Possibly support Customer’s Canvas templates as referenced entities, so that you could create a dozen or so customizable experiences and then link them up to thousands of products.
  • Webform support
    The base module assumes your experience at least starts with an entity that has fields and gets rendered. We could build a webform integration that would allow the webform to have a customer’s canvas build step. T-shirt design content anyone?

Integration can be a game changer

One of the big reasons we work with Drupal and Drupal Commerce is that anything with an API can be integrated. This opens the doors to allow the platform to do so much more than any other platform out there. If an integration needs to be made, we can do it. If you need an integration made, talk to us! We're happy to help.

Contact Acro Media Today!

Contact us and learn more about our custom ecommerce solutions

Jun 14 2018
Jun 14
Official 8.0 Version Now Available


The Drupal Point of Sale provides a point of sale (POS) interface for Drupal Commerce, allowing in-person transactions via cash or card, returns, multiple registers and locations, and EOD reporting. It’s completely integrated with Drupal Commerce and uses the same products, customers, and orders between both systems. You can now bring your Drupal 8 online store and your physical store locations onto the same platform, maintaining a single source of data.

The Drupal 7 version has been in the wild for a while now, but today marks the official, production ready release for Drupal 8.

Release Highlights

What features make up the new version of Drupal Point of Sale 8? There are so many that it will probably surprise you!

Omnichannel

Omnichannel is not just a buzzword, but a word that describes handling your online and offline stores with one platform, connecting your sales, stock and fulfillment centers in one digital location. Drupal Commerce has multi-store capabilities out of the box that allow you to create unique stores and share whatever product inventory, stock, promotions, and more between them. Drupal Point of Sale gives you the final tool you need to handle in-person transactions in a physical storefront location, all using your single Drupal Commerce platform. That’s pretty powerful stuff. Watch these videos (here and here) to learn more about how Drupal Commerce is true omnichannel.

Registers

Set up new registers with ease. Whether you have 1 or 1000 store locations, each store can have as many registers as you want. Because Drupal Point of Sale is a web-based solution, all you need to use a register is a web browser. A touch screen all-in-one computer, a laptop, an iPad; if it has a web browser, it can be your register. The Point of Sale is also fully open source, so there are no licensing fees and costs do not add up as you add more registers.

Customer Display


While a cashier is ringing through products, the Customer Display uses WebSocket technology to display the product, price, and current totals on a screen in real-time so the customer can follow along from the other side of the counter. Your customers can instantly verify everything you’re adding to the cart. All you need for the Customer Display is a web browser, so you can use an iPad, a TV or second monitor to display the information in real-time as the transaction progresses.

Barcode Scanning

Camera based barcode scanning
Don’t have a barcode scanner? No problem. With this release, any browser connected camera can be used to scan barcodes. Use a webcam, use your phone, use an iPad, whatever! If it has a camera, it works. This is helpful when you’re at an event or working a tradeshow and you don’t want to bring your hardware along.


Traditional barcode scanning
A traditional barcode scanner works too. Simply use the barcode scanner to scan the physical product’s barcode. The matching UPC code attached to one of your Drupal Commerce product variations will instantly add the product to your cashier’s display.

Labels

Generate and print labels complete with barcodes, directly from your Drupal Point of Sale interface. Labels are template based and can be easily customized to match any printer or label size so you can prep inventory or re-label goods as needed.

Receipts

Easily customize the header and footer of your receipts using the built in editor. Add your logo and contact information, return/exchange policy, special messaging or promotions, etc.

Drupal Point of Sale customized receipts

When issuing receipts, you can choose to print the receipt in a traditional fashion or go paperless and email it to your customer. You can do either, both, or none… whatever you want.

Returns

Whether online or in store, all of your orders are captured in Drupal Commerce and so can be returned, with or without the original receipt. A return can be an entire order or an individual product.

End of Day (EOD) Reports

When closing a register, your cashiers can declare their totals for the day. You can quickly see if you’re over or short. When finished, an ongoing daily report is kept that you can look back on. On top of this, Drupal Point of Sale is integrated with the core Drupal Commerce Reporting suite.

Drupal Point of Sale end of day reporting

Hardware

Use Drupal POS 8 with anything that supports a browser and has an internet connection.

Technical Highlights

Adding to all of the user highlights above are a number of important technical improvements. It’s the underlying architecture that really makes Drupal Point of Sale shine.

Themable

Cashiers log in to Drupal Point of Sale via a designed login page. Once logged in, the theme used is the default Drupal admin theme. However, like any other part of Drupal, your admin theme can be modified as much as you like. Keep it default or customize it to your brand; it’s yours to do with as you please.

Drupal Point of Sale themable cashier login screen

Search API Enabled

The Search API is a powerful search engine that lets you customize exactly what information is searchable. Using the Search API, your cashiers are sure to quickly find any product in your inventory by searching for a product’s title, SKU, UPC code (via barcode scanner), description, etc. Search API is completely customizable, so any additional unique search requirements can be easily added (brand, color, weight, etc.). The Search API indexes the products on your site and at any other store or multi-warehouse location, allowing you to serve customers in real time.

Fully Integrated with Drupal Commerce

The Drupal Point of Sale module seamlessly integrates into the existing Drupal Commerce systems and architecture. It shares products, stock, customers, orders, promotions and more. This makes Drupal Point of Sale plug-and-play while also making sure that the code base is maintainable and can take advantage of future Drupal Commerce features and improvements.

Permissions and Roles

When Drupal Point of Sale is installed, a “cashier” user role is created that limits the access users of this type have with your Drupal Commerce backend. Use Drupal’s fine grained permissions and roles system to manage your cashiers and give different permissions to employees, managers, marketers, owners, IT, etc. Any way you want it.

Custom Hardware

As mentioned above, all you need to use Drupal POS 8 is anything that supports a browser and has an internet connection. This opens the door for all kinds of custom Point of Sale hardware such as branded terminals, self-serve kiosks, tradeshow-ready hardware, and more.

Drupal Point of Sale Raspberry Pi custom hardware

We’ve been having fun prototyping various Raspberry Pi based POS hardware solutions. You can see some of them here and stay tuned for more. Drupal Point of Sale is open source, so why not open up the hardware too?

Drupal Point of Sale 8, Ready for your Drupal Commerce platform

We’re excited to finally release the production-ready version of Drupal Point of Sale 8.0. There are many ecommerce-only platforms out there, but almost none of them can ALSO run your physical store. This is a BIG DEAL. Drupal Point of Sale gives you the last piece needed to run your entire store using Drupal Commerce, allowing for centralized data and a single system for your team to learn and manage.

One admin login, one inventory list, one user list, one marketing platform, ONE. True omnichannel, without the fees.

Next Step

Watch a Demonstration
Mike at Acro Media recorded a quick video to show Drupal Point of Sale in action. He shows the interface, how it's configured, and some of the features.

[embedded content]

Commerce Kickstart
Starting a Drupal Commerce project from scratch? Use Commerce Kickstart to configure your install package (including Drupal Point of Sale).

Install with Composer
Already using Commerce for Drupal 8? Install Drupal Point of Sale with Composer.

$ composer require drupal/commerce_pos

Let Acro Media help
Acro Media is North America’s #1 Drupal Commerce provider. We build enterprise commerce using open source solutions. Unsure if Drupal Commerce and Drupal Point of Sale meet your business requirements? A teammate here at Acro Media would be happy to walk you through a replatforming evaluation exercise and provide you with the Point of Sale workbook to help you make your decision.

Contact Acro Media Today!

More from Acro Media
Jun 13 2018
Jun 13

The deadline is today. A remote development team has worked for several weeks on your software. You finally get the long-awaited access to the system. You check it, and you are not satisfied with the results.

All it takes to avoid this problem is a team that is experienced in the technology and works using SCRUM.

What is SCRUM

Wikipedia defines SCRUM as an agile framework for managing work. It is an approach used in many companies to develop software. The full definition can be found here: https://en.wikipedia.org/wiki/Scrum

SCRUM solves most of the problems arising during software development

This is my opinion and many people agree with it. I have been developing commercial projects since 2008. I started as a programmer. Currently, I am supervising projects.

At Droptica, Drupal developers work in teams delivering complex projects. Introducing the SCRUM method at Droptica solved most of our problems. Which ones exactly?
Here are the most important of them:

  • The client was not regularly informed about the progress of work, so the client was not satisfied. Sprints, reviews and backlog refinement all enforce constant contact with the client.
  • Tasks were not thought through before being started, so they took a long time to finish, and the client was not satisfied. Backlog refinement and planning ensure that the team really thinks each task through before starting it.

SCRUM saves money

You can ask yourself: how is that possible if you do not know the exact duration and cost of the project at its beginning? The answer follows from the previous paragraph:

  • Regular meetings with the client (Product Owner) force them to think about which tasks are actually needed and which can be rejected.
  • Analysing tasks together with the PO often lets the team come up with better ways to implement them, or leads to rejecting them altogether.

SCRUM is often called the art of maximising the amount of work not done. You maximise the rejection of tasks that are unnecessary from the point of view of your business. You do only the things that bring real value to the system. Everything else goes straight to the waste bin.

Why so many meetings?

Planning the sprint, daily scrum, retro, review, backlog refinement. The list of meetings is long. There is no doubt that they take time. The client often expects to pay for programming, not for conversations and meetings.

I used to think the same way. However, after a test implementation of SCRUM in one of our projects, I changed my mind. Now I want to develop all our projects, both for clients and internal, using SCRUM. I can see that it saves a great amount of time and money. The same is confirmed by the clients we now work with using SCRUM, where previously we did not have a specific way of working.

Abraham Lincoln once said, "If I had eight hours to chop down a tree, I'd spend six sharpening my ax."

The meetings guarantee that tasks are well thought through, that everyone sticks to a common direction and that we pursue the same business goals. It is definitely worth it.

How long will it take and how much will it cost?

Every client asks this question at the beginning, and it is not easy to answer. The pace of each programmer is different, and there are different working conditions, holidays, leaves and changing requirements (from the client, legal requirements, etc.). A longer project also means a frequently changing specification. Such changes affect both the cost and the schedule.

Story Points are a solution to this problem. They are a very good tool for estimating how many tasks can be completed in a sprint (stage). After just 2-4 sprints you can see the team's average pace. By then the team knows the project well, knows the client well and can plan ahead. The team can estimate the tasks waiting in the Backlog very accurately, and the Product Owner, knowing the team's pace, can calculate the number of sprints and the total cost.
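For example (with hypothetical numbers): if after four sprints the team's average velocity is 30 story points per sprint and the refined Backlog adds up to 240 points, the Product Owner can plan for roughly 240 / 30 = 8 more sprints, and multiply that by the cost of a sprint to get the total budget.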

Compared to creating a detailed specification at the beginning of the project, such an approach gives better estimation results.

SCRUM is not enough if the team does not have the experience with the technology

If a team works using SCRUM but is not familiar with the technology, it will not be able to deliver good quality software within a reasonable time. Only the combination of SCRUM and a team experienced in the given technology brings significant results. A customer will definitely be satisfied with such a combination.

Why is a remote team better?

What is the difference between a remote team and a local team? Actually, just their location. If you can have a local team, it is worth choosing this option; it will be more convenient.

However, in today's IT market it is difficult to assemble a team of 2, 3 or more specialists on short notice for a larger project. That is why you should consider a remote SCRUM team, one that already has experience working with a remote client. By expanding your options to the whole world, you have more choice.

How can I monitor what a team thousands of miles away is doing? 

SCRUM has a way to do it: the Sprint Burndown Chart. It is a chart that is updated daily and shows the regular pace of project development, i.e. whether the team is keeping to the sprint plan. It is the best tool for monitoring the progress of work. With Waterfall, you usually find out about delays at the end of a larger stage. With SCRUM, the client can check every day what progress the team has made, and can be sure that the team is working and delivering consecutive parts of the software.

How to communicate with the team?

At Droptica we have 3 ways to do that:
- Jira - it is the main communication system; here we have all User Stories and tasks
- Slack - for short text questions, used practically every day
- Skype/Google Hangouts/Zoom - for video calls with screen sharing

These three forms of communication ensure very good communication between the team and the Product Owner.

If possible, once in a while the development team meets with the Product Owner at our office or at the client's office. Our offices are located close to the airport, and we eagerly invite our clients to visit them.

How can I check if a remote SCRUM team will work in my case?

If you have a project for a minimum of 2-3 people for a few sprints, a well-conducted SCRUM will provide you with very good results.

If you are not sure whether SCRUM will work for you, test it. Order 2-3 sprints and see what results you get. It is a small cost on the scale of a project taking several months, and such an approach will give an unambiguous answer to the question of whether SCRUM is worth using.

If you still have doubts about a remote SCRUM team, I will be happy to answer your questions and share my experiences. Contact me at [email protected] or write your question in the comment.

Jun 07 2018
Jun 07

The situation: I'm the primary maintainer of the Commerce Point of Sale module and have been building a customer facing display feature for the Commerce 2 version. So, I have two separate pages, one is a cashier interface where a cashier enters products, the second is a customer facing screen where the customer can watch what products have been scanned, review pricing, and make sure everything is correct.

The problem: Since products can be scanned through quite quickly, it was imperative that the customer facing display update very quickly. The display needs to match what's happening in near real-time so that there is no lag. Unfortunately, AJAX is just too slow and so I needed a new solution.

The solution: WebSockets seem like a great fit.

Design

AJAX - Too slow!

WebSocket - Fast!

The socket server can either not bootstrap Drupal at all, or bootstrap it only once upon load, making it able to relay traffic very quickly.

Dependencies

I only needed one dependency for this: Ratchet, a PHP library for handling WebSockets that is easily installed via Composer.
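If you want to follow along, the package name on Packagist is cboden/ratchet:

$ composer require cboden/ratchet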

Setup

The WebSocket server is actually very simple: it finds and loads the autoload script for Drupal, similar to how Drush does it.

We bootstrap Drupal, just so we can load a few config settings.

We terminate the Drupal kernel since we don’t need it just for ferrying traffic back and forth, and it would probably leak memory over a long run; Drupal isn’t really meant to run for ages. I did try it with Drupal running the whole time and it worked fine, although this wasn’t under any real load and only for a couple of days.

Now all that we have to do is set up the service.
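Pieced together, a minimal version of the server script might look like this sketch (the autoload path, the port, and the DisplayRelay class name are hypothetical placeholders, not the module's actual code):

<?php

use Drupal\Core\DrupalKernel;
use Ratchet\Http\HttpServer;
use Ratchet\Server\IoServer;
use Ratchet\WebSocket\WsServer;
use Symfony\Component\HttpFoundation\Request;

// Locate and load Drupal's autoloader, much like Drush does.
$autoloader = require __DIR__ . '/../../autoload.php';

// Boot Drupal once, only to read settings (e.g. which port to listen on).
$request = Request::createFromGlobals();
$kernel = DrupalKernel::createFromRequest($request, $autoloader, 'prod');
$kernel->boot();
$port = 8080; // Could be read from config here instead of hard-coded.

// Shut Drupal down again; the relay itself does not need it.
$kernel->shutdown();

// DisplayRelay is a hypothetical class implementing Ratchet's
// MessageComponentInterface (onOpen / onMessage / onClose / onError).
$server = IoServer::factory(
  new HttpServer(
    new WsServer(
      new DisplayRelay()
    )
  ),
  $port
);

$server->run();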

All the details of our service come from the class we pass in, which basically hooks into the different server events. I’ll leave the details of that outside of this article as none of it is Drupal specific and there are lots of tutorials on Ratchet’s site: http://socketo.me/docs/hello-world

Javascript

On the JavaScript end, we connect to the WebSocket using the standard interface.

I used a few mutation observers to monitor for changes and then passed the changes to the WebSocket to relay. You could do this however you want; some nicely integrated JS or even a React frontend would be a lot cleaner.
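As a rough sketch of what that client side can look like (the selectors and relay URL are hypothetical placeholders, not the module's actual code; in reality the cashier page and the customer display page each run their own half):

// Cashier page: watch the order summary and relay any change.
var socket = new WebSocket('ws://localhost:8080');
var target = document.querySelector('.pos-order-summary');

var observer = new MutationObserver(function () {
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(target.innerHTML);
  }
});
observer.observe(target, { childList: true, subtree: true, characterData: true });

// Customer display page: listen and render whatever arrives.
socket.onmessage = function (event) {
  document.querySelector('.customer-display').innerHTML = event.data;
};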

Resources

Related module issue: https://www.drupal.org/project/commerce_pos/issues/2950980
Ratchet PHP Library: http://socketo.me/

Contact us and learn more about our custom ecommerce solutions

May 31 2018
May 31

Updated again in February 2021 with even more tasks and tools that got replaced.

Updated in January 2020 with significant improvements since originally posted.

My personal #gdpr today, May 25th 2018: completed my project to get back all my data from @Google, @evernote et al and host it all by myself with @Nextclouders, #joplin and dozens of other @OpenSourceOrg tools that come with the same convenience but with real privacy. Check!

Following this Twitter post I got asked about more details and how I actually achieved that goal. Here is a table of tools that I'm using now instead of the old tools that I consider harmful to some extent:

[Table: each old tool and its open-source replacement, colour-coded red/green/yellow; * = plugin for Thunderbird]

All the red ones are gone completely - and that feels really good. The green-marked SpiderOak ONE, although it falls into the old tool category as it is proprietary, was one I planned to keep in the tool set, simply because it is powerful, affordable, and protects my data well: everything is encrypted prior to being uploaded to their facilities. SpiderOak ONE is now gone too, because their CLI support has always been weak and they clearly stated in multiple support issues that they won't be improving it; so it was time to drop this as well, and BorgBackup as a replacement turned out to be even stronger.

That leaves the yellow ones. Those are the tools I would love to replace but haven't found an appropriate solution yet:

  • Feedly and Pocket: The news app in Nextcloud aims to provide the same sort of functionality but lacks some features at this point. Let's hope this is going to improve over time. But as the content being managed there is collected from internet sources and is already public, this is not one of the most urgent tasks in my view. The news app is now good enough, and I'm happy with it.
  • LastPass: this is my go-to password manager, and it works really well, not least because it integrates almost perfectly on all the platforms I use day in and day out. However, since they were acquired by LogMeIn Inc., my confidence has dropped significantly and I'd rather replace it sooner than later. There are lots of alternatives, and it feels like I've tried them all, but none of them is mature enough yet, so I have to keep watching. January 2020: switched to KeePassXC and couldn't be more excited. I didn't really expect better integration in browser and Android, but all the tools around KeePassXC are better than everything I had seen before (including LastPass).
  • Authy: the fact that all the sites where I'm using 2FA are registered and stored on a third-party facility is a fairly big worry to me. As the protocol for 2FA is public and the algorithm well documented, this is my number one candidate for a new app on Nextcloud, so that I could host all those keys in my own private cloud. January 2020: successfully switched to andOTP, which stores all its data on my Nextcloud instance, so I can use multiple devices or switch to a new one without having to re-set up 2FA everywhere. February 2021: now dropped andOTP as well, as all the required features are included in all my KeePassXC clients too.
  • IntelliJ IDEA: as a freelancer, I heavily rely on a code-writing tool with lots of integrations and supporting features. There is the open source alternative known as Eclipse, but it is far less capable and lacks performance. As a result, IntelliJ IDEA is one of the last tools that I pay an annual fee for, with no regret.
  • Toggl: this is equally important for freelancers - no billing without meaningful time tracking. As Toggl integrates very well with almost any task and/or project management system on the planet, it is very convenient and works without extra time being spent. That's the final reason to stick with them; otherwise I'd switch to a self-hosted open-source alternative.

Of course, while not yet living on an island, there are frequent scenarios where exchange with others is important, either for work or pleasure. That's why a number of communication tools (WhatsApp, Messenger, Hangouts, Slack, GoToMeeting, Citrix), networking platforms (LinkedIn, Xing) and social networks (Facebook, Twitter) are still being used daily. I don't think we are close to the point where we could consider closing them down, but even positive things happen unexpectedly now and then - so let's keep hoping for the better. January 2020: most of the above is no longer true: cancelled my Xing subscription, haven't used Facebook for months and don't use the communication tools mentioned above any longer. Just Twitter is left in the toolset, and I hope to replace that with Mastodon when my peers move too. Voice and video sessions are on a paid Zoom account now, which is a commercial tool but at least offers end-to-end encryption. February 2021: Not sure about my judgement regarding Zoom any longer. Alternatives like Jitsi and BigBlueButton are getting there, and I should be replacing that in the next phase too. Facebook and WhatsApp accounts are deleted; the Google account will be next.

Especially the communication tools are a main concern. While Skype screwed up completely and friends and customers are spread across the wide range of tools listed above, I'd really love to consolidate most if not all onto a single platform. Nextcloud Talk is pretty good already, Zoom is the best platform to date (screen sharing on a multi-screen desktop in particular), and Mattermost is amazing, feature-rich and can be self-hosted. But for all three of them it's hard to get others on board as well, which leaves you talking to yourself, and that doesn't make sense all day.

Conclusion

Nextcloud has managed to evolve into the central hub of all my personal and business data, which is well protected with a 3-2-1 backup strategy and synchronized across all desktop and mobile devices when used together with add-ons and apps from various sources, almost all of which are open-source products. Of course, these are moving targets, as new platforms, gadgets and ideas arise all the time. But so do these tools, and they often integrate faster with new APIs than most of the proprietary ones. I am controlling my data to a very high degree without giving up significant convenience.

May 30 2018
May 30

Drupal module - CiviCRM Contact Distance Search

MillerTech released this Drupal module back in 2015 but has recently updated it with new features (a map and "use your location") and made it more configurable.

This module ships a fully configurable/extendable Drupal view that provides the functionality to search by postcode and distance.

Use case scenario – Find schools from my postcode within a 5 mile radius.

With the example above, you would have schools as contacts in your CiviCRM database with a primary address, and both the latitude and longitude fields should be populated.

The Drupal view that’s shipped with this module can be configured to filter on a particular contact subtype i.e. schools.

Search results will provide you with schools within a 5 mile radius of the entered postcode along with distance.

Distance is calculated by road (or as the road winds or as the crow walks etc.) and NOT as the crow flies.

New features include an option to display a map:

And also an option for your device to use your location, which will populate the postcode field (works best with mobile devices for accuracy):

CiviCRM extension page - https://civicrm.org/extensions/civicrm-contact-distance-search

Full installation steps available on the Drupal module page - https://www.drupal.org/project/civicrm_contact_distance_search


May 29 2018
May 29

Did you know that Drupal has a Point of Sale (POS) module that pairs with the widely used Commerce module? That's right, Drupal Commerce is now the full end-to-end platform for a complete omnichannel ecommerce experience. Whether you're running an online store, a physical store, or both, you can do it all with Drupal Commerce!

One of the great things about a web-based POS is that all you need is a web browser for it to work. This opens the door to new POS hardware options. You can use an iPad, a laptop, or anything that has a browser. You don't need any expensive or specialized hardware from Moneris, nor do you need a branded solution such as Square. Instead, you now even have the option to build your own POS hardware for very little cost. Today we're featuring a Raspberry Pi based prototype that WE built! The whole setup cost about $250 CAD.

Watch the video below, or keep reading to learn more.

[embedded content]

As mentioned above, we bought a simple touchscreen and mounted a Raspberry Pi on the back. Once up and running, all you have to do is plug it in, connect it to the Internet, and it will automatically boot up into the POS login screen. If your staff has a problem, all they have to do is unplug it and plug it back in. There's no messing with settings or anything. Just reboot. Easy!
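For those wondering how the plug-in-and-boot part works: one common approach on Raspbian (an assumption here, not necessarily the exact setup we used; the URL is a placeholder) is a kiosk-mode autostart entry pointing at the POS login page, along these lines:

# /etc/xdg/lxsession/LXDE-pi/autostart (Raspbian with the LXDE desktop)
@chromium-browser --kiosk https://store.example.com/pos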

Once you get the hardware working, the display can be used in 3 different ways, depending on what you need:

  1. The administrative view, which is what the cashier would use.
  2. A customer display view, which shows what the cashier has added so the customer can see the products and prices entered in real-time. Remember: all you need is a browser and something that can display a browser. The customer display is especially easy because it doesn't have to be a touchscreen; you could just use any monitor, a TV, etc, and run it off of the cashier hardware.
  3. A kiosk view, which is basically just running the front end of the site like your customers would do on their home computers. You could set that out in your store and let customers browse products and make purchases.

So, for a shoestring budget, we created a working point of sale that could be used in a store (see the video above). Aside from looking a little silly, our example is perfectly fine and works great. Plus, there are endless options for inexpensive enclosures to make it look better. You could even build or 3D print your own.

The do-it-yourself (DIY) route is a lot cheaper and gives you the freedom to do whatever you want. We will post further details soon on how to do all this yourself, including specific links to the components we used. And remember: it's Drupal, so it's open source, and all the software is free.

Integrated Drupal Ecommerce Solutions

May 24 2018
May 24

In this video, Josh Miller shows you how to install Drupal Commerce 2 using a local development tool called Lando. Further instructions are included below the video.

[embedded content]

Timestamps:

  1. Commerce Kickstart download: 0:51
  2. “composer install” command: 8:00
  3. “lando init” command: 12:56
  4. “lando start” command: 15:06
  5. “Drupal install” screen: 17:04
  6. “lando stop” command: 21:18

Prerequisites:

  1. Download and install Composer
  2. Download and install Lando

Code generated during this video:

https://github.com/AcroMedia/install-commerce-lando 

Installing Drupal Commerce 2 locally using Commerce Kickstart, Composer, and Lando

Getting Drupal up and running on your computer is an important first step as an evaluator. The good news is that there’s a lot of tech that makes this easier than ever before. We’re going to walk you through how to install Commerce 2 using the Kickstart resource, Composer, and Lando.

  1. Download and install Composer
  2. Download and install Lando
  3. Next go to Commerce Kickstart to create and download your customized composer.json file

    Visit Commerce Kickstart

     Drupal Commerce Kickstart

  4. Run ‘composer install’

    Composer install command

  5. Run ‘lando init’

    Lando init command

  6. Run ‘lando start’

    Lando start command

  7. Visit your local URL and install Drupal

    Lando - 5 CommerceKickstart-ChooseLanguage

  8. Start building!

    Lando - 6 Congratulations

What is Drupal Commerce

Drupal Commerce is an ecommerce focused subset of tools and community based on the open source content management system called Drupal. Drupal Commerce gives you the ability to sell just about anything to anyone using a myriad of open source technologies and leveraging hundreds of Drupal modules built to make that thing you need do that thing you want.

We use Commerce Kickstart to get things started.

Try the Commerce Kickstart 2.x Installer

What is Composer

Composer is the PHP dependency manager that can not only build and bring in Drupal, Drupal Commerce, and Symfony, but is the technology behind the newest Drupal Commerce Kickstart distribution. We leverage the composer.json file that commercekickstart.com gives us to bring in all of the Drupal code necessary to run a Drupal Commerce website.

To get started, we run “composer install” and that command brings in all the requirements for our project.

What is Docker

Docker is virtualization software that brings together app services like Apache, Nginx, MySQL, Solr, Memcache, and many other technologies so that they can run on your own computer. This installation video uses a tool that runs on top of Docker in an abstract, and frankly easier, way.

If you want to learn more about Docker and the many different types of tools that run on top of it, we recommend John Kennedy’s 2018 Drupalcon presentation about Docker.

Another great resource that compares using Docker tools is Michael Anello’s take on the various technologies.

What is Lando

Lando is a thin abstraction layer of tools on top of Docker that makes creating an environment as easy as “lando init” followed by “lando start.” Lando reduces the often confusing devops work of creating a local virtual environment to a few very well documented settings, which it turns into full docker-compose scripts that Docker, in turn, uses to create a local environment where everything just works together. We’re very excited to see how Lando and Drupal Commerce start to work together.
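For reference, the .lando.yml that “lando init” writes for a setup like this is only a few lines; something like the following sketch (the project name depends on your answers to its prompts):

name: commerce-kickstart
recipe: drupal8
config:
  webroot: web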

Contact us and learn more about our custom ecommerce solutions

May 23 2018
May 23

But I just want to upload images to my site…

There is a clear difference between what a user expects from a CMS when they try to upload an image, and what they get out of the box. This is something that we hear all the time, and yet we, as a Drupal community, struggle to do it right.

There are no simple answers as to why Drupal has issues regarding media management. As technology evolves, newer and simpler tools raise the bar on what users expect to see in their apps. Take Instagram, for example: an entire team of people (not just devs) is focused on making the experience as simple as possible.

It’s therefore normal that everyone expects this type of simplicity everywhere. However, implementing these solutions is not always trivial, as you will see.

We are working on a new project that needs some image management capabilities, and to avoid reinventing the wheel, we looked at the solutions used by two mature distributions for Drupal 8: Thunder and Lightning.

Thunder seemed more aligned with what we were looking for, so we just tried to replicate its features on our platform. But there was a catch: Thunder still uses Media from a contrib module, and we wanted to stay as close to core as possible.

After spending a few hours replicating most of the functionality, it became evident that there are a lot of interesting concepts under the hood to explore. Just to warn you, this article is not about how to create a media management library. Instead, we will focus on understanding what we are configuring and why.

The Journey Begins.

Before media entities were a thing, this is how a media gallery was designed in Drupal: you basically add an image field to a content type.

A simple content type with an image field.

Drupal 8.5 introduces the concept of Media entities. This is an important concept because each time you upload media to a site, you may want to associate some metadata with it to be able to search it later. For example, if you want to categorize images, you need to attach a vocabulary to an image. The media entity acts as a bridge between your assets and the fields that enrich them.

After installing the Media module you can create media entity types. That is, if you have images, you may want to have Media Images; the same applies to videos and audio.

We wanted to create an image gallery, therefore we created a content type called Image Gallery that has an entity reference field which references… that’s right, media entities.

A media entity linked to a content type.

So far this is quite simple: just core modules, and two entity types referenced by a field.

Making It Usable

The regular entity reference widget.

Now the challenge is to make this easy to use. The first step is to replace the entity reference widget with something much more flexible. Here is where our first contributed module comes in: meet the Entity Browser module.

The goal of this module is to provide a generic entity browser/picker/selector. It can be used in any context where one needs to select a few entities and do something with them.

There is a great article that explains a lot of the details of this module, so let’s keep this simple to understand the full picture. Just make sure to use the 8.x-2.0 branch, which is compatible with the Media module provided by core.

The widget of the entity reference is what you are configuring.

The entity browser, as we said, allows you to replace the entity reference widget with something fancier. It also allows you to create a new media entity in place. But you will need an extra pair of modules to provide the fanciness you need.

The dropzone module allows you to upload multiple media items in a single upload. One of the main differences between the contrib Media module and Drupal core is that the media name is now required, so you may need some custom code to auto-populate this field somehow in case you want to hide it.

Another module you will need is Views which, fortunately, is in core now, so you don’t need to download it. The Views module is used to generate a view that lists media entities; there is a special field you need to attach to this view, which is the Media: Entity browser bulk select form field.

Entity Browser in action configured with Dropzone.

Customizing the Rest

So let’s recap: we have two entity types (Media Image and Image Gallery) connected by an entity reference. The widget that we are using for the entity reference is an Entity Browser widget, which allows you not only to select existing media (by using a view) but also to upload new images using dropzone.

If a new image is uploaded, a new media entity is created and the image is attached to it automatically. If an image is selected, the existing entity will be referenced by the Image Gallery content type using the entity reference field. All these steps are handled by the Entity Browser module.

A picture is worth a thousand words.

Another feature usually expected by clients is the ability to select which part of an image is the important one. You can use the Focal Point module, which allows you to specify a focal point for focusing and cropping the image.

To use focal points you need to configure the widget of the image field.

If we take a look at the selection of images done by Thunder, we can see there is a green indicator that shows we chose an image. This is done with custom code. But don’t worry: if you use the same names Thunder defines for the views and fields (as we did), you can borrow the code from media_thunder that adds the magic.

You may want to copy media_thunder/css, media_thunder/img and media_thunder/js as well.

/**
 * Implements hook_preprocess_views_view().
 */
function custom_module_media_preprocess_views_view(&$variables) {

  // Machine names of the entity browser views we want to enhance.
  $custom_module_media_browser = [
    'image_browser',
    'video_browser',
  ];

  // Attach the selection-indicator library (borrowed from media_thunder)
  // only when one of those views is being rendered.
  if (in_array($variables['view']->id(), $custom_module_media_browser)) {
    $variables['view_array']['#attached']['library'][] = 'custom_module_media/entity_browser_view';
  }
}

By default you see a lot of fields that are not relevant when you upload the image. Let’s see how we can configure the form to make it easier to use.

Configurable concepts for entity types and fields.

What you see in the image is how the media entity form mode is configured. This is the UI that you can use to hide the things you don’t need.

You may want to do the same thing with the display mode of the media entity to indicate what to show once the image is uploaded.

The thunder_admin theme provides some nice theme enhancements.

The entity browser allows you to select which display mode use after selecting an image. That is defined in the entity reference widget settings.

Here we are configuring the entity reference widget of the Gallery content type.

But the elements rendered in the media entity are configured as part of the display mode of the Media Entity.

Display mode configuration of the Media Entity.

And here’s some good news: we are experimenting with the new Layout Builder module, and we are happy to confirm this is working fine within the media ecosystem.

Configuring the thumbnail view mode for the entity Media type.

Configuring the thumbnail view mode to show only what you need makes the form really easy to use.

Here you configure what you want to display when a field item is selected. It also works with the new Layout Builder module.

Conclusions

As you can see, there is a good balance between core modules and contributed modules. This is the result of several years of work by dozens of developers around the world.

There are still a lot of things that can be improved and polished. Even more, the recently committed “Allow creation of file entities from binary data via REST requests” functionality in core, and modules like JSON API, open the door to replacing this solution with something more decoupled from Drupal.

The trick to getting what you want is to play with the form modes and display modes of each entity involved. It is a bit of trial and error, but you will gain a lot of understanding of the Drupal basics.

And in the future? Who knows, maybe upcoming versions of Drupal will include this feature ready to use out of the box. Until then, have fun configuring your own set of building blocks.

May 22 2018
May 22

This is going to be a simple exercise to create a decoupled site using Drupal 8 as the backend and an Elm app in the frontend. I pursue two goals with this:

  • Evaluate how easy it will be to use Drupal 8 to create a restful backend.
  • Show a little bit how to set up a simple project with Elm.

We will implement a very simple functionality. On the backend, just a feed of blog posts with no authentication. On the frontend, we will have a list of blog posts and a page to visualize each post.

Our first step will be the backend.

Before we start, you can find all the code I wrote for this post in this GitHub repository.

Drupal 8 Backend

For the backend, we will use Drupal 8 and the JSON API module to create the API that will feed the frontend. The JSON API module follows the JSON API specification and currently lives in a contrib project. But as announced in the latest DrupalCon Driesnote, the goal is to move it into core as an experimental module in the Drupal 8.6.x release.

But even before that, we need to set up Drupal in a way that is easy to version and to deploy. For that, I have chosen to go with the Drupal Project composer template. This template has become one of the standards for site development with Drupal 8 and it is quite simple to set up. If Composer is already installed, then it is as easy as this:

composer create-project drupal-composer/drupal-project:8.x-dev server --stability dev --no-interaction

This will create a folder called server with our code structure for the backend. Inside this folder, we now have a web folder, where we have to point our webserver. And it is also inside this folder that we have to put all of our custom code. For this case, we will try to keep the custom code as minimal as possible. Drupal Project also comes with the two best friends for Drupal 8 development: drush and drupal console. If you don’t know them, Google them to find out more about what they can do.

After installing our site, we need to install our first dependency, the JSON API module. Again, this is quite easy: inside the server folder, we run the following command:

composer require drupal/jsonapi:2.x

This will accomplish two things: it will download the module and add it to the composer files. If we are versioning our site with git, we will see that the module does not appear in the repo, as all vendors are excluded by the gitignore provided by default. But we will see that it has been added to the composer files; that is what we have to commit.

With the JSON API module downloaded, we can move back to our site and start with site building.

Configuring Our Backend

Let’s try to keep it as simple as possible. For now, we will use a single content type that we will call blog, with as little configuration as possible. As we will not use Drupal to display the content, we do not have to worry about the display configuration. We will only have the title and body fields on the content type, as Drupal already provides the creation date and author fields.

By default, the JSON API module generates endpoints for the Drupal entities, and that includes our newly created blog content type. We can check all the available resources: if we access the /jsonapi path, we will see all the endpoints. This path is configurable, but it defaults to jsonapi and we will leave it as is. So, with a clean installation, these are all the endpoints we can see:

JSON API default endpoints
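For instance, with the default path our blog posts become available as a collection at /jsonapi/node/blog (assuming blog is the content type's machine name), which we can inspect with curl:

$ curl -H "Accept: application/vnd.api+json" http://localhost/jsonapi/node/blog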

But for our little experiment, we do not need all those endpoints. I prefer to expose only what is necessary, no more and no less. The JSON API module provides zero configurable options in the UI out of the box, but there is a contrib module that allows us to customize our API. This module is JSON API Extras:

composer require drupal/jsonapi_extras:2.x

JSON API Extras offers us a lot of options, from disabling an endpoint to changing the path used to access it, renaming the exposed fields or even the resource. Quite handy! After some tweaking, I disabled all the unnecessary resources and most of the fields of the blog content type, reducing it to just the few we will use:

JSONAPI blog resource

Feel free to play with the different options. You will see that you are able to shape the API exactly as you need it.

Moving Our Configuration to Version Control

If you have experience with Drupal 7, you probably used the Features module to export configuration to code. But one of the biggest improvements in Drupal 8 is the Configuration Management Interface (CMI). This system provides a generic engine to export all configuration to YAML files. But even if this system works great, it is still not the most intuitive or easy way to export the config. Using it as a base, there are now several options that expand the functionality of CMI and provide an improved developer experience. The two biggest players in this game are Config Split and the good old Features.

Both options are great, but I decided to go with my old friend Features (maybe because I’m used to its UI). The first step is to download the module:

composer require drupal/features:3.x

One of the really cool functionalities that the Drupal 8 version of the Features module brings is that it can instantly create an installation profile with all our custom configuration. With just a few clicks we have exported all the configuration we did in the previous steps; but not only that, we have also created an installation profile that will allow us to replicate the site easily. You can read more about Features in the [official documentation on drupal.org](https://www.drupal.org/docs/8/modules/features/building-a-distribution-w...).

Now we have the basic functionality of the backend. There are still some things we should do, such as restricting access to the backend interface to prevent login or registration on the site, but we will not cover that in this post. Now we can move on to the next step: the Elm frontend.

Sidenote

I used Features in this project to give it a try and play a bit. If you are building a real project, you might want to consider other options; even the creators of the Features module suggest not using it for this kind of situation, as you can read here.

The Frontend

As mentioned, we will use Elm to write this app. If you do not know it, Elm is a pure functional language that compiles to JavaScript and is used to create reliable webapps.

Installing Elm is easy. You can build it from source, but the easiest and recommended way is to just use npm. So let’s do it:

npm install -g elm

Once we install Elm, we get four different commands:

  • elm-repl: an interactive Elm shell, that allows us to play with the language.
  • elm-reactor: an interactive development tool that automatically compiles our code and serves it on the browser.
  • elm-make: to compile our code and build the app we will upload to the server.
  • elm-package: the package manager to download or publish elm packages.

For this little project, we will mostly use elm-reactor to test our app. We can begin by starting the reactor and accessing it on the browser. Once we do that, we can start coding.

elm-reactor
Elm Reactor

Our First Elm Program

"If you wish to make an apple pie from scratch, you must first invent the universe." (Carl Sagan)

We start by creating a src folder that will contain all our Elm code, and there we start the reactor with elm-reactor. If we go to our browser and access http://localhost:8000, we will see our empty folder. Time to create a Main.elm file in it. This file will be the root of our codebase and everything will grow from here. We can start with the simplest of all Elm programs:

module Main exposing (main)

import Html exposing (text)


main =
    text "Hello world"

This might seem simple, but when we access the Main.elm file in the reactor, there is some magic going on. The first thing we notice is that we now have a working page. It is simple, but it is an HTML page generated with Elm. And that is not the only thing that happened: in the background, elm-reactor noticed we imported the Html package, created an elm-package.json file, added the package as a dependency and downloaded it.

This might be a good moment to make the first commit of our app. We do not want to include the vendor packages from Elm, so we create a .gitignore file and add the elm-stuff folder there. Our first commit will include only three things: the Main.elm file, the .gitignore and the elm-package.json file.

The Elm Architecture

Elm is a language that follows a strict pattern called [The Elm Architecture](https://guide.elm-lang.org/architecture/). We can summarize it in these three simple components:

  • Model, which represents the state of the application.
  • Update, how we update our application.
  • View, how we represent our state.

Given our small app, let’s try to express our code with this pattern. Right now, our app is static and has no functionality at all, so there is not a lot to do. But, for example, we could start by moving the text we show on the screen into the model. The view will be the content we have in our main function, and as our page has no functionality, the update will do nothing at this stage.

type alias Model
    = String

model : Model
model = "Hello world"

view : Model -> Html msg
view model =
    text model


main =
    view model

Now, for our blog, we need two different pages. The first one will be the listing of blog posts and the second one a page for an individual post. To simplify, let’s keep the blog entries as just strings for now. Our model will evolve into a list of Posts. In our state, we also need to store which page we are on, so let’s create a variable to hold that information and add it to our model:

type alias Model =
    { posts : List Post
    , activePage : Page
    }


type alias Post =
    String


type Page
    = BlogList
    | Blog


model : Model
model =
    { posts = [ "First blog", "Second blog" ]
    , activePage = BlogList
    }

And we need to update our view too:

view : Model -> Html Msg
view model =
    div
        []
        (List.map viewPost model.posts)


viewPost : Post -> Html Msg
viewPost post =
    div
        []
        [ text post ]

We now have the possibility of multiple pages! We can create our update function that will modify the model based on the different actions we perform on the page. Right now, our only action is navigating the app, so let’s start there:

type Msg
    = NavigateTo Page

And now our update will change the activePage of our model based on this message (with beginnerProgram, which we will use below, update returns the new model directly):

update : Msg -> Model -> Model
update msg model =
    case msg of
        NavigateTo page ->
            { model | activePage = page }

Our view should be different now depending on the active page we are viewing:

view : Model -> Html Msg
view model =
    case model.activePage of
        BlogList ->
            viewBlogList model.posts

        Blog ->
            div [] [ text "This is a single blog post" ]


viewBlogList : List Post -> Html Msg
viewBlogList posts =
    div
        []
        (List.map viewPost posts)

Next, let’s wire the update into the rest of the code. First, we fire the message that changes the page from the views (this needs import Html.Events exposing (onClick)):

viewPost : Post -> Html Msg
viewPost post =
    div
        [ onClick <| NavigateTo Blog ]
        [ text post ]

And as a last step, we replace the main function with a more complex function from the Html package (but still a beginner program):

main : Program Never Model Msg
main =
    beginnerProgram
        { model = model
        , view = view
        , update = update
        }
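
Since we have been building the file up in fragments, it may help to see the whole Main.elm at this stage assembled in one piece; nothing here is new except the exact import lines:

module Main exposing (main)

import Html exposing (Html, beginnerProgram, div, text)
import Html.Events exposing (onClick)


type alias Model =
    { posts : List Post
    , activePage : Page
    }


type alias Post =
    String


type Page
    = BlogList
    | Blog


type Msg
    = NavigateTo Page


model : Model
model =
    { posts = [ "First blog", "Second blog" ]
    , activePage = BlogList
    }


update : Msg -> Model -> Model
update msg model =
    case msg of
        NavigateTo page ->
            { model | activePage = page }


view : Model -> Html Msg
view model =
    case model.activePage of
        BlogList ->
            viewBlogList model.posts

        Blog ->
            div [] [ text "This is a single blog post" ]


viewBlogList : List Post -> Html Msg
viewBlogList posts =
    div
        []
        (List.map viewPost posts)


viewPost : Post -> Html Msg
viewPost post =
    div
        [ onClick <| NavigateTo Blog ]
        [ text post ]


main : Program Never Model Msg
main =
    beginnerProgram
        { model = model
        , view = view
        , update = update
        }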

But we still have not properly represented the single blog posts on their individual pages. We will have to update our model once again, along with our definition of Page (this also needs import Dict exposing (Dict)):

type alias Model =
    { posts : Dict PostId Post
    , activePage : Page
    }


type alias PostId =
    Int


type Page
    = BlogList
    | Blog PostId


model : Model
model =
    { posts = Dict.fromList [ ( 1, "First blog" ), ( 2, "Second blog" ) ]
    , activePage = BlogList
    }

And with some minor changes, we have the views working again:

view : Model -> Html Msg
view model =
    case model.activePage of
        BlogList ->
            viewBlogList model.posts

        Blog postId ->
            div
                [ onClick <| NavigateTo BlogList ]
                [ text "This is a single blog post" ]


viewBlogList : Dict PostId Post -> Html Msg
viewBlogList posts =
    div
        []
        (Dict.map viewPost posts |> Dict.values)


viewPost : PostId -> Post -> Html Msg
viewPost postId post =
    div
        [ onClick <| NavigateTo <| Blog postId ]
        [ text post ]

We do not yet see any change on our site, but we are ready to replace the placeholder text of the individual pages with the content of the real Post. And here comes one of the cool features of Elm, and one of the reasons why Elm has no runtime exceptions. We have a postId, and we can get the Post from the collection of posts in our model. But when getting an item from a Dict, we always run the risk of asking for a non-existent item. In JavaScript, calling a function on such a missing item causes errors like the infamous undefined is not a function. In Elm, if a function may or may not be able to return a value, it returns a special type called Maybe.

view : Model -> Html Msg
view model =
    case model.activePage of
        BlogList ->
            viewBlogList model.posts

        Blog postId ->
            let
                -- This is our Maybe variable. It could be annotated as `Maybe Post` or a full definition as:
                -- type Maybe a
                --   = Just a
                --   | Nothing
                post =
                    Dict.get postId model.posts
            in
                case post of
                    Just aPost ->
                        div
                            [ onClick <| NavigateTo BlogList ]
                            [ text aPost ]

                    Nothing ->
                        div
                            [ onClick <| NavigateTo BlogList ]
                            [ text "Blog post not found" ]

Loading the Data from the Backend

We have all the functionality ready, but we have to do something else before loading the data from the backend: our Post definition has to match the structure of the backend. On the Drupal side, we left a simple blog data structure:

  • ID
  • Title
  • Body
  • Creation date

Let’s update Post, replacing the plain string with a record containing those fields. After the change, the compiler will tell us where else we need to adapt our code. For now, we will not care about dates and will just treat the created field as a string.

type alias Post =
    { id : PostId
    , title : String
    , body : String
    , created : String
    }


model : Model
model =
    { posts = Dict.fromList [ ( 1, firstPost ), ( 2, secondPost ) ]
    , activePage = BlogList
    }


firstPost : Post
firstPost =
    { id = 1
    , title = "First blog"
    , body = "This is the body of the first blog post"
    , created = "2018-04-18 19:00"
    }

Then, the compiler shows us where we have to change the code to make it work again:

Elm compiler helps us find the errors
-- In the view function:
case post of
    Just aPost ->
        div
            []
            [ h2 [] [ text aPost.title ]
            , div [] [ text aPost.created ]
            , div [] [ text aPost.body ]
            , a [ onClick <| NavigateTo BlogList ] [ text "Go back" ]
            ]

-- And `viewPost` improves a bit, becoming `viewPostTeaser`:
viewBlogList : Dict PostId Post -> Html Msg
viewBlogList posts =
    div
        []
        (Dict.map viewPostTeaser posts |> Dict.values)


viewPostTeaser : PostId -> Post -> Html Msg
viewPostTeaser postId post =
    div
        [ onClick <| NavigateTo <| Blog postId ]
        [ text post.title ]

As our data structure now reflects the data model we have on the backend, we are ready to fetch the information from the web service. For that, Elm offers us a system called decoders. We will also add a contrib package to simplify writing them:

elm-package install NoRedInk/elm-decode-pipeline

And now, we add our Decoder:

-- Requires:
-- import Json.Decode exposing (Decoder, dict, int, string)
-- import Json.Decode.Pipeline exposing (decode, required)
postListDecoder : Decoder PostList
postListDecoder =
    dict postDecoder


-- Note: "id" must decode to Int to match our PostId alias.
postDecoder : Decoder Post
postDecoder =
    decode Post
        |> required "id" int
        |> required "title" string
        |> required "body" string
        |> required "created" string

As our data will now come from a request, we need to update our Model once more to represent the different states a request can be in:

type alias Model =
    { posts : WebData PostList
    , activePage : Page
    }


type WebData data
    = NotAsked
    | Loading
    | Error
    | Success data

This way the language protects us: the compiler forces us to consider every state the request can be in, including failure. (This WebData type is essentially the shape popularized by the RemoteData community package.) We now have to update our view to work with this new state:

view : Model -> Html Msg
view model =
    case model.posts of
        NotAsked ->
            div [] [ text "Loading..." ]

        Loading ->
            div [] [ text "Loading..." ]

        Success posts ->
            case model.activePage of
                BlogList ->
                    viewBlogList posts

                Blog postId ->
                    let
                        post =
                            Dict.get postId posts
                    in
                        case post of
                            Just aPost ->
                                div
                                    []
                                    [ h2 [] [ text aPost.title ]
                                    , div [] [ text aPost.created ]
                                    , div [] [ text aPost.body ]
                                    , a [ onClick <| NavigateTo BlogList ] [ text "Go back" ]
                                    ]

                            Nothing ->
                                div
                                    [ onClick <| NavigateTo BlogList ]
                                    [ text "Blog post not found" ]

        Error ->
            div [] [ text "Error loading the data" ]

We are ready to decode the data; the only thing left is to make the request. Most requests on a site happen when clicking a link (usually a GET) or when submitting a form (POST or GET), and then, with AJAX, we make requests in the background to fetch data that was not needed when the page first loaded. In our case, we want to fetch the data at the very beginning, as soon as the page loads. We can do that with a command, or as it appears in the code, a Cmd:

-- Requires: import Http
fetchPosts : Cmd Msg
fetchPosts =
    let
        url =
            "http://drelm.local/jsonapi/blog"
    in
        Http.send FetchPosts (Http.get url postListDecoder)
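
Http.send wraps the response in a message, so our Msg type needs a new constructor to receive it, and update has to handle it. The post leaves that part to the repository; here is a minimal sketch of how it could look, reusing the FetchPosts name from the call above (everything else is an assumption). Note that update now returns ( Model, Cmd Msg ) instead of just Model, which the program function introduced below requires:

type Msg
    = NavigateTo Page
    | FetchPosts (Result Http.Error PostList)


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        NavigateTo page ->
            ( { model | activePage = page }, Cmd.none )

        -- Store the successful response in the model.
        FetchPosts (Ok posts) ->
            ( { model | posts = Success posts }, Cmd.none )

        -- Keep the failure so the view can show an error message.
        FetchPosts (Err _) ->
            ( { model | posts = Error }, Cmd.none )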

But we have to use a new program function to pass the initial commands:

main : Program Never Model Msg
main =
    program
        { init = init
        , view = view
        , update = update
        , subscriptions = subscriptions
        }

Let’s forget about the subscriptions, as we are not using them:

subscriptions : Model -> Sub Msg
subscriptions model =
    Sub.none

Now, we just need to update our initial data; our init variable:

model : Model
model =
    { posts = NotAsked
    , activePage = BlogList
    }


init : ( Model, Cmd Msg )
init =
    ( model
    , fetchPosts
    )

And this is it! When the page is loaded, the program will use the command we defined to fetch all our blog posts! Check it out in the screencast:

Screencast of our sample app

If at some point that request becomes too heavy, we could change it to fetch just titles plus summaries, or only a small number of posts. We could add another fetch when we scroll down, or fetch the full post from the update function. Did you notice that the signature of update ends with ( Model, Cmd Msg )? That means we can return commands there to fetch data, instead of just Cmd.none. For example:

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        NavigateTo page ->
            let
                command =
                    case page of
                        Blog postId ->
                            fetchPost postId

                        BlogList ->
                            Cmd.none
            in
                ( { model | activePage = page }, command )
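
The fetchPost function used above is not defined in the post. As a hedged sketch, assuming a per-post endpoint and a corresponding FetchPost message for the response, it could look something like this:

fetchPost : PostId -> Cmd Msg
fetchPost postId =
    let
        -- Hypothetical per-post URL; the real path depends on the backend routes.
        url =
            "http://drelm.local/jsonapi/blog/" ++ toString postId
    in
        Http.send FetchPost (Http.get url postDecoder)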

But let’s leave all of this implementation for a different occasion.

And that’s all for now. I might have missed something, as the frontend part grew a bit more than I expected, but check the repository, as the code there has been tested and works fine. If you have any questions, feel free to add a comment and I will try to reply as soon as I can!

End Notes

I did not dwell too much on the syntax of Elm, as there is already plenty of documentation on the official page. The goal of this post is to show how a simple app is created from the very start and to give a simple example of the Elm Architecture.

If you follow this tutorial step by step, you may find an issue when trying to fetch the data from the backend while using elm-reactor. I had that issue too: it is the browser blocking cross-origin requests between the reactor and the backend. If you check the repo, you will see that I replaced the default function for GET requests, Http.get, with a custom function to work around this.

I also didn’t add any CSS styling because the post would be too long, but you can find plenty of information on that elsewhere.

May 20 2018
May 20

Last week I talked about setting up a new project using BLT, Dev Desktop, and Lightning. Today, I’ll talk more about my local environment setup and give a brief overview of my development and deployment workflow.

Dependencies

As I discussed in the last post, PHP, Composer, and Git are all installed via Homebrew:

brew install php71 composer git;

A global version of Drush (8.1.15 as of this writing) is supplied by Dev Desktop. I find this global Drush version continues to be useful for older Drupal 7 projects I maintain and for projects whose alias files I haven’t yet updated or that don’t yet contain Drush 9-compatible commands.

Drush for this project, though, is locally vendored (BLT requires Drush as a dependency). The Drush Launcher built into the global Drush 8 takes care of discovering the right version to use, so I don’t need to worry about calling the wrong version of Drush for my project. If you’d like to add Drush 9 to your own Composer-driven project, there are instructions over in the Drush documentation.

Local environment configuration

Dev Desktop provides its own versions of tools like PHP, Drush, and MySQL. Because I’m running PHP and Composer via Homebrew but also running Dev Desktop, I’ve done a few things to set up my local environment so it uses the right tools from the right places.

The lines below from my .bash_profile provide path settings that account for the locations of the various tools I’m using.

export PATH="/usr/local/bin:/usr/local/sbin:$HOME/bin:$HOME/.composer/vendor/bin:$PATH"

# Add Dev Desktop settings and paths.
# Does not add Dev Desktop's PHP path; PHP is installed via Homebrew.
export DEVDESKTOP_DRUPAL_SETTINGS_DIR="$HOME/.acquia/DevDesktop/DrupalSettings"
export PATH="/Applications/DevDesktop/mysql/bin:$PATH"
export PATH="$PATH:/Applications/DevDesktop/tools"

I keep my .bash_profile configuration in a public Github repository, so feel free to check out jrbeeman/dotfiles if you’d like to see other potentially useful config to pull into your setup.

Editor extensions and configuration

I’ve been using Microsoft’s Visual Studio Code for nearly all development over the past year, and I think it’s great.

The following extensions have been useful in turning VS Code into a robust PHP IDE. Note that some of these extensions were installed for JavaScript development, and I’m including them here as many Drupal projects require both.

  • Apache Conf
  • Code Outline
  • Document This
  • ESLint
  • Git History
  • Markdown All in One
  • markdownlint
  • MDTools
  • PHP Debug
  • PHP DocBlocker
  • PHP Formatter
  • PHP Intelephense
  • Snippets and Syntax Highlight for Gherkin
  • TSLint
  • Twig
  • Vagrantfile Support
  • vscode-icons

VS Code supports user- and workspace-level (project) configuration overrides. Here’s what I’m using.

User settings

These user settings perform basic font and text display configuration and turn on the nice icons provided by vscode-icons.

{
  "editor.fontSize": 14,
  "editor.wordWrap": "on",
  "workbench.iconTheme": "vscode-icons",
  "editor.tabSize": 2
}

Workspace settings

These workspace settings tell VS Code to treat Drupal’s various funky file extensions as the right language (files.associations) and to ignore several folders when building its index (files.exclude). The primary change to files.exclude was the "deploy" folder, which can really confuse VS Code if not ignored, because it contains a full copy of the application after a build.

{
  "files.associations": {
    "*.module": "php",
    "*.install": "php",
    "*.inc": "php",
    "*.theme": "php",
    "*.info": "ini"
  },
  "files.exclude": {
    "**/.git": true,
    "**/.svn": true,
    "**/.hg": true,
    "**/CVS": true,
    "**/.DS_Store": true,
    "deploy": true
  }
}

General workflow

Committing work to Github

As I work, I’m pushing my changes out to Github. I won’t go into too much detail on that, aside from saying that the .gitattributes and .gitignore files provided by BLT do a lot of the heavy lifting you’d normally need to do when first configuring Git for a Composer-driven Drupal project. You can check out what BLT provides in its repository.

Deploying to Acquia Cloud

At given points during the development process, I feel ready to run a build and deploy those changes to Cloud.

In order to deploy to Acquia Cloud using BLT’s deploy commands, I first had to configure blt.yml with Acquia Cloud as a remote:

git:
  default_branch: develop
  remotes:
    cloud: '[email protected]:exodar.git'

Because of Composer and BLT, adding new modules and deploying the change to Acquia Cloud is as easy as the three commands below, where I’ve added the Markdown module to the Exodar project:

composer require drupal/markdown:^1.2;
git commit -am "EX-000: Add markdown-8.x-1.2";
blt artifact:deploy --commit-msg "EX-000: Example deploy to branch" --branch "develop-build" --no-interaction;

I’ll go into greater detail about BLT’s artifact build and deploy commands in later posts. Most deployment processes will be more complex than this, especially when working with a team. For example, the brief process outlined here doesn’t account for anything related to configuration management, automatically enabling newly added projects after deployment, etc. Those more sophisticated aspects of a site deployment workflow will come later as I continue to modernize the workflow for my personal site. For now, this is a good start and allows me to rapidly develop my personal site while still leveraging Composer and BLT.

Wrapping up

With my local development workflow set up, I can now rapidly iterate on my project, using Composer to add new dependencies, Drush to configure them, and BLT to deploy them. I've also got a great IDE configuration, so working with my project's code is a joy.

May 18 2018
May 18

The Content Moderation core module was marked stable in Drupal 8.5. Think of it like the contributed module Workbench Moderation in Drupal 7, but without all the Workbench editor Views that never seemed to completely make sense. The Drupal.org documentation gives a good overview.

Content Moderation requires the Workflows core module, allowing you to set up custom editorial workflows. I've been doing some work with this for a new site for a large organization, and have some tips and tricks.

Less Is More

Resist increases in roles, workflows, and workflow states, and make sure each one is justified by a business need. Stakeholders may ask for many roles and many workflow states without realizing the increased complexity and likelihood of editorial confusion that result.

If you create an editorial workflow that is too strict and complex, editors will tend to find ways to work around the system. A good compromise is to ask the team to try something simple first and add complexity down the line if needed.

Try to use the same workflow on all content types if you can. It makes a much simpler mental model for everyone.

Transitions are Key

Transitions between workflow states will be what you assign as permissions to roles. Typically, you'll want to lock down who can publish content, allowing content contributors to create new drafts only.

Transitions between workflow states must be thought through (image from Drupal.org)

You might want some paper to map out all the paths between workflow states that content might go through. The transitions should be named as verbs. If you can't think of a clear, descriptive verb that applies, you can go with "Set state to %your_state" or "Mark as %your_state." Don't sweat the names of transitions too much, though; they don't seem to appear in an editor-facing way anyway.

Don't forget to allow editors to undo transitions. If they can change the state from "Needs Work" to "Needs Review," make sure they can change it back to "Needs Work."

You must allow Non-Transitions

Make sure the transitions include non-transitions. The transitions determine which state options will be available when you edit content. In the above (default core) example, it is not possible to edit archived content and keep its state as archived; you'd have to change the status to published and then back to archived. In fact, it would be very easy to accidentally publish what you had archived, because editing the content will set it back to published as the default. Therefore, make sure that draft content can stay as draft when edited, and so on.

Transition Ordering is Crucial

Ordering of the transitions here is very important because the state options on the content editing form will appear as a select list of states ordered by the transition order, and it will default to the first available one.

If an editor misses setting this option correctly, they will simply get the first transition, so make sure that first transition is a good default. To set the right order, you have to map each state to what should be its default value when editing. You may have to add additional transitions to make this all make sense.

As for the ordering of workflow states themselves, this will only affect ordering when states are listed, for example in a Views exposed filter of workflow states or within the workflows administration.

Minimize Accidental Transitions

But why wouldn't my content's workflow state stay the same by default when editing the content (assuming the user has access to a transition that keeps it the same)? I have to set an order correctly to keep a default value from being lost?

Well, that's a bug as of 8.5.3 that will be fixed in the next 8.5 bugfix release. You can add the patch to your composer.json file if you're tired of your workflow states getting accidentally changed.

Test your Workflow

With all the states, transitions, transition ordering, roles, and permissions, there are plenty of opportunities for misconfiguration even for a total pro with great attention to detail like yourself. Make sure you run through each scenario using each role. Then document the setup in your site's editor documentation while it's all fresh and clear in your mind.

What DOES Published EVEN MEAN ANYMORE?

With Content Moderation, the term "published" now has two meanings. Both content and content revisions can be published (but only content can be unpublished).

For content, publishing status is a boolean, as it has always been. When you view published content, you will be viewing the latest revision, which is in a published workflow state.

For a content revision, "published" is a workflow state.

Therefore, when you view the content administration page, which shows you content, not content revisions, status refers to the publishing status of the content, and does not give you any information on whether there are unpublished new revisions.

Where's my Moderation Dashboard?

From the content administration page, there is a tab for "moderated content." This is where you can send your editors to see if there is content with drafts they need to review. Unfortunately, it's not a very useful report since it has neither filtering nor sorting. Luckily work has been done recently to make the Views integration for Content Moderation/Workflows decent, so I was able to replace this dashboard with a View and shared the config.

My Views-based Content Moderation dashboard

Reviewer Access

In a typical editorial workflow, content editors create draft edits and then need to solicit feedback and approval from stakeholders or even a legal team. To use content moderation, these stakeholders need to have Drupal accounts and log in to look at the "Latest Revision" tab on the content. This is an obstacle for many organizations because the stakeholders are either very busy, not very web-savvy, or both.

You may get requests for a workflow in which content creation and review takes place on a non-live environment and then require some sort of automated content deployment process. Content deployment across environments is possible using the Deploy module, but there is a lot of inherent complexity involved that you'll want to avoid if you can.

I created an Access Latest module that allows editors to share links with an access token that lets reviewers see the latest revision without logging in.

Access Latest lets reviewers see drafts without logging in

Log Messages BUG

As of 8.5.3, you may run into a bug in which users without the "administer content" permission cannot add a revision log message when they edit content. There are a few issues related to this, and the fix should be out in the next bugfix release. I had success with this patch and then re-saving all my content types.

May 18 2018
May 18

I'm sometimes asked for an overview of my general approach to PatternLab. Simple: put everything for each component in the same directory!

When working with PatternLab, which I use for all my Drupal themes, including the theme for this website, I don’t use the full atomic approach of atoms > molecules > organisms > etc. I’m sure many people seriously disagree with me on that (I do think it’s a very clever concept). Instead, I’ve renamed things to match the language we use with our clients.

I tried talking about atoms and molecules to some clients and their eyes glazed over. Clients do not want a science lesson. They do not want to be told that we are going to take two of these atoms, mix them with one of those atoms, and eventually we'll have water. No, they want to know what their final website is going to look like. When I changed the conversation and started talking about ‘Building Blocks’ (what we call our Drupal paragraph types), site blocks (Drupal's search block, branding block), display types (Drupal's view modes such as teaser, search result), etc., they immediately understood. Then we started hearing things like, "Oh, so we can create a page by adding a number of different building blocks?" and "I see, so the search results page is made up of a group of pages using the 'Search Result' display type?" And my response: "Yes!" You see, we are using plain English to ease understanding.

Another aspect of my approach that I really like is that _everything_ for each of my components lives in the same directory. For example, for a nested paragraph component such as an accordion (where we need a paragraph type called 'Accordion' and one called 'Accordion Item'), every template, CSS, JS, README, JSON, and YAML file is in the same folder. That means when I want to reuse one in another project, I don’t need to remember what sub-particles (atoms/molecules) are used to create the organism. It also means my CSS is scoped to that specific component and doesn’t bleed out of it, so making changes or adding new features is very easy: you just scope the new component's CSS to it, so it won't affect other previously-created components.

Now the top bar of my PatternLab that used to say Atoms | Molecules | Organisms, etc has tabs for:

  • Base
    • Colours
    • Spacing
    • Breakpoints
  • Basic Elements
    • Headings
    • Paragraphs
    • Lists
  • Site Blocks (Drupal Blocks)
    • Search Block
    • Login Block
    • Branding Block
  • Building Blocks (Paragraph Types)
    • Accordion
    • Image with Text
    • Video
  • Content
    • Display Types (View Modes)
      • Teaser
      • Card
      • Search Result
    • Lists (Views)
      • Blog
      • Search Results
    • Content Types
      • Basic Page
      • Blog
      • Event
  • Page Sections (Regions)
    • Header
    • Footer
    • Sidebar
  • Sample Pages
    • Homepage
    • Blog Listing Page
    • Blog Node

After that, I have Backstop.js set up to regression test all of these, so each time I create a new component I can quickly run the visual regression tests and check that nothing has broken. Since all my CSS/JS is scoped to each individual component, it rarely has.

May 09 2018
May 09

Lightning 3.1.4 (released on 9 May) ships with a completely new content scheduler built in React. Here's an example of an editor scheduling a piece of content to be published on Friday and archived the following Monday:

We had four main goals when creating this scheduler:

  1. Simplify the UX [Issue #2935198]
  2. Make the scheduler available on content creation forms [Issue #2935105]
  3. Add the ability to schedule multiple transitions in series [Issue #2936757]
  4. Give content editors the ability to set the date that content should be published [Issue #2935715]

For the first goal, we had a related team goal of creating something in React. Originally we had thought that might be an internal tool, something that never saw the light of day, or perhaps a configuration form. But when we started digging into the UX challenges of the scheduler, we realized this was a great fit. The result is a responsive, intuitive widget that sits quietly out of the way until you need to interact with it.

The second and third goals were to fix a couple of regressions that were introduced when we moved away from the Scheduled Updates module as part of the migration to Content Moderation. Both are table-stakes functionality for a usable scheduler.

Finally, the fourth goal comes from the reality that, in many workflows, content authors are often the ones who know when content should actually be published. But content authors usually don't have permission to publish content, and as a result they can't schedule that transition either. This system allows site builders to create an "Approved for publish" state. Content authors can then schedule a transition from that state to "Published", but the transition won't actually happen unless an editor moves the content into the "Approved for publish" state first. Look for more documentation about how we expect people to use that functionality in the near future.

You can find a sandbox of Lightning Scheduler - along with Lightning's other features here:
https://lightning.acquia.com/lightning (admin/admin)

Or update to Lightning 3.1.4 yourself:

$ composer require acquia/lightning:3.1.4 --no-update
$ composer update acquia/lightning --with-all-dependencies

Thanks to everyone who helped with testing and UI enhancements. Please file issues in Lightning Workflow's issue queue.
