Aug 15 2017

In part two of our Webform tutorial, we’ll show you how to create multipage forms, apply conditional logic, create layouts and much more!

We’ll take the simple newsletter signup form created in part one of this tutorial and add additional pages. Then we’ll demonstrate how to show or hide an element depending on the selection made on another element. We’ll also look at layouts and then finish off with an overview of some of the other great features Webform has to offer.

Multipage Forms

For forms with many elements, it’s best to spread them across two or more pages. In this section, we’ll take the form we created in part one and move some of the elements to make a two page form. We’ll also add a preview page and make changes to the confirmation screen.

1. Starting from the Edit tab of the Webform created in part one, click on “Add page”.

Screenshot highlighting the Add page button which is above the first element.

2. Give the first page a title of “Your details”.

3. If you want to change the default “Previous page” and “Next page” text then you can do this in the “Page settings” section. We’ll stick with the defaults.

4. Click on Save to create the page.

5. Repeat the process to create a page called Feedback.

Screenshot showing the two new pages added at the bottom of the Webform elements.

6. On the Edit tab, drag the “Your details” page to the top.

7. Drag the “First name” and Email elements to the right a little so they are indented as shown below.

8. Drag the Feedback page above the checkboxes.

9. Drag the checkboxes and radio buttons to the right so they are also indented.

10. Click on “Save elements”.

Screenshot showing the pages moved to the correct places, as discussed in the text.

Clicking on the View tab will reveal a multipage form. You’ll see the page names on the progress bar at the top of the form. You can remove the progress bar in the form settings if you prefer.

Screenshot of the multipage Webform.

Preview and Submission Complete Pages

For long forms, it can be useful for users to preview the information before submitting it. Also, the default message a user receives after clicking on submit is “New submission added to Newsletter signup” or similar, so changing the message is normally a good idea. We can make both of these changes from the Settings tab for our Webform.

1. From the Edit tab of our form, click on the Settings sub-tab.

Screenshot highlighting the Settings sub-tab which is under the main Edit tab for a Webform.

2. Scroll down to the “Preview settings” section. The Optional radio button will allow users to skip the preview screen. We’ll select Required, so that users will always preview the information before submitting it. You can alter various aspects of the preview page but we’ll stick with the defaults.

3. Scroll further down the page to the “Confirmation settings” section.

4. It’s worth reading through the options under “Confirmation type”. We’ll stick with the default of Page as this will work well for our simple example.

5. For “Confirmation title”, enter “Newsletter signup successful”.

6. Enter “Thank you, [webform-submission:values:first_name]. You have signed up to our newsletter.” for the “Confirmation message”.

Screenshot of confirmation settings

7. Scroll down to the bottom of the page and click on Save.

When the “First name” element was initially added to the Webform, a key of first_name was created and we’ve used this in our confirmation message. You could also use the information from other form elements by replacing first_name with the appropriate key.

You can find the key listed on the Edit tab of the form, under the Key column, although that column may be hidden on smaller screens. Also, if you edit an element, you’ll see the key shown in small text to the right of the title.
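
For example, assuming the Email element kept its default key of email, the confirmation message could also include a line such as “We will send the newsletter to [webform-submission:values:email].”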

Now if you click on the View tab and fill in the information on the form, you’ll have a preview screen. After signup, you’ll also have a personalized message.

Screenshot showing that the first name entered on the form becomes part of the confirmation message.

Conditional Logic

On page two of our wizard, we have a question asking about interests and then another specifically about JavaScript. Ideally, we only want to show the JavaScript question if the user has expressed an interest in it. This is where conditional logic helps. We can set the second question to respond to the results of the first.

1. From the Edit tab of our form, click on the Edit button for “Which JavaScript framework are you most interested in?”.

2. Scroll down to the “Conditional logic” section.

3. Change the State to Visible.

4. Select “JavaScript [Checkboxes]” under “Element/Selector”.

5. Under Trigger, select Checked.

Screenshot showing conditional logic being added.

6. Click on Save.

Now if you view the second page of the form, you won’t see the question about JavaScript frameworks unless you have selected the JavaScript checkbox.

Screenshot showing that the JavaScript library question only appears if the JavaScript checkbox in the first question is checked.

Conditional logic can be used to show or hide elements, disable them or make them required, depending on the state of other elements. It’s always worth testing that the logic performs as you expect it to, especially for complex forms.

Displaying Webforms

In this section, we’ll show how to change the default URL. We’ll also demonstrate how to attach a Webform to a node and how to display it in a block.

Changing the URL

By default, Webforms have a URL of “/form/name-of-form”, so in our case it’s “/form/newsletter-signup”. You can change the form part of the URL to another word for all forms within the global Webform Settings tab (administrative toolbar, Structure, Webforms). Instead of doing that, we’ll add an alias for our form.

1. From the Edit screen of the form, click on the Settings sub-tab.

2. Scroll down to the “URL path settings” section.

3. Here you can add URL aliases. We’re going to use “/signup” for the first box and “/signup-complete” for the second.

Screenshot showing the URL path settings being added.

4. Click on Save.

Now, both the form and the confirmation page will have a shorter URL.

Attaching a Webform to a Node

Webform also allows you to attach a form to a node. In this example, we’ll attach our form to the “Basic page” content type.

1. From the administrative toolbar, click on Structure and then “Content types”.

2. Click on “Manage fields” for “Basic page”.

3. Click on “Add field”.

4. Under “Add a new field”, select Webform.

5. Give the field a label, such as “Newsletter signup”.

6. Click on “Save and continue” and then “Save field settings” on the next screen.

7. You should now be on the Edit tab for the new field. In the “Default value” section, select “Newsletter signup” from the list.

8. Click on “Save settings”.

As with any field, you’ll be able to adjust its position relative to other fields, so you can move the Webform to any part of the node.

Now when you create new content using the “Basic page” content type, you’ll have the newsletter signup Webform attached.

Screenshot showing a node with a Webform attached.

Displaying a Webform in a Block

Another option for displaying a Webform is to create a block. This offers flexibility on where the Webform can be placed on the page.

1. Navigate to Structure on the administrative toolbar, and then “Block layout”.

2. Next to the appropriate region of your theme, click on “Place block”. We’re going to add the block to “Sidebar second” for our Bartik theme.

3. Find Webform in the list and click on “Place block” next to it.

4. Change the title to “Newsletter signup”.

5. Under Webform, type News and select “Newsletter signup” when it appears.

6. In the Visibility section, adjust the settings as you would with any block. We’re going to enter “/signup*” for “Hide for the listed pages” on the Pages tab, so the block will be hidden on the “/signup” and “/signup-complete” pages.

7. Click on “Save block” to complete the process.

The signup form now appears in a block on the right of our screen for most pages.

Screenshot of the Webform in a block.

Note that Webform adds another tab to the Visibility section for block configuration. This allows you to select which Webforms the block should be displayed on.

Creating Layouts

To simplify laying out elements on a page, Webform includes a variety of containers including divs, expandable details and fieldsets. If you add a new element, you’ll see all the containers listed together.

Screenshot listing the containers - Container, Details, Fieldset, Flexbox layout, Item, Label.

In this section, we’ll look at the Flexbox container and use it on the second page of the form. This will allow the two questions to sit side-by-side on a large screen, but they’ll automatically be vertically stacked on smaller screens.

1. From the Edit screen of our Webform, click on “Add element”.

2. Find “Flexbox layout” in the list and click on “Add element” on the same line.

3. Give the element a key, such as newsletter_interests.

4. The defaults will work fine for this example, so click Save to create the container.

5. Drag the new Flexbox layout element, so that it’s just below Feedback and make sure it’s indented.

6. The checkboxes and radio buttons should now be below the Flexbox layout element. Move them both to the right so they are further indented.

Screenshot showing flexbox layout with indented questions underneath.

7. Click on “Save elements” to complete the process.

Now when you fill in page two of the form and tick the JavaScript box, the second question will appear to the right on large screens. If there isn’t enough room for the questions to sit side-by-side then the second question will drop down below the first.

Screenshot showing two questions side-by-side.

This is just a simple example of what’s possible with layouts. Later in this tutorial, we’ll install the Webform Examples module, and its “Example: Layout: Flexbox” form shows how many different elements can be displayed across a page.

Even More Features

We could carry on writing about the Webform module for weeks as it includes so many great features. In this section, we’ll give a brief overview of some other features that are definitely worth looking at.

Reducing Spam

Any form on the internet will be a target for spammers so it’s essential to have systems in place to reduce this to a minimum. Webform works with the spam protection modules Antibot, CAPTCHA and Honeypot and using a combination of these should help cut down on unwanted messages.

Head to Structure on the administrative toolbar and then Webforms. Click on the Add-ons tab and then scroll down to the “Spam protection” section and find the links to each of the modules.

Once installed, to configure Antibot and Honeypot, click on Webform’s global Settings tab. Then expand “Third party settings” within the “Webform settings” section. For CAPTCHA, there is an element that can be added to any Webform.

YAML

The Edit tab on a Webform has a “Source (YAML)” sub-tab which exposes the underlying YAML markup. This allows you to copy code to another form, add more elements and make changes to forms. For forms that use a lot of similar elements, copying and pasting with the appropriate changes can be a lot quicker than manually adding each element.

In the code below, which is the first section of our YAML markup, we’ve added a Surname text field by copying the markup for first_name and editing it. We’ve also changed the title for the second page from Feedback to Interests.

your_details:
  '#type': wizard_page
  '#title': 'Your details'
  first_name:
    '#type': textfield
    '#title': 'First name'
    '#required': true
  surname:
    '#type': textfield
    '#title': 'Surname'
    '#required': true
  email:
    '#type': email
    '#title': Email
    '#required': true
feedback:
  '#type': wizard_page
  '#title': Interests

Saving the form and clicking on the View tab shows the new form element in place and the new name for the second page of our form.

Screenshot showing that Surname has been added and the second page is now called Interests.

If you’ve not used YAML before then be very careful with spaces. When items are nested, always use two spaces to indent. Thankfully, the interface will point out any lines that have been incorrectly formatted. The screenshot below shows what happens when you add an extra space.

Screenshot showing that there is an indentation issue near surname in the YAML file.
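
To make the problem concrete, here is the surname element from the markup above with a single extra space added before '#title', which is enough to trigger an error like the one in the screenshot:

  surname:
    '#type': textfield
     '#title': 'Surname'
    '#required': true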

Note that some changes made to the YAML markup will require you to remove data first. For example, if we had also changed the key for the Feedback page, which is shown as feedback: in the YAML code above, then we would have needed to clear submissions or delete the page in the UI and then re-create it.

You can find out more about exporting and importing Webforms using YAML in this video.

Debugging

To help track down issues, you can enable debugging for a form. Start off at the Edit tab of the form and click on the “Emails / Handlers” sub-tab. Then you just need to click on “Add handler” and follow through the screens to add a Debug handler. The screenshot below shows the type of information that will be displayed as you move through a form.

Screenshot showing debugging output with keys and values entered for each element.

The Examples Module

To get an idea of the capabilities of Webform, it’s a good idea to look at the Webform Examples module. You can enable this from the Extend tab of the administrative toolbar or by using Drush with the following command:

drush en webform_examples

This will install many Webforms that demonstrate different aspects of the module.

Screenshot listing the nine example Webforms available.

The “Example: Style Guide” is a good starting point as it shows all the different elements and also has some photos of cute kittens.

Settings, Modules and Add-ons

If you have been following along with this tutorial, you will have seen a huge array of settings. It’s worth spending some time looking through all the global settings available for Webform as well as the settings for individual Webforms and for different elements. These are just some of the settings available:

Screenshot listing some of the many Webform settings.

Webform includes a number of modules including starter templates and dev tools and you can view these on the Extend tab of the administrative toolbar by filtering using the word Webform. If you need to extend the functionality of Webform further, then the first place to look is the Add-ons tab.

Summary

In this part of the tutorial, we’ve looked at multipage forms and shown how to display Webforms in a variety of ways. We’ve used conditional logic to show or hide an element depending on the state of another element. We’ve also given an overview of some of the other great features included in the Webform module.

FAQ

Q: Is there an online demo of Webform?
You can test the features of Webform on simplytest.me.

Aug 09 2017

We are excited to announce the completion of the second major development phase of our engagement with Forcepoint: improving the authoring experience for editors and implementing a new design.

Reimagining the Editorial Experience

Four Kitchens originally launched Forcepoint’s spiffy new Drupal site in January 2016. Since then, Forcepoint’s marketing strategy has evolved, and they hired a marketing agency to perform some brand consulting, while Four Kitchens implemented their new approach in rebuilding the site. We also took the opportunity to revisit the editorial experience in Drupal’s administrative backend.

Four Kitchens has been using Paragraphs on some recent Drupal 8 projects and found it to be a compelling solution for clients that like to exert substantive editorial control at the individual page level—clients like Forcepoint. Providing content templates for markup that works hand in hand with the component-driven theming approach we favor is a primary benefit we get from using Paragraphs for body content.

Editorially, the introduction of Paragraphs gives Forcepoint a more flexible means of controlling content layout for individual pages without having to rely as heavily on Panels as we did for the initial launch. We’re still using Panels for boilerplate and some content type specific data rendering, but the reduced complexity required for editors to lay out body content will allow their content to evolve and scale more easily.

In addition to using paragraphs for WYSIWYG content entry, Forcepoint editors are now also able to insert and rearrange related content, Views, Marketo forms, videos, and components that require more complex markup to render.

We’re big proponents of carefully crafted content models and structured data. Overusing Paragraphs runs the risk of removing some or even a lot of that structure. Used judiciously however, it allows us to give clients like Forcepoint the flexibility they want while still enforcing desirable constraints inherent in the design.

Congratulations!

We’ve been working with Forcepoint for over a year now, and are incredibly proud of the solutions we’ve created with them. This kind of close relationship and collaboration is what we strive for with all of our partners. We thrive on understanding our partners’ underlying business challenges and goals, collaborating with their teams, and creating solutions that delight their customers.

The Forcepoint team was led by Chris Devidal as the project manager, working alongside Taylor Smith who acted as internal product owner. Jeff Tomlinson was technical lead and assisted Patrick Coffey who adeptly wrangled all the difficult backend issues. Significant frontend technical leadership was provided by Evan Willhite who worked with Brad Johnson to implement a challenging design. Props also go to Keith Halpin, Neela Joshi and Adam Bennett at Forcepoint for their many contributions.

Jeff Tomlinson

Jeff Tomlinson enjoys working with clients to provide them with smart solutions to realize their project’s goals. He loves riding his bicycle, too.

Aug 08 2017

It’s easy to forget that lots of ecommerce platforms don’t have a content management system (CMS). It’s something we take for granted, because Drupal Commerce was built on a CMS. That’s how it started out. But that’s not usually the case. If you want CMS functionality with Magento, for instance, you have to add on a CMS (incidentally, the recommended CMS to pair with Magento is Drupal).

So with most other ecommerce platforms, to get CMS functionality you have to pair them with something like WordPress or SharePoint. Ecommerce platforms handle products, and that’s about it. They don’t handle tutorials, how-to videos, blog posts, or any of that other stuff.

This leads to the problem where you have a shop, and you have a catalog or brochure site. So you have all these product pages that explain all about the products with videos and guides and stuff, and then you have the separate shop. It doesn’t really make sense for it to BE separate; it’s only done that way because of the limitations of the technology. Apple is a classic offender: the Apple store is completely different from the Apple product pages.

The majority of ecommerce sites are set up that way, with one site that tells you about the products and an entirely separate site that lets you actually purchase the products. Sometimes you can fake it on the front end to make it look like they’re coming from the same place, but that’s far more difficult than just doing it properly on the back end.

On the other hand, Amazon is an example of an online retailer that doesn’t really do content. They just have product pages. If you want additional information on an Amazon product, you go somewhere else, like to the website of the manufacturer. Amazon basically assumes you’ve already decided to buy the product, and you’re just purchasing it from Amazon.

To summarize: Having the product pages and cart functionality truly meshed (the way they are in Drupal) is super cool. They’re built on the same platform, so you don’t need to think about combining them. It’s already done.

To learn more, check out our High Five episode “Drupal Commerce 2.x – CMS.”

Subscribe to our YouTube Channel for more Drupal Commerce goodness!

Aug 02 2017
ao2

Drupal 8 is quite an improvement over previous versions of Drupal with regard to the reproducibility of a site from a minimal source repository.

The recommended way to set up a new Drupal 8 site is to use composer, and the most convenient way to do that is via drupal-composer/drupal-project, which brings in Drush and drupal-console as per-site dependencies; from there the site can be installed and managed.
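
For reference, at the time of writing the template’s documented bootstrap step is a single Composer command along these lines (the target directory name is just an example):

composer create-project drupal-composer/drupal-project:8.x-dev my_site --stability dev --no-interaction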

However, the fact that Drush and drupal-console are per-site dependencies poses a problem: they are not available until the site dependencies have been downloaded, which means that it’s not really possible to add “bootstrapping” commands to them to make it easier to set up new projects.

This is what motivated me to put together drupal-init-tools: a minimalistic set of wrapper scripts which can be used when Drush and drupal-console are not available yet, or when they have to be combined together to perform a task in a better way.

drupal-init-tools has been developed mainly to speed up prototyping, and it is particularly useful to set up sites under ~/public_html when the web server is using something like the Apache userdir module; however I feel it could be useful in production too after some cleanups.

drupal-init-tools standardizes the setup and installation of new Drupal projects: for example the drin new command makes sure that the original drupal-composer/drupal-project repository is still accessible from an upstream git remote so that the developer can easily track changes in drupal-composer/drupal-project and keep up if appropriate for their particular project.

Some drin commands also provide a nicer interface for frequent and important tasks, like the actual site-install step, or the creation of an installation profile that could replicate the installed site.

Here are some examples of use taken from the drin manual.

Create and install a new Drupal project:

cd ~/public_html
drin new drupal_test_site
cd drupal_test_site
$EDITOR bootstrap.conf
drin bootstrap --devel

Create an installation profile from the currently installed project:

drin create-profile "Test Profile" test_profile

Clean and rebuild the whole project to verify that installing from scratch works:

drin clean
drin bootstrap

A quick way to test drupal-init-tools is this:

git clone git://git.ao2.it/drupal-init-tools.git
cd drupal-init-tools/
make local
./drin --help

Give it a go and let me know what you think.

If it proves useful to others I could have it packaged and uploaded to Debian to make it more “official”; IMHO it makes sense to have a small meta-tool like drin available as a global system command.

Aug 02 2017

To the average person on the street, a product is something you buy. Say Joe Blow is looking for a T-shirt. Specifically, he wants a blue T-shirt with the logo of his favorite sports team on the front. He goes to an online store, selects the T-shirt with the appropriate logo, chooses the blue color, and indicates the size he needs. Simple. He has now purchased a product.

But in the ecommerce world, a product is much more complicated.

Products: More Than Meets the Eye

If you’re the owner of that online store, you know that every size of that shirt is an individual product that has its own SKU. Knowing whether someone ordered a large or small shirt is important for inventory (so you know how many you have left in stock), pricing (maybe you sell different sizes at different price points), and processing (so you know exactly what has to be shipped). Different colors are also different versions of the same product. So a “product” is really a collection of a whole bunch of products. But when your customers are viewing it, they think of the collection as the single product.

How Drupal Commerce 2.x Handles Products

In 2.x, you have attributes that are used to make up these different products. Each different color is going to be a variation, and each different size is a variation, and each different size + color combination is also a variation. So when you build attributes (size, color, etc.), you actually build products.

You can also have customizations. If Joe Blow wanted his name on the back of that T-shirt, that isn’t really an attribute, because it doesn’t change the product stock. The store would just print his name on a standard T-shirt. That’s an option that gets applied to a product.

Commerce 2.x also lets you set product types, so you can handle physical, digital, and subscription products differently (you don’t need sizes and weights and things for a digital good, for instance).

How This Differs From Drupal Commerce 1

The main difference is that it’s more built in now. In Commerce 1, there were variations, and then you built your own product by making a node (which was actually pretty confusing to a lot of people). In Commerce 2.x, you set up a product, and add variations to it, and it’s a much more structured process that takes you through what you need to do.

The bottom line: products in Commerce 2.x are not vastly different; they’re more of an iterative improvement over Commerce 1.

To learn more, check out our High Five episode “How Drupal Commerce 2.x handles Products.”

Subscribe to our YouTube Channel for more Drupal Commerce goodness!

Aug 01 2017

The Webform module in Drupal 8 makes it easy to create complex forms in no time. The basics are easy to learn but when you start digging below the surface, the huge power of the Webform module starts to reveal itself.

While the Contact module in Drupal 8 core does allow you to create simple personal and site-wide contact forms, its functionality is limited. This is where the Webform module steps in.

In the first part of this tutorial, we’ll use some Webform elements to create a simple but fully functioning form. We’ll show what you can do with the results of submissions and then add some additional elements. We’ll also demonstrate how one of the built-in JavaScript libraries can improve the look of form elements.

In part two, we’ll add additional pages to our Webform, apply conditional logic, show how to create great layouts and much more!

Getting Started

The simplest way to install Webform is to use Drush. The following three commands download and enable the Webform module and then download all the required libraries. This tutorial is based on Webform version 8.x-5.0-beta15.

drush dl webform
drush en webform webform_ui
drush webform-libraries-download

If you get “Unable to unzip” errors then install a command line tool capable of unzipping files and try again. On our minimal CentOS setup, we needed to install the unzip package.

Alternatively, you can download the module. Note, although it’s better to download the libraries, Webform will use a CDN if any libraries are not available locally.

To see what libraries are used and to check the status of each, from the administrative toolbar, click on Reports, then “Status Report” and look for the entries that start with “Webform library”. Specific libraries can be disabled within Webform’s Settings tab if required.

Screenshot showing the status of Webform libraries.

If you plan to allow users to upload files using a Webform then please read the note below about setting up private files. Incorrect configuration could be a significant security risk.

A Quick Tour of the Webform Interface

To add and manage Webforms, click on Structure on the administrative toolbar, then Webforms. You should see a screen similar to the one below.

Screenshot highlighting six different areas in the Webform interface.

Some of the points of interest are:

1. The tabs along the top are self-explanatory and we’ll look at these throughout the tutorials.

2. You’ll see “Watch video” buttons in various places in the Webform module. These short videos are a great way to learn about Webform features.

3. Add a new Webform.

4. The “How can we help you?” button is a quick way to find out more about the module, report issues and become involved with the Drupal community.

5. “Filter webforms” is useful if you have a large number of Webforms.

6. Buttons for each Webform allow you to download submissions and edit forms.

Creating a Simple Form

In this tutorial, we’ll start off with a very simple newsletter signup form and then later we’ll add more complex elements. We’ll initially create two elements – first name and email address. One way to do this would be to duplicate the existing Contact form by clicking on the down arrow on the Edit button and then selecting Duplicate. We could then edit the form. The other way is to create the form from scratch and we’ll show you how to do that here.

1. Click on the “Add webform” button.

2. Give the form a name such as “Newsletter signup” and an appropriate description if you want.

3. If you plan to have a lot of forms then adding categories can be useful. We’ll add a Newsletter category by selecting the “Other…” option.

4. Click on Save to create the form.

Screenshot of the add Webform screen.

On the next screen, we can start adding elements to our form.

1. Click on the “Add element” button.

2. Use the “Filter by element name” box at the top to find “Text field” and click on “Add element” next to that.

Screenshot of the text filter helping to find the text field element.

3. Enter “First name” for the title.

4. We’ll leave all the other settings as they are. Click on Save to finish.

5. Click on “Add element” again.

6. This time, find the Email element and click on “Add element”.

7. Enter a title of Email and click on Save.

Note, if you want to make sure the user entered their email address correctly then you can use the “Email confirm” element which adds an additional confirmation box. Many of the other elements also have variations, so it’s worth reviewing these to make sure you’ve picked the best element for your needs.

You should now have a screen that looks like this.

Screenshot of the edit tab after two elements have been added.

Click on the View tab to see the form.

Screenshot of the view tab after two elements have been added.

To keep matters simple to start with, we’ve used a lot of the default settings for the elements. While these settings work well for a lot of cases, it’s worth looking through the different options for each element type as there are a number of ways to customize the way an element appears.

There are a couple of things that would be good to change at this stage. Firstly, it may be better to change the Submit button text to Signup. Also, currently neither of the fields are required, so a user could submit a blank form.

1. Click on the Edit tab.

2. To the right of “Submit button(s)”, click on Customize.

Screenshot showing that the customize button is below the other elements.

3. Enter Signup under “Submit button label”. We’ve also changed the Title to Signup.

4. Click on Save.

5. Tick Required for both the first name and email elements and then “Save elements”.

Screenshot highlighting the required checkboxes next to each element.

Note, with a customized submit button you can adjust its position relative to other elements if you want to.

Now, when you click on the View tab, you should see that both fields are required and the submit button is labelled Signup.

Screenshot showing that the elements are now required and the submit button is now labelled Signup.

Testing the Form and Viewing Results

To test the form, you can either manually add data and submit it, or you can use the Test tab to generate data. If the Devel generate module is installed then you can use that to automatically generate Webform submissions.

Screenshot of the test tab with information automatically added.

Sign up to the newsletter a few times and then click on the Results tab. You can click on the Customize button to change the columns shown if required. You can also mark particular submissions with a star or add administrative notes, and we’ve done both for submission number 5. Looking at our list of automatically generated submissions, it appears that the Webform maintainers like The Beatles!

Screenshot of the webform submissions with first names of John, Paul, George and Ringo.

The Edit button next to each submission has a number of options including editing, viewing and deleting submissions. The Download sub-tab allows you to export the submissions in a variety of ways and you can filter based on date or submission ID, so you don’t need to download all the data in one go.

Screenshot of the download sub-tab.

Emailing Results

While the Results tab allows you to review submissions, it can also be useful for results to be emailed once submitted.

1. Head to the Edit tab and click on the “Emails / Handlers” sub-tab.

2. Click on “Add email”.

3. In the “Send to” and “Send from” sections, we’ll just use the default settings. This will use the email address and name that are configured for the site (administrative toolbar – Configuration, “Basic site settings”).

4. The email that’s sent can be customized in the Message section if required but we’ll just stick with the default message.

5. Click on Save to create the email handler.

Screenshot after the email handler added.

You can also add handlers to post submissions to a remote URL and enable debugging.

Adding Checkboxes and Radio Buttons

In this section, we’ll add checkboxes and radio buttons and enhance their appearance using the jQuery iCheck library. If you filter elements using the word “checkbox” then you’ll see five different options.

  • Checkbox – a single checkbox.
  • Checkboxes – a group of checkboxes using custom or pre-defined lists.
  • Checkboxes other – a group of checkboxes with an “Other …” option to allow the user to enter their own information.
  • Entity checkboxes – checkboxes using entity references.
  • Term checkboxes – checkboxes using taxonomy terms.

We’re going to create checkboxes asking the user about their interests.

1. From the Edit tab, click on the Elements sub-tab, then “Add element”.

2. Use the filter to help find “Checkboxes other” and select “Add element”.

3. Give it a title of “What are your main interests?”.

4. With long titles, it’s often worth shortening the associated key as this will be used for CSS classes and referred to in other areas of Webform. Click on the Edit link to the right of the title and change the key to main_interests.

Screenshot of title and key for checkboxes other element.

5. Scroll down to the “Elements options” section.

6. The Options list allows you to select from pre-defined lists such as days of the week. For our example, we want to enter values, so leave Options set to “Custom…”.

7. For each option, we need to add a value and text pair. The “Option value” is used internally and you’ll see the values in the markup and CSS IDs and classes that are added. The “Option text” will appear to the user on the form. We’ve created three options with pairs html / HTML, css / CSS and js / JavaScript, as shown below.

Screenshot of value - text pairs.

8. Scroll down to the “Other option settings” section. These settings apply to an additional item that will be added to the end of the list of checkboxes. The default settings create an “Other…” checkbox with an additional textfield if other is selected. The settings are customizable but for our example we’ll stick with the defaults.

9. Click on Save to complete the process.

Note, in the “Elements options” section, the number of columns that checkboxes are displayed in can be adjusted. We’ve stuck with a single column as we’ll be showing how to lay out two elements next to each other in part two of this tutorial.

Using a similar procedure, other elements such as radio buttons and select lists can be created. We’re going to add radio buttons.

1. Add the Radios element.

2. Give it a title of “Which JavaScript library are you most interested in?”.

3. Change the key to js_library by clicking on Edit to the right of the title.

4. Add options for jQuery, AngularJS and React as shown below.

5. Leave all the other options at their default settings and click on Save.

Screenshot of value - text pairs.

In part two of this tutorial, we’ll use conditional logic to show how you can hide this question unless someone selects the JavaScript checkbox under “What are your main interests?”.

Clicking on the View tab should reveal elements that look similar to this.

Screenshot of Webform view tab with checkboxes and radio buttons.

Enhancing Checkboxes and Radio Buttons

As you can see from the screenshot above, checkboxes and radio buttons have the default look. Thankfully, Webform includes the jQuery iCheck library and this makes it simple to enhance these elements. You can enable iCheck for individual elements by editing their properties and scrolling down to “Enhance using iCheck”, but we’ll show how to enable it globally.

1. From the administrative toolbar, click on Structure, Webforms and then the Settings tab.

2. Expand the “Element Settings” section.

3. Within that section you’ll find “Checkbox/radio settings”. Expand this also.

4. Select an option from “Enhance checkboxes/radio buttons using iCheck” that works with your theme (it’s worth trying the different styles). In our screenshots, we’ve used the “Square: Blue” option.

5. Scroll to the bottom of the screen and click on “Save configuration”.

Now our Webform has greatly enhanced checkboxes and radio buttons.

Screenshot with enhanced checkboxes and radio buttons.

Other Elements

We’ve only discussed a few elements from a very long list. It’s worth spending time looking through the full range of Webform elements and trying them out.

If you add any of the file elements then please ensure that you’ve followed advice about file uploads. Allowing non-trusted users to upload files to public folders can be a huge security risk. When using a private folder make sure you’ve secured the folder correctly and ideally, only allow trusted users to upload files. It’s worth reading “Drupal file upload by anonymous or untrusted users into public file systems – PSA-2016-003” before implementing file elements on a production server.

Summary

In this part of the tutorial, we’ve shown how to create a simple form and view submissions. We’ve discussed Webform elements and looked at how to enhance checkboxes and radio buttons.

In part two, we’ll show you how to create multi-page forms, use conditional logic to show and hide elements, create layouts and much more!

FAQs

Q: How can I get more information about Webform?
The Webform documentation page has lots of useful information and videos.

Q: What are the differences between Webform and the Contact module?
You can find a detailed comparison here.

Jul 26 2017

If a customer comes to your Drupal Commerce site and goes to the trouble of going through the checkout process and entering their payment information, you really need to make sure that whatever they ordered actually arrives on their doorstep. That’s what OMS and fulfillment are all about.

What exactly are OMS and fulfillment?

OMS stands for Order Management System. Order management and fulfillment are two sides of the same coin, but there are some key distinctions.

Order management means managing the order as a customer service person—checking that the order is valid, filling it out, answering any customer questions, etc. Fulfillment is the actual act of getting the product to the customer—taking it off the shelf, putting it into a box, and getting it shipped out (or in the case of a digital good, making sure the product actually made it to the customer).

So order management and fulfillment are closely related, but you might use different systems or even different people for each aspect.

How does Drupal handle order management and fulfillment?

Drupal has full order management capabilities out of the box. You can edit orders, change orders, add products, put notes in, change taxes, all that kind of stuff.

Fulfillment is where Drupal is a bit weak. This is a key area for integration with a third party. Maybe you don’t even do your own fulfillment—maybe you process the orders and send them to Amazon, who actually handles shipping them out. Maybe you ship from 20 different locations and have to move products around, so you need a system that can handle such complexities. Commerce 2.x has the basic framework for fulfillment, but it’s early days, and more work needs to be done. But you can integrate with other systems that can take care of that function.

Are any fulfillment systems easier to work with than others?

Fulfillment integration is not usually too complex; normally you’re just pushing the order information to the other system. That said, you’ll want a modern system that has an API that can be worked with (Amazon and Brightpearl are just two examples). Fulfillment has existed since the early days of print catalog ordering and some of the software seems like it’s from that same era—it might be difficult or even impossible to integrate with some older systems.

The bottom line:

Drupal Commerce has good order management out of the box. It has OK fulfillment out of the box, but it can integrate with anything you want (except some crappy legacy systems from the 1970s).

To learn more, check out our High Five episode “Drupal Commerce 2.x – OMS & Fulfillment.”

Jul 20 2017

One of my pet peeves is searching for a local event and finding details for that event… 3+ years ago.

Many Drupal sites feature some sort of event type node. It’s really anything with a start date, and likely, an end date. The problem is, most developers don’t take into account whether or not that content should live on once the end date has come and gone.

Perhaps, in some instances, keeping that content on your site makes sense. In most cases though, it does not.

For instance, my 3 year old was really into dinosaurs. I knew there was a dinosaur exhibit coming to town, but I didn’t quite remember the name. Searching online provided quite a few local results. And many of those results were for events in the past.

Examples

Discover the Dinosaurs (06/21/2014)
http://www.evansvilleevents.com/home/events/discover-the-dinosaurs
(event has since been unpublished!)

DISCOVER THE DINOSAURS ROARS INTO EVANSVILLE! (12/14/2012)
http://www.evansvilleevents.com/home/2012/12/discover-dinosaurs-roars-evansville
(event has since been unpublished!)

Dino Dig! (06/02/2015)
http://www.cmoekids.org/events/community-events/dino-dig

Event from 2 Years Ago

Discover the Dinosaurs Unleashed (02/18/????)
http://www.evansvilleliving.com/event/discover-the-dinosaurs-unleashed

Sometimes sites will even have past events ranking higher in search results than upcoming events.

There’s a whole other blog post I could write about how useful it is to have the year accompanying the day and month on web content — particularly tech blog posts. Was this written in February of this year or 2006? How can I know?!?

The Drupal Solution

For Drupal sites, there’s a relatively easy fix. It requires a small custom module and the contributed Scheduler module.

The Scheduler module is simple and great. Simply enable it for your content type, and enable, at the very least, the unpublish setting. Once that is set up, create a custom module and implement the hook_entity_presave() function.
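
The original snippet isn’t reproduced here, but a minimal sketch of that hook for Drupal 8, assuming a custom module named event_cleanup, an “event” content type and a date range field called field_event_date (adjust these to match your site), might look like this:

<?php

use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_entity_presave().
 *
 * Sketch only: the module, content type and field names are assumptions.
 */
function event_cleanup_entity_presave(EntityInterface $entity) {
  // Only act on nodes of the "event" content type.
  if ($entity->getEntityTypeId() !== 'node' || $entity->bundle() !== 'event') {
    return;
  }
  // Use the end date if one is set, otherwise fall back to the start date.
  $end = $entity->get('field_event_date')->end_value ?: $entity->get('field_event_date')->value;
  if ($end) {
    // Scheduler's unpublish_on field expects a timestamp; unpublish the
    // node one day after the event ends.
    $entity->set('unpublish_on', strtotime('+1 day', strtotime($end)));
  }
}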

This code is pretty self-explanatory. All I’m doing is checking to be sure it’s an event node type that’s being saved, and if so, finding the start and end date values to be used when setting the “unpublish_on” field.

You’ll of course have to make sure your node type and field names match up.

Once that’s set up, any time an event is saved, your node is scheduled to unpublish one day after the end date.

If you have a Drupal 7 site, this same idea can be applied. The code in the hook_entity_presave() will be a bit different.

I wish I could start a massive movement to help clean up web content that should have been unpublished or removed long ago. Until then, hopefully this article finds a few devs so that they can ensure their site isn’t one of those sending out poor results.

Jul 19 2017

Acro Media Sprint Week has wrapped up for another year. For those who are new to this idea, each year we bring most of our staff together to have a week of fun that includes a lot of Drupal contrib work. Contrib work is community development of all aspects of the Drupal software, including code, design/UX, documentation and more. You can read a bit more about it on our pre-sprint week blog post.

Our goals for 2017 were pretty big. We had a lot of people grouped into several teams, each working on a different aspect. Overall, we think we did pretty good!

Keep reading or jump to a section below:

Services

One of the goals of this sprint was to open up services for the Drupal Commerce cart module by adding endpoints for decoupled Drupal or 3rd party usage. This sets the stage for better cart integration with 3rd party APIs.

The team got a good start and were able to accomplish the following:

  • Created /cart/init POST endpoint that will initiate a cart.
  • Created /cart/{order_id} GET endpoint that will return the current state of the cart.
  • Created /cart/{order_id} POST endpoint which accepts a payload with items and quantities to be added to the cart, and updates the cart accordingly.
  • Created unit tests to verify the functionality of the code. This still needs some work but it's a good start.

We don’t have the code to share at the moment, but this start will be further refined and pushed out at a later date. For those interested, you can read more and follow progress at https://www.drupal.org/node/2894400.

UX

The UX team was our largest team this year. This team broke out into several smaller groups to tackle a number of items.

Material Admin Theme

Almost every single Drupal website has a custom theme that gives the website a unique look and feel. However, while the front end is custom, the majority of these sites use the default Drupal theme for the admin side of the site. This is the non-public facing part of the site.

Recently, we’ve wanted something a little more modern for the admin theme. Something that takes good design and pairs it with science based usability decisions. For this, we’ve turned to Google’s Material Design and have started work on a new admin theme using it as a foundation.

Here’s a before and after screenshot of the Material theme in action. We’re hoping to have a 1.0 release soon.

Content list page
(Default Drupal admin theme)


Content list page
(Material admin theme)


Drupal Commerce Dashboard

The main dashboard for Drupal Commerce is still pretty bare bones. It’s basically just a list of links and descriptions, which is Drupal’s default behavior for this type of thing. While this works, the dashboard is the first page many people see when using Drupal Commerce, so, we wanted something more intuitive and stylish.

Prior to Sprint Week 2017, one of our senior front end developers who is spearheading this initiative had already been conceptualizing the dashboard with the Drupal Commerce community. We wanted to take this and get started on the development. Progress was made and the new dashboard is starting to take shape.

View screenshots and follow progress at https://www.drupal.org/node/2885483

Drupal Commerce Setup Wizard

Starting from scratch with Drupal Commerce can be a little confusing if you haven’t done it a few times already. There are a lot of different pieces to a store and so finding all of the initial settings can be somewhat frustrating. This is the motivation behind creating a setup wizard to guide people through the initial setup of a store. A wizard helps get the necessary configuration done quickly and easily.

Some of our team had already done some design/UX groundwork ahead of time. Like the dashboard, this sprint we just wanted to get started on the code. Once completed, the wizard will give users a step by step interface for setting up store details, currencies, taxes, payment processors, etc. Once these initial settings have been completed, you should theoretically be able to get started with the content of the store, such as products.

View screenshots and follow progress at https://www.drupal.org/node/2878968

Currency Setup UX

Sometimes developers aren’t the best at communicating with the rest of the world. Luckily, the rest of the world is also helping to develop the software! This was the case when one of our project managers discovered some confusing language when setting up currencies in Drupal Commerce. What was expected vs. what actually happens when using the primary setup tools didn’t match up, leading to confusion. This has been resolved and will soon make its way into a dev release of Drupal Commerce.

View the details and patch status at https://www.drupal.org/node/289396

Documentation

Another big team this year was the documentation crew. While not very exciting, documentation is critical to the usability of Drupal as well as the ongoing development of it. The documentation team split into two groups, one for core Drupal 8 documentation and one for Drupal Commerce 2.x documentation. Some new pages were created while others were updated for Drupal 8.

Here’s a list of where progress was made. Note that some documentation is pending review and so may be updated.

Drupal 8 Core Documentation

Writing and contributing documentation

Server Scaling and Performance

Database API documentation port to Drupal 8

JavaScript API documentation

Form API confirmation form documentation

Managing content (reorganized documentation)

Installing Drupal 8 with Composer (updated documentation)

Drupal 8 installation

Drupal Commerce 2.x Documentation

Commerce content creation

Commerce configuration

How to contribute to commerce documentation

Added tab support to docs

Updated recipes layout and requirements

Commerce Install

The goal of this team was to create a prototype for a custom Drupal Commerce install. The idea is that a person could select options for their install and then, based on their selections, a Composer file would be dynamically generated to make the initial Drupal Commerce install a snap. Pretty cool stuff.

This Commerce Install team was able to get a working prototype completed! There’s still some work to do before it’s released into the wild, but, it’s getting close. I’m sure we’ll toss up an announcement of some sort when it’s completed.

POS (point of sale)

The Point of Sale module provides an interface for Drupal Commerce to allow in-person interactions with a Drupal Commerce storefront. This is powerful because it allows a single solution for both online and brick and mortar stores.

Currently, the Point of Sale module is only available for Drupal 7. The POS team was tasked with starting to port it over to Drupal 8. They made some good headway! Here’s a recap.

Development Reference

Merged commits

Pending commits

UX issues reviewed or added

Sprint Week 2018

Now the focus shifts to 2018. We’d love to see others in the online community join us for next year’s Sprint! The Acro Media Sprint Week in 2018 will be happening the second week of July (9th to 13th). Mark your calendars and get in touch next year if you’d like to take part with us.

Jul 17 2017

For those who aren’t familiar with the concept of Pattern-Lab (or a Pattern Library), it’s essentially a living style guide, a common tool in modern web development. At their most basic, they are continuously updated documents that help document common design styles for web components, bringing together the intended look & feel with the images and code to build them.

I started to look into this idea around a year and a half ago because I found that each time a new project started, I needed to redo much of the same styling work. Even worse, when the main project was still in development, it was hard to keep an eye on the minor changes that affected other projects, so they would quickly get out of sync with each other. There wasn’t a centralised place to store reusable components. Unfortunately, my initial attempt to push a “shared Pattern-lab” idea didn’t work too well because we had difficulties integrating with various tools and workflows across teams.

Example of a living style guide

Now that our technology stacks at Comic Relief have matured, with a major shift to JavaScript frameworks and CI automation tools, the integration is getting much easier for shared styling libraries. Through inter-team collaboration we managed to make it work with our main development workflows (Drupal and non-Drupal) plus provide a way to use it for non-NPM projects. The result is a Pattern-Lab solution designed to be usable by both our internal and third party teams. We’ve also open sourced it so anyone else can benefit from it too!

Our Pattern-lab reduced duplication of effort as it’s ensured we have a library of reusable components that are common across all our main products. Additionally, this is giving us more consistency in our newer sites, therefore reducing the time it takes for us to deliver. It also means that we now have a one-stop reference guide for both designers and developers working on all our projects.

The Features

To summarise, the Pattern-Lab is built on Node and uses NPM, with the styles themselves written in SASS and the style guide generated by KSS. Every time a pull request is opened on Github, a preview version of the Pattern-lab is deployed to Netlify with Visual Regression tests run with Travis. Once code has passed review and been merged into the master branch, the latest release is automatically pushed to NPM and deployed to production by Concourse.

More details and documentation can be found here:
https://www.npmjs.com/package/@comicrelief/pattern-lab

Visual Regression

One stand out feature for me is the visual regression tests we put in place. These make sure that existing styles aren’t changed unintentionally when we add a style or make a breaking change. A big benefit is that now they’re integrated into our CI pipeline: if the tests fail, the code will not be released. It’s particularly important to make sure that we don’t break one product’s styles while trying to fix another!

Example of a BackstopJS visual regression test report

One of the potential future additions is to extend this to include automated accessibility checks, which would ensure we do our best to make all our sites fully accessible. We could also add unit test coverage reports so we can make sure we cover any areas that have been missed by our regression test suite.

Summary

For me, it’s taken a long time to get this far, but I’ve been really happy with how various teams were able to work together to make the idea into reality. We’re getting a lot of great use out of our Pattern Lab and I hope it helps you if you ever encounter any similar issues.

GitHub: https://github.com/comicrelief/pattern-lab
NPM: https://www.npmjs.com/package/@comicrelief/pattern-lab
CI Pipeline: https://ci.apps.comicrelief.com/teams/main/pipelines/pattern-lab

Jul 17 2017

Once upon a time, people had to thumb through thick tomes of printed material called “catalogs” to find the products they wanted to buy. If you’re old enough to remember looking through the Sears catalog at Christmas time looking for toys to ask Santa for, you know what I’m talking about. (And if you aren’t old enough to remember that, I don’t want to talk to you.)

While print catalogs have largely gone the way of the dinosaurs, the concept of letting people browse a selection of products has not. That’s where online catalog functionality comes in.

What is catalog functionality on a website?

Basically, it’s a listing of products. You need to display multiple products on a page so people can browse through them and pick the one they want. There can be filters and categories and various other ways of going from thousands of products down to a manageable number that people can actually scan through.

How does Drupal Commerce 2.x handle catalogs?

In Commerce 2.x, everything is just search results that come up, but it appears like a catalog. So if you filter by a specific tag or parameter, it presents like a catalog with nice rows of products. But since it’s really just a search result, you can apply all the filters that you would for a search. You can do a keyword search in a category, for instance. Or you can filter by price, brand, color, or any other parameter you care to use. Think of the kind of shopping experience you get on Amazon (only more specific), and you’ll get what we’re talking about here.

How do you know what’s going to be displayed?

There are lots of different options. You can choose to display everything with the “television” tag, for instance. Results can be displayed alphabetically, or you can have the top sellers display first, or you can have products come first and have accessories listed lower down. You can add manual weightings to products, or you can have weightings based on other tags or even on dynamic data. There’s a lot of adaptability.

How is this different from Commerce 1.x?

In Commerce 1, you could use views and display products that way, which allowed for some filtering, but it was pretty basic. In Commerce 2.x, catalogs are now searches, so they’re cool and flexible, and they can do whatever you want.

To learn more, check out our High Five episode “How Drupal Commerce 2.x handles Catalog Functionality.”

Subscribe to our YouTube Channel for more Drupal Commerce goodness!

Jul 13 2017
Jul 13
July 13th, 2017

When creating the Global Academy for continuing Medical Education (GAME) site for Frontline, we had to tackle several complex problems with regard to content migrations. The previous site had a lot of legacy content we had to bring over into the new system. By tackling each unique problem, we were able to migrate most of the content into the new Drupal 7 site.

Setting Up the New Site

The system Frontline used before the redesign was called Typo3, along with a suite of individual, internally-hosted ASP sites for conferences. Frontline had several kinds of content that displayed differently throughout the site. The complexity with handling the migration was that a lot of the content was in WYSIWYG fields that contained large amounts of custom HTML.

We decided to go with Drupal 7 for this project so we could more easily use code that was created from the MDEdge.com site.

“How are we going to extract the specific pieces of data and get them inserted into the correct fields in Drupal?”

The GAME website redesign greatly improved the flow of the content and how it was displayed on the frontend, and part of that improvement was displaying specific pieces of content in different sections of the page. The burning question that plagued us when tackling this problem was “How are we going to extract the specific pieces of data and get them inserted into the correct fields in Drupal?”

Before we could get deep into the code, we had to do some planning and setup to make sure we were clear in how to best handle the different types of content. This also included hammering out the content model. Once we got to a spot where we could start migrating content, we decided to use the Migrate module. We grabbed the current site files, images and database and put them into a central location outside of the current site that we could easily access. This would allow us to re-run these migrations even after the site launched (if we needed to)!
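One way to point the Drupal 7 Migrate module at a copy of the legacy database is to register it as an extra connection in settings.php. This is only a sketch; the connection key, database name and credentials below are placeholders rather than the project’s actual setup:

<?php

// settings.php: an additional database connection that migration source
// queries can read from. All values here are placeholders.
$databases['legacy']['default'] = array(
  'driver' => 'mysql',
  'database' => 'typo3_legacy',
  'username' => 'db_user',
  'password' => 'db_password',
  'host' => 'localhost',
);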

Migrating Articles

This content on the new site is connected to MDEdge.com via a REST API. One complication is that the content on GAME was added manually to Typo3, and wasn’t tagged for use with specific fields. The content type on the new Drupal site had a few fields for the data we were displaying, and a field that stores the article ID from MDedge.com. To get that ID for this migration, we mapped the title of each news article in Typo3 to the title of the article on MDEdge.com. It wasn’t a perfect solution, but it allowed us to do an initial migration of the data.

Conferences Migration

For GAME’s conferences, since there were not too many on the site, we decided to import the main conference data via a Google spreadsheet. The Google doc was a fairly simple spreadsheet that contained a column we used to identify each row in the migration, plus a column for each field that is in that conference’s content type. This worked out well because most of the content in the redesign was new for this content type. This approach allowed the client to start adding content before the content types or migrations were fully built.

Our spreadsheet handled the top level conference data, but it did not handle the pages attached to each conference. Page content was either stored in the Typo3 data or we needed to extract the HTML from the ASP sites.

Typo3 Categories to Drupal Taxonomies

To make sure we mapped the content in the migrations properly, we created another Google doc mapping file that connected the Typo3 categories to Drupal taxonomies. We set it up to support multiple taxonomy terms that could be mapped to one Typo3 category.
[NB: Here is some code that we used to help with the conversion: https://pastebin.com/aeUV81UX.]

Our mapping system worked out fantastically well. The only problem we encountered was that, since we were allowing three taxonomy terms to be mapped to one Typo3 category, the client noticed some cases where too many taxonomy terms were assigned to content that had more than one Typo3 category. But this was a content-related issue and only required them to revisit the mapping document and tweak it as necessary.

Slaying the Beast:
Extracting, Importing, and Redirecting

One of the larger problems we tackled was how to get the HTML from the Typo3 system and the ASP conference sites into the new Drupal 7 setup.

The ASP conference sites were handled by grabbing the HTML for each of those pages and extracting the page title, body, and photos. The migration of the conference sites was challenging because we were dealing with different HTML for different sites and trying to get all those differences matched up in Drupal.

Grabbing the data from the Typo3 sites presented another challenge because we had to figure out where the different data was stored in the database. This was a uniquely interesting process because we had to determine which tables were connected to which other tables in order to figure out the content relationships in the database.


A few things we learned in this process:

  • We found all of the content on the current site was in these tables (which are connected to each other): pages, tt_content, tt_news, tt_news_cat_mm and link_cache.
  • After talking with the client, we were able to grab content based on certain Typo3 categories or the pages hierarchy relationship. This helped fill in some of the gaps where a direct relationship could not be made by looking at the database.
  • It was clear that getting 100% of the legacy content wasn’t going to be realistic, mainly because of the loose content relationships in Typo3. After talking to the client we agreed to not migrate content older than a certain date.
  • It was also clear that—given how much HTML was in the content—some manual cleanup was going to be required.

Once we were able to get to the main HTML for the content, we had to figure out how to extract the specific pieces we needed from that HTML.

Once we had access to the data we needed, it was a matter of getting it into Drupal. The Migrate module made a lot of this fairly easy with how much functionality it provides out of the box. We ended up using the prepareRow() method a lot to grab specific pieces of content and assign them to Drupal fields.
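As a rough sketch of that approach (the class name, the raw_html source property and the extracted field are illustrative assumptions, not the actual project code), a prepareRow() implementation can parse the legacy markup and expose the pieces we need as new source properties:

<?php

/**
 * Illustrative Drupal 7 Migrate class.
 */
class ExampleConferencePageMigration extends Migration {

  public function prepareRow($row) {
    // Let the parent run first; returning FALSE skips the row.
    if (parent::prepareRow($row) === FALSE) {
      return FALSE;
    }

    // $row->raw_html is assumed to hold the legacy page markup.
    $dom = new DOMDocument();
    @$dom->loadHTML($row->raw_html);

    // Pull the first <h1> out of the markup and expose it as a new
    // property that can be mapped to a Drupal field.
    $headings = $dom->getElementsByTagName('h1');
    if ($headings->length) {
      $row->extracted_title = trim($headings->item(0)->textContent);
    }

    return TRUE;
  }

}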

Handling Redirects

We wanted to handle as many of the redirects as we could automatically, so the client wouldn’t have to add thousands of redirects and to ensure existing links would continue to work after the new site launched. To do this we mapped the unique row in the Typo3 database to the unique ID we were storing in the custom migration.

As long as you are handling the unique IDs properly in your use of the Migration API, this is a great way to handle mapping what was migrated to the data in Drupal. You use the unique identifier stored for each migration row and grab the corresponding node ID to get the correct URL that should be loaded. Below are some sample queries we used to get access to the migrated nodes in the system. We used UNION queries because the content that was imported from the legacy system could be in any of these tables.

SELECT destid1 FROM migrate_map_cmeactivitynode WHERE sourceid1 IN(:sourceid)
UNION SELECT destid1 FROM migrate_map_cmeactivitycontentnode WHERE sourceid1 IN(:sourceid)
UNION SELECT destid1 FROM migrate_map_conferencepagetypo3node WHERE sourceid1 IN(:sourceid)
…
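A small helper along these lines (the function name is hypothetical, and the list of migrate map tables is just the subset shown above) can then turn a legacy identifier into the migrated node ID, which gives us the destination for the redirect:

/**
 * Looks up the migrated node ID for a legacy source identifier.
 */
function example_lookup_migrated_nid($source_id) {
  // The map tables listed here mirror the UNION query above.
  $query = "SELECT destid1 FROM {migrate_map_cmeactivitynode} WHERE sourceid1 IN (:sourceid)
    UNION SELECT destid1 FROM {migrate_map_cmeactivitycontentnode} WHERE sourceid1 IN (:sourceid)
    UNION SELECT destid1 FROM {migrate_map_conferencepagetypo3node} WHERE sourceid1 IN (:sourceid)";
  return db_query($query, array(':sourceid' => array($source_id)))->fetchField();
}

The returned node ID can then be used to build the path the legacy URL should redirect to.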

Wrap Up

Migrating complex websites is rarely simple. One thing we learned on this project is that it is best to jump deep into migrations early in the project lifecycle, so the big roadblocks can be identified as early as possible. It also is best to give the client as much time as possible to work through any content cleanup issues that may be required.

We used a lot of Google spreadsheets to get needed information from the client. This made things much simpler on all fronts and allowed the client to start gathering needed content much sooner in the development process.

In a perfect world, all content would be easily migrated over without any problems, but this usually doesn’t happen. It can be difficult to know when you have taken a migration “far enough” and are better off jumping onto other things. This is why communicating with the full team early on is vital to keeping migration issues from taking over a project.

Chris Roane

When not breaking down and solving complex problems as quickly as possible, Chris volunteers for a local theater called Arthouse Cinema & Pub.

Jul 11 2017
Jul 11

The ability to create a form quickly and easily is a vital piece of functionality in any content management system. A content editor needs the capacity to create a form and add or remove fields.

The days of asking a developer to create a custom form are long gone. An editor should be able to spin up a form for whatever they need.

Luckily Drupal 8 has two good options for building forms: Contact and Webform.

Contact

The Contact module in Drupal 7 and below has always been the go-to module for basic forms as long as you’re happy with the hard-coded fields. If you need an extra field, you would have to write custom code to add it.

Now in Drupal 8, you’re no longer stuck with a single form. Instead, you can create different fieldable contact form types. You can create different contact forms and attach fields to them, the same way as you do on content types.

The Contact module won’t keep any form submissions in Drupal. It’ll send them to a designated email address. To store submissions, use the Contact Storage module.

Webform

Webform is the original form builder for Drupal. If content editors needed the ability to create forms then this is the module they would use. The module is suited for basic contact forms as well as long multi-page forms.

The 8.x-5.x version of Webform started out as the YAML Form module, which was later renamed and adopted as the Drupal 8 version of Webform.

Webinar on Contact and Webform

I recently recorded a webinar on Contact and Webform, where I cover both modules and show you how to create a form in each one.

Watch the webinar above or directly on YouTube.

Or jump to a specific section using the links below.

Contact Module

  • What’s new in Contact module (02:03)
  • Manage contact form types (04:34)
  • Create Contact form type (05:05)
  • Default fields on Contact form (06:37)
  • Add field to contact form (07:44)
  • View submission (09:01)
  • Configure “Manage display” (09:57)

Webform Module

  • What’s new in Webform module (12:25)
  • Install Webform (14:11)
  • Overview of Webform admin page (16:31)
  • Create form using Webform (18:33)
  • Adding elements to forms (18:56)
  • Add pages to forms (21:17)
  • Conditionally display fields (22:20)
  • Form settings (28:58)
  • Add email notification to form (31:15)

Webform Integration with Google Sheet

  • Introduction to Zapier (36:30)
  • Create zap in Zapier (38:40)
  • Configure Webform to send post to webhook (39:36)
  • Test integration with Google Sheets (43:27)

Questions

  • Question: how to prevent spam (46:08)
  • Question: Views integration with Webform (55:25)

Mentioned Resources

Jul 09 2017
Jul 09

Together with the Buenos Aires, Argentina user group, we are organizing a meeting on Saturday the 22nd and Monday the 24th of July.

It's been a long time since our last camp was held in BsAs (2009), and thanks to the inspiration of Drupal Camp Chile we've been planning another one for a while.

We finally set a date thanks to a visit from Enzo. We are calling it a "PicNic" and not a camp due to its smaller scale; it will be two days:

  • On Saturday the 22nd we'll have a sprint with mentoring
  • On Monday the 24th there will be technical talks

You can find more details on our event page.

Jul 07 2017
Jul 07

Drupal 8 at Comic Relief

Over the last year a key objective for the Technology team at Comic Relief has been to build products not websites. Tech Lead, Peter Vanhee, explained in a previous blog post how we’re using Drupal 8 to create a reusable platform product for building campaign websites. Since then the team have been working to deliver another website using the platform codebase and also preparing to open-source the codebase.

We have now opened up this codebase – you can find it here.

Being open makes us better

We strongly believe that working in the open and contributing to the open-source community makes our products better and makes us stronger as a team. It helps us prioritise work more easily as we can be clear about what is and isn’t important to the product, it allows us to say “no” to one-hit-wonder feature requests, and it makes us stricter about dealing with technical debt and ensuring we have appropriate detail in supporting documentation. It is also a fantastic and motivating feeling for the whole team to know that their work is open for others to see and contribute to, and we hope it will allow us to engage further with the thriving open source community and get some external help to expand our codebase further.

Many charities and not for profit organisations are not big enough to have the size of development team we are lucky to have. We feel strongly that we want to help smaller organisations when we can and one of our main motivations for making our platform product open was that other organisations can use it to build their websites. This has already proved successful with Comic Relief’s sister organisation, Red Nose Day USA, using our code to deliver their website in May, shaving months off the time it took to deliver.

Journey to open source

We had to overcome a few obstacles before we were able to open our codebase. These were mostly due to a lack of understanding about what open-source meant and what benefits it could bring to Comic Relief.

Some people asked why we would give away our hard work for free – our view is that we are already benefiting from using Drupal 8, which is open source software itself, so feel that we have a duty to contribute back. We also feel that we have additional, possibly unique, knowledge to add to the community based on our experience of delivering high profile campaign websites each year. We know that Drupal is a commonly chosen technology for other charities and we believe that reducing duplication of work in the sector is worthwhile and an example of how Comic Relief is able to support other charities in a way beyond purely financial.

There was understandable nervousness around security. There is of course a lot of diligence required before making code public, including selecting the right license (we chose GNU GPL v2), managing secrets and ensuring that everyone is happy for feature conversations to be visible to all. These considerations are not to be underestimated and take time to resolve. It is also something that needs constant review and should be built into ongoing working practises and team culture. For us, the discipline and professionalism required for working in the open is a significant benefit.

Also, there were a few questions about why we needed to open-source at all. This was particularly pertinent when it came to prioritising development work, as the work required to get us ready to open-source sometimes meant a delay to new features being delivered. We combatted this by being clear about the benefits to Comic Relief – there are many, as mentioned above, but additionally we believe the quality of our work will increase with more people to spot bugs and help fix them, we will see an increase in efficiency and a reduction in duplication, and we will hopefully receive contributions from others that will improve our codebase even further.

Our advice if you’re wanting to open-source your code, or code in the open, is to engage the rest of your organisation early and listen to any reservations people may have. Confronting internal reservations about going open source required patience and perseverance on our part in order to educate our stakeholders about the benefits and reassure them about the perceived risks. Be clear about what the benefits are to your organisation and highlight how the way you work will continue to support your open-source software appropriately. We also found it helpful to show examples of other organisations who have worked in the open, such as the BBC, the Guardian and the Government Digital Service.

What is next

Our intention is to continue to open-source our code where we think it could be useful to others and to code in the open wherever possible. We have kick started a mindset shift so that at the start of each new project we try to be open, rather than closed, from the beginning. Live examples of this include developing the pattern lab (which we hope will be useful to partners) and the grants API (demonstrating that prototypes can be developed in the open).

We’re hoping to see discussions and feature requests coming in from other charities looking to adopt our technology. We’ll continue to add new components to the codebase and will maintain our workflow queues to organise new work and future iterations.

Finally, we’d love to see other charity websites being powered by our codebase, so please take a look at our open GitHub repositories here. We’d love to hear what you think and your experiences of moving towards open source, or working in the open.

Our journey to open source would not have been possible without the hard work of Peter Vanhee, Caroline Rennie, Andy Phipps, Heleen Mol, Adam Clark, Gustavo Liedke, Zach Bimson, Carlos Jimenez, Pradip Pack, Girish Nair and Zenon Hannick.

Jul 06 2017
Jul 06
7 July 2017

Drupal has a thriving community in Bristol and the south-west, and they’ve put on some excellent events over the last few years. Last weekend they had their third DrupalCamp Bristol, and I was fortunate to be able to attend and speak.

The day opened with a keynote by Emma Karayiannis on self care and supporting others within open source communities.

Emma shared some of her contributions to Drupal, where she is part of the Community Working Group and track chair for the Being Human sessions at DrupalCon.

Look after yourself. Don’t feel that you can only contribute a little bit or that your opinion doesn’t matter. Just find something rewarding and start small. Getting involved can be daunting, so get to know the part of the community that deals with your interest, ask how to get involved and ask for someone to help you.

Burnout is real, and happens much more when we work alone, taking on lots of responsibility without anyone to partner with. So look for someone to co-work or co-lead with rather than try to be a lone superhero. It gives you the freedom to step away if necessary. Ask yourself: if I had to stop this tomorrow, what would happen?

It’s easy to become overwhelmed without realising it. You need to regularly check you’re looking after yourself, are still motivated, and aren’t taking on too much. Be accountable to your family, colleagues and friends, and step back if necessary.

Look after others. It’s healthy for an open source community to have people who think differently to you. Be respectful of other people and aware that miscommunication is very easy online, particularly with people whose native language is different to yours. But also accept that you’ll never be able to make everyone happy.

Make sure the people are really ok even if they appear fine. Experienced contributors, remember that you were once a beginner, and provide opportunities and safe spaces to include new people. Appreciate people for who they are and not just the work they do.

After a short break we split into two tracks.

Deji Akala provided an interesting look into the technical details of what happens during each page request. Along the way he summarised various parts of Drupal and concepts such as the autoloader, Symfony handlers, the service container and event handling.

It’s an interesting exercise to unpick the index.php file line by line and discover what happens behind the scenes in a single line of code:

$response = $kernel->handle($request);
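For context, a slightly simplified sketch of what Drupal 8’s index.php contains (comments trimmed) looks like this:

<?php

use Drupal\Core\DrupalKernel;
use Symfony\Component\HttpFoundation\Request;

$autoloader = require_once 'autoload.php';

// Create the kernel, turn the PHP globals into a Request object,
// let the kernel turn that into a Response, and send it back.
$kernel = new DrupalKernel('prod', $autoloader);
$request = Request::createFromGlobals();
$response = $kernel->handle($request);
$response->send();

$kernel->terminate($request, $response);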

I then gave a short talk about Composer and Drupal. I’ve spoken to a number of people recently and it’s become clear that there’s still a bit of confusion surrounding how to use Composer with Drupal. I certainly found it unclear and started to look into it.

I pitched this at beginner developers, and the main things I wanted people to go away understanding were:

  • what the require, update and install commands really do
  • the difference between Drupal itself and the various template projects available

That was the first time I’d given that talk. It felt a bit raw but led to some interesting Q&A time, and it’s given me valuable insight for enhancing this in future.

To others contemplating public speaking - do it! Events like this are an ideal place to start, everyone’s friendly and on your side. You’ll gain knowledge, experience and friends from doing it. I was really glad to see that several of the speakers here were first timers - well done!

Ross Gratton shared some insights into using front end task runners like Gulp with Drupal.

Ross has been working on a large Drupal site utilising several themes, in 24 languages and with over 125 custom modules. He discussed the pros and cons of different architectural decisions, such as where to put source code and assets, what to put in version control and how to manage conflicts on such a large site.

He then shared some of the process of separating assets out to the brand level as opposed to a project level, treating a style guide or pattern library as a separate deliverable.

After lunch, George Boobyer spoke on web security, a topic often overlooked in the planning and budgeting of projects.

Security is perceived as complex but isn’t that hard, and any effort you make is rewarded. Recently we’ve seen a lot of ransomware attacks, but often these just have the same impact as a disk failure, so alongside keeping software up to date, have backups and test them.

George gave some examples of websites that had been attacked and were now hosting spam content, very often not visible to the naked eye but only to search engines. Often user data is obtained by way of database dumps that have been left accessible to the world - don’t put these in the document root.

Ana Hassel shared some insights as a freelancer. As a site builder, Ana has been able to use Drupal to focus on her clients’ needs and come up with a repeatable process for estimating and selling her work.

Ana also shared how she had invested some time learning the command line and setting up scripts for everyday tasks. This had given her a better, more repeatable workflow and more predictable deployment and hosting.

An interesting perspective on personal development came from Johan Gant. I felt it complemented Emma’s keynote well with some recurring themes, and gave the day a nice mix of technical and human elements.

Johan covered issues such as Imposter syndrome, depression and burnout. Burnout often comes from a lack of engagement, and seems to be a particular risk for knowledge workers. If the values you have aren’t aligned with those of your employer or project, you can burn out very quickly.

Be selective about what you learn—patterns and techniques will last for a long time, whereas frameworks come and go. You need to make time to explore new things, but make sure you are following your interests rather than trends. Avoid stagnation—ask yourself if what you’re doing is satisfying. It’s healthy to seek new challenges, but means getting out of your comfort zone.

Lee Stone finished by sharing how his organisation does extensive code reviews.

As well as preventing bugs, code reviews aid in training. New developers can learn the business by reading code, and junior developers can grow by asking why something is the way it is, or by asking about things they don’t understand. They often bring fresh ideas this way.

It’s important to review the code, not the person writing it. So don’t make these things too personal, and don’t take them personally! Prefer terms like “we” rather than “I” and “you” to foster a sense of team, and provide solutions rather than just stating something’s wrong.

After the talks we headed to ZeroDegrees in Bristol for a social time. It was great to catch up over dinner with people I hadn’t seen for a while, and make some new friends.

Thanks to everyone who helped make DrupalCamp Bristol such a great event. See you next year!

Jul 05 2017
Jul 05
July 5th, 2017

We’re happy to announce the new Global Academy for continuing Medical Education (GAME) site! GAME, by Frontline, provides doctors and medical professionals with the latest news and activities to sharpen their skills and keep abreast of the latest medical technologies and techniques.

As a follow-up to our launch of Frontline Medical Communications’ MDEdge portal last October, the new GAME site takes all of the strengths of MDEdge—strong continuing education materials, interactive video reviews, content focused on keeping medical professionals well-trained—and wraps them in a fresh new package. The new GAME site is optimized for performance so that visitors can learn from their phones on the go, in the field on their tablets, or at their desktops in the office between meetings. Behind the scenes, site administrators have an interface that streamlines their workflow and allows them to focus on creating content.
[NB: Read our MDEdge launch announcement, here.]

The Project

Four Kitchens worked with the Frontline and GAME teams to…

  • migrate a bevy of static and dynamic content from their existing Typo3 CMS site and ten external ASP-based conference sites.
  • create a method to streamline canonical content sharing between the GAME site and the MDEdge portal through web standard APIs, and a mirror API for automated content creation from the portal to the GAME site.
  • create a single domain home for conferences originally resting on multiple source domains, redirecting as needed while keeping the source domains public for advertising use without requiring extra domain hosting.
  • provide functional test coverage across the platform for high-value functionality using Behat and CircleCI.
  • revise the design and UX of the site to help engage users directly with the content they were seeking.

Engineering and Development Specifics

Check out the new Global Academy for continuing Medical Education (GAME) site today!

  • built on Drupal 7
  • hosted on Pantheon Elite
  • code standards enforced with ESLint and PHP_CodeSniffer
  • site migration via custom migration module plugins and Google Docs mapping
  • custom MDEdge and other 3rd party integrations
  • style guide produced and reviewed using Emulsify

The Team

The Four Kitchens team of Web Chefs included James Todd as technical lead, Chris Roane as lead engineer, Randy Oest as the designer and frontend engineer, and Scott Riley as the project manager. Additional engineering work was completed by Diego Tejera, Justin Riddiough, and Web Chef Patrick Coffey.

James Todd

James tinkers with hardware, software, and everything in between.

Jul 04 2017
Jul 04

Blocks, as the name suggests, are pieces of content that can be placed anywhere on your Drupal site. They can contain simple text, forms or something with complex logic.

In this tutorial, you’ll learn how to create a block using custom code and how to use Drupal Console to generate it. If you’ve used blocks in Drupal 7 then you will be familiar with the new interface in Drupal 8. If you’re a site builder, the whole process of creating, editing and deleting a block is very intuitive.

Before jumping into code, let’s talk about the Drupal Block UI and understand what has changed since Drupal 7.

Block Layout Page

This page will now list only the blocks assigned to a certain region. You’ll notice that a single block can now be assigned to multiple regions and that’s because blocks are now entities.

If you want to assign a block to a region, just click the “Place Block” button and you’ll be presented with an overlay showing all the available blocks.

This page can be accessed by the URL /admin/structure/block

If you want to learn how to create custom block types check out our tutorial: Build a Blog in Drupal 8: Managing Blocks

Create a Block using the Drupal Interface

Like the content page, custom blocks are now listed on a dedicated page, which uses a view to list all the entities of type Block. Use the toolbar menu to access that page: Structure -> Block Layout -> Custom Block Library.

In order to create a new block, click “Add custom block” and select a block type. As mentioned before, Blocks are now fieldable entities (like nodes), that’s why you can see different block types.

This was a big step when compared to Drupal 7 because we could not add new fields to blocks nor use the same block in multiple regions, which was a pain.

Create a Block using Code

Just like the previous version, Drupal 8 also allows us to create blocks using code. With the OOP concepts introduced in version 8, it’s even simpler and more intuitive to use the APIs Drupal provides. So let’s start.

1. Create a module

Create a folder called “my_block_example” under “/modules/custom” (you will need to create the ‘custom’ folder).

Inside this folder, create the “.info.yml” file

my_block_example.info.yml

name: My Block Example
type: module
description: Defines a custom block.
core: 8.x
package: WebWash
dependencies:
  - block

Once the folder and file have been created, go ahead and enable the module. Please note that Drupal 8 does not require a “.module” file.

2. Create a Block Class

In this step, we’ll create a class that will contain the logic of our block. Following the PSR-4 standards, we’ll place our PHP class under /modules/custom/my_block_example/src/Plugin/Block. Create this folder structure and then create a new file called MyBlock.php under it.

The path to the block class should end up being my_block_example/src/Plugin/Block/MyBlock.php.

This file will contain:

  • Annotation metadata that will allow us to identify the block. If you want to read more about annotations, read “Annotations-based plugins” on drupal.org
  • The MyBlock class, containing 4 methods:
build(), blockAccess(), blockForm(), and blockSubmit()

So, let’s create it

<?php

namespace Drupal\my_block_example\Plugin\Block;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Block\BlockBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Session\AccountInterface;

/**
 * Provides a block with a simple text.
 *
 * @Block(
 *   id = "my_block_example_block",
 *   admin_label = @Translation("My block"),
 * )
 */
class MyBlock extends BlockBase {
  /**
   * {@inheritdoc}
   */
  public function build() {
    return [
      '#markup' => $this->t('This is a simple block!'),
    ];
  }

  /**
   * {@inheritdoc}
   */
  protected function blockAccess(AccountInterface $account) {
    return AccessResult::allowedIfHasPermission($account, 'access content');
  }

  /**
   * {@inheritdoc}
   */
  public function blockForm($form, FormStateInterface $form_state) {
    $config = $this->getConfiguration();

    return $form;
  }

  /**
   * {@inheritdoc}
   */
  public function blockSubmit($form, FormStateInterface $form_state) {
    $this->configuration['my_block_settings'] = $form_state->getValue('my_block_settings');
  }
}

If you prefer, grab a copy of the code from GitHub.

That’s it! Your block is created and ready to be used! Just assign the block to a region and you should see it.

Now let’s go through the methods in more detail.

build()

This method returns a render array. In our case, we’re returning basic markup, but we could have returned something more complex, like a form or a view.

The code below demonstrates how to render a form as an example:

/**
 * {@inheritdoc}
 */
public function build() {
  return \Drupal::formBuilder()->getForm('Drupal\my_module\Form\MyBlockForm');
}
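Similarly, build() could hand back an embedded view. Here’s a minimal sketch, assuming a view named ‘content_recent’ with a ‘block_1’ display (both placeholders):

/**
 * {@inheritdoc}
 */
public function build() {
  // views_embed_view() returns a render array for the given view and display.
  return views_embed_view('content_recent', 'block_1');
}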

blockAccess()

This method allows you to define custom access logic. In our example, any user with the ‘access content’ permission will see the block.

Notice that we’re calling a method from the AccessResult class here. The static method allowedIfHasPermission will check if the current user (or anonymous) has access to view content (in this case).

blockForm()

This method allows you to define a block configuration form. Let’s suppose we want to render custom text inside this block instead of the static one. All we need to do is to create a form using the Form API and then define whatever fields we need.
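As a minimal sketch, a blockForm() that defines a single text field (matching the my_block_settings value saved by blockSubmit() in our class) could look like this:

/**
 * {@inheritdoc}
 */
public function blockForm($form, FormStateInterface $form_state) {
  $config = $this->getConfiguration();

  // A text field whose value blockSubmit() stores in the block configuration.
  $form['my_block_settings'] = [
    '#type' => 'textfield',
    '#title' => $this->t('Block text'),
    '#default_value' => isset($config['my_block_settings']) ? $config['my_block_settings'] : '',
  ];

  return $form;
}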

blockSubmit()

This is where we save the configuration defined on the previous method.

Create a Block using Drupal Console

If you’re familiar with Drupal Console you already know its power!

Luckily it’s possible to use this tool to create the boilerplate code for custom blocks. Let’s go through the steps, and you should be done in less than one minute.

If you already have Drupal Console installed, follow the steps below, otherwise just follow the instructions on the site to install it.

Generating the block boilerplate code using Drupal Console

Open the terminal and navigate to your Drupal site root folder and run this command:

drupal generate:plugin:block

Select the module you want to create the block under and answer the following questions and you are all set!

Summary

The concept of a block hasn’t really changed, but how they’re implemented has. Code for each block is neatly organized in its own class, whereas in Drupal 7 all block code was thrown into the “.module” file and quickly got messy.

This code can be found and downloaded from: https://github.com/rafaelferreir4/my_block_example/

Resources

Jun 29 2017
Jun 29
June 29th, 2017

Recently I was working on a Drupal 8 project where we were using the improved Features module to create configuration container modules for some special purposes. Due to client architectural needs, we had to move the /features folder into a separate repository. We basically needed to make it available to many sites in a way that let us keep doing active development on it, and we did so by making the new repo a Composer dependency of all our projects.

One of the downsides of this new direction was the effect on CircleCI builds for individual projects, since installing and reverting features was an important part of them. For example, to make a new feature module available, we’d push it to this ‘shared’ repo, but to actually enable it we’d need to push the corresponding change in the core.extension.yml config file to our project repo. Yes, we were using a mixed approach: both features and conventional configuration management.

So a new pull request would be created in both repositories. The problem for Circle builds—given the approach previously outlined—is that builds generated for the pull request in the project repository would require the master branch of the ‘shared’ one. So, for the pull request in the project repo, we’d try to build a site by importing configuration that says a particular feature module should be enabled, and that module wouldn’t exist (likely not present in shared master at that time, still a pull request), so it would totally crash.

There is probably no straightforward way to solve this problem, but we came up with a solution that is half code, half strategy. Beyond technical details, there is no practical way to determine which branch of the shared repo should be required for a pull request in the project repo, unless we assume conventions. In our case, we assumed that the correct branch to pair with a project branch was one named the same way. So if a build was the result of a pull request from branch X, we could try to find a PR from branch X in the shared repo and if it existed, that’d be our guy. Otherwise we’d keep pulling master.

So we created a script to do that:

<?php

$branch = $argv[1];
$github_token = $argv[2];
$github_user = $argv[3];
$project_user = $argv[4];

$shared_repos = array(
  'organization/shared',
);

foreach ($shared_repos as $repo) {
  print_r("Checking repo $repo for a pull request in a '$branch' branch...\n");
  $pr = getPRObjectFromBranch($branch, $github_token, $github_user, $project_user, $repo);
  if (!empty($pr)) {
    print_r("Found. Requiring...\n");
    exec("composer require $repo:dev-$branch");
    print_r("$repo:dev-$branch pulled.\n");
  }
  else {
    print_r("Nothing found.\n");
  }
}

function getPRObjectFromBranch($branch_name, $github_token, $github_user, $project_user, $repo) {
  $ch = curl_init();
  curl_setopt($ch, CURLOPT_URL, "https://api.github.com/repos/$repo/pulls?head=$project_user:$branch_name");
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
  curl_setopt($ch, CURLOPT_USERPWD, "$github_user:$github_token");
  curl_setopt($ch, CURLOPT_USERAGENT, "$github_user");
  $output = json_decode(curl_exec($ch), TRUE);
  curl_close($ch);
  return $output;
}

As you probably know, Circle builds are connected to the internet, so you can make remote requests. What we’re doing here is using the GitHub API in the middle of a build in the project repo to connect to our shared repo with cURL and try to find a pull request whose branch name matches the one we’re building over. If the request returns something, then we can safely say there is a branch named the same way as the current one with an open pull request in the shared repo, and we can require it.

What’s left for this to work is actually calling the script:

- php scripts/require_feature_branch.php "$CIRCLE_BRANCH" "$GITHUB_TOKEN" "$CIRCLE_USERNAME" "$CIRCLE_PROJECT_USERNAME"

We can do this at any point in circle.yml, since composer require will actually update the composer.json file, so any other Composer interaction after executing the script should take your requirement into consideration. Notice that the shared repo will be required twice if you keep the requirement in your composer.json file. You could safely remove it from there if you instruct the script to require the master branch when no matching branch has been found, although this could have unintended effects in other types of environments, like local development.

Note: A quick reference about the parameters passed to the script:

$GITHUB_TOKEN: # Generate from https://github.com/settings/tokens
$CIRCLE_*: # CircleCI variables, automatically available

[Editor’s Note: The post “Running CircleCI Builds Based on Many Repositories” was originally published on Joel Travieso’s Medium blog.]

Joel Travieso

Joel focuses on the backend and architecture of web projects seeking to constantly improve by considering the latest developments of the art.

Jun 27 2017
Jun 27

Content with many fields can be overwhelming when it comes to adding and editing data. Also, creating layouts to display the content is often a complex task. Field Group can solve both of these issues.

Using this module, fields can be grouped in a variety of ways including tabs, accordions and HTML elements. Field Group not only works for editing content, it can also be used to group and structure fields so that great layouts can be created with little effort.

In the first part of this tutorial, we’ll show how to group fields to make editing content easier. The second part will demonstrate how to display groups of fields to create a simple but effective layout.

Field Group Types

There are several different options when creating a field group:

  • Accordion – expandable groups of fields with only one group expanded at a time.
  • Details – expandable groups of fields that can be independently expanded or collapsed.
  • Fieldset – simple grouping of fields.
  • HTML element – grouping of fields by element such as div or section.
  • Tabs – grouping of fields in vertical or horizontal tabs.

Each option allows you to add CSS classes making it easy to style the output. Groups can be nested inside other groups and for some options this is required. For example, “Accordion items” need to be nested within an Accordion group and individual Tab groups would normally be created within a surrounding Tabs group.

We initially used version 8.x-1.0-rc6 to write this article but noticed that some of the effects, such as collapsing/expanding accordions didn’t work as expected. Updating to the latest available dev build (8.x-1.0-rc6+12-dev at the time of writing) resolved these issues. As always, be cautious when using pre-release or dev builds of modules on production servers.

Getting Started

Field Group doesn’t have any dependencies apart from the Field module in Core, so you can get started straight away by downloading the Field Group module.

If you prefer command line tools then you can download and install Field Group using Drush or the Drupal Console.

Using Drush:

$ drush dl field_group
$ drush en field_group

Using Drupal Console:

$ drupal module:download field_group --latest
$ drupal module:install field_group

Configuring Field Group on Content Form

In this section, we’ll show how to group fields to make editing them easier on the content form. This is particularly useful when there are many fields, as grouping them into appropriate sections makes it easier to find the fields you need to change.

As we’ve already seen, fields can be grouped in a variety of ways. There are pros and cons for each option but we will use Details in this section as it offers a simple interface that works well on all screen sizes.

In this tutorial, we’ll create a simple system for storing contact information. We’ll start off with a set of individual fields and then we’ll add Field Group to create sections based on address, phone numbers and a description. Feel free to pick your own field names but we’ll use the following:

  • Title (change the field label to Name)
  • House number
  • Street
  • City
  • ZIP code
  • Home phone
  • Mobile phone
  • Description

1. Create a content type called “My Contacts” and add the fields listed above. For help on setting up new content types and fields have a look at our “Content Types and Fields” tutorial. We’ve also changed the Title’s field label to Name on the Edit tab.  You should have a screen that looks similar to this.

Screenshot of Manage Fields, listing the fields used in this tutorial

2. With the content type and fields set up, click on the “Manage form display” tab. Field Group adds a new button near the top called “Add group”. Click on that button.

Screenshot showing Field Group's "Add group" button

3. On the next screen, select Details from the drop-down list. The label will be used as the heading of the section, so use an appropriate name such as Address and then click on “Save and continue”.

Screenshot showing a new Details group being created

4. By default, when using the Details group, it’s closed so only the heading is visible. If you want the fields for that group to be shown by default then tick the “Display element open by default” checkbox. It’s normally best to leave the “Mark group as required if it contains required fields” checkbox ticked, especially if the group is closed by default, as this adds the red “required” asterisk to the group heading if any of the fields it contains are required. If you want to add styling to the group then you can add an ID and classes. Click on “Create group” to complete the process.

Screenshot showing options for a Field Group

5. Repeat the process for the other groups, which in our example are Phone and Description. On the “Manage form display” tab you should now see the list of fields and the groups that you have created.

Screenshot listing all the fields and new groups

6. Now it’s just a simple matter of dragging the fields so they appear under the groups. To become part of the group the fields should be below the group name and indented as shown below.

Screenshot showing the fields listed under their Field Group

7. Click on Save to complete the process.

8. Add a new node based on this content type. When editing the content, you should see expandable sections that contain fields.

Screenshot showing fields grouped when creating a new node

Configuring Field Group on Content View Modes

In the section above we dealt with grouping when editing content. In this section, we’ll look at how creating a few groups can greatly help with content layout. With very little effort it’s simple to create great layouts that are easy to style. And you can do this without the use of complex modules like Display Suite or Panels.

We’ll create a few “HTML element” groups and then apply simple CSS to style the output. The instructions below assume that you have set up the fields above.

1. Edit your content type, click on the “Manage display” tab and then click on the “Add group” button.

Screenshot showing the "Add group" button on the "Manage display" tab

2. Select “HTML element” from the “Add a new group” drop-down box, give the group a label such as Address and click on “Save and continue”.

Screenshot showing "HTML element" selected

3. On the next screen there are several options. Under Element, enter the most appropriate element for your content. We’re just going to use a simple div. You can decide to show or hide the label that you entered on the previous screen. We’ll stick with the default of No. You can also add attributes and configure effects if required. As we want to apply some basic CSS to our layout, we’ll add a class of simple-box under “Extra CSS classes”. Once you have entered all the settings, click on “Create group”.

Screenshot showing options for "HTML elements"

4. Create any remaining groups using the same settings. As before, in addition to Address, we’ve created Phone and Description groups.

5. On the “Manage display” tab drag the fields under the appropriate groups, making sure each field is indented as shown below. At this point we’ll also change our labels to be inline rather than above and hide the Description label.

Screenshot showing fields listed under their "HTML element" Field Group

6. Click on Save to complete the process.

7. Go back to the content you created earlier and you should see something like this.

Screenshot of node before CSS applied

8. The screenshot above doesn’t look any different from normal but if you inspect the markup you’ll see that the fields are grouped into three divs, with the class of simple-box that we added above. The screenshot shows the output from the Bartik theme.

Screenshot of Firebug output showing the Field Group divs

9. Then it’s just a matter of adding CSS as appropriate. An example that works for the Bartik theme is:

.node__content {
  display: flex;
}

.simple-box {
  flex-grow: 1;
  flex-basis: 0;
  margin: 10px;
  padding: 10px;
  border: 1px solid #ccc;
}

And the output is transformed into this:

Screenshot of node after CSS applied

This example gives a simple demonstration of how using groups can make laying out information on a page much easier. Field Group also allows you to nest groups within groups, so you can create more complex layouts easily, making this an even more powerful module.

Summary

In this tutorial, we’ve seen how the Field Group module can help organize fields into sections to make editing simpler. We’ve also demonstrated how adding groups can structure markup to make creating layouts much simpler.

FAQs

Q: Why can’t I get the effects to work?
There seems to be an issue with 8.x-1.0-rc6. Try upgrading to the latest dev build and try again.

Q: Can I nest groups within groups?
Yes!

Jun 18 2017
Jun 18
Recently I set out to make a simple tool for running simpletest tests without having a LAMP stack installed on your local environment. I needed this for two reasons:
  1. for running tests locally
  2. for running tests on CI server
I decided to use Docker and create a monolithic container with Drupal and all the LAMP stuff inside, and here is what I've got: docker-tester.


How to use


Before running the container you have to set up the following environment variables:
  1. KEEP_RUNNING - specify yes if you want to keep the container running after the tests have been executed. Use for debugging purposes only. Default value is no.
  2. DRUPAL_VERSION - the specific version of Drupal. Drupal 7 and Drupal 8 are supported. Example: 8.3.2.
  3. MODULES_DOWNLOAD - a list of modules to download (by Drush), separated by commas. Example: module_name-module_version,[...].
  4. MODULES_ENABLE - a list of modules to enable (by Drush), separated by commas. Example: module_name,[...].
  5. SIMPLETEST_GROUPS - a list of simpletest groups to run, separated by commas. Example: Group 1,[...].
  6. SIMPLETEST_CONCURRENCY - the number of test runners used to run tests in parallel. Default value is 1.
Then you need to build an image for the container:
docker build -t drupal-tester .
Next you have two options: either run the container with the docker-compose tool or run it manually with the docker command.
For local usage I prefer docker-compose because it's easier than writing out the whole docker command manually. Just specify which module you want to test in the docker-compose.yml file and run:
docker-compose up && docker-compose down
It will run the container, install Drupal inside of it and run the tests. That's all.

For running tests on a CI server I use the docker command and specify all the needed environment variables manually:

docker run -v $(pwd)/test_results:/var/www/html/test_results -v $(pwd)/custom_scripts:/var/www/html/custom_scripts -e KEEP_RUNNING=no -e DRUPAL_VERSION=8.3.2 -e MODULES_DOWNLOAD=module-version -e MODULES_ENABLE=module -e SIMPLETEST_GROUPS=module_test_group -e SIMPLETEST_CONCURRENCY=1 drupal-tester
It allows you to override the environment variables and volumes that you want to mount inside of the container. So you can set up different jobs on your CI server to test different modules on different Drupal versions with the help of this one container.

When Docker has finished, all test results will be placed into the test_results directory by default, but you can easily override this by mounting some other directory inside the container.

Setup and customization


Sometimes you need to do something before running tests. For example, override some module-specific settings or set up some Drupal variables, etc. You can get this done with custom *.sh scripts. Just write an sh file with all the needed actions/commands and put it inside the custom_scripts folder. All the files inside of this directory will be executed before running tests.
Jun 14 2017
Jun 14

Pathauto is a module which lets you automate the generation of URL aliases in Drupal. Instead of the URL being “/node/123”, you can have “/blog/article/why-use-drupal”.

The module allows you to define custom patterns which are generated when an entity is created.

URL aliases or URL slugs, help with search engine optimization and they’re more user-friendly.

Drupal core has supported URL aliases for a long time, but they weren’t automatically generated. Pathauto helps with automating the process.

In this tutorial, you’ll learn how to create aliases and patterns, and how to bulk generate paths.

Getting Started

Before we begin, go download and install the following modules:

  1. Pathauto
  2. Token
  3. Ctools

Using Drush:

$ drush dl pathauto token ctools
$ drush en pathauto

Or, using Composer:

$ composer require drupal/pathauto

Manually Create URL Aliases

Pathauto is not required to create aliases. Drupal core uses a module called Path to create them, and Pathauto depends on this module. Pathauto simply helps you automate the creation process.

URL aliases can be created in two ways: from the content edit form and the “URL Aliases” page.

To create an alias from the form, click on the “URL path settings” field-set on the right of the form. Then enter the path into “URL alias”.

Another way is to go to Configuration, “URL Aliases” and click on “Add alias”.
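As a developer-oriented aside, the same thing can be done in code. Here is a minimal sketch, assuming the path.alias_storage service available in Drupal 8 at the time of writing and using example paths:

<?php

// Save an alias programmatically; '/node/123' and the alias are examples.
\Drupal::service('path.alias_storage')
  ->save('/node/123', '/blog/article/why-use-drupal');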

Create Pathauto Patterns

Let’s first look at how to setup Pathauto patterns. A pattern lets you define what the structure of the URL alias should be. For example, we’ll add “article/[node:title]” for the Article content type.

The module will convert “article/[node:title]” to “article/node-title”. [node:title] will be replaced by the article title.

1. Go to Configuration, “URL aliases” and click on the Patterns tab.

2. Click on “Add Pathauto pattern”.

3. Select Content from “Pattern type” and enter “article/[node:title]” into “Path pattern”.

If you want to see all available tokens, click on “Browse available tokens”.

4. Check Article from “Content type”.

This means that this pattern will only be applied to the Article content type.

5. And finally, add Article into Label.

Then click on Save.

Generating an Alias

If you go to the “URL path settings” on a content type, you’ll notice that it looks different once a pattern has been enabled. Now you get a new checkbox “Generate automatic URL alias”.

If this stays checked, then an alias will be generated. If you want to override the generated one, then uncheck it and add your custom alias into the “URL alias” field.

Pathauto Settings

The module settings can be configured by clicking on the Settings tab from the “URL aliases” page.

You can configure a lot on this page, but the few important ones are:

Enable entity types

This lets you turn on Pathauto support for custom entities.

Update action

This lets you define what the module should do when an entity is updated.

Strings to Remove

This lets you define which words will be stripped from the alias.

Punctuation

This allows you to control how special characters are handled.

Now just a friendly warning. Do NOT play around with these settings on a live site. The last thing you want to do is break the URLs on a site that’s already in production. Back up the database before you make any changes.

Bulk Generate Aliases

If you already have a ton of content and want to generate aliases or you want to regenerate them, you can do this by clicking on the “Bulk generate” tab.

First, select which entity type you want to bulk generate. Then select which aliases you want to be generated.

Before running any bulk generation, make sure you back up your database.
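
If you use Drush, a quick way to take that backup is something like the following (the file name is just an example):

$ drush sql-dump --result-file=../backup-before-bulk-generate.sql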

Delete Aliases

You can batch delete aliases from the “Delete aliases” tab. You can choose which entity types you want to be deleted, or delete all aliases.

But take note of the “Delete options”: make sure you check “Only delete automatically generated aliases”.

This won’t delete aliases which are manually created.

Menu Structure as Path

The challenge in creating a good pattern is trying to figure out which token to use.

Just click on “Browse available tokens” and look at all the available options. It can be overwhelming to figure out which token does what.

One common pattern which I’ve used a few times is to have a path use the parent menu path.

Take for example the following structure:

- Drupal (/drupal)
-- Site Building (/drupal/site-building)
--- Using Views (/drupal/site-building/using-views)

Just imagine the above example is part of the main navigation. “Drupal” is the first level, “Site Building” is the second and “Using Views” is the third.

Notice how the path for “Using Views” has its parent path, “/drupal/site-building/using-views”. To achieve this type of path just use “[node:menu-link:parent:url:path]” to get the parent.

The full pattern with the title will be: “[node:menu-link:parent:url:path]/[node:title]”.

If you know of any useful tokens, let us know by leaving a comment.

Summary

Pathauto is an essential module which I’ve installed on every Drupal site I’ve worked on. The importance of URL aliases isn’t obvious at first. But if you spend a bit of time coming up with a good set of patterns, it’ll help your site rank well in search engines.

Jun 14 2017
Jun 14

As one of the first early adopters of Drupal Commerce 2.x, starting our first project in early 2016 (on the alpha2 version) and a second one soon after, I originally planned to write a blog post shortly after beta1 was published. Unfortunately, I was too busy at the time. Beta1 was released by the end of September 2016, and I kept postponing the post. I wanted to share my experience, give valuable tips, state my opinion on the project's maturity and overall quality, etc.

At the time of writing, beta7 is already out and we're approaching the first release candidate. Over the past weeks and months, I have watched the project gather more and more pace. A significant number of people are building Commerce 2.x sites, and many great developers are participating, both reporting bugs and feature requests and fixing the bugs, developing the features, writing documentation, and so on.

There are also a number of blog posts, tutorials, and conference talks out there, so I won't add another one here. If you want a quick overview of some of the great improvements and changes in the new Commerce 2.x, I can recommend, for example, this post by Sascha Grossenbacher.

Instead I've decided to write about the personal benefits I've gained from my work on our first Commerce 2.x projects, besides ending up with a great and immensely flexible e-commerce framework - one where developers can enjoy working and actually concentrate on implementing their business logic, building great solutions for their clients without being limited. This post may get a bit long (don't say I haven't warned you :p).

Decision Making Process

First, we should take a short excursion into the decision-making process of our projects and turn back time a little. At the end of 2015 we signed a deal to re-launch a B2B store that had previously been running on an outdated Magento version and had lots of individual needs, especially when it comes to pricing, shipping calculation, etc. These are the kinds of features that many e-commerce systems either can't realise at all, or only in a very time-consuming and inconvenient way - the features where, while developing, you have to work against the system you build on. Staying on Magento was no real option, as some of the feature requests would have been a nightmare to implement in version 1.9. The customer wasn't particularly satisfied with their old 1.x site anyway, and its EOL was already visible on the horizon. Around that time Magento 2 was, quite surprisingly, released after a long period in which development progress had stalled. The first impressions, however, were that it was at best alpha-quality software released as stable under great time pressure. I can't remember any other software where I encountered as many bugs on a test installation in such a short time.

However, we preferred Drupal Commerce over Magento anyway, as Commerce 1.x already had the great flexibility you need for highly customized projects, so it would have been our logical first choice. But by that time we had already fully switched website development to Drupal 8, starting with a late beta in autumn and switching completely after the rc1 release in October 2015. Although that was a huge change, developing on Drupal 8 was love at first sight. The new architecture is great, imposing no limits on fulfilling any wish and offering the necessary base for writing quality code. As I stated in an earlier blog post, to a developer with Java origins Drupal 8 feels far more like "real" programming, quite close to writing Java or .NET applications.

Code review

I had been reading the Commerce Guys blog posts about progress and future plans for the Commerce 2.x project with great interest for the preceding couple of months, and was already quite excited. There were many great ideas about architecture and sustainable decisions, such as first building Drupal-independent PHP libraries for addressing, currency, tax and formatting related tasks.

The time had come to have a closer look at the source code - and I was really excited about what I saw. The code base wasn't feature complete at all - and for sure, the odd smaller bug could be hiding somewhere - but what already existed was well considered, well written, cleanly formatted, and thoroughly covered by unit tests, taking advantage of nearly everything the Drupal 8 and Symfony APIs offer. I'd go so far as to say that reading Commerce 2.x source code is mostly like reading a textbook (a good one, of course). You hardly find that in other open source projects. Off the top of my head, I could only think of the legendary Spring Framework (ok, to be fair, also of great PHP frameworks like Symfony, Laravel, ...).

In Commerce Guys We Trust

Besides the code review, the "Commerce Guys factor" was the most important one. Imho it's essential to also assess a module by its maintainership, especially if the module plays a central role in your project, and even more so if it's not finished and stable at the time you start. Here are some reasons why I fully trusted CG:

Their CEO Ryan Szrama can look back on many years of experience developing Drupal-based e-commerce frameworks and solutions. He started the Ubercart project in Drupal 5 and also led the Drupal 6 version. He then proved that he won't always take the easy path and that he has a sure instinct for decisions that are challenging yet wise and sustainable. He decided to combine the lessons learned from Ubercart with the new possibilities Drupal 7 offered and started to write a new e-commerce suite from the ground up - Drupal Commerce was born. This was a shift from the encapsulated, out-of-the-box solution Ubercart was to a fully flexible platform utilizing core improvements (e.g. the entity API) as well as the power of contrib modules such as Rules and Views. The success justified this move.

And now again, in the Drupal 8 version, brave decisions were taken instead of choosing the easy path. Switching to D8 involves a fair amount of work anyway, especially for bigger and more complex modules, and still many projects just try to port their functionality 1:1 in order to get a working D8 version. But the Commerce developers, with Bojan Živanović leading the way as the new project mastermind and Matt Glaman as his congenial co-maintainer, decided to use this unique chance to rewrite the whole project and make full use of the opportunities Drupal 8 offers. And again, the new architecture was built on the results of a self-evaluation of Commerce 1.x pros and cons, as well as intensive research on competitors, existing e-commerce platforms, research papers, etc.

I particularly tip my hat to the incredible consistency the team showed the whole time. We all know how time pressure can lead to trade-offs in quality, to abandoned feature plans, and so on. Although the development of Commerce 2.x took far longer than expected and planned, they never left the right track. They never rushed and committed immature work. They never wrote a feature without backing it with tests. They always went the extra mile: if lower-level functionality that really belonged in core or in a dedicated contrib module wasn't present or working as expected, they pushed core APIs or contrib modules forward rather than dropping a quick custom solution into Commerce. While Magento tricked the world by releasing and selling immature, at best alpha-state software as stable under time pressure, the Commerce Guys kept working honestly to deliver a great piece of quality software in the end. Thank you so much for that!

The Win-Win-Win Situation

Let's return to the actual topic of this post. What I love most about working with open-source software is that, in the best case, every stakeholder benefits: you, your client, and lots of other people in the open source community, when you give and contribute back.

Our clients now have decent e-commerce solutions tailored to their needs, with a long lifespan. We did not only benefit from being paid for our work; I can also proudly say that my Drupal skills improved during my work on Commerce projects. For one, it boosted the speed of my Drupal 8 adoption. Although we had already implemented a couple of Drupal 8 websites before, all with individual requirements that resulted in writing custom modules, custom entity types, etc., working with Commerce showed me an exemplary way of how things like service collectors and plugin managers work in Symfony/Drupal 8. I also learned how to implement certain design patterns in Symfony/Drupal 8, and it showed me some hidden, at that time undocumented, features - like specific properties of entity type annotations that aren't present in the typical examples. Or it simply reminded me to cleanly document every function.

But it wasn't only the coding that helped me improve. More importantly, it got me much more involved in the community than before. I had always tried to contribute as much as possible by proposing patches, commenting on issues, and founding and maintaining a few smaller modules. But, for example, I had never used IRC before. I must admit that I had missed out on this rather nerdy communication tool: I never bothered learning how to use it (tool support on Windows is rather bad, imho), and I didn't see much benefit in using it at all - asynchronous communication on issue reports was all I needed. However, there were many things to discuss and coordinate, with Bojan especially. So I dove into IRC, where I have since regularly hung out to both ask for help and help others - and, more recently, into Slack as well. And there's one aspect I hadn't considered before. It may sound a bit sentimental, but having these direct conversations gives you a lot more of that team feeling, even though - or maybe especially because - people are spread all over the world and you know none of them personally. And you can see that there are also great people behind the great developers you meet. That said, I should give some shout-outs now. To Bojan, who is for sure one of the best PHP developers I have encountered so far. To Matt and Ryan, to Jingsheng Wang, who was also one of the very first brave early adopters out there, to Josh Miller, with whom I've worked on the Commerce Wishlist module, to Guy Schneerson, maintainer of Commerce Stock and techno enthusiast *nz nz nz nz* *shaking*, and many others who work hard for our community.

And finally, some words about the third win. I feel it is my duty to contribute and give something back to the community, because that's how OSS works, and that's especially why Drupal stands out among other OSS projects. So I'm also quite proud to be the author of no fewer than 22 commits to Drupal Commerce and to be credited in 37 Drupal Commerce issues over the past 13 months. Of course, some of those are smaller issues, but there were also more important ones, like implementing the price resolvers. I've also helped rewrite the Commerce Wishlist module - which was rather simple because I could base it on the work of Bojan and Matt in commerce_order - and started three Commerce contrib modules (Commerce order number, Commerce Quantity Increments, Commerce Open Payment Platform), as well as contributing several patches to other contrib modules and to core.

I'm looking forward to the forthcoming release candidate. If you haven't tried Commerce 2.x so far, you should definitely consider it for your next project. Despite being labelled "beta" (or soon release candidate), it's already very stable, only missing some UX polish in certain parts. Trust me, it's better to rely on conservative self-evaluation that takes semantic versioning seriously than on marketing-focused solutions that label any immature crap as stable.

original post: https://www.agoradesign.at/blog/how-drupal-commerce-2x-improved-my-skills

Jun 13 2017
Jun 13

As the technological landscape changes, there has emerged a growing discussion amongst developers regarding the best way to go about generating content to be used across multiple applications. There are two trains of thought -- a traditional, coupled Content Management System (CMS) with supporting Application Program Interface (API), tied to both a backend and frontend, OR a headless API-only CMS tied only to a backend. Naturally, as a Drupal web development agency, we look to understand this discussion so that we know where Drupal fits in.

Why does it matter?

We’ve long since moved past the days of website-only content. While websites drive the majority of the digital content we consume, we now have emerging technologies challenging this traditional interface.

As an example, take a smart thermostat in your home. There’s no way you’re going to be viewing the weather network’s website on a small screen that may not have much of a graphical display. So, how does the network provide content to this thermostat? Simple: the thermostat’s software accesses the information via a CMS’s API, taking only what it needs and showing it on the display.

This same API might also be providing information to a weather app on your phone, and your smartwatch, and your car, and … the list goes on.

This is the power of a centralized content system with an API that has the ability to distribute that content to any device that can access the API. If it wasn’t for this, someone would need to generate that content separately for each device, a monumental task.

Coupled vs. Headless CMSes

The first question to understand is: what’s the difference between these two approaches?

Coupled CMS (with supporting API)

A traditional CMS, such as Drupal, allows content editors to add or edit content and have immediate feedback as to how the content will display. This is because a traditional CMS is tied to a front end.

The front end is what a user sees when viewing an application, which, in Drupal’s primary case, is a website. Marketers and content editors can view the content before it’s publicly available using tools such as inline editing or pre-published previews. Drupal shines in this regard, and its graphical interface and available modules allow for quick and relatively easy modification of how the data is displayed on the frontend. This means a developer isn’t always needed to make simple changes, which can save both time and cost - possibly a huge benefit of using a coupled CMS.

Of course, a coupled CMS must have an API that other applications can interface with. This is more of a secondary feature of a traditional CMS, but that is changing with the times. Drupal 8 has a strong emphasis on providing many API services out of the box (https://www.drupal.org/docs/8/api), and there is a strong push to make sure ongoing development takes an API-first approach.
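
As a small illustration: once the core RESTful Web Services and Serialization modules are enabled and the content resource is configured, any device can fetch a piece of content as JSON with a plain HTTP request (the domain and node ID below are hypothetical):

$ curl 'https://example.com/node/123?_format=json'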

Headless CMS (the API-only approach)

A headless CMS is considered an API-only approach. It is a content hub only and therefore has no front end. These backend-only systems allow marketers, content editors and software to publish content which then gets distributed automatically to any integrated application. Applications must be developed in order to access this content, since there is no coupled front end interface to immediately view the data.

A benefit to this approach is that people, or software, generating content don’t need to know anything about development or UI in order to provide the content for their applications. All they need to provide is the data. In this way, teams can be separated and roles can be clearly defined.

The downside, of course, is that without a fully integrated front end, marketers and content editors are limited in what they can do with the content. With no immediate feedback as to how content will appear before it gets pushed out to the public, trialling and proofing content can be difficult. Layout can also be a limitation for marketing teams. A developer would need to step in if any application's presentation needs to change.

Conclusion - Where does Drupal fit?

Being a big part of the Drupal community (https://www.drupal.org/acro-media-inc), we’re seeing the discussion first hand about where Drupal fits in. Luckily, with Drupal 8, the community has already taken a solid step towards making sure Drupal continues to be a relevant contender as either a coupled OR headless CMS.

We already love and use Drupal as a traditional CMS for building websites. There will always be a use case out there where a headless CMS is best, but, by making sure that there is a strong underlying API that applications can interact with, we get the best of both worlds with a single package. And, for our customers and anyone else hiring a Drupal agency, you also continue to benefit from the massive, dedicated, open source community that is always ready to help. With Drupal, you’re covered.

Jun 12 2017
ao2
Jun 12

Most of the information I have come across about migrating from Drupal 6 to Drupal 8 is about migrating content. However, before tackling that problem another one must be solved. Maybe it is obvious and hence understated, so let's spell it out loud: preserving the site functionality.

That means checking if the contrib modules need to be ported to Drupal 8, and also checking if the solution used in the previous version of the site can be replaced with a completely different approach in Drupal 8.

Let's take ao2.it as a study case.

When I set up ao2.it back in 2009 I was new to Drupal; I chose it mainly to have a peek at the state of Open Source web platforms.

Bottom line, I ended up using many quick and dirty hacks just to get the blog up and running: local core patches, theme hacks to solve functional problems, and so on.

Moving to Drupal 8 is an opportunity to do things properly and finally pay some technical debt.

For a moment I had even thought about moving away from Drupal completely and using a solution more suited to my usual technical taste (I have a background in C libraries and Linux kernel programming), like keeping the content in git and generating static web pages, but once again I didn't want to miss out on what web frameworks are up to these days. So here I am again, getting my hands dirty with this little over-engineered personal Drupal blog, hoping that this time I can at least make it a reproducible little over-engineered personal Drupal blog.

In this series of blog posts I'll try to explain the choices I made when I set up the Drupal 6 blog and how I am re-evaluating them for the migration to Drupal 8.

The front page view

ao2.it was also an experiment with a multi-language blog, but I never intended to translate every piece of content, so it was always a place where some articles would be in English, some in Italian, and the general pages would actually be multi-language.

This posed a problem about what to show on the front page:

  • If every node was shown, there would be duplicates for translated nodes, which can be confusing.
  • If only nodes in the current interface language were shown, the front page would list completely different content across languages, which does not represent the timeline of the blog content.

So a criterion for a front page of a partially multi-lingual site could be something like the following:

  • If a node has a translation in the current interface language, show that;
  • if not, show the original translation.

The “Select translation” module

In Drupal 6 I used the Select translation module, which worked fine, but it was not available for Drupal 8.

So I asked the maintainers to give me permission to commit changes to the git repository and started working on the port myself.

The major problem I had to deal with was that Drupal 6 approached multi-language by default using the mechanism called "Content translations", where separate nodes represent different translations (i.e. different rows in the node table, each with its own nid), tied together by a tid field (translation id): different nodes with the same tid are translations of the same content.

Drupal 8 instead works with "Entity translations", so one single node represents all of its translations and is listed only once in the node table, and the actual translations are handled at the entity field level in the node_field_data table.

So the SQL query in Select translation needed to be adjusted to work on node_field_data rather than the node table, as can be seen in commit 12f70c9bb37c.
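
To give a rough idea of what that adjustment means, the selection now happens per nid across the language-specific rows of node_field_data. The following is only a simplified sketch of the idea, not the module's actual query:

-- Simplified illustration: prefer the row in the current language,
-- otherwise fall back to the row marked as the default translation.
SELECT nfd.nid
FROM node_field_data nfd
WHERE nfd.langcode = :current_langcode
   OR (nfd.default_langcode = 1
       AND nfd.nid NOT IN (
         SELECT nid FROM node_field_data WHERE langcode = :current_langcode))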

While at it I also took the chance to refactor and clean up the code, adding a drush command to test the functionality from the command line.

The code looks better structured thanks to the Plugin infrastructure and now I trust it a little more.

Preserve language

On ao2.it I also played with the conceptual difference between the “Interface language” and the “Content language” but Drupal 6 did not have a clean mechanism to differentiate between the two.

So I used the Preserve language module to be able to only switch the interface language when the language prefix in the URL changed.

It turns out that an external module is not needed anymore for that because in Drupal 8 there can be separate language switchers, one for the interface language and one for the content language.

However, there are still some issues with the interaction between them, as reported in Issue #2864055: LanguageNegotiationContentEntity: don't break interface language switcher links - feel free to take a look and comment on possible solutions.

More details about the content language selection in a future blog post.

Jun 04 2017
Jun 04
5 June 2017

This is the last part of a series on improving the way date ranges are presented in Drupal, by creating a field formatter that can omit the day, month or year where appropriate, displaying the date ranges in a nicer, more compact form:

  • 24–25 January 2017
  • 29 January–3 February 2017
  • 9:00am–4:30pm, 1 April 2017

The first post looked at porting some existing code from Drupal 7 to Drupal 8, adding an automated test along the way. In the second post, we made the format configurable.

There’s currently no administrative interface though, so site builders can’t add and edit formats from Drupal’s UI. We’ll add that in this last post.

Routing

According to the routing overview on drupal.org, a route is a path which is defined for Drupal to return some sort of content on.

For our administrative interface, we want to define a number of routes:

  • /admin/config/regional/date_range_format - show a list of the formats, with links to:
  • /admin/config/regional/date_range_format/add
  • /admin/config/regional/date_range_format/*/edit
  • /admin/config/regional/date_range_format/*/delete

There are two ways in which our module can provide routes. We could include a routing.yml file along with our module. This file contains the same kind of information as would have been in hook_menu in Drupal 7. But it’s a static file—if we want something that’s dynamic we can provide it at runtime using a route provider.
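
For illustration, a static daterange_compact.routing.yml entry for the collection page might look roughly like this (we won't actually use it, since the route provider described next generates the routes for us; the permission shown is just an assumption):

entity.date_range_format.collection:
  path: '/admin/config/regional/date_range_format'
  defaults:
    _entity_list: 'date_range_format'
    _title: 'Date and time range formats'
  requirements:
    _permission: 'administer site configuration'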

For dealing with entities, it’s often much easier to use Drupal’s bundled AdminHtmlRouteProvider class. This examines various properties on the entity annotation—we’ll look at those next—and provides suitable routes for us automatically.

To use this route provider, we add the following to the entity annotation:

@ConfigEntityType(
  …
  handlers = {
    "route_provider" = {
      "html" = "Drupal\Core\Entity\Routing\AdminHtmlRouteProvider",
    },
  },
  …
)

At this point we need to run the drupal router:rebuild command from Drupal console. We must do this whenever we change a routing.yml file or any of the properties in the entity that affect routes.

The collection view

An entity can define a collection view—typically a page showing a list of entities with links to edit them. Drupal provides a list builder which can be used to show a list of entities with buttons for common add/edit/delete type tasks. We’ll create one of these for our new configuration entity:

<?php

namespace Drupal\daterange_compact;

use Drupal\Core\Config\Entity\ConfigEntityListBuilder;
use Drupal\Core\Entity\EntityInterface;

class DateRangeFormatListBuilder extends ConfigEntityListBuilder {

  public function buildHeader() {
    /* return an array of column headings */
  }

  public function buildRow(EntityInterface $entity) {
    /* return an array of column values for the given entity */
  }

}

We then associate this list builder with our entity by declaring it within the @ConfigEntityType annotation:

handlers = {
  "list_builder" = "Drupal\daterange_compact\DateRangeFormatListBuilder",
}

The actual list builder is quite rich, showing examples of different ranges. You can see the full implementation here.

The collection page

Once we have the list builder in place, we can add the collection link to our @ConfigEntityType annotation. The route provider will pick up on this link template and provide a route for the entity collection page automatically.

links = {
  "collection" = "/admin/config/regional/date_range_format"
}

By defining the link, our page appears at the appropriate URL. Note that the add/edit/delete links won’t show just yet—we still have to define those.

Screenshot of the date and time range configuration page, listing the available formats (provided by the entity list builder).

Updating the main configuration page

In order to reach this new page, we’ll create a menu link on the main configuration page, within the regional and language section. We do that by supplying a daterange_compact.links.menu.yml file:

entity.date_range_format.collection:
  title: 'Date and time range formats'
  route_name: entity.date_range_format.collection
  description: 'Configure how date and time ranges are displayed.'
  parent: system.admin_config_regional
  weight: 0

That link gives us the starting point for our interface:

Screenshot of the system configuration page, showing the link to date and time range formats in the regional and language section.

We can now view all the date and time range formats from the main administrative interface in Drupal. Next we’ll build some forms to maintain them, after which the add/edit/delete links should start to appear on our collection page.

Forms

We need a form to be able to edit date range formats. The same form is used to create new ones. Drupal provides a lot of built-in functionality via the EntityForm class which we can extend. Drupal will then take care of loading and saving the entity. We just need to provide the form elements to map values on to our entity’s properties.

Adding & editing

We can add any number of forms, but we only need one to edit an existing format, and we can reuse the same form for adding a new format. This form is defined as a class, and lives in src/Form/DateRangeFormatForm.php:

<?php

namespace Drupal\daterange_compact\Form;

use Drupal\Core\Entity\EntityForm;

class DateRangeFormatForm extends EntityForm {
  /* implementation */
}

Configuration entities don’t use the field API, so we need to build the form ourselves. Although the form looks quite complicated and has a lot of options, it’s reasonably easy to build—each property in the configuration entity can be populated by a single element, like this:

$form['label'] = [
  '#type' => 'textfield',
  '#title' => $this->t('Label'),
  '#maxlength' => 255,
  '#default_value' => $this->entity->label(),
  '#description' => $this->t("Name of the date time range format."),
  '#required' => TRUE,
];

The full implementation of the form is here.

We also need to tell Drupal about this form, which we can do by adding the following to the @ConfigEntityType annotation:

"form" = {
  "add" = "Drupal\daterange_compact\Form\DateRangeFormatForm",
  "edit" = "Drupal\daterange_compact\Form\DateRangeFormatForm",
}

We also add some links, to match up operations such as add and edit with the new form. These are also defined in the @ConfigEntityType annotation:

links = {
  "add-form" = "/admin/config/regional/date_range_format/add",
  "edit-form" = "/admin/config/regional/date_range_format/{date_range_format}/edit",
}

If we look at the collection view again we see that alongside each format there is a link to edit it. That is because of the edit-form link declared in the annotation.

We also want a link at the top of that page, to add a new format. We can do that by providing an action link that refers to the add-form link. This belongs in the daterange_compact.links.action.yml file:

entity.date_range_format.add_form:
  route_name: 'entity.date_range_format.add_form'
  title: 'Add format'
  appears_on:
    - entity.date_range_format.collection

At this point we have a means of adding and editing formats. Our form looks like this:

Screenshot of the screen for editing a date/time range format.

Deletion

Deleting entities is slightly different. We want to show a confirmation page before performing the actual deletion. The EntityDeleteForm class does just that. All we need to do is subclass it and provide the wording for the question:

<?php

namespace Drupal\daterange_compact\Form;

use Drupal\Core\Entity\EntityDeleteForm;

class DateRangeFormatDeleteForm extends EntityDeleteForm {

  public function getQuestion() {
    return $this->t('Are you sure?');
  }

}

We declare this form and link on the @ConfigEntityType annotation in the same way as for add/edit:

"form" = {
  "delete" = "Drupal\foo\Form\DateRangeFormatDeleteForm"
}
links = {
  "delete-form" = "/admin/config/regional/date_range_format/{date_range_format}/delete",
}

Conclusion

That’s it. We’ve got a field formatter to render date and time ranges in a very flexible way. Users can define their own formats through the web interface, and these are represented as configuration entities, giving us all the benefits of the configuration management initiative, such as predictable deployments and multilingual support.

The module is available at https://www.drupal.org/project/daterange_compact.

I hope you found this write-up useful.

Want to help?

I’m currently working on getting this module up to scratch in order to have coverage from the Drupal security team. If you want to help make that happen, please review the code following this process and leave a comment on this issue. Thanks :-)

May 31 2017
May 31
May 31st, 2017

In the last post, we created a nested accordion component within Pattern Lab. In this post, we will walk through the basics of integrating this component into Drupal.

Requirements

Even though Emulsify is a ready-made Drupal 8 theme, there are some requirements and background to be aware of when using it.

Emulsify is currently meant to be used as a starterkit. In contrast to a base theme, a starterkit is simply enabled as-is, and tweaked to meet your needs. This is purposeful—your components should match your design requirements, so you should edit/delete example components as needed.

There is currently a dependency for Drupal theming, which is the Components module. This module allows one to define custom namespaces outside of the expected theme /templates directory. Emulsify comes with predefined namespaces for the atomic design directories in Pattern Lab (atoms, molecules, organisms, etc.). Even if you’re not 100% clear currently on what this module does, just know all you have to do is enable the Emulsify theme and the Components module and you’re off to the races.

Components in Drupal

In our last post we built an accordion component. Let’s now integrate this component into our Drupal site. It’s important to understand what individual components you will be working with. For our purposes, we have two: an accordion item (<dt>, <dd>) and an accordion list (<dl>). It’s important to note that these will also correspond to 2 separate Drupal files. Although this can be built in Drupal a variety of ways, in the example below, each accordion item will be a node and the accordion list will be a view.

Accordion Item

You will first want to create an Accordion content type (machine name: accordion), and we will use the title as the <dt> and the body as the <dd>. Once you’ve done this (and added some Accordion content items), let’s add our node template Twig file for the accordion item by duplicating templates/content/node.html.twig into templates/content/node--accordion.html.twig. In place of the default include function in that file, place the following:

{% include "@molecules/accordion-item/accordion-item.twig"
   with {
      "accordion_term": label,
      "accordion_def": content.body,
   }
%}

As you can see, this is a direct copy of the include statement in our accordion component file except the variables have been replaced. Makes sense, right? We want Drupal to replace those static variables with its dynamic ones, in this case label (the node title) and content.body. If you visit your accordion node in the browser (note: you will need to rebuild cache when adding new template files), you will now see your styled accordion item!

But something’s missing, right? When you click on the title, the body field should collapse, which comes from our JavaScript functionality. While JavaScript in the Pattern Lab component will automatically work because Emulsify compiles it to a single file loaded for all components, we want to use Drupal’s built-in aggregation mechanisms for adding JavaScript responsibly. To do so, we need to add a library to the theme. This means adding the following code into emulsify.libraries.yml:

accordion:
  js:
    components/_patterns/02-molecules/accordion-item/accordion-item.js: {}

Once you’ve done that and rebuilt the cache, you can now use the following snippet in any Drupal Twig file to load that library [NB: read more about attach_library]:

{{ attach_library('emulsify/accordion') }}

So, once you’ve added that function to your node--accordion.html.twig file, you should have a working accordion item. Not only does this function load your accordion JavaScript, but it does so in a way that only loads it when that Twig file is used, and also takes advantage of Drupal’s JavaScript aggregation system. Win-win!

Accordion List

So, now that our individual accordion item works as it should, let’s build our accordion list. For this, I’ve created a view called Accordion (machine name: accordion) that shows “Content of Type: Accordion” and a page display that shows an unformatted list of full posts.

Now that the view has been created, let’s copy views-view-unformatted.html.twig from our parent Stable theme (/core/themes/stable/templates/views) and rename it views-view-unformatted--accordion.html.twig. Inside of that file, we will write our include statement for the accordion <dl> component. But before we do that, we need to make a key change to that component file. If you go back to the contents of that file, you’ll notice that it has a for loop built to pass in Pattern Lab data and nest the accordion items themselves:

<dl class="accordion-item">
  {% for listItem in listItems.four %}
    {% include "@molecules/accordion-item/accordion-item.twig"
      with {
        "accordion_item": listItem.headline.short,
        "accordion_def": listItem.excerpt.long
      }
    %}
  {% endfor %}
</dl>

In Drupal, we don’t want to iterate over this static list; all we need to do is provide a single variable for the  Views rows to be passed into. Let’s tweak our code a bit to allow for that:

<dl class="accordion-item">
  {% if drupal == true %}
    {{ accordion_items }}
  {% else %}
    {% for listItem in listItems.four %}
      {% include "@molecules/accordion-item/accordion-item.twig"
        with {
          "accordion_term": listItem.headline.short,
          "accordion_def": listItem.excerpt.long
        }
      %}
    {% endfor %}
  {% endif %}
</dl>

You’ll notice that we’ve added an if statement to check whether “drupal” is true—this variable can actually be anything Pattern Lab doesn’t recognize (see the next code snippet). Finally, in views-view-unformatted--accordion.html.twig let’s put the following:

{% set drupal = true %}
{% include "@organisms/accordion/accordion.twig"
  with {
    "accordion_items": rows,
  }
%}

At the view level, all we need is this outer <dl> wrapper and to just pass in our Views rows (which will contain our already component-ized nodes). Rebuild the cache, visit your view page and voila! You now have a fully working accordion!

Conclusion

We have now not only created a more complex nested component that uses JavaScript… we have done it in Drupal! Your HTML, CSS and JavaScript are where they belong (in the components themselves), and you are merely passing Drupal’s dynamic data into those files.

There’s definitely a lot more to learn; below is a list of posts and webinars to continue your education and get involved in the future of component-driven development and our tool, Emulsify.

May 31 2017
May 31

Drupal Commerce Marketing

We are prepping hard this week for IRCE 2017, the big internet retailer show June 6-9th in Chicago, which means readying demos, writing marketing material and practicing pitches. If you will be attending IRCE 2017, come see us at booth 1948 and we'll show you what we've been up to.

While Drupal Commerce offers great features and flexibility, I find people don't know all of its capabilities because the people involved are great at coding, but maybe not so great at marketing. On that note, I thought I would share some of the marketing collateral we've done up for IRCE, all of which we will be providing back to the community so anyone can use it to help sell Drupal Commerce. Our big goal is to have lots of nice web content and even somewhat of a "press kit" that everyone can use to pitch Drupal Commerce to their clients, bosses, friends, family, casual acquaintances, etc.

We have a whole bunch of material written up, as well as a nice Commerce 2.x demo. We're still in the mad rush of finishing it up, so it isn't all available yet, but it will be available over the next few weeks (stay tuned).

Without further rambling, here is our first go at a feature list for Drupal Commerce 2.x. I have provided a PDF and source text at the bottom of this article, should you want to use this content in your own marketing. Also, if you have any feedback, feel free to email, tweet @shawnmmccabe, yell at me on the street or use our contact form.

Feature sheet images: acro_DC_feature_sheet-1.jpg, acro_DC_feature_sheet-2.jpg

May 30 2017
May 30

You’re not short on choice when it comes to debugging a Drupal website.

You can install Devel and use Kint to print variables to the screen. Or you could use Web Profiler to add a toolbar at the bottom of your site to see how things are performing.

If you’re looking for a proper debugger look no further than Xdebug. It integrates with a lot of IDEs and text editors and I’d recommend you try it out if you’ve never used it.

I recorded a webinar about Drupal 8 debugging which you can watch above.

Here is what I covered in the video:

  • Turn off caching (02:01)
  • Twig debugging (08:26)
  • Using Kint (19:25)
  • Print variables using Kint in Twig template (21:16)
  • Using WebProfiler (22:15)
  • WebProfiler IDE link (26:37)
  • Drupal console debugging commands (31:41)
  • Adding breakpoints to PhpStorm (39:14)
  • Adding breakpoints to Twig templates (43:48)
  • Drupal integration with PhpStorm (45:26)
  • PhpStorm plugins (47:52)

PHP Functions

PHP has two handy functions which can be used to print variables, objects and arrays to the screen.

print_r()

var_dump()

Drupal core comes with its own function: debug().
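
A quick sketch of how these might be used inside a hook or controller ($build, $node and $form are just placeholders here):

// Dump a render array or an ID to the screen - quick but crude.
print_r($build);
var_dump($node->id());

// Drupal core's helper: routes the output through the error handler,
// so it shows up as a warning message and in the logs.
debug($form, 'form array');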

Devel Module

Devel has been around for as long as I’ve been using Drupal. It comes with a bunch of helper functions for module developers and it has a few handy sub-modules.

The two sub-modules worth mentioning are Kint and Web Profiler.

Kint

This module integrates the Kint library into Drupal and allows you to print variables using the PHP functions: ksm() and kint().

You can also print variables in Twig templates using {{ kint() }}.

Click here to learn how to use Kint in Drupal 8.

Web Profiler

The Web Profiler sub-module adds a toolbar at the bottom of your site and displays useful stats about the number of queries, memory usage and more.

The toolbar gives valuable insight into what’s happening in your Drupal site.

If you want to learn more about Web Profiler, check out our tutorial on using Web Profiler in Drupal 8.

Drupal Console

Drupal Console is a CLI tool for Drupal. It’s implemented using the Symfony Console component. It can be used to provision new Drupal sites, generate boilerplate code and debug Drupal.

Drupal Console comes with a bunch of debug commands. Just search for any command with the term “debug”.

drupal list | grep "debug"

The two that I found most useful are router:debug and container:debug.
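
For example, you can inspect a single route or service by passing its name (the route and service names here are only illustrations):

drupal router:debug entity.node.canonical
drupal container:debug entity_type.manager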

Drupal Settings

Drupal 8 caches a lot more things than Drupal 7. Individual rendered elements, like a block for example, will be cached, whether you’re logged in or not.

Rendered Twig templates are also cached. This makes Drupal 8 fast, but it can complicate things when you’re writing code. You don’t want to rebuild the site cache every time you make a change in a template or a rendered array.

Most of this caching can be turned off in a settings file.

Drupal.org has a good page: “Disable Drupal 8 caching during development”.

Twig Template Discovery

To turn on Twig debugging, make sure you follow the link above. Then add the following into development.services.yml:

parameters:
  twig.config:
    debug: true
    auto_reload: true
    cache: false

The debug: true parameter turns on Twig’s debugging: Twig will display information such as which template it used and its path. It does this by adding HTML comments.

Use the HTML comments to figure out which Twig template you should override and its file name.
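
The comments look roughly like this (the suggestions and the path will differ depending on your site and theme):

<!-- THEME DEBUG -->
<!-- THEME HOOK: 'node' -->
<!-- FILE NAME SUGGESTIONS:
   * node--article--teaser.html.twig
   * node--article.html.twig
   x node.html.twig
-->
<!-- BEGIN OUTPUT from 'core/themes/classy/templates/content/node.html.twig' -->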

PhpStorm

PhpStorm is a commercial IDE which is popular with PHP and Drupal developers. It integrates nicely with Xdebug and the Drupal code base.

Xdebug

Using PhpStorm, you can add a breakpoint somewhere in PHP code and step through as the Drupal request is executed. You can also see what variables are available.

Learn how to configure Xdebug in PhpStorm.

Drupal Integration

PhpStorm also offers Drupal integration and when enabled it allows autocomplete functionality for hooks. No longer will you have to remember a specific hook and its arguments.

Make sure you turn on the integration by searching for “drupal” in the Preferences section.

Extra PhpStorm Plugins

Drupal 8 uses the YAML format for a lot of things throughout its code base; services, routing, permissions, etc…

And in these files you’ll see references to classes and methods.

Take for example this route:

node.add:
  path: '/node/add/{node_type}'
  defaults:
    _controller: '\Drupal\node\Controller\NodeController::add'
    _title_callback: '\Drupal\node\Controller\NodeController::addPageTitle'

There’s no easy way to navigate to the NodeController other than searching for the class name.

However, if you install these three PhpStorm plugins, you’ll be able to navigate to the class or method by pressing command or control then clicking on the reference.

Once you’ve downloaded these plugins, enable them and then enable “Symfony integration”.

Then you should be able to navigate to classes by clicking on the reference while pressing command or control.

Summary

As you can see, you have options when it comes to debugging. If you’re looking for something simple then use Kint. If you prefer a proper debugger then look at Xdebug.

What’s your preferred technique? Leave a comment below.

May 25 2017
May 25

The Weekly Updates are Back!

As you probably noticed, the Commerce weekly updates were on hiatus. I have officially pawned off all my difficult work to other staff, so I can get back to doing these! There's so much to talk about, with Commerce 2 on the home stretch and community contrib really picking up speed, so I'll be back to doing these every week.

Commerce POS

https://www.drupal.org/project/commerce_pos

We're nearly at Release Candidate 1, which should release by the end of the month. We have only 5 remaining blockers that need to be finished.

https://www.drupal.org/project/issues/search/commerce_pos?project_issue_...

After that, we plan to start work on the Drupal 8 version. I have Bojan, from Commerce Guys, tentatively lined up to give us some help with architecture, so hopefully it will mesh very nicely with Commerce 2.x.

Commerce Migrate

https://www.drupal.org/project/commerce_migrate

u/quietone did a bunch of work on this and, although she is finishing off a client project right now, she will be back on full time contrib shortly and remain so for the rest of the year. Since she works on migrate for core a lot, she's been making really good progress on this.  Bojan and I were shooting for a stable version around late July when Commerce 2.x is expected to release. Right now it has working, but rough, implementations for Ubercart 6, Ubercart 7 and Commerce 1.x. We hope to have versions for Shopify and Magento as well, and there are prototypes already finished.

Commerce Licensing/Recurring

https://www.drupal.org/project/commerce_license

u/Kazanir is working on this, as he wrote the licensing stuff for 1.x and is doing the port to 2.x.  He's newer to Drupal 8 and Commerce 2.x and it's kicking his butt a bit, but he's getting there.  We should be getting an alpha probably within a month.

Commerce 2.x

Some of our new hires are going to be working on this as they are on their last phase of training. They do contrib for a few weeks before jumping into teams to do client work.

Commerce XLS Import

https://www.drupal.org/project/commerce_xls_import

There are a number of patches submitted by other people in the community that need review. I have reviewed a few, but more are waiting. Hopefully I'll get those finished before we leave for IRCE.

May 24 2017
May 24
May 24th, 2017

In the last post, we introduced Emulsify and spoke a little about the history that went into its creation. In this post, we will walk through the basics of Emulsify to get you building lovely, organized components automatically added to Pattern Lab.

Prototyping

Emulsify is at its most basic level a prototyping tool. Assuming you’ve met the requirements and have installed Emulsify, running the tool is as simple as navigating to the directory and running `npm start`. This task takes care of building your Pattern Lab website, compiling Sass to minified CSS, linting and minifying JavaScript.

Also, this single command will start a watch task and open your Pattern Lab instance automatically in a browser. So now when you save a file, it will run the appropriate task and refresh the browser to show your latest changes. In other words, it is an end-to-end prototyping tool meant to allow a developer to start creating components quickly with a solid backbone of automation.

Component-Based Theming

Emulsify, like Pattern Lab, expects the developer to use a component-based building approach. This approach is elegantly simple: write your DRY components, including your Sass and JavaScript, in a single directory. Automation takes care of the Sass compilation to a single CSS file and JavaScript to a single JavaScript file for viewing functionality in Pattern Lab.

Because Emulsify leverages the Twig templating engine, you can build each component HTML(Twig) file and then use the Twig functions include, embed and extends to combine components into full-scale layouts. Sound confusing? No need to worry—there are multiple examples pre-built in Emulsify. Let’s take a look at one below.

Simple Accordion

Below is a simple but common user experience—the accordion. Let’s look at the markup for a single FAQ accordion item component:

<dt class="accordion-item__term">What is Emulsify?</dt>
<dd class="accordion-item__def">A Pattern Lab prototyping tool and Drupal 8 base theme.</dd>

If you look in the components/_patterns/02-molecules/accordion-item directory, you’ll find this Twig file as well as the CSS and JavaScript files that provide the default styling and open/close functionality respectively. (You’ll also see a YAML file, which is used to provide data for the component in Pattern Lab.)

But an accordion typically has multiple items, and HTML definitions should have a dl wrapper, right? Let’s take a look at the emulsify/components/_patterns/03-organisms/accordion/accordion.twig markup:

<dl class="accordion-item">
  {% for listItem in listItems.four %}
    {% include "@molecules/accordion-item/accordion-item.twig"
      with {
        "accordion_item": listItem.headline.short,
        "accordion_def": listItem.excerpt.long
      }
    %}
  {% endfor %}
</dl>

Here you can see that the only HTML added is the dl wrapper. Inside of that, we have a Twig for loop that will loop through our list items and for each one include our single accordion item component above. The rest of the component syntax is Pattern Lab specific (e.g., listItems, headline.short, excerpt.long).

Conclusion

If you are following along in your own local Emulsify installation, you can view this accordion in action inside your Pattern Lab installation. With this example, we’ve introduced not only the basics of component-based theming, but we’ve also seen an example of inheriting templates using the Twig include function. Using this example as well as the other pre-built components in Emulsify, we have what we need to start prototyping!

In the next article, we’ll dive into how to implement Emulsify as a Drupal 8 theme and start building a component-based Drupal 8 project. You can also view a recording of a webinar we made in March. Until then, see you next week!

May 24 2017
May 24

WordPress and Drupal

President of Mobomo, Ken Fang, recently sat down with Clutch for a Q and A about all things WordPress and Drupal.

What should people consider when choosing a CMS or a website platform?

They should probably consider ease of use. We like open-source because of the pricing, and pricing is another thing they should take into account. Finally, for us, a lot of it revolves around how popular that particular type of technology is. Being able to find developers or even content editors that are used to that technology or CMS is important.

Could you speak about what differentiates Drupal and WordPress from each other?

Both of them are open-source platforms, and they’re probably the most popular CMS’s out there. WordPress is probably the most popular, with Drupal running a close second. Drupal is more popular in our federal space. I think the main difference is that WordPress started off more as a blogging platform, so it was typically for smaller sites. Whereas Drupal was considered to be more enterprise-grade, and therefore a lot of the larger commercial clients and larger federal clients would go with Drupal implementation.

They’ve obviously both grown a lot over the years. We’re now finding that both of the platforms are pretty comparable. WordPress has built a lot of enterprise functionality, and Drupal has built in a lot more ease of use. They’re getting closer and closer together. We still see that main segregation, with WordPress being for smaller sites, easier to use, and then Drupal for more enterprise-grade.

Could you describe the ideal client for each platform? What type of client would you recommend each platform for?

Definitely on the federal side, Drupal is a much more popular platform. Federal and enterprise clients should move to the Drupal platform, especially if they have other systems they want to integrate with, or more complex workflow and capability. WordPress we see much more on the commercial side, smaller sites. The nice thing about WordPress is that it’s pretty quick to get up and running. It’s a lot easier for the end user because of its limited capability. If you want to get something up more cost-effectively, that’s pretty simple, WordPress is a good way to go.

Could you speak about the importance of technical coding knowledge when building a website on either platform, from a client’s perspective?

Most of these main CMS’s are actually built in PHP, and most of them have a technology stack that requires different skillsets. So, on the frontend side, both of them require theming. It’s not only knowing HTML, CSS, and JavaScript, but it’s also understanding how each of the content management systems incorporate that into a theme. You usually start off with a base theme, and then you customize it as each client wants. As such, you need either WordPress or Drupal themers to do that frontend work. For any backend development, you do need PHP developers. For Drupal, it’s called modules. There are open-source modules that people contribute that you can just use, you can customize them, or you can even build your own custom modules from scratch. For WordPress, they’re called plugins, but it’s a very similar process. You can incorporate a plugin, customize it, or write your own custom plugin.

In between all of this, because it is a content management framework and platform, there are site builders or site configurators. The nice part about that is that you can literally fire up a Drupal website and not have to know any PHP coding or whatever. If you’re just doing a plain vanilla website, you can get everything up and running through the administrative interface. A Drupal or WordPress site builder can basically do that, provided they are savvy with how the system actually works from an administration standpoint. So, those are the technical skills that we typically see, that clients would need to have. In many cases, we’ll build out a website and they’ll want to maintain it. They’ll need somebody in-house, at least a Drupal site builder or a themer, or something like that.

Do you have any terms or any codes that clients should be aware of or should know prior to trying to launch a project in Drupal or WordPress?

PHP is definitely the main language they should know, and then HTML, JavaScript, and CSS for the frontend stuff. Drupal 8 has some newer technologies. Twig is used for theming as an example, so there’s a set of technologies associated with Drupal 8 they need to know as well.

Is there a particular feature of WordPress or Drupal that impressed you and potential users should know about?

I’m going to lean a little more into the Drupal world because a lot of people are starting to move to Drupal 8, which was a big rewrite. There are now a lot of sites starting to use that in production. They did quite a bit of overhaul on it. It is more API-driven now. Everything you do in Drupal 8 can be published as a web service. You can even do a lot of what they call headless Drupal implementations. That means you can use some of the more sexy frameworks, like Angular or React, to build out more intricate frontends, and still use Drupal as a CMS, but really as a web service.

Are there any features of the two platforms that could be improved to make it a better CMS?

I think they’re pretty evolved CMS’s. For both of them, platforms are getting into place that let you build right on the CMS without having to install it yourself. Platforms like Acquia, WordPress.com, and Automattic. These platforms are profitable because, from an enterprise standpoint right now, it is hard doing multisite implementations at that scale and managing all of the architecture. From a technical standpoint, if you get an enterprise client who says they want to be able to run a thousand sites on a single platform, that becomes difficult to do. They both have the ability to support multisite implementations, but advancements that make those types of implementations easier to use and deploy would be significant for both platforms.

What should companies and clients expect in terms of cost for setting up a website, maintaining it, and adding new features?

For a very basic site, where you’re just taking things off the shelf – implementing the site with a theme that’s already built, and using basic content – I would say a customer can get up and running anywhere from two to six weeks, $20,000-30,000. Typically, those implementations are for very small sites. We’ve seen implementations that have run into the millions, that are pretty complex. These are sites that receive millions of hits a day; they have award-winning user experience and design, custom theming, integration with a lot of backend systems, etc. Those can take anywhere from six to twelve months, and $500,000 to $1 million to get up and running.

Can you give some insight into SEO and security when building a website?

The nice thing about Drupal and WordPress is that there are a lot of modules and plugins that will manage that, from Google Analytics to HubSpot, all sorts of SEO engines. You can pretty much plug and play those things. It doesn’t replace the need for your traditional content marketing, analyzing those results and then making sure your pages have the appropriate content and keywords driving traffic into them, or whatever funnel you want. All your analytic tools usually have some sort of module or plugin, whether it’s Google, Salesforce, Pardot, or whatever. A lot of those things are already pretty baked in. You can easily get it up and running. That’s the nice thing about the SEO portion of it.

The other nice thing about it being open-source is that there are constant security updates. Using these CMS systems, because they tie into all the open-source projects, if you download a module, anytime there’s a security update for it, you’ll get alerted within your administrative interface. It’s usually just a one-click installation to apply that security patch. That’s nice, as you’re literally talking about hundreds of thousands of modules and millions of users. Vulnerabilities are usually found and patched pretty quickly. As long as you stay on that security patching cycle, you should be okay. You could still do stupid stuff as an administrator. You could leave the default password, and somebody could get in, so you still have to manage those things. From a software perspective, as long as you’re using highly active contributed modules and core, security patches and findings come out pretty regularly.

As a company, because we do stuff with some regulated industries like banking and federal agencies, we usually have to go a level above on security. Take a WordPress site or whatever, we would actually remove that from the public so it couldn’t be hit from outside of a VPN or internal network, and then have it publish out actual content and static pages so the outside just doesn’t even connect to the back-end system. That does take some custom programming and specialty to do. Most people just implement your regular website with the appropriate security controls, and it’s not a big issue.

Are there any additional aspects of building a website or dealing with a CMS that you’d like to mention? Or any other CMS platforms you’d like to give some insight on?

For us, because we are such a big mobile player, we typically would say that whatever you build with your CMS, obviously focus on user experience. Most people are doing a good job of that these days. One of the areas that is still a little weak is this whole idea of content syndication. There’s still a big push where the content editors build webpages, and they want to control the layout, pages, etc. They get measured by the number of visitors to the website and all that stuff. I’m not saying that’s not important; however, we’re trying to push the idea of web service content syndication: how you use these CMS’s so your content gets syndicated worldwide. Success doesn’t necessarily have to be measured by how many people hit your website; it should be measured by the number of impressions.

For instance, with the work we’ve done at NASA, they announced the TRAPPIST-1 discovery of potential Earth-like planets. That drove a huge amount of traffic to the website, probably close to nine million hits that day. If you look at the actual reach of that content and NASA’s message – through the CMS’s integration with social media, with APIs that other websites were taking, with Flickr, that sort of thing – it hit over 2.5 billion social media posts. That’s an important thing to measure. How are you using your content management system more as a content syndication platform, as opposed to just building webpages? USGS has also done a really solid job of this ‘create once, publish everywhere’ philosophy. I think people should be looking at content management systems as content management systems, not as website management systems.

We ask that you rate Drupal and WordPress on a scale of 1 – 5, with 5 being the best score.

How would you rate them for their functionalities and available features?

Drupal – 5 – We have a bias towards Drupal because it’s more enterprise-grade. It fits what a lot of our clients need. I think they’ve come a long way with both the 7 and 8 versions and have really brought down the cost of implementation and improved the ease of use.

WordPress – 4 – I think it’s fantastic. It’s obviously extremely popular and very easy to set up and use. I give it a 4 and not a 5 because it’s not as easy to extend to enterprise-grade implementations. For some functionalities, you still have to dig into core, and nobody wants to be modifying core modules.

How would you rate them for ease of use and ease of implementation?

Drupal – 4.5 for ease of use, because it’s not as easy as WordPress, and 4.5 for ease of installation.

WordPress – 5 for ease of use, and 4 for ease of implementation. If you want to go out of the box, it’s a little more difficult. Configuring multisite is a real difficulty in WordPress.

How would you rate them for support, as in the response of their team and the helpfulness of available online resources?

Drupal – 4

WordPress – 4

Being open-source projects, there are a ton of people contributing. They’re very active, so you usually can get your answers. In many cases, to get something embedded into core, it does have to get reviewed by the organization, which is a bunch of volunteers for the most part. Because of that, it does take a while for things to get embedded.

How likely are you to recommend each platform for a client?

Drupal – 5

WordPress – 5

I think they’re the strongest CMS’s out there for the price.

How likely are you to recommend each platform for a user to build their own DIY website?

Drupal – 3

WordPress – 4  

If you’re going to build your own website, and you have zero technical skills, you might want to look into a Weebly, Wix, or something like that. There is a need to know how to do site-building if you use Drupal or WordPress. Somebody has to configure it and understand it.

How would you rate your overall satisfaction collaborating with each platform?

Drupal – 5

WordPress – 5

We implement on both of them regularly, and they’re really great. They solve the need for a lot of our clients to migrate from much more expensive legacy systems.

Clutch.co interview: https://clutch.co/website-builders/expert-interview/interview-mobomo-dru...

May 24 2017

In Drupal 7, site deployments could be rather difficult on ambitious sites. Some database-level changes were worth scripting in update hooks (turning on modules, reverting views, etc.) and some usually weren't (block placement, contrib module configuration). I remember days when a deployment involved following a three-page-long Google doc of clicks that had to be carefully replicated. Ugh.

A New Hope

So if you've taken the dive into Drupal 8, you'll quickly discover one of its most prominent features - Configuration Management. Drupal 8's ability to manage configuration with yml files is absolutely amazing! It's nearly akin to watching Star Wars and thinking "Hey, I can do anything with a lightsaber! Fight bad guys, cut holes in doors, remove my handcuffs. Sweet!"

The Empire Strikes Back

Here's the rub. Managing Drupal 8 configuration in complex real-world apps is akin to building a real-world laser sword after watching Star Wars, only to promptly burn your face off and lose two limbs as soon as you try to fight with it. "Ambitious digital experiences" essentially equates to "arduous development concerns," and even config management can't save the day simply by existing. You must use it for good. You must unlearn what you have learned. I blogged a bit on this shortly after Drupal 8 was released, but oh how much I've learned since then!

We've been doing Drupal 8 pretty heavily for about a year and a half here at Ashday, and we had both the fortune and misfortune of needing to manage a more complex setup, which quickly revealed our deficiencies in understanding how to properly manage config.

Here's the scenario: A client needs a site that will become the model for many sites, but they don't want them to be a single site with multiple domains and they also don't want it to be costly or complicated to keep them mostly similar from a functional perspective. Given that our preferred hosting solution is Pantheon, this quickly turned into an obvious Upstream project. And that means figuring out a new way to manage D8 config other than just import/export of the whole site.

If you aren't familiar, a Pantheon Upstream works nearly identically to their core updates - you have a remote repository that, when code gets pushed to it, notifies you of updates through the dashboard, where you can apply them the same way you do Pantheon core updates. It's pretty slick because it provides an easy way to share a big chunk of code and apply updates to many sites with a few clicks (well, except when nearly every update is major and requires hands-on management - but I'm not bitter).

The Phantom Menace

Our first try at this was to give the Features module a go, but at the time the interface was just too buggy to give us enough confidence to rely on it: it auto-selected what we didn't want, didn't select what we did, and didn't support some key things we needed, like permissions. As a result we decided to home-brew our own solution. We knew these sites were going to have a lot of config in common and a lot of config that was unique, and we needed to deploy to many of them, all in different states, without tragedy striking. To accomplish this, we concocted the following procedure, which we would run at deployment time, all from a single drush command (a rough sketch follows the list).

  • Export the current site's live config (using drush) to the config sync folder
  • Copy all config files (with uuids removed) in our cross-site custom module over top of the config sync directory
  • Copy all config files (with uuids removed) in our site-specific custom module over top of the config sync directory
  • Import all config.
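
To make that concrete, here is a minimal, illustrative sketch of what such a single-command deployment step might look like. It assumes Drush 8 on a Drupal 8 site, the module paths are hypothetical, and the real script also stripped UUIDs from the copied files before importing.

#!/usr/bin/env bash
# Rough sketch only, not the actual Ashday script; paths are placeholders.
set -e
SYNC_DIR="config/sync"   # the site's config sync directory

# 1. Export the live site's current config to the sync folder.
drush config-export --destination="$SYNC_DIR" -y

# 2. Layer the cross-site (upstream) config over the sync folder.
cp modules/custom/shared_config/config/install/*.yml "$SYNC_DIR"/

# 3. Layer the site-specific config on top of that.
cp modules/custom/site_config/config/install/*.yml "$SYNC_DIR"/

# 4. Import the combined result; whatever was copied last wins.
drush config-import --source="$SYNC_DIR" -y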

This allowed each upstream site to stray a bit as needed, while assuring us that the config we cared about was prioritized in the proper stacking order. The approach ultimately wasn't that different from the goal of Features, but we were in control of the process, it was all live, and it was relatively quick. And you know what? It worked! For a while...

And then it didn't. You see, the method we used caused Drupal to see every config file we were tracking (upwards of 300) as changed, simply because of the missing uuid. So even if only 8 config files changed in a deployment, Drupal was attempting to import hundreds of config files every time. This meant deployments started to slow significantly as the site grew in complexity, and eventually we started having timeout issues and long deployments. We also started to run into issues when there was a significant core update (e.g., 8.3), because so much config was being imported unnecessarily that wasn't compatible with the new code at that moment, since db updates hadn't run yet. Not good. It was time for something else.

Return of the Jedi

The Jedi in question here is again the Features module. Or maybe it's Mike Potter. At least it's not me anyways. At DrupalCon Baltimore, I was set on speaking with Mike about how we were handling config because I simply knew there was a better way. If you don't know, Mike is one of the founders of the Features module and ran a great BOF on config management in Baltimore.

So I found this delightful man and laid out what we were doing and he reacted exactly as I had hoped. He didn't say that what we were doing was terribly wrong, but it made him visibly uneasy. After a chat, I discovered that Features had come a long way since we initially tried to use it and we should really give it another shot. He also explained some of the configuration of Features to help me better understand how to use it.

So we returned to Features now and are much happier for it. The thing is though that I don't think I would have really known how to manage it if we hadn't taken the deep dive into config and figured out how it needed to work. It all helped us a lot to decide how to incorporate Features properly for this particular situation so that I actually feel good about relying on it again. And that's how most good Drupal development goes. You really should know how something works before simply relying on a contrib module or someone else's code to take care of everything because otherwise you won't really know how to deal with problems - heck, you might not even know you have a problem! I personally don't prefer spending weeks writing code and then depending at a critical moment on a mysterious piece to make it all successfully roll out to production.

So as it all played out, we now understand what Drupal puts in config, what we care about and don't, what belongs in the upstream vs. our site-specific modules vs. nowhere, etc. Here is our current process after this six-month-long journey.

  • Revert the global base feature
  • If needed, revert the site-specific feature
  • Run our previous script outlined above, but now on only the 5 or 6 role config files, so we handle the permissions in the same fashion (see the sketch below)
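
A minimal sketch of that trimmed-down last step, assuming a Drush 8 install where config-import supports the --partial option; the module path and file names here are hypothetical.

#!/usr/bin/env bash
# Illustrative only: push just the handful of role files through a partial import.
set -e
PARTIAL_DIR=$(mktemp -d)

# Copy only the role config we still want to force onto every site.
cp modules/custom/shared_config/config/install/user.role.*.yml "$PARTIAL_DIR"/

# --partial imports only the files present in the source directory,
# leaving the rest of the site's active config untouched.
drush config-import --partial --source="$PARTIAL_DIR" -y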

So there you have it! For how long-winded this turned out to be, I'm still glossing over a lot of details that are pretty critical to understanding Drupal 8 configuration (e.g., blocks are a mix of config and content), but I recommend you do the same thing we did: really get your hands dirty and understand what's going on so that you don't get bitten at rollout. After all of this, we feel even more strongly that Configuration Management is an astoundingly useful component of Drupal 8, and now we find ourselves a bit sad when we update our Drupal 7 sites (a version we absolutely loved!) where we don't have this amazing tool.

So good luck and don't hesitate to drop us a note if you have any questions or thoughts on this stuff. I'll probably change my mind on all of it anyways tomorrow. That's why this job is awesome.

P.S. I apologize that I didn't find room to incorporate Attack of the Clones, Revenge of the Sith, The Force Awakens or Rogue One, but the reality is that I just didn't have time to modify our whole approach to configuration in order to make this blog post more cohesive.


May 17 2017

Shared Principles

There is no question that the frontend space has exploded in the past decade, having gone from the seemingly novice aspect of web development to a first-class specialization. At the smaller agency level, being a frontend engineer typically involves a balancing act between a general knowledge of web development and keeping up with frontend best practices. This makes it all the more important for agency frontend teams to take a step back and determine some shared principles. We at Four Kitchens did this through late last summer and into fall, and here’s what we came up with. A system working from shared principles must be:

1. Backend Agnostic

Even within Four Kitchens, we build websites and applications using a variety of backend languages and database structures, and this is only a microcosm of the massive diversity in modern web development. Our frontend team strives to choose and build tools that are portable between backend systems. Not only is this a smart goal internally, it’s also an important deliverable for our clients.

2. Modular

It seems to me the frontend community has spent the past few years trying to find ways to incorporate best practices that have a rich history in backend programming languages. We’ve realized we, too, need to be able to build code structures that can scale without brittleness or bloat. For this reason, the Four Kitchens frontend team has rallied around component-based theming and approaches like BEM syntax. Put simply, we want the UI pieces we build to be as portable as the structure itself: flexible, removable, DRY.

3. Easy to Learn

Because we are aiming to build tools that aren’t married to backend systems and are modular, this in turn should make them much more approachable. We want to build tools that help a frontend engineer working in any language to quickly build logically organized, component-based prototypes with little ramp-up.

4. Open Source

Four Kitchens has been devoted to the culture of open-source software from the beginning, and we as a frontend team want to continue that commitment by leveraging and building tools that do the same.

Introducing Emulsify

Knowing all this, we are proud to introduce Emulsify—a Pattern Lab prototyping tool and Drupal 8 starterkit theme. Wait… Drupal 8 starterkit you say? What happened to backend agnostic? Well, we still build a lot in Drupal, and the overhead of it being a starterkit theme is tiny and unintrusive to the prototyping process. More on this in the next post.
[NB: Check back next week for our next Emulsify post!]

With these shared values, we knew we had enough of a foundation to build a tool that would both hold us accountable to these values and help instill them as we grow and onboard new developers. We also are excited about the flexibility that this opens up in our process by having a prototyping tool that allows any frontend engineer with knowledge in any backend system (or none) to focus on building a great UI for a project.

Next in the series, we’ll go through the basics of Emulsify and explain its out-of-the-box strengths that will get you prototyping in Pattern Lab and/or creating a Drupal 8 theme quickly.

Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

May 17 2017

Introduction

Ten years ago (at the end of 2006), Drush appeared to make common tasks easier for Drupal developers. It wasn’t immediately popular, as it was a command-line tool and a lot of people didn’t appreciate the idea, but year after year its popularity grew, as did its functionality.

By the time Drupal 7 came out, it was unimaginable for most developers to build a site without Drush, both because of the incredible boost it gave their development process and because of the option of generating a site from a makefile. That removed the need to commit contrib code to the project, which also discouraged people from hacking a module’s behaviour - unfortunately a common practice.

But Drush with a makefile wasn’t good enough. If a module was updated, the makefile wouldn’t show any update; sometimes a new release broke other modules; and there was no clear way to say “I need at most this version of that module.” Indeed, the whole update process wasn’t automated at all, and a makefile was really only useful for downloading a specific version of a module along with its dependencies (and it fetched the latest versions of those without checking whether they were compatible).

In parallel, five years ago the PHP community started development of Composer, a tool for dependency management of open-source PHP libraries. It quickly became very popular, and big frameworks like Symfony adopted Composer the same year it was released (Symfony 2.1.0 was the first version to use it, but nowadays all PHP frameworks use it as the default install method). This tool addressed all the dependency issues that Drush had, and it also implemented autoloading.

When Drupal 8 came out in 2015, it tried to use Composer in a very non-intrusive way, and as a result Composer was hardly worth using: it was very slow, every new version overrode the Composer files, and it had a few other disadvantages, but at least we got autoloading (finally!!). However, things changed once the GitHub project “drupal-composer/drupal-project” appeared; suddenly it was possible to create a Drupal site and add/remove/update modules very easily without the help of any external tools other than Composer.
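
As a rough illustration of that workflow, the commands below follow the pattern documented by the drupal-composer/drupal-project template at the time; the project name and the drupal/token module are just examples.

# Create a new Drupal 8 project from the community template.
composer create-project drupal-composer/drupal-project:8.x-dev my_site --stability dev --no-interaction

cd my_site

# Add, update, or remove a contrib module using Composer alone.
composer require drupal/token
composer update drupal/token --with-dependencies
composer remove drupal/token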

How Composer works

Composer uses a JSON file where you define the setup of your project, and a lock file that records the exact versions of the packages installed when you last ran composer update (or whenever the lock file was generated). The lock file is especially useful for making sure the deployed versions of dependencies are the same on every build and deployment, and it is commonly committed to version control.

When you execute ‘composer install’, it first reads the lock file and installs everything from there. If this file doesn’t exist, it reads the JSON file, installs the latest versions allowed by the constraints specified there, and then generates the lock file.

Executing ‘composer update’ ignores the lock file in order to get the latest versions of what you specified in the JSON file, so it’s a very similar process to executing the install without a lock file.
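
In command form, the difference looks roughly like this (the single-package update is an optional, more targeted variant):

# Reproducible build: install exactly what composer.lock records.
composer install

# Deliberate upgrade: ignore the lock file, resolve the newest versions
# allowed by composer.json, then rewrite composer.lock.
composer update

# Narrower and safer: update one package (and its dependencies) only.
composer update drupal/core --with-dependencies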

Composer does more than just download packages: it also provides an autoloader for all the namespaces, and you can define custom namespaces for your project, for example a namespace for your tests.

You also have the option to add hooks (pre, post, etc.) to the script commands to further automate your application’s build process.

Configuring Composer

Repositories

This is probably one of the most important sections for Drupal developers to understand.

By default Composer uses Packagist as its only source of packages, and Drupal packages are available under “packages.drupal.org/8”. If we have a custom module, profile or theme somewhere else, like GitHub, Bitbucket, your own Packagist account or any other private repository, we have to add that location as a custom repository.

Composer lets you define settings for repositories, such as the type of a repository. For more information, read this documentation.
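
Repositories can also be registered from the command line; a rough sketch, where the Drupal endpoint is the one mentioned above and the GitHub URL and package name are hypothetical:

# Register the Drupal.org package endpoint.
composer config repositories.drupal composer https://packages.drupal.org/8

# Register a custom theme kept in a Git host; Composer reads its
# composer.json to discover the package name.
composer config repositories.my_theme vcs https://github.com/example/my_theme.git

# The custom package can then be required like any other.
composer require example/my_theme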

Dev mode

In the JSON file you can define libraries and namespaces that are only going to be used for development. This way you can deploy minimal code to production, while in your development environments you can have that code plus some test frameworks and debugging tools, all defined in the same file.

By default Composer installs the dev dependencies, but you can add --no-dev to install only your production libraries.
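
For instance (drupal/devel is just an example of a development-only package):

# Declare a development-only dependency (stored under "require-dev").
composer require --dev drupal/devel

# Local development: installs regular and dev dependencies.
composer install

# Production build: skip everything in "require-dev".
composer install --no-dev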

Scripts

You can create custom commands, which are very useful for things like providing an alias for a very common command, running multiple commands as a single command, or hooking commands to run before or after others execute. You can find more documentation here.

Config option

In the config section you can add tokens for a private repository or change the default behaviour of Composer. Most of these options are really useful for avoiding extra parameters every time you execute a Composer command. The official documentation of this section can be found here.
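
Two small, hedged examples of the kind of settings this covers (the token value is a placeholder):

# Store a GitHub token globally so Composer can reach private repositories
# without prompting for credentials on every command.
composer config --global github-oauth.github.com <your-token>

# Per-project tweaks, e.g. a longer process timeout and a custom bin dir.
composer config process-timeout 600
composer config bin-dir bin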

Conclusion

Composer is not a replacement for Drush or Drupal Console, but it has so many great benefits that it’s worth persevering with and adopting more widely across the Drupal and PHP communities as the basis of development workflows.

It follows a very intuitive standard, and you can build an amazingly easy deployment process that is simple for other developers to understand and straightforward to use with CI/CD tools.

So don’t be afraid of Composer, start using it today and find out for yourself how useful it can be!

Also posted here.

May 12 2017

Two weeks ago, I presented our story of rebuilding rednoseday.com on Drupal 8 at DrupalCon Baltimore, Drupal’s largest gathering with an attendance of over 3000!

I talked about our journey of building a product to power all our editorial websites at Comic Relief (see my previous blog post), and focused on three topics: editor experience, automation & streamlining, and using decoupled services.

So far, our product ecosystem proudly powers www.rednoseday.com and the upcoming Red Nose Day USA Campaign, and we are currently working hard to bring www.comicrelief.com on board as well!

Check out the video of my presentation (audio and slides), or the slides only.

I’d be happy to hear your thoughts, questions and feedback below.

Featured image by Jeff Geerling

May 10 2017

DrupalCon is many things to many people. For me, this year’s North America DrupalCon in Baltimore was a chance to connect with my remote co-workers in the same place, help share knowledge while learning things myself, and celebrate all the things that Drupal makes possible.

The Drupal 8 with React.js and Waterwheel Training

Our first big event was “API First Drupal 8 with React.js and Waterwheel Training”, where Web Chef Luke Herrington took a canonical JavaScript application—a todo list built with React—and hooked it up to Drupal 8 through a new JavaScript library called Waterwheel.js. Todos were stored in a headless Drupal site via the JSON API module, and we even provided a login page and a `like` button for todos. Although we had a small army of Web Chefs available to help, Luke had created such a great training that our extra support wasn’t needed, and the attendees were really able to dive deep into how everything worked.

Future of the CMS: Decoupled

“I’ve completely rewritten my talk,” said Todd, the Four Kitchens CEO, at the team dinner on Monday night. I’ve seen him give this talk before but this declaration really piqued my curiosity.

There were a lot of talks at DrupalCon about the “how” of decoupling, but Todd’s revised talk is a great summary of the “why”. In it, Todd talks about the differences between CMSes being “content management systems” versus “website management systems” and about how that content can be managed so that it is reusable on all categories of devices. Because the technology is always changing, it’s a talk he rewrites at least once a year, and I’m glad I got to see this version of the 2017 talk when I did.

Supercharge Your Next Web App with Electron

To show off his work in Electron, Web Chef James Todd brought two drawing robots to DrupalCon that he set up in our booth. Each machine was powered by RoboPaint, a packaged-up web app. I’ve been curious about Electron for a while, and when I learned that James was giving a talk on the subject I immediately reached out to help him build his slide deck so that I could learn more. His presentation was thorough and entertaining, and he encouraged people to “experiment and play with it, it’ll be fun”.

Drinks with a Mission

The Drupal community believes that open source technology has the power to improve the lives of others, so instead of the usual DrupalCon party, this year, Four Kitchens teamed up with Kalamuna and Manatí to host “Drinks with a Mission”.

We started the night by asking, “If you had a magic wand that would fix a problem, what problems would you fix?” Answers were written down on post-it notes, which were then sorted into groupings, and finally assigned to teams. Each team took their topic, such as How to Better Connect with Nature, and had to come up with solutions to the topic problem. Great ideas can begin in unexpected places, and the ensuing solutions were as thoughtful as they were hilarious.

Watch the recorded stream of the event: Part 1, Part 2

Taking the Train Home

In the last few years I’ve started to become enamored with the concept of “taking the train”. So at the end of DrupalCon I got my wish, and instead of flying, I spent an entire day traveling by rail: from Baltimore, through Philadelphia’s gorgeous train station, and then on to home in the middle of Pennsylvania.

Randy Oest

Randy Oest is an avid Star Trek fan, plays too many board games, and bought his mother an iPad so that he wouldn't have to fix her computer anymore.


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
