Jul 26 2016

A detailed blog post by our Drupal developer about using
the Drupal Composer template and Phing. It is written from
the point of view of the latest Drupal version — Drupal 8.

Every Drupal developer faces routine daily tasks, regardless of the development area, whether it is front-end, back-end, or QA. Most of us try to optimize any workflow we can, and there are plenty of technologies to help with that. Even simple tasks like a fresh Drupal installation or updating a local database from the database server can become a stimulus for creating tools to optimize them. With that in mind, I would like to look at two simple tools that solve the above-mentioned tasks (and many others).

Since we are mostly interested in Drupal 8, we will consider these tools from the point of view of the latest Drupal version, although both also work with Drupal 7 (for Phing, the Drupal version does not matter at all).

Drupal Composer template for Drupal projects

As the title suggests, this is a tool built around Composer to optimize and speed up the installation and further upgrades of Drupal projects, whether that involves core upgrades or installing and updating contributed modules.

The tool is available on GitHub, and most of the work comes down to one command:

 composer create-project drupal-composer/drupal-project:8.x-dev some-dir --stability dev --no-interaction

where some-dir is the name of the folder that will be created for the project, and 8.x is the branch and, respectively, the Drupal version to be installed.

It is worth noting that in order to work with this tool you need to have Composer installed, and if it is already installed, you may need to update it to version 1.0.0 or higher (Composer will notify you about this in your terminal).
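If you are not sure which Composer version you have, you can check it and, if needed, update Composer itself, for example:

 composer --version
composer self-update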

Having started your project, you can go and check out the tasks in your bug/issue tracker, as it will take some time.

When all the dependencies are downloaded and the project is created, you can open the site in your browser and continue the standard Drupal installation process.

Let's consider the main benefits of using the Drupal Composer template.

The structure of the files:

As you can see from the file structure, Drupal itself, namely its core, modules, themes and profiles, lives in a folder one level below the project root (the web folder), which makes the root files externally inaccessible. It is worth noting that your virtual host (Apache) or server block (nginx) should point at the web folder. By default, all contributed modules, themes and profiles are installed into the contrib folder inside the appropriate directory (web/modules, web/themes, web/profiles), which separates them from the custom code. Note that if for some reason you place custom code in the contrib folder (and you shouldn’t do that), it will not fall under version control.

Version control: As you can see from the .gitignore file in the project root, folders such as vendor, core, contrib and files are excluded from version control. So the repository will include only the files Composer needs, your custom code, and additional configuration files, depending on your project requirements.
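For illustration, the relevant part of the template's .gitignore looks roughly like this (the exact paths may differ between template versions):

 /vendor/
/web/core/
/web/modules/contrib/
/web/themes/contrib/
/web/profiles/contrib/
/web/sites/*/files/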

Additional components by default: After the installation, a developer has two very important tools, Drush and Drupal Console, which means they do not need to be installed in the system in advance. They can be run from the web folder using the following commands:

 ../vendor/bin/drush some-command
../vendor/bin/drupal some-command

Pre/post install/update scripts/commands/methods: The scripts/composer folder contains a PHP file with a ScriptHandler class and standard methods. If your project requires additional checks, file creation, etc., you can add a method to this class and register a callback to it in the scripts section of the composer.json file. That section already contains example method callbacks that run during the pre/post install/update events. You can also add a callback of your own command, or run shell scripts, from the scripts section, which helps automate some processes.
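As a rough illustration, the scripts section of the template's composer.json wires these methods up along the following lines (the exact method names may differ between template versions):

 "scripts": {
    "pre-install-cmd": [
        "DrupalProject\\composer\\ScriptHandler::checkComposerVersion"
    ],
    "post-install-cmd": [
        "DrupalProject\\composer\\ScriptHandler::createRequiredFiles"
    ],
    "post-update-cmd": [
        "DrupalProject\\composer\\ScriptHandler::createRequiredFiles"
    ]
}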

These are the main advantages of using the Drupal Composer template. It’s hard to disagree that this approach reduces the time spent deploying a fresh project and supporting it afterwards. Providing all of this on your own would take much more time.

As for further use, installing modules, updating core and applying patches are now Composer’s job.

For example, installing the Layout Plugin contributed module looks like this:

 composer require drupal/layout_plugin:8.1.0-alpha22

And Drupal core update looks like this:

 composer update drupal/core

Please note that the modules will be installed from the package repository specified in the composer.json file (at the moment it is https://packagist.drupal-composer.org, but it is deprecated and will later be replaced by the official package repository on drupal.org). Note also that the module version specification differs slightly from the version shown on drupal.org (Composer will not accept the 8.x-1.0-alpha22 style shown on drupal.org for the Layout Plugin module, because it uses more precise versioning of the 8.1.0-alpha22 type).
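For reference, that package repository is declared in the repositories section of composer.json, roughly like this:

 "repositories": [
    {
        "type": "composer",
        "url": "https://packagist.drupal-composer.org"
    }
]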

Phing as a build tool for Drupal projects

Now it's time to talk about the tasks we face much more frequently than downloading and installing a fresh Drupal project.

Suppose that we have a Drupal 8 build that uses the Drupal Composer template mentioned above, and the project is currently under active development. What does it take to bring the local site instance up to date with the state of the repository (version control) where the project lives? It is done with a set of special commands:

 git pull origin master // let’s take the simplest option without inventing more repositories and branches
composer install // perhaps someone has added a new module, and you need to install it
drush config-import // new configs to be imported into your database came with the pull
drush updb // run database updates
drush entup // entities updates
drush cr // keep calm and clear cache :)

We have described the simplest case, one that a Drupal developer runs through several times a day during active development, but even this involves a set of 6 commands that you may get tired of typing, and, in fact, you may also need to run Bower, migrations, unit testing, or any other tool that is used on the project.

Your assistant here is Phing — a build tool based on Apache Ant.

Since we are already using the Drupal Composer template and presume that everyone on the project will use Phing, let’s install it with Composer specifically for our project (the choice between a local and a global installation is up to you; this case is just an example):

 composer require --dev phing/phing:2.*

At the time of your reading this article, the version may vary.

Next, the Phing folder will appear in the vendor folder of your project.

All we need to make our life easier (after all, it’s already complicated enough ;)) is an xml build file with 3 components:

  • Task — the code that calls a specific function (git pull, mkdir, etc.)
  • Target — the list of tasks, which may depend on another target
  • Project — the root element consisting of targets

Based on the available data, we write our first simple build file in order to optimize the 6 commands described above.

Let’s create a build.xml file in the project root and keep it under version control, so that other developers can use it too (since Phing is installed locally for the project). Just remember that this file should not be deployed anywhere outside the dev/stage environments, because it is not needed there.

Let’s describe the Project element of the file:

 <?xml version="1.0" encoding="UTF-8"?>
<project name="Awesome project" default="build" basedir="." description="Build site">
  <!-- Targets will go here. -->
</project>

The project element has several attributes whose purpose is clear from their names, but I would like to draw your attention to the default attribute — the name of the target that will be used by default, unless a target name is specified when running the phing command.

Let’s describe the target and its tasks for our case:

 <target name="build">
                        <exec command="git pull" dir="." description="Fetch data from cvs repository." logoutput="true"/>
                        <exec command="composer install --no-interaction" dir="." description="Install missing composer packages." logoutput="true"/>
                        <exec command="../vendor/bin/drush -y config-import" dir="web" description="Import drupal configuration." logoutput="true"/>
                        <exec command="../vendor/bin/drush -y updb" dir="web" description="Run drupal update database hooks." logoutput="true"/>
                        <exec command="../vendor/bin/drush -y entup" dir="web" description="Run drupal entity update hooks." logoutput="true"/>
                        <exec command="../vendor/bin/drush -y cr" dir="web" description="Rebuild the cache." logoutput="true"/>
</target>

As you can see, the target and task syntax is fairly simple. It is worth noting that the created target has a name specified in the default attribute of the root element, which means that the list of commands will be run by default.

For the most part, these are the basics of working with Phing. The final look of the build file will be like this:

 <?xml version="1.0" encoding="UTF-8"?>
<project name="Awesome project" default="build" basedir="." description="Build site">
                                <target name="build">
                                            <exec command="git pull" dir="." description="Fetch data from cvs repository." logoutput="true"/>
                                        <exec command="composer install --no-interaction" dir="." description="Install missing composer packages." logoutput="true"/>
                                        <exec command="../vendor/bin/drush -y config-import" dir="web" description="Import drupal configuration." logoutput="true"/>
                                        <exec command="../vendor/bin/drush -y updb" dir="web" description="Run drupal update database hooks." logoutput="true"/>
                                        <exec command="../vendor/bin/drush -y entup" dir="web" description="Run drupal entity update hooks." logoutput="true"/>
                                        <exec command="../vendor/bin/drush -y cr" dir="web" description="Rebuild the cache." logoutput="true"/>
                                </target>
</project>

and here is how to run it:

 phing // from the root of the project, where the build file is

Of course, these are not all of Phing's features. The tool has its own documentation, which will help you build much more complex build files.
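For example, targets can depend on one another via the depends attribute, which lets you split a build into smaller reusable steps. A minimal sketch (the target names here are just examples):

 <target name="pull">
  <exec command="git pull" dir="." logoutput="true"/>
</target>

<target name="build" depends="pull">
  <exec command="composer install --no-interaction" dir="." logoutput="true"/>
</target>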

As a bonus, I would like to share another target, which will be useful for getting a database from dev/stage with just one command:

 <target name="sync">
                        <exec command="../vendor/bin/drush -y sql-drop" dir="web" description="Drop the database." logoutput="true"/>
                        <exec command="../vendor/bin/drush -y sql-sync @stage @self" dir="web" description="Sync database from stage." logoutput="true"/>
                        <exec command="../vendor/bin/drush -y cr" dir="web" description="Rebuild the cache." logoutput="true"/>
                            <exec command="../vendor/bin/drush -y cim" dir="web" description="Import drupal configuration." logoutput="true"/>
                        <exec command="../vendor/bin/drush -y updb" dir="web" description="Run drupal update database hooks." logoutput="true"/>
                        <exec command="../vendor/bin/drush -y entup" dir="web" description="Run drupal entity update hooks." logoutput="true"/>
</target>

By placing this target inside the project element of your build file, you can easily synchronize with dev/stage with just one command:

 phing sync // sync is the name of the created target

Also, running Phing builds can be combined with the Composer pre/post install/update hooks described in the Drupal Composer section above.
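For example, assuming Phing is installed as a dev dependency as shown above, you could trigger the build from the scripts section of composer.json (a sketch to be adapted to your own setup):

 "scripts": {
    "post-install-cmd": [
        "vendor/bin/phing build"
    ],
    "post-update-cmd": [
        "vendor/bin/phing build"
    ]
}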

I hope this information was helpful and will reduce the time you spend on some common tasks.

Jul 26 2016

In my previous blog post, I talked about six design alternatives to avoid slideshows. The response to that blog post was great - who knew there were so many kindred spirits who dislike slideshows? From the feedback I received, the number one question was why are slideshows so bad in the first place? Hopefully this companion blog post will give you that deeper understanding of some reasons not to use a slideshow and maybe help convince your next client that slideshows are a thing of the past.

Why do people still use slideshows?

The hero/banner section is arguably the most important region of real-estate on the homepage of your website. It is a place where your site goals are displayed - whether that is promoting a specific event, convincing users to buy your product, or listing your mission statement. It should be put to best use. So why do most sites clutter it with ineffective slideshows?

  • Slideshows are the norm - politics, marketing trends, etc. cater to the misconception
  • Clients still believe in the ‘above the fold’ mentality - insisting that the most important content belongs at the top of the page. This is true of newspapers, but not always the case in website design.
  • Slideshows display a lot of content - clients use a carousel to get as much content on the screen at one time.
  • Slideshows are “cool” - don’t underestimate the draw of flashy visual eye-candy.

So Why Are Slideshows Bad?

My original blog post focused on the design/themer aspects of slideshows. The research that supported those ideas came from the many other blog posts on the subject. Try Googling “Are Website Slideshows Bad” and you will get at least 3,670,000 results. Obviously, I can’t read and give overviews of all the great blog posts about slideshows, but below are some of the main arguments I saw and links to specific blog posts that support each point, for further reading.

  • Slideshows are not effective - The blog post by Erik Runyon supports the idea that having more than one slide is pointless. Studies have shown that people look at and take action only on the first slide. If you do want them to look at more than one slide, make the first slide interesting or useful. The first slide has to sell the next slide to the user.

  • Slideshows can have poor accessibility - most slideshows are lacking in their support for users with accessibility issues, including users with language or motor skill issues. According to the w3.org, there are four main concepts to make a slideshow more accessible:
     
    • Structure: The carousel as a whole as well as individual slides should have structural markup (code) that enables users to establish where they are;
    • Controls: User interaction to change the display must be possible by both keyboard and mouse, as well as being identifiable, both visually and to people who can’t see them;
    • Action: When a control is activated the visually rendered effect should be replicated in actual content and functionality;
    • Scrolling: If the carousel automatically changes slides, a mechanism must be provided to pause or stop the movement.
       
  • Slideshows are a blindspot - multiple eye tracking tests show that slideshows get little attention from site users. Users just ‘gloss over’ these very important sections of your site. James Royal-Lawson argues that “banner attention and retention is a secondary task for our brains, so even having a slider containing a series of branding images and messages might not be anywhere near as effective as you think.”

  • Slideshows can distract or induce user apathy - According to the blog post by Peep Laja, “Our brains have 3 layers, the oldest part is the one we share even with reptiles. It’s mostly concerned about survival. A sudden change on the horizon could be a matter of life and death. Hence human eye reacts to movement – including constantly moving image sliders and carousels.” Having constant stimulation from slideshows distracts a user from a site’s important content.
  • Slideshows will not increase conversion rates - In theory, a slideshow should entice a user to take an action or otherwise become informed about a site goal or mission, but studies show that slideshows can actually decrease conversion rates due to frustration of use. Fahad Muhammad argues that “Marketers put image sliders on their pages because they give them a chance to feature multiple offers at the same time. And this is a serious problem. They divide the most important real estate of their website between offers. So what happens? Nobody goes home happy. You don’t know how to persuade your customer, so they get decision fatigue and don’t make a decision. You failed to solve their problem.”

  • Slideshows can be bad for SEO/UX - improper header tags, slow page load due to high bandwidth images or videos, lack of alternative image tags, etc., can have a negative impact on your site’s SEO/UX. Harrison Jones’s blog post states that, “As with any website, the more you complicate and add things, the slower the page loading speed. I came across a few sites featuring full-width carousels packed with high resolution images, which greatly impacted the page load speed. Every second it takes to load a page past two seconds hurts the user experience, and has an impact on search performance.”

  • Slideshows on mobile devices can be tricky - slideshows do not always work well on mobile devices and they can even slow down your site due to the amount of bandwidth they use. This can result in lower SEO rankings and poor user experience. In the blog post, Kyle Peatt reminds us to think differently about slideshows - “Don’t use a carousel just to get additional content on the screen. Think of carousels for one particular use case: providing additional content within a specific context. Use a carousel when vertical space is limited — as it is on mobile — and when the content is directly related — especially if the content isn’t useful to the user.”


Don’t Believe the Research - Take the Slideshow Challenge

Although I will admit that, as a former scientist, I am dissatisfied with the lack of hard, recent empirical data to support the good vs. bad argument about website slideshows, the limited data that is out there is compelling. It would be amazing to find even more studies on the effectiveness of website slideshows. If you have any links, please add them to the comments below.

In the meantime, Brad Frost encourages you to Take the Slideshow Challenge and make your own conclusions about using a slideshow on your own site.

As a Reminder, If You Must Use a Slideshow...

If you simply cannot convince your client to use an alternative to a slideshow, at least use a slideshow that is accessibility/UX focused. Here are some ways to make your slideshow more accessible and user friendly:

  • show the first slide by default and allow a user to navigate through the rest of the slides manually (not auto-rotating)
  • limit the number of slides and make sure the load time is fast
  • create navigation buttons that are highly visible and large enough to be useful on all devices
  • include all the controls available (next, previous, stop/pause, play, etc.) and make sure you can use the controls with a mouse, keyboard, and by touch
  • provide alternative ways to access the content (ex. text transcripts)

By providing accessible and user focused slideshows, we enable more users to access the important content of the site, thereby enhancing the overall user experience.

Additional Resources
6 Design Alternatives to Avoid Slideshows | Blog Post
Friday 5: 5 Problem Areas in Accessibility | Video
Accessibility Best Practices for Content Editors | eBook

Jul 26 2016
Access Readme Files in Drupal 8

In this tutorial we will add a module that makes site maintainers' lives easier.

With Drupal 8 setups you are encouraged to use Composer, Drupal Console and Drush, because this is a faster and more effective way of adding components to your site. However, this way you can't easily access the readme file to read information about the module.

We will show you how to use the README module to access readme files directly from the Drupal 8 admin area.

  • Download, install and enable the README module.
  • Now return to the Extend page. Then expand the details of the module to see the Readme button next to Configure for the Readme module.


  • Selecting the Readme option will open up the readme file within Drupal.


If you want to allow external access to the file you can go into configuration and set a security token to enable this feature.

All modules should contain a readme.txt or readme.md file. If you find yourself using a module that does not have one, I am sure that if you put in a request, the maintainers will add it to the module.


About the author

Daniel is a web designer from the UK, and a friendly and helpful part of the support team here at OSTraining.


Jul 26 2016
TL;DR In the past two weeks I worked on using the Image Properties feature offered by the Google Cloud Vision API to group image files together on the basis of the dominant color components filling them. In addition, I worked on detecting image files and filling in the Alternate Text field based on the results of Label/Landmark/Logo/Optical Character Detection, depending on the demand of the end user. This week, I worked on and developed tests to ensure that similar images are grouped in accordance with the Image Properties feature of the Vision API.

At present, the Google Vision API module supports the Label Detection feature to be used as taxonomy terms, the Safe Search Detection feature to avoid displaying any explicit content or violence, and User Emotion Detection to detect the emotions of users in their profile pictures and notify them about it.

I had worked on grouping the images on the basis of the dominant color component (red, green or blue) they are comprised of. I got the code reviewed by my mentors, and they approved it with minor suggestions about using constructor injection wherever possible. Following their suggestions, I injected the Connection object instead of accessing the database via \Drupal::database().

After making changes as per the suggestions, I started developing simple web tests for this feature, to ensure that similar images get displayed under the SimilarContents tab. This requires the creation of a new taxonomy vocabulary and an entity reference field on the image file entity. After creating the new vocabulary and adding the new field to the image file, I created image files using the images available in simpletest, which can be accessed through drupalGetTestFiles(). The first test ensures that if the vocabulary named ‘Dominant Color’ is selected, the similar images get displayed under the file/{file_id}/similarcontent link.
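As a rough, untested sketch of the shape such a web test might take (the namespace, class, module and field names here are only illustrative, not necessarily the ones used in the actual patch):

namespace Drupal\google_vision\Tests;

use Drupal\file\Entity\File;
use Drupal\simpletest\WebTestBase;
use Drupal\taxonomy\Entity\Vocabulary;

/**
 * Tests grouping of similar images by dominant color (illustrative sketch).
 */
class DominantColorGroupingTest extends WebTestBase {

  /**
   * Modules to enable (names are assumptions).
   */
  public static $modules = ['file', 'taxonomy', 'google_vision'];

  protected function setUp() {
    parent::setUp();
    // Vocabulary that will hold the dominant color terms.
    Vocabulary::create(['vid' => 'dominant_color', 'name' => 'Dominant Color'])->save();
    // An entity reference field pointing at this vocabulary would be added
    // to the image file entity here (FieldStorageConfig / FieldConfig).
  }

  public function testSimilarImagesAreGrouped() {
    // Use the sample images shipped with simpletest.
    $images = $this->drupalGetTestFiles('image');
    $image = current($images);
    $file = File::create(['uri' => $image->uri]);
    $file->save();

    // Similar images should be listed on the SimilarContents tab.
    $this->drupalGet('file/' . $file->id() . '/similarcontent');
    $this->assertResponse(200);
  }

}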

The grouping, however, requires a call to the Google Cloud Vision API, thus inducing a dependency on the API key. To remove the dependency, I mocked the function in the test module, returning custom data to implement the grouping.

To cover the negative aspect, i.e. the case when the Dominant Color option is not selected, I have developed another test which creates a demo vocabulary that simply stores the labels instead of the dominant color component. In this case, the file/{file_id}/similarcontent link displays the message “No items found”.

I have posted the patch covering the suggestions and tests in the issue queue to be reviewed by my mentors. Once they review it, I will work on it further if required.
Jul 26 2016

The content_type ctools plugin is the most used type of ctools plugin in Drupal 7. It allows us to quickly build complex (and configurable) components that can be used in the Panels interface. They are quick to set up, the easiest start being the definition of the $plugin array and the implementation of the plugin render function in a .inc file. Have you ever wondered, though, what the $subtype parameter of this render function is and what purpose it serves?

Most of the time our content_type plugins only have one type, so the $subtype argument is the same as the plugin name (and file name). However, it is possible to have multiple subtypes with slight (but critical) differences while sharing common functionality. Not many people are familiar with that. Intrigued? Let's see how they work.

When content_type plugins are processed (loaded, prepared for use and cached), ctools asks them whether they define any subtypes or are single. By default the latter is true, but in order to define variations we can either populate an array of subtype definitions in the main $plugin array or implement a function with a specific naming convention: module_name_plugin_name_content_type_content_types(). This callback then needs to return the plugin information for all the subtypes of this plugin.

But since it's easier to show than explain, let's take a look at an example. Imagine you need a simple content_type plugin that outputs data which depends on a certain ctools context. You can define your plugin as such:

$plugin = array(
  'title' => t('My plugin'),
  'description' => t('My plugin description'),
  'category' => t('My category'),
  'required context' => new ctools_context_required(t('Node'), 'node'),
);

This is a simple example of a plugin that depends on the Node context. But what if you want it to depend on the Node context OR the current User context? In other words, it should work on the node_view page manager template or the user_view one. Or whatever page these contexts are on but nowhere else.

Instead of required context you could use 'all contexts' => true. But this would then pass in to your render function all the available contexts. And this is neither elegant nor a statement of dependency on one of those two contexts. In other words, it will be available on all page manager pages but maybe won't do anything on most and it's up to the render function to handle extra logic for checking the contexts.

This is where plugin subtypes come to help out. Since your render function does the exact same regardless of context (or very similar), you can have a subtype for each. So let's see how that's done.

First, we simplify the main plugin array:

$plugin = array(
  'title' => t('My plugin'),
  'description' => t('My plugin description'),
  'category' => t('My category'),
);

Then we implement the function that returns the subtypes (following this naming convention):

function my_module_my_plugin_content_type_content_types() {
  return array(
    'node' => array(
      'title' => 'My plugin for nodes',
      'required context' => new ctools_context_required(t('Node'), 'node'),
    ),
    'user' => array(
      'title' => 'My plugin for users',
      'required context' => new ctools_context_required(t('User'), 'user'),
    ),
  );
}

The subtype machine name is the key in the array and the rest is regular plugin definition as we are used to. In our case we define two, each for their respective dependencies. And with this in place we achieve a number of things.

First, we can add the My plugin for nodes content_type plugin whenever the Node context is available and the My plugin for users when the User context is present. They cannot be used in other cases. Second, we ensure that whatever context is passed to the render function is either a Node or a User (nothing else). This can come in really handy when your context is custom and wraps an object that implements a common interface. Third, the $subtype argument to the render function now will be either node or user which is helpful to maybe slightly fork the functionality depending on the subtype.
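To illustrate that last point, here is a minimal sketch of what the render callback might look like, following the standard ctools naming convention (the actual output logic is, of course, project-specific):

/**
 * Render callback for the plugin; $subtype is either 'node' or 'user'.
 */
function my_module_my_plugin_content_type_render($subtype, $conf, $panel_args, $context) {
  $block = new stdClass();
  $block->title = t('My plugin');

  // The required context guarantees we get either a node or a user here.
  $entity = $context->data;
  if ($subtype == 'node') {
    $block->content = t('Something rendered for the node: @title', array('@title' => $entity->title));
  }
  else {
    $block->content = t('Something rendered for the user: @name', array('@name' => $entity->name));
  }

  return $block;
}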

Clear the caches and give it a go. Let me know how it works out.

Jul 26 2016

I started the week by testing the Field Encrypt module with my project, Pubkey Encrypt. Pubkey Encrypt provides support for encrypting data with users' login credentials by generating Encryption Profiles, and Field Encrypt provides support for encrypting field values using any specified Encryption Profile. In this way, both modules are expected to work together in harmony. I tested a lot, and both modules seemed to be getting along perfectly fine with each other.

When we were in the planning phase for Pubkey Encrypt, Field Encrypt had an issue where decrypted field data sometimes got cached when it was not supposed to. Due to the presence of three cache systems in Drupal 8, i.e. static cache, persistent cache and render cache, the issue needed to be dealt with much care. So I committed in my GSoC proposal to dedicate the last two weeks to fixing this issue. But interestingly, the issue has already been fixed, and now there’s a checkbox in the Field Encrypt settings via which a user can set a field as uncacheable.

So I spent quite some time studying the architecture of Field Encrypt and exploring its codebase to learn how the maintainers of the module got rid of this cache-related issue. It turns out that whenever a field is set as uncacheable, the module simply marks the corresponding entity type as uncacheable via this code block in hook_entity_type_alter():

foreach ($uncacheable_types as $uncacheable_type) {
  $entity_types[$uncacheable_type]->set('static_cache', FALSE);
  $entity_types[$uncacheable_type]->set('render_cache', FALSE);
  $entity_types[$uncacheable_type]->set('persistent_cache', FALSE);
}

After I had explored much of the Field Encrypt module, I thought I was in a good position to help in its issue queue, so I tried to fix two of its issues.

In my weekly meeting with mentors Adam Bergstein (@nerdstein) and Colan Schwartz (@colan) , we discussed:

  • the GSoC project submission guidelines; we’ve decided to create a separate branch in Pubkey Encrypt github repo with all the commits made during the 3-months of GSoC coding period. We think a link to that branch would meet Google Work Product Submission Guidelines and would make it easy for anyone to figure out the work I’ve done as a GSoC participant.
  • the scenario of an unprivileged user trying to access the Role key value; we’ve decided to throw an error message instead of an exception so as to ensure a graceful shutdown of the encryption/decryption mechanism instead of a complete system halt.
  • the scenario of a user trying to change his login credentials without providing the existing credentials; we’ve decided not to allow a user to do this, via a custom form validator for user_form. This means that the password-reset functionality, and other similar features, won’t work on a website with Pubkey Encrypt enabled.
  • the possibility of a feature to ask users, via email, to perform the one-time login when Pubkey Encrypt gets initialized. We think this feature would make it easy for others to use the module. I’d work on it in next week. For now, I’ve created an issue ticket to formally capture this need.

Next I worked on fine-tuning the module documentation: I updated the Architecture document to reflect the latest status of the module, added a User Stories document to provide step-by-step instructions for using the module with Field Encrypt, and wrote a README file to get unfamiliar users acquainted with the module. I’ve also tried to use simple phrases and real-life examples instead of technical jargon wherever possible, especially in the README file. One result of that effort is that the module’s description has been changed from the rather confusing phrase “Adds support for Credentials-based Public key Encryption support into Field Encrypt” to a relatively simple one: “Provides support for encrypting data with users login-credentials”.

I then bundled the default plugins provided by the module as submodules within it. So, for example, now we have pubkey_encrypt_openssl and pubkey_encrypt_phpseclib modules within the pubkey_encrypt module; the former provides an OpenSSL-based Asymmetric Keys Generator plugin implementation, while the latter provides a PHPSecLib-based one. In both submodules, hook_requirements() ensures the presence of the corresponding external dependencies.
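As an illustration of that last point, such a check might look roughly like this in the OpenSSL submodule's .install file (a sketch; the actual implementation in the module may differ):

/**
 * Implements hook_requirements().
 */
function pubkey_encrypt_openssl_requirements($phase) {
  $requirements = [];
  if ($phase == 'install' || $phase == 'runtime') {
    // The OpenSSL-based plugin cannot work without the PHP extension.
    if (!extension_loaded('openssl')) {
      $requirements['pubkey_encrypt_openssl'] = [
        'title' => t('OpenSSL'),
        'description' => t('The OpenSSL PHP extension is required for the OpenSSL-based Asymmetric Keys Generator plugin.'),
        'severity' => REQUIREMENT_ERROR,
      ];
    }
  }
  return $requirements;
}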

I then worked on an overview page for Pubkey Encrypt. The module generates Encryption Profiles for each role on the website, but an Encryption Profile for any role should only be used once all users from the corresponding role have performed the one-time login; otherwise, the security mechanism provided by Pubkey Encrypt won’t work to its full potential. Previously, there was no way of knowing which Encryption Profiles generated by Pubkey Encrypt were ready to use and which were not. Now the module provides an overview page that shows exactly that.

Then I worked on an experimental feature for the module which involves using cookies to temporarily store the Private key of any logged-in user. Whenever a user logs in, Pubkey Encrypt uses the user’s login credentials to decrypt their Private key and then temporarily stores the decrypted Private key in the session. We are currently using sessions because the ownCloud Data Encryption Model, on which this module is based, uses sessions. But there’s an idea of shifting to cookies, though I have yet to discuss it in detail with my mentors. Even though we have not made a final decision yet, I have started working on this feature: if we do decide to use it, it will already be there in an experimental branch, and if we decide not to, I will still have learned how to use cookies in a Drupal 8 website.

Since cookies need to be set in HTTP headers, I cannot simply call the setCookie() method in a hook_user_login() implementation. And because hook_init() isn’t present in Drupal 8, I cannot use it either for setting response headers. After much exploration, I finally did it by creating an event subscriber to KernelEvents::RESPONSE and calling $event->getResponse()->headers->setCookie($cookie) in the corresponding event callback function. As expected, the Pubkey Encrypt encryption mechanism is working perfectly fine with cookies too, though the tests are breaking and I have yet to fix those.
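Roughly, such a subscriber looks like this (a simplified sketch: the class name is made up, the real code also has to fetch the decrypted Private key from wherever it is temporarily stored, and the class must be registered as a tagged event_subscriber service in the module's services.yml):

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpFoundation\Cookie;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

class PrivateKeyCookieSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    return [KernelEvents::RESPONSE => 'onResponse'];
  }

  public function onResponse(FilterResponseEvent $event) {
    // Hypothetical: the decrypted Private key of the logged-in user.
    $private_key = '...';
    // Attach it to the outgoing response as a cookie.
    $cookie = new Cookie('pubkey_encrypt_private_key', $private_key);
    $event->getResponse()->headers->setCookie($cookie);
  }

}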

See all the code changes I did this week here: Pubkey Encrypt Week 9 Work.

At this point, I’m pleased to announce that all the work which I planned to do in my GSoC proposal three months ago, has been done. I’m super happy about the fact that I still have 3 weeks left and I’ll try to utilize them in the best way possible.

Jul 25 2016

DrupalCon Dublin is right around the corner and we're proud to say that 3 sessions submitted by Appnovators have been accepted!

This will be the first European DrupalCon since the release of the Drupal 8 project this previous November. Organizers are anticipating some of the best sessions, keynotes, and birds of a feather (BOF) roundtable discussions yet. In addition to sharing the latest Drupal knowledge, the planning team is also committed to sharing more and better sessions about getting off the island.

As a team, we had 7 sessions submitted for this Con, but knew that the Program Team had their work cut out for them with a record-breaking submission count for a European Con - 621. Congrats to all 130 selections! Check out all the sessions here. 

The Community Keynote was also announced, with Eduardo García presenting: Around the Drupal World in 120+ Days. Eduardo has spent the last few months travelling around the globe, meeting many of the diverse and dynamic members of the Drupalverse. His keynote will cover 3 topics from his journey:

  • Link with the community - community engagement now, and how we can do better
  • Language barriers - making introductions to Drupal multilingual and accessible for people around the world
  • Being a "knowmad" - creating connections for people who travel to different Drupal communities

Here are the 3 sessions we'll be presenting at DrupalCon Dublin:

Track: Coding and Development

Title: Composer Based Workflows for Drupal 8

Speaker: Kevin Moll 

One of the biggest changes in "Getting off the Island" with Drupal 8 is the adoption of Composer. Composer lets us easily manage dependencies and pull useful functionality from other parts of the PHP community and use it in Drupal. It's now not only used to pull in outside libraries, but can also be used to pull in all your Drupal modules as well.

This session isn't just for big companies or expert Composer users. Everyone from small to large companies, and from beginners to experts, will learn how they can leverage Composer and build a Drupal 8 workflow to easily manage and deploy code, whether it be on a single site or hundreds of sites.

Track: Core Conversations

Title: Workflow Initiative

Speaker: Tim Millwood

Announced at DrupalCon New Orleans by Dries, the Workflow Initiative is a planned Drupal 8 initiative to bring better content workflow tools into Drupal 8.2.0 and beyond.

At DrupalCon Dublin we will be days away from the 8.2.0 core release. This session will look at what we've been able to get into that release and how it plays with the contrib modules in this space.

Track: Site Building

Title: Enterprise Level Content Staging with Deploy

Speaker: Tim Millwood

Through Drupal 6, Drupal 7, and now Drupal 8, the Deploy module has been the best way to stage content between different environments.

Many of the underlying elements of Deploy are moving into core as part of the Workflow Initiative. This session will not delve into that, but will look, from a site builder's point of view, at how to configure and use these tools.

 

Appnovation is proud to be a Silver Sponsor of DrupalCon Europe, taking place September 26-30, 2016 at The Convention Centre Dublin, Dublin, Ireland.

For more information and to register, click here. 

Jul 25 2016

by David Snopek on July 25, 2016 - 12:02pm

Just finished a big project for client? Awesome!

Did you sell them a support and maintenance plan for their new site?

No? Well, I'm sorry to tell you: YOU'RE DOING IT WRONG!

But you wouldn't be the only one!

The vast majority of Drupal shops and freelancers build sites and move on without offering a support and maintenance plan, figuring if the client has any problems they can just bill them for it at their hourly rate.

However, you're missing out on several advantages - read more to find out what they are!

Setting client expectations

We all know that a website is never finished. It requires constant maintenance: security updates, bug fixes, changes for the latest SEO and mobile trends (ex. responsive design, AMP), and so on.

Well, we all know that, but our clients might not!

For most clients, when the project is done - the website is "done."

When they find a bug or need an update, they're left wondering, "Didn't I already pay for this?"

Yes, they'll probably come back to you and pay your hourly rate, but estimating the scope of this mini-project and having to sell it to them is an unnecessary source of tension for both sides.

But if you sell them a support and maintenance plan when selling them the initial project, you help to set their expectations for life after the project is over. They'll understand from the very beginning that the website needs constant maintenance, and they'll have a framework for receiving it.

Get future work!

How many times have you gotten a project to re-build a new website for a client who hates their old buggy website?

Have you ever wondered why they didn't go back to the person or company who built that old site and had those bugs fixed years ago?

Well, if you didn't sell your clients a support and maintenance plan, then YOU were probably the person who built the "old buggy site" in a couple of cases - and you didn't even know it!

Any unmaintained site is going to get buggy. Once people start to be annoyed or despise their site, they'll distrust the work that was done on the site originally. So, when they reach their breaking point, they'll go to someone new to build a new site, rather than return to you.

But if you have a history of fixing their problems and answering their support questions on a regular basis... First of all, the site won't become an unmaintained mess... But also, you'll maintain a long-term positive relationship with the client and when they need a new site or feature, you'll be at the top of their minds!

And it's easier and cheaper to get a new project from an old client than to chase new clients.

Recurring revenue

One of the hardest parts of doing project work is managing the feast and famine cycle.

You have times when all your potential clients finally sign the contract on the same day and you're struggling to figure out how you're going to do it all at once.

And you have the times (frequently just before that happens :-)), when you don't have enough work to keep you busy and you're hustling like crazy to find new clients!

Support and maintenance work can help to fill the gaps and stabilize your business.

Look out for your clients' best interests

There are a number of things that your clients need - I mean really NEED - but they might not know or understand that they need them.

For example: security updates.

Your clients care that their customers can buy their products or that the contact page works. Security is something abstract that, sure, they know they need it, but they might not understand that security is an on-going process and not a box that you tick off once and are done.

Providing your client with a support and maintenance plan allows you to stay on top of things like security updates for your clients. While they might never fully understand the value in that, it is in their best interest, and they'll feel the effect (if only passively) by not having the stress of their site getting hacked.

This is something you need to do to look out for your client's best interests - and not just "sell and run" :-)

Don't have the extra time? Outsource it!

I know - you have enough to do already! You've got a big project in the works and you can't drop everything every time a previous client's site has a critical problem or even when it needs a minor update.

Well, you can get all the advantages discussed above by outsourcing your support and maintenance to a partner company (like myDropWizard)!

Contact us today about white-label Support and Maintenance for your clients!

... or learn more about how our white-label service works!

Jul 25 2016

Thank you to everyone who participated in our 2016 certificate campaign. We sent a personalized certificate to everyone who joined or renewed their Association membership. We also asked members to help boost our outreach by encouraging others to join or renew. These contributions matter. The funds you helped raise support the Association’s work, and your goodwill inspires us. Whether you became a member for the first time, renewed, or took the time to share, you made the campaign a success.

Success feels great

From May 1 - June 30, 335 people became new members and 476 members renewed. For comparison, those numbers were 233 and 378 during that period last year. This campaign brought our total membership to 3,670 individuals and organizations. That’s a 12% increase over this time last year (from 3,290). Our certificate goal was to deliver 675 by the end of the campaign. But you helped us crush it. We delivered 854, exceeding that goal by 27%.

Lessons learned

This year, we created a landing page on assoc.drupal.org and promoted it via blog post, social media, and newsletters. One month into the campaign, we launched a new banner on drupal.org, and sent an email to members, asking for help sharing the campaign. From the attribution provided by members on the sign-up form, most learned about membership via drupal.org or through a community member/organization. Therefore, the banner/landing page and direct message to our members was more effective than using our social media channels and newsletters.

About campaign components

Last year, we used social sharing, a blog post, and newsletters. We also added new content to our existing membership and contribution pages. We didn’t create a landing page. The results? There were 885 tracked pageviews of the campaign-related blog post (during the campaign period) and we delivered 611 certificates.

This year, we did a little more to test whether a banner on drupal.org could make a difference. It definitely did.

We launched with a blog post on May 2, but this time we added a landing page. When the campaign ended, we’d had 1,025 tracked pageviews on the blog post (a 15.8% increase from last year). However, we didn't see a jump in membership sales (296 total) or much traffic to the landing page. On June 1, we added a banner to some drupal.org pages. That’s when it got interesting.

Traffic came from drupal.org, not assoc.drupal.org

The landing page we launched in May had 16,768 tracked pageviews during the full campaign period, but 98% of them (16,410) came after the banner was launched on drupal.org. June had 517 membership sales, and 50% of those were new members (up from 34% new members in June 2015).

This screenshot shows traffic to the landing page before and after the banner launch.

[Screenshot: Google Analytics shows a traffic spike and a sustained high level after the launch of the banner]

Digging deeper into the data, we looked at what members wrote when asked how they found out about membership. New members told us it was via drupal.org (54.8%) or thanks to a community member (19.3%). These percentages were even higher than when looking at total members from the campaign period. If we want to increase overall membership, having the landing page and banner combination is the way to go.

[Pie chart: 44.6% of members from the campaign period report drupal.org, 16.4% report a community member, and 18.% report DrupalCon as their source]

[Pie chart: 54.8% of new members from the campaign period report drupal.org, 19.3% report a community member, and 10.7% report DrupalCon as their source]

Compared to the 2015 campaign’s data, there were 123% more responses driven by evangelism, and 108% more mentions of drupal.org as the start of a member’s user journey.

You love selfies as much as we do

Thanks for getting in front of the camera! It came as no surprise that so many of you responded to our call for selfies. Our community is full of caring members who love to share. Not only did this make for a fun time, but it helped show the people behind Drupal.

What’s next?

A note about content: regrettably, we showed the same banner to all visitors, and its language caused some confusion about what members could do to help. We'll be mindful of that for future editions.

In the meantime, you can still help continue the momentum of this campaign. Reach out to us. Tell us why you’re a member. Share why you’re a member of the Drupal Association when you renew your membership—or anytime, really. No matter where you share, you help us help the community, and we all make a difference for each other and for Drupal.

Jul 25 2016

Yesterday all the accepted sessions for DrupalCon Dublin were announced, and we are delighted to report that 5 of our 8 session proposals were accepted! With Acquia being the only company receiving more acceptances, we are extremely proud of our achievement.

Testament to our high standing in the Drupal community, we are the only Irish company speaking at DrupalCon Dublin. Our accepted sessions this year span a number of different tracks, namely Business, Horizons, Site Building, Being Human and Core Conversations, and cover topics from accessibility to remote working to building mobile apps with the Ionic framework. Congratulations to all our speakers!

Here's a quick run down of each session.

Building a co-lingual website - lessons learned from ireland.ie

Speaker: Alan Burke
Track: Site Building

2016 marks the centenary of the 1916 rising in Dublin, a pivotal year in Irish history, and is marked with a series of high-profile events commemorating the rising. ireland.ie is the official state website for the 1916 commemoration and runs on Drupal 7.

While English is the main language in Ireland, Irish is the first official language. A decision was taken to present both languages side by side wherever possible for the 1916 commemorations - including on the website. This session will focus on the unusual co-lingual [2 languages side-by-side] approach, and how Drupal made it possible. 

Choosing Drupal - insider advice from an Irish multinational

Speaker: Alan Burke & Aisling Furlong from Glanbia
Track: Business

Struggling to sell Drupal to clients? Ever wondered what goes into the decision making process when choosing a CMS?
In 2014, Glanbia selected Drupal as the CMS of choice for marketing sites. This session will outline the decision-making process used, and what Drupal agencies can learn when pitching Drupal. This is a joint session proposal between Annertech and Glanbia.

Bridging the PhoneGap: Getting Started Creating Hybrid Mobile Apps with Drupal and Ionic Framework

Speaker: Mark Conroy
Track: Horizons

With the advent of hybrid mobile apps, you can continue being a Drupal frontend developer and also build apps without needing to learn new technologies. The mobile web is quickly catching up with native apps. The mobile web is free, and open, and available to all of us right now and doesn't bind us to proprietary systems. With the many advances being made in this area, we can create great mobile experiences for users.

Future Directions for Drupal Accessibility

Speaker: Andrew Macpherson
Track: Core Conversations

Drupal has made great advances in accessibility over several major releases, and nowadays ranks as a leading implementation of web accessibility standards.  This session will encourage contributors to look ahead at future challenges and opportunities for accessibility during the faster 8.x (and 9.x) release cycle. 

Happiness is... remote working

Speaker: Anthony Lindsay
Track: Being Human

Many Drupal agencies have remote workers. Some are entirely distributed. Whilst remote working is beneficial to all concerned in so many ways, it does come with its own challenges. This talk will cover the journey I took when I moved from a typical 9-5 office job and joined Annertech, an entirely distributed Drupal agency. It will highlight the challenges I found - the good, the bad, the funny and the downright surprising - and offer my experiences as examples for staying happy and healthy in what has the potential to be an isolating environment.

Congratulations to Alan, Anthony, Andrew and Mark on their great achievement. We look forward to seeing these and all the other great sessions at DrupalCon Dublin in September. Hope to see you there!

Jul 25 2016

With nearly all the projects I work on with Appnovation, we're tasked with some degree of updating, redesigning and modernizing an existing web presence, whether that is a dated website, re-modelling a business web presence in line with its evolution, or decommissioning legacy data services, with the target result being something new and shiny. Additionally, with the dawn of Drupal 8, we see a lot more requests to port sites to the latest and greatest version of Drupal.

Ideally this task would be simple if the existing system had been built in a sensible way, with a clear separation between content and presentation, a clear information architecture approach and a minimal dependency tree. However, our ideals are never really reality, and in my experience redesigns are never really just redesigns in the sense that replacing or tweaking the CSS would suffice.

When our clients look to consider a significant change to their website, like a re-branding refresh, it's often a good opportunity to think about their audience, their needs, goals and motivations and reflect that back into their content and business models, as to improve not just the look of a site but also its positive impact in whatever measure they wish to monitor that. Additionally considering some of the benefits that the new version of Drupal brings that in previous versions may have been difficult to implement or never even considered.

With that in mind, a ‘redesign’ is now much more than just updating or replacing a theme and moving to Drupal 8. With a little luck (and for those non-Appnovation readers) your sales team would have correctly guided the client into not thinking that it’s just re-skinning and will be a quick and simple job, with a fast turnaround. When inheriting a new redesign, it's always wise to take a look under the hood as early as possible. This will ensure that the potential nasty mess of hideous hacks and unstable code that needs a new paint job will be highlighted in the client / vendor relationship and the changing of its look and feel will be better understood, scoped and sized accordingly and all parties to be crystal clear on the challenges ahead.

To add a little context and the rationale behind a desire to redesign, these are then often coupled to a wider project of re-branding and are often associated with timescales and expensive deadlines that are often always agreed upon way in advance of any real knowledge of scale being made aware. Now herald this warning, for whatever reason, it’s not unlikely that redesign projects will find themselves behind schedule, or over budget, usually because a schedule has been guessed at, and agreed by non-technical sources way before the size and scope of the redesign is truly known. In this situation the perceived agile wisdom is that time and resources are fixed, so you know the scope will need to be changed or reduced. But what about an MVP (minimal viable product) for a redesign? When you’ve got an existing product, how much of it do you need to rework before you put the new design live?

Let's consider a few variables in this, and thinking purely in a Drupal orientated site: How dated is the existing site? Is it responsive? What’s the underlying platform version powering it? Are the contrib modules available for direct replacement and so on? These are all on top of the foreseen desired redesign scope; this assumes the proper UX and visual design streams have both spun up and are in advanced stages.

With interesting challenges, a lot of varying answers and paths present themselves; you could begin with sizing the impact of each request and weighting them in priority of importance to focus on the most important tasks first. For that to be an efficient delivery, you'll need to have a good feel for the current burn down, or in the instance of a nearly formed team, what is their potential burn down (this is where having experienced resources in place helps as they can usually provide a pretty accurate estimate range). That way you can be confident on what can be achieved in a given time frame.

Next up comes more brave decisions, once the fantasy deadline is a known distant dream. What features can be shelved or possibly dropped? What value are new features bringing that are worth keeping? In other words, a redesign can (and should) be an opportunity for a business to look at their content strategy and consider rationalising the site. If you’ve got a section on your site that isn’t adding any value, or isn’t getting any traffic, and the development team will need to spend time making it work in the new design, perhaps that’s a candidate for the chop?

Let's also consider prioritizing elements that will get us most of the way to ultimate completion. This fits back into the Agile development principles where each sprint should deliver a shippable, potentially production ready features. I would interpret this to mean that we should make sure that the site as a whole doesn’t look broken, and then we can layer on the bells and whistles afterwards, similar to the progressive enhancement approach when dealing with legacy browsers. If you're not sure whether you’ll have time to get everything done, don’t spend an excessive amount of time perfecting one section of the site to the detriment of basic layout and styling that will make the whole site look reasonably good.

Try starting with a simple set of styles that cover all common HTML elements, then put these into known common components (like forms / navigations etc.). This will become the foundation to build the rest of the site up on and will ensure all the components that build a page up are presentable and uniform. You can even test these components against their real contexts of use, user motivations and persona driven user flows driven out by the UX sessions. Also, by this point, the content audit should have highlighted the depth, variety and count of different layouts the site will be accommodating, so you should be able to validate whether these look good and are presentable enough to ship.

There is another option, though; it's slightly more radical but can work for the largest of redesign projects. Ask: do you have to deliver all the changes at once? Can you (and should you) do a partial go-live with a redesign? The answer depends on how significant the redesign is, attitudes to the changes, your development team's capability for continuous delivery and, of course, the client’s appetite. Also consider the technology stack involved; it may make sense to deliver changes incrementally. In other words, put new sections of the site live as they’re ready, and keep serving the remaining, older parts from the existing, soon-to-be-legacy system. This can introduce some inconsistency in look and feel and a little awkwardness in user flows through the site, but if there are obvious stand-alone sections of the site that certain users have been identified to stay within exclusively, chances are they won't even hit the new parts anyway.

To close… We shouldn’t fall into the trap of thinking of redesigns as big-bang events that sit outside the day-to-day running of a site. Along with software upgrades, redesigns should be considered as part of a business’ long-term strategy, and they should be just one part of a plan to keep making improvements through continuous delivery.

Jul 25 2016
Jul 25

Our existing users may have already noticed a few changes and improvements in Drop Guard. However, not everything is visible enough, so we decided to make a short list with the recent updates.

Composer support

Drop Guard is now capable of managing your composer.json and composer.lock files, in the same fashion as you would normally do via the CLI.

When executing an update task, Drop Guard modifies composer.json to reflect the recommended module or core version and runs the "composer update" command to keep composer.lock in sync. Both files get pushed to the repository, and the only thing you need to take care of is running "composer install" to receive the updated packages.
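
For reference, the manual equivalent of what Drop Guard automates looks roughly like this (a sketch only; the module name is a placeholder and your version constraints will differ):

# After editing the version constraint in composer.json, refresh the lock file
# for just that package.
composer update drupal/token --with-dependencies

# Push both files so every environment sees the same versions.
git add composer.json composer.lock
git commit -m "Update drupal/token to the recommended release"
git push

# On your local or deployment environment, install the locked versions.
composer install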

Both repositories - the official Drupal.org one and Drupal Packagist - are supported.

We are providing personal assistance for those having trouble with the setup or configuration. Just drop us a note and we will arrange a personal onboarding and setup session.

We encourage everyone who's in love with Composer to test this feature and give us your generous feedback. As always, we're looking for both positive and negative thoughts - don't be shy!

The Dashboard

Not long ago the only way to manage update tasks and statuses across multiple projects was to inspect each project connected to Drop Guard separately. Things have changed now - you have a sleek dashboard-like interface where tasks from all projects are collected. You can adjust the Dashboard to fit your needs by rearranging widgets or adding new ones. 

The Activities page and widget give you a bird's-eye view of everything going on under the hood - all update attempts, patch applications, and status changes - arranged in chronological order.

Latest release date

A small but long-awaited feature (we take responsibility for making you wait so long for it): you can now see the date of the recommended release next to the project version on all pages.

Project management systems support

We have started implementing support for project management systems, starting with Jira (Redmine will follow shortly, as well as other popular services). Here's a little background on how it works:

On the project edit screen, you can provide your Jira/Redmine credentials (we recommend creating a separate user with the correct permissions for this) and map Drop Guard task statuses to the corresponding statuses in your system.

Once that's done, Drop Guard will start establishing connections between its own tasks and the tasks in the project management system. All you need to do is create an Action reacting to the "New updates are available" event.

Next, when Drop Guard creates an update task, it will create the appropriate task in your system, listing the modules, their versions, and anything else you'd like included in such a task.

Both tasks will be kept in sync, so once all tests and checks for an update are performed and you close the project management task (or give it another status), the corresponding Drop Guard task will also be closed (or its status changed).

This allows you to manage Drop Guard tasks from the external system without ever visiting Drop Guard itself. How cool is that?

And as always, due to Drop Guard's extensible nature, you may choose to fire up the external task only if there's an error or the updates are ready to test - whatever fits your established workflow.

Do you use another system? Send us a note about your favourite one and it may well be added soon. We can't wait to hear from you!

A lot of speed and stability improvements have been made as well, so don't forget to check your existing account (or create a new one) to see for yourself.

We have more exciting news to come. Keep your ears open!

Jul 24 2016
Jul 24

Contributing to Drupal

Drupal has a fantastic community of contributors. People help with everything, from coding tasks to checking documentation and triaging issue queues on drupal.org. With a community as large and diverse as Drupal's, it should come as little surprise that contributors range from full-time, sponsored maintainers to casual hobbyists who choose to dedicate a few hours a week to improving something they use and care about.

Getting started

For people looking to get started with contributing, it can be a bit difficult to work out where to begin. A good place to start is drupal.org's own overview on contributions. This page links through to a really helpful contributor task page which explains the many ways that technical and non-technical people alike can get started.

A common misconception is that you need to be an experienced software developer to make improvements to Drupal. Whilst there is often no shortage of technically demanding issues, there are a large number of simpler, smaller tasks which are just as important.

A very quick way of accessing this information is to (ironically, perhaps?) use the advanced search on the core Drupal issue queue. All of the tasks listed here have been tagged with ‘novice’ and should provide easy pickings for most people. For example, to find Drupal core's issue queue you would go to https://drupal.org/project/drupal, then find the 'all issues' link in the sidebar and follow the 'advanced search' link near the page title.


In my opinion, the best way to get started with contributing to Drupal is to attend a sprint day or local developer event, and find a friendly mentor who can offer you some support to avoid the dreaded feeling of getting stuck, bored and disheartened. 

If you're a developer with a reasonable understanding of PHP frameworks, have a go at writing and submitting a patch. There are usually front end tasks and ticket triaging; almost anyone can check whether steps to reproduce a bug still work. Even if your first ever contribution is to read an issue's summary and leave a comment saying “it's actually far more complex than first estimated” - this is a very useful first step. Every little contribution makes a difference.

Contributions by Torchbox

Our team's contributions are summarised on our drupal.org organisation page. A lot of our developers' efforts go towards writing patches for existing/new bugs on contributed modules. We also think it's really important that the projects we're most proud of are written up as drupal.org case studies. Not only does this promote our best work, but it also raises the profile of Drupal as a product that can deliver success for a huge variety of organisations and individuals across the globe. Many of our developers are also active in the #drupal-uk IRC channel: a public chat room inhabited by friendly Drupal community members in the UK.

Jul 24 2016
Jul 24

Mike interviews Gregg Marshall, Enzo Garcia, and Daniel Schiavone live from Drupal GovCon 2016! Gregg discusses his new book, Enzo talks about his upcoming community keynote and the upcoming DrupalCamp Costa Rica, and Daniel previews Baltimore DrupalCamp and discusses preparations for Baltimore DrupalCon 2017.

DrupalEasy News

  • The Fall 2016 session of Drupal Career Online begins September 26; applications are now open.
  • Online and in-person workshops; introductions to both module and theme development for Drupal 8. See the complete schedule.

Sponsors

Follow us on Twitter

Intro Music

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Jul 23 2016
Jul 23

Ever since Andrew joined Annertech, he's been a champion of accessible web design and has ensured that accessibility has remained a key focus area in everything we do. That combined with his dedication to open source and contributing back to the community, meant that we were not surprised when he was asked if he'd be interested in becoming a Drupal core accessibility maintainer.

Andrew is truly passionate about accessibility and has increased the knowledge and awareness of issues encountered by people with disabilities for all members of our team. We cannot think of a better candidate for a new Drupal core accessibility maintainer.

His response when asked to be a Drupal Core maintainer?

I was really stoked when Mike asked if I'd consider becoming a core maintainer. I have barely stopped bouncing around my home.

Congratulations from everyone at Annertech, Andrew!

Jul 22 2016
Jul 22

In our weekly roundup of higher education notes and trends, you can usually count on three themes being discussed by the academic community: student demographics, budget constraints, and technology. In this post, we'll expand more on these themes by sharing some of our own insights, and we'll cover a few unique and emerging technology trends across higher education and technology.

Virtual Reality on the Horizon in Higher Education

As a web agency specializing in building high-end websites for colleges and universities, anything technology related that has the potential to impact the sector is sure to get our attention. As a VP at our agency, imagine my enthusiasm when I read Inside Higher Ed’s article on virtual reality in the classroom!

The technology is still in its infancy. As such, it’s oftentimes expensive to produce and procure, so it will likely be years before we see it make any kind of tangible impact in schools. That said, the potential it may have on learning outcomes in the future is significant. Imagine complementing a history lesson with a virtual reality tour, or studying rock formations in a geology class by seeing them in augmented reality. Expensive field trips? No need! Plug into virtual reality and tour the world right from your seat! Another hypothesized value will be the ability to create a truly global classroom where virtual classes can meet “face to face” and work together on problem-solving.

Technology has come a long way; I remember how excited the classroom would get when the teacher rolling in a bulky tube TV meant we’d get to watch a grainy, severely outdated educational video. Kids today - they have no idea how lucky they are.

Machine Learning -- Adapting Content To Complex Higher Education Websites

In a previous blog post, we reviewed an interesting new trend in higher education where learning management systems were starting to predict student outcomes by their usage patterns. That particular article noted the stats that showed that the more a student logged into the system during the first week or two of classes, the higher the probability they would succeed in the class. 

The concept of “machine learning” has been used prevalently in the commercial sector for years now, with trendsetters like Amazon and Facebook serving up product and advertising suggestions based on your purchase history, “likes”, and what websites you’ve visited. But the application for higher education is just as promising.

As a specialized web agency that does most of our work for higher education institutions, we’ve been introducing machine learning or “personalization” concepts to our clients for some time now (if you want to read more about personalization in higher education, check out this blog post we published last year). Higher education websites are what we call “content complex,” meaning they have a large number of distinctly different visitor types (what the web industry calls “personas”) frequenting them. Prospective students, parents of prospective students, enrolled students, parents of enrolled students, faculty, and alumni make up the best-case scenario; oftentimes our clients will have very different types of prospective students who require further segmentation (imagine international vs. local students). How does one landing page identify and speak to six unique types of visitors? Personalization technology, that’s how.

When used, web personalization technologies can log specific user criteria and attributes such as age, location, purchasing behaviors, social media and more. With user attributes logged and indexed, businesses can deploy unique, adaptive content (even web pages) that are custom tailored to individual users. A sophisticated personalization strategy will have a unique web experience for each type of persona where everything from the content, images, colors and messaging has been tailored for them. As a person engages with the website, the technology “learns” more about what they are looking for and can serve up relevant content to them (just like Amazon suggests products to you based on your search and purchase history). 

Of course, this level of personalization requires a complex and thorough content strategy that many institutions simply do not have (yet). We often recommend simpler ways of personalization such as explicitly asking the user when they arrive who they are as a starting point. This basic framework can be evolved as the content strategy of the website becomes more refined. ImageX believes that personalization will be the foundation of user experience on the web moving forward, much like responsiveness for a mobile experience is today. Getting started on this trend now will make future adaptations that much more efficient.

Native Mobile Apps

It seems like every organization has or wants to have a native app; in some cases for good reason, while others, not so much. If an organization has a customer base that needs to interact with a large and often complex data set or tasks on a frequent basis, a native app is likely a good idea. Mobile banking on an app is a great example; the complexity of a banking website and the volume of content make the mobile experience cumbersome for specific interactions, such as paying bills or transferring money (what people generally refer to as “doing their banking”). A banking app immerses the user in that specific set of tasks, with a light set of complementary content and features. The business case for higher education is, in our opinion, just as strong as it is for mobile banking.

The obvious use case for higher education is current students managing their courses. Class schedules, assignment submissions, reminders for upcoming deadlines or events, test score notifications, paying tuition; you get the point. It wasn’t that long ago that I was in university, and the student portal my school had wasn’t even mobile responsive; if you couldn’t get to a desktop computer, you were in trouble. Another potential use case we parents here at ImageX often discuss is a parent app for those of us with children in post-secondary. Imagine if we could stay up to speed on our children’s class schedules, assignments, and grades? Oh, the possibilities…

Like to stay on top of higher education notes and trends? Subscribe to our newsletter below!

Jul 22 2016
Jul 22

Of the many things that contribute to the success of a project, communication is the most important. While every project will differ in its requirements, team members, and plan, at the most basic level their goals should always be the same: to add value for the client. Open communication -- that is, the free exchange of ideas, collaboration, and ensuring clarity and direction -- is the lynchpin that holds a project together in pursuit of that goal.

At ImageX, we believe in using the right tool for the job. And while “tool” usually means the specific software our staff uses to execute tasks, it also extends to the individuals themselves and how we bridge together teammates and project details. Among the many benefits of being one of the top-ranked Drupal agencies in the world is that we attract some of the top-ranked talent in the world -- and just like we don’t confine ourselves to a specific geographic area when we’re choosing which clients to partner with, neither do we for the teams we build to serve them. That team is based in our office in Vancouver, but it also includes those of us who help expand the depth and breadth of our agency -- remote employees, or as we affectionately call them, our “remotees.” 

For those of us who do work remotely, myself included, the benefits are vast:

  • We’re liberated from our desks;
  • Our morning commute is usually from our breakfast table to our den or workspace;
  • Or for that matter, our office is wherever we make it -- a café, library, or even on our travels; and,
  • We have the flexibility to set our own schedules and be more available for life’s demands (as long as we’re available for meetings, of course -- more on that below).

And for ImageX, the world becomes our talent pool, and this allows us to hire the best people available for every position -- whether they’re in Vancouver, Toronto, Sweden, Ohio, Seattle, Taiwan, Florida, or Ukraine (ImageX has remotees in all of these locations).

Working remotely also has quantifiable benefits to a business’ bottom line:

  • Two-thirds of managers reported that employees are more productive when working remotely;
  • 54 percent of remote workers reported completing as much or more work in less time because of fewer distractions;
  • 82 percent of remote workers reported lower stress levels;
  • Attrition rates fall by as much as 50 percent;
  • 68 percent of younger workers said that the option to work remotely would “greatly increase” their interest in a specific employer; and,
  • Businesses can significantly lower their overall operating costs.

And not to mention the environmental impact of fewer people commuting. When the health insurance company Aetna measured the benefits of its remote working policies, it found that employees drove 65 million fewer miles, saved over two million gallons of gas, and reduced carbon dioxide emissions by over 23,000 tonnes per year.

A remotee (me on the laptop screen) joining our weekly #toughcoders push-up challenge by webcam, via Google Hangouts.

But working with distributed teams isn’t without its challenges. Communication problems can surface easily, whether because of logistics due to time zones or something simply being lost in translation online, and it’s easy to feel isolated at a home office and disconnected from your team. Like any project, overcoming these challenges and adding value to your team comes from having a strong plan in place to mitigate them. Building your team with the right mix of individuals, having a structured communication plan in place, and using the right tools for each job can help you realize the benefits of working with a distributed team.

Building Your Remote Team

Managing distributed teams introduces some additional considerations to make when you’re recruiting for new staff. Outside of the core competencies of each position, we’ve found emphasizing these four qualities to be a good predictor of success:

  • Is the candidate self-motivated? Working autonomously and independently requires a very high degree of self-motivation, rather than the constant encouragement and motivation that can be expected in a traditional office environment.
  • Does the candidate have strong communication skills? With limited face-to-face contact, above-average communication skills become even more important. Can the candidate communicate clearly and concisely, regardless of the medium, and accommodate for the subtlety and nuance that can often get lost?
  • Is the candidate results-driven? In the absence of more subjective evaluations, it’s important that your team members set clear objectives and that they’re measured against them.
  • Is the candidate open, honest, and transparent? This one is often the most important because you’re relying on your team to pro-actively raise any problems or concerns that could otherwise slip by unnoticed if people confine themselves to communication silos. The more forthcoming and straight-forward, the better.

Building teams of self-sufficient individuals who are empowered to work autonomously will encourage open communication and collaboration between members, rather than top-down (micro)management.

A SMART Communication Plan

With a distributed team, it’s essential that all members unite around a clearly defined and shared goal or purpose. A strong project and/or client manager can act as the advocate for this goal or purpose when any gaps occur and be the face of the team to the client. 

When defining the team’s goal or purpose, consider the SMART framework:

  • Specific
  • Measurable
  • Attainable
  • Relevant
  • Time-bound

Whether it’s included in a formal team or project charter, or more informally in how the project manager oversees the team day-to-day. 

Creating and fostering a results-driven culture is essential. Rather than tracking the team’s working hours (though we still track project hours for billing, efficiency, and accountability to the client), it’s more important that they’re able to produce results that drive the team towards their goals on a sustained basis. And it’s incumbent upon the project manager to ensure continued clarity on what those goals are.

Bringing your team together for regular meetings, either in person or digitally, is the best way to make certain of this. Daily stand-ups for the project team, where each member shares what they worked on yesterday, what their goals are today, and whether anything is blocking their progress, shouldn’t take more than 10-15 minutes each morning, but they will save far more time than that in regained focus.

Weekly team-wide stand-ups allow department leads and upper management to share higher-level progress and help bring any remote staff out of their project silos and into the “office”. We don’t use these meetings to discuss the specifics of any projects -- rather, they bring the team together so that we can hear each other’s voices and see each other’s faces (even if they’re just on a screen), and it reinforces the bigger picture that each individual project is working towards.

The Right Tool for the Job

Once you have the right team members in place and a plan to facilitate communication, you need the tools in place to keep them connected. Using the right software can make communication seamless and effortless, and gives teams the advantage of having every project discussion documented, archived, and searchable -- far from the risk of impromptu drive-bys in the office. 

Every project needs a central repository that captures the tasks, responsibilities, and dependencies involved. While physical backlogs with Post-It Notes are great for the office, they don’t help distributed teams. Trello is an easy-to-use Kanban-style board that lets you drag-and-drop cards between lists to show progress in real-time. Or for a more comprehensive and collaborative solution, we like Basecamp and Jira.

For conversations between team members and clients, in-person meetings allow the participants to communicate verbally as well as non-verbally. There is no substitute for this, but web-based tools like Google Hangouts, Skype, and Slack are the next best thing. Their video functions help approximate in-person meetings and allow the participants to see each other’s faces to better detect nuance. And as a bonus, they offer added functionality such as screen sharing that further simulates a meeting-room setting.

Skype and Slack are particularly helpful for team members in different time zones who may be limited in the meetings they can attend. Because they archive the transcripts of any typed conversation, it’s easy for anyone to catch themselves up at the beginning of their day without the risk of anything being missed in a game of broken telephone. It also allows for easy searching anytime someone needs something confirmed.

Finally, document repositories and collaboration tools like Google Drive and Dropbox can centralize any templates, documentation, design artifacts, and project assets while providing versioning control as well as the ability for multiple team members to collaborate on the same file at the same time.

Final Thoughts

  • It can be difficult for a team to build a positive culture when its members are distributed -- it’s not as simple as grabbing a coffee or going out for lunch. But a strong team culture extends beyond being social. It’s also about “seeing a vision, aligning to a mission, creating a sense of community and belonging and having loyalty to a project that gets people excited about work.”
  • Get to know each other personally. Catch up before and after calls, take breaks and make time to chat, and build relationships that help bridge timezones and cultural divides;
  • Take advantage of your communication tools and create spaces for team members to share off-topic, interesting, or funny content. We have Slack channels for #office, #kudos, #random and even #nhl; and,
  • Iterate. Like any project, test an idea and adapt based on what works and what doesn’t. Every team will have its own dynamic, and it’s essential that any plan adjusts to accommodate it. You won’t get everything right at first, but you can continually improve over time.

Does your organization have distributed teams? If so, what benefits have you realized, what challenges have you encountered, and what have you learned from the process? Get in touch below and let’s talk.

Jul 22 2016
Jul 22

Today marked the final day of this year's Drupal GovCon. It's been three days of insightful talks, swapping knowledge, and catching up with industry peers.

One of this week's most hands-on talks was this morning's overview of the structural differences between custom modules in Drupal 7 and Drupal 8. Unlike Drupal 7, Drupal 8 utilizes Symfony, autoloading, and Composer. Additionally, the use of YAML files for .info configuration takes some getting used to. While the minimum structure of a Drupal 8 module is at first glance more complex than in Drupal 7, it minimizes effort as the module grows in complexity, utilizing Drupal 8's object-oriented structure to its advantage.
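
As a rough illustration of how small that minimum structure can be (the module name and path below are hypothetical), a Drupal 8 module starts with little more than a directory and an .info.yml file:

# Create the module directory.
mkdir -p modules/custom/example

# A single .info.yml file is enough for Drupal 8 to recognise the module.
cat > modules/custom/example/example.info.yml <<'EOF'
name: Example
type: module
description: 'A minimal example module.'
core: 8.x
EOF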

Another fantastic talk today took an in-depth look at implementing living style guides within Drupal. With the ever-changing nature of the web, a living style guide pulls in real code from a website to gather all of the site's components and styles in one place. This is a valuable tool not only for designers and developers, but also for content editors to see their options. In this talk, Sarah Thrasher showed how her team implemented the popular style guide library KSS not only to pull in the site's CSS, but also to leverage the same Twig templates that Drupal used for the theme to minimize duplication of code.

Last but certainly not least, I attended a talk on using usability.gov, a resource provided by the Department of Health and Human Services (HHS) to promote better usability across both government and private-sector sites. This site provides a number of valuable tips and templates not only on development, but also on everything from design to content strategy to project management.

All in all, this has been a fantastic event. I look forward to implementing the new knowledge and ideas this week has provided, and to GovCon 2017! If you missed them, check out the recaps of day 1 and day 2.

Jul 22 2016
Jul 22

This is the second post in a series about coding standards. In our first post, we talked about code standards and why they are so important. In this post, we’ll talk about how to implement Drupal coding standards in your projects.

Other posts in this series:

  1. Code Standards: What Are They?
  2. Code Standards: How Do We Implement Them?
  3. Code Standards: Formatting
  4. Code Standards: Documentation
  5. Code Standards: The t() function
  6. Code Standards: Object Oriented Coding & Drupal 8

Read the coding standards and keep them handy.

It’s a good idea to read over the Drupal coding standards so you have an idea of what’s expected. Even if you’re familiar with them, we can always use a refresher. They’re also a living document, so there’s a good chance something may have been changed or added since the last time you gave them a go-over. Use this post as a reason to read them again! Make sure you have them bookmarked for reference, as well. https://www.drupal.org/coding-standards

Set up your editor for success

The easiest way to keep your code clean and up to par is by having your editor do the work! There are a lot of editors out there, and even the ones that don’t have many bells and whistles can be set up to help you keep standards in mind when you’re coding.

Sublime Text

This post from Chris is a couple years old, and geared towards front-end developers, but has lots of great Sublime Text setup tips and plugins for every developer.

There’s some great info on drupal.org as well: https://www.drupal.org/node/1346890. Here you can find the basic configuration for adhering to Drupal coding standards, a script to set it up on OSX and Linux, and great plugins to help with development. Now you don’t need to worry about line length, spaces, tabs, line endings, and more. It’ll all be handled for you!

PhpStorm

If you’re using PhpStorm, their website has extensive instructions for getting set up with Drupal configuration here.

If you’re using another editor, you can see if it’s listed here: https://www.drupal.org/node/147789

If not, I’d suggest googling it, and if you don’t find instructions, create them and add them to the list!

Review your own code - Use coder

The easiest way to make sure you’re conforming to coding standards is to use a program like PHP CodeSniffer. You can install Coder, a Drupal module that allows you to check your code from the command line using custom Drupal rules and PHP CodeSniffer. Here’s an example of what you might see:

Example Coder output

Let’s walk through this screenshot.

  1. I’ve navigated to a module directory - here, I’m checking the countries module.
  2. The alias I have set up for codesniffer, using custom Drupal rules, is drupalcs.
  3. I want to test the file at tests/countries.test.
  4. Sometimes this command can take a little while. If it seems like it’s hanging, especially if you’ve checked a directory, it may be too much, so try a single file at a time.
  5. The first thing you’ll see is which file you checked, and the full path. Here, it’s /Applications/MAMP/htdocs/countries/tests/countries.test
  6. Next, you’ll see how many errors and warnings, and how many lines they affect - there can be multiple errors per line, and coder will catch them all.
  7. Next, each error or warning will be listed line by line.

I find it’s easiest to go in order, because sometimes one error causes others - coder can only understand so much, so if you have, for example, an array that has one line indented improperly, it may also think the subsequent lines are indented improperly, even if they’re correct.

Christopher did a great post on PHP CodeSniffer last year; check it out here.
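
If you want to set up something similar yourself, the steps look roughly like this (a sketch; the global Composer path and the drupalcs alias are assumptions that may differ on your machine):

# Install Coder globally; it bundles the Drupal ruleset for PHP_CodeSniffer.
composer global require drupal/coder

# Register the Drupal standard with PHP_CodeSniffer.
phpcs --config-set installed_paths ~/.composer/vendor/drupal/coder/coder_sniffer

# Check a single file against the Drupal standard (the path is a placeholder).
phpcs --standard=Drupal tests/countries.test

# Optionally, define an alias like the drupalcs one used above.
alias drupalcs="phpcs --standard=Drupal --extensions=php,module,inc,install,test,profile,theme"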

Generally, you want to run coder every time you make a change, and before you commit your code or submit a patch. This way, you’re always writing clean code, and anyone reviewing your code is reviewing it for content, and they don’t have to worry about style. Of course, everyone is human and we all make mistakes. Sometimes you’ll push up a tiny change without running coder, and not realize there was a style issue. That’s why team code reviews are so important!

Team code reviews - make the time

The most successful teams build in time to review one another’s code. There’s no substitute for code reviews by another person, so make sure you view them as an essential part of your process - the same goes for reviews on drupal.org. When planning time and resources for a project, make sure that time is set aside for code reviews. When you’re working on contrib projects, take a look at issues marked “Needs review” and test them. If you want a way to dive into a project, or into Drupal and contrib work in general, reviewing patches is a great way to get acclimated. You get exposed to other people’s code, and if you find something that needs to be corrected, it will stick with you and you’ll remember it.

Two things to remember when reviewing other people’s code, or when receiving reviews of your own:

  1. Treat others as you would like to be treated. Be kind, courteous, respectful, and constructive. Be aware of your tone. It’s easy to come off more harshly than you intended, especially when working quickly. Take just a second to re-read your comments, especially if you’re communicating with someone you’re not acquainted with.
  2. Take everything in stride, and don’t take it personally. Those reviewing your code want it to be good, and corrections aren’t a personal attack. This can be especially hard when you start out, and even after years, you can still get a comment that comes off in a way that hurts your feelings. Don’t dwell on it! Thank them, make the corrections, submit them, and chances are, they’ll thank you, too.

Now you know what code standards are, why they’re important, and how you can get started implementing them in your code. Set up your editor, install coder, and get ready for our next code standards post on formatting! We’ll talk about the nitty gritty of how you should format your Drupal code.

[1] Hero photo attribution: charlene mcbride

Jul 22 2016
Jul 22

The more I work with Drupal 8, the more I realize how much has changed for developers in the Drupal community. While the transition to a modern, object-oriented system is what's best for the longevity of the platform, it certainly doesn't come without challenges. As someone who doesn't come from an OOP background, I've found the transition difficult at times. In many cases, I know exactly what I want to do, just not how to do it the "Drupal 8 way". On top of this, tutorials and blog posts on D8 are all over the map in terms of accuracy. Many posts written during D8's development cycle are no longer applicable because of API changes, etc.

Below is a list of snippets that might be helpful to site builders or developers who are more familiar with D7 hooks and procedural code. It might also be useful to OOP folks who are new to Drupal in general. My goal is to add to and update these snippets over time.

Routes & Links

Determine the Current Drupal Route

Need to know what the current Drupal route is or need to run some logic against the current route? You can get the current route like so:

$route = \Drupal::routeMatch()->getRouteName();

To some, the \Drupal::routeMatch() syntax might look foreign (it did to me). Here's a rundown of what's happening here:

First, \Drupal. This is calling the global Drupal class, which, in Drupal 8, is a bridge between procedural and OO methods of writing Drupal code. The following comes from the documentation:

This class acts as a unified global accessor to arbitrary services within the system in order to ease the transition from procedural code to injected OO code.

Right. Moving on to ::routeMatch(). Here we're using the routeMatch() method which "Retrieves the currently active route match object." Simple enough. But what is "::" all about? This StackOverflow answer helped me to understand what that's all about.

From there, the getRouteName() method returns the current route name as a string. Here are some example routes: entity.node.canonical, view.frontpage and node.type_add.

Is this the Front Page Route?

Need to check if the current route is the front page route? There's a service and method for that:

// Is the current route/path the front page?
if ($is_front = \Drupal::service('path.matcher')->isFrontPage()) {}

Here we're calling the path.matcher service (defined in /core/core.services.yml) and using the isFrontPage() method. For more on services, check out the "Services and Dependency Injection Container" documentation on api.drupal.org which helped me understand how all of these bits work together and the why of their structure.

Get the Requested Path

Need to know what the current page's requested path was, as opposed to the route? You can do this:

$current_uri = \Drupal::request()->getRequestUri();

Redirect to a Specific Route

Need to redirect to a specific page? In Drupal 7, you would likely handle this with drupal_goto() in your page callback function. In Drupal 8, you can use RedirectResponse() for that. Here is the relevant changelog.

Here are some examples, borrowed heavily from said changelog. First, in procedural PHP:

use Symfony\Component\HttpFoundation\RedirectResponse;

function my_redirect() {
  return new RedirectResponse(\Drupal::url('user.page'));
}

Here is how you would use a Drupal 8 controller to accomplish the same thing:

use Drupal\Core\Controller\ControllerBase;

class MyControllerClass extends ControllerBase {

  public function foo() {
    //...
    return $this->redirect('user.page');
  }
}

Links on the Fly

Drupal 7 and prior relied heavily on the l() function. (In fact, I would wager this was my most-used function over the years.) In Drupal 8, if you need to create links on the fly, utilize the Link class:

$link = \Drupal\Core\Link::fromTextAndUrl($text, $url);

Working with Entities

Query Database for Entities

If you need to query the database for some nodes (or any other entity) you should use the entityQuery service. The syntax should be pretty familiar to most D7 developers who have used EntityFieldQuery:

// Query for some entities with the entity query service.
$query = \Drupal::entityQuery('node')
  ->condition('status', 1)
  ->condition('type', 'article')
  ->range(0, 10)
  ->sort('created', 'DESC');

$nids = $query->execute();

Loading Entities

If you need to load the actual entities, you can do so a number of ways:

While the following will technically work in Drupal 8:

$node = entity_load_multiple('node', $nids);

This method has been deprecated in Drupal 8 and will be removed before Drupal 9, in favor of methods overriding Entity::loadMultiple(). To future-proof your code, you would do something like the following:

$nodes = \Drupal::entityTypeManager()->getStorage('node')->loadMultiple($nids);

Here's how you would do similar for a single node:

$node = \Drupal::entityTypeManager()->getStorage('node')->load($nid);

Here are a few other entity snippets that might be useful:

// Link to an entity using the entity's link method.
$author_link = $user->toLink();

// Do the same thing, but customize the link text.
$author_link = $user->toLink('Some Custom Text');

// Given a node object, here's how to determine its type:
$type = $node->getType();

// To get the full user entity of the node's author:
$author = $node->getOwner();

// To get the raw ID of the author of a node:
$author_id = $node->getOwnerId();

Image Styles

Need to whip up an image using a particular image style on the fly? This will work for that:

// Create an instance of an image using a specific image style, given a path to a file.
$style = \Drupal\image\Entity\ImageStyle::load('yourStyle_image');
$img_path = $user->field_profile_some_image->entity->getFileUri();
$img_style_url = $style->buildUrl($img_path);

That's it for now. I intend to keep this post updated as we learn more and more about the new world of Drupal 8. If you have a snippet worth sharing, drop us a line via Twitter and we’ll add it to this post (with credit of course).

Jul 22 2016
Jul 22

Like it or not, sometimes you have to output HTML in javascript.

Recently, I ran across a line of code something like this while reviewing a pull-request for a client:

var inputMarkup = '<span><label data-val="' + inputText + '" ' +
  'for="checkbox-' + index + '" data-tid="' + tid + '">' +
  inputText + '</label><input type="checkbox" id="checkbox-' + index + '" ' +
  'data-tid="' + tid + '" data-val="' + inputText + '" ' +
  '/></span>';

Aside from the fact that this code was hard to read (and therefore would be more difficult to maintain), the same code was used with no significant modification in three separate locations in the pull-request.

In PHP, most developers familiar with Drupal would immediately reach for one of the well-known parts of Drupal's theme system: render arrays, theme(), or a *.tpl.php file. In javascript, however, I seldom see much use of Drupal 7's extensive javascript API (also made available in a nicely browseable--though not quite up-to-date--form by nod_).

In this case, the relatively difficult-to-read code, combined with the fact that it was repeated several times across more than one file, was a clear sign that it should be placed into a theme function.

The Drupal.theme() function in the javascript API works much like theme() in PHP. When using theming functions in PHP, we never call them directly, instead using the theme() function.

In javascript, it's similar; when output is required from a given theme function, we call Drupal.theme() with the name of the theme function required, and any variable(s) it requires.

For example, drupal.org shows the following usage:

Drupal.theme('myThemeFunction', 50, 100, 500);

The example uses Drupal.theme() to call the theme function, myThemeFunction(), and pass it the arguments it requires (50, 100, and 500 in this instance). A theme function can accept whatever number of arguments is necessary, but if your theme function requires more than one parameter, it's good practice to define the function to take a single javascript object containing the parameters required by the function.

So in the case of my code-review, I suggested we use a theme function like this:

/**
 * Provides a checkbox and label wrapped in a span.
 *
 * @param {object} settings
 *   Configuration object for function.
 * @param {int} settings.index
 *   A numeric index, used for creating an `id` attribute and corresponding
 *   `for` attribute.
 * @param {string} settings.inputText
 *   The text to display as the label text and in various attributes.
 * @param {int} settings.tid
 *   A Drupal term id.
 *
 * @return {string}
 *   A string of HTML with a checkbox and label enclosed by a span.
 */
Drupal.theme.checkboxMarkup = function(settings) {
  "use strict";

  var checkboxId = 'checkbox-' + settings.index;
  var inputText = Drupal.checkPlain(settings.inputText);
  var checkboxMarkup = '';

  // Assemble the markup--string manipulation is fast, but if this needs
  // to become more complex, we can switch to creating dom elements.
  checkboxMarkup += '<span>';
  checkboxMarkup += '<label data-val="' + inputText + '" for="' + checkboxId + '" data-tid="' + settings.tid + '">';
  checkboxMarkup += inputText;
  checkboxMarkup += '</label>';
  checkboxMarkup += '<input type="checkbox" value="' + inputText + '" id="' + checkboxId + '" data-tid="' + settings.tid + '" data-val="' + inputText + '">';
  checkboxMarkup += '</span>';

  return checkboxMarkup;
};

This allowed the calling code to be much simpler:

// Creates themed checkbox.
checkboxMarkup = Drupal.theme('checkboxMarkup', {
  index: i,
  inputText: $('.inputText').val(),
  tid: $('.tid').val()
});

$container.append(checkboxMarkup);

The HTML generation is now also more loosely coupled, and more portable, meaning that we can easily use Drupal.theme.checkboxMarkup() elsewhere in this project--or in any other Drupal project.

Jul 22 2016
Jul 22

This webinar has passed. Keep an eye on our blog for future webinars.

You know how to get things done with git: pull, add, commit, push; but have you mastered it like a Jedi masters the Force? Nothing is a more lasting record of our work than our git commits. In a galaxy where companies ask you for your GitHub account in lieu of, or in addition to, a resume, we have one more reason to make sure that our commit history is as readable as our code itself.

In this one hour session, we will cover:

  • Rewriting commits
  • Reordering commits
  • Combining commits
  • The perfect commit message
  • Finding bugs using git
  • Avoiding common pitfalls

Join us for this session and you will leave a Jedi-level git master!
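
If you'd like a head start, here's a minimal sketch of the kinds of commands the session digs into (the commit counts, tag name, and branch state are hypothetical):

# Rewrite, reorder, or combine (squash) the last three commits interactively.
git rebase -i HEAD~3

# Reword the most recent commit message.
git commit --amend

# Track down the commit that introduced a bug with a binary search.
git bisect start
git bisect bad                 # the current commit is broken
git bisect good v1.0           # the last release known to be good
# ...test each commit git checks out, marking it good or bad...
git bisect reset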

These Are Not the Commits You're Looking For

Jul 22 2016
Jul 22

Below is a site launch checklist, with details on individual areas of interest to follow in the appendix. While some are Drupal specific, the majority would apply to most any site.

Launch Checklist

  • Is the web server instance size large enough?
  • Is there a load balancer in front of your web head(s)?
  • Is Jenkins configured to automatically deploy your code, run cron, etc?
  • Is Redis configured and enabled?
  • Is a CDN configured?
  • Is the CDN serving HITs?
  • Is Varnish serving HITs?
  • Is New Relic configured?
  • Is the VirtualHost configured to redirect from www to the base url (or vice-versa)?
  • Is HTTPS enabled?
  • Is Apache configured for HTTP/2?
  • Is Google Analytics (or your analytics tool of choice) configured?
  • Is robots.txt configured for production (ie. did you remove any changes that were made for development)?
  • Is Drupal's internal page cache enabled?
  • Is the Security Review module installed and providing a clean report?
  • Do Drupal's settings.php & (if Drupal 8) services.yml files have the correct read-only permissions?
  • Are all of the checks on Drupal's status report page reporting green?
  • Are all development related modules disabled?
  • Are errors configured to be suppressed?

Appendix

Infrastructure

Though we use a number of different hosting providers in practice, our standard is Linode. Specific hardware recommendations follow:

Web Server

Use at least a 4GB cloud instance. If you or the client are price sensitive and are considering a smaller instance size to save money, I would argue that the billable time spent troubleshooting an underperforming server is easily much more expensive than paying for more power.

Load Balancer

A load balancer is essential when configuring a site with multiple web servers, but using a load balancer is preferable even in situations with only one web server. Having DNS point to a load balancer, instead of to the web server directly, will give you instantaneous control over where your traffic is routed. For example, if you need to replace your web server hardware, you can redirect traffic instantaneously as opposed to waiting for DNS to propagate. Additionally, a load balancer can add simplicity when configuring a site that uses HTTPS, as you can configure the appropriate certificates at the load balancer level as opposed to on all of the relevant web servers.

Automation

At a minimum, the following jobs should be configured in Jenkins:

  • Automated deployments triggered from Github.
  • Cron to be run at least once every 24 hours.
  • A Drush cache clear job that can be run on-demand from the Jenkins UI.
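
As a rough sketch, the cron and cache clear jobs typically boil down to one-line Drush commands like these (the @prod site alias is a placeholder):

# Run cron against the production site.
drush @prod cron

# Clear all caches on demand (Drupal 8 syntax; Drupal 7 would use "drush cc all").
drush @prod cache-rebuild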

Performance

New Relic

Chromatic configures all web servers meant for production with New Relic. If the client does not already have a New Relic account, create one and obtain the license key. When configuring production boxes using Ansible, utilize the New Relic role in the playbook and provide the correct API key.

Redis

Redis should be installed and configured for all production Drupal sites. Using Redis as a cache backend takes load off the database and improves overall performance.
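
A minimal sketch of that setup on a Debian/Ubuntu web server (package names vary by distribution, and the contrib redis module is an assumption about your stack):

# Install the Redis server and the PHP extension.
sudo apt-get install redis-server php-redis

# Add and enable the Drupal redis module.
composer require drupal/redis
drush en redis -y

You would then point Drupal's cache backends at Redis in settings.php, following the module's documentation.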

CDN

Putting a CDN in front of your site provides many performance and security benefits. With the many low-cost and free options available, there is rarely a reason not to put a CDN in front of every production site. We have had great success using CloudFlare.

Note: CloudFlare requires you to change your name servers and use them for DNS configuration. These changes should be made at least 24 hours in advance of launch.

If this is a Drupal 7 site, be sure to add the following line to your production settings.php file:

/**
 * Remove "cookie" from Vary header to allow HTML caching.
 */
$conf['omit_vary_cookie'] = TRUE;

Varnish

Many high-traffic sites will benefit from an extra layer of caching between the web server and the CDN. In these instances, one or more Varnish reverse proxy servers are recommended.

HTTPS

SSL can be configured easily with Let's Encrypt. These certificates need to be renewed every 90 days, but the renewal process can be automated.
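
A sketch of automating that renewal with the certbot client and cron (the client, schedule, and web server reload are assumptions about your setup):

# Example crontab entry: attempt renewal twice daily; certbot only renews
# certificates that are close to expiry.
0 3,15 * * * certbot renew --quiet && service apache2 reload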

HTTP/2

If you have configured HTTPS, you should go one step further and enable HTTP/2 to reap its additional performance benefits. While HTTP/2 does not technically require encryption, no browser currently supports it over HTTP, so for all intents and purposes HTTP/2 requires HTTPS.

Enabling HTTP/2 is a straight-forward process:

  • Enable the Apache HTTP/2 mod:
sudo a2enmod http2

  • Add the following line to the SSL vhost in question:
Protocols h2 http/1.1

  • Restart Apache
sudo service apache2 restart

This content has been generated from the Chromatic Site Launch Guide repository. Fork it on Github!

Jul 22 2016
Jul 22

DrupalCon New Orleans is nearly here and Chromatic will be attending in full force! Here's the rundown of what you need to know:

Learn About Render Arrays from Gus Childs

Gus will be presenting his session on Drupal 8 render arrays on Tuesday at 1pm in the Blackmesh Room (267-268): Aha! Understanding and Using Render Arrays in Drupal 8. If you've ever been confused by render arrays or just want to learn the best practices for how you're supposed to use them, be sure not to miss this session. Gus happens to be an awesome presenter to boot!

Schedule or Attend a BoF in the Chromatic Room

We're once again sponsoring a Birds of a Feather room. BoFs are a great way for folks to get together and discuss interesting topics in a more informal setting. There are already some great BoFs scheduled for the Chromatic room, including one on Drupal VM and Local Development from the well-known geerlingguy. We have a couple BoFs of our own too:

If you have a great idea for a BoF, schedule one for the Chromatic room!

Connect with Us at one of the Summits

In addition to attending the conference proper, we're once again sending our leadership to the Business Summit, as well as sending a couple of folks to the Media & Publishing Summit.

Grab Some Swag

Every year, DrupalCon attendees rave about how awesome and comfortable our t-shirts are. That's because we don't believe in making swag that we ourselves wouldn't love to wear. This year is no different. For NOLA, we've made a limited run of some special vintage baseball tees, printed on 3/4 sleeve American Apparel 50/50 cotton. These shirts are our best yet and we want to give you one for FREE!

See you in New Orleans!

Jul 22 2016
Jul 22

We're happy to announce two new releases for the YouTube Field module:

Improvements include:

Once again, it was a community effort. The module has now given credit attribution to 28 different people, and for a number of them it was their first attributed commit! Not to mention, countless others have contributed in the issue queue. Thanks to their help, the module has now reached over 30,000 installs. That's enough to land in the top 200!

Why the "beta" label on the 8.x release?

The 7.x-1.x module includes Colorbox support, but that support has not yet been ported to the 8.x-1.x branch. We'd love help with that! We're planning on removing the "beta" label once that support is committed. The rest of the module is a direct port of 7.x-1.x and it already reports a healthy number of installs.

How else can I help?

Hop in the issue queue and have a look at the outstanding issues for either branch. As previously mentioned, any and all contributions are greatly appreciated!

Jul 22 2016
Jul 22

Civil Comments is a platform that brings real-world social cues to comments sections via crowd-sourced moderation and powerful community management tools. Civil Comments is the first commenting platform specifically designed to improve the way people treat each other online.

Unlike others who have thrown up their hands and accepted that the comments sections of the Internet would either be dominated by bullies and trolls, or become a moderation burden for a site's editors, the team at Civil is attempting to solve the problem with community moderation. It is an exciting new take on a widespread problem, and Chromatic is thrilled to bring Civil Comments integration to Drupal with a new contrib module.

It should be noted (and is on the project page!) that there is not currently a free version of Civil Comments. For the time being, it is only available with a subscription as Civil continues work on the platform, but from what I understand a free version is on the horizon.

A special thanks to Christopher Torgalson and Alanna Burke, whose contributions helped get this project off the ground!

Jul 22 2016
Jul 22

Aren't you a cutie?

Here at Chromatic HQ, the team is encouraged to give back to the open-source community. (And on company time!) One way to do this is by reviewing and contributing Drupal patches. For me, this can be both rewarding and frustrating. When things go well, I feel good about contributing and I might even get a commit credit! But there are times when patches don't apply, I have no clue what's wrong and I need to start fresh. First, I curse mightily at the time wasted, then I create a new db, and then re-install a fresh copy of Drupal, and then configure it etc. etc. Using drush site-install makes this process relatively easy, but what if it could be easier? (Hint: It is!)

Hooray for promiscuity!

I recently had a fling with Drush's core-quick-drupal command. I had known about it for years, but I hadn't realized what it could really do for me. This has now changed, and together we're having an open affair!

For the uninitiated, drush core-quick-drupal takes advantage of PHP's built-in web server (PHP >= 5.4) and uses a sqlite database to get a fresh, stand-alone copy of Drupal up and running, all in about a minute. It has two aliases: drush qd and, my personal preference, drush cutie.

Out-of-the-box overview

  • In about a minute it installs a full instance of Drupal.
  • Runs a web server at http://127.0.0.1:8888 (no apache config).
  • Uses a self-contained sqlite file as the db (no mysql db to create and configure).

It's so much fun, you may want to follow along. From the command line, just cd to a folder of your choosing and run drush cutie --yes. (You'll need to have drush installed.)

Behind the scenes, a folder is created called quick-drupal with a timestamp appended to the end. (One of my older cutie folders is quick-drupal-20160214193640... a timestamp from a Valentine's evening with Drush that my wife won't soon forget!) Inside the new quick-drupal folder are subfolders with the latest D8 files and the sqlite db file. (There are lots of options to customize the Drupal version and environment, but the default nowadays is Drupal 8.)

Running it looks something like this

drush cutie --yes
Project drupal (8.0.3) downloaded to 
...
Installation complete.  User name: admin  User password: EawsYkGg4Y
Congratulations, you installed Drupal!
Listening on http://127.0.0.1:8888

(The output above has been edited to highlight the tastier bits!)

And with that I have the latest version of D8 running at http://127.0.0.1:8888. As you can see from the shell output above, the superuser is admin with a password of EawsYkGg4Y.

Okay, okay, very cool, but what can I do with it?

Here's a breakdown:

  1. Review patches with minimal fuss, thereby giving back to the Drupal community.
  2. Investigate new modules without sullying your main dev environment.
  3. Test that new Feature you created to see if it really works.
  4. NOT RECOMMENDED! When that friend asks you how long it will take to build him a website, respond with "about a minute" and fire it up.

You thought I was done?

Let's run through the steps to review a patch. This is where drush core-quick-drupal really shines because it's best to have a clean install of Drupal to work with; this minimizes the number of externalities that can interfere with testing. Having a single-command, throwaway copy of vanilla Drupal is the way to go.

You could call this a blog version of a live demo; I have chosen a patch out in the wild to review. I found this one for the core taxonomy module, which had a status of "Needs Review" on D.O.

The patch file itself is here: https://www.drupal.org/files/issues/taxonomy-term-twig-cs.patch

Here are the steps I took on the command line:

# Install a temporary copy of D8 into a folder I named "test2644718"
drush cutie test2644718 --yes

With the above command I got my environment running. The patch itself simply fixes the formatting in taxonomy-term.html.twig, which is a default template file for taxonomy terms, provided by the core taxonomy module.

I first tested to see the original template in action. Satisfied with the way it was working, I took steps to apply the patch.

# Move into the root folder of the new site
cd test2644718/drupal/
# Use wget to grab the patch from D.O.
wget https://www.drupal.org/files/issues/taxonomy-term-twig-cs.patch
# Apply the patch
patch -p1 < taxonomy-term-twig-cs.patch
patching file core/modules/taxonomy/templates/taxonomy-term.html.twig

The patch was applied successfully and a minor change in taxonomy-term.html.twig was made. I quickly tested to ensure nothing had blown up and was satisfied that the patch works as expected.

Back in D.O., I added my two cents and marked the issue as Reviewed & tested by the community. And that's that.
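Because the whole install is throwaway, cleaning up afterwards is just a matter of stopping the built-in server (Ctrl+C on the drush cutie process) and deleting the folder created above:

# Step back out of the temporary site and remove it entirely.
cd ../..
rm -rf test2644718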

Update

Though the patch originally sat awaiting review for 2 months, I'm happy to claim that my review got things moving again! After I posted RTBC, a flurry of activity took place with the scope increasing and new patches being created. I reviewed those too! A day later the patches were committed to 8.1.x. Nice.

Jul 22 2016
Jul 22

There are many different ways to handle offsite database backups for your Drupal sites, from host provider automations to contrib modules like Backup and Migrate and everywhere in between. This week, I was looking to automate this process on a Drupal 8 site. Since Backup and Migrate is being rewritten from the ground up for Drupal 8, I decided to whip up a custom shell script using Drush.

I knew I wanted my backups to not only be automated, but to be uploaded somewhere offsite. Since we already had access to an S3 account, I decided to use that as my offsite location. After doing a bit of Googling, I discovered s3cmd, a rather nifty command line tool for interacting with Amazon S3. From their README.md:

S3cmd (s3cmd) is a free command line tool and client for uploading, retrieving and managing data in Amazon S3 and other cloud storage service providers that use the S3 protocol, such as Google Cloud Storage or DreamHost DreamObjects. It is best suited for power users who are familiar with command line programs. It is also ideal for batch scripts and automated backup to S3, triggered from cron, etc.

It works like a charm and basically does all of the heavy lifting needed to interact with S3 files. After installing and setting it up on my Drupal 8 project's server, I was able to easily upload a file like so: s3cmd put someDatabase.sql.gz s3://myBucket/someDatabase.sql.gz.
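For reference, installing and configuring s3cmd itself is quick. This sketch assumes a Debian/Ubuntu server (s3cmd can also be installed with pip); the interactive configure step stores your S3 credentials:

# Install s3cmd and walk through the interactive credential setup.
sudo apt-get install -y s3cmd
s3cmd --configure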

With that bit sorted, it was really just a matter of tying it together with Drush's sql-dump command. Here's the script I ended up with:

  
    #!/bin/bash

    # Switch to the docroot.
    cd /var/www/yourProject/docroot/

    # Backup the database.
    drush sql-dump --gzip --result-file=/home/yourJenkinsUser/db-backups/yourProject-`date +%F-%T`.sql.gz

    # Switch to the backups directory.
    cd /home/yourJenkinsUser/db-backups/

    # Store the recently created db's filename as a variable.
    database=$(ls -t | head -n1)

    # Upload to Amazon S3, using s3cmd (https://github.com/s3tools/s3cmd).
    s3cmd put $database s3://yourBucketName/$database

    # Delete databases older than 10 days.
    find /home/yourJenkinsUser/db-backups/ -mtime +10 -type f -delete
  

With the script working, I created a simple Jenkins job to run it nightly (with Slack notifications, of course) and voilà: automated offsite database backups with Jenkins and Drush!
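For reference, the nightly trigger is just a cron-style expression in the Jenkins job's "Build periodically" field; the hour below is an arbitrary example:

# Run once a night; Jenkins' H token spreads the exact minute across jobs.
H 2 * * *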

Jul 22 2016
Jul 22

Dropcast: Episode 22 - The Jim Birch Society

Recorded July 6th 2016

This episode, our number one fan, Jim Birch, comes on to talk about his Drupal life and sadly learns how unprofessional his favorite podcast is. With that, we recognize how long it takes me to actually get an episode out of the can and into your ears. Bob highlights some blog posts, which derails into a discussion about the Google AMP service, which rolls perfectly into Ryan's Pro Project Pick. We discuss the latest Drupal news and of course Ryan brings it home with the Final Bell.


Episode 22 Audio Download Link

Updates:

Mediacurrent Blog Mentions:

Interview with Jim Birch (Mario):

Pro Project Pick:

Drupal News:

Jul 22 2016
Jul 22

Look for Talent, Not Experience

We got started with the idea to train inexperienced employees out of necessity, as well as from personal values. I had a few different careers before I got into web development. I was a pharmaceutical chemist, and I was also a math teacher. But I never worked at a place where anyone recognized my talent.

I had other jobs, too. In college, I was a telemarketer and did data entry. I was a bright person, but nobody seemed to notice. I always thought that it was a shame that talent goes to waste due to a lack of opportunity for young, unproven workers.

Back then, I wished someone would have given me an opportunity to do something at my level. Instead, I made it happen for myself, and co-founded this company in 2008. We didn't have any money, and back then there were even fewer qualified potential hires in our field. I'm super picky about the quality of the work my company does, and wasn’t willing to hire second rate workers.

We hired a few people and then the economy collapsed. Much of our work went away. We decided that we weren't going to cut anybody. By that point, we’d hired four or five people, and they didn't really have any experience. I was doing most of the billable work, plus teaching people. When the work dwindled, we decided to pay them from our credit cards. About half of those people are still with us, and they're doing great. I'm really glad that we kept them.

The main reason that we started hiring people who didn't really have much or any experience was that we couldn't afford anyone who did. We're not a virtual company; we've always had an office, so we're limited to our local area, and there really were very few good developers. Instead what we did was just bring candidates into the office, make sure everyone communicated, and for our part, we would bring a willingness to teach.

Believe that your people are intelligent, and that you can teach them something, and that they can learn. They might not learn as quickly as you would like sometimes, but if you believe that they will learn, they will.

*SOURCE: http://www.modernsurvey.com/fall2014

Download the full Grow Your Own white paper for free.

Jul 22 2016
Jul 22

An OSTraining member asked how to set up a Bitnami LAMP stack that supports PHP 5.4.

The Bitnami WAMP stacks are available in three versions here:

  • 5.5.37
  • 5.6.23
  • 7.0.8

Each of these provides an environment that is preconfigured with a different version of PHP.

5.5.37 comes with PHP 5.5, which you should also use for 5.4 setups. The reason 5.4 is no longer available is that the changes between 5.4 and 5.5 are minimal. You can read more about the changes in the official change log here.

To install the Wamp stack, follow our installation guide.

If you already have a Bitnami LAMP setup, you can install another stack alongside it. Just be sure to use a different directory location from your original installation. Each Bitnami stack is self-contained, so you can simply install it as many times as you like.

When installing your second stack, you will have to use different ports for Apache, SSL, and MySQL:

apache port

Once the installation is complete, you should see this screen:

installed

In this older version of the Bitnami stack, the environment doesn't include Drupal, so we will have to install Drupal now.

Download or copy your Drupal site to Bitnami\wampstack-5.5.37-1\apache2\htdocs\drupal

Navigate to http://127.0.0.1:8080/drupal/ 

If you are doing a fresh install, you will see this screen:

drupal install

You can access phpMyAdmin at http://localhost:8080/phpmyadmin/


About the author

Daniel is a web designer from the UK who's a friendly and helpful part of the support team here at OSTraining.


Jul 22 2016
Jul 22

Friday 5: 5 Structured Data Items that Work for Every Website

We hope the work week is treating you well and that you're gearing up for an even better weekend!

Thanks for joining us for Episode 13 of The Mediacurrent Friday 5. This week, Director of Digital Strategy Dawn Aly joins host Mark Casias to cover 5 Structured Data Items That Work for Every Website. She discusses 5 schema markup types that will work for your website AND a bonus of 5 kinds of content that you should definitely consider marking up.

Watch the video below to learn more about Organization Schema, Website Schema, BreadcrumbList Schema, SiteNavigationElement Schema, VideoObject Schema and more!

[embedded content]

Have a topic that you want to learn more about? Feel free to email us your suggestions to [email protected] and stay tuned for Episode 14 in two weeks. Have a great weekend!

Additional Resources
Friday 5: 5 Tips for Improving Your Site's SEO | Blog Post
Mediacurrent's Crash Course in SEO and Drupal | Webinar
SEO Checklist | eBook

Jul 21 2016
Jul 21

Day two of Drupal GovCon brought with it more tales of Drupal success, lessons learned, and exciting new ideas.

The morning began with a case study on NIH.gov's move to Drupal. As a large federal agency, the NIH team was unused to open source software, but needed a highly flexible system. With Drupal, they created a custom solution using content types, entities, and taxonomy, all while maintaining a strict separation of content from presentation to allow for multi-channel publishing. They were also able to set up specific user roles and workflows for the numerous government stakeholders who use the system every day, and meet all FedRAMP and FISMA requirements.

Next followed an in-depth discussion of how to improve accessibility in dynamic interface elements such as accordions, tabs, or slideshows. Section 508 and WCAG 2.0 provide extensive documentation on how to ensure that websites are accessible to the broadest audience, accounting for a wide range of limitations and abilities. While these standards are legally required for government sites, they are best practice for all sites across the web, both as the right thing to do and to help a site's SEO. In addition to common measures such as making sure that a site is keyboard navigable and all images have alt text, this talk specifically dug into the role (pun intended) ARIA attributes play when modifying content visibility using JavaScript.

In the afternoon, I went to talks on new features in Drupal 8 such as Twig and configuration export, as well as a detailed discussion of Paragraphs, a Drupal contrib module that we've used on several projects here at Third & Grove. The Paragraphs module allows the site architect to create fielded entities for custom chunks of content - for example, a photo gallery, an accordion, or anything else that can be built out of fields - and lets the user insert them in any order they want within a content node. This lets content editors create far more interesting and dynamic pages than they could in a plain WYSIWYG field, and future-proofs content by keeping it structured in the database rather than in a single field of HTML.

If you missed it, check out yesterday's Drupal GovCon Day 1 Recap, and stay tuned for tomorrow!
Jul 21 2016
jam
Jul 21
A conversation from DrupalCon Asia (DrupalCon Mumbai 2016) with members of Acquia's Pune, India office: Prassad Shirgaonkar, Prassad Gogate, Prafful Nagwani, and Jeffrey A. "jam" McGuire, in which we touch on Drupal and community in India, the history of the DrupalCon Prenote, Drupal's multilingual strengths, the Drupal Campus Ambassador Program in India, and more!
Jul 21 2016
Jul 21

Today I am very excited! A while ago I asked my friend David Ličen to help me improve the appearance and UX of my personal blog. He carefully observed my desires and added some of his own ideas. When we agreed on the initial mock, he proceeded with the theme implementation.

He finished his part a while ago. I needed to tweak a few other things on the back-end too, which took me way too long to do. Today I finally decided to finish this and deployed the changes to the live website.

How do you like it?

Jul 21 2016
Jul 21

Republished from buytaert.net

Boston.gov's new homepage

Yesterday, the City of Boston launched its new website, Boston.gov, on Drupal. Not only is Boston a city well-known around the world, it has also become my home over the past 9 years. That makes it extra exciting to see the city of Boston use Drupal.

As a company headquartered in Boston, I'm also extremely proud to have Acquia involved with Boston.gov. The site is hosted on Acquia Cloud, and Acquia led a lot of the architecture, development, and coordination. I remember pitching the project in the basement of Boston's City Hall, so seeing the site launched less than a year later is quite exciting.

The project was a big undertaking, as the old website was 10 years old and running on Tridion. The city's digital team, Acquia, IDEO, Genuine Interactive, and others all worked together to reimagine how a government can serve its citizens better digitally. It was an ambitious project as the whole website was redesigned from scratch in 11 months; from creating a new identity, to interviewing citizens, to building, testing and launching the new site.

Along the way, the project relied heavily on feedback from a wide variety of residents. The openness and transparency of the whole process was refreshing. Even today, the city made its roadmap public at http://roadmap.boston.gov and is actively encouraging citizens to submit suggestions. This open process is one of the many reasons why I think Drupal is such a good fit for Boston.gov.

Tell Us What You Think form

More than 20,000 web pages and one million words were rewritten in a more human tone to make the site easier to understand and navigate. For example, rather than organize information primarily by department (as is often the case with government websites), the new site is designed around how residents think about an issue, such as moving, starting a business or owning a car. Content is authored, maintained, and updated by more than 20 content authors across 120 city departments and initiatives.

Screenshot of Towed Cars page

The new Boston.gov is absolutely beautiful, welcoming and usable. And, like any great technology endeavor, it will never stop improving. The City of Boston has only just begun its journey with Boston.gov—I'm excited to see how it grows and evolves in the years to come. Go Boston!

Panel on stage at Boston.gov launch

Dries on stage at Boston.gov launch

Dries and Boston mayor, Marty Walsh

Last night, there was a launch party to celebrate the launch of Boston.gov. It was an honor to give some remarks about this project alongside Boston mayor, Marty Walsh (pictured above), as well as Lauren Lockwood (Chief Digital Officer of the City of Boston) and Jascha Franklin-Hodge (Chief Information Officer of the City of Boston).

Jul 21 2016
Jul 21

There are many talented designers with the ability to create a fabulous, responsive, web design worthy of the term “screen candy.” But looks aren’t everything and website design is not just art. When a website fails to engage the visitor, it’s often due to the designer’s failure to plan strategically.

A vital step in any website design project is to find out how the website will be used and understand the business behind it. What good is a blog targeted to senior citizens if the small fonts and low contrast make it a challenge to read? How successful is the ecommerce site when the visitor doesn’t complete their purchase because of a poorly structured checkout process? How can an educational website inform the user if the content library is unorganized and the user can’t find what they’re searching for? Before getting started on creating that beautiful website, here are some valuable questions to ask:

What’s the Goal?

In design school, this is the first question we learned to ask. Knowing the goal of the design is the key to unlocking the information needed in order to provide the best solution for your client: a website that attracts, informs, and engages the new and returning visitor. You need a clearly defined ‘Goal’ statement. The statement should be ‘actionable’ and ‘measurable’. This will guide you through the design process as you ask the question: “does this design accomplish the goal?”

An example of a website goal that provides online learning for children might be: “To become an authoritative learning resource for children ages K–8th grade, while providing fresh, quality content on the website.” We want to accomplish this by, “regularly adding new information through new classes and blog posts from professionals, establishing trust by highlighting case studies and targeted free offerings, and marketing the site through other websites and social media.”

Talk to your client. They know their business better than anyone and should have a clear idea of the main goal of their website. If they don’t know, work with them until you establish this essential element needed to create an interface that serves a meaningful function. Whether it be to inform the visitor, entertain them, provide a service, or sell a product, the main focus of the website design should work to achieve this primary organizational goal.

Who is the Audience?

It is said with good reason to “Know Thy User.” A well-known design (and business) best practice is defining the target audience. The audience of the website will not only influence the design elements, but also the voice of the editorial content and the way it’s organized. Answering this question is not always easy, and there is not always just one answer. Often, this step is aided with various tools such as conducting surveys, developing user personas, researching analytics, exploring the competition, and obtaining various insights from social media and email service providers.

What’s the Brand Image?

A design should accomplish a meaningful function, while also supporting the overall brand message the client wants to convey. Defining the brand message for a website is not just about the always-important logo and color palette, but also includes special considerations for the use of icons, imagery, typography, and the ‘voice’ of the editorial content. The way the website behaves – how the user interacts with the various elements – is another important tool often used to help clients set themselves apart from their competition.

If the client doesn’t have a ‘brand’ image or has no established digital identity, probe further with another question and let their answers paint the picture: “What do you want your website design to ‘say’ about your company and/or service?” Answers could range from “We’re fun and hip” to “Zen, peaceful, and calm”. Whatever the response, it will most likely help guide the layout and overall graphic approach to convey the message behind the brand.

Are There Accessibility Concerns for Brand Elements?

Considerations for accessibility should always be made when determining the best implementation for brand elements in the digital space. Special care is sometimes needed to navigate the political waters of brand color negotiation, so addressing potential accessibility concerns before beginning the design process will save time down the road.

Are the colors accessible? Not all branding agencies provide style guides with color palettes that have enough contrast to pass basic accessibility standards in the digital format (WCAG AA calls for a contrast ratio of at least 4.5:1 for normal-size text). Be sure to test the contrast of the foreground and background values for each potential color combination, and pay special attention to the font colors before including them in the design. If the color combinations don’t pass, there are tools available that will help with making adjustments to improve the contrast.

What are the brand fonts? The font style and the font size across device sizes are considerations that should also be made. Are the fonts available in web format, and are they easy to read at small sizes? Web fonts load faster, scale more cleanly, and are easier to translate. Certain font styles are found to be easier to read by those with dyslexia when the ascenders and descenders are pronounced, the counters are open, and ALL CAPS are kept to a bare minimum.

Thoughtful consideration of the brand elements prior to beginning the design will help to ensure all audiences including the color blind, vision impaired, and those with reading disabilities will be able to engage and interact with the website as the design intends.

What and Where is the Content?

Understanding the content that will be included in the website will help guide the architecture, the layout structure, and the way it’s designed. Good organization leads to a carefully crafted navigation and an enjoyable user experience, helping the website not only look good but also serve its purpose.

  • Will the content include an extensive amount of copy?
  • How will the text files be provided?  
  • Who is the primary contact for the content?
  • How large is the image library?
  • Are the image sizes and resolution suitable for retina devices?
  • If the image library is small: are there plans for a photo shoot in the near future, or is stock photography an option?
  • Will videos be added? If so, what is the source of the video files?

If the website is a redesign, review each main section with your client to discover what is no longer relevant and possible changes they would like to see moving forward. If it’s a new website design, at the very minimum you’ll need a Content Outline with the names and categories of each section in order to determine the page templates and main navigational structure best suited for the design and user experience.

Additional Resources
Easy Ways to Make Your Website More Accessible | Blog Post
Designing With Personas | Blog Post
Why Web Analytics Are Important for Your Business | Video

Jul 21 2016
Jul 21

I'm super excited to be invited to be a keynote speaker for this year's DrupalCamp WI (July 29/30). If you're in the area you should attend. The camp is free. The schedule is shaping up and includes some great presentations. Spending time with other Drupal developers is by and large the most effective way to learn Drupal. So sign-up, and come say hi to Blake and me.

Why is Drupal hard?

The title of my presentation is "Why is Drupal Hard?" It is my belief that if we want to continue to make it easier for people to learn Drupal we first need to understand why it is perceived as difficult in the first place. In my presentation I'm going to talk about what makes Drupal hard to learn, why it's not necessarily accurate to label difficult as "bad", and what we as individuals and as a community can do about it.

As part of the process of preparing for this talk I've been working on forming a framework within which we can discuss the process of learning Drupal. And I've got a couple of related questions that I would love to get other people's opinions on.

But before I can ask the question I need to set the stage. Close your eyes, take a deep breath, and imagine yourself in the shoes of someone setting out to be a "Drupal developer."

Falling off the Drupal learning cliff

Illustration showing the scope of required knowledge across the 4 phases of learning Drupal. Small at phase 1, widens quickly at phase 2, slowly narrows again in phase 3 and through phase 4.

When it comes to learning Drupal, I have a theory that there's an inverse relationship between the scope of knowledge that you need to understand during each phase of the learning process and the density of available resources that can teach it to you. Accepting this, and understanding how to get through the dip, is an important part of learning Drupal. This is a commonly referenced idea when it comes to learning technical things in general, and I'm trying to see how it applies to Drupal.

Phase 1

Graph showing Drupal learning curve, showing exponential growth at phase 1

When you set out to start, there's a plethora of highly-polished resources teaching you things that seem tricky but are totally doable with their hand holding. Drupalize.Me is a classic example: polished tutorials that guide you step-by-step through accomplishing a pre-determined goal. During this stage you might learn how to use fields and views to construct pages. Or how to implement the hook pattern in your modules. You don't have a whole lot of questions yet because you're still formulating an understanding of the basics, and the scope of things you need to know is relatively limited. For now. As you work through hand-holding tutorials, your confidence increases rapidly.

Phase 2

Graph of Drupal learning curve showing exponential decay of confidence relative to time at phase 2, the cliff

Now that you're done with "Hello World!", it's time to try and solve some of your own problems. As you proceed you'll eventually realize that it's a lot harder when the hand-holding ends. It feels like you can't actually do anything on your own just yet. You can find tutorials but they don't answer your exact question. The earlier tutorials will have pointed you down different paths that you want to explore further but the resources are less polished, and harder to find. You don't know what you don't know. Which also means you don't know what to Google for.

It's a much shorter period than the initial phase, and you might not even know you're in it. Your confidence is still bolstered based on your earlier successes, but frustration is mounting as you're unable to complete what you thought would be simple goals. This is the formulation of the cliff, and, like it or not, you're about to jump right off.

Phase 3

Graph of Drupal learning curve showing relatively flat and low confidence over time at phase 3

Eventually you'll get overwhelmed and step off the cliff, smash yourself on the rocks at the bottom, and wander aimlessly. Every new direction seems correct but you're frequently going in circles and you're starving for the resources to help. Seth Godin refers to this as "the dip", and Erik Trautman calls it the "Desert of Despair". Whatever label you give it, you've just fallen off the Drupal learning cliff. For many people this is a huge confidence loss. Although you're still gaining competence, it's hard to feel like you're making progress when you're flailing so much.

In this phase you know how to implement a hook but not which hook is the right one. You know how to use fields but not the implications of the choice of field type. Most of your questions will start with why or which. Tutorials like those on Drupalize.Me can go a long way toward teaching you how to operate in a pristine lab environment, but only years of experience can teach you how to do it in the real world. As much as we might like to, it's unrealistic to expect that we can create a guide that answers every possible permutation of every question. Instead, you need to learn to find the answers to the questions on your own by piecing together many resources.

The scope of knowledge required to get through this phase is huge. And yet the availability of resources that can help you do it is limited. Because, as mentioned before, you're now into solving your own unique problems and no longer just copying someone else's example.

Phase 4

Graph of Drupal learning curve showing upswing of confidence, linear growth, at phase 4

If you persevere long enough you'll eventually find a path through the darkness. You have enough knowledge to formulate good questions, and the ability to do so increases your ability to get them answered. You gain confidence because you appear to be able to solve real problems. Your task now is to learn best practices, and the tangential things that take you from, "I can build a website", to "I can launch a production ready project." You still need to get through this phase before you'll be confident in your skills as a Drupal developer, but at this point it's mostly just putting in time and getting experience.

During this phase, resources that were previously inaccessible to you are now made readily available. Your ability to understand the content and concepts of technical presentations at conferences, industry blog posts, and even to participate in a conversation with your peers is bolstered by the knowledge you gained while wandering around the desert for a few months. You're once again gaining confidence in your own skills, and your confidence is validated by your ability to continue to attain loftier goals.

And then some morning you'll wake up, and nothing will have changed, but through continually increasing confidence and competence you'll say to yourself, "Self, I'm a Drupal developer. I'm ready for a job."

What resources can help you get through phase 3?

So here's my questions:

  • What resources do you think are currently available, and useful, for aspiring Drupal developers who are stuck in phase 3, wandering around the desert without a map and asking themselves, "Panels or Context?"
  • What resources do you think would help if they existed?
  • If you're on the other side, how did you personally get through this dip?

Responses from Lullabot

I asked this same question internally at Lullabot a few days ago, and here are some of the answers I received (paraphrased). Hopefully this helps jog your own memory of what it was like for yourself. Or even better, if you're stuck in the desert now, here's some anecdotal evidence that it's all going to be okay. You're going to make it out alive.

For me, it was trial and error. I would choose a solution that could solve the particular problem at hand most efficiently, and then I would overuse it to the extreme. The deeper lessons came months later when changes had to be made and I realized the mistakes I had made... Learning usually came also from working with others more experienced. Getting the confidence to just read others' code and step through it is also a big plus.

building something useful++. That's the absolute best way. Can't believe I forgot to mention it. Preferably something that interests you or fulfills your own need. You still fall off the cliff, but you at least see the fall coming, and your ability to bounce back is better.

At this stage I find that the best resources are people, not books or tutorials. A mentor. Someone that can patiently listen to your whines and frustrations and suggest the proper questions to ask, and who can give you the projects and assignments that help you grow and stretch.

Everything I know about Drupal I know through years of painful trial and effort and shameless begging for help in IRC.

I spent a lot of time desperately reading Stack Overflow, or trying to figure a bug out from looking at an issue where the patch was never merged, or reading through a drupal.org forum where somebody tries to solve something but then just ends with "nevermind, solved this" without saying why.

I'd agree that people is what gets you through that. I learned IRC and how to write patches and get help from individuals and that is when the doors opened.

Another approach that really boosted me to the next level, especially early on in my career as a developer, was to work with someone that you can just bounce ideas off of. I'll never forget all the hacking sessions Jerad and I had back in the day. Coding at times can be boring, or the excitement of doing something awesome is self-contained. Being able to share ideas, concepts, and example code with someone that appreciates the effort or awesomeness of something you've done and at the same time challenges you to take it to the next level is priceless.

Printing out the parts of Drupal code I wanted to learn: node, taxonomy and reading comments and code like a gazillion times.

Try and code something useful so I could ask others for help. That's how I wrote the path aliasing module for core.

I often find that as you get into more complicated, undocumented territory, being able to read code is super valuable. You can often get lost in disparate blog posts, tutorials and forums that can lead you all sorts of ways. The code is the ultimate source of truth. Sometimes it takes firing up a debugger, stepping through the parts that matter to see how things are connected and why.


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
