
Jul 01 2016

The Core Conversations track is a place for sessions that spark discussion, open up questions, and form the basis for ongoing development in Drupal core. We’d like to offer all accepted sessions help with facilitating or moderating discussion. A session could be a debate-style panel or a more traditional presentation.

Drupal 8 is a groundbreaking release for the Drupal community in many ways. Semantic versioning and the new release schedule are among the most applauded arrivals. Scheduled minor releases now land every six months and can include new features. This means that if you want to get a new feature in (frontend or backend), you can make it happen within six months. Why not submit a session to the Core Conversations track on the new feature you’re looking to implement?

There are a number of core initiatives ongoing, many with changes already in Drupal 8.2, due for release in October. In New Orleans, Dries spent the majority of his keynote talking about new proposed and planned initiatives, and about how to propose further ones. Once you have your plan issue in the queue, the Core Conversations track is a great place to explain the initiative and get buy-in for it.

Alongside all of these process and schedule changes, there is always time for retrospectives and for planning further non-code contributions to the Drupal community. The Core Conversations track is the ideal forum for these types of sessions too.

Drupal 7 opened the doors to improved user experience and accessibility in the community with the D7UX initiative. As we add new features to Drupal core, we need to make sure time is given to user experience and accessibility. If this is something you care about, please share your views in a session for the Core Conversations track.

As already mentioned, the landscape of Drupal core has changed, and will change further. This is an exciting time to get involved, and the Core Conversations track is an ideal platform for doing so.

We look forward to seeing you in Dublin.

Submit your Session

-----------------------

Tim Millwood
Core Conversations Track Chair
DrupalCon Dublin

Jul 01 2016

Welcome to the first installment of our three-part Drupal 8, Pantheon & GitKraken series. For more information on what this series will cover, check out our intro here.

Step 1. Setting up a Pantheon Account

The first thing we are going to want to do is create an account on Pantheon:

1. Go to www.pantheon.io

2. Click the Create Free Account button at the upper right of the page

3. Fill out the form and click Create Account
Step 2. Create a Drupal 8 Site

After creating your Pantheon account you will be automatically logged in and redirected to your dashboard.  Time to create a Drupal 8 site! 

1. Click the Create new site button

2. Name your site (this will also be the URL for your site) and click the Create Site button

3. Select Drupal 8 from the list of options

4. After a short wait your site will be created and you can click the Visit your Pantheon Dashboard button

5. Once on your dashboard, click on your newly created site under the Sites tab

6. Click the Visit Development Site button and go through the Drupal install.

(Note: the site needs to be in SFTP connection mode for the Drupal 8 install; we will switch to Git mode a little later.)
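
As an aside, the same click-through can be scripted with Pantheon's Terminus CLI. This is only a sketch: Terminus must be installed and authenticated with a machine token, command names vary between Terminus versions, and the "drupal8" upstream id and the site name below are assumptions.

```shell
# Sketch only -- requires a Pantheon machine token; not runnable offline.
SITE=my-drupal8-demo
terminus auth:login --machine-token="$PANTHEON_TOKEN"
terminus site:create "$SITE" "My Drupal 8 Demo" drupal8
terminus connection:set "$SITE".dev sftp   # SFTP mode for the Drupal installer
```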

Congratulations! You have just created a Pantheon account and have a fresh Drupal 8 site installed and ready to go, with a dev/test/live workflow set up.

YOU ROCK!

In next week's installment we will take a look at getting GitKraken set up, generating SSH keys, and cloning your remote Git repo to your local machine.

Jul 01 2016

We have several Drupal 6 to Drupal 8 upgrade projects going on, which is particularly challenging given how quickly the Drupal migration system is changing. Given that a couple of them are nearing launch and were missing some node references, I set out to update the content from the production sites before launch.

When we started migrating these sites, you could run the Migrate Upgrade scripts passing the database credentials, and get a full list of what could be upgraded and what could not. And it would migrate all the stuff. The whole point of having a migration system like this is so that you can keep on incrementally migrating content until the new site is ready to go, do a final migration, and go live!

The first surprise: with migration in 8.1.x, that is no longer supported with the built-in automatic Drupal to Drupal upgrade -- it's a one-shot, one way deal.

Oh crap.

Fortunately, it's pretty easy to use migrate_upgrade to generate a custom migration, and with some database mangling, you can get your new migration to pick up the mappings from a previous auto-migration done in 8.0.x.

The main gotcha here is that node content still isn't showing up as migrated -- but for us, that was good enough -- that's what we wanted to update from production anyway. The big win here is that all of the rest of the dependencies for the migration -- users, input filters, taxonomy terms, blocks, menus, settings -- are all now mapped and so we can keep the changes we've made on the new site without clobbering with an entirely new migration.

Caution! Extremely technical backflips ahead...

Ok, not really... but this is tricky stuff, involving multiple wipes/reloading of your database. Make sure you do this all on a test copy!

Before starting: Gather your tools, backups

Before doing anything, I would recommend backing up your database. And make sure your tools are up to date. Here are the tools I use for this process:

  • drush 8.x
  • drupal console 1.0.0 (currently at beta3)
  • bash, sed
  • a working Drupal 8.0 site (it's ok if you've already upgraded it to 8.1)

Step 1: Back up database, upgrade all code

drush sql-dump > site-80.sql
(do whatever you usually do to upgrade)
drush dl --select migrate_tools migrate_upgrade migrate_plus # select the latest dev versions of these
drush updb
drush cr # cache rebuild
drush sql-dump > site-81.sql

Note: for some reason, drush dl migrate_upgrade wants to install in ~/.drush, and not in the site -- this caused me a bit of confusion, and lots of errors when trying to update! Be sure it installs in a location that doesn't get overridden by the 1.x version, which is not compatible with Drupal 8.1.

For changes as major as these, I like to have places I can roll back to. git tags can be useful, along with database snapshots at each step.

Step 2: Save your config

The config by default is stored under sites/default/files -- we exclude that path from git, and so we recommend moving it somewhere inside your git tree. You can set this in the settings.php file, to whatever path you would like:

$config_directories['sync'] = 'config/sync';

... the above configuration puts it under the site root, in config/sync.

Once you have that set:

drupal config:export
git add config
git commit

... and you have your current configuration exported and saved to code.

Step 3: Apply any core patches, prep the destination

In particular, we wanted node references migrated to D8 entity references: Add migrate support for Drupal 6 node & user reference

At this point you should enable all the modules that have migrations you want picked up, and generally make sure your target environment is ready.

If you make any further changes, export and commit your config again, and create a new database backup BEFORE creating your new upgrade.

Step 4: Create your custom migration module, and create the new migration

Here I borrowed heavily from other posts:

The gist of it:

drupal generate:module # Create a new module to contain the migration configuration
gedit sites/default/settings.php # Add another database credential array for the D6 database, using the key "upgrade"
drush migrate-upgrade --configure-only # Creates all the migration config objects it can find
drupal config:export
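
The database credential array added in the gedit step might look like the following. This is a sketch: all credential values are placeholders, and only the "upgrade" key matters.

```shell
# Append a second database credential array, keyed "upgrade", pointing at
# the Drupal 6 source database. Credentials below are placeholders.
mkdir -p sites/default   # already present on a real Drupal site
cat >> sites/default/settings.php <<'EOF'
$databases['upgrade']['default'] = [
  'database' => 'd6_legacy',
  'username' => 'dbuser',
  'password' => 'dbpass',
  'host' => 'localhost',
  'driver' => 'mysql',
  'prefix' => '',
];
EOF
```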

At this point you have a brand new migration in the config, and a module ready to store this config. Assuming your module is in modules/custom/site_migrate:

mkdir -p modules/custom/site_migrate/config/install
cp config/sync/migrate_plus.migration* modules/custom/site_migrate/config/install/
rm modules/custom/site_migrate/config/install/migrate_plus.migration_group.default.yml

... this exports your migration config to code, skipping the default migration_group that would conflict with migrate_plus.

Step 5: Mangle all the data

Ok. Now we're to the tricky part. We're essentially going to revert to where we were at the end of step 3, and then enable our migration module, with the mapping tables from the original migration.

So...

git add modules/custom/site_migrate
git commit
git clean -df config/sync
git checkout HEAD config/sync

... to get your code base back to the right spot.

Now, the guts of the change. The new migrate_upgrade uses mostly the same migration names as the old, except with upgrade_ inserted at the beginning of the name. This means you need to rename your migrate_map and migrate_message tables to pick up those original mappings.

There's one other gotcha: at the moment, drush migrate-upgrade looks for a database key named "upgrade", but in the exported configuration it changes to expect a database key named "drupal_6". So you'll need to edit your settings.php file to change that after step 4.

Here's how I renamed the tables. Exactly what you need to change may vary based on the version of your mysql libraries and drush, but this worked for me:

cp site-81.sql site-mm.sql
sed -i 's/ALTER TABLE `migrate_message_/ALTER TABLE `migrate_message_upgrade_/g' site-mm.sql
sed -i 's/ALTER TABLE `migrate_map_/ALTER TABLE `migrate_map_upgrade_/g' site-mm.sql
sed -i 's/^LOCK TABLES `migrate_message_/LOCK TABLES `migrate_message_upgrade_/g' site-mm.sql
sed -i 's/^LOCK TABLES `migrate_map_/LOCK TABLES `migrate_map_upgrade_/g' site-mm.sql
sed -i 's/^INSERT INTO `migrate_message_/INSERT INTO `migrate_message_upgrade_/g' site-mm.sql
sed -i 's/^INSERT INTO `migrate_map_/INSERT INTO `migrate_map_upgrade_/g' site-mm.sql
sed -i 's/^CREATE TABLE `migrate_message_/CREATE TABLE `migrate_message_upgrade_/g' site-mm.sql
sed -i 's/^CREATE TABLE `migrate_map_/CREATE TABLE `migrate_map_upgrade_/g' site-mm.sql

Step 6: Run your migration!

Almost there...

drupal database:drop # this deletes any extraneous tables that might cause issues
drush sqlc < site-mm.sql # reload the database with the changed tables, and without the second migration
drush en -y site_migrate
drush migrate-status

Presto! You should see most of your previous migrations, with most of the things already imported!

Now, if you migrate, you'll most likely get duplicate nodes of everything you previously imported. I would recommend bulk-deleting them, and then you should be able to proceed with:

drush migrate-import --all

... and get all node content migrated, along with any new users, menu items, files, etc.

But most importantly, you can now use the new migration improvements and same techniques as if you had started the migration in 8.1.x!

Thanks to Mike Ryan for assistance.

Jul 01 2016
jam
Vladimir Roudakov and I sat down at DrupalCon New Orleans to talk about an event close to my heart: the 2016 edition of Drupal South. This year, it'll be held in Australia's Gold Coast. Knowing the Australasian Drupal community, this will be a very high quality event in terms of what you'll be able to get out of it. And knowing the location, right by the world famous "Surfers' Paradise" beach, if you're into sun, fun and Drupal, you'll be in for a treat!
Jul 01 2016
The following post is from the Acquia Lightning blog. Acquia Lightning, “the Drupal distribution for Enterprise Authoring,” is a Drupal starter kit that enables developers to create great authoring experiences and empower editorial teams. Lightning provides users with a lightweight framework for building working solutions in Drupal. For more information, including a product roadmap, and installation instructions, check out the Acquia Lightning site.
Jul 01 2016

With three months left to go before DrupalCon Dublin, event planning is in full swing. Like all Irish Drupal events, Annertech are actively involved in preparations for DrupalCon. Our managing director, Stella Power, has taken on the role of local team lead as well as Business Track chair, and I myself am the Project Management track chair. There's less than a week now to get your sessions submitted for DrupalCon, so it's time to get writing!

Business Track

This year the business track is focusing on how agencies can grow their business and how to do so sustainably. We're looking for speakers who are willing to share their experiences and insights into growing their businesses, what worked, what didn't, and why.

Suggested topics include:

Marketing Drupal

Competition in the marketplace is growing, both from the increasing number of agencies offering similar services and from alternative solutions on offer. We are looking for sessions on how you generate new leads for your business, and how you market and position Drupal and your agency as the best solution available.

Going for Growth

Growing your business is all very well, but how do you manage this growth and make sure you have the necessary structures in place to ensure it is sustainable? We are seeking strategies which offer sustainable approaches to growth for the small and large Drupal agency alike.

Business Innovation and Diversification

Perhaps the solution to sustainable business growth is to innovate and diversify your service offering. Maybe you need to consider extending into additional or alternative markets or verticals. But when is the right time to do this? And what are your strategies for approaching it?

Drupal 8

With the release of Drupal 8, there are a number of challenges, but also opportunities, for agencies working with Drupal. What has been the impact of Drupal 8 on your business, on demand for your services, on your sales or marketing processes, and on your team? How has it affected the estimation and delivery of your projects? How do you convince clients to upgrade to Drupal 8?

These are just some of the ideas we have for the Business Track this year. We want to hear from you, about your experiences and insights into the above topic areas. Got an idea for another session? Great, then let us know and submit it!

Project Management Track

Whatever the size of the project, no matter how many people are working on it, it will need to be managed. Some teams have dedicated project managers, others share the task within the team; whatever way you spin it, it's got to be done, so why not make sure we learn how to do it in the most appropriate way?

The focus is on sharing your experiences, good or bad. As long as you learned something relevant to others, you're welcome to submit a session. Here are some topics we're actively looking for:

  • Tips and Tricks: How did you improve your processes or learn how not to do something?
  • Becoming agile: How did you become more agile?
  • Getting the numbers right: Estimations, budgeting and reporting. How do you keep on top of things?
  • Freelance Project Management: Are you, or have you ever been, a freelance Project Manager? Tell us your story.
  • Managing an Open Source Project: Ever been involved in managing an open source project? How did you get by? Would you do it again?

As always, if you're unsure whether your session proposal is appropriate, reach out to me or Stella and we will help you where we can.

Countdown to session submission closing starts now, are you ready?

Jul 01 2016

Six years ago the team at Amazee decided to start a Drupal agency called Amazee Labs in Zurich, Switzerland (read more about the Amazee journey) where I worked as a Drupal developer. Over the years, our small team at Amazee Labs grew and my responsibilities as a developer shifted. Suddenly I wasn’t coding all day. I became a team lead and finally CTO. I code much less now than I did during those first years.

Along with growing the team, we also grew Amazee. We added Amazee Metrics, Amazee Labs in Austin, Amazee Labs in Cape Town, and our newest venture, amazee.io. Each company demanded my attention, so I spent and still spend quite a lot of time in airplanes and on the road.

This growth has been awesome and a never-ending journey of new challenges and learning. It has also been exhausting at times, causing tensions and bottlenecks as each group waited for my attention and time. Fortunately, this also taught me a very important lesson of good management: letting go.

Let go, enable your team, and make yourself redundant.

This is why I am stepping down as CTO of Amazee Labs Zurich and passing the CTO torch to the next person to lead our tech team in Zurich: I'm very happy to announce that Josef Dabernig is the new CTO at Amazee Labs in Zurich as of July 1, 2016.

Josef joined Amazee Labs in August of 2014 and has demonstrated his leadership and technology skills every day since then. He even taught me an important skill, that sometimes you need to slow down in order to be effective.

I wish Josef all the best in his new role. I am looking forward to seeing what his team will release. From what I’ve seen so far, they stand to deploy some pretty epic projects in the coming months.
As for me? I’ll be spending this newfound freedom as the new CTO of the Amazee Group, racking up travel miles and providing support and technical guidance for all our companies, especially amazee.io, which we launched back in May.

Jul 01 2016

Sarah Thrasher (sarahjean), front-end developer with Acquia, joins Andrew (remember him?), Kelley, and Mike to discuss the upcoming Drupal GovCon, what it means to be a junior developer, and how the Drupal community is helping to make sure our community members stay healthy; we also bid farewell to Mike using the word "pimp". All that plus our picks of the week and five questions!

Interview

DrupalEasy News

Three Stories

Sponsors

Picks of the Week

Upcoming Events

Follow us on Twitter

Five Questions (answers only)

  1. Learning Japanese
  2. DrupalVM
  3. Speaking at Drupalcon
  4. Chicken
  5. Before 2008 she was a print designer. They had an internal site that needed updating, so: hey, Drupal. She read "Using Drupal" cover to cover and created the site.

Disclaimer

Sorry about the audio quality in this episode. We had to go to our emergency recording of the call which is only two channels (our regular recording is one channel per participant) and heavily compressed.

Intro Music

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Jul 01 2016

With phone in hand, laptop in bag and earbuds in place, the typical user quickly scans multiple sites. If your site takes too long to load, your visitor is gone. If your site isn’t mobile friendly, you’ve lost precious traffic. That’s why it’s essential to build well organized, mobile ready sites.

But how do you get good results?

  • Understand whom you’re building for
  • Employ the right frameworks
  • Organize your codebase
  • Make your life a lot easier with a CSS preprocessor

Let’s look at each of these points.


Design For Mobile

When you look at usage statistics, the trend is clear. This chart is from Mary Meeker's 2016 Internet Trends Report.

A vast array of mobile devices accomplish a variety of tasks while running tons of applications. This plethora of device options means that you need to account for a wide assortment of display sizes in the design process.

As a front end developer, it’s vital to consider all possible end users when creating a web experience. Keeping so many display sizes in mind can be a challenge, and responsive design methodologies are useful to tackle that problem.

Frameworks that Work

Bootstrap, Zurb, and Jeet are among the frameworks that developers use to give websites a responsive layout. Responsive web design provides for optimal viewing and interaction across many devices. Media queries are rules that developers write to adapt designs to specific screen widths or heights.

Writing these from scratch can be time consuming and repetitive, so frameworks prepackage media queries using common screen size rules. They are worth a try even just as a starting point in a project.

Organizing A Large Code Base

Depending on the size of a web project, just the front end code can be difficult to organize. Creating an organizational standard that all developers on a team should follow can be a challenge. Here at Zivtech, we are moving toward the atomic design methodology pioneered by Brad Frost. Taking cues from chemistry, this design paradigm suggests that developers organize code into 5 categories:
  1. Atoms
  2. Molecules
  3. Organisms
  4. Templates
  5. Pages

Basic HTML tags like inputs, labels, and buttons would be considered atoms. Styling atoms can be done in one or more appropriate files. A search form, for example, is considered a molecule composed of a label atom, input atom, and button atom. The search form is styled around its atomic components, which can be tied in as partials or includes. The search form molecule is placed in the context of the header organism, which also contains the logo atom and the primary navigation molecule.

Now Add CSS Preprocessors

Although atomic design structure is a great start to organizing code, CSS preprocessors such as Sass are useful tools to streamline the development process. One cool feature of Sass is that it allows developers to define variables so that repetitive code can be defined once and reused throughout.

Here’s an example. If a project uses a specific shade of mint blue (#37FDFC), it can be defined in a Sass file as $mint-blue: #37FDFC;. When styling, instead of typing the hex code every time, you can simply use $mint-blue. It makes the code easier to read and understand for the team.

Let’s say the client rebrands and wants that blue changed to a slightly lighter shade (#97FFFF). Instead of manually finding all the areas where $mint-blue is referenced across multiple pages of code, a developer can simply revise the variable to the new shade ($mint-blue: #97FFFF;). The change is then automatically reflected everywhere $mint-blue is used.

Another useful feature of Sass is the ability to nest style rules. Traditionally, with plain CSS, a developer has to repeat the parent selector to target each child component. With Sass, you can nest styles within a parent selector, as shown below. The two examples are equivalent, but the Sass version is a kind of shorthand that automates the process.

Traditional CSS

#main {
  color: black;
}

#main a {
  font-weight: bold;
}

#main a:hover {
  color: red;
}

Sass

#main {
  color: black;

  a {
    font-weight: bold;

    &:hover {
      color: red;
    }
  }
}

Although there are a lot of challenges in organizing code and designing for a wide variety of screen sizes, keep in mind that there are excellent tools available to automate the development process, gracefully solve your front end problems, and keep your site traffic healthy.
Jul 01 2016

What's important to learn from the recent AberdeenCloud meltdown? You cannot trust your backups and site to a single provider!

Well, it is all over the blogosphere: the AberdeenCloud breakdown. I have already read a couple of articles on the subject, and I couldn't agree more with the one from Annertech: the only way to avoid this kind of issue is offsite backup. I would normally write about how to recover from something like this, but sadly, as Code Enigma explained, there isn't much we could do; their servers just collapsed and were gone. But now, for 36 hours, we'll be able to recover our data.

Sadly, we didn't notice problems with AberdeenCloud until it was too late. For other reasons we already had a plan to pull away, but we hadn't yet applied it to the only site that remained there. We did have backups, but they weren't automatic, so they weren't as frequent as we would have needed given the total meltdown at AberdeenCloud. Since we had some backups and it was only one site, it didn't take that long to rebuild what we lost. Waiting for them to come back wasn't really an option, as it didn't seem likely to happen. I was really surprised and pleased to see they finally replied, but before that we used the AberdeenCloud cache (it wasn't reliable, but with patience you could get something) and also the Wayback Machine.

So what's important about this? We need to learn that we should ALWAYS have an alternate site backup with a different provider. You'll need it for disaster recovery; it's the only way to safeguard your clients' data. You cannot trust a single provider with both your site and your backups. We shouldn't be naive about this: it is also important to track back and learn which company your provider relies on. If, for example, your sites are hosted by Pantheon and you want to trust your backups to NodeSquirrel, think again, as they are both the same company.

Bottom line: be serious about backups, even if you are only trying out a service. In the company I work for we already do this for our other services, and from now on we won't try a service without implementing offsite backups first. That needs to be part of the initial implementation, not a task on a post-launch burn-down list.

Jul 01 2016

This is Tommy, calling from the engine room! Were you affected by the Aberdeen Cloud incident on 28-06-2016? We weren't, but I'd call that partly luck and partly proactivity. We were actually prepared for this.

TL;DR

We were already prepared for a scenario like the Aberdeen Cloud breakdown, owing to our disaster recovery plan. Fortunately we didn't have to set it in motion. Each night we have a simple script which takes off-site backups of all of our hosted sites. We've made the source code available on github, so hopefully this will help others prepare for the likes of the Aberdeen Cloud implosion, and perhaps we can share ideas on how to improve each other's disaster recovery plans.

Our experience with the Aberdeen Cloud incident

We have a large number of sites hosted by various cloud services. Since autumn 2013, we'd mainly used Aberdeen Cloud, and in autumn 2015 we started to explore other options to see what else the market had to offer. Platform.sh was the one that we decided to give a serious test for new clients.

Soon after that, Aberdeen Cloud began to seem a bit flaky. Longer response times on support tickets. Solr services started to fail, along with random outages of various other services. We accepted that for a while, but after having lost and regained SSH access to all of our sites (including git and rsync), we eventually decided that enough was enough and we couldn't put our trust in them anymore.

We had to migrate everything to alternative hosting platforms. Given the similar price points and the fact that they seem well funded and offer excellent support, we decided on Platform.sh. And so began Project Exodus. 

Over the next three weeks we migrated over 20 sites to Platform.sh in a staged approach. I wouldn't say that it was a straightforward process. Lots of clients had specific quirks in their setup. For example, some needed a phpBB forum, others had FTP access for uploading files, some integrated with external systems that required firewall changes, while others had custom .htaccess redirection rules that needed to be rewritten for Nginx. However, we were very lucky and had completed Project Exodus nearly a month before Aberdeen Cloud finally came tumbling down.

So what if you were not as fortunate as us, and still have sites whose assets are no longer accessible? Stuck in cyberspace, or maybe just plain deleted?

Well, I'm not sure there is a lot you can do. Maybe read Code Enigma's blog post about the different things they've tried. However, to put it bluntly, it sucks! Enough said about the matter.

Now it's time to ensure you're never caught out again.

But what can we do to prevent this from happening again?

All companies probably have their own way to deal with this kind of scenario, but I'll tell you about how Annertech deals with backups and recoveries.

We have two sets of backups. A backup from each day, for each production/live/master environment, which is hosted by the cloud service. Then there is an off-site backup (again, daily) used for disaster recovery. The latter one is the important one in this scenario.

The idea is that even if Amazon (which hosts Aberdeen Cloud services) pulled the plug, we would still have access to our clients' data.

We have a server, hosted by a different company, that: 

  1. Pulls down a copy of the database from every cloud-hosted site every night, and keeps it for four days before deleting it.
  2. Runs `rsync` against every cloud-hosted site every night, to keep an up-to-date copy of the files folder.
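
The two nightly steps above can be sketched as a small script. This is only an illustration: the backup directory, the drush alias, and the remote host name are placeholders, and the drush/rsync lines are shown as comments since they need real site credentials (the real scripts are on github).

```shell
# Sketch of a nightly off-site backup job; names below are placeholders.
BACKUP_DIR="backups/example-site"
mkdir -p "$BACKUP_DIR"

# 1. Pull down a copy of the database every night:
#      drush @example-site.prod sql-dump --gzip > "$BACKUP_DIR/db-$(date +%F).sql.gz"
# 2. rsync the files folder to get an up-to-date copy:
#      rsync -az example-site.prod:files/ "$BACKUP_DIR/files/"

# Keep four days of dumps, then delete the older ones:
find "$BACKUP_DIR" -name 'db-*.sql.gz' -mtime +4 -delete
```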

Sounds simple? It is. All it requires is that your hosting partner supports running Drush on your remote sites, and you're good. If you run Drupal sites and your hosting partner doesn't support running Drush on the remote sites, find somebody who does. It's that important!

Regarding the code for the sites: we keep our source code repositories on dedicated git services, and, more than likely, we'll also have a copy or two on developers' machines.

I'd like to show you the two backup scripts that I made, one for Aberdeen Cloud and one for Platform.sh.

The code is meant to work in our setup only, and is not (yet) generic enough to just work out of the box elsewhere. The release of the scripts is meant to give you a leg up and some inspiration. This is by no means the final end point for these scripts - we are continually evaluating and improving our system, and I look forward to hearing what ideas you have on where we could take it from here too.

The entire repository of code can be "stolen" from github.

When you have a disaster recovery plan you also need to make sure that it actually works. You can do this by downloading the latest backup from each of your sites once a month, installing it, and then testing. I've tested a site where the DB file was corrupt, but only for that site, so make sure that you test all of them. The setup of test sites can also be automated by a script, so you don't have to set up 10, 50, or 300 sites and test each manually. Scripts are your friends. Make good use of them and have them do all the hard work.
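
A monthly restore test like the one described above might be automated along these lines. This is a sketch only: the mysql/curl lines are commented placeholders, and a dummy backup file is created so the loop is runnable as-is.

```shell
# Sketch: restore each latest dump into a throwaway database and smoke-test it.
# A dummy backup entry is created here purely so the loop has something to do.
mkdir -p backups/demo-site
touch backups/demo-site/db-latest.sql.gz
for dump in backups/*/db-latest.sql.gz; do
  site=$(basename "$(dirname "$dump")")
  # zcat "$dump" | mysql "restore_$site"       # a corrupt dump fails loudly here
  # curl -sf "https://restore-$site.test/" > /dev/null || echo "$site FAILED"
  echo "checked: $site"
done
```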

Now, if you really want to push this further, you should implement a "Smoke test" in all of your installation profiles, so that you can trigger that to see if the site is alive; or perhaps tie it in with a Jenkins server.

If something is unclear, feel free to put a comment below. If you feel like this could be improved, feel free to contribute with a pull request. We are all ears.

Want to talk to us about hosting your Drupal site? Great - just get in touch with us on 01 524 0312 or email [email protected] and we'll see what we can do to help.

Jun 30 2016

DrupalCon is brought to you by the Drupal Association with support from an amazing team of volunteers.
Built on COD v.7, the open source conference and event management solution. Creative design by ADCI Solutions.
 
DrupalCon Dublin is copyright 2016. Drupal is a registered trademark of Dries Buytaert.

Jun 30 2016
Many developers who work on Drupal (or other web/PHP) projects have error reporting disabled in their local or shared dev environments. They do this for a variety of reasons: some don't know how to enable it, some are annoyed by the frequency of notices, warnings, and errors, and some don't like to be reminded of how many errors are logged. But there are a few important reasons you should make sure to show all errors when developing:
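
Whatever your reasons, turning full error display on is cheap. A sketch (the ini directives are standard PHP; the Drupal 8 "verbose" error level and the settings.local.php file name are used here as illustrations):

```shell
# Enable full error display for local development.
#
# In php.ini (or an override file):
#   display_errors = On
#   error_reporting = E_ALL
#
# In a Drupal 8 local settings file, show all errors on screen:
echo "\$config['system.logging']['error_level'] = 'verbose';" >> settings.local.php
```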
Jun 30 2016
Jun 30
Matt and Mike sit down to deep-dive into the Drupal 8 version of Drupal Commerce with Ryan Szrama, Bojan Zivanovic, and Matt Glaman, in addition to Lullabot's own Matt Robison.
Jun 30 2016
Jun 30

Drupal Camp Wisconsin is on! It will be held July 29-30 in beautiful Madison, Wisconsin. This year we will be meeting at the Chemistry building on the UW campus. There are still open slots for presentations, so please stop by http://drupalcampwi.org to make a session proposal or to sign up for the camp. This is always a fun camp, and it did not happen last year, so this is the year to stop by and make Drupal Camp Wisconsin a success! See you there.

Jun 30 2016
xjm
Jun 30

Start: 

2016-07-05 12:00 - 2016-07-07 12:00 UTC

Organizers: 

Event type: 

Online meeting (e.g. IRC meeting)

The monthly core patch (bug fix) release window is this Wednesday, July 06. Drupal 8.1.4 and 7.50 will be released with fixes for Drupal 8 and 7.

To ensure a reliable release window for the patch release, there will be a Drupal 8.1.x commit freeze from 12:00 UTC Tuesday to 12:00 UTC Thursday. Now is a good time to update your development/staging servers to the latest 8.1.x-dev or 7.x-dev code and help us catch any regressions in advance. If you do find any regressions, please report them in the issue queue. Thanks!

To see all of the latest changes that will be included in the releases, see the 8.1.x commit log and 7.x commit log.

Other upcoming core release windows after this week include:

  • Wednesday, July 20 (security release window)
  • Wednesday, August 03 (patch release window)
  • Wednesday, October 5 (scheduled minor release)

Drupal 6 is end-of-life and will not receive further releases.

For more information on Drupal core release windows, see the documentation on release timing and security releases, as well as the Drupal core release cycle overview.

Jun 30 2016
Jun 30

I feel really excited to have cleared the midterm requirement for my project in Google Summer of Code (GSoC). The results of the midterm evaluations were announced on June 28 at 00:30 IST. This was the evaluation for the first phase of GSoC. In this evaluation process, set up by the GSoC organisers, students and mentors have to share their feedback about the current progress of the project. Mentors need to give a pass/fail grade. Students can continue coding once they clear the evaluations successfully.

I have been working on Porting Search Configuration module to Drupal 8. Please go through my previous posts if you would like to have a look into the past activities in this port process.

Last week I worked on testing some of the units of this module using the PHPUnit test framework. Testing is an important part of any software development process and plays a crucial role in any software. It helps us improve our software to the required level by making use of various test cases: we input various values and check whether the tests pass according to the requirements. If any condition fails to meet our expectations, we need to make the required changes to suit the application's needs.

PHPUnit tests are generally used to test individual units of an application: to check whether the implemented functions give the expected output, to observe how the functions behave in various test cases, and to pass different types of arguments as input in order to uncover errors or flaws and improve the application.

We need to install PHPUnit for this process. You can follow this documentation to do so; furthermore, it gives a detailed analysis of PHPUnit tests.

Once the installation is completed, we can start writing the unit tests for the functionality we have implemented. The tests are generally stored in the tests/src/Unit directory of the module. The name of a unit test file is of the format xyzTest.php: all tests are suffixed with 'Test', and 'xyz' is replaced by the functionality you are going to test.

The following is a simple test to check the sum of two numbers: sumTest.php

<?php
class SampleTest extends PHPUnit_Framework_TestCase
{
  public function testSum()
  {
    // assertEquals($expected, $actual)
    $this->assertEquals(4, 2 + 2);
  }
}

As shown in the code snippet above, we need to create a class whose name is suffixed with 'Test' and which extends PHPUnit_Framework_TestCase. We then write the tests as member functions; only the functions whose names start with 'test' are executed. Here we are checking the sum of two numbers. This is a very simple demonstration.

The tests are run using the phpunit command, i.e.:

$ phpunit tests/src/Unit/sumTest.php

The output generated on running the above test is:

PHPUnit 5.4.6 by Sebastian Bergmann and contributors.

. 1 / 1 (100%)

Time: 252 ms, Memory: 13.25MB

OK (1 test, 1 assertion)

Stay tuned for future updates on this module port.

Jun 30 2016
Jun 30

If you were an #AberdeenCloud customer, you’ll be only too aware their platform went bang on the evening of 28th June 2016. Spectacularly.

As it happens, we had spotted the lack of support response, and while there had not (and still has not, I might add - they still have a Sign Up page!) been any communication from #AberdeenCloud that anything was wrong, we were starting to get a little nervous. It took time to ask our customers what they wanted to do, collate the responses, vet a new supplier (our ISO 27001 certification requires we procure carefully), negotiate contracts, and so on, but we had got there.

Having signed a contract with Platform.sh just the day before, we were about to start migrating #AberdeenCloud customers over. Unfortunately, as it turned out, we were just a few days too late.

So what happened? Our timeline of events went something like this:

  • On 28-06-2016 at approximately 1900 UTC we got alerts from Pingdom for one of our customers on #AberdeenCloud.

  • It had happened earlier in the day, but restarting the container had cleared the issue - we figured it was something triggered by a Drupal cron event, but had not yet managed to investigate. So we did the same again and restarted the container. It did not come back. This was the first sign all was not well.

  • Then another site went sideways, so we tried to restart that container. Same happened.

  • At this point we realised trying to restart containers was making things worse. To test we tried to restart the container on a development site we didn’t care about, same happened.

Right about now we realise things are not at all well with the #AberdeenCloud platform. OK, time for an emergency migration then! No one sleeps tonight! So we:

  • Raised an emergency ticket with #AberdeenCloud support (still not responded to, of course).

  • Tried to pull backups from the #AberdeenCloud backup manager (which was still available to us), but it failed for a site with no container running.

  • Tried to pull a stage backup instead, that failed too.

  • Tried to pull a backup from a seemingly still healthy site (it wasn’t healthy, it was just entirely cached by Varnish as it happened) and that also failed.

  • Tried to use `drush` to get databases, but found Drupal sites had no configuration files any more and could not connect to their databases.

And this is when something horrifying became apparent. All those daily backups #AberdeenCloud had been taking for us - and they did work, we had cause to use them just the week before - were, for some unknown reason, taken out by the same platform failure! No backups!

At this point it was time to see if we could pull anything off the running services. We noticed pretty quickly that:

  • phpMyAdmin was still running for all sites, even the ones with dead containers, so we used its “Export” feature to grab all live databases.

  • Version control was still running, so we quickly updated all codebase copies to make sure we had the latest code.

So now we have code and databases, which is good, but still no files. The containers we’d tried to restart were gone. Nothing we could do, it’s dead, Jim. So we tried a few things:

  • We started trying to contact people - we got hold of people who used to work for #AberdeenCloud to see if they could help (they couldn’t) - we sent Sampo, the CEO, a message via LinkedIn pleading for help (he still hasn’t replied, probably never will) - but we ran out of road, couldn’t find anyone able or willing to step in.

  • While the communication effort was going on, we started trying every trick we could think of to get files from “good” containers, via SSH, via the `aberdeen` command line client, via SFTP, copying files to another location to pull down, nothing worked.

  • We also tried spidering the sites using the `wget` command for Linux, to pull as many assets as possible from remaining Varnish caches, but this had very limited success.

At this point we started looking at the root cause, and it became pretty clear the mounted directories containing client uploaded files and Drupal configuration files were no longer there. You could see it when you logged into a container, but it was just a cache. If you tried to check the disk space it didn’t even show up. That storage was just gone.

But far worse, it seems backup storage depended on the exact same service! This is quite astounding really, but it seems there was no separation of service between the storage of backups and the storage of files and configuration. So if you lose one, you lose the other, whereas you would expect backups to be somewhere more resilient and, frankly, simpler to access. We'd never had cause to question this before - our backups had always worked and the platform was closed source - there's no way we could have known this was the case, but there you go. Cloud files gone, sites gone (because no Drupal settings) and backups gone - all in one fell swoop!

We were left with no choice but to proceed with what we had, so we:

  • Restored Drupal 6 sites to a virtual machine we had spare (I don’t think Platform.sh supports PHP 5.3, though I may be wrong).

  • Restored Drupal 7 sites to equivalent Platform.sh accounts.

  • Pulled in as many files as we could.

  • Continued to work with customers to help them recover their files and websites.

Anyway, it’s a real shame, because (unknowable backup storage flaw aside) they provided a good service for several years and the platform had proven to be very solid. It’s quite beyond me how someone can allow a business like this to fly into the ground without so much as giving customers a shutdown date with reasonable notice. It is irresponsible beyond belief, but there you go, it happened, and now we have to live with the consequences.

We, at Code Enigma, are obviously very sorry this has impacted on some of our customers. We are doing our best to help people recover their sites, automating the import of files, pulling assets from archive.org, checking support developer local copies for missing data, and we continue to chase Sampo, offering payment for missing files, though I have no confidence he’s ever going to reply.

I will post an FAQ later on other aspects of fallout from this, to help people understand the situation more clearly.

Jun 30 2016
Jun 30

We’ve been getting questions about the Group Drupal module and how it relates to the general permission system that ships with Drupal. More specifically, the difference between group roles and user roles. So we figured we’d post a short explanation. 

Consider a regular Drupal site as a one-level thing.

There are roles and permissions that allow you to do things, but it's a flat structure. Either you have a permission across the entire site or you don't. Group adds depth to that structure, allowing you to create smaller levels underneath the main one.

Inside those sub-levels you can have roles and permissions too, just like on the main level. The difference lies in the fact that they're called group roles and group permissions, and they only apply within their group's section of the site.

Let’s use a practical example to demonstrate this. Suppose you’re building a news website and you allow John and Jane to post articles. But John only writes about sport and Jane only really knows about fashion. Yet they can both publish articles about anything. This is the default story in any Drupal website.

Group would allow you to divide the website into sections such as “Sport”, “Fashion”, “Domestic”, etc. and allow John to only publish articles in Sport and limit Jane to writing about Fashion.

This is just one of the many use cases Group can solve for you.

Interested? Try it out! It’s available for both Drupal 7 and 8, find out more here: https://www.deeson.co.uk/labs/9-reasons-group-drupal-8-awesome

Jun 30 2016
Jun 30
161 Website Audits and How to Do Them Right with Jon Peck - Modules Unraveled Podcast | Modules Unraveled


Website Audits

  • What is a site audit?
  • Why would you do one?
  • Intended audience
    • New clients? Existing clients?
  • What are the goals of a site-audit?
  • How do you approach a site audit?
    • What are some other approaches you’re aware of?
  • What tools and techniques do you use?
  • How do you present the results?
  • How often should you do them?
  • What are some things to avoid?
Jun 30 2016
Jun 30
These are the slides of the presentation I gave last week at DrupalDevDays Milan about the ways using the Queue API can help you speed up Drupal sites, especially in connected mode. This is a Drupal 8 update of the presentation we gave at DrupalCon Barcelona with Yuriy Gerasimov.
Jun 29 2016
Jun 29

This is the third in a series of posts recapping ImageX’s presentations at this year’s DrupalCon.

A content-driven framework is essential for any successful user experience. Content strategy and UX can no longer be considered separately, and to ensure that they work together to make your website its most effective, the following components must be considered: your audience, their interactions, SEO, content distribution, and finally, measurement. 

ImageX’s Senior Digital Strategist, June Parent, and Senior Business and UX Architect, Bjorn Thomson jointly presented on content-driven UX at this year’s DrupalCon.

Content Strategy and User-Centered Design

Content-driven UX is the confluence of two paramount disciplines: Content Strategy and User-Centered Design. A content-driven UX approach places content at the center, side-by-side with users. A website, at its most basic, is just, well, content that users interact with.

Websites are living representations of brands, storefronts, and communities. They motivate, influence, and compel us into action. Websites are powerful and their requirements are becoming more and more complex. Meeting those requirements demands stronger research and increased synergy across adjacent disciplines like content and UX design. When you merge the two, something wonderful and powerful happens -- you’re able to enhance web experiences by leaps and bounds. 

As practitioners, we have to be mindful of the businesses we serve on a deeper level – what are its ‘mission-critical’ goals, how will growth be achieved then nurtured, what’s the process to govern lead-generation content, etc. Part of our job may include scoping lead-generation strategies or increasing qualified traffic -- all housed in a beautiful, bold, and modern design. So, let’s stop and think about how we might create a growth-positive experience that motivates, influences, and/or invokes action.

Of All Topics, Why Content-Driven UX?

June was one of those designers who was put off by code. She was passionate about design and content, but not necessarily the execution of front-end development. She saw the evolution of requirements in the business and understood that it's important to deliver the best services possible. A content-driven UX approach put her at the front end of projects, where strategies around experience are shaped and formed. When you piece the information together, a lot of exciting things start to happen. A story begins to unfold, creative ideas spring up, and conversations about digital strategy happen with more granularity. There's also growing momentum, industry-wide, for the role, because it helps to meet complex marketing requirements head-on and allows creative teams to streamline workflows.
 

These days the web is bigger and entirely more complex. For some, success hinges on delivering experiences that have the ability to eclipse competitors. For others, it could be about visibility or community. No matter what the site is for, every piece of content matters. Every page and section must serve a purpose. Every interaction is consequential. When users enjoy and trust rock-solid content, they’ll return time and time again. It’s called ‘content-loyalty’ and it’s a wonderful gift from users. When you add in strategies to identify, nurture and roll ‘qualified leads’ into bonafide ‘brand-advocates’, you've got pathways towards ‘brand-loyalty’. Now, wrap a beautiful, bold design around all of that (and more). 
-  June Parent

Your Content-Driven Framework

The presentation was designed at an intermediate level. June and Bjorn assumed that the majority of the audience would likely be creative folks with backgrounds ranging from content strategy and marketing to the full gamut of web design. There was an opportunity to share insights into what a content-first framework looks like. The presentation included some attributes anyone can use to start looking at their own web projects through a content-driven UX lens. 

You can view the full presentation below:

[embedded content]
Jun 29 2016
Jun 29
Drupal 8 Media module at Google Summer of Code 2016
slashrsm, Wed, 29.06.2016 - 23:11

In this video Vijay Nandwani, our GSoC student, explains his progress on the Drupal 8 Media module.

He also wrote a blog post where he explains more technical details.

Jun 29 2016
Jun 29
1/2 Million Views of the Drupal 8 Beginner Class

Drupal 8 is now eight months old.

Has it been a success so far? Yes. Drupal 8 is running over 100,000 sites and is now more popular than Drupal 6.

In our own way, we've tried to help support Drupal 8's launch. We set ourselves a goal to help as many people as possible to use Drupal 8. We launched a Kickstarter project which provided enough financing to release over 200 free videos on YouTube. The training is the best available for Drupal 8, and it’s completely free.

The first videos released were in the Drupal 8 Beginner Class.

Today, I'm delighted to say we reached a major milestone. The Drupal 8 Beginner Class has been viewed over half a million times. That's a lot of people learning Drupal 8!

Here's the intro video from the Beginner class:

[embedded content]

Even 8 months after release, the popularity of the class shows no sign of dropping. The image below shows over 20,000 weekly views for the Drupal 8 Beginner Class, since the beginning of 2016. There's a good chance of passing one million views by the end of 2016!

d8-stats-june-2016

Next week: we launch our new Drupal 8 Site Building class

This Beginner class introduces people to all the key aspects of Drupal 8.

Next week, thanks to the support of Glowhost, we're going to launch the follow-up to the Drupal 8 Beginner class. 

This follow-up will be called “The Drupal 8 Site Building Class”.  This class is designed to help people learn how to build significantly more powerful sites with Drupal 8.

Bookmark or subscribe to our YouTube channel to be the first to know about next week's launch!


About the author

Steve is the founder of OSTraining. Originally from the UK, he now lives in Sarasota in the USA. He was a teacher for many years before starting OSTraining. Steve wrote the best-selling Drupal and Joomla books.


Jun 29 2016
Jun 29

During this last week, Google Summer of Code students had to submit their midterm evaluations and wait for their mentors' feedback. During these first weeks, I have been working on the main components of the Social API project. These components are:

  • Social API: contains the basic and generic methods, routing settings, and permissions every Social API sub-module will need
  • Social Auth: implements methods and templates that will be used by login-related modules
  • Social Post: provides methods to allow autoposting to social network’s accounts
  • Social Widgets: allows sub-modules to add widgets (like buttons and embedded content) to nodes, blocks, etc.

There are other modules which are used to test and develop the abstraction of these projects. They can be found in the Drupal Social Initiative GitHub repository.

Fortunately, on June 27th I received an email stating that I had passed the midterm evaluation. This means I will have the chance to continue working on my project as a GSoC student!

Hi gvso,

Congratulations! You have passed the midterm evaluation

Social Auth abstraction

As I mentioned above, one of the four components of the Social API project is Social Auth (social_auth), which provides an abstract layer for modules that allow user registration/login in a Drupal site.

To develop the Social Auth module, we created (actually adapted) a module for user login via Facebook in order to provide and test the abstract layer. This implementation can be found in my GitHub repository (simple_fb_connect). Nonetheless, to make sure that this abstraction works with other implementors, on June 10th we agreed to create another module for user login. For this purpose, we chose to work with the Google API, as we believe most users have a Google account. Therefore, during the last week I have been working on an implementor called Google Login (google_login).

Social Auth integration list

This module, Google Login, did not require any change to the Social Auth code. On the one hand, I am glad that it works with the current architecture. On the other hand, I am a little disappointed, as I was hoping to find new requirements that would challenge me to re-adapt the Social Auth abstraction.

Currently, the Google Login settings form only allows you to input the Client ID and Client Secret. The module adds a Google+ icon which appears on the Social Auth Login block. Furthermore, it automatically redirects users who log in to the user page /user/{uid}.

Social Auth Login block
Google Login Settings form

 

After working on the module described above, I can conclude that the current abstraction proposed by Social Auth works perfectly. However, we have only tested it with two implementors (Facebook and Google) so far, so we cannot guarantee that it will work with every other social networking service. Nonetheless, we will keep refactoring the code as needed to adapt it to new requirements.

Beta Release

We are happy to announce that we have released a beta version of Social API (social_api) and Social Auth (social_auth) modules. We invite you to test the projects and report any issue you find. And if you would like to add some new features, we will be glad to read your ideas in the issue queue.

Next week

In my last weekly summary, I described the facebook_buttons module, which is based on Social Widgets and provides Facebook buttons. So far, it only allows adding Like buttons to teasers and nodes; nonetheless, it should also provide options for the Send and Share buttons. Therefore, I will be working on adding these features to the module and releasing a beta version of it on the Drupal website.

As always, feel free to contact me if you have any questions. You can also collaborate on the Social Initiative projects (social_api, social_auth, social_post, and social_widgets). We also have our weekly meetings, so follow the Social Initiative and join us on Fridays.

Stay tuned for the next weekly summary!

Jun 29 2016
jam
Jun 29
Each day, between migrations and new projects, more and more features are becoming available for Drupal 8, the Drupal community’s latest major release. In this series, the Acquia Developer Center is profiling some prominent, useful, and interesting projects--modules, themes, distros, and more--available for Drupal 8. This week: Inline Entity Form.
Jun 29 2016
Jun 29
TL;DR The safe search constraint feature is now committed to the module, along with proper web tests. So, this week I started off with a new feature offered by the Google Cloud Vision API: "Image Properties Detection". It detects various properties and attributes of an image, namely the RGB components, pixel fraction, and score. I have worked on detecting the dominant color component in the image attached to a piece of content, and displaying all the content that shares a similar dominant color. It is pretty much like what we see on e-commerce sites.
Last week I worked on writing web tests for the safe search constraint validation on image fields. This feature is now committed to the Google Vision API module.
This week I have worked on implementing another feature provided by the Google Cloud Vision API, namely Image Properties Detection. This feature detects the red, green, and blue color components of an image, along with their pixel fractions and scores. I have used it to determine the dominant color component (i.e. red, blue, or green) in an image, and to display all the content whose images share the same dominant color.
I have developed the code which creates a new route, /node/{nid}/relatedcontent, to display the related content in the form of a list. This makes use of the controllers and routing system of Drupal 8. The Controller class is extended to render the output of our page in the format we require. The contents are displayed as a list of links to their respective nodes, labelled by their titles.
In addition to the grouping of similar content, the colors are also stored as taxonomy terms in a taxonomy vocabulary programmatically generated under the name Dominant Colors.
This issue is still in progress and requires a little more modification. I need to add a link to the new route in each of the nodes, so as to provide a better interface for accessing that content. I will then put this patch up for review.
A very simple example of creating routes and controllers in your module can be found here.
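For illustration, the routing entry for such a path might look something like the following in a module's routing YAML file (the module and controller names here are hypothetical, not necessarily the ones used in the actual patch):

```yaml
# example_module.routing.yml (hypothetical names, for illustration only)
example_module.related_content:
  path: '/node/{nid}/relatedcontent'
  defaults:
    _controller: '\Drupal\example_module\Controller\RelatedContentController::build'
    _title: 'Related content'
  requirements:
    _permission: 'access content'
```

The controller's build() method would then load the nodes sharing the dominant color and return a render array for the list.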
Jun 29 2016
Jun 29
TL;DR It has been over a month since I started working on my Drupal project "Integrate Google Cloud Vision API to Drupal 8", and I have now crossed the second stage towards the completion of the project, the first being selection into the Google Summer of Code 2016 programme. Here, I would like to share my experiences and accomplishments during this one-month journey, summarize my further plans for the project, and outline the features I will be implementing in the coming two months.
Let me first describe the significance of this post and what "midterm submission" actually means. The GSoC coding phase is divided into two halves: the midterm submission and the final submission. In the first half, students try to accomplish around 50% of the project and submit their work to their mentors for evaluation. Those who pass the midterm evaluations are allowed to proceed further and complete the remaining portion of their project.
Now coming back to my experiences: after successfully passing through the Community Bonding period of the GSoC 2016 programme, it was time to start coding our project proposal into reality. As I shared earlier, during the Community Bonding period I came to know that the project had already been initiated by Eugene Ilyin (who is now a part of my GSoC team). So, we discussed the project and set a roadmap of the goals we needed to achieve during the GSoC period. I started coding on the very first day of the coding phase, moving the new as well as existing functions to services. My mentors Naveen Valecha, Christian López Espínola and Eugene Ilyin really helped me a lot and guided me whenever and wherever I needed their guidance and support. They helped me get through new concepts and guided me to implement them in the most effective way, to make the module the best that we could.
During this period, I also came to learn a lot of new techniques and concepts which I had not worked with before. Right from the very first day of the coding period, I have been coming across new things every day, and it is really interesting and fun to learn all these techniques. In this one-month period, I learnt about services and containers and how to implement them. The post on services and dependency injection in Drupal 8 and the Drupalize.me videos were of great help in understanding the concept of services and implementing dependency injection. I also learnt about the use of validators and constraints and how they can be implemented, both in general and specifically on fields. I also learnt how to create test modules and alter various classes and their functions in our tests, so as to remove the dependency on web access or on valid information for our testing purposes. I learnt new things every day and enjoyed implementing them to turn our module plan into reality. At present, the module supports the Label Detection feature of the Vision API, along with tests to verify whether the API key has been set by the end user. The Safe Search Detection feature is currently available as a patch, which can be found here and will soon be committed to the module.
I have shared all the details of my work on the Drupal Planet. In addition, I have also uploaded a video demonstrating How to use Google Vision API module.
Jun 29 2016
Jun 29
Mark Pavlitski, Technical Director, Jun 29th 2016

There has been a lot of coverage recently about the benefits and opportunities associated with ‘Headless Drupal’, and while this can be an important feature for some audiences, it has overshadowed an arguably more important feature of Drupal 8: its ability to seamlessly sit alongside other technologies and share content resources.

It is really important to continue advancing Drupal's web services support. There are powerful market trends that oblige us to keep focused on this: integration with diverse systems having their own APIs, the proliferation of new devices, the expanding Internet of Things (IoT), and the widening adoption of JavaScript frameworks.

Dries Buytaert, Founder of Drupal

One of the key development goals of Drupal 8 was to produce a content platform which can easily integrate with a wide ecosystem of business applications and digital technologies. Drupal 8 was built with an API-first approach which means connecting content to other sites and applications is an expected use-case.

Drupal 8 takes a presentation-neutral approach to managing content; the content workflow is separated from the presentation of that content, meaning it can be pulled into relevant contexts on your website or pushed out across multiple third-party platforms.

Content is king

Bill Gates

Quality content is pivotal to successful online marketing strategies. Leaving it to sit in a siloed website misses the opportunity to make the very most of your best marketing asset.

Instead, it should be produced once and then re-published and promoted everywhere from your corporate intranet through to consumer apps, partner-websites, and across your entire social media presence.

By utilising Drupal 8 as a publishing framework you can aggregate and publish content resources across your existing digital solutions through rich data services and content APIs.

What this means to you is that it’s easier than ever to share content across multiple channels from one central application.

With organisations such as Oxfam, The Economist and The City of London using Drupal as their primary content platform, it’s clear that Drupal 8 has a lot to bring to modern digital businesses.

Is it time you took a fresh look at Drupal 8 and how it fits into your digital ecosystem?

Microserve are experts in all things Drupal. If you're interested in a Drupal 8 project or want to know more about how Drupal 8 can help your business, please get in touch.


Jun 29 2016
Jun 29

Drupal events always fascinate me. They not only provide a wonderful environment for learning from each other, contributing to Drupal together during code sprints, and meeting new people, but they make the community stronger as well. Drupal Developer Days is probably one of the best kinds of events to make all of this happen in one place.

Since helping organize Drupal Developer Days in Szeged in 2014, I had been really excited to see how things are done at other Drupal Dev Days. Sadly, I missed DDD 2015 in Montpellier, so attending the event in Milan sounded like a good idea. Until this June, my part in Drupal events was mainly organizing and volunteering, and of course attending sprints, but this time I was also a speaker, which (for me at least) made things a bit different. From Cheppers, Miro Michalicka joined me, and he gave a talk as well, about Drupal 8 REST.


Drupal Developer Days’ format is focused mainly on sprints, but there are also sessions and BoFs. In Milan, the event ran for six days, which gave us enough time and space to make the most of the experience. In my case, submitting a session proposal for a totally non-techy talk (fyi: How to Drupalcamp) was an interesting addition on top of the regular ‘running around, fixing stuff, making sure all goes well’ duties. I also wanted to help out as a volunteer; having organized a DDD myself before, I had experience with these things, and I was very happy that the wonderful organization team appreciated it.

So, how was Drupal Dev Days in Milan?

The event was Tuesday June 21st to that Sunday, the 26th, with sprints on all six days and sessions on three days. Having almost 400 attendees in one place requires focus, and I believe the Italian organization team did a fantastic job. Here’s a list of volunteers: Alessandra Petromilli, Alessandro Sibona, Andrea Pescetti, Antje Lorch, Chandeep Khosa, Chiara Carminati, Claudio Beatrice, Edouard Cunibil, Fabiano Sant'ana, Guillaume Bec, Julien Dubois, Kester Edmonds, Luca Lusso, Marcello Testi, Marco Moscaritolo, Paolo Libanore, Pierluigi Marciano, Riccardo Bessone, Simone Lombardi, Tamer Zoubi, Yan Loetzer, Yi Yuan, Zsófi Major.


We consumed a lot during this week - 800 liters of water, 100 kg of pizza (ofc!), and 1,200 coffees went down the throats of the attendees. The food was wonderful, tasty and very diverse - we could all find our preferred snacks during lunch breaks. And yeah, GO GO healthy food at all Drupal events!

Sprints - let the numbers talk!

One of the best slots a DrupalCamp can have is sprints. This is where the actual work gets done, bringing together many, many excited contributors who make better progress by helping each other. Of the approx. 373 attendees, 107 signed up for the sprints (and most likely even more were sprinting), fixing a total of 82 tickets, with 217 more now in Needs Review, Needs Work, Active or RTBC. You can check the contrib kanban page for more details.

So many initiatives, so many people! As Cheppers also takes part in improving Drupal UX, I was happy to see so many people working on the UX and UI initiative and the Media initiative; the Search API, Open Social and of course Drupal Commerce 2 were also well populated with contributors. To see the variety of sprint topics, here’s the sprint planning sheet.

Photo by dasjo

After my session I could also take part in the work: I translated modules into Hungarian, which gave me the satisfaction of finally actually doing something. Now Chaos tool suite (ctools) 8.x-3.0-alpha26 and Pathauto 8.x-1.0-alpha are both available in Hungarian as well.

Sessions

First of all, I am extremely happy to say that 3 out of 5 keynotes were held by women, and overall it was great to see how many women attended DDD itself. This is wonderful, and truly shows that this field is diverse and inclusive. One of the sessions, Are Geeks from Mars and Geekettes from Venus?, was a panel discussion directly about gender & diversity. All the talks were really useful, and it was impressive to see so many people delivering their sessions in a language that isn’t their native one. You can find the program here; videos will be uploaded soon.

Cheppers talks

Miro, our Drupal developer, talked about Drupal 8 REST in one of the shorter tracks, and despite two other sessions on similar topics running at the same time, he still had a rather large audience. Here are the slides.


My talk was about the one thing I really have experience in: How to organize DrupalCamps? This was my very first real talk at a Drupal event, and despite it not being a very techy topic, I was happy to have a nice audience and positive feedback during and after the session. Organizing DrupalCamps is really a great way of contributing to Drupal, and I hope I can continue my Drupal and speaker career with this topic in the future. You can check out my slides for some details.

Photo by dasjo

The video recordings of both of our sessions will be published soon.

Social events

Of course, Drupalistas like to take a break from work, and it was no different at this DDD. Social events are the best way to get to know each other a little better outside of what we do publicly. Thankfully, we didn’t miss a minute of enjoying our time together.

As speakers, we were lucky to attend a great speaker & volunteer dinner. It was IN-CRE-DI-BLE! I’ve never seen that many courses in one meal. Literally: after our first starter, a seafood platter, came a second course of fried veggies, then two rounds of hilariously good pasta and another two rounds of pizza challenged our bellies big time. Thank you again for feeding us well; food is always the best way to the heart ;)

On Thursday we had an awesome social evening, with great food (again!), and superb beers. All in all, we consumed at least 500 beers during the night, which of course resulted in some very great discussions about Drupal, community, work, and life.

Photo by dasjo

To celebrate the session days, as a goodbye gift we were invited to a Night at the Museum at Leonardo3 in the Galleria Vittorio Emanuele II, to study the life and work of Leonardo da Vinci, who spent 20 years in the city of Milan.

Photo by dasjo

All in all Drupal Developer Days 2016 in Milan was wonderful. Thanks so much for all the great work and for having us! For next year’s event you can submit your proposal here.

Here is a highlight of upcoming events in 2016. Check out Drupical to find a full list.

Hope to see you soon somewhere!

Jun 29 2016
Jun 29

Everyone has a routine. For some, it’s their morning coffee, for others it’s going to the gym after work. For me, it’s heading to Mangia Pizza every Thursday from 7pm-9pm, for the last 6 years. Why do I do this? Because the Austin Drupal Dojo meets every Thursday, and we have a great crowd!

Buster, Chris, Fito, Irma, James, John, Marc, Mark, Nick, me (behind the camera) and others at the August 27, 2015 Austin Drupal Dojo.

The Austin Drupal Dojo is a meetup where anyone is welcome to hang out with other Drupalistas in a "hive mind" environment. There are no set topics or presentations. The pizza is delicious, beer refreshing, and conversations vary wildly. Most people bring a laptop and a project, but others just come for the community.

Our regulars range from Drupal experts to hobbyists to newbies; from full-time employees to freelancers to those looking for work. We also have a steady stream of folks who are looking for help. Maybe they’re just curious about Drupal, or need to learn it for a new job, or want to start their own business. The Drupal community is known for its welcoming atmosphere, and the Austin Drupal Dojo is an exemplary model of that community spirit. Our members jump at the chance to answer questions and help those in need, often sparking a group conversation about best practices and possible solutions.

Many of our members contribute to the other Austin Drupal Meetups (yes, we have more than one!) by speaking and/or organizing, and to the Drupal project in general by fixing core/contrib/documentation and organizing Sprint Weekends. When I became the de facto organizer, attendance was modest, and I never imagined it could blossom into such a great group. It’s been my privilege to work alongside these members of our community, and I’d like to thank everyone who’s joined us, past, present, and future.

If you’re ever in the Austin area, grab your laptop and your appetite, and come join us!

What: Austin Drupal Users Group - Drupal Dojo
When: Every Thursday 7-9pm
Where:

Mangia Pizza
8012 Mesa Dr
Austin, Tx 78731
(512) 349-2126

Jun 29 2016
Jun 29

With the new configuration management system in Drupal 8 core we now have a powerful way to manage site configuration between our different environments. The system does not prescribe a strict workflow for how to use configuration. In this post I’d like to share some workflows we have been trying out.

First let's define what site configuration is.

There are two types of configuration in a Drupal site. First, simple configuration, which stores basic key/value pairs such as integers, strings, and boolean values. Second, configuration entities, which we explored in this post. Configuration entities are more complex structures that define the architecture of our site - things such as content types, views, and image styles.

Both types of configuration are exportable.
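
For example, exported simple configuration is just a YAML file in the staging directory; the site name lives in system.site.yml (the values below are illustrative):

```yaml
# system.site.yml - exported simple configuration (illustrative values).
name: 'My Site'
mail: admin@example.com
slogan: ''
page:
  front: /node
```

Configuration entities export to YAML in exactly the same way, just with richer structures (e.g. node.type.article.yml for a content type).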

Where does this configuration live?

Site configuration lives in two places depending on its stage in life (no pun intended). There are two stages in a piece of configuration’s life:

  1. Staging

  2. Active

Active configuration lives in the database by default. It is the configuration that is currently being used by the site. Staged configuration lives on the file system by default.

When you make changes to the site within the web interface, for example changing the site name, you are changing the active configuration. This active configuration can then be exported and staged on another instance of the same site. The last piece is key:

Configuration is only to be moved between instances of the same site.

For example, I change the site name on our DEV environment and want to move this change to our TEST environment for testing.

Ok, let's talk workflows

There are a few ways we can manage the site configuration. Site configuration, like our code should follow a “flow up” model. That is, configuration will be committed to our git repo and moved up into higher order environments:

LOCAL → DEV → TEST → PROD

In this workflow configuration changes will be made as far down the chain as possible. In this case on a developer's local environment. These configuration changes should be exported and committed to their own branch. This branch can then be pushed for review.

Once pushed these configuration changes will be “staged” on the DEV environment. A site administrator will need to bring these staged changes into the “active” configuration. There are two ways we do that today:

  1. Through the site’s UI under ‘admin/config/development/configuration’

  2. Using the ‘drush cim’ command to import staged configuration into active.
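
Put together, the round trip might look like this on the command line (the branch name and sync path are illustrative):

```shell
# On the developer's local environment: export active configuration
# to the staging (sync) directory, then commit and push it.
drush config-export -y          # alias: drush cex
git add config/sync
git commit -m "Change site name"
git push origin config-changes

# On DEV, once the branch is deployed: import the staged
# configuration into the active store.
drush config-import -y          # alias: drush cim
```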

From what we are seeing, this seems to be the de facto workflow at this point.

Further workflow ideas

I think some interesting workflows could emerge here. One idea is to have a branch (or tag) that triggers configuration imports using drush and fires off some automated tests. If everything passes, merge into another designated branch for movement to a live environment.

Another idea is to use some of the emerging contrib modules to manage different “configuration states”. I believe this was discussed in the config_tools project’s issue queue. Using this idea we could tag known “good” configurations and move between different “configuration states” of a given site. We could even host the configuration in a separate repo from our code, if that makes any sense.

Bottom Line  

The new configuration management system offers a powerful tool for moving changes between different instances of our sites. It does not, however, define an exact workflow to follow. I look forward to talking with folks about how they leverage this system to coordinate the efforts of high-performing teams!

If you or your team is using this in different ways we would love to hear about it!

Jun 29 2016
Jun 29

What feelings does the name Drupal evoke? Perceptions vary from person to person; where one may describe it in positive terms as "powerful" and "flexible", another may describe it negatively as "complex". People describe Drupal differently not only as a result of their professional backgrounds, but also based on what they've heard and learned.

If you ask different people what Drupal is for, you'll get many different answers. This isn't a surprise because over the years, the answers to this fundamental question have evolved. Drupal started as a tool for hobbyists building community websites, but over time it has evolved to support large and sophisticated use cases.

Perception is everything

Perception is everything; it sets expectations and guides actions and inactions. We need to better communicate Drupal's identity, demonstrate its true value, and manage its perceptions and misconceptions. Words do lead to actions. Spending the time to capture what Drupal is for could energize and empower people to make better decisions when adopting, building and marketing Drupal.

Truth be told, I've been reluctant to define what Drupal is for, as it requires making trade-offs. I have feared that we would make the wrong choice or limit our growth. Over the years, though, it has become clear that not defining what Drupal is used for leaves more people confused, even within our own community.

For example, because Drupal evolved from a simple tool for hobbyists to a more powerful digital experience platform, many people believe that Drupal is now "for the enterprise". While I agree that Drupal is a great fit for the enterprise, I personally never loved that categorization. It's not just large organizations that use Drupal. Individuals, small startups, universities, museums and non-profits can be equally ambitious in what they'd like to accomplish and Drupal can be an incredibly great fit for them.

Defining what Drupal is for

Rather than using "for the enterprise", I thought "for ambitious digital experiences" was a good phrase to describe what people can build using Drupal. I say "digital experiences" because I don't want to confine this definition to traditional browser-based websites. As I've stated in my Drupalcon New Orleans keynote, Drupal is used to power mobile applications, digital kiosks, conversational user experiences, and more. Today I really wanted to focus on the word "ambitious".

"Ambitious" is a good word because it aligns with the flexibility, scalability, speed and creative freedom that Drupal provides. Drupal projects may be ambitious because of the sheer scale (e.g. The Weather Channel), their security requirements (e.g. The White House), the number of sites (e.g. Johnson & Johnson manages thousands of Drupal sites), or specialized requirements of the project (e.g. the New York MTA powering digital kiosks with Drupal). Organizations are turning to Drupal because it gives them greater flexibility, better usability, deeper integrations, and faster innovation. Not all Drupal projects need these features on day one -- or need to know about them -- but it is good to have them in case you need them later on.

"Ambitious" also aligns with our community's culture. Our industry is in constant change (responsive design, web services, social media, IoT), and we never look away. Drupal 8 was a very ambitious release; a reboot that took one-third of Drupal's lifespan to complete, but maneuvered Drupal to the right place for the future that is now coming. I have always believed that the Drupal community is ambitious, and believe that attitude remains strong in our community.

Last but not least, our adopters are also ambitious. They are using Drupal to transform their organizations digitally, leaving established business models and old business processes in the dust.

I like the position that Drupal is ambitious. Stating that Drupal is for ambitious digital experiences however is only a start. It only gives a taste of Drupal's objectives, scope, target audience and advantages. I think we'd benefit from being much more clear. I'm curious to know how you feel about the term "for ambitious digital experiences" versus "for the enterprise" versus not specifying anything. I'm hoping we can collectively change the perception of Drupal.

PS: I'm borrowing the term "ambitious" from the Ember.js community. They use the term in their tagline and slogan on their main page.

Jun 29 2016
Jun 29

CKEditor is a popular WYSIWYG (What You See Is What You Get) editor, and it is the default WYSIWYG editor in Drupal 8. CKEditor has many plugins of its own.

Recently I got an opportunity to work with my Valuebound teammates for some top-level media companies like Time Inc and Farm Journal. It was a challenging experience, especially in the area of content creation and management workflow.

We got a requirement where “Content Authors” should be able to upload images in between paragraphs of content. When the end user clicks on one of those images, the image has to be shown as a popup. So we decided to create a CKEditor plugin so that users with the “Content Author” or “Content Editor” role can upload an image which shows up in a popup when it’s clicked. Thanks to this requirement, we were fortunate to develop a module called Simple Image Popup and contribute it back to the Drupal community.

Here we are going to see how to create a new plugin for CKEditor in Drupal 8, one which inserts an image wrapped in an anchor tag.

Steps to create CKEditor plugin.

  1. Define and add basic plugin info in hook_ckeditor_plugin_info_alter() in your module file.
    File: my_module.module
    
     
  2. Define the MyPlugin class, extending CKEditorPluginBase and implementing CKEditorPluginConfigurableInterface. In this plugin class we need to define the following methods:
    File: MyPlugin.php
    
    1. isInternal - Returns whether this plugin is internal.
    2. getFile - Returns the absolute path to the plugin.js file.
    3. getLibraries - Returns dependency libraries, so those get loaded.
    4. getConfig - Returns the configuration array.
    5. getButtons - Returns the button info.
    6. settingsForm - Defines the form for plugin settings, which can be configured at ‘admin/config/content/formats/manage/full_html’. For example, the form shown below is where we can set the max size, width and height of the image, against which an uploaded image will be validated.

CKEditor plugin settings form.
  3. Define client side plugin.js. Basic things we need to implement in plugin.js are:
    File: js/plugins/myplugin/plugin.js
    
    1. Add the new plugin to CKEditor. In the beforeInit method we have to define our plugin command and add a UI button which, when clicked, triggers that command.
    2. Define routing for the dialog, which points to Form controller.
      File: my_module.routing.yml
      
    3. Define the plugin dialog form and its form submit handler as we normally do in Drupal 8. In addition, on form submit we need to return an ajax response command.
      File: my_module\Form\EditorImagePopupDialog
      
      And whenever an edit operation happens we need to populate the existingValues JSON object in the plugin.js file, so we can get those values back in buildForm.
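
To give a feel for step 2, here is a minimal sketch of such a plugin class. The plugin ID, label and file paths are illustrative, not the actual code from Simple Image Popup:

```php
<?php

namespace Drupal\my_module\Plugin\CKEditorPlugin;

use Drupal\ckeditor\CKEditorPluginBase;
use Drupal\editor\Entity\Editor;

/**
 * Sketch of a CKEditor plugin (ID, label and paths are illustrative).
 *
 * @CKEditorPlugin(
 *   id = "myplugin",
 *   label = @Translation("My image popup plugin")
 * )
 */
class MyPlugin extends CKEditorPluginBase {

  /**
   * {@inheritdoc}
   */
  public function getFile() {
    // Path to the client-side plugin.js file.
    return drupal_get_path('module', 'my_module') . '/js/plugins/myplugin/plugin.js';
  }

  /**
   * {@inheritdoc}
   */
  public function getConfig(Editor $editor) {
    // Settings passed through to the client-side plugin.
    return [];
  }

  /**
   * {@inheritdoc}
   */
  public function getButtons() {
    // The key must match the button name registered in plugin.js.
    return [
      'MyPlugin' => [
        'label' => $this->t('Insert image popup'),
        'image' => drupal_get_path('module', 'my_module') . '/js/plugins/myplugin/icon.png',
      ],
    ];
  }

}
```

A configurable plugin would additionally implement CKEditorPluginConfigurableInterface and its settingsForm() method, as described above.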

Finally, the configuration.

  1. Go to ‘admin/config/content/formats’, then click the configure link for the text format to which this plugin should be added.
  2. In the configuration form, we can drag and drop the newly created plugin button into the CKEditor toolbar.
  3. Now, save the configuration form.
     

Hurray…! Now we can use our newly created CKEditor plugin. For more reference, see the contributed module Simple Image Popup, in which we have created a CKEditor plugin for Drupal 8.

Jun 29 2016
Jun 29

This one tripped me up on a recent Drupal 8 project.

Easy to miss when you're working in a development oriented environment with things like JavaScript preprocessing turned off.

A JavaScript file was being added just fine with aggregation turned off, but not getting added with it turned on.

While working on a Drupal 8 client project we were using our module's .libraries.yml file to add a custom plugin for jQuery Validation. Our plugin was using the Moment.js date library to add strict date checking so we could check for overflows. The default date validation in that plugin treats dates like 55/55/5555 as valid - because they are cast to valid JavaScript dates by the browser. We needed to detect overflows and report an error.

It was all working fine locally, but when I sent a pull request, it didn't work in the pull request environment (we have per-pull-request environments).

After some head scratching I found the issue.

My libraries.yml definition looked like this:

moment_date:
  version: VERSION
  js:
    js/moment_date: {}
  dependencies:
    - clientside_validation/jquery.validate
    - mymodule/moment
    - core/modernizr

If you picked it, I've missed the .js suffix on the file name.

Locally I was working with developer optimised settings, so I had a settings.local.php with the following

$config['system.performance']['js']['preprocess'] = FALSE;

i.e. I was disabling JavaScript aggregation so I could rapidly iterate, something you'd normally do.

Problem was on the Pull Request environment JavaScript aggregation is turned on (as it should be).

And mysteriously this made a difference.

My libraries.yml file was just plain wrong, it should have been

moment_date:
  version: VERSION
  js:
    js/moment_date.js: {}
  dependencies:
    - clientside_validation/jquery.validate
    - mymodule/moment
    - core/modernizr

But with JavaScript aggregation turned off, my webserver was still serving the file, returning moment_date.js when moment_date was requested - silently hiding the bug from me.

A tricky one, but one worth sharing.

Jun 29 2016
Jun 29

This week I worked on making the module a bit more flexible by integrating pluggable systems into it. This is something we had planned initially while writing the architecture document for the module, but couldn’t pursue earlier because our focus was on developing a working prototype first. Since that’s done, we’ve reached the perfect time for this development. It should be noted that the pluggable systems are important because Pubkey Encrypt deals with security, and it is essential for the module’s success to be as flexible as possible. This way, users will be able to configure the behavior of the module as per their organizational security standards and other demands not covered by the out-of-the-box functionality.

Accordingly, here are the two pluggable systems (i.e. plugin types) I added to the module:

  • Asymmetric keys generator: An implementation of this plugin type would handle the generation of public/private keys, the encryption of a piece of content with the public key, and the decryption of a piece of encrypted content with the private key.

By having this logic encapsulated into a pluggable system, users would be able to use the asymmetric keys generator of their choice (e.g. OpenSSL library, Elliptic curve cryptography library etc.) simply by providing a corresponding plugin implementation.

  • Login credentials provider: An implementation of this plugin type would handle the selection of the user data attributes needed for key generation, which Pubkey Encrypt would use during data encryption/decryption operations.

By having this logic encapsulated into a pluggable system, users would be able to use the login credentials of their choice (e.g. password, PIN etc.) simply by providing a corresponding plugin implementation.
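
As a rough illustration of the first plugin type, its contract could look something like the interface below. The names are my own sketch, not necessarily the module's actual API:

```php
<?php

/**
 * Illustrative contract for an asymmetric keys generator plugin.
 * Method names are hypothetical, not the module's actual API.
 */
interface AsymmetricKeysGeneratorInterface {

  /**
   * Generates a new key pair, e.g. via the OpenSSL extension.
   *
   * @return string[]
   *   An array with 'public_key' and 'private_key' values.
   */
  public function generateAsymmetricKeys();

  /**
   * Encrypts a piece of content with the given public key.
   */
  public function encryptWithPublicKey($plaintext, $public_key);

  /**
   * Decrypts a piece of encrypted content with the given private key.
   */
  public function decryptWithPrivateKey($ciphertext, $private_key);

}
```

Any library (OpenSSL, an elliptic-curve implementation, etc.) can then be swapped in by providing a plugin that fulfils this contract.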

For developing the pluggable systems, I referred to this really great article on D8 plugins, Unravelling the Drupal 8 Plugin System, provided by Drupalize.me. This in-depth article not only describes the higher-level overview of Drupal 8 plugins but also contains a step-by-step tutorial on creating your own plugin types along with some sample code. So I created the plugin systems and then added the configuration layer as an “Initialization settings form”. Through this form, users would configure and initialize the module so it could start working. See it in action here:

Pubkey Encrypt initialization settings form.

In my weekly meeting with mentors Adam Bergstein (@nerdstein) and Colan Schwartz (@colan), we discussed the overall architecture of the pluggable systems I implemented. We also made yet another design-related decision: we won’t allow anyone to change their choice of plugins once the module has been initialized. The reason is that allowing a plugin change after initialization would mean re-initializing all users’ keys and re-encrypting all data, a use case we’ll never encourage because of the heavy overhead it would put on the module’s performance. So if a user has chosen to use the OpenSSL library for the generation of asymmetric keys, they’re bound to use it for the lifetime of the module. If they really want to change any plugin, the only option available would be to uninstall and reinstall the module with the desired configuration. I’ve created an issue on the D.O project page to formally capture this decision.

Restricting the change of plugins after the module has been initialized.

Then I wrote default plugins for the two pluggable systems, to make the module behave as a plug-and-play solution for typical use cases. We’ll ship the following default plugins via a submodule within the module:

  • RSA-based asymmetric keys generator using OpenSSL php extension.
  • User passwords-based login credentials provider.

After integrating the pluggable systems into the module, I thought it’d be really cool if I could make the “asymmetric keys generator” plugin type configurable from the UI, since a keys generator plugin could ask for user configuration like key size, digest method etc. I couldn’t find any tutorial on how to do this, but I figured out that the Encrypt module provides one such configurable plugin type. So I used the source code from the Encrypt module to learn how to accomplish the task, but still got stuck on it for two days. I really enjoyed the learning process though, and after spending hours exploring Encrypt’s source code and running it step-by-step through a debugger, I finally accomplished the task. I also tweaked the default asymmetric keys generator plugin to reflect this change, to make it easy for other developers to use this default plugin implementation as an example of how their own implementations can handle user configuration. See it in action here:

An Asymmetric Keys Generator plugin configurable from the UI.

After that, I fixed the tests to have the module initialized before making any assertions. Re-running the tests revealed that everything is working completely as expected. Have a look at all the work I did this week here: Pubkey Encrypt - Week 5 work. My mentors have yet to review and merge the commits.

Jun 28 2016
Jun 28

Since the web was born, information technology (IT) professionals have been working to make sure their organizations had a presence online. In the past few years, we have seen a shift in those digital dollars - right onto the Marketing Department’s doorstep. This signals a larger pivot in thinking. Your website is no longer a static, “nice to have” piece of technology, but a dynamic, evolving hub for your company’s marketing, branding and lead generation efforts. The development of “decoupled” architecture serves to support this shift by giving marketers more flexibility to create components that provide unique user experiences across their customer’s journey. In simplest terms, “decoupled” refers to the separation between the back-end of your website (your CMS) and the front-end (or many front-ends). You can learn more in a recent article from Drupal’s founder, Dries Buytaert.

Here are 4 benefits that Enterprise Marketers should be excited about when leveraging this approach.
 

Experience Management

How often do you say “what if our website could…” and then dial yourself back because of time or resource constraints from your design or development team? A decoupled architecture makes “yes” a far more frequent answer to new ideas for your site. Drupal is traditionally more rigid in its design capabilities, and while Drupal 8’s integration of Twig is a large leap forward, there are still limitations when you’re aiming to create unique, dynamic components to support your campaigns. With a decoupled system, marketers are free to let their creativity reign, without running into limitations that previously existed.
 

Flexibility for Constant Evolution

Making the decision for a website redesign is a huge commitment of time and resources. In many cases, marketing teams push their website past its expiration date. Teams are often forced to choose between accepting their website as it stands (even if the design is stale or not meeting expectations) or dedicating the resources for a redesign and potentially placing other strategic initiatives on the back burner.

However, when a website is built using decoupled architecture, marketers have far more flexibility: the back-end technology powering a site can be upgraded, and the design updated to create new experiences, independently of each other. You’ll have the opportunity to create and change experiences as you learn more about your customers and prospects by working with your design team. This gives your front-end team full control over the user experience, leveraging their preferred tools alongside Drupal’s strength as a CMS. Marketers can engage designers based on their vision and strategy, free of the limitations of a given tool, and designers can use the best tool to accomplish the desired feature. In our recent series on Weather Underground, Matt Davis gave a great example of how this worked with Drupal and Angular JS.
 

“Write once, publish everywhere” becomes standard

Drupal is designed with large organizations in mind when it comes to storing and organizing content in a single repository. In addition, Drupal 8’s upgraded publishing functionality gives contributors a more streamlined editorial experience for content creation, and Drupal 8’s RESTful API makes it more straightforward than ever for external applications or services to interact with content on your Drupal site.

Because a decoupled architecture relies on Drupal’s core web services to provide data to the front-end, pushing content to other places becomes more manageable. This matters because once you publish a piece of content, it becomes available for use in many different places, including mobile apps, IoT devices, various feeds, and channels that haven’t been invented yet.
 

Right tool for the right job

A recent study by Black Duck Software found that more than 78% of enterprises run on open source, with fewer than 3% indicating that they don’t rely on any open source software. Given that the key drivers for open source adoption are flexibility, scalability, and speed, it’s no surprise that Drupal has become more popular than ever with enterprise organizations in recent years. These organizations have learned that the more content you have, the stronger Drupal is at helping to categorize, store, and structure that information.

What makes Drupal particularly unique is the back-end experience. When you use a decoupled architecture to separate the front end and back end, you open the door to a variety of programming languages and design philosophies to accomplish your goals. This means the options are limitless in what your team can envision, what your UX/UI team can create, and who can help you create it.

Overall, decoupled architecture is a concept gaining popularity across the development community given its benefits and potential upside. In our next post, we’ll discuss the various options for decoupling and the differences and pros/cons of each.

Additional Resources

Real World Drupal 8 for Front End Developers | Blog Post
Drupal 8 for Marketers: SEO | Blog Post
An Overview of Drupal 8’s Business Benefits | Webinar Recording

Jun 28 2016
Jun 28

Hello Michigan Drupal folks and beyond,

The final call for sessions for this year's DrupalCamp Michigan will be July 5th. Please submit your session proposals before that time:

http://2016camp.michigandrupal.com/sessions

In the mean time take a look at some of the great sessions proposed by the community:

http://2016camp.michigandrupal.com/sessions/proposed

Jun 28 2016
Jun 28

I recently had to create a new layout that mimicked the Pinterest layout. Masonry to the rescue! (sorta...) With Drupal already crapping out the content via Views, we could just use the Masonry Views plugin, right? Sorta. Well, it worked... sorta. There were problems, and I don’t like problems, only solutions.

I like a very NON-hacky way of doing things. Masonry Views worked for the desktop screen size but failed miserably at anything smaller. We were working with a responsive design, so that was unacceptable. There was simply no amount of tweaking the options and CSS it came with that made me happy. I’m also not a fan of CMS plugins controlling layout; they tend to be crappy implementations with far less control. I don’t speak for all of them, of course, just my experience.

I wanted control... as much as I could get. So I abandoned the Views plugin and decided to use the raw jQuery plugin with my own CSS.

This assumes ya know how to use RequireJS and jQuery plugins.

Step 1 - The view.

So, the view just had to output the HTML, nothing else. Set the view’s output format to ‘Unformatted list’. Nice: just simple, clean Drupal output.
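For reference, an unformatted-list view gives you markup roughly like this (the exact class names depend on the view’s machine name, assumed here to be ‘community’):

```html
<div class="view view-community">
  <div class="view-content">
    <div class="views-row views-row-1"> ... </div>
    <div class="views-row views-row-2"> ... </div>
    <div class="views-row views-row-3"> ... </div>
  </div>
</div>
```

These are the wrappers we’ll point Masonry and the CSS at later.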

Step 2 - Including the jQuery plugin code.

I’m also not a fan of including boatloads of JS on every page just so you can use it “at some point” on some page of the site. That means unnecessary load times for all the other pages.

RequireJS happiness ensued.

The main.js config file:

requirejs.config({
  paths: {
    // vendor
    'masonry': 'vendor/masonry.pkgd.min',

    // custom
    'jquery': 'modules/jquery-global',
    'community': 'modules/community'
  }
});

require([
  'jquery'
], function ($) {
  'use strict';

  // DOM ready
  $(function () {

    // js loaded only on community page
    if ($('body').hasClass('page-community')) {
      require(['community']);
    }

  }); // DOM ready
});

We’re telling Require where to find the libraries and the specific module files that have the code we need. Then, when the DOM has loaded, we check whether we’re on a specific page and, if so, load up the required JS. The ‘community’ page is where the Masonry plugin is required, so let’s look at that file (community.js).

/**
 * Community
 * @requires jquery, masonry
 */
define([
  'jquery',
  'masonry'
], function ($, masonry) {
  'use strict';

  /**
   * object constructor
   */
  var Community = function () {
    this.init();
  };

  /**
   * init community module
   */
  Community.prototype.init = function () {
    // init masonry
    $('.view-community .view-content').masonry({ itemSelector: '.views-row' });
  };

  /**
   * DOM ready
   */
  $(function () {
    var community = new Community();
  });
});

So now we have the Masonry jQuery plugin loading only on the community page. By inspecting the output of that view on the page, I was able to figure out which elements to target so Masonry knows which is which regarding content. The ‘.views-row’ elements were each ‘element’ of content, while ‘.view-community .view-content’ was the parent.

That just gets all the JS working, but it will most likely still look like poo (technical monkey term). -- Time to start slinging that poo around until we have something nice to look at. Yes, we can actually polish a turd.

Step 3 - CSS(SCSS) Kung-Poo

.view-community {
  max-width: 100%;
  margin: 0 auto;

  .view-content {
    width: 100%;
    overflow: hidden;

    .views-row {
      width: 48%;
      margin: 0 1%;
      margin-bottom: rem-calc(20);
      overflow: hidden;
      border-radius: 15px;
      background: $white;
      font-size: rem-calc(12);
      line-height: rem-calc(18);

      @media #{$small-portrait-up} {
        margin: 0 1%;
        margin-bottom: rem-calc(20);
        width: 48%;
      }
      @media #{$medium-up} {
        font-size: rem-calc(16);
        line-height: rem-calc(26);
        margin: 0 1%;
        margin-bottom: rem-calc(20);
        width: 31%;
      }
      @media #{$large-up} {
        margin: 0 1%;
        margin-bottom: rem-calc(20);
        width: 23%;
      }
    }
  }
}
Okay, now we have something decent to look at.

  • So for mobile views, the elements take up 48% of the horizontal screen space, effectively having two elements per ‘row’.
  • Moving up from that, we just increase the font-size.
  • Then up from that (a tablet layout), we effectively have 3 per row, and for desktop we have 4 per row.
  • So this is much cleaner and leaner than having to fiddle with the Views masonry plugin. It allows us much more control over the plugin itself, and the CSS.
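For what it’s worth, those width percentages aren’t arbitrary: with a 1% margin on each side of every tile, n columns leave (100 - 2n)% to split between them, rounded down. A quick sketch of the arithmetic (the helper name is mine, not part of the build):

```javascript
// Derive the per-tile width for an n-column row, given a fixed
// percentage gutter applied to both sides of every tile.
function tileWidth(columns, gutterPct) {
  return Math.floor((100 - columns * 2 * gutterPct) / columns);
}

// tileWidth(2, 1) -> 48  (mobile, two per row)
// tileWidth(3, 1) -> 31  (tablet, three per row)
// tileWidth(4, 1) -> 23  (desktop, four per row)
```

Handy if you ever want to add a breakpoint: plug in the column count and you get the width to drop into the SCSS.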

Sometimes, it pays to just go back to doing things manually, especially if you want more control.

Jun 28 2016
Jun 28
We loved Drupal Developer Days!
slashrsm | Tue, 28.06.2016 - 16:27

Last week part of the MD Systems team attended Drupal Developer Days in Milan.

Italian style dinner at Navigli in Milano. #drupaldevdays pic.twitter.com/CQOpIpmSGg

— Dragan Eror (@draganeror) June 23, 2016

I'd like to invite you to check our blog post to see how we liked it.
