June 18th, 2018

Last month, Ithaca College introduced the first version of what will represent the biggest change to the college’s website technology, design, content, and structure in more than a decade—a redesigned and rebuilt site that’s more visually appealing and easier to use.

Over the past year, the college and its partners, Four Kitchens and the design firm Beyond, have been hard at work in a team augmentation capacity to support the front-to-back overhaul of Ithaca.edu to better serve the educational and community goals of Ithaca’s students, faculty, and staff. The results of the team’s efforts can be viewed at https://www.ithaca.edu.

Founded in 1892, Ithaca College is a residential college dedicated to building knowledge and confidence through a continuous cycle of theory, practice, and performance. Home to some 6,500 students, the college offers more than 100 degree programs in its schools of Business, Communications, Humanities and Sciences, Health Sciences and Human Performance, and Music.

Students, faculty, and staff at Ithaca College create an active, inclusive community anchored in a keen desire to make a difference in the local community and the broader world. The college is consistently ranked as one of the nation’s top producers of Fulbright scholars, one of the most LGBTQ+ friendly schools in the country, and one of the top 10 colleges in the Northeast.

We emphasized applying automation and continuous integration to focus the team on the efficient development of creative, easy-to-test solutions.

On the backend, the team—including members of Ithaca’s dev org working alongside Four Kitchens—built a Drupal 8 site. The transition to Drupal 8 keeps the focus on moving the college to current technology for sustainable success. Four Kitchens emphasized applying automation and continuous integration to focus the team on the efficient development of creative, easy-to-test solutions. To achieve that, the team set up automation in CircleCI 2.0 as middleware between the GitHub repository and hosting on Pantheon. This pipeline was used throughout the project to implement, automate, and optimize visual regression testing, communication between systems, and a solid release workflow, ensuring fast and effective release cycles. Learn from the experiences obtained from implementing the automation pipeline in our related posts.

The frontend focused heavily on the Atomic Design approach. The frontend team used Emulsify and Pattern Lab to facilitate component-based design and architecture. This again fostered long-term ease of use and success for Ithaca College.

The team worked magic with content migration. Using an approach devised by Web Chef David Diers, the team migrated portions of the site one by one. Subsites corresponding to schools or departments were moved from the legacy CMS to special Pantheon multidevs built off the live environment. Content managers then performed a moderated adaptation and curation process to ensure legacy content adhered to the new content model. A separate migration process then imported the content from the holding environment into the live site. This process allowed Ithaca College’s many content managers to thoroughly vet the content that would live on the new site and gave them a clear path to completion. Learn more about migrating using Paragraphs here: Migrating Paragraphs in Drupal 8

Steady scrum rhythm, staying agile, and consistently improving along the way.

In addition to the stellar dev work, a large contributor to the project’s success was establishing a steady scrum rhythm, staying agile, and consistently improving along the way. Each individual and unit solidified into a team through daily 15-minute standups, weekly backlog grooming meetings, weekly ‘Developer Showcase Friday’ meetings, regular sprint planning meetings, and biweekly retrospective meetings. This has been such a shining success that the internal Ithaca team plans to carry this rhythm forward even after the Web Chefs’ engagement is complete.

Engineering and Development Specifics

  • Drupal 8 site hosted on Pantheon Elite, with GitHub as the canonical source of code and CircleCI 2.0 as the continuous integration and delivery platform.
  • Hierarchical and decoupled architecture based mainly on the use of group entities (Group module) and entity references that allowed the creation of subsite-like internal spaces.
  • Selective use of configuration files through custom and contrib solutions like the Config Split and Config Ignore modules, to create different database projections of a shared codebase (see the settings.php sketch after this list).
  • Migration process based on 2 migration groups with an intermediate holding environment for content moderation.
  • Additional migration groups support the indexing of not-yet-migrated, raw legacy content for Solr search, as well as the events feed brought in through our Localist integration.
  • Living style guide for site editors, built by integrating Twig components with Drupal templates.
  • Automated Visual Regression
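
As a concrete illustration of the configuration-projection point above, here is a minimal sketch of how Config Split is commonly activated per environment in settings.php. The split names and the use of Pantheon’s environment variable are assumptions for illustration, not the project’s actual configuration.

<?php
// settings.php (sketch): enable a different Config Split per environment so
// that one shared codebase yields different configuration "projections".
// The split machine names ("live", "dev") are hypothetical.
$env = getenv('PANTHEON_ENVIRONMENT') ?: 'dev';
if ($env === 'live') {
  $config['config_split.config_split.live']['status'] = TRUE;
}
else {
  $config['config_split.config_split.dev']['status'] = TRUE;
}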
Aerial view of the Ithaca College campus, from the Ithaca College homepage.

A well-deserved round of kudos goes to the team. As a team augmentation project, this project’s success was made possible by the dedicated work and commitment to excellence of the Ithaca College project team. The leadership provided by Dave Cameron as Ithaca Product Manager, Eric Woods as Ithaca Technical Lead and Architect, and John White as Ithaca Dev for all things legacy-system-related was crucial to the project’s success. Ithaca College’s Katherine Malcuria, Senior Digital User Interface Designer, led the creation of design elements for the website.

Katherine Malcuria, Senior Digital User Interface Designer, works on design elements of the Ithaca.edu website

Ithaca Dev Michael Sprague, Web Chef David Diers, Architect, and former Web Chef Chris Ruppel, Frontend Engineer, also stepped in for various periods of the project. At the tail end of the project, Web Chef Brian Lewis welcomed a new baby Web Chef into the world, so the amazing Randy Oest, Senior Designer and Frontend Engineer, stepped in to help push things across the finish line from a frontend dev perspective. James Todd, Engineer, pitched in as a jack-of-all-trades, helping out wherever needed.

The Four Kitchens Team Augmentation team for the Ithaca College project was led by Brandy Jackson, Technical Project Manager, playing the roles of project manager, scrum master, and product owner interchangeably as needed. Joel Travieso, Senior Drupal Engineer, was the technical lead, backend developer, and technical architect. Brian Lewis, Frontend Engineer, meticulously worked magic implementing intricate design elements provided by the Ithaca College design team, as well as a third-party design firm, Beyond, at different stages of the project.

A final round of kudos goes out to the larger Ithaca project team: from content to DevOps to quality assurance, there are too many to name. A successful project would not have been possible without their collective efforts.

The success of the Ithaca College Website is a great example of excellent team unity and collaboration across multiple avenues. These coordinated efforts are a true example of the phrase “teamwork makes the dream work.” Congratulations to all for a job well done!

Special thanks to Brandy Jackson for her contribution to this launch announcement. 


February 1st, 2018

Paragraphs is a powerful Drupal module that gives editors more flexibility in how they design and lay out the content of their pages. However, Paragraphs are special in that they make no sense without a host entity: they are always attached to another entity.
In Drupal 8, individual migrations are built around an entity type. That means we implement a single migration for each entity type. Sometimes we draw relationships between the element being imported and an already imported one of a different type, but we never handle the migration of both simultaneously.
Migrating Paragraphs needs to be done in at least two steps: 1) migrating entities of type Paragraph, and 2) migrating entities referencing imported Paragraph entities.

Migration of Paragraph entities

You can migrate Paragraph entities much as you would migrate any other entity type into Drupal 8. However, a very important caveat is making sure to use the right destination plugin, provided by the Entity Reference Revisions module:

destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: paragraph_type

This is critical because it is tempting to use something more common like entity:paragraph, which would make sense given that Paragraphs are entities. However, you didn’t configure your Paragraph reference field as a conventional Entity Reference field, but as an Entity reference revisions field, so you need to use the appropriate plugin.

An example of the core of a migration of Paragraph entities:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls: 'feed.url/endpoint'
  ids:
    id:
      type: integer
  item_selector: '/elements'
  fields:
    -
      name: id
      label: Id
      selector: /element_id
    -
      name: content
      label: Content
      selector: /element_content
process:
  field_paragraph_type_content/value: content
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: paragraph_type
migration_dependencies: {  }

To give some context, this assumes the feed being consumed has a root level with an elements array filled with content arrays with properties like element_id and element_content, and we want to convert those content arrays into Paragraphs of type paragraph_type in Drupal, with the field_paragraph_type_content field storing the text that came from the element_content property.
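
For instance, a feed shaped like the following (the values are illustrative) would satisfy the source configuration above:

{
  "elements": [
    { "element_id": 1, "element_content": "First paragraph body" },
    { "element_id": 2, "element_content": "Second paragraph body" }
  ]
}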

Migration of the host entity type

Having imported the Paragraph entities already, we then need to import the host entities, attaching the appropriate Paragraphs to each one’s field_paragraph_type_content field. Typically this is accomplished by using the migration_lookup process plugin (formerly migration).

Every time an entity is imported, a row is created in the mapping table for that migration, storing both the ID the entity has in the external source and the internal ID it got after being imported. This way the migration keeps a correlation between both states of the data, for updating and other purposes.

The migration_lookup plugin takes an ID from an external source and tries to find an internal entity whose ID is linked to the external one in the mapping table, returning its ID in that case. After that, the entity reference field is populated with that ID, effectively establishing a link between the entities on the Drupal side.

In the example below, the migration_lookup returns entity IDs and creates references to other Drupal entities through the field_event_schools field:

field_event_schools:
  plugin: iterator
  source: event_school
  process:
    target_id:
      plugin: migration_lookup
      migration: schools
      source: school_id

However, while references to nodes or terms basically consist of the ID of the referenced entity, when using the entity_reference_revisions destination plugin (as we did to import the Paragraph entities), two IDs are stored per entity. One is the entity ID and the other is the entity revision ID. That means the return of the migration_lookup processor is not an integer, but an array of them.

process:
  field_paragraph_type_content:
    plugin: iterator
    source: elements
    process:
      temporary_ids:
        plugin: migration_lookup
        migration: paragraphs_migration
        source: element_id
      target_id:
        plugin: extract
        source: '@temporary_ids'
        index:
          - 0
      target_revision_id:
        plugin: extract
        source: '@temporary_ids'
        index:
          - 1

So instead of returning the array directly (which obviously wouldn’t work), we use the extract process plugin to pull out the integer IDs needed to create an effective reference.

Summary

In summary, it’s important to remember that migrating Paragraphs is a two-step process at minimum. First, you must migrate entities of type Paragraph. Then you must migrate entities referencing those imported Paragraph entities.

Dec 21 2015

For the last 6 months I’ve been helping Dick and Andrei with a number of Drupal modules to enhance the management of content.

Multiversion

This module enhances the Drupal core Entity API by making all content entities revisionable. Revisions are enabled by default and are not optional. This means that edits to users, comments, taxonomy terms, etc. are created as new revisions.

Another groundbreaking advance is that deleting any of these entities now just archives it. Delete is a flag in the entity object and, just like any other update, it creates a new revision.

The concept of workspaces has also been added, which allows a new instance of the site, from a content perspective, to be created. An example use case for workspaces would be to have dev, stage, and production workspaces and move content between them as it gets promoted through the workflow.

Trash

Now that deleting content entities just means creating a new revision marked as deleted, we need a way to recover or purge them. The Trash module is a UI on top of Multiversion that allows users to do just this.

Relaxed Web Services

Drupal 8 has always been about getting off the island; Relaxed Web Services furthers this by getting content off the island. It uses Drupal core’s REST API to expose CouchDB-compatible endpoints. This means that replicating content is just a case of using CouchDB’s replicator. Creating a decoupled Drupal site then becomes as simple as using PouchDB.

This works really well with Multiversion’s Workspaces, where each workspace is exposed as a separate CouchDB database.

CouchDB Replicator

So that we don’t need to depend on CouchDB for replication, the replicator has been rewritten in PHP. This allows us to replicate content from within Drupal or even via Drush.

Deploy

There is a long history for Deploy in Drupal, but now in Drupal 8 it’s little more than a UI for the PHP based CouchDB replicator. It allows replication of content between workspaces, between Drupal sites, and between CouchDB databases.

Mango

Something we’re currently working on is Mango, inspired by MongoDB and based on Cloudant’s implementation for CouchDB. Mango will allow querying for content entities over the Relaxed Web Services API. This is going to be very interesting to those creating decoupled sites because PouchDB supports the same querying API.

Oct 20 2015

Over the last 7+ years working with Drupal, one question clients always ask is: how do I copy my content from staging to production? Generally the answer has been, “Don’t! Just add it on production.” Another option is to use the Deploy module, which allows for the deployment of content between environments.

Drupal core has changed greatly in Drupal 8, and the Deploy module is following suit.

Multiversion
The Multiversion module will be the foundation of all this change. It introduces three big changes to the way content is managed in Drupal:

  • Workspaces - All content is associated with a workspace; you can have multiple workspaces and switch between them. Think of them much like branches in version control systems such as git.
  • CRAP - Create, Read, Archive, and Purge. This is an alternative to CRUD (Create, Read, Update, and Delete). It means that in Multiversion no content is deleted and everything is a new revision; deleting an entity just archives it, flagging it as deleted.
  • Revisions for everything - All content entities are revisionable with the Multiversion module. This means blocks, users, comments, etc. all have revisions.

Relaxed
The Relaxed module is a RESTful API aligned with the CouchDB API. It exposes each workspace as a Couch database. This allows content to be replicated to another Drupal site running Relaxed, to a native CouchDB database, or to something else that uses the same API, such as PouchDB.

Deploy
Finally, the Deploy module. This is now just a UI module, as all the content deployment is handled by external projects. The Relaxed module exposes the API needed, and a PHP-based CouchDB replicator uses a PHP-based CouchDB client to replicate/deploy the content.

The current plan is to have a “Deploy” button in the Drupal toolbar; this will open a modal where you can choose which site you want to deploy to and which workspace you want to merge into. It will be possible to deploy to a workspace on the current site. For example, a staging site may have two content editors, both working on different workspaces; they can then deploy/merge their content to the default workspace. This default workspace can then be merged up to the production site’s default workspace.

Tagging is also an idea being worked on. It will allow you to tag a workspace at a point in time, then continue to edit content. When deploying from a tag, it will only merge changes made up until the tag was created.

These are big changes for the deploy module, but something that will make the content workflow a lot easier.

Sep 11 2015

At heart Drupal is an awesome CMS. The reason I would advise going for Drupal over any other framework is its ability to manage content entities at scale.

Earlier this week I discussed revisions in Drupal 8. Specifically the use of Multiversion module to allow Drupal to use a CRAP (Create Read Archive Purge) workflow for content management.

Since then I have been working on the archive and purge aspects of this workflow. The Multiversion module is pretty astonishing in how it revolutionises the way revisions are handled and content is managed, but so far there has been little focus on the archive and purge process. When entities are deleted they are simply flagged as “deleted”; they exist in the Drupal database but are gone from anywhere in the site’s UI. Until now…

The all-new Trash module builds on the Multiversion module to provide a place where users can see all their deleted entities, listed by entity type, and restore them along with all their revisions. The first commit on the 8.x-1.x branch was only made 3 days ago, on September 8th, so it’s still early days, but the MVP is available as a dev release to have a play with.

Now, this is more or less the archive process working; next up is purge. So what do we purge? Well, ideally, nothing. It’d be great if we never need to truly delete any entities, but instead purge specific revisions. This is something that will be getting a lot of thought over the coming weeks.

Another step to look at is compaction. Multiversion has been following CouchDB’s API, and coupled with the Relaxed Web Services module it allows entities to be replicated to CouchDB and other platforms following the same API. CouchDB has a notion of compaction via the “_compact” endpoint. How harshly should the content be compacted? Remove all revisions apart from the current one?

All of this will be discussed by Dick Olsson and me in our session, Planning for CRAP and entity revisions everywhere in core, at DrupalCon Barcelona.

Jul 15 2015

Regardless of industry, staff size, and budget, many of today’s organizations have one thing in common: they’re demanding the best content management systems (CMS) to build their websites on. With requirement lists that can range from 10 to 100 features, an already short list of “best CMS options” shrinks even further once “user-friendly”, “rapidly-deployable”, and “cost-effective” are added to the list.

There is one CMS, though, that not only meets the core criteria of ease-of-use, reasonable pricing, and flexibility, but a long list of other valuable features, too: Drupal.

With Drupal, both developers and non-developer admins can deploy a long list of robust functionalities right out of the box. This powerful, open source CMS allows for easy content creation and editing, as well as seamless integration with numerous third-party platforms (including social media and e-commerce). Drupal is highly scalable, cloud-friendly, and highly intuitive. Did we mention it’s effectively priced, too?

In our “Why Drupal?” 3-part series, we’ll highlight some features (many which you know you need, and others which you may not have even considered) that make Drupal a clear front-runner in the CMS market.

For a personalized synopsis of how your organization’s site can be built on or migrated to Drupal with amazing results, grab a free ticket to Drupal GovCon 2015 where you can speak with one of our site migration experts for free, or contact us through our website.

_______________________________

SEO + Social Networking:

Unlike other content software, Drupal does not get in the way of SEO or social networking. By using a properly built theme–as well as add-on modules–a highly optimized site can be created. There are even modules that will provide an SEO checklist and monitor the site’s SEO performance. The Metatags module ensures continued support for the latest metatags used by various social networking sites when content is shared from Drupal.


E-Commerce:

Drupal Commerce is an excellent e-commerce platform that uses Drupal’s native information architecture features. One can easily add desired fields to products and orders without having to write any code. There are numerous add-on modules for reports, order workflows, shipping calculators, payment processors, and other commerce-based tools.


Search:

Drupal’s native search functionality is strong. There is also a Search API module that allows site managers to build custom search widgets with layered search capabilities. Additionally, there are modules that enable integration of third-party search engines, such as Google Search Appliance and Apache Solr.

Third-Party Integration:

Drupal not only allows for the integration of search engines, but a long list of other tools, too. The Feeds module allows Drupal to consume structured data (for example, .xml and .json) from various sources. The consumed content can be manipulated and presented just like content that is created natively in Drupal. Content can also be exposed through a RESTful API using the Services module. The format and structure of the exposed content is also highly configurable, and requires no programming.

Taxonomy + Tagging:

Taxonomy and tagging are core Drupal features. The ability to create categories (dubbed “vocabularies” by Drupal) and then create unlimited terms within that vocabulary is connected to the platform’s robust information architecture. To make taxonomy even easier, Drupal even provides a drag-n-drop interface to organize the terms into a hierarchy, if needed. Content managers are able to use vocabularies for various functions, eliminating the need to replicate efforts. For example, a vocabulary could be used for both content tagging and making complex drop-down lists and user groups, or even building a menu structure.


Workflows:

There are a few contributed modules that provide workflow functionality in Drupal. They all provide common functionality along with unique features for various use cases. The most popular options are Maestro and Workbench.

Security:

Drupal has a dedicated security team that is very quick to react to vulnerabilities that are found in Drupal core as well as contributed modules. If a security issue is found within a contrib module, the security team will notify the module maintainer and give them a deadline to fix it. If the module does not get fixed by the deadline, the security team will issue an advisory recommending that the module be disabled, and will also classify the module as unsupported.

Cloud, Scalability, and Performance:

Drupal’s architecture makes it incredibly “cloud friendly”. It is easy to create a Drupal site that can be set up to auto-scale (i.e., add more servers during peak traffic times and shut them down when not needed). Some modules integrate with cloud storage such as S3. Further, Drupal is built for caching. By default, Drupal caches content in the database for quick delivery; support for other caching mechanisms (such as Memcache) can be added to make the caching lightning fast.
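
As a concrete illustration, wiring Drupal 7 to Memcache typically takes only a few lines in settings.php. This is a minimal sketch that assumes the contrib Memcache module is installed in its usual location; the server address is a placeholder.

<?php
// settings.php (sketch): route Drupal's default cache bins through Memcache.
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// Keep the form cache in the database, per the Memcache module's guidance.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
$conf['memcache_servers'] = array('127.0.0.1:11211' => 'default');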


Multi-Site Deployments:

Drupal is architected to allow for multiple sites to share a single codebase. This feature is built-in and, unlike WordPress, it does not require any cumbersome add-ons. This can be a tremendous benefit for customers who want to have multiple sites that share similar functionality. There are few–if any–limitations to a multi-site configuration. Each site can have its own modules and themes that are completely separate from the customer’s other sites.
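
For example, the mapping between domains and site directories lives in sites/sites.php. A minimal sketch follows; the domain names are placeholders.

<?php
// sites/sites.php (sketch): point several domains at site directories that
// share one Drupal codebase. Each directory can hold its own modules/themes.
$sites['www.example.com']  = 'example.com';      // served from sites/example.com/
$sites['shop.example.com'] = 'shop.example.com'; // served from sites/shop.example.com/
$sites['dev.example.com']  = 'example.com';      // a dev alias for the main site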

Want to know other amazing functionalities that Drupal has to offer? Stay tuned for the final installment of our 3-part “Why Drupal?” series!

Feb 17 2015

Drupal is an excellent content management system. Nodes and fields give site administrators the ability to create complex datasets and models without having to write a single MySQL query. Unfortunately, Drupal’s theming system isn’t as flexible. Complete control over the DOM is nearly impossible without a lot of work. Headless Drupal bridges the gap and gives us the best of both worlds.

What is headless Drupal?

Headless Drupal is an approach that decouples Drupal’s backend from the frontend theme. Drupal is used as a content store and admin interface. Using the Services module in Drupal 7, or core in Drupal 8, a REST web service can be created. Visitors to the site don’t view a traditional Drupal theme; instead they are presented with pages created with Ember.js, Angular.js, or even a custom framework. Using the web service, the chosen framework transfers data from Drupal to the frontend and vice versa.
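
To make that concrete, here is a minimal sketch of a client consuming a node over such a web service. The endpoint path and the field structure are assumptions; they depend entirely on how the Services endpoint (or Drupal 8 REST resource) is configured.

<?php
// Sketch: fetch a node from a hypothetical Services JSON endpoint and pull
// out the pieces a decoupled frontend would render.
$json = file_get_contents('https://example.com/api/node/1.json');
$node = json_decode($json, TRUE);
// Drupal 7 field values are nested by language ('und') and delta.
$title = $node['title'];
$body  = $node['body']['und'][0]['value'];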

Pros

So what makes Headless Drupal so great? For one thing, it allows frontend developers full control over the page markup. Page speed also increases, since display logic lives on the client side instead of the server. Sites can also become much more interactive, with page transitions and animations. But most importantly, the Drupal admin can be used to power not only web apps but also mobile applications, including Android and iOS.

Cons

Unfortunately, Headless Drupal is not without its drawbacks. For one, layout control for editors becomes much more difficult. Something that could be done easily via Context or Panels now requires a lot of custom frontend logic. Also, if proper caching isn’t utilized and requests aren’t batched properly, lots of roundtrips can occur, causing things to slow down drastically.

Want to learn more about Headless Drupal? Check out the Headless Drupal Initiative on Drupal.org. And on a related topic, check out “Drupal 8 And The Changing CMS Landscape.”

Nov 18 2014

As many of us know, Drupal 8 beta was released at the beginning of October which has given us a great preview of the goodies to come. One of those goodies is a new theme engine, Twig, which replaces PHPTemplate. Twig gives us a lot of extra power over our templates and allows us to clean up a lot of the extra cruft that PHPTemplate forced us to add.

So you may be asking, how will our brand new Drupal 8 templates look? Well, they are going to be a lot leaner. The extra cruft of PHP tags, functions, and other miscellaneous bits is no longer needed – in fact, it doesn’t even work in Twig templates. Arguably, it will be easier to read at a glance what is happening in the Twig versions of Drupal templates. For a great example of how the new templates will look, we can crack open the Views module and compare the templates in the Views module 7.x version to the ones in the Views 8.x version. So let’s compare the views-view.tpl.php file from Views 7.x to the views-view.html.twig file in Views 8.x.

Side-by-side comparison of the Views 7.x and 8.x view templates.

Twig Comments

Starting from the top, let’s work our way down to see what has changed. In the documentation section we can see that the {# #} syntax is used for the large block of comments as well as for single-line comments.

In the list of variables available for use, you may notice a few other changes. For example, Twig is not raw PHP, so the ‘$’ is not used at the beginning of variable names. Another change is that the $classes_array has been replaced by the attributes variable.

Rendering Variables


Moving down to line #40 we can see the first instance of a variable rendered using Twig. The double curly bracket syntax, {{ my_variable }}, tells Twig that this is a variable and it needs to be rendered out. A variation of this syntax uses dot notation to render variables contained in an array/object. This syntax looks like {{ my_variable.property_or_element }}, which makes it extremely easy to use. Dot notation is not used in this particular Twig template.

Twig Filters

Another powerful feature of Twig is template filters. Filters allow us to do simple manipulations of variables. The syntax for using a filter is similar to the syntax for rendering a variable. The difference is that after the variable name, you insert a | (pipe), followed by the name of the filter. For example, to make a string all lowercase you would write {{ variable_name|lower }}, which transforms all of the characters to lowercase. If you have used templating systems from other frameworks, such as Angular.js, this syntax may look familiar to you. This particular Views template does not use filters, but you can see examples of different filters on the Twig documentation site. If none of the predefined filters satisfy your requirements, you can extend Twig to create your own; the Twig documentation site provides details about creating custom filters.

Twig Logic


Jumping to line #42, we can see the {% %} (curly-bracket and percentage symbol) syntax, which is used for template logic, such as if statements. This syntax tells Twig that we are not rendering something, but rather that we need to process some logic, such as a control structure or loop, in our template.

The Takeaway

This blog post is a high-level overview of how Twig templates in Drupal 8 will look. With Twig, we can choose to use the out-of-the-box tools it provides, or we can dive in and extend it with additional features such as new filters. For more information I would highly recommend reading through the Twig documentation for designers and for developers.

Nov 05 2014

In September, Phase2 released a Grunt-based tool for building and testing Drupal sites. We have been working on the tool since January, and after adopting it as part of our standard approach for new Drupal sites, we wanted to contribute it back to the community. We are happy to invite you to get started with Grunt Drupal Tasks.

Grunt is a popular JavaScript-based task runner, meaning it’s a framework for automating tasks. It’s gained traction for automating common development tasks, like compiling CSS from Sass, minifying JavaScript, generating sprites, checking code standards, and more. There are thousands of plugins available that can be implemented out-of-the-box to do these common tasks or integrate with other supporting tools. (I mentioned that this is all free and open source software, right?)

Grunt Drupal Tasks is a Grunt plugin that defines processes that we have identified as best practices for building and testing Drupal sites.

Building Drupal

The cornerstone of Grunt Drupal Tasks is the “build” process, which assembles a runnable Drupal site docroot from a Drush make file and custom code and configuration.

The make file defines the version of Drupal core to use, the contrib modules, themes, and libraries to download, and even patches to apply to any of these components. The make file can include components released on Drupal.org or stored in public or private repositories. For patches, our best practice is to reference patches hosted on Drupal.org and associated with an issue. With these options, the entire set of components for a Drupal site can be declared in a make file and consistently retrieved using Drush.
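
For reference, a minimal make file might look like the following sketch. The project names, versions, and patch URL are illustrative only.

; project.make (sketch): declare core, contrib, and patches in one place.
api = 2
core = 7.x
projects[drupal][version] = "7.34"
projects[views][version] = "3.8"
; Best practice: reference a patch hosted on Drupal.org and tied to an issue.
projects[views][patch][] = "https://www.drupal.org/files/issues/views-example-1234567-1.patch"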

After the Drush make process assembles all external dependencies for the project, the Grunt Drupal Tasks build process adds custom code and configuration. This includes custom installation profiles, modules, and themes, as well as “sites” directory files, like sites.php and settings.php for one or many subsites, and other “static” files to override, like .htaccess and robots.txt. These custom components are added to the built docroot by symlink, so it is not necessary to rebuild for every update to custom source code.

These steps result in a Drupal docroot assembled from custom source in the following structure:

src/
  modules/
    <custom modules>
  profiles/
    <custom installation profiles>
  sites/
    default/
      settings.php
    <optionally, other subsites or sites.php>
  static/
    <optionally, overrides for .htaccess or other files>
  themes/
    <custom themes>
  project.make

Grunt Drupal Tasks includes other optional build steps, which can be enabled as needed for projects. One such step is the “compile theme” task, which compiles Sass files into CSS.

This build process gives us a reliable way for assembling Drupal core and contrib components, for adding our custom code, and integrating development tools like Sass. By using Grunt to automate this procedure, it becomes a portable script that can be shared among the project’s developers and used in deployment environments.

Testing Drupal

In order to help make best practices the default, Grunt Drupal Tasks includes support for a number of code quality and testing tools.

A “validate” task is provided that includes checking basic PHP syntax and Drupal coding standards using PHPLint and PHP Code Sniffer. We highly recommend that developers use this command while coding, and have included it as part of the default build process.

An “analyze” task is also provided, which adds support for the PHP Mess Detector. This task may be longer-running, so it is better suited to run as part of a continuous integration system, like Jenkins.

Finally, a “behat” task is provided for running test scenarios with Behat and the Drupal Extension. This encourages writing Behat tests for the project and committing them with the project code and build tools, so the tests can be run by other developers and in the integration environment by a continuous integration system.

Scaffolding for Drupal Projects

The old starting point for Drupal projects was a vanilla copy of Drupal core. Grunt Drupal Tasks offers scaffolding for Drupal projects that starts with Drush make, integrates custom code and overrides, and provides consistent support for a variety of developer tools.

This scaffolding is provided through the example included with Grunt Drupal Tasks, which is the recommended starting point for new projects. The scaffold structure adds a layer above the aforementioned “src” directory; this layer includes code and configuration related to Grunt Drupal Tasks (Gruntconfig.json and Gruntfile.js), dependencies for the supporting tools (composer.json), and other resources for the tools (features/, behat.yml, and phpmd.xml).

The example includes the following:

features/
src/
.gitignore
Gruntconfig.json
Gruntfile.js
behat.yml
composer.json
package.json
phpmd.xml

For full documentation on starting a new project with Grunt Drupal Tasks, see CONFIG.md.

Learning More

Watch the Phase2 blog for more information about Grunt Drupal Tasks. If you are attending the Bay Area Drupal Camp this week, please check out my session on Using Grunt to Manage Drupal Build and Testing Tools.

Nov 04 2014

I am proud to announce the release of the Behat Drupal Extension 3.0! Since 2012, the Drupal Extension has been used for Behaviour Driven Development of thousands of Drupal sites all over the world. The project began as part of the Drupal.org upgrade, and was quickly generalized to enable the testing of any Drupal site.

Version 3 has many exciting new features and is compatible with Behat 3. Note that version 2 does not exist; we skipped it to avoid version soup.

Behavior Driven Development at BADCamp

Before I dive into the new features, it is important to clarify the difference between testing an application, and Behavior Driven Development (BDD).

As everzet pointed out at DrupalCon, no business-critical feature starts with

When browsing an article
User
Should be able to see related articles

rather, it starts with a conversation

In order to read more interesting articles
As a reader
I need to see related articles to the one I'm reading

While both are testing something, only the latter truly describes a behavior that matters to a stakeholder. This is the subject of a blog post, or series of posts, in and of itself, so more on this later.

What’s new

Documentation

Documentation for this project has been integrated into the repository, and is automatically building on readthedocs.org. Thanks to Melissa Anderson for this awesome work!

A starter context

A starter Drupal context that makes no assumptions around the language used for each test, but still provides all the previous functionality to interact directly with Drupal.

Users are encouraged to start tests from this context which will allow them to use truly ubiquitous language that is specific to each project.

Drupal Drivers

The Drupal Drivers now exist in a separate project, allowing for non-Behat applications to interact with Drupal (e.g., calling directly from Mink, or Codeception).

Note that the Drupal 6 driver has been removed, but since drivers are now separate projects, it will be easy to port it over to the Drupal Extension 3, should somebody want to.

It should also be noted that Drupal 8 support will require ongoing work as the code base there evolves towards release.

More granular pre-defined step-definitions

Existing step definitions have been split into 4 independent contexts:

  • DrupalContext – This contains steps for working with content, users, and taxonomies.
  • MinkContext – This is an extension to the Mink Extension, providing additional steps for working with regions and forms.
  • MessageContext – Provides steps for working with Drupal success/warning/error messages.
  • DrushContext – Provides steps for calling drush commands directly from scenarios.

This allows for the use of some pre-defined step definitions, rather than the previous all-or-none approach.

No more regex!

The pre-defined steps now use the new turnip syntax introduced in Behat 3:

Given I am viewing a/an :type (content )with the title :title

rather than

Given /^(?:a|an) "(?P<type>[^"]*)" node with the title "(?P<title>[^"]*)"$/

What’s a ‘node’?!

The term node has been removed from steps and replaced with content in all pre-defined steps.

What’s next?

BDD Mini Summit

I will be attending the Behat mini-summit at BADCamp along with several other folks from Phase2. I hope to highlight the Drupal Extension 3 and discuss best practices for the wide variety of testing needs. I also hope to continue discussions around Behat, Mink, and Drupal core.

Oct 20 2014

As I mentioned in my recent post, I got a chance to upgrade the drupal.org ELK stack last week. In doing so, I got to take a look at a Logstash configuration that I created over a year ago, clean up some less-than-optimal configurations based on a year’s worth of experience, and simplify the configuration file a great deal.

The Drupal.org Logging Setup

Drupal.org is served by a large (and growing) number of servers. They all ship their logs to a central logging server for archival, and around a month’s worth are kept in the ELK stack for analysis.

Logs for Varnish, Apache, and syslog are forwarded to a centralized log server for analysis by Logstash. Drupal messages are output to syslog using Drupal core’s syslog module so that logging does not add writes to Drupal.org’s busy database servers. (@TODO: Check if these paths can be published.) Apache logs end up in /var/log/apache_logs/$MACHINE/$VHOST/transfer/$DATE.log, Varnish logs end up in /var/log/varnish_logs/$MACHINE/varnishncsa-$DATE.log, and syslog logs end up in /var/log/HOSTS/$MACHINE/$DATE.log. All types of logs get gzipped 1 day after they are closed to save disk space.

Pulling Contextual Smarts From Logs

The Varnish and Apache logs do not contain any content in the logfiles to identify which machine they are from, but the file input sets a path field that can be matched with grok to pull the machine name out of the path and put it into the logsource field, which grok’s SYSLOGLINE pattern sets when analyzing syslog logs.

Filtering on the logsource field can be quite helpful in the Kibana web UI if a single machine is suspected of behaving weirdly.

Using Grok Overwrite

Consider this snippet from the original version of the Varnish configuration. As I mentioned in my presentation, Varnish logs are nice in that they include the HTTP Host header, so you can see exactly which hostname or IP was requested. This makes sense for a daemon like Varnish, which does not necessarily have a native concept of virtual hosts (vhosts), whereas nginx and Apache default to logging by vhost.

Each Logstash configuration snippet shown below assumes that Apache and Varnish logs have already been processed using the COMBINEDAPACHELOG grok pattern, like so.

filter {
  if [type] == "varnish" or [type] == "apache" {
    grok {
      match => [ "message", "%{COMBINEDAPACHELOG}" ]
    }
  }
}

The following snippet was used to normalize Varnish’s request headers to not include https?:// and the Host header, so that the request field in Apache and Varnish logs will be exactly the same, and any filtering of web logs can be performed with the vhost and logsource fields.

filter {
  if [type] == "varnish" {
    grok {
      # Overwrite host for Varnish messages so that it's not always "loghost".
      match => [ "path", "/var/log/varnish_logs/%{HOST:logsource}" ]
    }
    # Grab the vhost and a "request" that matches Apache from the "request" variable for now.
    mutate {
      add_field => [ "full_request", "%{request}" ]
    }
    mutate {
      remove_field => "request"
    }
    grok {
      match => [ "full_request", "https?://%{IPORHOST:vhost}%{GREEDYDATA:request}" ]
    }
    mutate {
      remove_field => "full_request"
    }
  }
}

As written, this snippet copies the request field into a new field called full_request, unsets the original request field, uses a grok filter to parse both the vhost and request fields out of the synthesized full_request field, and finally deletes full_request.

The original approach works, but it takes a number of steps and mutations. The grok filter has a parameter called overwrite that allows this configuration stanza to be considerably simplified. The overwrite parameter accepts an array of fields that grok should overwrite if it finds matches. By using overwrite, I was able to remove all of the mutate filters from my configuration, and the entire thing now looks like the following.

filter {
  if [type] == "varnish" {
    grok {
      # Overwrite host for Varnish messages so that it's not always "loghost".
      # Grab the vhost and a "request" that matches Apache from the "request" variable for now.
      match => {
        "path" => "/var/log/varnish_logs/%{HOST:logsource}"
        "request" => "https?://%{IPORHOST:vhost}%{GREEDYDATA:request}"
      }
      overwrite => [ "request" ]
    }
  }
}

Much simpler, isn’t it? 2 grok filters and 3 mutate filters have been combined into a single grok filter with two matching patterns and a single field that it can overwrite. Also note that this version of the configuration passes a hash into the grok filter. Every example I’ve seen just passes an array to grok, but the documentation for the grok filter states that it takes a hash, and this works fine.

Ensuring Field Types

Recent versions of Kibana have also gotten the useful ability to do statistics calculations on the current working dataset. So for example, you can have Kibana display the mean number of bytes sent or the standard deviation of backend response times (if you are capturing them – see my DrupalCon Amsterdam slides for more information on how to do this and how to normalize it between Apache, nginx, and Varnish.) Then, if you filter down to all requests for a single vhost or a set of paths, the statistics will update.

Kibana will only show this option for numerical fields, however, and by default any data that has been parsed with a grok filter will be a string. Converting string fields to other types is a much better use of the mutate filter. Here is an example of converting the bytes and the response code to integers using a mutate filter.

filter {
  if [type] == "varnish" or [type] == "apache" {
    mutate {
      convert => {
        [ "bytes", "response" ] => "integer",
      }
    }
  }
}

Lessons Learned

Logstash is a very powerful tool, and small things like the grok overwrite parameter and the mutate convert parameter can make your log processing configuration simpler and get more usefulness out of your ELK cluster. Check out Chris Johnson’s post about adding MySQL Slow Query Logs to Logstash!

If you have any other useful Logstash tips and tricks, leave them in the comments!

Jul 16 2014

Since we integrated the EdgeCast CDN for one of our clients and released a related EdgeCast Drupal module, we have been encouraging more and more clients to consider a CDN layer to accelerate performance across multiple geographic locations and maintain excellent uptime, even during site maintenance periods.

A recent international client running many domains with federated content using the Domain module needed the content delivery network to improve performance and resilience for their sites.

The use of the Domain module created a problem with the purge process employed in the EdgeCast 1.x module. Like most of the cache purging modules available for Drupal, it sent each URL individually and waited for confirmation that the request had been successful.

With a large multi-domain site, this multiplied the page load time upon submitting a node edit form to the point of PHP timeouts and Nginx HTTP 500 errors. Increasing the timeouts was a short-term fix, but it still meant content editors were slowed down waiting for forms to submit.

There was also another effect, caused by external feed aggregation, which created nodes, which in turn triggered URL purge requests and slowed down each node creation step.

We toyed with the idea of using a Drupal batch queue to process the purge requests during the regular cron call, but this delayed the refreshing of content pages far too long for the content editors.

We settled on what appears to be the little-used multi cURL facility. This allowed us to build up an array of all the URLs using the Expire module, send them to the EdgeCast API in a single pass, and ignore the responses coming back from the API. This created a much faster content editing experience for the users.
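
A minimal sketch of the pattern follows. The endpoint, request format, and function name are hypothetical; the real module wraps the EdgeCast API, but the concurrency mechanics are the same.

<?php
// Sketch: issue all purge requests concurrently with PHP's curl_multi API,
// deliberately discarding the responses so node saves aren't blocked.
function example_cdn_purge_urls(array $urls) {
  $mh = curl_multi_init();
  $handles = array();
  foreach ($urls as $url) {
    // Hypothetical purge endpoint; the real EdgeCast API call differs.
    $ch = curl_init('https://api.example-cdn.com/purge?url=' . urlencode($url));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
  }
  // Drive all transfers in one pass; responses are ignored.
  do {
    curl_multi_exec($mh, $running);
    if ($running) {
      curl_multi_select($mh);
    }
  } while ($running > 0);
  foreach ($handles as $ch) {
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
  }
  curl_multi_close($mh);
}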

Note: The Expire module doesn't currently implement Domain module purging - the code class is empty. See issue #2293217 for details. We worked around this using the Cache Expiration Alias module.

A side effect of the multi cURL development work was speeding up purge requests for single-domain sites too!

The v2.0 EdgeCast module is available now for Drupal 7.

Find out more about the content delivery network from Edgecast.

Jun 11 2012
Drupal's comment module gracefully handles anonymous posters when they want to leave messages. Administrators can choose whether those comments are left unattributed, whether anonymous commenters must leave personal information like a name and an email address, and so on. When it comes to full-fledged Node content, though, Drupal's a lot less flexible: the authors of anonymously written posts show up as "Anonymous," and you'd better like it. Fortunately, fixing that problem is the purpose of the Anonymous Posting module!

Screenshot of admin page

Setting up the module is painless; just visit its settings page, choose which content types should use its expanded Anonymous Posting options, and double-check to make sure anonymous users have permissions to create those content types. Once you've done that, a special set of options will appear on that content type's settings form. They're the same options that normally apply to comments, transplanted into the world of nodes.

Once you've set it up, usage is a no-brainer. Authenticated users will be able to create nodes just as they did before, but anonymous users preparing to author a post will see a Comment-module style list of personal information fields. Name, email, and home page are all supported. The result? Easy anonymous posting, complete with "Unverified" next to the author's name to prevent visitors from impersonating registered users of the site.

Screenshot of an anonymously authored node

Anonymous Posting is available for Drupal 6 and 7. It's simple, streamlined, and because it still relies on Drupal's normal content authoring permissions it doesn't complicate the site permissions and security unnecessarily. Considering the potential for abuse of anonymous content creation, it's probably a good idea to combine this module with an anti-spam tool or a content moderation system.

May 16 2012
May 16

A continuation of the useful administration modules list started in Part I.

Devel

Used extensively by developers, the Devel module can also be very helpful to site administrators. One of its more popular features is automatic content generation (such as users and nodes), including media files! Another nice feature is switching between users (for example, to test access permissions). It also integrates well with the Administration Menu module mentioned in the first part of this article.

To use Devel on Drupal 6, go to admin/generate and choose the type of items to generate (the module comes with taxonomy, content, and users; other modules add more types of content). The module can also delete existing items (for example, nodes by type).

Devel generate on Drupal 6

To switch users, go to admin/build/block and enable the Switch user block. Make sure the user you switch to belongs to a role that's allowed to switch back (it's not recommended to enable this feature on production sites).

Devel module settings can be configured at admin/settings/devel.

To use Devel on Drupal 7, go to admin/config/development. Note that you can now also generate menu items. If you use Administration Menu, you can quickly disable all developer modules (you can set which ones remain enabled at admin/config/administration/admin_menu). Again, to switch users, go to admin/structure/block, enable the Switch user block, and set up the permissions appropriately so that you can switch back again.

Diff

It's generally a good idea to use revisions when editing content (so that, for example, you can easily roll back if needed). The Diff module helps compare different versions of the same node and see what changes have been made line by line.
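If you edit nodes programmatically, you can force a new revision there too; a minimal Drupal 7 sketch (the node ID is a placeholder):

<?php
// Load a node, change it, and save the change as a new revision.
$node = node_load(123);
$node->title = 'Updated title';
$node->revision = TRUE;
$node->log = 'Title updated via update script.';
node_save($node);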

Set up content types to use Diff on each content type's admin page. After you create a new revision, go to the Revisions tab and choose the revisions you'd like to compare.

Revisions tab

If you're using WYSIWYG, it might not look very nice:

Revision diff with WYSIWYG

The 2.x-dev version can handle this better if you enable the "Remove markup" option on the content type's config page:

Diff configuration

There is also a "Hide/show markup" option on the diff view.

Check this issue for more information and additional patches.

Vertical tabs (Included in D7)

Vertical Tabs is one of those small modules that significantly improve the user experience by tabbing Drupal's fieldsets (such as on node type and block forms). It's also integrated with other modules, such as xmlsitemap and nodewords. It's part of core in Drupal 7.

Vertical tabs node form

Module Filter

Any Drupal site with many contributed modules (i.e. the majority) risks having a modules administration page too unwieldy to manage. Module Filter helps you quickly find modules by adding a search field and organizing modules into tabs by their package (with the top tab showing all modules alphabetically). To configure Module Filter on Drupal 6, go to admin/settings/module_filter (the Drupal 7 path differs; use the module's configuration link).

Module filter

Taxonomy Manager

Taxonomy Manager is a very powerful module that provides an AJAX-powered interface for creating, deleting, moving, and merging taxonomy terms, as well as export and other features (see the project page for more details). Any time you need to perform mass operations on vocabularies and terms, Taxonomy Manager is invaluable.

Taxonomy manager

Taxonomy Manager is available at admin/content/taxonomy_manager/ (D6) or admin/structure/taxonomy_manager/ (D7).


Feb 01 2012
Feb 01

Whether you're on Drupal 7 with its clean administration theme, or still on Drupal 6, there are ways to make the interface more user-friendly and improve the workflow.

Administration menu

The Administration Menu module is a must for Drupal 6, but it's still helpful on Drupal 7 as a replacement for the built-in admin toolbar. Its main feature is a toolbar with dropdown menus that let you drill down the entire menu tree (you can even add local tasks such as tabs to it). It also integrates with the Devel module and VBO (see below), and has more nice features.

Drupal admin menu module

To configure admin_menu, go to:

D6: admin/settings/admin_menu

D7: admin/config/administration/admin_menu

Views Bulk Operations (VBO)

The usefulness of this module goes beyond bulk-updating rows in a view. I routinely use VBO's content and user admin views as a replacement for the core pages. The links are, respectively:

D6: admin/content/node2 and admin/user/user2  

D7: admin/content/node2 and admin/user/user

If you use admin menu, you can set up these links as defaults. The main advantage is that you can customize these pages using views and VBO (and of course, nothing stops you from creating your own custom pages). 

VBO node page

VBO node page configuration

Environment indicator

The whole dev-stage-production workflow can be mighty confusing sometimes. The Environment Indicator module solves the issue by displaying a bright strip identifying each site. You can set the color with the Color module and customize the text and position. What if you need to dump the database from production? You can deal with that by overriding the module's configuration settings in settings.php.

Environment indicator configuration
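As a rough sketch, an override in settings.php might look like this; the variable names follow the module's 7.x-2.x branch and are an assumption, so check the README of the version you're running:

<?php
// Force the indicator on this environment regardless of the UI settings.
$conf['environment_indicator_overwrite'] = TRUE;
$conf['environment_indicator_overwritten_name'] = 'STAGING';
$conf['environment_indicator_overwritten_color'] = '#ff6600';
$conf['environment_indicator_overwritten_position'] = 'top';
$conf['environment_indicator_overwritten_fixed'] = FALSE;

Because settings.php isn't overwritten by a database dump, the strip stays correct even right after you copy production data down to staging.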

Admin role (D6)

I've lost count of how many times I've added a new module and not been able to use it. In Drupal 6, you have to go and manually check permissions for each role, even the administrative ones (only UID 1 has all rights enabled by default, but it's better not to be in the habit of using that account on a day-to-day basis). A simple thing, but easy to forget. The Admin Role module allows you to set up a role that has all permissions by default. It was so useful on Drupal 6 that the feature was included in Drupal 7 core.

To configure the admin role after installation on D6, go to admin/user/settings and scroll down to the "Administrator role" fieldset.

 Enable admin role
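For the curious, what the module does boils down to granting every defined permission to one role. A rough Drupal 7 equivalent (the role ID is a placeholder):

<?php
// Grant all permissions exposed by enabled modules to a role.
$rid = 4;
$all_permissions = array_keys(module_invoke_all('permission'));
user_role_grant_permissions($rid, $all_permissions);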

Scheduler

Thanks to the Scheduler module, you can go on vacation and still have new content appear on your blog every day, or unpublish that temporary announcement on a specific day. Integration with the Date Popup module (part of the Date module package) enables the calendar popup.

Scheduler settings on D6

After checking the module configuration (on D7: admin/config/content/scheduler), you can enable scheduling and set some options for each content type separately.

Enable scheduling for a content type
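Scheduler can also be driven from code; a rough Drupal 7 sketch, assuming the 7.x branch's publish_on node property (verify against your version before relying on it):

<?php
// Keep the node unpublished now and let Scheduler publish it tomorrow.
$node = node_load(123);  // Placeholder node ID.
$node->status = NODE_NOT_PUBLISHED;
$node->publish_on = strtotime('tomorrow 09:00');
node_save($node);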

Related stories: 

Nov 22 2011
Nov 22

1. Create a webform with fields

After you install Webform and Webform Conditional, you can add a new webform at Home >> Administration >> Content. Enter a title and (optionally) a short description. After you click Save, you'll land on the Form Components page. Add a new Textfield called "Your name" and make it mandatory, then click Add.

Create a textfield

Keep the default settings and click the Save component button.

Create a field of type E-mail and make it mandatory.

Create a new field of the Select options type, give it the label "Request type", and make it mandatory. On the settings page, you're required to enter the options using the key|value format. Since we'll be using these options to send emails to different people, enter emails as keys, like this: "email|Category".
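For example (hypothetical addresses):

sales@example.com|Sales question
support@example.com|Technical support
billing@example.com|Billing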

Create select options

Next, create a textarea where the request is going to be entered. Then we'll let our users select their countries using Select options. Fortunately, you don't have to type any of the countries yourself, thanks to the pre-built option lists shipped with Webform. You can load one of these lists on the settings page and the options will be populated. In the display box, choose "Listbox".

Load a countries list

2. Create a list of States and Provinces

Now we can create a list of states using, again, a pre-built option list to populate the values, and choosing listbox. Obviously, we only want this dropdown to show up if a user chooses USA as their country. To do that, open Conditional Rules and select Country as the component. For the value, you actually need to enter the key, not the label (it's a bit confusing). The key for the United States is US (you can check the field if you're not sure; I think it's "USA" for Drupal 6).

Create States as a dependent field

You can check if the form is working, but don't forget to clear the cache before you do it!

Now let's add the Canadian provinces in the same manner. The difference is that they're not pre-built, but you can copy them from this page (scroll down a bit). The code for Canada is CA. Again, clear the cache and test the webform. The list of states or provinces should appear only if the corresponding country is selected (provided JavaScript is turned on, of course).

Webform with dependent fields

Webform with dependent fields

3. Create a confirmation message and a list of email sending rules

(Optional: add a confirmation message on the form settings page and use "No redirect".)

Create a confirmation message

The last item is setting up the emails of the people who will receive the form submissions. You can do this on the "E-mails" tab. We'll be using the emails we already defined previously. Select the field by its label (in this example, "Request type") and click the Add button. You'll get to the email settings page, where you can define the values of the email (such as address and name). You can also modify the email template as needed.

Setup email recipients

Setup email recipients

Oct 17 2011
Oct 17

r4032login is one of those small modules that can significantly improve the Drupal user experience. Remember what happens when anonymous users, for example, try to access restricted content (such as admin pages)? They see this:

access restricted content

Not very helpful, especially if the login box is nowhere to be seen. Granted, you can configure default 403 and 404 pages, including the user login page, but then you don't see the message at all (which is not very helpful either). r4032login solves the issue and even lets users get back to the page they tried to access in the first place. It's available for Drupal 6 and Drupal 7 (note that the configuration path differs).

To configure the module on Drupal 6, go to admin/settings/r4032login. Make sure 403 is set to r4032login at admin/settings/error-reporting.

r4032 config drupal 6

On Drupal 7, all the settings are available at admin/config/system/site-information.

r4032 config drupal 7

You can still set the optional redirect page, if needed.

Now, when accessing a restricted area, the user is redirected to the login page and the destination is appended to the URL.

redirect to user login saving destination
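Under the hood, the behavior amounts to a redirect that carries the requested path along. A simplified Drupal 7 sketch (not the module's actual code):

<?php
// Send an access-denied visitor to the login page, preserving the
// originally requested path so they return there after logging in.
drupal_goto('user/login', array(
  'query' => array('destination' => $_GET['q']),
));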

Note: modules with similar functionality are LoginToboggan and Login Destination.


Aug 15 2010
Aug 15

This lesson shows how to automatically import content from RSS or Atom feeds with the help of the Feeds module. The process relies on cron to run periodically so the feed processor can check for new content. The feed processor creates regular Drupal nodes from the items contained in the feed; you can choose whether the imported items are published by default. Once the items have been imported as nodes, you can create custom paths to them using Pathauto and/or customize their display using the Views module.

Note: Click the 'full screen' icon (to the right of the volume control) in order to watch online at full 1280x720 resolution.

Video Links

Flash Version
