Mar 05 2013

On Friday, we posted the Community Plugin Toolkit to the community.openatrium.com site. This “toolkit” is a set of documentation (with code examples to follow) describing how to build Plugins for Open Atrium 2.0 in Drupal 7.

For the past several months, Phase2, with help from some wonderful sponsors, has worked to build a solid architecture and framework for Atrium 2.0. To make community contribution as productive, valuable, and easy as possible, we focused on the API and framework first. Now that we have a stable core and solid architecture, we are excited for the community to start working on Plugins that will provide much of the end-user functionality of Open Atrium.


Getting Started with Open Atrium 2 Development

Open Atrium 2 (OA2) was architected to make it very easy to develop your own Plugins. An OA2 Plugin is just a Drupal module, whether it comes from Apps, Features, or is a regular hand-written module. The Community Plugin Toolkit (CPT) describes how you hook your module into the Groups and Sections that provide the structure and access control for OA2. Your Plugin can define new Content Types, Views, Panes, Blocks, Boxes, Menus, or almost any other normal piece of Drupal content.

For example, let’s say you want to add an Event Calendar plugin to Open Atrium 2. You create an Event content type, add Date fields for the start and end of the event, and add a Google Map Address field for the event location. Next, you enable the Calendar module and create a View for your event calendar. Using the Features module, you export all of this into your own “Event Module”. Once your module is enabled in Open Atrium, you add the Group and Section fields to your Event content type as described in the Community Plugin Toolkit. Then you create a new Panelizer layout for a Section Page called Events. On this page layout you add the Calendar view to the main content region, then add an Upcoming Events view to the right sidebar.

Voila! You now have a rudimentary Event calendar landing page (Section) for a Group. The Group Admin can create a new Section using your Event layout. Members of the Group can create Event content that is assigned to their group. You can even create multiple calendars with different levels of access control, such as a private calendar within a private Section of your Group.

The built-in message and notification system in Open Atrium 2 will automatically generate messages when Events are created, updated, or deleted. You can add your own custom hooks to create a new type of message, using custom email templates, to remind members when an event is about to begin. You can also create templates to control how your custom event message is displayed in the OA2 Recent Activity “river”.

Creating Plugin functionality for Open Atrium isn’t much different from creating normal Drupal functionality. The steps to integrate your content types into OA2 Groups and Sections are easy to follow. You still have all of the power of Drupal, along with the enhanced content organization and access control of OA2. This architecture allows you to easily re-use your existing Drupal modules within Open Atrium.

Contributing to the Community

While Phase2 Technology continues to maintain the core components of Open Atrium 2, this distribution is open source and community-driven, so most of the functionality will come via community plugins. For example, Open Atrium 2 doesn’t have any “CaseTracker” functionality as part of the core. There are a variety of Project Management tools that different people might need, from simple “to-do” lists, to issue tracking, to integration with third-party ticketing systems such as JIRA. Open Atrium does not want to dictate a specific solution hard-coded into the core project; instead, it allows different Plugin solutions to better meet customer needs.

The new architecture should make it relatively straightforward to integrate your existing modules and functionality into Open Atrium 2. By sharing your Plugins with the community, you will be helping to build a world-class Open Source solution for Collaboration, Project Management, and Communications. Eventually, we will have an “App” model that can be used to easily share your Plugins throughout the Open Atrium community, along with a Plugin browser area on the Open Atrium community site for sharing information about your plugins. For now, just use Drupal.org and submit your modules as normal Drupal Projects. With the Drupal.org Sandbox functionality, any developer can get started and post their own OA2 Plugins.

Where is the code?!

We know it is difficult to start Plugin development without the actual Open Atrium 2 code base to play with. We hope to have the OA2 code in the Drupal.org Open Atrium project space within the next week or so. This will be very early alpha code, but it will at least give the community something to start with. Once this code is posted, keep in mind that it will be missing functionality and have plenty of bugs. We will eventually enable the issue tracker on Drupal.org to handle patches and issues.

In the meantime, feel free to post questions or plugin announcements to the Development group on the community.openatrium.com site or contact us directly at [email protected].

Feb 27 2013


Are you ready to DrupalCon, Portland style? Here at Phase2 we are anxiously waiting to hear which sessions get accepted, while we dust off all of our organic cotton shirts with birds on them and study up on local Portland micro-brews.

With about 600 sessions submitted to DrupalCon Portland, deciding which sessions to comment on is no easy task. After polling Phase2’s DrupalCon enthusiasts, I have created a list of Phase2’s top 10 session picks for 2013. This list is a mix of Phase2 sessions and sessions from the Drupal community that we are really stoked about.

Check out these awesome sessions, and be sure to add a comment if you would like to see them too!

Speakers: Molly Byrnes, Steven Merrill, Brian McMurray, John Robert Wilson, and Heather Johnson

When Lady Gaga and the Black Eyed Peas are helping you promote your Hurricane Sandy relief campaign, you can’t get bogged down with website troubles. See how Phase2 and the Robin Hood Foundation created elegant and effective Drupal web solutions.

Speakers: Kris Vanderwater, Kristof De Jaeger, Chris Johnson, Matt Cheney, and Neil Hastings

When Drupal rock stars come together, great things happen. You won’t want to miss these leading Drupal layout experts discussing the different ways to lay out your site and the benefits of each solution.

Speaker: Josh Miller

We are as excited for Josh’s ecommerce presentation as for his Q&A discussion at the end. Josh has a wealth of knowledge about ecommerce in Drupal, and we can’t wait to pick his brain!

Speaker: Mike Potter

Learn what’s new with Open Atrium and how you can use the new and improved platform to easily add functionality and integrate with other systems.

Speaker: Michael Meyers

It is more important than ever to learn about the needs of the biggest companies using Drupal. Learn from Michael Meyers how we can move forward with Drupal enterprise development.

Speaker: Tobby Hagler

Tobby Hagler makes structured data and content strategy fun and approachable with his Dungeons and Dragons example. Tobby is a DrupalCon veteran; you won’t want to miss this!

Speaker: Dan Mouyard

This should be a great session for all the advanced Drupalers.  Let’s talk markup in Drupal 7!

Speakers: Brandon Morrison, Patrick Hayes, Josef Dabernig, Pol Dell'Aiera, Tom Nightingale, and Nate Parsons

You really can’t miss the opportunity to learn about mapping in Drupal from some of the leading Drupal mapping experts around. Learn about mapping efforts to date and what’s on the horizon for Drupal 8.

Speakers: Kellye Rogers, Josh Turton, and Rob Roberts

Learn how Energy.gov created an innovative responsive solution and avoided having to wait for a complete redesign.

Speakers: Vesa Palmul, Jeff Walpole, Fred Plais, and Michael Caccavano

The leaders behind some of Drupal’s most talked-about mergers reveal how they successfully navigated their companies’ mergers and the unique challenges of M&A in the services industry.


Feb 25 2013

I am very excited to participate in an OpenPublic webinar hosted by Acquia this Wednesday, with my Phase2 public sector partners in crime, Greg Wilson and Karen Borchert. Our goal for OpenPublic this year is to celebrate and highlight what can be built with OpenPublic and Drupal, and how we can nurture this growing community of users.

We will start by introducing how OpenPublic works and its capabilities out-of-the-box. Then you will get to hear about the awesome new features being developed from some major government projects that use OpenPublic. We will also demo some often requested features that can be built or added into OpenPublic:

  • A new Security Application for managing user policies and authentication controls
  • A great approach on multi-lingual content built on the best core and contrib Drupal modules
  • An example of a simple API for feeding critical information to other Drupal and non-Drupal Sites

We’ll finish up with some additional examples of some of the exciting OpenPublic based projects from 2012 and what’s to come in 2013, including some community events that Phase2 is sponsoring.

As Drupal and OpenPublic mature, we want to make sure that we encourage organizations to use OpenPublic to take care of the basics, and to continue to dedicate resources to contributing additional site-specific development, like data APIs, for the community at large. We hope that the webinar will get folks excited about the great sites that can be built with OpenPublic; it’s what the distribution exists for: to provide a great out-of-box experience that can (still) be used to build custom sites.

Watch the webinar here:

Feb 14 2013


With mobile at the forefront of digital government initiatives, laying the foundation for a mobile solution for the Department of Energy (DOE) was a priority. With this in mind, we wanted to meet this challenge for DOE in a way that was efficient and affordable. We saw a unique opportunity to quickly and easily adapt the existing site to be flexible for all devices. This bypassed a long and possibly difficult redesign process, saving our friends at DOE time and money. By tasking a single developer to work with their existing assets and make them flexible, we were able to create a mobile solution straight from their existing website.

Our starting point included a solid foundation: a static, pixel-based, 12-column grid, with some JavaScript doing additional layout tweaks. We knew that with this as a base, we could create a responsive solution with what DOE already had. Instead of creating a whole new design, we were able to work primarily on the front end with CSS and not have to add too much JavaScript or Drupal development to the process.

Our strategy here was to start with the basics: converting the pixel-based grid to a percentage-based grid, to achieve the broadest results in the shortest amount of time. This worked quite well. Our grid was 1000px wide, which made the math quite simple (a 240px column simply becomes 24%); where it wasn’t so tidy, we made subtle tweaks to padding and widths to make it work.

Once the grid was made flexible, we started shrinking down the page in-browser, looking for points at which the design and layout broke down. When content got too narrow or the layout just didn’t work, we added additional style sheets at these points, which switched the layout and styling up a little bit to work better. This process is detailed at Web Designer Wall.

We also made some adaptations to the large highlighted “hero” images and image galleries, so that they would provide different size images at different screen sizes, using the Adaptive Image Styles module. 

These techniques brought us most of the way toward a fully mobile-friendly site. There are a few outstanding visual pain points; that’s where we have brought our design partners, HUGE Inc., into the process, asking them to provide additional insight and guidance for this agile solution.

The success of this project is as much in our relationship with the Department of Energy and our passion for innovation as it is in any engineering techniques. We wanted to give them the best, most efficient solution we could, and their trust in us allowed us to experiment to find it.

The payoff for this approach is that, in just 65 hours of development and project conception, we have come most of the way to a fully-realized mobile solution. We are looking forward to completing this project with the Department of Energy, addressing further mobile needs and refinements. Stay tuned for the deployment of this work in the very near future!

We're working to keep Energy.gov as a model government site, not just in its overall presentation but also in how we cost-effectively manage and develop the site. This move to a mobile solution without a complete redesign is a great example of what we're working toward.
-- Robert Roberts, Director of Digital Strategies, Department of Energy


Jan 28 2013

Node Access: Who wins?

While Drupal has always had a pretty robust access control mechanism, it was difficult in the past to handle multiple contributed modules that wanted to impose different types of access control. Who wins? If a node is within a private Organic Group but is also in a public Forum, is the node private or public? In Drupal 6, multiple access control modules could conflict and had to take special care to co-exist. It was messy.

In Drupal 7, the access control API was cleaned up, and it is now relatively easy to handle multiple access control systems. Let’s learn the best way to implement your own access control system in Drupal 7.

The Perils of hook_node_access

Drupal 7 added a cool new hook for developers: hook_node_access($node, $op, $account). On the surface, this seems like the ultimate hook to control access. You simply return NODE_ACCESS_ALLOW, NODE_ACCESS_DENY, or NODE_ACCESS_IGNORE. In reality, this hook can be very dangerous! It allows you to override the access control of any other modules on your site. For example:

function mymodule_node_access($node, $op, $account) {
   return NODE_ACCESS_DENY;
}

This would deny access to all of your content regardless of any other access control. If it returned NODE_ACCESS_ALLOW, it would *allow* access to all of your content, unless some other module returned NODE_ACCESS_DENY, in which case access would still be denied.

Even worse, your custom hook_node_access function is ignored by Views, Menus, and other content queries on the site. Even though you have denied access to all content, you’ll still see all of your normal menu links, and your nodes will still be listed in Views. Only when you click on a node to view its full detail page will you be denied. You might be violating content privacy just by showing that certain content exists!

A “Deny” based approach

Drupal is a “deny-based” access control system. In other words, if anybody denies access to a node, then the node is blocked. This is similar to having multiple locks on your door: you need to open ALL the locks to enter. Using hook_node_access to return NODE_ACCESS_ALLOW violates this convention and is generally a bad idea. Instead, you should design your modules to DENY access when needed, and otherwise return NODE_ACCESS_IGNORE to allow other modules to decide if access should be granted. The hook_node_access results are the “last line of defense” for denying access and don’t stop Views or Menus from showing parts of the content anyway.
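
Here is a minimal sketch of that deny-or-ignore pattern, assuming a hypothetical module named mymodule and a hypothetical private_note content type:

/**
 * Implements hook_node_access().
 *
 * Deny-or-ignore pattern: deny in one specific case, otherwise return
 * NODE_ACCESS_IGNORE so other modules (and the Grant system) can decide.
 */
function mymodule_node_access($node, $op, $account) {
  // $node is an object for 'view' operations; for 'create' it is the
  // content type machine name, so guard accordingly.
  if ($op == 'view' && is_object($node)
      && $node->type == 'private_note'
      && $node->uid != $account->uid) {
    return NODE_ACCESS_DENY;
  }
  return NODE_ACCESS_IGNORE;
}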

The correct approach is to use the Drupal “Grant” system. This API existed in previous versions, but in Drupal 7 it was cleaned up and works much better. The key hooks are hook_node_grants($account, $op) and hook_node_access_records($node). The documentation can be hard to follow and talks about “realms” and “grant ids”. Instead, let me explain this API using the concepts of Locks and Keys.

hook_node_access_records are Locks

The hook_node_access_records hook is called to determine whether a specific node should be locked. Your module has the opportunity to create a Lock with a specific “realm” and “id”. The “realm” is like the color of your lock and is typically the name of your module. This allows a single node to have multiple locks with different colors (multiple modules). To open the door, you would need keys that match each color of lock on the door.

Within a realm, you can have multiple locks with different “ids”. This is like giving the colored lock a specific serial number corresponding to a key with the same color and serial number. If you have a key with the correct color and serial number, then all of the locks of that color are opened. To summarize:

  • Each lock Realm (color) must be opened to access the node
  • Only one ID (serial number) within the Realm needs to be unlocked to open that entire Realm.

These node Locks are stored in the node_access database table, which means they are cached. This table is only rebuilt when you run Rebuild Permissions in the Status Report area of your Drupal admin. When you save a node, hook_node_access_records is called only for the node being saved, allowing its locks to be updated. If changing a node can affect the locks on other nodes, then you’ll want to call node_access_acquire_grants($node) to update the locks on the related nodes.
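
As a hedged illustration of that last point, suppose saving a Section node can change the privacy of the pages inside it; a hypothetical hook_node_update() implementation could rebuild those locks (the module name and helper function below are illustrative, not actual Open Atrium code):

/**
 * Implements hook_node_update().
 *
 * Hypothetical sketch: when a Section node is saved, rebuild the
 * node_access locks for every page that belongs to that Section.
 */
function mymodule_node_update($node) {
  if ($node->type == 'oa_section') {
    // mymodule_get_section_page_nids() is an illustrative helper that
    // would return the nids of all nodes referencing this Section.
    foreach (mymodule_get_section_page_nids($node->nid) as $nid) {
      $page = node_load($nid);
      // Re-runs hook_node_access_records() for this node and updates
      // the node_access table with the fresh locks.
      node_access_acquire_grants($page);
    }
  }
}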

hook_node_grants are Keys

The hook_node_grants hook is called to create a “key-ring” for a particular user account. It is called dynamically on each page load to determine what keys the current user has. As mentioned above, a particular node can be accessed only if the user has the appropriate keys for each Realm (color) of locks on the node. Because this key-ring is not stored or cached, it is important to make your hook_node_grants function very fast and efficient.

When implementing hook_node_grants, you are typically only concerned about the Realm implemented by your module (remember that Realm is usually your module name). You probably don’t want to be messing with keys for other modules. Your hook just needs to decide if the user has any of *your* keys. Specifically, your hook needs to return a list of key IDs (lock serial numbers) within your Realm for the specified user account.

REAL Node Access!

The beauty of using the two Grant API hooks described above is that they are respected by Menus, Views, and optionally other queries within the database API. If the user does not have the proper keys to open the locks on a node, then the node will never display in any Menu or View. Unlike hook_node_access(), this properly protects the privacy of your content.

With Views, you can turn off the node access filtering in the Query Options of the Advanced section of the View. Turn on the “Disable SQL rewriting” option and now Views will return all results regardless of the keys and locks.

If you create your own database queries using the Drupal database API, you can also easily filter results based upon node access. Simply add a “tag” to the query called “node_access”. For example:

$query = db_select('node', 'n')
  ->fields('n', array('nid', 'title'))
  ->addTag('node_access');
$result = $query->execute();

The above example would only return the nid and title of nodes the current user can access.

UPDATED: It is important to include this addTag(‘node_access’) for ANY query that you perform that returns node results to a user. Otherwise you’ll be introducing a security hole into your module. You can also use EntityFieldQuery, which automatically filters results based upon node access.
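
As a quick illustration, here is a minimal EntityFieldQuery sketch that returns roughly the same result as the db_select() example above, with node access enforced automatically for users who lack bypass permissions:

// Fetch the ids of published nodes; EntityFieldQuery applies node
// access checks automatically, so locked nodes are filtered out.
$query = new EntityFieldQuery();
$result = $query
  ->entityCondition('entity_type', 'node')
  ->propertyCondition('status', 1)
  ->execute();
$nids = isset($result['node']) ? array_keys($result['node']) : array();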

An Example from Open Atrium 2

In Open Atrium 2, we implement a flexible node access system. All content is assigned to a specific “Section” within a normal Organic Group. Each Section can be locked based upon Organizations, Teams, and Users. For example, if Mike and Karen are assigned to the “Developer” Team, and the “Developer” Team is assigned to a specific Section, then only Mike or Karen can see the existence of that Section and the content within it. To accomplish this, we implement hook_node_access_records to assign locks, and hook_node_grants to assign keys.

First, let’s assign the locks for content within a Section:

/**
 * Implements hook_node_access_records().
 */
function oa_node_access_records($node) {
  $sids = array();
  // Handle the Section node itself.
  if ($node->type == OA_SECTION_TYPE) {
    if (!oa_section_is_public($node)) {
      $sids[] = $node->nid;
    }
  }
  // Now handle pages within the Section.
  elseif (!empty($node->{OA_SECTION_FIELD})) {
    foreach ($node->{OA_SECTION_FIELD}[LANGUAGE_NONE] as $entity_ref) {
      $section = node_load($entity_ref['target_id']);
      if (!oa_section_is_public($section)) {
        $sids[] = $entity_ref['target_id'];
      }
    }
  }
  if (empty($sids)) {
    return array();
  }
  // Create one "lock" per protected Section: our realm is the color,
  // the Section ID is the serial number.
  $grants = array();
  foreach ($sids as $sid) {
    $grants[] = array(
      'realm' => OA_ACCESS_REALM,
      'gid' => $sid,
      'grant_view' => 1,
      'grant_update' => 0,
      'grant_delete' => 0,
      'priority' => 0,
    );
  }
  return $grants;
}

For a Section node, we just grab the node ID. For pages within a section we grab the referenced section IDs. Once we have a list of section IDs, we loop through them and create a $grants Lock record giving our module name OA_ACCESS_REALM as the Realm (color), and the Section ID as the ID (serial number). This adds our colored Lock to the nodes that are protected within Sections, using the specific Section ID as the lock serial number.

Next, let’s build the key-ring for the user account (*Note, this is a non-optimized version of code for instructional purposes):

/**
 * Implements hook_node_grants().
 */
function oa_node_grants($account, $op) {
  $grants = array();
  // oa_get_sections() returns a list of all section IDs.
  $sections = oa_get_sections();
  foreach ($sections as $sid) {
    // Determine if the user is a member of this section.
    if (user_in_organization($sid, $account) ||
        user_in_team($sid, $account) ||
        user_in_users($sid, $account)) {
      $grants[OA_ACCESS_REALM][] = $sid;
    }
  }
  return $grants;
}

For each Section that the user is a member of, we return the Section ID for that Realm in the $grants array. If a particular node has Section locks, only users with a key to that Section will be granted access. For example, if a node has locks for $sid 1, 2, and 3, but the user only has a key for $sid 4, then access is denied. But if the user has a key for $sid 1, 2, or 3, then access is granted. You only need a single matching key within the Realm to grant access.

Conclusion

If you think about the Drupal node access system as a system of Locks and Keys, then it’s pretty easy to understand. It’s a very powerful system and one of the key strengths of Drupal. Try using the Grant API, and only use the new hook_node_access as a last resort, especially when building contributed modules where your hook_node_access might conflict with other modules.

Jan 03 2013

When it comes to creating layout and design, empowering content editors is key to a successful project. We learned just how important content flexibility can be while working with Robin Hood, New York’s largest poverty-fighting organization. When Hurricane Sandy hit, Robin Hood was able to use the flexible content environment that we developed for them to quickly and efficiently launch new pages detailing the mobilization of their large-scale relief effort to aid victims in the tri-state area.

Robin Hood needed a flexible content solution for their website which emphasized their vibrant design, and which could be updated quickly as their relief efforts expanded. They wanted control when it came to content layout and they wanted to be able to reuse content like their rich media, dynamic displays, and large images and videos across their site. Our solution was to develop a flexible content architecture using Template Field and a customized WYSIWYG.

 Hurricane Sandy Relief Campaign homepage, created with Template Field

We organized Robin Hood’s content around a content type called a ‘Pane,’ which has a Template field. A number of custom templates are available that allow the Robin Hood team to pick different combinations of multiple-column layouts, as well as some highly interactive templates that create interactive animations, powered by Raphaël, a JavaScript library for vector graphics on the web.

A custom interactive template for Template Field called "City Stats" powered by RaphaëlJS

We developed another content type called ‘Pane Stacks,’ which is a collection of Panes stacked together to create complex and compelling pages. Jimmy Park, of Robin Hood, says, “Pane Stacks are awesome. It allows us to be so flexible and fast. Both the homepage and the [Hurricane] Sandy pages were done with ‘configuration.’ Ridiculous when you think about it.”

In addition to the flexibility provided by Template Field and the content architecture of the Robin Hood site, content editors are further empowered by a robust WYSIWYG editor within the site. Custom WYSIWYG styles help content editors to keep within brand style guidelines but still have room to customize color, font, size and spacing when deviations from the site’s styles are necessary. Additionally, content editors can easily embed blocks and Beans into WYSIWYG content, allowing them to insert video content, social media widgets, and custom interactive call-to-action elements.

The content flexibility created with Template Field and WYSIWYG customization gives a great deal of creative power to content editors, allowing Robin Hood to quickly create Hurricane Sandy Relief campaign pages and get their message out to donors, media, and others efficiently.

 A ‘pane’ with video embeds

How Template Field Works:

Template Field was developed by a group of Drupal community and Phase2 team members led by Roger López, Neil Hastings, Brian McMurray, and John Robert Wilson. From the beginning it was designed to work as a developer tool with a robust API, as well as a content administrator tool with an administrative interface. Template Field is made up of three component modules:

  • Template API – The base API for defining, loading, and working with Templates
  • Template Field – A Field API Implementation that allows users to add templates to their content types
  • Template Mustache – A rendering library for Template API that uses Mustache

The Template Field module suite was an effort to avoid having to create a different content type per layout and then theming each one differently. Instead, Template Field bundles together the HTML, CSS, JS, and specialized inputs necessary for a particular custom layout, and stores all of those different layouts uniformly as a Drupal entity field.

What this means is that we (or a content administrator using an admin interface in the site) can put together a package of flexible field-like inputs (text fields, file uploads, WYSIWYG text areas, checkboxes, etc.), specific HTML for formatting the output (using the Mustache template library, more on that in a second), CSS for styling, and JavaScript to add interactivity. All of these things are bundled together as “templated content.”

You can then create a single content type on your site (perhaps you’d call it ‘page’), add a template field, and then when creating page content, you have access to your templates to pick and then fill out with content. You could have a ‘page’ with multiple extra image files uploaded, a ‘page’ with highly interactive JavaScript, or a ‘page’ with its content divided into multiple columns – all using the exact same content type, so it only needs to be themed once.

The rendering layer of the Template API is pluggable, which means that a rendering library other than Mustache could be used as well. Currently, Template Mustache is the only rendering library available.

Templates can be created in code directly, with a developer specifying inputs (like fields on a content type) and specific CSS, JS, and HTML files for the template to use; or templates can be created and managed using an administrative interface that can store template information in the database or export it to code later using Features. Because Mustache is a really simple template language, and because the Template API keeps input names for templates simple, even people unfamiliar with the intricacies of Drupal theming can put together new custom layouts in minutes. And because a content administrator can add the CSS or JavaScript necessary for the template directly in the administrative interface, you don’t have to update your site’s theme to roll out new designs.
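
To give a flavor of just how simple Mustache is (the Template API’s own hooks are beyond the scope of this post), here is what rendering a template looks like with the standalone mustache.php library; the markup and input values are purely illustrative:

// Using the mustache.php library (github.com/bobthecow/mustache.php);
// template inputs map directly to {{placeholders}} in the markup.
$mustache = new Mustache_Engine();
echo $mustache->render(
  '<div class="stat"><h2>{{title}}</h2><p>{{value}}</p></div>',
  array('title' => 'Jobs Created', 'value' => '1,200')
);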

Template Field has proved itself in creating flexible experiences for admins responding to crises and unplanned relief situations. Content staging elements can also be added to Template Field to provide revisioning and permissions for admins and content editors. We are excited to see how Template Field will be refined and used in more projects moving forward.

Dec 12 2012

Most Popular blocks are a pretty common requirement. One nice solution for Drupal is the Most Popular module, which provides several sources for your blocks, such as the Drupal core Statistics and Comment modules, as well as Google Analytics (see the issue regarding Google Analytics/Reports), Disqus, and AddThis, allowing you to create different block types like Most Viewed, Most Commented, and Most Shared. The Most Popular module also provides a lot of nice theming options and allows you to set up tabbed blocks without writing any code. I won’t go into documenting how the module works, as that has already been done.

You may decide you just want to use blocks from Views. If you try to create a Most Commented block using the Disqus module, you quickly discover it currently only provides a field for Views and not a sort option. Fortunately, this is not difficult to remedy: a simple module can be created to provide this functionality (to be really creative, let’s call it disqus_comment_count). First, implement hook_schema in the module’s .install file to add a new table to hold the comment counts.

/**  
 * Implements hook_schema().  
 */
function disqus_comment_count_schema() {
  $schema = array();
  $schema['disqus_comment_count'] = array(
    'description' => 'Stores counts from disqus',
    'fields' => array(
      'nid' => array(
        'type' => 'int',
        'not null' => true,
        'description' => 'nid of related node',
      ),
      'count' => array(
        'type' => 'int',
        'not null' => true,
        'description' => 'number of reads',
      ),
    ),
    'indexes' => array(
      'disqus_comment_count_nid' => array('nid'),
      'disqus_comment_count_count' => array('count'),
    ),
    'primary key' => array('nid'),
  );
  return $schema;
}

The meat of the .module file could look something like:

/**
 * Implements hook_views_api().
 */
function disqus_comment_count_views_api() {
  return array(
    'api' => 3,
  );
}

/**
 * Implements hook_cron().
 */
function disqus_comment_count_cron() {
  // Assumes the Disqus module is installed.
  // Could be made into module admin settings if it is not.
  $secret_key = check_plain(variable_get('disqus_secretkey', ''));
  $forum = check_plain(variable_get('disqus_domain', ''));

  // According to the Disqus API (disqus.com/api/docs/threads/listPopular/),
  // acceptable interval options are: 1h, 6h, 12h, 1d, 3d, 7d, 30d, 90d.
  $interval = '1d'; // Hard-coding one day, but could make this an admin setting.

  // Using the Disqus PHP API downloaded to sites/all/libraries from
  // github.com/disqus/disqus-php.
  $path = libraries_get_path('disqusapi');
  require($path . '/disqusapi/disqusapi.php');
  $disqus = new DisqusAPI($secret_key);
  $data = array(); // Will hold return data.

  try {
    $data = $disqus->threads->listPopular(array(
      'forum' => $forum,
      'interval' => $interval,
    ));
  }
  catch (Exception $e) {
    // Log or throw exception.
  }

  if (!empty($data)) {
    // Clear out the table and insert new rows.
    db_query('DELETE FROM {disqus_comment_count}');
    foreach ($data as $comment_info) {
      $nid = str_replace('node/', '', $comment_info->identifiers[0]);
      $record = array('nid' => $nid, 'count' => $comment_info->posts);
      drupal_write_record('disqus_comment_count', $record);
    }
  }
}

Finally, add a Disqus count sort option for Views in the module’s .views.inc file:

/**
 * Implements hook_views_data().
 */
function disqus_comment_count_views_data() {
  $data = array();
  $data['disqus_comment_count']['table']['group'] = t('Disqus Comments');
  $data['disqus_comment_count']['table']['join'] = array(
    'node' => array(
      'table' => 'disqus_comment_count',
      'left_field' => 'nid',
      'field' => 'nid',
      'type' => 'left outer',
    ),
  );
  $data['disqus_comment_count']['count'] = array(
    'title' => t('Comment Count'),
    'help' => t('Number of Disqus posts on a node.'),
    'sort' => array(
      'handler' => 'views_handler_sort',
    ),
  );
  return $data;
}

The above functions are relatively simple and can certainly be made more robust, but they illustrate a straightforward means of adding a Disqus comment count sort option for Views.

Dec 06 2012

Even with the emerging push to get off paper and onto the screen through rapid prototyping, it is still important to write down what you are going to build before you start building it. There are several strategies for communicating to both stakeholders and developers exactly what they’re getting themselves into. Here are a few I use and love.

Wireframe Annotations

I’ve gone on before about lowering the fidelity of your wireframes in order to get them to the screen more quickly.  I still believe in this concept, but the part you can’t skip in your wireframes is the annotations. They should not be formal business-analyst-sounding functional requirements starting with, “The system shall execute the…” mostly because no one will read that. What they should be is a handy reference for clients and developers alike to communicate how the sketches they see will actually get on the page.

The more you write down, the more you are forced to think about how something will work, and it will be easier to see where the holes are. It can be really difficult to decipher precise functionality from wireframes alone. The client/stakeholder needs to have their expectations set in reality. If they’re just looking at a picture, they have no idea how it’s going to work.

This will help the developers out too, because they don’t have to think so much when they are building. For example, you’re creating a news site that has articles on it. The article pages have a box in the right rail for “related headlines”. There are a multitude of ways those headlines can get on this page, but how does the client want them selected? Will they have the staff in place to manually decide which related articles are the most relevant? Do they have a tagging system in place that you can use to automate the selection of the articles? Does the order in which they appear matter to the client? If so, how is it determined? These are just a few questions you’ll ask yourself and your client as you go through the process of annotating your wireframes. In the end, you should have a solid solution for how to proceed with building the page, which will save your developers tons of time and make your project manager love you.

Interactive Mockups

A lot of times clients have already gone through the wireframing and/or design process themselves and hand over a set of comps to start building from. Inevitably, issues will arise as you begin laying out your plan for building the site. A good solution for working through these issues with your clients is to put the documents into an interactive mockup using a service such as Balsamiq or InVision (or a host of others). This allows you to have a running dialog directly on the comps/wires via a comment/reply scenario.

This is a very simplified version of what you can do using these interactive mockup tools, but it’s still very effective. It enables conversations to happen asynchronously, giving both parties time to think about how they want something to work or offer a better solution.

Write it Down

Sometimes having multiple conversations going via wireframe annotations, interactive mockups, emails, etc. can lead to more confusion than solutions. In this case, it’s often best to write it all out into one cohesive strategy in a centralized location, such as a Writeboard on Basecamp or a notebook in Open Atrium.

Overall, just remember that your life will be easier if you always write it down.

Dec 04 2012

Last week we had a great three-day code sprint for Open Atrium 2 (OA2) and Panopoly here at the Phase2 office in Alexandria. Joining the Phase2 Open Atrium team (Mike Potter mpotter, Peter Cho pcho, Frank Febbraro febbraro) was the Panopoly team of Matt Cheney (populist) and Brad Bowman (beeradb) from Pantheon. The goal of the sprint was to work on various issues in Panopoly/Panels that needed improvement for Open Atrium 2. We finished our initial agenda on the second day and spent the remainder of the sprint digging into even more detailed issues. Here are some highlights:

Access Control [#1772844 Allow More Granular Permissions in the IPE]
The goal was to control access to the IPE so OA2 Group Owners could only customize their own group and section pages. Shoutout to merlinofchaos who worked closely with Matt on IRC to get this committed and adding [#1854374 Panels IPE needs additional permission for Page Manager] to Panels itself so Group Owners wouldn’t have access to customize landing pages such as the Home page.

Allowing Multiple templates [#1790434 Panelizer panel select tab (activated with 'allow panel choice') not appearing when adding a node]
Open Atrium 2 needs the ability to define several default page templates for Section landing pages. A Panelizer option called “allow panel choice” was available, but the list of added templates never showed anywhere. Merlin committed #1790434 to expose the selection and a Panopoly issue [#1854182 Support Allow Panel Choice for Panelizer] was added to finish looking at the access rules when using multiple panel choices.

Context (Module) and Panels [#305289 Integration with Panels module]
Many people have considered Panels and Context (Module) to be two completely different (and opposing) methods to perform site building and layout. But I’ve always thought they could co-exist, and I found this old #305289 issue talking about integration. Panels and Context both have their own “access rules” or “conditions” that can be used to control whether “panes” or “blocks/boxes” are displayed. And while these two sets of condition rules overlap a great deal, there are still some contrib modules that set a particular Context condition that would be nice to use to control Panels panes. It turned out to be pretty easy to write a new CTools access plugin that fires whenever a specified Context is true. The plugin simply shows a list of all Contexts defined in the Context UI and allows you to select which ones you want to use. If any of the selected Contexts are set, then the CTools access plugin returns True to select the Panels pane using that selection rule. It’s a limited use-case, but now you can use the Context module to control which panes are displayed on your page within Panels.
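
The actual plugin isn’t reproduced here, but a minimal CTools access plugin along these lines might look like the following sketch; the file path, “mymodule” prefix, and form details are illustrative assumptions rather than the real Open Atrium code:

/**
 * plugins/access/active_context.inc -- illustrative path inside a module
 * that registers a CTools plugin directory. A production plugin would
 * normally also define a 'summary' callback for the admin UI.
 */
$plugin = array(
  'title' => t('Context is active'),
  'description' => t('Pass this selection rule when a chosen Context is set.'),
  'callback' => 'mymodule_active_context_check',
  'default' => array('contexts' => array()),
  'settings form' => 'mymodule_active_context_settings',
);

/**
 * Settings form: list every Context defined in the Context UI.
 */
function mymodule_active_context_settings($form, &$form_state, $conf) {
  // context_load() with no arguments returns all defined contexts,
  // keyed by machine name.
  $form['settings']['contexts'] = array(
    '#type' => 'checkboxes',
    '#title' => t('Contexts'),
    '#options' => drupal_map_assoc(array_keys(context_load())),
    '#default_value' => $conf['contexts'],
  );
  return $form;
}

/**
 * Access callback: TRUE if any selected Context is currently active.
 */
function mymodule_active_context_check($conf, $context, $plugin) {
  $active = array_keys(context_active_contexts());
  return (bool) array_intersect(array_filter($conf['contexts']), $active);
}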

Media module Accessibility [#1847912 Support Alt and Title for Media Integration]
Panopoly provides excellent integration with the Media module, but there was a specific issue around Accessibility and missing Alt and Title fields. We did some testing to get sylus‘s patch applied to enable Alt and Title fields for embedded media in WYSIWYG.

Inherited Profiles [#1356276 Make install profiles inheritable]
The Open Atrium 2 distribution profile will inherit from the Panopoly distribution. We continued testing work on the core patch needed to support inherited profiles. This patch needs some D8 testing so we can get it back-ported to D7. Frank spun up a D8 environment to test and we still need to track down a warning message before marking this as reviewed.

Apps
Open Atrium 2 will be heavily based on modular “Apps”. We had a discussion about Apps and came to a consensus on how we want to handle Apps between Open Atrium 2 and Panopoly to ensure as much compatibility as possible. We will continue these discussions with other App providers later this month to get everybody on the same track.

Multilingual
Multilingual support is important to both Open Atrium and Panopoly. We performed some testing with multilingual content and language translations. This looks pretty clean so far with the current versions of the Internationalization (i18n) module suite, so no new issues were added.

Other
We discussed and added several other issues to the Panopoly queue that we will all continue to work on. There was a great commitment from Pantheon to help work on any Panopoly issues needed to make Open Atrium 2 a success!

Beyond the technical coding details, the sprint was a great chance to exchange information about Panopoly and Open Atrium 2 and build an even better partnership for the future. Open Atrium 2 is a true win-win project for both organizations and for the Drupal community at large and it was great to see community members come together to help with these issues. It’s exactly the spirit needed to make Open Atrium 2 a great success. I look forward to having continued involvement from the Panopoly team at Pantheon in the Open Atrium 2 project!

If you are interested in following Pantheon’s lead and sprinting with us on Open Atrium 2, either here in DC or remotely, please contact us at [email protected].

Nov 26 2012

As designers, we’re pretty spoiled for options to express our visual and interaction designs. That said, one tool I end up using at least a little on every project is the venerable OmniGraffle. Sometimes it’s just to draw a technical diagram or to quickly sketch a lo-fi wireframe, but in any case, I end up using it when I need to capture an idea visually in a way that’s clear and polished.  

That said, any asset you build for a project needs to be weighed against the time it takes to create it. I value good communication very highly, including artifacts, but there isn’t an unlimited amount of time; to turn to Graffle instead of throw-away paper or long-lived prototypes, it needs to be the right tool for the job and I need to be fast with it.  

To speed up working in Graffle, one of my favorite “tricks” isn’t a trick at all; it’s right on the default toolbar (you have already customized your toolbar, right?): the inspector palette. If you’re not familiar with Graffle’s Inspectors, you may not know that they have a host of behaviors that normal palettes in OS X don’t, and they’re all useful:

  • Tabs can be made sticky: when you select another tab, they will remain open
  • Groups of tabs (Style, Properties, Canvas, and Document) can be moved around independently
  • Each group has a keyboard shortcut (normally command-1 through 4) to quickly show and hide

When I’m working I tend to be in a particular mode, and I go back to the same tabs and groups of controls over and over again. For instance, when I’m quickly building a new diagram I’ll rough it in with lots of boxes, and for each shape I’ll change the color and stroke. In this mode, I want quick access to the Fill and Stroke tabs, so I just do the following:

  1. If not open already, open the Inspectors (from the document toolbar, from the Inspectors menubar command, or with the keyboard shortcut cmd-shift-i)
  2. In the Style group, double click the Fill and Stroke tabs (they’ll get a little green lock icon when you double click them)
  3. Grab groups 2-4 and drag them to snap them off and position them as you want (or close them entirely)

OmniGraffle Style Inspector  with Fill and Stroke locked and Shadow Open

The result is that right after I place an element, I can quickly change all its settings without having to open or change items (if I need to, I can open other tabs without closing Stroke and Fill). It may sound small, but this actually helps flow a lot: you don’t need to remember where things are or switch tabs constantly. There are also other advantages; for instance, you can drag color swatches between tabs, from fill to stroke or vice-versa. You can also position other palettes exactly in relation to your fixed controls; I like to put my color picker right next to the Style inspector, so it’s a quick trip to get a new color.

When I’m finishing up a diagram, I’ll tend to swap modes and want easy access to the Canvas’s alignment tab and properties. Hitting cmd-1 will close my Styles group (saving what I’ve locked), and I can then open my alignment and property controls in the space where my styles just were.

If you’re not picking your Inspector tabs for the mode you’re in, give it a try. It may take some tweaking, but you’ll probably find one or two arrangements that you grow to love and help you complete tasks quickly.

(Note that most other apps in OS X don’t sport all these behaviors. However, in some apps, such as iWork, you can option click inspector tabs to open up multiple inspector windows.)

Nov 19 2012


As Drupal is adopted by larger organizations, more administrators, editors, and content creators are working on large-scale Drupal sites. The need for an efficient and intuitive process for creating, editing, and publishing content has become increasingly important.

The Content Staging Initiative is an effort in the Drupal community to explore and build a system for content staging and management of publications on a Drupal site. This work has fueled new modules that enhance site preview capabilities, content “staging” for multiple publishing scenarios, and improved editorial workflow tools.

To illustrate how these tools can be used for publishing, let’s look at how a prominent news site, covering an international soccer game like the Spain vs. Netherlands match in the 2010 World Cup, might use content staging.

Leading up to the game, content administrators would add news articles, images, and editorials about each team. To prepare for either outcome, editors and reporters would create full content for two scenarios: one in which Spain wins and one in which the Netherlands wins.

Netherlands Win Scenario

Content editors can effectively create two versions of the same home page for either scenario -- with content and layout revisions for each. Each scenario is classified in a “collection”, and each node of content (an article, image, or editorial) would be tagged for the Spain or Netherlands Collection. Editors can then use the site preview system to approve how content and layout looks on the site before it is published in either scenario. The Interactive Information Bar Module is used to alert the editor that he/she is in preview mode.

With the Content Staging Modules installed, the editor can review these articles using the Real-Site Preview Module. This module allows editors to see how the site will look with either “collection” of content in place.

When the game is officially over, editors know that Spain has won and can act quickly to enable the Spain collection of content. Using the content revisions tab, the editor can quickly filter for all articles, images, and layouts edited for a Spanish win, including the revised homepage block and layout prepared for this scenario.

Then, after making final copy edits and a last look-over, they use the Node Revision Administration (NRA) Module to select and publish the Spanish win coverage in bulk. Instantly the news site is completely up-to-date with the latest breaking World Cup coverage. Images and layout related to Spain's victory are perfectly configured, since the editorial setup can be planned far in advance. This puts the editor in the driver's seat to execute a perfectly up-to-date site, ahead of the competition.

Content staging will help digital workflow in many important ways. This system allows an organization to look at content at any stage of revision and see what it looks like in the context of the entire site before the content is published live. Content staging can be used for:

  • Performing real site previews of content with certain parameters before the content is published
  • Approving a landing page layout before it goes live
  • Looking at a piece of content at any point in history or in the future, and seeing how it looks with affected changes
  • Publishing revisions in bulk

Efficient content staging and workflow is crucial for larger organizations that have several people working on content publication and need to curate and publish their content very quickly, according to unfolding news, rapidly updating events, and even sports wins and losses.

The Content Staging Initiative is an exciting development in Drupal digital workflow. Currently, the CSI team, made up of developers from around the Drupal community, is working on improving workflow and revisions to support non-node entities in collection build operations. Stay up-to-date on the Content Staging Initiative by following the CSI Drupal Group announcements.

Nov 15 2012


With the proper configuration and server layout, Drupal can be a scalable platform that handles high traffic and serves large volumes of data with ease. The large number of themes and modules in the community makes site development extremely easy and valuable, in the sense that you can develop a large set of features in a short period of time without too many hours burned on coding everything from scratch.

Knowing this, it’s tempting to simply install everything within your site.  However, if you are going to maintain a high-traffic website, it might be a good idea to offload some of these features elsewhere.

Plan and determine which features of your site should be maintained within your installation and which should be hosted from external sources. I’m not saying you should start copying and pasting embed codes from third-party websites or hosting all subsets of data and media files on separate servers. Rather, I recommend entrusting third-party services with certain features (especially if they can do it better than what exists in the module space) to minimize risk and increase performance.

There are plenty of modules that can help you with this. For example, rather than use Drupal’s internal commenting module, you can rely on Facebook or Disqus to handle this for you. Google Analytics is another way to analyze your site’s traffic externally. I am also fond of modules such as PayPal and Intuit Merchant Services, which handle credit card payments elegantly without the hassle of worrying about security right off the bat (assuming you configure your SSL certificates correctly).

Sharing (work) is caring!

So if, for example, all the comments on your site take up half the server space, wouldn’t it be nice to have someone else take care of that so you don’t have to buy more server space? You might be better off outsourcing that functionality so that you can focus on the features unique to your site.

I would suggest this approach for many reasons:

  • Reliable support for the application by a team of dedicated maintainers focused on one (or a few) functionalities

  • You can switch these features out with careful modularization

  • Simplifies your site’s architecture…and your job

  • More features outside Drupal’s framework

  • The features are readily available (in most cases)

  • More cost-effective: external services price their offerings to share resources across multiple client websites, so it’s kind of like running your website on several servers at a fraction of the cost

But do you trust other people with your information?

You have to consider the potential repercussions of taking this route. If you do decide to go this way, you should ask the following questions before you make the big move.

  • Is the service reliable?  If the service goes down, will my site go into utter chaos?

  • Is the data exportable?  If I build a new version of the site later on or if I want to use a different solution later, will it be easy to migrate it?

  • Does it have all the features I need and (a lot) more?  Can I trust that I won’t have to move to another solution 6 months from now?

  • Do I really want to own the data?  Will it be difficult to customize it however I want for my own needs?

  • Do I want to be liable for protecting the credit card numbers and other sensitive information in my database?

  • What do the terms of service say? Do you truly own the data, or can they use it for certain purposes?

  • Do your users trust whoever is handling the data outside the site? Will it tarnish your site’s reputation if you integrate their service?

You shouldn’t take some of these questions lightly.   Using a credit card processor with a history of data leakage or advertisement software that serves annoying and potentially malicious advertisements can ruin your site’s reputation.  It would be wise to do your research on anything you do externally before you implement it on your site.

Couldn’t I just create these features myself on my own separate servers?

If you have the funding and bandwidth to do it, go for it! Many opt to use the Apache Solr module to delegate search functionality away from the main site’s server to achieve this. This approach is common.

Doing this requires more manpower.  Even if the features are developed externally, you still need to upgrade them internally.  Features maintained externally typically have the advantage of acquiring upgrades without a need to do anything on your end (that’s the beauty of webapps!).  Going this route, you should always subscribe to their blog to make sure you're aware of any changes they make on their end.

Note: At this point, you might think doing all this implies laziness. Rest assured, there is nothing wrong with good laziness, as long as you achieve the goal at hand. When the job is done, the amount of work done on your end is trivial, as long as you have the research and documentation to handle any future tasks proactively (i.e., be aware of any caveats and risks when using a service, and keep note of any muddy scenarios not covered by their guarantees).

In conclusion…

Remember a decade ago, when things rarely ran on APIs and most site maintainers had to copy and paste embed codes to achieve the same thing (not to mention the site’s architecture growing bloated and messy after a while)? A lot of the code back then didn’t really fit everyone’s needs, so you ended up with duplicate codebases across sites that went stale months after they were implemented.

Consider it a blessing that we can now weave web services into existing sites far more easily, tapping into polished features without breaking a sweat!

As a Phase2 developer, Peter Cho strives to create elegant and efficient solutions to provide a positive user experience for clients.

Prior to joining Phase2, Peter worked for the Digital Forensics Center developing marketing and web ...

Nov 13 2012
Nov 13

Posted Nov 13, 2012 // 2 comments

Theming in Drupal has been complicated and difficult, particularly when approaching the problem of websites with multiple layouts.  We've all seen sites with dozens of tpls, with code written into templates, with mile-long preprocess functions covering multiple possibilities… It can be messy!

But it doesn't have to be this way. The addition of just a few modules, and a solid base theme, allow a site's layout to be configured without writing a single line of code. This was the topic of Omega: From Download to Layout in 45 Minutes, my presentation at BADCamp. This post will cover just one part of that, creating a layout for one of the wireframes without writing a single line of code.

Dev Site Setup

We'll make the assumption that you have a dev site set up, with the appropriate content types and some demo content to support this site, along with a few views and utility blocks.  You have also installed the Omega base theme and created a new sub-theme, titled Epsilon.

Wireframes

The site is laid out using 960gs, a popular grid system. For more information on 960gs, visit their site.
Here's a view of the section/content list page wireframe.

Layout

An alternate view of this layout clearly shows the column widths used for layout on the grid.

The core of layouts in Omega lies in the Zone and region configuration tab. Regions should be familiar to us from the block admin - they've been around for a couple of Drupal versions.  Regions are where you put your content and blocks. But what are Zones and Sections?

Think of Sections and Zones as progressively larger containers, arranged in a hierarchy.  Sections are the largest, and can have one or more zones inside them. Zones are next, and can have one or more regions inside them. Finally come regions; this is where the grid is really laid out, as Omega makes them quite easy to size by columns. These containers serve to wrap and contain your HTML, allowing for common styling and easy layout choices.

In this third view of our wireframe, we clearly see three different sections.

Header Section

Let's start by looking at the header section in detail.  The wireframe really only contains one zone, with two regions: logo and menu. Opening the Header Section configuration menu in Omega, on the Zone and region configuration tab, we see that this section actually has 4 zones, with a total of 6 regions in it – clearly too many for what we need, but a nice example of how versatile Omega is.

Omega allows us to move zones from one section to another, or even to disable them altogether.  We do this by opening each zone's fieldset, then the configuration fieldset within that, and setting Section to – None –.

In this case, we'll start by doing that for User Zone and Header Zone.

Next, we set the width of the Branding Region in the Branding Zone to 4 columns.  This is where we will put our logo, which if you recall was 4 columns wide.

After that, we go to the Menu Region in the Menu Zone, and set the width to 8 columns, as laid out in our wireframes. Weighting in Omega works just like it does anywhere else in Drupal – the higher the weight, the later the item renders in the process, so we set the weight to 5 to push the Menu Region after the Branding Region. And, again, Omega allows us to move a region from one zone to another, so we move the Menu Region to the Branding Zone. This will stack the Branding and Menu regions horizontally without any additional CSS.  Doing this leaves the Menu Zone without any regions, so as a last housekeeping item we set the Section for Menu Zone to – None –.

Content Section

The Content Section comes with three zones by default.  I'm going to leave the Preface and Postscript Zones alone for now, and work in the Content Zone of the Content Section.  Yes, there's a Content Section, and a Content Zone, and guess what?  There's a Content Region, too.  Naming things in Drupal is not an exact science.

We're going to focus our attention for now on the Content Zone.

In the Sidebar First region, I'm going to set the width to 4 columns, and set the weight to 5.  Again, a higher weight will push the item later in the rendering process - in this case, after the Content Region.

I'll set the width of the Content Region to 8 columns, and leave the weight alone.

Finally, I'll set the Zone of the Sidebar Second region to – None –, which will remove it from the Content Zone altogether.

Lastly, in the Footer Zone of the Footer Section, I'm going to set the width of the Footer First region to 8 columns, and the width of the Footer Second region to 4 columns. Since these two regions are already in the same zone, they will line up horizontally automatically, so there's no need to move them around from one region to another.

Save and view the page. Here's what a default Omega sub-theme looks like, out of the box.

And here's what Epsilon looks like now.

All that – and no coding!

Delta and Context

The problem of applying different theme settings – layouts – to different pages remains, of course.  That's where the Delta and Context modules come in.  For a more thorough explanation of how they work, see the slides from the full presentation on our Slideshare.

Thanks!

Senior Developer Joshua Turton brings a wide variety of skills to his role at Phase2. A programmer comfortable working on both front-end and server-side technologies, he also brings a strong visual sensibility to his work. More than nine years ...

Nov 01 2012
Nov 01

Posted Nov 1, 2012 // 21 comments

A Renewed Focus

Open Atrium has long been the go-to "Intranet in a box" distribution for Drupal 6.  Obviously, Drupal 6 is nearing its official "end of life" and something needed to be done with Open Atrium.  The question for the past year has been: "What exactly should be done?"  Should we just port the existing modules to D7?  Should we leap-frog to Drupal 8?  What about new functionality?  What about improving existing functionality?  How do we make Open Atrium more accessible to more types of users?  How do we take advantage of new Drupal 7 components?

We have spent the past year collecting feedback and suggestions from existing and potential Open Atrium clients regarding these questions.  Since DrupalCon Munich we have developed a detailed plan for moving forward with Open Atrium in Drupal 7, have put together a new internal project team, and have finally started development!

I'll be posting more technical details about the new Drupal 7 architecture for Open Atrium 2.0 to the community.openatrium.com site soon and will also be presenting a BoF session at BADCamp 2012 this weekend.  But here are some teasers on what we've been doing and what is coming.

A New Project Team

Like most firms, Phase2 Technology makes its business doing paid client projects.  Performing Drupal module maintenance and building Drupal distributions is something we try to make as much time for as possible to help support the Drupal community.  Fitting this in between projects works most of the time, but large projects, such as a full Drupal 7 rewrite of Open Atrium, are more difficult and require a greater commitment.  To better achieve this goal, we have put together a full internal project team for Open Atrium 2.0 (OA2) and are treating it the same as a regular client project.  We are actively seeking sponsorship for this work, but are also putting a large amount of Phase2 resources into this project.  In fact, Phase2 will be matching any external sponsorship dollar-for-dollar, so please email us if you are interested in funding some of this work.  With this new project team in place, and myself as the new technical lead for the project, you will now see increased momentum behind Open Atrium 2.0.

Open Atrium 2.0 Architecture

When facing a large site migration from Drupal 6 to Drupal 7, it's usually best to use this as an opportunity for improvements to the architecture of a site.  Drupal 7 brings many new core concepts to a project, including Entities, improved node access control, etc.  For Open Atrium, a new architecture was required to take full advantage of these new concepts.

The core concepts of the Open Atrium 2 architecture are:

  • Flexible and modular feature components (Apps)
  • Flexible layout customization (Panels and Panopoly)
  • Mobile-friendly, responsive base theme (Zen or AdaptiveTheme)
  • Improved user, group, section, team permission system
  • Customizable notification and subscription system
  • Plugin API based upon Entities
  • Available as a distribution, or just a set of Modules

More than just an "Intranet in a box", Open Atrium 2 will be a Framework for integrating your existing systems into an intranet.  OA2 will come with very simple and basic "features" for collaborative discussion, file organization, events, tasks, etc.  However, in each case the "simple and basic" App can be removed and replaced by an App with a high level of integration with existing best-in-class business systems.

New Features

Open Atrium 2 adds several new core concepts that make it even more flexible for your organization.  First, rather than seeing a set of "tabs" for each App (Discussion, Files, Calendar, etc.) a Group Owner can create custom "Section" pages and place any combination of available "widgets" on each page.  You can use the default section page layouts to recreate the OA 1.x tab pages, or you can creatively combine various elements into your own section page layouts.  For example, you can create multiple different Discussion areas within your group, or combine a view of Tasks with your Event Calendar on the same section page.

You can also assign "Teams" to a "Section".  A "Team" is a collection of Users, similar to Members of a Group.  A Team might indicate a user's Organization, or Department, or any other collection.  Each User can be a member of multiple Teams.  When Teams are assigned to a Section, only users who are both a Member of the Group and a Member of an assigned Team can access the Section.  This allows a Group Owner to create sections within their group with different Team access, such as Private section pages.  For example, Mary might be a member of the NewWebSite Group and might be assigned to the ProjectManagers Team.  The Group Owner could create a new section within the NewWebSite group and assign the ProjectManagers team to it.  Mary would be able to read and post content to this new section, but Bob, who is a member of the NewWebSite Group but not in ProjectManagers, would not be able to view the section.

Beyond some of these new concepts within the core of OA2, the real power of OA2 comes from its modular App design.  In Open Atrium 1.x it was possible to create a new plugin Feature.  But the plugin architecture was poorly documented and difficult.  In addition, the theme used for Open Atrium made it difficult to customize for many customers.  In January, we will release the new Community Plugin Toolkit which will contain documentation and examples for creating Open Atrium 2 plugin Apps. This Toolkit should make it much easier for the community to contribute new Apps for OA2.

Moving Forward

In addition to releasing the Community Plugin Toolkit in January, the first Alpha version of Open Atrium 2 is planned for Spring 2013, with the initial Beta version to be released in time for DrupalCon Portland in May 2013.  If you are interested in helping sponsor work on Open Atrium 2, or wish to be involved early in building OA2 Plugin Apps, please contact us directly via email.  I am very excited about the future of Open Atrium 2 and look forward to delivering the Intranet product and framework that you have been dreaming about and waiting for.

Mike Potter is a Team Architect at Phase2 who loves to pair a solid technical solution with an intuitive client-focused design. Mike started his career as an experimental neutrino particle physicist before creating the first WWW home page for ...

Oct 29 2012
Oct 29

Posted Oct 29, 2012 // 0 comments

Here at Phase2, we are very excited for the long-anticipated BADCamp 2012 (Bay Area Drupal Camp) to kick off in 2 days!  With even more attendees than last year, this event promises to be an epic Drupal-tastic adventure! As a proud contributing sponsor and Product Summit sponsor, we are looking forward to seeing our old Drupal friends and meeting new ones around the camp and at our booth (so don’t forget to stop by!).

BADCamp is one of the largest free Drupal events in North America and is therefore a testament to the Drupal community’s devotion to accessibility and contributing back.  We know Drupal camps are integral to the growth and innovation of Drupal, and that’s why we are committed to participating with sponsorships, volunteer organizers, and submitted sessions.   This time around we are shipping out a troop of some of Phase2’s finest! Here’s where you can find us:

Let’s Talk...

Business

Products:

Community:

Mapping:

Government in Drupal:

Theming:

Deployment and Infrastructure:

Content Management and Versioning:

We are looking forward to sharing our ideas but we are also excited to learn from everyone else! Check out the complete lineup of awesome sessions at BADCamp. We’ll see you there!

As marketing coordinator at Phase2, Annie is involved in the open source events in the New York metro area. She really enjoys engaging the lovely Drupal Community and promoting Phase2 to prospects, clients, staff members, and the greater ...

Oct 25 2012
Oct 25

Posted Oct 25, 2012 // 2 comments

 Earlier this year at NYC Camp and DrupalDay ATX, I got to talk about my favorite subject: products and distributions in Drupal. And at the upcoming BADCamp Product Summit on November 2, I'm going to get to share ideas and learn from the top firms building products and distributions in Drupal. 

Products and distributions occupy a really interesting space in the Drupalsphere. After seeing Drupal's growth into a mature framework and platform, the natural next question is: "Where can we take this now?" Can Drupal be used to power stand-alone products? Can it be the basis of SaaS-hosted site building platforms? Are distributions best suited as tool kits for developers? Or should they be used to build "site in a box" solutions that reach a larger market of site builders? These are the questions our own product teams grapple with regularly, and that drive our work on our own distributions. 

But Drupal (and open source, generally) presents a second, perhaps even more important question. Beyond "what CAN we build?" lies the question "How do we take Drupal products to market, the Drupal way?" How do we make sure we aren't sacrificing community goodwill, collaboration and contribution, and general open sourceyness, while still deploying the business models that will sustain these "Drupal products" and their further development? 

To address the multiple decision points and questions of when, how, and sometimes, what to build when considering a distribution, OpenPublic technical lead (and dear friend) Erik Summerfield and I built a decision tree, tracking the many motivations, aspirations, and assumptions that we have seen in building distributions in Drupal. Our disclaimer to this decision model, which we gave in New York and Austin: none of this is meant to say that at Phase2, we've avoided every pitfall and done it perfectly. Quite the contrary. This is the result of early missteps, some challenging internal conversations, some hard decisions, a little well-earned community criticism, and ultimately, a lot of lessons learned. We share it not to say "we know how to do this" but to say "we're all navigating this together -- let's do it smarter."

Without further ado, here's our decision tree. (Oh, and if you'd like to "zoom" through it, feel free to click through the Prezi version online.)

We started with the question "why" because so often, when we talk to people about using or contributing to distributions, the first thing we hear is "oh, we're thinking about building a Drupal distribution." And our natural next question is "Why?" Distributions are expensive to build, more expensive to maintain, and can be community suicide if handled poorly or neglected in the long run. Knowing up front whether you have a good reason to build a distro -- be it to build your brand and reputation, to create new offerings or business models, or to have a "base" for future builds -- is vital to what you build and how you build it.

Once you know why you're building a distribution, there are a few decisions that might help to ensure that you're preparing for your distro to be a useful, sustainable, and well-supported product in our community. Checking to be sure that your team has built sites that solve the problem you're trying to solve and that you have people on your team who know Drupal, understand what it's like to contribute to Drupal, and know how to navigate the community can save a lot of time and energy down the road. 

Finally, thinking ahead of time about your distribution's total cost of ownership can mean the difference between a successful experience and a distro disaster. Thinking about what it will cost to build, maintain, document, support, train, and market your distribution, and where you'll find the revenue models to support that cost, is absolutely key. 

It's not an easy set of questions -- but when we're trying to find out "where can we take Drupal now?", it's vital to ask ourselves the hard questions (and a lot of them) in order to hold our work to the highest standard. If you want to jump into these questions (and more), I hope you'll join us at the product summit next week. We're excited to join the discussion among awesome, product-minded companies like Commerce Guys, Pantheon, Acquia, ThinkShout, Volacci, Gorton Studios, and Aten Design Group.

As a product director with us, Karen Borchert keeps Phase2 growing each day; she focuses on the business strategy for our products, including OpenPublish and OpenPublic.

Thanks to her deep background in product strategy, Karen can ...

Oct 24 2012
Oct 24

Posted Oct 24, 2012 // 6 comments

We've recently released another beta (beta5) of OpenPublish. We've updated various modules and provided some minor bug fixes. The main addition in this release is bulk upload functionality for the OpenPublish Photo Gallery. This allows content creators and editors to upload many photos quickly and easily into the field collection which makes up the gallery. It works very similarly to the filefield_sources_plupload module (and other plupload implementations): it provides a drag-and-drop interface and multiple file select, along with concurrent uploads and upload progress. See the embedded video for a very quick demo of how it works. We are still working to improve the gallery and image support in OpenPublish in a robust, predictable way that reflects the structure and usage of the data, and that is extensible and works consistently.

The gallery bulk upload is a feature that was requested in the issue queue. As we continue to improve and stabilize OpenPublish, community participation becomes increasingly important. Understanding what features are desired, what is working and what isn't, and how OpenPublish can be improved, along with the reporting of bugs and any code contributions, drives OpenPublish forward. At Phase2 we have experience from our clients about what publishers want and how they use Drupal. We take this knowledge and use it to construct the base of a site, something that has the basic tools we would use to build a publishing site. However, we often don't know exactly what others in the community use to build sites, what requirements they or their clients have, and what value various clients and users ascribe to a given feature. Having more use cases to base OpenPublish on will help us improve the overall quality of OpenPublish and make it more usable for the community as a whole.

Now obviously we can't implement every feature, find every bug, or accommodate every use case. However, the more feedback, bug reports, patches, and community support we receive, the more bandwidth we have to work with. Please continue to ask questions, post bugs, and contribute in any way you can in the Drupal.org issue queues. Keeping all bug reports and support requests there lets others benefit from the answers, known bugs, and available patches and fixes.

Senior Developer Josh Caldwell specializes in pairing a beautiful, intuitive interface with a solid, well-designed backend product. His ability to identify open-source solutions and front-end capabilities allows him to write impressive, ...

Oct 23 2012
Oct 23

Posted Oct 23, 2012 // 0 comments

To better fulfill its mission to increase disaster preparedness in the U.S., the Department of Homeland Security (DHS) decided to migrate its public-facing website from a proprietary CMS (TeamSite) onto a new, modernized, open source platform.  DHS selected Drupal as this new CMS platform, and the OpenPublic distribution was an important part of standardizing the agency’s migration to Drupal. The new site launched in August 2012 and took roughly four months to design, develop, and deploy.

The new DHS.gov site is a huge success for DHS, and it's another big milestone for open source in government.  One of the main reasons for the site’s success is that it leveraged the existing FEMA.gov platform (which was developed on OpenPublic), giving it a jump start toward a finished product.  DHS.gov is a case study for the “Shared Platform” approach, as outlined in the Digital Government Strategy (and embodied in earlier materials, such as the “Shared First” principles).

The DHS.gov Requirements

Above all, DHS wanted a revamped website that improved its web content operations by providing the following:

  • Improved administrative interface
  • Upgraded technology
  • Enhanced functionality

Also, a goal of this project was to extend the existing FEMA.gov platform to better enable the various DHS subcomponents (e.g., Immigration and Customs Enforcement (ICE), Customs and Border Protection (CBP), and the Transportation Security Administration (TSA)) to move onto the Drupal platform. This is a great example of promoting shared solutions throughout an agency in a very concrete way. With this model, subcomponents that migrate onto the platform will promote an agency-wide web content “infrastructure” that is more easily and efficiently maintained.

Like all government sites, DHS.gov had to meet Section 508 accessibility requirements, FISMA security requirements, and cross-browser compatibility standards.

The Solution

DHS selected Drupal, and they also selected the OpenPublic distribution since it provides enhanced out-of-the-box features that would benefit DHS right out of the gate: for example, a slideshow and carousel rotator, a streamlined administrative dashboard, and flexible customization. OpenPublic also provided important head starts on the security and accessibility features that were requirements for the DHS.gov site.  Moreover, since OpenPublic is built on the Omega base theme, DHS can easily adopt responsive design and make its content much more mobile friendly.

On the public-facing side, the topic-based taxonomy provides an easy way to navigate site content and enables users to view pertinent related content. On the administrative side, DHS.gov uses the Drupal Workbench module to handle content administration workflows. With many users and roles, Workbench allows users to quickly move content through the workflow, and editors and publishers are easily notified when content is ready for their review.  Another distinguishing element of this site is the flexibility in creating "sidebar" content for article pages.  A user can easily cross-reference content using node references, create external links, or add their own content to build a useful sidebar for articles featured on the site.

OpenPublic is built and maintained with an eye on federal government web regulations - specifically the requirements surrounding the Open Gov initiative and the Federal Cloud Computing Initiative.  Since DHS.gov is built on OpenPublic, it provides the security, accessibility, and usability that federal government sites require.  And, because Drupal is an open source platform, it saves the federal government from having to pay annual licensing fees for expensive, proprietary CMSes.

It was a pleasure working with the DHS team. DHS’s adoption of Drupal and OpenPublic is representative of innovative government in the face of constrained resources.

As Phase2’s Federal Practice Manager, Greg Wilson is responsible for the success and direction of the company’s support to federal government clients. In this role, he provides guidance regarding Phase2’s role in helping to ...

Oct 22 2012
Oct 22

Posted Oct 22, 2012 // 0 comments

Some time back, I wrote a blog post entitled Compound fields in Drupal 7 where I discussed how to create custom fields for your content types using Field API. Since then, I've often been asked how to create custom fields for use in forms (using Form API), outside of using those fields in entities such as nodes and taxonomy terms.

Assuming you've read that blog post, I'll now show you how to extend an existing custom field into a form element that you can use in any FAPI form.

Hooks

To extend your field, you'll need to add some additional hooks to your module. It's important to note that you can extend any field that exists in Drupal into a form element, not just fields defined by your module.

hook_element_info -- This hook tells Drupal about your form element.

function dnd_fields_element_info() {
  return array(
    'dnd_fields_attribute' => array(
      '#input' => TRUE,
      '#tree' => TRUE,
      '#process' => array('dnd_fields_attribute_process'),
      '#element_validate' => array('dnd_fields_element_validate'),
      '#theme' => array('dnd_fields_element'),
      '#theme_wrappers' => array('form_element'),
    ),
  );
}
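One step this post doesn't show: the '#theme' callback named above must also be registered with the theme registry via hook_theme(). Here is a minimal sketch of what that registration might look like for this module; the array below is an assumption, not code from the original module:

function dnd_fields_theme() {
  return array(
    // Register the theme function referenced by '#theme' above.
    // 'render element' => 'element' tells Drupal to pass the form
    // element to theme_dnd_fields_element() as $variables['element'].
    'dnd_fields_element' => array(
      'render element' => 'element',
    ),
  );
}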

Process Functions

In hook_element_info, we named a processor function that will define how our new form element should be processed.

In this example, we are "faking" a field instance so that we can reuse the field widget defined in hook_field_widget_form (dnd_fields_field_widget_form). If you prefer, skip to the next code sample to create a standalone element processor function that doesn't depend on the requirements of this Field API hook.

function dnd_fields_attribute_process($element, &$form_state) {
  // Create a dummy field instance just to get the same output from our existing field widget
  $instance = array(
    'field_name' => 'dnd_fields_ability',
    'settings' => array(),
    'widget' => array(
      'type' => 'dnd_fields_ability',
    ),
  );
 
  $form = array();
  $field = array(
    'settings' => array(
      'abilities' => array(),
    ),
  );
  $langcode = LANGUAGE_NONE;
  $items = array();
  $delta = 0;
  $form_state['field'] = array(
    'dnd_fields_ability' => array(
      $langcode => array(
        'field' => array(
          'type' => 'dnd_fields_ability',
          'cardinality' => 6,
          'settings' => array(),
        ),
      ),
    ),
  );
 
  $element = dnd_fields_field_widget_form($form, $form_state, $field, $instance, $langcode, $items, $delta, $element);
  return $element;
}

If you are creating a new form element that does not come from an existing field, you will want this instead:

function dnd_fields_attribute_process($element, &$form_state) {
  $fields = array(
    'ability' => t('Ability'),
    'score' => t('Score'),
    'mod' => t('Modifier'),
    'tempscore' => t('Temp score'),
    'tempmod' => t('Temp modifier'),
  );
 
  foreach ($fields as $key => $label) {
    $element[$key] = array(
      '#attributes' => array('class' => array('edit-dnd-fields-ability'), 'title' => ''),
      '#type' => 'textfield',
      '#size' => 3,
      '#maxlength' => 3,
      '#title' => $label,
      '#default_value' => NULL,
      '#attached' => array(
        'css' => array(drupal_get_path('module', 'dnd_fields') . '/dnd_fields.3x.css'),
        'js' => array(drupal_get_path('module', 'dnd_fields') . '/dnd_fields.3x.js'),
        ),
      '#prefix' => '<div class="dnd-fields-ability-field dnd-fields-ability-' . $key . '-field">',
      '#suffix' => '</div>',
    );
  }
 
  return $element;
}

Technically, we're done. You now have a form element that you can use in any Form API code. For example:

function dnd_character_test_form($form, &$form_state) {
  $form = array();
  $form['test'] = array(
    '#type' => 'dnd_fields_attribute',
    '#title' => t('Attribute fields'),
    '#description' => t('Provide a description here.'),
  );
  $form['submit'] = array(
    '#type' => 'submit', 
    '#value' => t('Submit'), 
  );
  return $form;
}
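Because hook_element_info() sets '#tree' => TRUE, the element's child values come back nested under its form key on submit. As a quick illustration, here is a hypothetical submit handler, assuming the standalone processor above with its 'ability' and 'score' children:

function dnd_character_test_form_submit($form, &$form_state) {
  // With '#tree' => TRUE, the child textfields are nested under 'test'.
  $ability = $form_state['values']['test']['ability'];
  $score = $form_state['values']['test']['score'];
  drupal_set_message(t('@ability has a score of @score.', array(
    '@ability' => $ability,
    '@score' => $score,
  )));
}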

Theming your Element

The cool thing with your new form element is that you can theme it, even though this is not strictly necessary, since the widget defined in the field will handle most of this for you. So feel free to skip this step if you're already happy with the Field API widget that you have.

The following code sample produces generic form output, since the field we defined is really just a special case of the 'textfield' form element type. However, feel free to get fancy here and output literally any HTML you want.

Also of note: this is just like any other theme function in Drupal. You can refine it further using preprocess functions and .tpl.php files. For this example, though, I'm going to keep it simple.

function theme_dnd_fields_element($variables) {
  $element = $variables['element'];
 
  $attributes = array();
  if (isset($element['#id'])) {
    $attributes['id'] = $element['#id'];
  }
  if (!empty($element['#attributes']['class'])) {
    $attributes['class'] = (array) $element['#attributes']['class'];
  }
  $attributes['class'][] = 'dnd-fields-ability';
 
  // This wrapper is required to apply JS behaviors and CSS styling.
  $output = '';
  $output .= '<div' . drupal_attributes($attributes) . '>';
  $output .= drupal_render_children($element);
  $output .= '</div>';
  return $output;
}

As one of our superb Team Architects, Tobby Hagler's expertise spans the gamut of technical capabilities -- from interface development to application layer engineering and system architecture.

Tobby’s specialties tend to focus on ...

Oct 15 2012
Oct 15

Posted Oct 15, 2012 // 8 comments

Out with the Old

The Drupal Features module has always had a quirky user interface. I'm talking about the "New Feature" or the "Recreate" screens. In those screens you see the name, description and other general information at the top. On the right is a summary of what components/items are exported to the feature. On the left there is a Components drop-down selector. To add a new item to the feature, you first select the Component from the drop-down (like Fields), and then you click the checkbox for the Item you wish to add (like a specific field name).

Each time you click a checkbox, an Ajax call is made to refresh the summary shown on the right. This adds the item you just selected to the summary, and also computes new "auto-detected" items that are also added. For example, when you select a Content Type, all of the Fields for that Content Type are automatically detected and added.

For years Drupal users have complained about this and suggested improvements. I've summarized the issues over the years into the following (probably incomplete) list:

  • Ajax callback whenever an item is clicked is slow and cumbersome. Makes it a pain to select multiple items.
  • Dropdown list of Components is poor. Prevents seeing more than one component at a time. New users don't initially know to select a component.
  • Summary of items included in the feature shown on the right mostly duplicates the information shown on the left by which checkboxes are enabled.
  • No way to filter items to just those that contain certain text.
  • No way to specifically exclude auto-detected items from being added to the feature.
  • Cannot distinguish between newly added items that will be exported from items that were already part of the feature export.
  • Options such as "URL of Update XML" are for advanced users and shouldn't be shown all the time.

The new 1.1 branch of Features attempts to fix or improve all of these issues.

In with the New

The new featuresUI-1.1 branch makes the following changes to the user interface:

  • The Ajax on the checkboxes has been completely removed. The Component drop-down menu has been replaced with a set of collapsible fieldsets, one per component, shown on the right. To add a new item to a feature, expand the fieldset for the component you want, and click the checkbox for the item you want. The item will be moved down to the "newly included" list of items to be exported.
  • If you add an item that adds dependencies, the auto-detected items will be added to the export (in blue italic). To prevent an auto-detected item from being exported, simply uncheck the box.
  • Newly added items are shown in bold with a gray background to distinguish from items already part of the export.
  • A Filter box is shown along the top. Enter text into this box to only show items that contain the filter text. Components with matching items will be expanded automatically. Clicking the Clear button will remove the filter and collapse all component sections.
  • Advanced options are collapsed into a new Advanced section on the left.
  • If JavaScript is disabled, a Refresh button will be displayed that can be used to manually refresh the list to show newly auto-detected items. When JavaScript is enabled, this button is hidden, since the auto-detected items are displayed via JavaScript.
  • Items that are auto-detected and then unchecked are saved to the export's *.info file as "features-excluded" items (see the sketch after this list). Having these items in the *.info file allows Features to keep them unchecked in the future so they are not automatically re-added.
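For illustration only, here is roughly what such entries can look like in an exported *.info file. Treat this as a sketch: the exact key names used by the featuresUI-1.1 branch may differ, so check a file the branch actually generates rather than trusting these lines.

features[field][] = node-article-body
features_exclude[field][] = node-article-field_image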

Give the new UI a test drive by checking out the featuresUI-1.1 branch:

git clone --branch featuresUI-1.1 [email protected]:project/features.git

If you have any trouble, it's easy to switch back to the 1.x branch. Let me know what you think, or feel free to post to the issue queue http://drupal.org/node/1810134
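Switching back is an ordinary git operation. Assuming the stable branch carries the project's usual 7.x-1.x name, something like:

git checkout 7.x-1.x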

Mike Potter is a Team Architect at Phase2 who loves to pair a solid technical solution with an intuitive client-focused design. Mike started his career as an experimental neutrino particle physicist before creating the first WWW home page for ...

Oct 10 2012
Oct 10

Posted Oct 10, 2012 // 4 comments

OpenPublic has gone through a fast Beta 9 (some build process problems) and a quick Beta 10 and 11 recently. Although these smaller, faster releases are great, the team wanted to take the time to reflect on the stretch between Beta 8 and 9, one of the longest periods between releases for OpenPublic. During that stretch, the distribution was under more active development than ever before. We still have work to do, but this latest iteration was focused on hard-fought improvements to make OpenPublic better at its core purpose: building public sector sites.

We and others have built many sites on OpenPublic. Although OpenPublic is a huge leg up on development and is designed to be a great working site right out of the box, balancing the priorities of being an out-of-the-box product and a reasonable starting point for development is difficult. For example, the feature that provides demo content also needs to be easy to disable. The building blocks for custom calendar functionality can't get in the way of people expecting all features to be complete (or get in the way of the developers who want to build an entirely different feature). In short, every convenience to a site builder is potentially a thing that can ruin a developer's day (or worse) and vice-versa.

For the Beta 11 release of OpenPublic we took a big step back and focused entirely on the process of building sites with the distribution (see the release notes for Beta 10). To do this, we set up project teams not just to build regular sites, but platforms for building many, many sites--including highly custom specialty sites. This included about 50 agency sites and the unique GeorgiaGov as well as builds off a shared platform for the Department of Homeland Security: Ready.gov, FEMA.gov, and DHS.gov. We used multiple teams, ensuring that not just Phase2 developers familiar with OpenPublic were doing the work. I lovingly refer to these sites as OpenPublic++; they started as OpenPublic, but are wholly specific to their organization's needs and took thousands of hours of design and development to be created.

The result of this process is the latest release of OpenPublic: It represents the feedback from those teams and those projects. There is always more work to do, but I'm proud of OpenPublic's continued focus and commitment to being one of the best Drupal Distributions to build a custom site with, while retaining that out-of-box-ness that works for so many sites. We intend to stay focused on site building and continue refining the build experience for OpenPublic. That said, we are putting some features on our road map for the distribution itself, not just through apps.

A quick caveat: a product roadmap for a distribution with dozens of contributed modules, a partner distribution, and a radically changing domain (the public sector) is important, but it is also constantly shifting sand: don't read a timeline or ordering into this.

Our recent projects convinced us that OpenPublic has some core needs we could safely address without bloating the distribution:

  • Better back-end asset handling for content creators: namely, making it easier to link (offsite), reference (onsite), or embed (either) one or many pieces of content anywhere
  • General structures (read “content types and field collections”) for grouping things: the public sector has lots of evergreen content, and creating groups or libraries of it--outside a menu--increases visitor discoverability and editor maintainability
  • More promotion opportunities for evergreen content: OpenPublic already has multiple rotators and Editor's Choice, but we've also got the core for boxes in the distribution, and we hope to show off how it can be used better

In addition, we've built a lot of custom functionality for sites using apps, and will continue to try to make marquee features optional for the distribution via that mechanism. We will not lose focus on keeping OpenPublic lean and mean, so anything we can move into an application, we will.

Of course, as OpenPublic is primarily for public sector sites, we'll keep working on increasing accessibility, building security, and fixing bugs. If you're interested in contributing, there are three things we can always use help with from the community after you download the distribution:

  • Patch a module: Whether it's a bug you fix or a feature you want, it helps the module maintainers and the distribution
  • Build a theme: Themes are important, and with OpenOmega in the distribution you have a great starting point to sub-theme from and contribute. If you have a good general theme based on Omega for OpenPublic, let us know; we'd love to help publicize it
  • Build an app: Many folks in the Drupal community have been using the app server the same as us, so it's a general skill, and we've been making many improvements to make it easier to build apps

Thank you, in advance, for helping make Drupal better and making OpenPublic a great resource for the public sector. OpenPublic has come a long way thanks to the community, and we have a bright future ahead of us.

As a solutions analyst with us, Shawn excels at understanding users and bringing their needs together with business goals. When working on a project, he works tirelessly to find an elegant, simple means by which people can get what they need ...

Oct 09 2012
Oct 09

Posted Oct 9, 2012 // 0 comments

The Georgia Technology Authority (GTA) has just completed its hefty migration of 52 agency sites from Vignette (versions 6 and 7) to a Drupal platform.  As this important project nears completion, it’s worth looking at the project’s impact on the State of Georgia, on Drupal, and on open government efforts everywhere.

GTA’s migration kicked off in September 2011 as one of the first and largest major Drupal projects for state government.  The successful migration of prominent federal agencies to Drupal (e.g., Whitehouse.gov, Energy.gov, DHS.gov, FCC.gov, FEMA.gov, and the House of Representatives) has broadened awareness of Drupal’s ability to handle prominent, large-scale migrations, and allowed GTA to build on those successes. GTA built a Drupal platform for state agency sites to migrate onto, and 52 agencies have made the transition so far. This precedent has drawn attention from state and local governments to the possibilities of Drupal.

The Georgia platform was built on OpenPublic, a Drupal distribution built specifically for the needs of federal, state, and local government sites. OpenPublic powers thousands of sites for the public and nonprofit sectors, but never before has it powered a platform as extensive as Georgia’s.  OpenPublic gave Georgia a “jumpstart” on the platform since it allowed the team to focus on critical customer requirements instead of the standard features which are typical “up front” requirements for most government sites. Also, the project’s success is attributable to close collaboration between government and vendor teams. The Georgia government development team was an integral part of this project and worked very closely with the vendor teams.

The platform leverages OpenPublic to put end users in the content development driver’s seat. State content administrators have already used their new Drupal platform to independently create four additional sites (bringing the total number of agency sites on the platform to... 52).  As with federal sites, Drupal provides GeorgiaGov content authors with a rich set of tools to publish content (e.g., video embeds, photo galleries, a Twitter pull module, and custom boxes).  These tools, along with the eight beautiful themes designed by the Phase2 design team (Samantha Warren and Dave Ruse), enabled GTA to create unique sites with distinct branding for each state agency.

Here’s a diagram which helps to depict visually the platform and its benefits.

Recently, we sat down with Nikhil Deshpande, director of Interactive Services at GTA, who will be presenting his experience with Drupal at the upcoming Atlanta Drupal Business Summit on October 26th, 2012. Here’s an excerpt:

“I’ve been talking to states and cities, (essentially local government). Almost everyone has a pretty similar structure, they are maintaining websites at an enterprise level, they have departments that are autonomous in the functions that they perform, but at the same time they are attached to each other when it comes to the overall governing body.  They like the way we have presented the platform and sites, from a user perspective, the platform is very intuitive.”

Can you expand on the importance of the user perspective?

“I’m seeing a shift where government is really starting to consider the user, it has taken some time to change people’s minds -- it’s not about the organization, it’s about the people looking for information from the organization.  Having the website reflect the organization, doesn’t really make sense, it’s really all about who you are serving.  So with this new approach, everyone is looking at our project as a case study and saying: this is where we want to be, how do we get there?”

That’s a great point, Nikhil; the GTA project is a great example of what Drupal is capable of.  Can you give us a taste of what you will be discussing at the Drupal Business Summit?

“At the Drupal Business Summit in Atlanta I’m going to talk about our experience, and tell the GeorgiaGov story. What I am seeing from other states and agencies is that they just want to know what the process was for us. So I’m going to talk about the process, it took us a long time to finalize the decision that we want to go ahead with Drupal.  Most of my discussion will be about how we arrived at where we are today, and I will highlight the steps to get there at a high level.”

Follow Nikhil on Twitter: @nikofthehill. Follow Georgia.gov: @georgiagov

You can catch Nikhil Deshpande speaking at the Drupal Business Summit in Atlanta, and hear more Drupal success stories the following day at Drupalcamp Atlanta. It was a pleasure working with Nikhil and the GTA team; now that Drupal has proven itself at the federal and state level, we are excited about the possibility of collaborating with more state and local governments in the future. Drupal and its community have a huge potential to impact open government efforts around the world and we see the GeorgiaGov platform as an important step in that direction!

As Phase2’s Federal Practice Manager, Greg Wilson is responsible for the success and direction of the company’s support to federal government clients. In this role, he provides guidance regarding Phase2’s role in helping to ...

Oct 08 2012
Oct 08

Posted Oct 8, 2012 // 0 comments

In the Drupal community, you see caching discussions related to pages, blocks, reverse-proxies, opcodes, and everything in between. These are often tied to render- and database-intensive optimizations to decrease the load on a server and increase throughput. However, there is another form of caching that can have a huge impact on your site’s performance – module level data caching. This article explores Drupal 7 core caching mechanisms that modules can take advantage of.

When?

Not all modules require data caching, and in some cases due to “real-time” requirements it might not be an option. However, here are some questions to ask yourself to determine if module-level data caching can help you out:

  • Does the module make queries to an external data provider (e.g. web service API) that returns large datasets?
  • If the module pulls data from an external source, is it a slow or unreliable connection?
  • If calling a web service, are there limits to the number of calls the module can make (hourly, daily, monthly, etc.)? Also, if it is a pay service, is it a variable cost based on number of calls?
  • Does the hosting provider have penalties for large amounts of inbound data?
  • Does the data my module handles require significant processing (e.g. heavy XML parsing)?
  • Is the data the module loads from an external source relatively stable, changing only infrequently?

If you answered “yes” to more than a third of the questions above, module-level data caching can probably help your module’s performance by providing the following benefits:

  • Decreased external bandwidth usage
  • Decreased page load times
  • Reduced load on the site’s server
  • More reliable data services

Where?

OK, so you’ve decided your module could probably benefit from some form of module-level data caching. The next thing to determine is where to store it. You can always use some form of file-based caching, but to implement that with the proper abstractions to run on a variety of servers requires calls through the Drupal core File APIs, which can be a bit convoluted at times. File-based caching mechanisms also cannot take advantage of scalable performance solutions like memcache or multiple database server configurations that might be changed at any time.

Luckily, Drupal core provides a cache mechanism available to any module using the cache_get and cache_set functions, fully documented on http://api.drupal.org:

<?php
cache_get($cid, $bin = 'cache')
cache_set($cid, $data, $bin = 'cache', $expire = CACHE_PERMANENT)
?>

By default, these functions work with the core cache bin called simply “cache.” This is Drupal core’s main dumping ground for data that can persist in the system beyond a single page request and is not tied to a session. However, many modules define their own cache bins so they can provide their own cache management processes. A few core module bins are:

  • cache_block
  • cache_field
  • cache_filter
  • cache_form
  • cache_menu
  • cache_page

Given that several core Drupal modules implement their own cache bins, the next questions for your new module are:

  • Does the module need to manage its cache in a manner that is not consistent with the main cache bin?
  • Will its cache need to be flushed independently of the main cache at any time, or have some other expiration logic assigned to it that falls outside of the core cron cache clear calls?

If the answer to either of these questions is, “yes,” then a dedicated cache bin is probably a wise idea.

Cache bin management is abstracted in the Drupal system via classes implementing DrupalCacheInterface. The core codebase provides a default database-driven cache mechanism via DrupalDatabaseCache that is used for any cache bin type that has not been overridden with a custom class (see the documentation on DrupalCacheInterface for details on how to do that) and has a table in the database named the same as the bin. This table conforms to the same schema as the core cache tables. For reference, this is the core cache table schema in MySQL that we will use as the base for our module’s cache bin:

+------------+--------------+------+-----+---------+-------+
| Field      | Type         | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+-------+
| cid        | varchar(255) | NO   | PRI |         |       |
| data       | longblob     | YES  |     | NULL    |       |
| expire     | int(11)      | NO   | MUL | 0       |       |
| created    | int(11)      | NO   |     | 0       |       |
| serialized | smallint(6)  | NO   |     | 0       |       |
+------------+--------------+------+-----+---------+-------+
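As an aside, pointing a bin at a non-default backend (per the DrupalCacheInterface documentation mentioned above) is done with a variable in settings.php. A minimal sketch, where the class name is hypothetical:

<?php
// In settings.php: serve our bin from a custom DrupalCacheInterface
// implementation (hypothetical class), leaving all other bins on the
// default database cache.
$conf['cache_class_cache_cachemod'] = 'MyCustomMemoryCache';
?>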

How?

For the sake of simplicity, we will assume that our module is fine with using the default cache mechanism and database schema. As an exercise, we will also assume that we meet the criteria for defining our own cache bin so we can explore all the hooks required to implement a complete custom bin leveraging the default cache implementation. The sample module is called cachemod, and the cache bin name is cache_cachemod.

Define the cache bin schema

In order to add a table with the correct schema to the system, we borrow from some code found in the block module that copies the schema from the core cache table and add this to our install hooks in cachemod.install:

<?php
/**
 * Implements hook_schema().
 */
function cachemod_schema() {
  // Create new cache table using the core cache schema.
  $schema['cache_cachemod'] = drupal_get_schema_unprocessed('system', 'cache');
  $schema['cache_cachemod']['description'] = 'Cache bin for the cachemod module';
  return $schema;
}
?>

Now that we have defined a table for our cache bin that replicates the schema of the core cache table, we can make basic set and get calls using the following:

<?php
cache_get($cid, 'cache_cachemod');
cache_set($cid, $data, 'cache_cachemod');
?>

Using our new cache bin

Notice the CID (cache ID) parameter. This will need to be unique to the data being stored, so in the case of something like a web service, the CID might be built from the arguments being passed to the service and the data will be the returned data. One way to abstract this so you get consistent CID values for calls to cache_get and cache_set is to build a helper function. This sample assumes our service call takes an array of key-value pairs:

<?php
/**
 * Util function to generate cid from service call args.
 */
function _cachemod_cid($args) {
  // Make sure we have a valid set of args.
  if (empty($args)) {
    return NULL;
  }
  // Make sure we are consistently operating on an array.
  if (!is_array($args)) {
    $args = array($args);
  }
  // Sort the array by key, serialize it, and calc the hash.
  ksort($args);
  $cid = md5(serialize($args));
  return $cid;
}
?>

Now we can implement a basic public web service function leveraging our cache like this:

<?php
/**
 * Public function to execute web service call.
 */
function cachemod_call($args) {
  // Create our cid from args.
  $cid = _cachemod_cid($args);
  // See if we have cached data already; cache_get() returns an object
  // with the stored value in its ->data property.
  $cache = cache_get($cid, 'cache_cachemod');
  if ($cache) {
    return $cache->data;
  }
  // No such luck, go try to pull it from the web service.
  $data = _cachemod_call_service($args);
  if ($data) {
    // Great, we have data!  Store it off in the cache.
    cache_set($cid, $data, 'cache_cachemod');
  }
  return $data;
}
?>

Note that there are several values for the optional expire parameter to the cache_set call that are fully documented in the API docs.
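For quick reference, here is a short sketch of the common expire values, using our bin:

<?php
// Keep until explicitly cleared (the default).
cache_set($cid, $data, 'cache_cachemod', CACHE_PERMANENT);
// Clear on the next general cache wipe (e.g. during cron).
cache_set($cid, $data, 'cache_cachemod', CACHE_TEMPORARY);
// Expire one hour from now (a Unix timestamp).
cache_set($cid, $data, 'cache_cachemod', REQUEST_TIME + 3600);
?>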

Hooking into the core cache management system

If you want your module’s cache bin to clear out when Drupal executes a cache wipe during cron runs or a general cache_clear_all, set the expire parameter in your cache_set call above to either CACHE_TEMPORARY or a Unix timestamp to expire after, and add the following hook to your module:

<?php
/**
 * Implements hook_flush_caches().
 */
function cachemod_flush_caches() {
  $bins = array('cache_cachemod');
  return $bins;
}
?>

This will add your cache bin to the list of bins that Drupal’s cron task will empty.

Additionally, if you would like to add your cache bin to the list of caches that drush can selectively clear, add the following to your module in a file named cachemod.drush.inc:

<?php
/**
 * Implements hook_drush_cache_clear().
 */
function cachemod_drush_cache_clear(&$types) {
  $types['cachemod'] = '_cachemod_cache_clear';
}

/**
 * Util function to clear the cachemod bin.
 */
function _cachemod_cache_clear() {
  cache_clear_all('*', 'cache_cachemod', TRUE);
}
?>

Note that if you set the expiration of the cache item to CACHE_PERMANENT (the default), only an explicit call to cache_clear_all with the item’s CID will remove it from the cache.

Conclusion

Sometimes it makes sense to have a module cache data for its own use, and even possibly in its own cache bin to maintain a finer-grained control of the data and cache management if something beyond the core cache management is required. Utilizing the cache abstraction built into Drupal 7 core and some custom classes, hooks, and drush callbacks can give your module a range of options for reducing data calls, processing overhead, and bandwidth consumption. For more detailed info, check out the API pages at http://api.drupal.org for the functions, classes and hooks mentioned above.

As a Senior Developer at Phase2, Robert Bates is able to pursue his interests in solving complex multi-tier integration challenges with elegant solutions. He has experience not only in traditional web programming languages such as PHP and ...

Oct 02 2012
Oct 02

Posted Oct 2, 2012 // 2 comments

While working on the SPS module, we became very well acquainted with Drupal's unit test class. I will have a follow-up post on the lessons we learned about writing tests, but here I will outline the basic steps for adding unit tests to a module. It is worth noting that I will be talking about the Drupal unit test (DrupalUnitTestCase), not the Drupal web test.

Step 1 - Tell Drupal about your tests

Drupal will look for classes that define tests, so all we need to do is make sure that those classes can be found. To do this, we just add a files[] line to the .info file so that Drupal knows which files to investigate for test classes.

MODULE.info

  name = MODULE
  core = 7.x
  files[] = tests/*.test

Now we can put all of our tests into the module's tests folder, as long as the file names end in .test.

Step 2 - Test Base Class

While a base class is not needed, it can be very helpful, as there are things (such as enabling your module) that can be done in a setup method.
tests/MODULEBase.test

abstract class MODULEBaseUnitTest extends DrupalUnitTestCase {
  /**
   * One use of this method is to enable the module being tested, any
   * dependencies, or anything else that might be universal for all tests.
   */
  public function setUp() {
    parent::setUp();
    // Enable the module.
    $this->enableModule('MODULE');

    // Enable dependencies and anything else needed by all tests.
  }

  /**
   * Fake enables a module for the purpose of a unit test.
   *
   * @param $name
   *  The module's machine name (i.e. ctools, not Chaos Tools).
   */
  protected function enableModule($name) {
    $modules = module_list();
    $modules[$name] = $name;
    module_list(TRUE, FALSE, FALSE, $modules);
  }
  ...
}

In the code sample, we simply create a base class that extends DrupalUnitTestCase and add a setUp() method to take care of anything that needs to be done before all of our tests. We also include an enableModule() method that fakes enabling a module (DrupalUnitTestCase does not have access to a database, so enabling a module the normal way is not available).

Other things we can add to the base class are new methods to support our tests (such as asserts) that one might want to use over and over. For example, in the SPS module we added an assertThrows() helper, which ensures the correct exception is thrown (note this assert only works in PHP 5.3 and later).
tests/MODULEBase.test

abstract class MODULEBaseUnitTest extends DrupalUnitTestCase {
  ...
  /**
   * One can also add helper assert functions that might get used in tests.
   *
   * This one tests that the correct Exception is thrown (PHP 5.3 only).
   */
  protected function assertThrows(Closure $closure, $type, $error_message = NULL, $message) {
    try {
      $closure();
    }
    catch (Exception $e) {
      if (!($e instanceof $type)) {
        throw $e;
      }
      if (isset($error_message)) {
        if ($e->getMessage() != $error_message) {
          $this->fail($message, "SPS");
          return;
        }
      }
      $this->pass($message, "SPS");
      return;
    }
    $this->fail($message, "SPS");
  }

  /**
   * One can also add helper assert functions that might get used in tests.
   *
   * Test that an object is an instance of a class.
   *
   * @param $class
   * @param $object
   * @param $message
   */
  protected function assertIsInstance($class, $object, $message) {
    if ($object instanceof $class) {
      $this->pass($message, "SPS");
    }
    else {
      $this->fail($message, "SPS");
    }
  }
}
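To show how a helper like assertThrows() reads in practice, here is a hypothetical test method; MODULE_get_condition() and MODULEInvalidNameException are illustrative names, not part of the SPS module:

public function testInvalidName() {
  $this->assertThrows(
    function() {
      // Hypothetical: MODULE_get_condition() is assumed to throw on
      // unknown condition names.
      MODULE_get_condition('no_such_condition');
    },
    'MODULEInvalidNameException',
    NULL,
    'An unknown condition name throws MODULEInvalidNameException'
  );
}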

With the base class set up, we can start writing our tests.

Step 3 - Write Tests

OK, now we get to write tests! The structure here can get a little confusing: one can have as many test classes as one wants, each test class can have as many test methods as one wants, and each test method can have many assertions.

In the SPS module we created a test class for each class provided by the module, with a test method for each method provided by that class. We found this to be an effective way to structure the tests, but there is no required structure.

I also used one file for each test class, each of which extended the base class defined earlier. Each test class should define a getInfo() method to tell us about the test.

The getInfo() method returns an array with keys of name, description, and group (I use the module name for the group).

class MODULETestnameUnitTest extends MODULEBaseUnitTest {
  static function getInfo() {
    return array(
      'name' => 'MODULE Testname',
      'description' => 'Test the public interface to the Testname of the MODULE module',
      'group' => 'MODULE',
    );
  }
  ...
}

Now for the test methods. Each method whose name starts with the word 'test' will be run as a test, and each test should include one or more asserts. One can look at the Drupal unit test methods to see all of the available asserts; assertTrue() and assertEqual() will be the most used.

class MODULETestnameUnitTest extends MODULEBaseUnitTest {
  ...
  /**
   * All test methods start with the word 'test' in lowercase letters.
   *
   * One can use any of the asserts in this space.
   */
  public function testConstruction() {
    $value = $this->getCondition('baseball');
    $this->assertIsInstance(
      'MODULECondition',
      $value,
      'Description of what is being checked in this assertion'
    );

    $expect = new MODULECondition('baseball', 'other');
    $this->assertEqual(
      $value,
      $expect,
      'Description of what is being checked in this assertion'
    );
  }
  ...
}

Other methods (those that do not start with 'test') are not run as tests, but can be used for tasks that might be needed by multiple test methods.

class MODULETestnameUnitTest extends MODULEBaseUnitTest {
  ...
  /**
   * Methods that do not start with 'test' are ignored as tests, but can
   * be used for functionality that might be reused in multiple tests.
   */
  protected function getCondition($name) {
    return MODULE_get_condition($name, TRUE);
  }

}

Running Tests

The last item is to run these tests; that can be done using the scripts/run-tests.sh script (found under the Drupal root). One must have the Simpletest module enabled to run these tests.

To run only the tests in a specific group (remember I set it to the name of the module earlier) pass the group name as the first argument to the run-tests.sh script.

 php scripts/run-tests.sh GROUPNAME
While one is developing one's tests, or if one has to drill down on a failing test, one can use the verbose and class flags. This will give a more verbose report on only one class' tests:
 php scripts/run-tests.sh --verbose --class CLASSNAME

One can also use the Testing admin area for running the tests (they are grouped by group name), but I find that the script is much more conducive to Test Driven Development.

When Erik Summerfield joined our software development team, we knew that his natural talents in math and economics would be an asset to our team and clients alike. Plus, his experiences in various programming languages including .NET/C#, perl, ...

Oct 01 2012
Oct 01

Posted Oct 1, 2012 // 0 comments

Recently, I revisited the publishing system we built for Thomson-Reuters' London Olympics coverage; one of the features I reviewed was the taxonomy processing aspect of the content ingestion engine. We built this to take in content feeds from Reuters' wire service content management and routing system. When you are in the weeds of building out a system, it's hard to appreciate the complexities of what you are building. It was illustrative to return to the site months after we launched it and gain a deeper appreciation for the challenges we faced in building out the publishing engines that processed thousands of assets per day throughout the duration of the Games.

The application of the taxonomies was a multi-layered process that progressively applied terms to the article nodes in several distinct steps:

  • Sports codes (for example, "Athletics" or "Basketball") were parsed out of a series of tags in the article XML and matched against Sport records pulled from the third-party Olympic data provider. When the Sport records were imported during development and the database was populated with Sports and Events, the standard Olympic codes were included, and it was these codes that were mapped to (see the sketch after this list).
  • In some cases, the codes were mapped instead against a local table of alternative Sport codes used internally by photographers to ensure that these alternative publishing paths would result in equivalent mappings.
  • Events were also included in the tags within the XML, but not always.
  • The slugline was crafted to include sport, event, and match information, although only the match information was parsed out.
  • Athlete associations were applied by passing the text elements - title, caption, article body, summaries - through Thomson-Reuters' OpenCalais semantic tagging engine and pulling 'people' terms from its library of terms. If there were any matches between the person references returned and the Athlete records created from the Olympics data, associations with those Athletes were applied.
  • Countries were NOT pulled using OpenCalais, although those mappings were available - the concern was that there would be far too many false-positives applied for Great Britain, given that nearly every article contained references to the host country.  Instead, if Athlete associations were obtained, we queried the Athlete record for the Country with which they were affiliated, and applied that reference to the article.
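In rough Drupal 7 terms, the first two mapping steps amounted to something like the sketch below. Every name here (the tables, the function) is illustrative, not the production code:

/**
 * Hypothetical sketch of the sport-code mapping steps. Table and
 * function names are illustrative only.
 */
function example_map_sport_code($code) {
  // Second step: photographers used an alternative internal standard,
  // so check the local table of alternative codes first.
  $olympic_code = db_query(
    'SELECT olympic_code FROM {example_alt_sport_codes} WHERE alt_code = :code',
    array(':code' => $code)
  )->fetchField();

  // First step: otherwise treat the tag as a standard Olympic code.
  if (!$olympic_code) {
    $olympic_code = $code;
  }

  // Match against the Sport records imported from the data provider.
  return db_query(
    'SELECT sport_id FROM {example_sports} WHERE olympic_code = :code',
    array(':code' => $olympic_code)
  )->fetchField();
}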

Although there were aspects of this process that were worked out as requirements changed and evolved (in particular, it was discovered relatively late that photographers were using an alternate standard for sports tagging), the system was ultimately successful because we had mapped out the process well before beginning development. We understood the complexities inherent in Reuters' content model.

It seems elementary that these things would be worked out ahead of time, but requirements evolve, and sometimes you just have to roll with the changes in order to ensure the success of the project. What makes rolling with those changes possible is a solid content strategy.

Data Informs Process

We had many sessions where we discussed potential ways of mapping data into the system, and a number of alternatives were rejected because there were potentially too many holes in the processes of managing the data. Make sure you get a look at production-level data as soon as possible in the project, and make sure your technical leads have a chance to work through any potential issues with decoding and processing the data. If you can see ahead of time that there are basic compatibility issues between what should be relatable data points in different third-party data feeds, then there is still time to get alterations made to the data and, failing that, to devise work-arounds, alternative mappings, or transformations using contextual clues in the data.

Additional processing steps can be applied to handle systemic issues - as we did by using OpenCalais to gain athlete associations before using those athletes to create country associations. Semantic tagging can be used to handle other cases where you know that a key piece of information might be missing from the original article, but an educated guess can be made as to what it is by seeing what subjects and terms are pulled through the parsing. For example, if a set of articles is missing top-line mappings to sections within a larger news site, OpenCalais or a similar technology can tell you where an article topically produces its strongest associations within particular vocabularies: references to sports teams and athletes would indicate that it should be a sports article, and references to members of Congress would place it within a politics vertical.

Sometimes it's simpler to accept that weaknesses in the data can be more easily handled by empowering the client with smart tools or smarter business processes. If the problems can be isolated to an easily identifiable subset of content, these particular articles might be routed to a group of editors whose purview includes remapping the missing metadata. If you know that there are systemic weaknesses in how taxonomies are applied in general - that a certain percentage of articles will, as a matter of course, be missing terms - you can work the creation of more sophisticated taxonomy management tools into the budget to allow editors more immediate access to the taxonomy. If your stakeholders decide that the incidence of bad data is best solved by leaning on their editors and writers to use their internal taxonomies consistently and correctly, they'll start laying down the law as soon as you determine with them that this is the most efficient and promising route to better online content.

Try to Break the Content

Key to all of this is the conversations you have with the client, where you work through their publishing workflows, sources of data, and the intersections between the two. This needs to go beyond gathering requirements and documenting user stories - you need to try to break the system. Brainstorm about worst-case scenarios. Let the client talk about their worst fears regarding the system. Poke holes in their ideas. Let them challenge you on yours, and be prepared to walk through any implementations you have in mind. You'll be much better prepared for the unexpected if you try to narrow down the possibilities for what that might be.

Solutions Architect Joshua Lieb has a deep knowledge of web development, content management, and programming thanks to his more than 15 years of web development experience in the government, publishing, and NGO sectors. His proven ...

Sep 24 2012
Sep 24

Posted Sep 24, 2012 // 5 comments

Local development, with its xdebug goodness, only works if you can mimic the production site as closely as possible.

While there are very good arguments to use virtualization, sometimes a local site with prod code and a prod database is good enough.

Using a simple bash script, you can quickly and easily run a set of automated commands that get your local environment setup quickly.

Decompress and Import a Database File

It's really simple to import a database, even if it's already compressed, using Drush's SQL functions:

$ gzcat ~/path/to/file.sql.gz | `drush @alias sqlc`

The backticks are important, as you want that command to act on the results of the gzcat command.

Automate clearing cache and disabling modules

When you pull down a prod database locally, a lot of times you need to disable modules that are only applicable to prod. You may also need to clear caches or set values specific to development.

Below is a script you can put in your .bash_profile or .bashrc to do this for you:


# Project
project_clean () {
  drush $1 pm-disable [ADD PROD MODULES TO DISABLE] -y && \
  drush $1 pm-enable devel dblog -y && \
  drush $1 cc all && \
  drush $1 updb -y && \
  drush $1 vset --always-set --yes less_devel 1 && \
  drush $1 vset --always-set --yes preprocess_css 0 && \
  drush $1 vset --always-set --yes preprocess_js 0;
}

Replace [ADD PROD MODULES TO DISABLE] with the names of the modules you want to disable, separated by spaces (without the brackets).

You'll notice that the script also enables devel and dblog, clears cache, updates the database, and sets some development variables.

You'll also notice that the script accepts an argument. This is so if you have multiple sites setup locally, you can simply pass in the alias of the site.

For instance, if you had a site with alias 'foobar', you'd use the script at your command prompt via:

$ project_clean @foobar

Combine the Two

Hey! Can't we chain these together, so that in one line, we decompress a database file, import it to our local site, disable modules we don't need, enable the development ones we do, and run database updates?

Of course!

$ gzcat ~/path/to/file.sql.gz | `drush @foobar sqlc` && project_clean @foobar

Mountain Lion Caveat

I updated to Mountain Lion recently and kept running into a really weird error when doing this process:

ERROR 1016 (HY000): Can't open file: './XXXXXX.frm' (errno: 24)

A quick Google search led me to believe that the error was related to a limit on the number of concurrently open files.

In OSX <=10.7, you can get around this by editing the my.cnf configuration for your MySQL setup with the appropriate values for mysqld. Here is an example I use:

[mysqld]
# Packets.
max_allowed_packet=16m
# Wait timeouts.
innodb_lock_wait_timeout=600
wait_timeout=600
connect_timeout=10
# Set this as high as possible. On a dedicated server, 60% - 80% of machine RAM.
innodb_buffer_pool_size=512m
# Set this to the number of logical cores you have on the database server.
innodb_thread_concurrency=4
# Turn this on dynamically with a Jenkins job.
slow_query_log=OFF
# Max number of connections allowed.
max_connections=400
# Don't run out of file descriptors!
open_files_limit=32768
# If you set the query cache too high, your server risks severely slowing down and taking tens of seconds after an INSERT due to query cache mutex contention.
query_cache_limit=1M
query_cache_size=32M

In OSX >=10.8, this gets ignored because Mountain Lion has a soft virtual file limit that is maxed out at 256.

You can validate this one of two ways. Either check the VARIABLES table in MySQL for the open_files_limit or simply run 'ulimit -a' at a command prompt:

$ ulimit -a

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) 256
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) 256

Because open_files_limit is a MySQL setting, you can't update it directly in the VARIABLES table. Check this tutorial on how to change it.

Changing it is a moot point, however, if the virtual file limit in Mountain Lion OSX 10.8 isn't changed, as that will trump any value you put in MySQL.

You can, however, increase the virtual file limit in your terminal session by simply running:

$ ulimit -n [NUMBER]

Replace [NUMBER] with the number you want to use, like 4000.

You can then check it again by running:

$ ulimit -a

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) 256
open files (-n) 4000
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) 256

For that session, you should be able to do an import of a bigger database as the virtual file limit will be higher, your MySQL open_file_limit will be adjusted, and your world will be happier!

Fredric has many years of experience in the IT field, including as a consultant in healthcare IT and as an Interaction Designer. Since the days of the Tandy 3000, tinkering has always been a passion, and before joining Phase2, he dabbled in ...

Sep 19 2012
Sep 19

Posted Sep 19, 2012 // 0 comments

When it comes to making websites Section 508 compliant, there are a variety of things to remember to do: ensuring images and image maps have alt text, giving links titles, adding skip links… What about forms?

While Drupal 7 has taken great strides forward in the realm of accessibility, there are still an awful lot of sites on Drupal 6 that may need to come up to standards. I wanted to talk a bit about the Form API and helping make it more accessible. To most people this means just two things: tab order and labels. Yes, a user should be able to tab through the form naturally, and each label should have a for attribute whose value is the id of its input field. These both come easy with FAPI and Form Builder.

Great, that’s all we need to do! But is it really? Take a minute to use a screen reader and you can find the forms confusing. It jumps from label to entry and then to a description, if the field has one. That doesn’t present a very clear form: does the description apply to what comes next? It’s poor design, and sadly it is the default FAPI rendering.

When I was first tasked with changing the form rendering from Label->Input Field->Description to Label->Description->Input Field, I took a look at the form.inc file, the Form Builder module code, and our own custom code that plugged into it. I also searched Google to see if others had tackled this in Drupal 6. The Google endeavor was pretty fruitless, as it brought up a lot of Drupal 7 accessibility pages. So, digging into the forms a bit, I found a fairly easy solution (but not one I liked).

I could simply use hook_form_alter() to add callbacks to the elements’ '#pre_render' and '#post_render' properties. The pre-render callbacks would essentially remove the description and add it to a new temporary entry in the field array, and the post-render function would then alter the rendered HTML content and inject the description data. The method felt too hacky and a bit dirty, but it worked; we now had the form rendering as Label->Description->Input Field. I wasn’t terribly comfortable using this method and wanted to find something more ‘Drupal-ish’.

A coworker and I were talking about accessibility and how to work around issues like this when overriding the element theme function came up. While I was familiar with the core code that rendered the form elements, I wasn’t aware of this functionality. So, after using the core theme functions to find the proper element name (form_element), I added my own form_element theme override: simply copying the core theme_form_element() functionality from form.inc (line 2208) and then altering the couple of lines I needed resulted in a win. A quick one at that.

So if you need to make the form more accessible, just plug this into your template.php file:

/**
 * Override of theme_form_element().
 *
 * Used to override the placement of the description field.
 */
function theme_name_form_element($element, $value) {
  // This is also used in the installer, pre-database setup.
  $t = get_t();

  $output = '<div class="form-item"';
  if (!empty($element['#id'])) {
    $output .= ' id="' . $element['#id'] . '-wrapper"';
  }
  $output .= ">\n";
  $required = !empty($element['#required']) ? '<span class="form-required" title="' . $t('This field is required.') . '"></span>' : '';

  if (!empty($element['#title'])) {
    $title = $element['#title'];
    if (!empty($element['#id'])) {
      $output .= ' <label for="' . $element['#id'] . '">' . $t('!title: !required', array('!title' => filter_xss_admin($title), '!required' => $required)) . "</label>\n";
    }
    else {
      $output .= ' <label>' . $t('!title: !required', array('!title' => filter_xss_admin($title), '!required' => $required)) . "</label>\n";
    }
  }
  //new location for printing description before input field ($value)
  if (!empty($element['#description'])) {
    $output .= ' <div class="description">' . $element['#description'] . "</div>\n";
  }

  $output .= " $value\n";

  //moved
  //if (!empty($element['#description'])) {
  //  $output .= ' <div class="description">' . $element['#description'] . "</div>\n";
  //}
  $output .= "</div>\n";

  return $output;
}

And voila, the form elements will now be more accessible. Navigating and filling out forms using a screen reader is now much easier and more coherent.

It does bring to mind that perhaps FAPI should have better ways to reorder the way elements are being rendered. I’d also be curious as to other solutions community members may have found to solve this particular issue.

As a web developer for Phase2 Brian Nash works to create an effective and structured solution to provide a great experience for the client.

Before joining us, Brian worked for a Ruby on Rails shop that specialized in large-scale healthcare ...

Aug 29 2012
Aug 29

Posted Aug 29, 2012 // 4 comments

Meta tags are one way that content authors can add extra information to a webpage, typically for the benefit of machines (like search engines) to learn more about the purpose and meaning of a webpage. You may recall that once upon a time it was a “search engine optimization” technique to fill the “keywords” meta tag with long lists of words to try to bump up your placement in search sites like Google. The “keywords” meta tag won’t help you much in Google anymore, but that doesn’t mean meta tags have no use at all. Perhaps you’d like to provide Open Graph tags for Facebook, or perhaps you have your own custom set of meta tags for use in an enterprise Google Search Appliance or other tool.

The Meta Tags module is your answer in Drupal 7 for adding these meta tags to your website and being able to customize them for individual pages. The Meta Tags module provides some of the traditional meta tags like “keywords” and “description” out of the box, includes plugins for Open Graph, and offers a fairly simple API for integrating your own custom meta tags.

To add your own custom meta tags, you need to declare them in a custom module.

To get started, create your custom module directory my_metatags and create the following files:

my_metatags.info

name = My Metatags
description = Provides my custom Metatags.
core = 7.x
version = 7.x-1.x

dependencies[] = metatag

files[] = my_metatags.metatag.inc

my_metatags.module

<?php

/**
 * Implements hook_ctools_plugin_api().
 */
function my_metatags_ctools_plugin_api($owner, $api) {
  if ($owner == 'metatag' && $api == 'metatag') {
    return array('version' => 1);
  }
}

What we’ve done here is to create a new custom module called my_metatags and we’ve declared in the .info file that we will be including a file called my_metatags.metatag.inc. In my_metatags.module we’ve implemented hook_ctools_plugin_api to tell CTools where to find our metatag plugin.

Now we need to create my_metatags.metatag.inc:

<?php

/**
 * Implements hook_metatag_info().
 */
function my_metatags_metatag_info() {
  $info['groups']['my_metatags'] = array(
    'label' => t('My Custom Metatags'),
  );

  $info['tags']['my_custom_metatag'] = array(
    'label' => t('My Custom Meta Tag'),
    'description' => t('This is a custom meta tag'),
    'class' => 'DrupalTextMetaTag',
    'group' => 'my_metatags',
  );

  return $info;
}

/**
 * Implements hook_metatag_config_default_alter().
 */
function my_metatags_metatag_config_default_alter(array &$configs) {
  foreach ($configs as &$config) {
    switch ($config->instance) {
      case 'global':
        $config->config += array();
        break;

      case 'global:frontpage':
        $config->config += array();
        break;

      case 'node':
        $config->config += array(
          'my_custom_metatag' => array('value' => 'This is a default value.'),
        );
        break;

      case 'taxonomy_term':
        $config->config += array();
        break;

      case 'user':
        $config->config += array();
        break;
    }
  }
}

In this file we are implementing two hooks provided by Meta Tags, hook_metatag_info() and hook_metatag_config_default_alter().

The code in hook_metatag_info() does two things: 1) creates a new Meta Tags group called “My Custom Metatags” and 2) declares a single custom meta tag, “my_custom_metatag.” By default, this meta tag will be output on a page like:

<meta name="my_custom_metatag" content="This is a default value." />

The code in hook_metatag_config_default_alter() provides default values for our custom meta tag. The defaults can of course be overridden within the Meta Tags administration area, and additionally on a per-entity basis (node, taxonomy term, etc.), depending on your configuration of the module.

Brian is the foremost authority on all things Mobile at Phase2 Technology. Brian frequently speaks about topics like front-end web performance, jQuery Mobile, and RaphaelJS. He has been working with Drupal since 2005 and has presented at ...

Aug 28 2012
Aug 28

Posted Aug 28, 2012 // 1 comments

Search API Attachments is very similar to Apachesolr Attachments in that it lets you extract text from attachments using Apache Tika. It makes this text indexable and searchable so that documents on the site can be searched along with nodes and entities.

However, while Apachesolr Attachments lets you choose between using a local copy of Tika or Tika installed on a remote SOLR server, Search API Attachments doesn't support the same configuration: it only supports local Tika extraction. For large-scale sites this is an issue, as it takes resources away from the web server to do resource-intensive processing work.

There is a way to enable remote SOLR extraction in Search API with just a few patches.

First, make sure you are using the 7.x-1.2 copy of the Search API Attachments module. If you are not, upgrade to that version.

Next, apply the http://drupal.org/files/search_api_attachments-allow_external_extraction_and_cache_extraction-1289222-8.patch patch to your Search API Attachments module. This patch adds a configuration option to the Search API Attachments screen to allow for remote SOLR extraction or local Tika extraction, and contains the necessary code to make it work. It also adds a table to store the text that was extracted, so that you don't need to send the files to the server every time you need to re-index your site. Don't forget to run the database updates after this patch has been applied.

Last, apply the http://drupal.org/files/search_api_solr-allow_abitrary_query-1580118-1.patch patch to your Search API module ( not Search API Attachments ). This patch is required by the previous patch in order to make the query to the remote SOLR server.

You'll want to re-index your site after you've made these changes if you are already using the local Tika extraction.

Web Developer Brad Blake brings a wealth of expertise to our team and our clients whenever he creates software tools and websites on the LAMP platform. For more than five years, he has been using PHP to build cutting-edge technologies that ...

Aug 27 2012
Aug 27

Posted Aug 27, 2012 // 0 comments

One of the hallmarks of Agile project management is maintaining a product backlog. A product backlog is an important artifact of any project - it's where requirements are stored and prioritized by both the project team and the client. Keeping a well-groomed backlog should be a continuous process throughout the project lifecycle. If maintained correctly, a product backlog can make sprint planning a much more effective exercise.

I'm currently working on an Agile project and I've put together a few recommendations on effectively and easily managing a product backlog that I've learned along the way.

1. Schedule a regular Backlog "Grooming" Session

Ideally, this grooming session would take place weekly throughout the duration of the project. Make sure to schedule enough time for the session -  two hours is generally enough, but up to four hours may be necessary for larger projects. 

The goal of this session is to review items (either tasks or user stories) in the backlog and to adjust the priorities of the items. Top-ranked items can then be easily moved into sprints during sprint planning sessions. 

2. Involve the entire team

The entire project team should be involved in maintaining the product backlog through weekly grooming sessions and individual tweaks throughout the week.  It's important that the whole team agrees on the prioritization of the backlog as this activity will ultimately define development sprints. 

3. Make sure to estimate 

Another reason to involve the entire project team in product backlog grooming is to assign estimates to prioritized items in the backlog. This can take two forms - a rough hour estimate (for example: this task will probably take 6 hours), or story points. Story points roughly show the level of effort involved in a task without assigning a period of time to it (for example: this task has a story point value of 1, but another has a value of 5, indicating that the second task will take 5 times as long as the first).

Story points are a good way to go if you will be presenting the backlog to the client for prioritization. This way, the client has a rough idea of the level of effort of a task before a task is broken down and assigned more specific estimates, which are likely to change from the high-level estimate.

4. Involve the client

It's important to involve the client in backlog grooming - especially as priorities change throughout the project. It's likely that the backlog will continue to change throughout the project as new User Stories are added and requirements are developed and fleshed out. Throughout the week, collect new client requests and requirements in the form of User Stories and add them to the backlog.   Set up a weekly meeting with the client to review and re-organize priorities. 

In her role as a Solutions Analyst, Dida brings her years of experiences as an online manager to deliver to the client user-friendly implementations. A natural-born communicator, Dida uses her talent to help clients find the most efficient ...

Aug 22 2012
Aug 22

Posted Aug 22, 2012 // 0 comments

When Moshe Weitzman posted his idea for a Drupal community initiative called DrupalGive on Drupal.org, we knew we wanted to get on board. As Moshe simply explains, DrupalGive is a page that organizations publish on their websites to highlight the ways they have contributed to Drupal, with the intent to educate clients and partners about the Drupal community and also "nudge" other organizations to contribute.

By nature, open source software is dependent on contribution. As Drupal matures, organizations are using it to build bigger, more complex websites. It is therefore more important than ever to contribute and share within our community in order to encourage further innovation.

We are proud to announce the launch of our own DrupalGive page, designed by Dave Ruse and developed by Tirdad Chaharlengi and Josh Cooper.

Our page highlights 4 different ways we contribute to Drupal:

Modules

We recently posted a blog about our contributions to the Large Scale Drupal Initiative (LSD), specifically the Site Preview System; read more about the project and this module here.

Events

We are a proud silver sponsor of DrupalCon Munich. We've been busy in Munich with some great collaborative sessions:

Distributions

We maintain 4 distributions: OpenPublic, OpenPublish, Open Atrium and Managing News.  We are excited about the most recent OpenPublish release including a new demo theme: Gazette.  Stay tuned for the impending OpenPublic release!

Presentations

We make sure we post as many of our session slides as we can, to promote Drupal Learning. Look out for our DrupalCon Munich session slides posted soon to our slideshare.

We had a lot of fun putting our DrupalGive page together. We look forward to our contribution lists growing, and to using other organizations' DrupalGive pages to stay informed and up to date on the latest Drupal contributions.

As marketing coordinator at Phase2, Annie is involved in the open source events in the New York metro area. She really enjoys engaging the lovely Drupal Community and promoting Phase2 to prospects, clients, staff members, and the greater ...

Aug 20 2012
Aug 20

Posted Aug 20, 2012 // 0 comments

Last month we released our first beta of OpenPublish on Drupal 7. Included in the release is a brand new demo theme called Gazette. Built as a sub-theme of Frame and based on the Omega base theme, Gazette is a fully responsive, mobile-friendly theme. It was originally designed by Samantha Warren and then adapted for responsive design by Dave Ruse. We were excited to add it to OpenPublish in the beta release as a demo theme to show just what OpenPublish can do.

Our goal with this theme was to demonstrate that OpenPublish can handle a large amount of dynamic content and remain mobile-friendly, while still looking attractive and appealing to the eye. The theme features the League Gothic open source webfont.

Gazette homepage

Gazette homepage on the iPhone

As part of the development of Gazette, we incorporated the PhotoSwipe JavaScript library into core OpenPublish. The Photo Gallery content type now supports swipe events on touch-enabled devices. This was a feature that we felt was crucial to highlighting the mobile-friendly aspect of OpenPublish, and it prepares site administrators for supporting mobile in the future. The touch photo gallery functionality was actually de-coupled from Gazette and built as part of the openpublish_media feature, so all themes will have this functionality by default.

Gazette’s code can serve as a solid demonstration for themers who are just beginning with Omega or OpenPublish. Frame is a base theme for OpenPublish developed by Jake Strawn. It is a sub-theme of Omega, so it inherits all of Omega’s capabilities while adding basic theme support for all of OpenPublish’s functionality. We deliberately chose to build Gazette as a sub-theme of Frame to demonstrate how this is done, and to help define some best practices.

David Coffey’s user interface development skills form the bridge between Phase2’s design and development teams. His specializations in CSS, Javascript, and Drupal theming, plus his focus on Drupal as a platform, make David an ...

Aug 17 2012
Aug 17

Posted Aug 17, 2012 // 4 comments

The Site Preview System Drupal module is a framework for previewing a site with a set of conditions active. Here is a video to introduce you to what the module does. SPS was developed out of the LSD CSI project as part of a suite of modules.

The Site Preview System works by accepting conditions, generating a list of override revisions, and altering the page to display the correct revision.

Modules that came out of the CSI project

Modules that are currently integrated with SPS.

Workflow

Layout

Entities

  • Nodes
  • Any other entity that supports Revisions

If you are going to be at DrupalCon Munich, please come to one of the sessions or BOFs.

Team Architect at Phase2, Neil thrives on working with the development team to come up with creative implementations for complex situations. Neil enjoys pushing Drupal past its boundaries to implement cutting edge concepts, and has a ...

Aug 16 2012
Aug 16

Posted Aug 16, 2012 // 0 comments

We recently had the opportunity to work with Thomson-Reuters on a rather interesting project - create a custom-built editorial suite on top of Drupal, with an eye towards eventually building out a multi-site platform - and build it in three months, just in time for the 2012 London Olympics.

Reuters would be pushing hundreds of articles and thousands of photos into the site every day, with high-volume days (like the opening ceremonies, or the Men’s 100m final) doubling the volume.   Reuters uses a feed management engine called MediaConnect to serve out photos and text articles to their clients, and our site would be ingesting this content to populate the site.

The content model was complex - a single article could consist of dozens of photos along with article text, and this text was subject to repeated updates. For example: Images start to arrive with a slugline specific to the Men's 100m final, and as each is ingested they are appended to a slideshow that was published when the first image with a unique slugline arrived.  Then, a text item arrives with the slugline that consists of a short write-up of the final results, and that item is in turn integrated into the article, which now consists of a multi-image slideshow and a text article.  Later, more images arrive and are automatically appended, and an update to the text item (perhaps a final, longer story) overwrites the original text item.
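To make the assembly rules concrete, here is a rough sketch of the slugline-keyed logic in Drupal 7 terms. Every name here (the field tables, the function, the content type) is illustrative, and the production engine handled many more cases:

/**
 * Hypothetical sketch: find the article node for an incoming image's
 * slugline, creating it on first arrival, and append the image to its
 * slideshow. All names are illustrative, not the production code.
 */
function example_ingest_image($slugline, $fid) {
  $nid = db_query(
    'SELECT entity_id FROM {field_data_field_slugline} WHERE field_slugline_value = :slug',
    array(':slug' => $slugline)
  )->fetchField();

  if ($nid) {
    $node = node_load($nid);
  }
  else {
    // First asset with this slugline: create and publish a new article.
    $node = new stdClass();
    $node->type = 'article';
    $node->language = LANGUAGE_NONE;
    node_object_prepare($node);
    $node->title = $slugline;
  }

  // Append the incoming image to the slideshow field; a text update
  // would overwrite the body field in the same fashion.
  $node->field_slideshow[LANGUAGE_NONE][] = array('fid' => $fid, 'display' => 1);
  node_save($node);
}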

All of this had to have minimal latency, of course.

We built an ingestion engine that took in the MediaConnect content stream, converted the text and photos to type-specific entities, and stitched together the associated items into an article instantiated as a complex node that could support hundreds of content associations via a system of content manifests.  This content fed a network of landing pages specific to sports, events, athletes, and countries that had been built out to receive automatic content streams.  The end goal was a site that, once hooked up to the MediaConnect Olympics content channel, could be populated with no editorial intervention whatsoever.

That was the easy part

Well, it wasn’t actually easy.  But it was straightforward - the content was all encapsulated in well-formed XML, and although the ingestion and assembly rules were complex, once in place the system would run.

Harder was allowing human intervention. The editorial use cases were legion, and as is the case with any high-profile news site, especially one covering a high-profile event like the Olympics, the qualitative differences between one photo and another, or one version of a headline and another, were subtle but crucial. MediaConnect (and our system) lay at the end of a chain of editorial content management systems, each of which applied a layer of curation, so the content stream was not a raw torrent of news (for a look at what the photo editors experienced, see "A Glimpse Into the Hectic Life of a Reuters Photo Editor at the Olympics"). Even so, the editorial team required a high degree of granular editorial control over the story assembly process. If a story received 50+ photos, an editor would want to lock the best one at the top while still allowing automatic ingestion of more. Updates would come in for articles that had already been edited, but editors would want to review these updates to ensure that their own corrections were not overwritten. Articles built by hand would need to be ready to receive automated content at any time.

Drupal provided many, many advantages to us in building out the editorial toolkit that the Reuters team in London used to manage the constant ebb and flow of content management.  We used many contributed modules

(Workbench,

Solutions Architect Joshua Lieb has a deep knowledge of web development, content management, and programming thanks to his more than 15 years of web development experience in the government, publishing, and NGO sectors. His proven ...

Aug 14 2012
Aug 14

Posted Aug 14, 2012 // 3 comments

With the right preparation and foresight, launching your new CMS, or relaunching your CMS in Drupal, can be a smooth process. Anticipating the future and looking at the big picture of your company's goals is integral to the success of this process. Here are the top 5 mistakes that you should avoid:

1. Spending huge focus, time, and capital assuming only today's problems, instead of tomorrow's

Quick exercise: Look at your current digital endeavor and think back to July 2010. What were you focused on then? How much of it matters now? How much of what you did is still in effect on your current platform and providing ROI now? In my experience it's about half. If it's 60-80% you are an excellent and disciplined strategist. (If you say 100% you are probably suffering from selective memory loss.)

The way to correct this mistake is to embrace agility within the context of your big migration and launch, and (this is the kicker) throughout the lifespan of your new Drupal site. Regardless of your specific endeavor, you must cope with a digital landscape that shifts every three to six months. You must meet the challenges of digital while simultaneously delivering digital products on-time, on-budget and on-scope. Fifty percent of your project should focus on assumptions of "now", and the other half should assume the unknown. Looking to the future also allows you to let go of some of the aging products or content on your site and not get mired in the minutiae of a digital product that should be left behind. You will likely be forced to kill/change/revamp/ignore half the site within two years. New challenges will emerge, even within the timeline of the project itself. You cannot prepare for everything, so set yourself up for success by assuming iteration. Build solutions that can easily change. Don't lock yourself into solutions based on a set of assumptions about digital that are likely to change. It may sound crazy, but after five years of helping businesses and enterprises reach their digital goals I know you should weigh your investments carefully with this in mind: don't build a thing you wouldn't be okay with breaking in six months. I'm not the first person to propose such an approach for dealing with an uncertain future. Not embracing agility in your build and your ongoing strategy could leave you without a future to face.

2. Assuming a sign-off on designs means the work of designing is done

Seeing is [not] believing: there is a false security that comes with the almighty "design sign off" on mocks, PSDs or design comps. Once everyone works through comps and decides on "The Design We Want," teams tend to gather everyone in a room and throw a static image of your new homepage up on a projector. "This is our New Site." Everyone nods, oohs and aahs, but know that many steps remain between that moment and the end of the design process. That first exposure achieves something, but it's not your team's or your organization's full understanding and acceptance of the design. The team will return to design again and again. The greatest challenge is to retain the fundamental successes of the original design effort, but eliminate the mistakes that reveal themselves through incremental development. In order to leave space for this to happen you can't fall victim to mistake #2, and if you try to enforce the original projector image rigidly, your new site will suffer for it. See #3 on how to handle the ongoing changes to not only design but all other aspects of your site.

3. Listening to your Users Too much, or Too Little

There's a sweet spot with user feedback and how it can fit into your agile process. You need to 1. Create an efficient way for users to give you feedback and 2. Understand how your stakeholders - every stakeholder - will engage with the new site. Agile is grounded in a basic assumption that guides number one: your stakeholders will change their mind… a lot. To actually meet their needs you must listen to and keep a focus on all stakeholders but that can be challenging given their inevitably shifting requirements. How can you stay efficient while managing your project schedule with time-boxed iterations and listen to all your stakeholders who always change their mind? You must find a way to incorporate continuous user feedback into the process of building your new CMS - it's the agile way. Don't unintentionally ignore some users who aren't internal to your organization: customers/readers/audience. For them you should use proxies or people with in-depth user knowledge. Proxies can be anyone from technical service folks to site architects and UX specialists to the patient saint who fields phone calls and emails from users complaining they can't do "X" on your site. Also plan for the fact that a disconnect will always exist between what internal stakeholders *think* people come to the site for, and what they actually do. Check your analytics and present data to confirm/counter assumptions and make that check part of your iterative feedback process. This leads to my final point: understand your stakeholders. Do all your research before the project starts and work to create a definition of all stakeholders involved: roles, goals, tasks and create user stories. Then schedule check-ins against those definitions and user stories within your feedback cycles.

4. Trying to be an SEO expert overnight (or even within the lifespan of your rebuild)

You can achieve solid Search Engine Optimization with your new build, but you cannot know everything you will need to know when you launch. I've seen serious scope creep on projects with people who were convinced they could crack Google's algorithm. They build requirements around their Master SEO Plan for their site's global dominance in all search rankings. You will make mistakes and miss things in how you optimize your site for search. Instead, build in pre- and post-launch checklists of search optimization best practices for content, redirects and performance. Then move on to the rest of your new site. Also know about "the most powerful Drupal module that does nothing."

5. If we build it, they will come

It used to be a big deal to launch a website. It's not anymore. Simply launching a site isn't the traffic boost it used to be. You need a PR and social strategy in place before day one. If you want to leverage your site launch to make a big splash don't ignore PR. Craft your media blitz with nuance. Have a social strategy in place to refine messaging to your own users, including messaging in the first few hours, days and weeks after launch. Talk with your technologists about highlights of innovation. Don't depend solely on your traditional audience. Work with your partners in technology, business and content to promote the site in all sectors, not just your own. Everyone should get a space to brag. Make sure you build something worth bragging about.

Kellye Rogers is a Project Manager at Phase2 Technology. She is passionate about listening and communicating with clients, developers and designers to create the best Drupal products possible. She lives for streamlined, innovative solutions ...

Aug 13 2012
Aug 13

Posted Aug 13, 2012 // 0 comments

News editors and programmers have an important trait in common - laziness.

Not the sin of a sloth per se, but the innate desire to find ways to reduce unnecessary work, to become more efficient, to be able to do more with less. From that shared trait comes some of the best innovations in digital publishing.

That desire to minimize or remove unneeded steps allows both developers and editors to focus on the more important parts of their jobs, the things that make a real impact. For an editor, that is curation and timeliness: making critical decisions to ensure that their audience gets the most accurate, most up-to-date, and highest quality information available. These days, editors need to deliver on multiple platforms - mobile, Web, tablet, email, etc. Keeping up with multiple systems, large amounts of information, and multiple platforms can be a big strain on a news organization.

Phase2 recently had a chance to do something we really enjoy - to customize a CMS that helps editors focus on what is important, instead of just sorting and pushing content. We were privileged to work with Thomson Reuters on a Web content management system (CMS) that supports their Olympics website and their mobile Olympics site, and provides content support for iPad and iPhone applications.

Check out their fantastic coverage of the 2012 Summer Olympic Games in London at reuters.com/london-olympics-2012/.

Two of the goals of the Reuters CMS that were especially intriguing for Phase2 were key work-saving steps. The first was a touch-once, publish-many content management system to support multiple outputs, and the second was smart automation of content tagging, grouping, and routing, with the ability for editors to override at will. These two items create a CMS that works hard to handle all the small details, so editors can focus on the truly important and impactful.

It’s a system we’re proud of. A team of editors can route content to the website, the mobile site, and the iOS app from a single Drupal-based CMS. Stories, photos, results and other data, flow in from multiple data sources, are parsed and tagged, and then stitched together into packages automatically based on tagging and a set of rules. The content flows into all the appropriate sections of the site based on countries, sports, events, venues and athletes, and each story package grows organically as new related pieces of content arrive.

An editor need not touch a story for all this to happen. But they can make changes and override the automation when and where they feel it is needed. Editors can use the admin interface to search for additional content. They can change tagging, modify the packaging, and “pin” content in place.

It’s a great blend of automation and curation, allowing editors to spend their time on what is important - sharing the excitement of the Games with us.

Felicia Haynes brings 11 years of online publishing experience to her role as an account manager at Phase2. As an account manager, she is an advocate for the clients across all areas of their projects, from the requirements gathering to ...

Aug 09 2012
Aug 09

Posted Aug 9, 2012 // 0 comments

Recently we were tasked with taking on the Thomson Reuters Olympics website, which involved many intricate components, including the task of developing a mobile-specific theme for Android users and other non-iOS devices (iOS got a separate app). I was quite excited about this task, as it stepped away from the idea of responsive design and instead focused on a lightweight mobile-specific theme and content. There is still plenty of debate over whether a website should be responsive for all the devices it's viewed on, or if a theme specified for a medium is appropriate.

Mobile vs. Responsive

In the case of the Thomson Reuters Olympics website, it was determined that a mobile-specific theme was a better solution than creating a responsive website. The desktop theme has a plethora of widgets in the right rail which wouldn't translate well to the mobile theme, so the plan of action was to create a single content region with a listing-style layout. Although the website is built on the Omega theme at its core (which is mobile-first/responsive), it was primarily used for its HTML5 abilities, built-in grid system, and region configuration. This proved to be a great time saver for the mobile theme, with most of the legwork already completed by Omega's default settings.


Although the ultimate goal of this theme was to address users on a mobile platform, responsive design was found not to be the correct solution. The amount of content displayed would have hindered a responsive layout and experience, so it was better to have a streamlined solution and go with a mobile theme.

The mobile theme came together quite nicely, with content being placed inside of the main content region and stacked vertically. Custom mobile-tagged contexts were used to place the blocks in the appropriate order in accordance with the comp (more on this later). The custom contexts allowed us to place only the content that was useful, rather than serving the user all the content and hiding it via CSS, thus avoiding increased HTTP requests and overall load time.


Using Custom Tagged Mobile Contexts

Going back to the custom tagged mobile contexts, there were a few technical challenges which led us to that solution. The first challenge was the obvious problem of having completely different layouts for the desktop and mobile versions of the site. Creating a separate set of contexts tagged with 'mobile' enabled us to place content blocks in the regions where we needed them to be. The second and more difficult challenge was the complex and multiple layers of caching in place at various levels of the project: Acquia, Akamai, and Varnish. Even with the ability to create a different set of contexts for mobile, there was still a high chance that mobile users would be served the desktop context sets just because of the caching. Switching contexts at the Drupal level was not an option because of this factor.

The final solution, by Team Architect Tobby Hagler, was brilliant and simply elegant. Akamai was already set up to redirect mobile devices to 'm.' URLs. Knowing that, Tobby created a patch for Context that would detect the domain (in this case 'm.'), load the mobile-tagged contexts from the database, and turn those on rather than the desktop context set. Et voila: a mobile site with exactly the content that you want. So rather than device detection, this is a domain detection scheme, where we already know the domain will be device-specific with 'm.'.
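Conceptually, the check behind the patch is as simple as the sketch below; the actual patch wired the result into Context's condition handling, and the function name here is illustrative:

/**
 * Minimal sketch of the domain-detection idea. The real patch
 * integrated this with the Context module; this only shows the check.
 */
function example_is_mobile_domain() {
  // Akamai already redirects mobile devices to the m. subdomain, so
  // the host name alone tells us which tagged context set to enable.
  return strpos($_SERVER['HTTP_HOST'], 'm.') === 0;
}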

Mobile web doesn’t have to be a daunting task by any means. With the proper game plan, it can actually flow together quite nicely. Unfortunately, there can always be curve balls that come into play. Even after reading countless articles on creating mobile-specific and responsive websites, sometimes it comes down to the core pieces of the web to give us the biggest headaches. In this case, cache didn’t make the world go round, but it proved to be a fun challenge to overcome.

Josh Cooper’s user interface development skills play a vital role in bringing great design ideas into fully functioning websites.  His specializations in HTML, CSS, Javascript plus his focus on Drupal as a platform, make Josh an ...

Aug 07 2012
Aug 07

Posted Aug 7, 2012 // 2 comments

When creating a new Drupal site, you’ll often come across the need to create new date types and date formats programmatically. They’re not exportable, however, so how can you create and manage them through code? It turns out a few hooks and database statements are all you need.

If you want to create a new custom date format, that data is stored in the date_formats table. To create a new date format you can follow the example below, changing the PHP date string to the date string of your choosing.

/**
 * Create the example date format.
 */
function mymodule_update_7001() {
  db_insert('date_formats')
    ->fields(array(
      'format' => 'M. j g:i A T',  // PHP date() format string.
      'type' => 'custom',          // User-defined formats use the 'custom' type.
      'locked' => 0,               // 0 = editable through the UI.
    ))
    ->execute();
}

That’s all you need to create a new date format; it should now be available on the /admin/config/regional/date-time/formats page.
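Because the format’s type is ‘custom’, you can also pass the same format string straight to format_date() as a quick sanity check:

// Render the current time with the same custom format string.
// Produces something like: Aug. 7 3:15 PM EDT
print format_date(REQUEST_TIME, 'custom', 'M. j g:i A T');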

To create a new date type (for example, to use with the format_date() function), you’ll need a few extra statements. The data for date types is stored in two places:

  • The date_format_type table contains the title and type (machine name) of the date type.
  • The variable table contains the format string as its value, keyed by date_format_ followed by the machine name.

To programmatically insert a date type in an update hook, you can do:

/**
 * Create the example date type.
 */
function mymodule_update_7002() {
  db_insert('date_format_type')
    ->fields(array(
      'type' => 'example',  // Machine name.
      'title' => 'Example', // Display name.
      'locked' => 0,        // 1 = can't change through UI, 0 = can change.
    ))
    ->execute();
  // The variable name is date_format_ followed by the machine name above.
  variable_set('date_format_example', 'M. j g:i A T');
}

After running that update hook (or install hook), you’ll see your new date type on the /admin/config/regional/date-time page.
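Once the type exists, format_date() will accept its machine name and look up the date_format_example variable for the format string:

// Render a timestamp using the new 'example' date type.
// Output is formatted per the 'M. j g:i A T' string set above.
print format_date(REQUEST_TIME, 'example');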

Web Developer Brad Blake brings a wealth of expertise to our team and our clients whenever he creates software tools and websites on the LAMP platform. For more than five years, he has been using PHP to build cutting-edge technologies that ...

Aug 06 2012
Aug 06

Posted Aug 6, 2012 // 3 comments

Git is growing in popularity among developers for the technical advantages it has to offer. Many would love for their clients or supervisors to switch from other source control management (SCM) systems such as Concurrent Versions System (CVS) and Subversion (SVN). To most stakeholders who aren’t involved in writing code, however, converting to Git doesn’t appear to make sense from a financial standpoint.

However, there are many ways using Git can help you create better products without hemorrhaging time and money on unnecessary overhead and unrefined process. If you can make the transition without interfering with your current day-to-day operations, I highly recommend you do so, and here’s why:

Note: This article aims to bring to light Git’s advantages from a non-technical perspective. If you are interested in a more tech-savvy flavor of this article, look no further.

Concoct Without Fear

Developers thrive on experimentation. Of course, it wouldn’t be good if untested experimental code got committed to the repository and stifled the rest of the team’s progress because they end up troubleshooting bad code that shouldn’t be there in the first place.

Sure, you can set up a model where all developers maintain their own branches and merge to the master branch once everything is tested and approved. You can do pretty much the same thing in Git…and more.

Remember that anything you commit to SVN is pushed to the centralized repository. Git lets you maintain a personal set of branches for experimentation and development on your own computer, without ever touching the central repository. That way, you don’t end up with a pile of branches in the central repository that everyone has to coordinate ownership of, and developers can isolate bad code before it becomes visible to everyone else. Less clutter, better organization, more efficiency!
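For instance, the entire experiment below happens on the developer’s machine (the branch name is just an example); nothing reaches the shared repository until the final push:

# Create and switch to a private local branch; the central repo never sees it.
git checkout -b experiment
# ...hack away, committing as often as you like...
git commit -am "Try a risky refactor"
# Only once it works does anything get shared:
git checkout master
git merge experiment
git push origin master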

Keep Churning Without Interruption

SCM is a great asset to a team’s workflow when it is available at all times. Some say SVN is great for storing all commits so that, in the event of computer failure, no changes get lost. However, if you send a developer on a flight from New York to Los Angeles to attend a conference, they can’t take much advantage of SVN, since nearly every SVN operation needs a connection to the central repository.

Git works on local and remote branches, so commits can be made on the developer’s computer and pushed to the central server once everything looks good. This way, rather than losing hours in transit, you can multitask and get real work done while you travel!
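On the plane, every Git operation except the final push works offline; for example:

# No network needed: commit locally and browse the full project history.
git commit -am "Refactor feed parser"
git log --oneline
# Back on the ground, publish everything in one step:
git push origin master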

(Near) Error-Free Development

One gripe I have with SVN is that it does not enforce some of its own basic SCM principles. In addition to creating branches, you can also create tags, which take snapshots of the state of a branch (tags are, in theory, unchangeable). But SVN implements tagging more like a recommendation, because all you’re really doing is copying a branch into the tags folder (kind of like how some people copy and paste folders on their hard drive and append some sort of “backup” label).

Nothing stops anyone from changing them. You can commit changes anywhere in the repository, so there’s a chance of missing changes on the next deployment if, for example, a developer commits to a tag (SVN has no problem with this). The model assumes all SVN users follow a convention and trusts that no one deviates from the rules.

Git supports the creation of branches and tags...and enforces those principles. You can check out a copy of a tag, but you cannot change it (short of deleting and recreating it), which honors its true intent as an SCM. No one can accidentally commit to a tag and lose changes from current development branches.
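You can see the behavior directly; a tag keeps pointing at its original commit no matter what happens afterward:

# Snapshot the current commit.
git tag v1.0
# Inspecting the tag puts you in a detached HEAD state;
# any commits made there do not move v1.0.
git checkout v1.0
git show v1.0   # still the original snapshot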

Leave No Features Behind

Diffing changes works differently in SVN and Git. While Git tracks changes across all files in the project, SVN recursively searches for changes from the directory you’re in. That can be convenient, since you can focus on a smaller subset of files rather than the entire project. However, I find it error-prone and messy, since there’s a real chance of neglecting to commit changes, which increases the odds of bugs appearing.

Sure, you could go all the way back to the base level of the project to get a full overview of changes, but again, that’s an inconvenience that adds unnecessary time and effort to your workflow. I’d rather know exactly what’s going on throughout the entire project than end up with a partially working commit.
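The difference is easy to demonstrate from any subdirectory of a working copy (the path below is just an example):

# From deep inside the project tree:
cd sites/all/modules/custom
git status    # reports changes across the entire repository
svn status    # reports only from this directory downward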

Decrease Time Spent On Using The Tool

No one enjoys spending a large chunk of time figuring out the tools instead of the task at hand. It’s not difficult to get accustomed to any source control management system, but I’d argue that some take more work than others to achieve the same goals.

I’ve noticed that some of the commands in SVN are more cumbersome than their counterparts in Git. It gets to the point where copying and pasting is involved (at which point a GUI like TortoiseSVN or Cornerstone would be faster). For a few examples, see below (the SVN repository URLs are illustrative):

Creating a new branch

SVN

svn copy http://svn.example.com/repo/trunk http://svn.example.com/repo/branches/testcode -m "Create branch testcode"

Git

git branch testcode

Creating a new tag

SVN

svn copy http://svn.example.com/repo/trunk http://svn.example.com/repo/tags/testcode -m "Create tag testcode"

Git

git tag testcode

Switching branches

SVN

svn switch http://svn.example.com/repo/branches/newbranch

Git

git checkout newbranch

Losing all changes since the last commit

SVN

svn revert -R ./path/to/directory/with/files/you/want/to/revert/

Git

git stash

Merging a branch

SVN

svn merge http://svn.example.com/repo/branches/mergebranch

Git

git merge mergebranch

Personally, I find myself keeping a crib sheet of the correct repository paths for SVN (svn info will yield them, too), but it’s still too much to remember and type, especially when you’re working across multiple repositories. Less typing = better!

Eliminate Bureaucracy in Collaboration

Git is a distributed revision control system, which means sharing and contributing to projects is easy (of course, you can secure a repository for internal purposes using SSH keys and permission settings on either system). And unlike SVN, where every commit is applied directly to the central repository, Git lets developers stage and commit work locally before pushing it to the central repository.

What does this all mean? When onboarding someone onto an existing project, you can simply give them access and the repository URL, and let them experiment and develop on their own without directly affecting the master branch (this also lets a peer-review process fit into the workflow more elegantly; it beats reverting the repository to a previous state after bad code has been committed).
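In practice, onboarding can be as short as this (the repository URL and branch name are placeholders):

# Clone the project and do all work on a topic branch.
git clone git@example.com:project.git
cd project
git checkout -b my-feature
git commit -am "First pass at my feature"
# Publish the branch for peer review; master is untouched.
git push origin my-feature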

Could you do the same in SVN? Possibly. But in most cases, time would be spent training new team members to commit to the repository correctly so they don’t interfere with fellow team members’ development branches (which becomes even more critical as a project grows).

Is Git Right For My Company?

Most developers would be delighted if they could change their workflow to use Git. Switching over early is ideal unless, of course, your SCM is tied to a large network of dependent applications. If it’s not viable to change SCM systems mid-project, I highly recommend using Git on future projects.

Git is notorious for having a large suite of tools that even seasoned users need months to master. But the fundamentals of Git are simple to pick up if you’re switching over from SVN or CVS. So give it a try sometime.

As a Phase2 developer, Peter Cho strives to create elegant and efficient solutions to provide a positive user experience for clients.

Prior to joining Phase2, Peter worked for the Digital Forensics Center developing marketing and web ...

Jul 26 2012
Jul 26

Posted Jul 26, 2012 // 0 comments

Over the past several months, we’ve been working with OpenPublish on many client projects, and working on improvements to the distribution as we go. We are happy to bring you the beta release of OpenPublish!

Gazette: Our New, Responsive Demo Theme

The first thing you’ll notice in the new version is the beautiful new Gazette demo theme, designed by Samantha Warren and Dave Ruse. Gazette was built on OpenPublish’s Omega base theme and is completely responsive, with support for tablet and phone breakpoints. It features a new touch-friendly photo gallery built with PhotoSwipe, allowing for easier image viewing on mobile devices.

Streamlined Content Management Experiences

We’ve bundled in some of our latest improvements from the revamp of Context UI, allowing editors to manage their page layouts by selecting blocks to add, place, or remove as necessary on major Section Fronts.

We’ve also streamlined the experience of creating related content on Articles (e.g., the associated Author) by incorporating the References dialog.

Robust Semantic Web Capabilities

We’ve updated many modules, including schema.org, and added a patch providing RDFa annotations for OpenPublish’s content types, further enhancing OpenPublish’s semantic web capabilities.

Up Next: More Apps for Publishers!

We’re working on two new apps for OpenPublish that will offer an easy way to find, install, and configure optional functionality for your OpenPublish site.

Workflow App

The Workflow App streamlines the installation and default setup of Workbench functionality for OpenPublish. It empowers editors to move content through a set of configurable workflow states, and surfaces lists of the content you’ve edited as well as a breakdown by workflow state.

Twitter App

The Twitter App makes it easy to leverage Twitter Pull by providing a configuration page where users enter a Twitter username or hashtag and specify where the resulting block of related tweets should appear.

Getting Involved with OpenPublish

Let us know what you think, and keep the patches coming in the issue queue. And hey, if you’ve developed functionality that you think should be in OpenPublish (or if you’re a third-party service provider interested in integrating your service or module into OpenPublish), we would love to talk to you. Building an app that integrates your module or service with OpenPublish can bring your work to the thousands of people who download OpenPublish every month. Contact us if you’re interested in seeing the functionality you’ve built or the services you provide in OpenPublish.

Dave has a seemingly innate ability to solve problems, anticipate potential pitfalls, and translate business objectives into functional requirements -- which is why he excels as a Solutions Architect at Phase2.

Dave has an essential ...


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
