Apr 18 2017

Fun & Games

DrupalCon Baltimore is next week and we’re so excited to get back together in Baltimore! As the official Drupal Games sponsors, we take fun very seriously and this year you can be sure to find some exciting things to do at our booth—we won’t spoil the surprise but let’s just say you’ll get to see some of us IRL and IVRL.

And if you visited us last year, you know we are all about that Free Throw game. Our undefeated Web Chef, Brian Lewis, will be there to take on any challenger. We’ve all been practicing and we are READY. Are you?

We’ll also have some of our widely enjoyed Lightning Talks during lunch breaks right at our booth! Learn something new in just a few minutes, howbowdat? Stop by our booth to check out the schedule.

Web Chef Talks

It’s been an exciting year and the Web Chefs are ready to drop some knowledge, including:

Future of the CMS: Decoupled, Multichannel, and Content-as-a-Service, presented by Four Kitchens Co-Founder and CEO, Todd Ross Nienkerk.

Supercharge Your Next Web App with Electron, presented by Web Chef engineer, James Todd.

Why Klingon Matters for Content: The Secret Power of Language, presented by our content specialist, Douglas Bigham.

Training: API First Drupal 8 with React.js and Waterwheel, a training with JavaScript engineer, Luke Herrington.

Party with a Purpose

Last—but definitely not least—you’re cordially invited to our official DrupalCon gathering, Drinks with a Mission, hosted by Four Kitchens and our friends at Kalamuna and Manatí.

Join us on April 25th at Peter’s Pour House from 6-9pm for lively conversation, free-flowing libations, and a structured forum for hashing out ideas on how to use Drupal to overcome the challenges many of our communities face in today’s national and global political climate.

RSVP here!

See you in BMD!

Oh! The kittens are coming along to Baltimore as well—four of them to be exact—and we can’t wait to reveal this year’s DrupalCon t-shirt design. We’re not kitten around. We wish we could show you right meow.

P.S. Check out the 10-day Baltimore weather forecast.

Lucy Weinmeister

Lucy Weinmeister is the marketing coordinator at Four Kitchens. She loves to share all the new and exciting things the Web Chefs are cooking up at 4K. She is forever reading a book.

Events

Blog posts about ephemeral news, events, parties, conferences, talks—anything with a date attached to it.

Mar 01 2015

#DCLondon 2015 was nothing short of epic. Drupal Camp London has in its own right become a mini-Con; with community members flying in not only from across Europe but also from the US, India, Australia, and New Zealand, it is hard to call it just a London camp!

London is the centre of the multiverse!
It was awesome catching up with old friends, making some new ones, and finding an engaged audience for my session on using empathy maps and content touchpoint analysis to develop a robust content strategy.

Bummed about not being able to catch up with everyone, though!

I’d like to reiterate my two asks from the community this March:

1) Like, follow, and spread the word on Bringing Peace Through Prosperity; it goes hand in glove with our activist nature and desire to make this rock a better place today, tomorrow and beyond.

2) Drupal Camp Tunis needs our support to bring their local community into the wider fold; the organisers at DCTunis are looking for speakers and support.

And a HUGE thank you to everyone who attended my session…

See y’all at the next Camp!


Sep 21 2014

The more closely the search engine reflects how important you consider different parts of the content and metadata, the easier it is for editors to understand how it works.
The search engine then ceases to be a mysterious black box that delivers results in an arbitrary manner. It becomes comprehensible why certain search hits land higher up and what editors can do to influence the search hit presentation positively.

Make your content machine-readable

To enable the search engine to evaluate which parts of the content are more important than others, it must be able to differentiate the various parts. Always use semantically correct elements such as h1–h6, p, and q, together with correct lang attributes.
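For illustration, here is a minimal sketch of such markup; the element choice follows the advice above, and the text content is placeholder:

  <!-- the lang attribute lets the search engine pick the right language analysis -->
  <article lang="en">
    <h1>Annual report 2014</h1>
    <h2>Summary</h2>
    <p>The board presents the results for the fiscal year.</p>
  </article>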

Always begin with how the external search engines (mainly Google) interpret your data, and let the internal search engine adapt accordingly. This gives editors the opportunity to optimize content both for Google and for the internal search engine. Optimization for Google and for the internal search engine are effectively one and the same, and when done correctly it is valuable for both.

Metadata is the weapon to beat Google

Google has the advantage of being able to interpret link structures and to learn from a multiplicity of searches when ranking content, whereas the internal search engine has the advantage of using metadata that fits your content perfectly. You should capitalize on this.

Examples of metadata for web and intranet: keywords, best bets, description, content owner, type of site, organization/unit, subject, category, etc.

Examples of metadata for e-commerce: stock status, campaign, discount, price, sales volume, product category, etc.
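As an illustration, such metadata often surfaces as plain meta elements in the page head. The name values below are our own examples; the exact fields depend on your CMS and search engine:

  <!-- illustrative metadata fields; adapt the names to your own schema -->
  <meta name="keywords" content="annual report, finance, 2014">
  <meta name="description" content="Summary of the 2014 annual report.">
  <meta name="content-owner" content="Finance department">
  <meta name="category" content="Reports">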

The metadata may be used by your internal search engine to search in, or to filter the search hit list via so-called facets. Only your imagination limits what metadata you can use and what functions you can build from it: contextual and personalized search are two examples.

But remember that metadata, too, must be managed: excessive metadata is slower to compile and makes content harder to keep updated. Only use metadata that corresponds to a real use case of importance for your business.

Taxonomy, ontology and semantic search

Sometimes you may want to establish a so-called taxonomy for the metadata attributes that define the elements within the organisation. A taxonomy aims to avoid calling one element by different names, which may create confusion and cause content to disappear. But remember that your users may not always understand your terms or your ideas. In search, it is your job to assist users as far as possible.

Power of keywords/best bets

Keywords or best bets are efficient for promoting certain pages for particular searches in the search hit list. A frequent way to apply keywords is also to present them to the user, so that clicking a keyword pulls up all related content; this gives the user a quick overview of a subject. Another way is to let the user filter the search hit list on keywords. When showing keywords to the user, you will probably want to keep a certain flexibility in which keywords are presented. At the same time, you may want users to get good hits even when they search on similar words or pure synonyms for the keywords. A good solution is to add synonyms to the keyword index.

It is advisable to limit which keywords editors may add, and to associate synonyms with them in the search engine's keyword index. Users searching on synonyms then get the right hits, without editors having to restrict their flexibility in which keywords are shown to users.

Good titles and descriptions

How hits are shown in the search hit list is decisive for the user's reaction, that is, whether they will click or not. If a hit does not distinctly signal that it is going to answer the user's question, they will not click, even if the answer is actually there. We recommend showing different titles on the page and in the search hit list: users understand the title on the page in its context, whereas in the search hit list it stands completely alone, without any context.

Different titles for pages and hit lists

We recommend letting your content personnel choose between using the title of the page and creating a special title for the search hit list. Since <title> is used by Google, the domain or name of the whole web site is usually appended to make it clear, in Google, where the page belongs. It may therefore be advisable to provide a separate metadata title for the internal search engine to use as its headline.

Example:

  • For Google result page: <title>Page title | Page domain</title>
  • For site search result page: <meta name="title" content="Page title for internal search engine">
  • For actual page: <h1>Page title on page</h1>

Sometimes you want to show a particular description of a page when it is shown in the search hit list.

Google uses the metadata description, if present, when displaying the page in Google's search hit list. Even for the internal search hit list, it may be advisable to provide a special description. We recommend this especially for overview pages that mainly contain links onward, and for pages that contain little text but mostly tables, pictures, or other data that do not display well in the search hit list.
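Continuing the title example above, a page could then carry one description for Google and a separate one for the internal search engine. The second name value below is hypothetical; use whatever field your internal search engine actually indexes:

  • For Google result page: <meta name="description" content="Opening hours, directions and contact details for all our offices.">
  • For site search result page: <meta name="search-description" content="Find opening hours and contact details for the office nearest you.">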

    May 22 2013

    We just launched Zact, one of our largest design projects to date at Chapter Three. We designed nearly 200 comps, including an e-commerce workflow, a customer dashboard that mirrors the functionality of the phone’s software, a Support section built on ZenDesk, and a consumer-facing website.

    A disruptive new cell phone provider, Zact is looking to redefine how customers purchase mobile service by making plans 100% customizable right from the phone, with no overage fees or contracts. They even give you a refund every month for any unused minutes, texts, or data.

    Helping Zact overcome business hurdles
    As a new company in a major market, Zact turned to Chapter Three to help them solve some of their immediate business hurdles online.

    • Establishing brand trust
      To overcome lack of brand recognition and to educate new customers about the key advantages of the service, we created the “Why we're different” and “How it works” sections as a way for new customers to get to know us.
    • Paying full price for the phone
      To educate customers about the long term savings of buying the phone at full price, we created an interactive Savings Calculator. The calculator allows customers to compare various plan and phone options to their current bill to show their dollar amount saved over a two year period.
    • Buying a phone online
      Without the ability to physically touch the phone customers are buying, we needed to build in extra guarantees to make customers feel comfortable purchasing a device online. We featured a “satisfaction guarantee” statement prominently throughout the site, promising a refund within 30 days if the customer did not like the phone.

    Herculean feats of UX strength
    The complexity of interactions across the site gave us an opportunity to flex our UX chops. We collaborated with Zact’s usability specialist, incorporating feedback from weekly usability tests to iteratively improve our designs.

    • Customer dashboard
      To provide the functionality of the phone’s software on the website, we designed a web-specific interpretation of the phone software that empowers customers to access and control the full breadth of Zact’s service offerings. Because the software was being developed in parallel with our web design, we adopted an agile design approach to iterate in sync with the development team.
    • E-commerce
      Our team worked with Zact’s usability specialist to implement a checkout flow pulling from best practices across the web. We delivered a solution that pushes the capabilities of Drupal Commerce and its ability to integrate with third-party systems.

    Agile design
    An agile design process was critical to the success of this project. We needed to stay flexible as requirements and scope changed daily. We met with the client daily via WebEx to review new design deliverables, which allowed us to gather feedback often and respond quickly. For any given page, we were able to explore a number of options at a high level before converging on a final solution.

    In fact, some of the best ideas on the project came directly from the client, as a result of organic discussion during those meetings. The Savings Calculator, which allows users to more visually understand how they will save money over time with Zact, grew out of a conversation we facilitated.

    Our first iterations of the Savings Calculator were pretty skeletal and didn’t quite feel right; the user had to fill out the form and click a button before seeing results. After further discussion, the client suggested that we make the actual dollar savings visible and dynamic throughout the page, so that as you interact with the form you can directly see how your savings are affected. This minor design change immediately made the page more engaging and an effective tool in communicating why Zact is a viable alternative to a traditional phone contract.

    Starting up in Silicon Valley with Drupal
    One of the most exciting and challenging parts of the project was the rapid pace of startup culture. The level of expertise and web savvy amongst Zact’s staff allowed for a flourishing partnership where we were able to push boundaries and do great work together. So far, the site has been covered by some major press outlets, including Gizmodo, Engadget, Forbes and TechCrunch.

    The site is finally live, but our work isn’t over yet. We’re continuing to evaluate and optimize the usability of the site and will continue to roll out design updates over the coming weeks. We look forward to working further with Zact and seeing how users will react to the new site.

    Mar 04 2013

    Hosting the Content Strategy Conversation in a Brand New Breakout Room

    Content matters! Bold, boisterous content matters; shy bits of micro-content matter; even the nuances of your error messages deserve careful attention! Lullabot’s Insert Content Here podcast is all about content, and every episode features a new conversation with content strategy experts from a variety of industries and disciplines. On June 3-5, Lullabot will also be sponsoring the third annual Confab, a fabulous conference that's dedicated to content strategy! Every year in friendly Minneapolis, Minnesota, Confab brings together a host of experts to discuss the big issues and tiny details of content strategy.

    Why Attend Confab?

    One critical issue has wiggled its way into quite a few episodes of Insert Content Here: the often-strained relationship between content teams and development teams. At Confab, Insert Content Here's host Jeff Eaton will take a closer look at that contentious relationship and offer an hour of feud-soothing tools. In Hugging the Hatfields: Turning Cantankerous Development Teams Into Allies, Eaton will teach content creators to understand the developer's world and speak their language, ensuring that critical editorial needs are heard. He'll also explain how content strategy's tools can make life easier for both disciplines by alleviating developer pain points.

    In addition to sponsoring and speaking at Confab, Lullabot is excited to debut the “Insert Content Here Breakout Room!” (Always fans of novelty, we're proud to note that it's the longest-titled breakout room in the conference’s history.) Fittingly, Jeff Eaton’s session will be held in the Insert Content Here breakout room, as well.

    See You There!

    Last year, Confab sold out quickly! If you're responsible for content in your business, or you're a stakeholder who needs a roadmap for sustainable content, reserve a spot ASAP! Be sure to make room in your schedule for Eaton’s Hugging the Hatfields: Turning Cantankerous Development Teams Into Allies, and stop by our breakout room afterwards to say hello. We look forward to seeing you!

    Feb 17 2013

    Listen online: 

    • insert-content-here-10.mp3

    Jeff Eaton and Relly Annett-Baker discuss the difference that carefully crafted microcopy can make to users; explore the challenge of bringing writers, designers, and developers together; and plan for future hijinks.

    Release Date: February 17, 2013 - 4:00pm

    Length: 41:37 minutes (16.33 MB)

    Format: mono 44kHz 54Kbps (vbr)

    Feb 14 2013

    Writing engaging, reusable microcontent is tricky business. Whether you need titles, tweets, or summaries, consider the destination channels and the workflow.

    Writing short bits of user-facing text -- microcontent -- is no picnic. Coming up with a punchy, attention-grabbing tweet is tough enough; writing a memorable 50 character title for a breaking news story can stress out even a creative wordsmith. It's like the writer's equivalent of Fitts's Law: the smaller the target, the narrower the margin for error.

    In heavy-duty, reuse-oriented publishing systems, it's common practice to save several variations of an article's title and summary text. That gives writers some breathing room in more forgiving display contexts, but ensures they don't blow past hard limits for the short stuff.

    We're currently working with a client on the nitty-gritty details of their new content model, and we're trying to iron out the best mix of fields to provide flexibility without overloading content authors. How many variations are enough? Karen McGrane's advice is simple and to the point: "As many as the writers will fill out, but no more." We plan to do some experiments with simple prototype interfaces to see what they're comfortable with, but before proceeding I did a quick review of the microcontent landscape to better understand the constraints of popular formats and channels.

    From longest to shortest, here's the rundown:

    • App.net post: 256 chars
    • Twitter card summary text: 200 chars
    • Facebook og:description text: 160 chars
    • Google page description: 155 chars
    • Tweet: 140 chars
    • Tweet with link: 116 chars
    • Subject line in iOS Mail.app: 45 chars

    Other than the sharp 70 character dropoff between a tweet with a link and an email subject line, there's no easy boundary between short and middlin', but we can definitely see where we'll run into some constraints. We need something that won't be cut off when sending out email alerts, we want to be able to fit some kind of descriptive text into a tweet along with a link, and we'd like to squeeze a bit more text into channels that support it, like Google search results and Facebook link sharing. We also need to be sure that the various permutations are flexible enough to serve the primary web site's design needs.

    So, how does NPR do it?

    When analyzing how organizations currently handle this stuff, NPR's COPE API is usually the first place to go. Their internal content model is well-documented and available to the public, so it's a good choice.

    Seamus, NPR's CMS, exposes three variations of every story's title, as well as two teasers. There's a primary headline, a subtitle that's supposed to be a one-sentence description of the article, a 30 character short title, a teaser and miniTeaser. Their API doesn't list any specific length limits for the teasers, but it looks like standard ones run around 400-500 characters while miniTeasers weigh in at 100-120 characters. (Interestingly enough, they use 'Slug' to capture the name of the regular show or feature that a story came from, rather than the unique identifier/name for the story itself, but that's a tangent.) What WordPress and many other CMSs call a slug appears to be generated from an article's Short Title, but depending on how much of a stickler you are, it could be considered a fourth variation of the title.

    With those different building blocks in mind, we can take a look at the best matched channels for each story's microcontent. Short Titles, as the teeniest unique bit of information an article possesses, are the best (perhaps only) option for email subject lines and URL slug generation. The distinction between headline and subtitle is a tricky one: it looks like a lot of stories don't have subtitles, though, so I'd be nervous depending on them.

    The uncomfortable part comes when you get into the slightly longer microformat scenarios. Twitter cards give you a full 200 characters to work with, for example, but standard NPR teasers are almost always too long. The best bet is probably to use the standard title and URL as the standard social media post, then include the full title and miniTeaser in the Twitter Card and Facebook-leveraged Open Graph meta tags. (When squeezed for space, say when the date or a show/feature's name must go along with the social post, Category + Short Title + URL is probably a good bet for tweet text.)

    It's worth remembering that the summary and title meta tags used by Twitter Cards and Facebook OpenGraph support aren't just for an organization's own social media posts. They'll get pulled in automatically whenever a user shares the link themselves; it's a way of ensuring that some well-crafted editorial content gets carried along for the ride even if the user writes their own tweet or post text to go with the link itself. With Twitter Card support, a well-crafted, metadata rich story could easily squeeze in the name of the show/feature, the short title, a link, as well as the full title and miniteaser. Photos and video players can even be worked in, but that's another ball of worms.
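    As a sketch, the head of a story following that recipe might look like this. The tag names are the standard Twitter Card and Open Graph ones; the content values are placeholders standing in for the NPR-style fields discussed above:

      <!-- Twitter Card: full title plus the ~100-120 character miniTeaser -->
      <meta name="twitter:card" content="summary">
      <meta name="twitter:title" content="Full story title">
      <meta name="twitter:description" content="The 100-120 character miniTeaser text.">
      <!-- Open Graph equivalents, used by Facebook link sharing -->
      <meta property="og:title" content="Full story title">
      <meta property="og:description" content="The 100-120 character miniTeaser text.">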

    Anyone else?

    There isn't much public documentation around it, but friends who've talked to the New York Times note that the Times maintains four variations of each article's title: Long and short, with 'colloquial' and 'keyword-optimized' versions of each. URL slugs can be generated from the short-keyword-optimized version, the short colloquial version can be shown in small sidebar lists, and the full colloquial version can be shown as the actual page headline. I can see the value, but I'm curious how many teaser/summary variations they produce as well.

    Another client of ours has developed a lightweight COPE-style API for content reuse, and decided to go minimalist. They support only one standard title; auto-generate their URLs from a combination of topical tags and post IDs; and treat social media posts as a separate writing task, with no pre-written article summaries. It allows their writers to fire off new stories with little time spent on extensive metadata and microcontent, but it also requires more manual labor by their social team: as with most systems, it's all about the tradeoffs that work for a given organization.

    Preliminary conclusions

    Beyond the actual character limitations and the need for a smooth editorial workflow, clarity is a real concern. Lots of distinct fields don't just mean lots of copywriting work; they also increase the potential for accidental misuse of a field. It's easy, for example, to put a catchy tease instead of a factual description in the short summary field and assume that it will only be displayed on its own (rather than with a full title). However, that could make a social media post automatically "assembled" from several short fields feel awkward. Making sure there are clear distinctions in purpose between the different fields is key.

    After talking to the editorial team and reviewing a few of the existing options, I'm leaning towards the following recommendation:

    • A 40-50 character Title field that serves as the short title, and the source text for an auto-generated URL slug.
    • A 100 character Colloquial title that's used when the article is displayed on its own page, and is also included in the OpenGraph/Twitter Card meta tags. This can default to the standard (short) title, but editors should have the chance to write a longer one if they want. When available, it would also be short enough to squeeze into a tweet.
    • A 155 character summary field that's short enough to include in most of the standard description and summary metadata fields for search engines, social networks, and so on.
    • A longer 200-400 character teaser that's auto-generated from the first paragraph of the article's text, but can be overridden by editors if they want extra control.
    • An optional "excerpt" field that's an actual quote from the meat of the article, intended for use as a pull quote on the full article page. It can also be used as a supplement to the teaser on certain landing pages when a high-profile article is being promoted.

    Titles and summaries should work in combination or independently, but the optional excerpt would always be used with some explanatory text like the summary or full body of the article. That setup would give them just two required fields -- the short title and the 155-character summary -- and allow everything else to be automatically generated or hidden by default. We'll see how it goes.
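    To make the mapping concrete, here's a rough sketch of how those fields might land in a page's head; this is our illustration of the recommendation, not a finalized spec:

      <!-- the short title feeds the slug, email subjects, and the title tag -->
      <title>40-50 character short title | Site name</title>
      <!-- the 155 character summary doubles as search and social description -->
      <meta name="description" content="The 155 character summary.">
      <meta property="og:description" content="The 155 character summary.">
      <!-- the colloquial title, falling back to the short title when absent -->
      <meta property="og:title" content="The 100 character colloquial title.">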

    It's nitpicky business, these titles and summaries, but with microcontent the margin for error is slim. In the meantime, I'm curious to hear how other content modeling teams are handling these challenges. Any other examples of interesting breakdowns and how they're working for the teams that use them?

    Feb 07 2013

    Design Principles for a Multi-device World

    Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction.
    — Albert Einstein

    I would argue that a huge part of that genius Einstein refers to can be found in clarity of purpose and principles.

    We all wind up in those situations where we're focusing on technical details, implementation and points of process, and missing the bigger picture. I confess I've been there far too often. When we find ourselves in those situations as designers, it's important to have some guiding principles we can remind ourselves of and even share with our team and colleagues. Guiding principles can help get everyone on the same page and make it easier to work through the details of process and implementation. They're no panacea, but they've certainly helped me maintain my sanity.

    Below I've documented some of my emerging, fundamental design principles. These principles have helped guide me in this brave new world of a bazillion devices and amazing possibilities. Hopefully they'll be helpful to you as you hone your design process, document your own principles, and face challenges along the way.

      The mobile web is important!

      Secret: 98% of the following three paragraphs I learned directly from Luke Wroblewski. If you need help making the case for a focus on mobile, read his writing, see him speak, get in touch with him!

      Why care so much about mobile in our design process? By Q1 of 2012 Apple released numbers that showed there were now more iPhones sold every day than babies born in the entire world (300k babies to 402k iPhones)! That was just iPhones, there were actually 562k iOS devices (which includes iPod Touch and iPad) sold each day at that time. By Q1 2012 we'd also reached 700k Android devices activated per day, 200k Nokia smartphones and 143k Blackberry devices. According to Morgan Stanley Research, by 1990 there were 100M+ desktop internet users. By the early 2000's we had reached 1B+ desktop internet users. Today that number of desktop internet users is only slightly higher than it was in the early 2000's, yet the number of mobile internet users is now 10B+! The number of mobile devices on the planet surpassed the number of humans on the planet nearly two years before Morgan Stanley's research predicted it would, which means mobile is not only ubiquitous, it's growing faster than our expectations.

      But wait, there's more! In Q1 of 2012 Facebook announced they were seeing more people accessing Facebook on the mobile web than from ALL of their top mobile native apps combined! Facebook also released data suggesting that mobile web and app users invested noticeably more time on site than all of their Desktop web users combined. In Q3 of 2011 Nielsen US released their research on mobile users showing that of the millions and millions of mobile users across all platforms, significantly more were using the mobile web as opposed to native apps when given the choice (57M vs 49M).

      Create once, publish everywhere.

      Editorial teams need a singular, simple workflow to produce content once that then gets distributed efficiently and effectively to all device types. Editorial teams need to be focused on content quality, NOT things like device level placement, layout and aesthetic style. When developing your content model, model content types on core editorial and business needs, with an eye towards multi-channel reuse. You can then use those building blocks in the design process. This will ensure that editors aren't forced to become "designers by default". Ideas about form, structure and presentation that create new and more complex processes for editorial teams should be viewed with skepticism and caution. Anything that slows the editorial process, without adding significant content value, damages the core value in your product. A COPE approach (Create once, publish everywhere), with a consistent content model and simple data feeds that can be used by web-based widgets, apps, and business partners, helps facilitate rapid experimentation and innovation. It ensures that experimentation can happen at "the edges" without requiring foundational infrastructure changes.

      Editorial workflow is important!

      It's very easy for design teams to become focused on the consumption experience for content on a website, while completely ignoring how said content is created, reviewed, edited, curated and published. Great consumption experiences begin with great creation experiences. Spend time with the authors, reviewers, editors and publishers early in your design process. Watch what they do. Learn about the content they're producing. Gain an understanding of things like volume (how much of it do they produce), frequency (how often do they produce it) and average length (how much content makes up a single piece) for every type of content they're producing. As a designer, you can't create innovative ideas for new components and interaction methods without really understanding the content, and the best way to understand the content is to spend time with the people who create and nurture it.

      The second part of this principle is that bad or painful editorial workflows create content problems. Also, eliminating editorial workflow pain points makes happy clients. You may not be able to solve all the problems of an editorial workflow process as a designer, but you can play your part in the process by treating it as important.

      Release early and often.

      "Write drunk, edit sober."
      — Ernest Hemingway

      Always err on the side of the simplest viable product for each release (see the KISS principle as well as Getting Real). Make quick decisions, make something, find out how users interact with it and what they're valuing. Discover pain points. Adapt. In a competitive marketplace we need to iterate quickly and fail gracefully. Failing is necessary for innovation, and we can't fail till we try something. Create a culture of rapid experimentation as opposed to analysis paralysis.

      Make existential design decisions based on data, not assumptions.

      By "existential design decisions" I mean decisions about whether a particular piece of content or component should exist on the screen. The basic rule here is don't remove things from a mobile experience because you assume mobile users don't want it. Conversely, don't add additional elements to a desktop experience because you assume those users want "enhanced experiences." Begin by delivering one content model and architecture across all devices, and then let real user data drive device specific optimization and customization.

      Mobile users will tell us what they want as they use things (or don't use things). Their interaction patterns, values and preferences can guide optimization and customization, but not until we have them. We need to release something and watch people use it before we form assumptions (see the earlier release early and often principle).

      Begin with the basic question of "Is this valuable for users?", not "Is this valuable to users on a particular device type or screen size?". While we may make some assumptions about hierarchical discrepancies from one device type to another, always start from the assumption that if it's important to users, it's important to ALL users.

      It's worth noting that gathering web-based metrics about the behavior of mobile users is easier than logging and tracking the detailed interactions of mobile app users. The mobile web experience can lead the way for us, providing the data we need to understand user values and interactions. Mobile users continue to defy expectations as to what they will do and want to do on their mobile devices. A common frustration for mobile web users happens when assumptions are made about what mobile users do NOT want or need from a desktop experience. It's extremely important that we not limit mobile users based on these assumptions. Creating tailored experiences with unique content models and components for different devices can create significant user experience problems. For example, let's imagine Google indexes the desktop version of a website, and provides links to said content on mobile devices based on a search. If those mobile devices then redirect to a tailored site with a limited content model, editing out the content that was searched against, confusion and user frustration ensue. We must never dumb down or limit a desktop experience and call it a mobile experience!

      Design from content outward (not device type or display inward).

      Focus first on delivering the best and simplest possible experience of a complete content model across all devices. Design should begin by uncovering the most valuable type(s) of content, and designing an experience for those. All subsequent displays and views into that content should follow. For example, a news site could begin by determining the most valuable type(s) of news content they provide to their consumers. A design team can then begin researching, wireframing, prototyping and brainstorming around the consumption experience of a representative piece of content from each of those types. Once that is fleshed out, the focus can shift to the various structural channels through which parts of that content type are displayed (e.g. a homepage, a top level category landing page, etc.).

      Nothing's more important than knowing what's important.

      "Design is the conscious effort to impose a meaningful order."
      — Victor Papanek

      Design is about helping people understand what's really important and meaningful. That's beautiful. Embrace it! Discover and understand the relative importance of each type of content, the pieces that make up that type of content, and the channels through which that content flows. You can't begin to apply visual hierarchy in design without first knowing the content hierarchy. Design decisions should begin with broad hierarchy evaluations. Develop a components list for each screen (a list of the discrete pieces or chunks of content that exist on the page) and assign a relative hierarchy (e.g. a 1, 2 or 3) to each component in the list. After all that, you can begin to work things out visually with placement, proportion, and style.

      Design mobile first.

      Once again, Luke Wroblewski has shined a spotlight on this and helped me understand it.

      Designing "mobile first" means that we embrace the constraints of a tiny screen early in our design process. We evaluate our content model, components list and hierarchy first with that tiny screen in mind. Once we've established that, we then ask if there are ways that hierarchy changes or interactions can be enhanced for users with screen sizes and bandwidth capabilities beyond mobile. The constraints of the mobile screen size help enhance focus during the design process and keep design teams more closely aligned with whatever the core product value is. It's like packing first in a carry-on suitcase to discover what you REALLY want to bring. Often, you'll find that those extra things you put in your larger suitcase never get worn or used.

      This does NOT mean that the visual experience can't be impressive. Remember, in many ways mobile devices have MORE capabilities than what's common among desktop devices. Things like device positioning, motion, location detection, multi-touch, gyroscope, and audio, video and photo input are common among mobile devices. Design teams may actually create more innovative and rich experiences focusing on mobile first during their design process.

      Optimize, then customize.

      After we actually make and release something, and have real user data to drive the next round of iteration and innovation, we need a way to prioritize that iteration. When both optimizations (e.g. technical solutions to serve up smaller file sizes or more appropriate ad sizes) and customizations (e.g. ideas about changes or enhancements to hierarchy, content or features) are being considered, optimizations should almost always be prioritized over customizations. Great experiences come from the ease and speed with which users can access, interact with, and contribute to content. Mobile users continue to defy our assumptions about what they want to do on mobile devices, but they almost always want to do it faster and with greater ease.

      Create and maintain a visual language (NOT a myriad of distinct designs).

      Design teams need to produce a flexible visual language that can provide stylistic guidance across a myriad of screen sizes. There are some formal processes and design tools that can help you do this (e.g. element collages, style tiles, web style guides), but the core principle is to establish a visual language that allows for quick design decisions across all breakpoints. This approach reinforces the "release early and often" principle above. Having a style guide and other tools to guide visual decisions, rather than a collection of concrete designs tied to specific device widths and scenarios, means that new experimental designs don't have to chart their own course. A design process that takes a tailored approach, providing a myriad of custom static comps, can dramatically limit your ability to respond and innovate quickly.

      Feb 01 2013

      Jeff Eaton and Drupal initiative lead Greg Dunlap discuss the history of digital transformation at the Seattle Times, the difficulties of cross-site content sharing, and the importance of cross-discipline communication.

      Release Date: February 1, 2013 - 2:30pm

      Length: 39:20 minutes (16.37 MB)

      Format: mono 44kHz 58Kbps (vbr)

      Jan 04 2013

      Jeff Eaton and Karen McGrane review 2012's best articles, presentations, and conversations about structured content, responsive design, and more.

      Mentioned in this episode:

      • Vexing Viewports, by Peter-Paul Koch, Luke Wroblewski, Stephanie Rieger, and Lyza Danger Gardner

      Release Date: January 4, 2013 - 2:30pm

      Length: 48:42 minutes (16.3 MB)

      Format: mono 44kHz 46Kbps (vbr)

      Dec 21 2012

      Jeff Eaton and Sara Wachter-Boettcher discuss her new book Content Everywhere, the benefits of cross-discipline communication, and the need to build tools for humans.

      Release Date: December 21, 2012 - 10:00am

      Length: 38:32 minutes (15.27 MB)

      Format: mono 44kHz 55Kbps (vbr)

      Dec 07 2012

      Jeff Eaton and Deane Barker discuss the evolution of the CMS, its impact on content strategy, and the ins and outs of content modeling.

      Release Date: December 7, 2012 - 11:30am

      Length: 51:34 minutes (25.21 MB)

      Format: mono 44kHz 68Kbps (vbr)

      Nov 23 2012

      Listen online: 

      • insert-content-here-05.mp3

      Jeff Eaton, Marc Danziger, and Andrew Berry discuss NBC's Juno Project, a new take on the corporate knowledge base that combines sales and project management tools with long-term organizational history. Topics covered include workflow challenges, the importance of iterative prototyping when dealing with complex content, solving "findability" problems, and more.

      Release Date: November 23, 2012 - 10:00am

      Length: 45:28 minutes (25.64 MB)

      Format: mono 44kHz 78Kbps (vbr)

      Nov 09 2012

      Jeff Eaton and Karen McGrane discuss the rise of mobile; the challenges of transitioning to reusable content; and Karen's new book, Content Strategy for Mobile.

      Release Date: November 9, 2012 - 10:00am

      Length: 31:03 minutes (16.48 MB)

      Format: mono 44kHz 74Kbps (vbr)


      Oct 26 2012

      Updated Schedule: More Podcasts Means More Awesome

      This Friday, we've got two podcasts hitting the streets! Insert Content Here delivers an interview with Erin Kissane, long-time editor of A List Apart and the co-founder of Contents Magazine. Meanwhile, The Creative Process talks to Alex Cornell, author of Breakthrough: Overcome Creative Blocks and Spark Your Imagination.

      In case you haven't been paying close attention, we recently split the former "Lullabot Podcast" into 3 new podcasts. The Drupalize.Me Podcast is probably most similar to the old podcast, focused on keeping up with Drupal and the Drupal community. Insert Content Here is our new podcast talking about content strategy and the "content" part of content management. And The Creative Process talks with creative professionals about the process of turning ideas into things.

      Thanks to the great feedback we've received about both podcasts, we're updating our schedule. Starting in November, The Creative Process will be published twice a month on Monday, and Insert Content Here will be coming out twice a month on Friday! The Drupalize.me podcast will keep its current schedule on alternating Fridays. Confused? Subscribe to the "All Lullabot Podcasts" feed and you won't miss a thing!

      Oct 26 2012

      Jeff Eaton and Erin Kissane discuss the past and future of web standards, new experiments in web publishing, and the challenge of predicting the future.

      Release Date: October 26, 2012 - 9:00am

      Length: 35:20 minutes (19.78 MB)

      Format: mono 44kHz 78Kbps (vbr)


      Oct 12 2012

      Jeff Eaton and Kristina Halvorson of Brain Traffic discuss content strategy trends, the art of stakeholder wrangling, and proper bourbon pairings for your content audit.

      Release Date: October 12, 2012 - 9:00am

      Length: 36:44 minutes (19.16 MB)

      Format: mono 44kHz 72Kbps (vbr)

      Oct 09 2012

      In the first episode of Insert Content Here, Jeff Robbins and I chatted about the temptation and the danger of the "Dreamweaver Field" content model. When content types are just wrappers around giant chunks of hand-formatted HTML, editors have lots of flexibility but it's all but impossible to repurpose the hard-coded content for new designs and publishing channels.

      In presentations, articles, and her upcoming book, Content Strategy for Mobile, Karen McGrane describes the problem as a war of "Blobs" versus "Chunks." The challenge is figuring out how to decompose a site full of inflexible HTML blobs into discrete, bite-sized fields. There's no magic bullet (a model that works for one project can fail miserably for another), but over the past several years we've accumulated a few useful rules of thumb for "deblobbing" a site's content.

      The Basics

      Don't skimp on the content inventory and auditing process. Figure out what's there, what's going to be tossed, and what you want to have on the site. This is step zero, really: the modeling process is infinitely harder if you're dragging around piles of HTML that don't match what you're trying to build.

      Clump similar content. If your existing site doesn't have discrete content types, figuring out which pages are similar to each other is the next stage. Product reviews, staff bios, press releases, blog entries, portfolio slideshows… You know the drill. Remember to look for pages and content types that are really composites of other, smaller units of content. Often, some of the most complex pages and content types can be implemented as rule-based or curated collections of smaller, more manageable content types.

      Look for common "chunk" types. Once you've grouped your blobby content types into similar pools, zoom in and look for patterns that are unique to each content type. These are potential candidates for dedicated fields. Some of the common field types we encounter include:

      • Links to related content
      • Links to downloadable files and embedded media that occur at consistent locations
      • Publication or event dates
      • Pull quotes, hand-written taglines, author bios, and summaries
      • Business and event addresses
      • Geographical locations and maps
      • Lists of information like features or rules and requirements
      • Ratings, prices, and product codes

      Most CMSs support multi-value fields that can be used to model repeating elements like feature lists or multiple file attachments. Be sure to note which elements occur once, and which ones repeat.

      Rinse and Repeat. Once you've broken things into multiple content types and identified the discrete fields on each one, look for overlaps. Are there several content types that share the same list of fields? Consolidating them into a single type might simplify things. Is there one "Godzilla" content type with dozens and dozens of fields? It might really be several types that should be teased apart. The first pass of a content model is a lot like the first draft of an essay: there are always rough edges and awkward parts that need work.

      The Tricky Bits

      After identifying all of that easy stuff, large and complex sites usually have quite a few ugly blobs that still need to be broken down.

      Identify composite content. Sometimes, elements of one content type need to be broken out into their own sub-content-types, with simple parent-child relationships connecting them. Galleries that contain multiple photos, albums that contain multiple songs, and curated pages that include teasers for other content are common examples. If several content types in your model contain the same cluster of fields (like photo, caption, byline, and link), consider splitting out the cluster into its own dedicated content type. Treating those scenarios as relationships between discrete elements can often simplify complex models.

      Look for common formatting complexities. If you have wireframes or existing pages, look for complex visual formatting around certain elements, in particular the stuff that requires lots of hand-written HTML to implement in a "content blob." Comparison tables are a common offender here. Breaking these out into dedicated fields whenever possible can help prevent massive pain when a piece of content needs to be displayed differently in new channels.

      Watch for design elements that change based on context. If you're building a responsive or adaptive site, or have access to designs for mobile apps or other output channels, keep an eye out for elements that appear differently or conditionally based on breakpoints, target device, and so on. It seems obvious, but controlling small elements is infinitely easier when they're broken out as discrete fields.

      Plan for searching and filtering. Try to identify as many different filtered lists of content as possible. Faceted search screens, topical landing pages, author-based blogs, product lists, and so on can't be built efficiently without the right data. If the lists and search indexes that you need don't correspond to fields you've already broken out, remember to add additional ones for the required metadata.

      Isolate the crazy. Inevitably, complex designs end up requiring "helper" content that doesn't seem to fit the well-understood content types the site's stakeholders imagine. Slides for promotional rotators, free-floating promotional microcontent for landing pages… These tend to be highly variable and often need the kind of raw-HTML flexibility that we're trying to avoid. Isolating them in their own content types and living with the cordoned-off craziness can help simplify models with overloaded, field-heavy primary types.

      Recognize when markup is good enough. Despite all the talk about the dangers of blobs, it is possible to go too far. Replacing every HTML div and span with a dedicated field simply to avoid raw markup is overkill, and can easily result in 'Edit Screens of Doom.' Modern WYSIWYG editors generally support plug-in systems, and developing a button to "insert caption here" or "style paragraph as warning" can be a simpler solution. This is where I repeat the warning: There's no perfect content model, only the one that works for your project.

      Test the Model

      The long-term impact of a bad model on a site's maintainability can be frustrating, but it's also impossible to predict every future application the content will be used for. Iteratively testing the model against real-world content and potential applications is critical.

      Put real content into the model. It seems obvious, but it's easy to go down the structural rabbit hole and forget the existing pool of content. Circle around frequently and ask, "How does the content we have in hand fit into these content types and fields?" Look for odd mismatches, required fields that the existing content will leave unpopulated, and so on. Sometimes, the design and the model have to change for practical reasons. Other times, clients or your team will have to update the content to close the gap.

      Plan for three channels. When building a model (or a software API), it's easy to imagine you're creating a reusable system while unintentionally baking in assumptions that make real reuse difficult. If you need content that will adapt to reuse in new channels, be sure that you keep at least three in mind -- think of them as user personas for the model. Desktop web, small-display devices, and rich HTML newsletters are common answers for some businesses. Even if you're only building one of them at first, proposed approaches can be compared against them to ensure you aren't painting yourself into any corners.

      Social sharing is a publishing channel, too. Twitter and Facebook can automatically embed headlines, summaries, and preview images when users paste one of your site's links -- if you provide the metadata that they're looking for. If your model doesn't account for those, it will be much tougher.
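      As a minimal sketch, the link-preview metadata in question is a handful of standard Open Graph and Twitter Card tags; the values here are placeholders:

        <meta property="og:title" content="Article headline">
        <meta property="og:description" content="Article summary">
        <meta property="og:image" content="https://example.com/preview.jpg">
        <meta name="twitter:card" content="summary_large_image">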

      Let real users work with it. If you're using a web framework that allows rapid creation of a content model before the full site is finalized, or you can produce wireframes of some sample content input and editing screens, get user feedback sooner rather than later. The people who spend their time creating and maintaining the content can often spot problems and inconsistencies that would otherwise remain undiscovered until launch.

      No Rules, Just Lessons

      None of the above ideas are hard-and-fast rules, obviously. At Lullabot, we've spent years building out complex sites (and the underlying content models) for media publishers, government agencies, corporate intranets, ecommerce sites, and more. And yet, every new client comes with surprises and challenges.

      In the coming months, I'll be digging deeper into each of these ideas with follow-up articles and interviews on Insert Content Here. In the meantime, what useful heuristics do you use when breaking down ugly "content blobs" into reusable chunks? Feel free to chime in with comments!

      Oct 01 2012


      Recently, I revisited the publishing system we built for Thomson-Reuters' London Olympics coverage; one of the features I reviewed was the taxonomy processing aspect of the content ingestion engine, which we built to take in content feeds from Reuters' wire service content management and routing system. When you are in the weeds of building out a system, it's hard to appreciate the complexities of what you are building. It was illustrative to return to the site months after we launched it and gain a deeper appreciation for the challenges we faced in building out the publishing engines that processed thousands of assets per day throughout the duration of the games.

      The application of the taxonomies was a multi-layered process that progressively applied terms to the article nodes in several distinct steps:

      • Sports codes (example: "Athletics", or "Basketball") were parsed out of a series of tags in the article XML and matched against Sport records pulled from the third-party Olympic data provider.  When the Sport records were imported during development and the database populated with Sports and Events, the standard Olympic codes were included, and it was these that were mapped to.
      • In some cases, the codes were mapped instead against a local table of alternative Sport codes used internally by photographers to ensure that these alternative publishing paths would result in equivalent mappings.
      • Events were also included in the tags within the XML, but not always.
      • The slugline was crafted to include sport, event, and match information, although only the match information was parsed out.
      • Athlete associations were applied by passing the text elements - title, caption, article body, summaries - through Thomson-Reuters' OpenCalais semantic tagging engine, and pulling 'people' terms from its library of terms. If there were any matches between the person references returned and the Athlete records created from the Olympics data, then associations with those Athletes were applied.
      • Countries were NOT pulled using OpenCalais, although those mappings were available - the concern was that there would be far too many false-positives applied for Great Britain, given that nearly every article contained references to the host country.  Instead, if Athlete associations were obtained, we queried the Athlete record for the Country with which they were affiliated, and applied that reference to the article.

      Although there were aspects of this process that were worked out as requirements changed and evolved (in particular, it was discovered relatively late that photographers were using an alternate standard for sports tagging), the system was ultimately successful because we had mapped out the process well before beginning development. We understood the complexities inherent in Reuters' content model.

It seems elementary that these things would be worked out ahead of time, but requirements evolve, and sometimes you just have to roll with the changes to ensure the success of the project. What makes that possible is a solid content strategy.

      Data Informs Process

We had many sessions where we discussed potential ways of mapping data into the system, and a number of alternatives were rejected because there were potentially too many holes in the processes of managing the data. Make sure you get a look at production-level data as soon as possible in the project, and make sure your technical leads have a chance to work through any potential issues with decoding and processing it. If you can see ahead of time that there are basic compatibility issues between what should be relatable data points in different third-party data feeds, then there is still time to get alterations made to the data or, failing that, to devise work-arounds, alternative mappings, or transformations using contextual clues in the data.

Additional processing steps can be applied to handle systemic issues, as we did by using OpenCalais to gain athlete associations before using athletes to create country associations. Semantic tagging can also handle cases where you know a key piece of information might be missing from the original article, but an educated guess can be made by seeing what subjects and terms the parsing pulls out. For example, if a set of articles is missing top-line mappings to sections within a larger news site, OpenCalais or a similar technology can tell you where an article produces its strongest associations: references to sports teams and athletes indicate a sports article, while references to members of Congress place it within a politics vertical.
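As a rough illustration, a call to a semantic tagging service from Drupal 6 might look something like the sketch below. Treat the endpoint URL, header names, and response shape as assumptions based on the OpenCalais REST API of that era, not as the code behind the project described above.

function example_extract_people($text) {
  $headers = array(
    'x-calais-licenseID' => 'YOUR_API_KEY',  // placeholder credential
    'Content-Type' => 'text/raw',
    'Accept' => 'application/json',
  );
  $response = drupal_http_request('https://api.opencalais.com/tag/rs/enrich', $headers, 'POST', $text);
  if ($response->code != 200) {
    return array();
  }
  $people = array();
  // The JSON response is keyed by entity URI; each entity carries a _type.
  foreach (json_decode($response->data, TRUE) as $entity) {
    if (isset($entity['_type']) && $entity['_type'] == 'Person') {
      $people[] = $entity['name'];
    }
  }
  return $people;
}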

Sometimes it's simpler to accept that weaknesses in the data can be more easily handled by empowering the client with smart tools or smarter business processes. If the problems can be isolated to an easily identifiable subset of content, those articles might be routed to a group of editors whose purview includes repairing the missing metadata. If you know there are systemic weaknesses in how taxonomies are applied in general (a certain percentage of articles will, as a matter of course, be missing terms), you can work more sophisticated taxonomy management tools into the budget to give editors more immediate access to the taxonomy. And if your stakeholders decide that the incidence of bad data is best solved by leaning on their editors and writers to use their internal taxonomies consistently and correctly, they'll start laying down the law as soon as you determine with them that this is the most efficient and promising route to better online content.

      Try to Break the Content

Key to all of this are the conversations you have with the client, where you work through their publishing workflows, their sources of data, and the intersections between the two. This needs to go beyond gathering requirements and documenting user stories: you need to try to break the system. Brainstorm worst-case scenarios. Let the client talk about their worst fears regarding the system. Poke holes in their ideas. Let them challenge you on yours, and be prepared to walk through any implementations you have in mind. You'll be much better prepared for the unexpected if you narrow down the possibilities for what it might be.

Solutions Architect Joshua Lieb has a deep knowledge of web development, content management, and programming thanks to his more than 15 years of web development experience in the government, publishing, and NGO sectors. His proven ...

      Sep 14 2012
      Sep 14

      Listen online: 

We've got rhythm! We've got music! In the inaugural episode of Insert Content Here, Jeff Eaton and Jeff Robbins discuss the meaning of Content Strategy, reminisce about the dark days of Dreamweaver, and introduce a dazzling new jingle.

      Release Date: September 14, 2012 - 2:00pm

      Album:

      Length: 38:59 minutes (20.93 MB)

      Format: mono 44kHz 74Kbps (vbr)

      Aug 31 2012
      Aug 31

      Listen online: 

      Jeff Eaton, Addison Berry, Jared Ponchot, and Jeff Robbins say farewell to the Lullabot Podcast and discuss the three new podcasts which will be spawning in its place.

      Release Date: August 31, 2012 - 9:23am

      Album:

      Length: 33:03 minutes (12.82 MB)

      Format: mono 44kHz 53Kbps (vbr)

      Jul 10 2012
      Jul 10

Posted Jul 10, 2012

I recently read an inspiring presentation on the emergence of specialists in the field of web design. As the field grew and technologies progressed, specialists emerged to manage the wealth of information and knowledge about the field. One such specialization is User Experience, and within that lies the still relatively undiscovered field of Content Strategy.

While it may not sound like something you want to devote budget, time, or staff to, developing a content strategy is essential to the success of any digital project, because content is the fundamental unit of communication with target audiences. A content strategy can help define the purpose of your content and the method of production, and surface any gaps or areas of weakness in providing users with the information they are seeking. The burden of successfully developing, organizing, and presenting content in an effective way does not rest solely on the client, or even the user. It should be a critical component of any project, folded into a larger strategy that assesses the meaningfulness of the content.

      The Difference between Information Architecture and Content Strategy

I believe there is an important distinction to make here when thinking about Information Architecture and Content Strategy. Information architecture defines the structure of the content; essentially, it's the blueprint of the house that is to be built, outlining where the kitchen is and where the light switch is placed. It organizes the house into rooms and sections and areas.

On the other hand, Content Strategy defines the approach to the "furniture" of the house, the items that populate the structure. While furniture can come in different shapes and sizes, it is all threaded together by the design and feel of the house, the overarching "theme." Content Strategy aims to create this overarching theme and the guiding principles by which content producers can quickly create and disseminate meaningful, relevant content to their readers. While Information Architecture focuses on the organization of information and its location, Content Strategy focuses on the production, tone, and curation of content, and aims for consistency and usefulness.

      Getting Started

      If you're migrating existing content from a legacy site into a brand new website, take the opportunity to do some "content spring cleaning" which will go a long way toward improving the usability of your site. Here are some tips on getting started:

      • Take inventory of your content in a Content Inventory.  How much content do you have?
      • Create a Gap Analysis. What information is missing?  Where are there gaps in information?
      • Use analytics to generate a Production Strategy. What's the most popular content and how can you generate more of it? What is the least popular content and why? What's the desired workflow for publishing content?

Showing that your content is well cared for and carefully curated will go a long way in your users' eyes, and will make the launch of a new site, and the maintenance of an existing one, much easier.

      In her role as a Solutions Analyst, Dida brings her years of experiences as an online manager to deliver to the client user-friendly implementations. A natural-born communicator, Dida uses her talent to help clients find the most efficient ...

      Jun 20 2012
      Tom
      Jun 20

Content management coupled with content strategy is the perfect one-two punch for a results-oriented website.

      The biggest revolution in the history of marketing is in full swing. If you are in business and you are on the web, you are in the publishing business. You need to publish great content that attracts audiences and gets them excited about your brand – and, oh yeah, gets them telling their friends.

However, organizations rarely combine the strategies for writing extraordinary content with the full power of their publishing tools. The key is to bring them together.

      Content strategies tell you how to position, write and optimize your copy – or other media. Content management systems such as Drupal are your online printing press.

      In this video we cover the essentials of bringing content strategy and content management together in one streamlined framework. In the next video we will cover content promotion through search engine optimization and social media.

      Let us know if the video is helpful in the comments below!


      Jun 19 2012
      Jun 19

Posted Jun 19, 2012

      It's no secret to anyone who's been working in the web over the past several years that "content is king." As more and more organizations start to recognize the value of quality content, they are also realizing that their content will only be as good as the people, tools, and processes involved in managing it.

      We've heard lots of horror stories from editors and content managers over the years about how their CMS was built in such a way that might have solved a hard problem or two but also somehow ended up making the simple things hard.

      Crafting an intuitive and streamlined content management experience gets even harder at enterprise scale, where you're supporting large content teams and any number of external systems that need to be seamlessly integrated with your CMS to support your organization's overall content strategy. We don't back down from this challenge; we embrace it.

      Empowering Content Managers

      Given a difficult economic climate coupled with a 24/7 news cycle, it is more critical than ever that content managers be empowered to execute on their goals without having to rely on technical resources or code deployments. As a result, content management needs at this point in time extend far beyond simply creating, editing, and publishing content. Now it's all about ingesting, packaging, scheduling, and curating content from any number of sources.

Within larger content teams, content management responsibilities are typically divvied up amongst the group in one way or another (by site section, for example). A logical extension of that is a core need for editors to see the subset of content under their purview, as opposed to a "find content" list that inundates them with a river of all incoming content. The right way to attack this varies (workflow states, section assignments, etc.) but the core problem is the same.

      As certain topics gain momentum, editors need to have real time insight via analytics and a set of flexible and intuitive content assembly tools that allow them to capitalize on the opportunity and "own" that topic.

      Efforts such as the Content Staging Initiative within the Large Scale Drupal group further underscore the need for content management solutions to be flexible and extensible enough to meet the needs of teams working together to ingest, evaluate, curate, and collaborate around their content before unveiling it.

      Beyond Drupal

      As I mentioned, big content management challenges inevitably require multi-faceted solutions involving systems other than Drupal.

      For example, one client of ours used a service called PublishThis to create and curate "super stories" around big events that incorporated coverage from all sorts of different sources around the web. We built some tools that allowed super stories to be manifested and updated within the main site built with Drupal.

      We've also worked on integrations with other legacy systems used to produce printed editions of content, so that it is instantly queued up for publishing on the web site. This concept could be extended to any number of other distribution channels as well (mobile sites, apps, etc). Karen McGrane did a brilliant job of breaking down the importance of structured content for facilitating this during her keynote presentation at the Content Strategy Forum in London last fall.

      My colleagues and I at Phase2 are really excited about the renewed focus we're seeing from our clients and colleagues on crafting better content management experiences. We strongly believe that putting powerful and flexible tools in the hands of passionate content creators will benefit us all in the form of more usable and engaging content. As an old client of ours put it: "Now we can focus on our business instead of our technology."

      Dave has a seemingly innate ability to solve problems, anticipate potential pitfalls, and translate business objectives into functional requirements -- which is why he excels as a Solutions Architect at Phase2.

      Dave has an essential ...

      Apr 10 2011
      Apr 10

Do you know the exact state of the content in your large Drupal site? Thinking of revamping, redesigning, or upgrading your site? If you answered 'no!' to the first question or 'yes!' to the second, it's time for a content audit.

In this blog post, I cover the what, why, who, when, where, and how of content audits. I've conducted a few small content audits, and I'm leading a much bigger one on Drupal.org.

      What the heck is a content audit?

The idea behind a content inventory is to determine what content you have and where it lives (the quantitative survey). The content audit is qualitative: you assess whether the content is any good, and what needs to change to improve it.

Traditionally, content inventories are compiled manually, one page of your site at a time, in some sort of spreadsheet. The content inventory should also include PDFs, images, videos, and utility pages such as checkout and login pages. Content should be inventoried regardless of whether it's hosted on your site or externally. If it's seen or heard within your content, it needs to be held accountable.

Sites with 5000+ nodes could be audited using a sampling of nodes representing each content type (a sketch of one sampling approach follows), but what if you miss some glaring errors? In those instances, you could provide a small link that lets any site visitor flag a page as needing work (this can be done easily enough in Drupal, using a variety of approaches).
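If you go the sampling route, pulling a random batch of nodes per content type is a short query in Drupal 6. A minimal sketch, assuming MySQL (ORDER BY RAND() is slow on very large tables, but fine for a one-off audit):

// Grab up to 50 random nids for each content type on the site.
$sample = array();
foreach (node_get_types('names') as $type => $name) {
  $result = db_query_range("SELECT nid FROM {node} WHERE type = '%s' ORDER BY RAND()", $type, 0, 50);
  while ($row = db_fetch_object($result)) {
    $sample[$type][] = $row->nid;
  }
}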

      Newsflash! Content audits aren't fun and exciting (unless you're a content geek like I am). They can be really boring.

Disk Inventory X for Mac audits the contents and space of your hard drive, displaying the contents as colored squares representing each file type and how much space those file types consume. Wouldn't it be great to get this sort of clarity on your site content?

      Hard disc inventory depicted as colored squares

      Why you should audit your content

Primarily, because it's impossible to be sure of the content quality in large sites, especially those with multiple contributors. When a site is audited, all sorts of oddities will be found: unfinished blog posts, unpublished nodes, outdated or inaccurate content, redundancies, content in the wrong place, or content that doesn't meet the organization's style standards/guidelines. (You do have style guidelines, right?) You might also find gaps in your content, e.g. discover that Products A, B, and D were covered extensively, but Product C's listing doesn't include its target audience and Product E's description was never written.

      "If you don't know what content you have now, you can't make smart decisions about what needs to happen next."

      If you want even more reasons why you should conduct a content inventory/audit, the books listed at the bottom of the post give reasons, and more in-depth advice.

      Who should do a content audit?

      A person who cares about the quality of your site's content, or a content strategist. Sometimes the person who cares becomes the content strategist.

It's tempting to ask an intern, or a random person in the organization who has free time, to do it, but that means the job may never get done, or may be done haphazardly, especially if they think they already know the content well. As with QA and user testing, content auditing is often better done with fresh eyes: someone outside the organization.

      When to do a content audit

      Ideally, you'll complete the audit before:

      • Reorganizing your current site structure
      • Upgrading to a new version of Drupal or (oh noes!) migrating to a different CMS
      • Adding new content types
      • Making content decisions based on your site's SEO performance

If a site were migrating to Drupal, a content audit could be performed after the migration, but before key decisions about site architecture, IA/hierarchy, navigation/menus, and publishing workflows are finalized.

      Like any website project, the more organized and informed you are, the more likely your project will be a success.

      Where to do a content audit

      I don't mean, "Should I be in the 3rd floor conference room, or should I be at my desk?" I mean, where should you store the content audit's data?

The problem with constructing your content audit in a spreadsheet is that when the audit is revisited 6 months later, it's going to be out of date. What a drag!

      The standard Drupal administrative content overview page at admin/content/node/overview doesn't allow you to add more columns or sort by column heading; the overview page alone isn't sufficient.

      Roll your own inventory with Views

Here's a recipe using Views 2 in Drupal 6 to (mostly) painlessly create your content inventory.

Screenshot of a content inventory

The content inventory will be like a spreadsheet, with columns listing information for each of your nodes.

      Preparation for the content audit

      1. Ensure Views 2 is installed and enabled
      2. Enable core Statistics module (optional)
      3. You may want an administrative theme enabled; the table view of the View is going to get wide, and you don't want to try to decipher squished columns. Otherwise, just ensure the blocks or other page elements don't display on the inventory page.
      4. You can use the Annotate module to store editorial or "what needs doing" notes about each node.
      5. You can use flags to flag notes as needing a particular type of work.

      I'll cover annotations and flags in depth, in a future blog post about the Content Audit module, which will package a content audit View and other tools for quicker inventories.

      Building the view

      First, create a new View and call it something like 'node_content_inventory'.

      Basic settings

      Title: [Your site name] Node Content Inventory
      Style: Table
      Access: administer nodes (or perhaps you want to use your Admin role for access)

Long pages are desirable once you start re-sorting and filtering. If you only have a couple hundred nodes, you can avoid pagers entirely: set the pager to 'No' and Items per page to Unlimited.

Otherwise, display around 200 items per page and enable the full pager.

      In the Table styles options, ensure every field is marked as sortable. Unfortunately, you can't sort by path.

      (If you're using Book content, you should include the depth too! But unfortunately, there isn't a default way to display the parent of each book node.)

      Screenshot of view edit screen

      Fields to display

Field | Label
Node: Nid | Nid
Node: Title | Title
Node: Path | Path
Node: Type | Type
Node: Published | Published
Node: Updated date | Updated date
Node: Post date | Post date
User: Name | Author
Node statistics: Total views | Total views
Book: Depth | Depth

      These will produce the site's basic content inventory. The above are just suggestions; feel free to add more if you want to evaluate different fields.

      Create a Page display and give it a path such as 'node-content-inventory'.
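If you'd rather keep the View in code than build it by clicking, the same configuration can be shipped from a module via hook_views_default_views(). Here's a heavily trimmed sketch of what a Views 2 export looks like (a real export from the Views UI will be much longer, and 'content_inventory' is a hypothetical module name):

function content_inventory_views_default_views() {
  $view = new view;
  $view->name = 'node_content_inventory';
  $view->base_table = 'node';
  $view->api_version = 2;
  $handler = $view->new_display('default', 'Defaults', 'default');
  $handler->override_option('style_plugin', 'table');
  $handler->override_option('fields', array(
    'nid' => array('field' => 'nid', 'table' => 'node', 'label' => 'Nid'),
    'title' => array('field' => 'title', 'table' => 'node', 'label' => 'Title'),
    // ... the rest of the fields from the table above ...
  ));
  // The page display exposes the inventory at the path used in this recipe.
  $handler = $view->new_display('page', 'Page', 'page_1');
  $handler->override_option('path', 'node-content-inventory');
  $views[$view->name] = $view;
  return $views;
}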

      Filters

      Next, you'll want to filter out content that isn't relevant to your content inventory. Let's say you want to audit everything on your conference site apart from Room and Time slot nodes.

      Create a Node: Type filter, only including the content you want to audit. Then expose the node Type filter. You may want to create and expose other filters, depending on your needs.

      Views edit screen

A manually-constructed content audit would include which section of the site each piece of content is in. So, if your site's sections are generated by taxonomy term, you should also include that taxonomy field in its own sortable column in the View.

      Ideally, the node paths would be sortable with an exposed filter, but that will require writing a Views handler. It's on the To Do list!

      In the header text, include the date and month the content will be reviewed, and update it each time you re-evaluate your content. (Every 6 months, right?)

      Finally, save the view (actually, you should probably be saving as you're constructing it).

      Evaluating the content

      Look at your page at node-content-inventory. It's a giant list of all your nodes.

      Now what? You have options.

• Option A (Export): Export this view to CSV and manage your content audit in spreadsheet software or Google Docs. The Views Bonus Pack module will let you export the view as a CSV file.
  Pro: Less data to store in your database.
  Con: Your inventory is almost immediately outdated, and will be extra time-consuming to update on a regular basis.
• Option B (Flags): 'Flag' content that needs to be updated using the Flag module (see the sketch after this list). You can adjust the flag settings so that only certain roles can use the flag, and then set which content types can be flagged (or set it as global). You can then add the flag to the content audit view, and/or create a separate view of nodes that have been flagged. However, marking something as "needs work" doesn't provide enough detail on its own, so you'll want to record notes somehow (see Option C). You can create additional flags for each type of action to be taken, e.g. "needs style review" or "redundant", without the casual visitor even knowing, while keeping your content team in the loop.
• Option C (Annotations): Keep an 'audit notes' field on each content type, either with the Annotate module or by creating a field on the nodes.
• Option D (Content Audit module): Use the Content Audit feature module, which constructs the View and will soon include preconfigured annotation (this hasn't been built yet).
• Option E (Audit nodes): Create another node type called Audit notes and, once one is filled in, nodereference the node it covers. Then it's easy to build a view of the Audit notes. (I'm also wondering whether a custom bit of code could do this automagically; if so, it could be included in the Content Audit module.)
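For Option B, flags don't have to be set by hand, either: the Flag module has a small API, so a batch script can pre-flag content that's obviously stale. A sketch, assuming you've created a flag named 'needs_work' in the UI:

$flag = flag_get_flag('needs_work');
if ($flag) {
  // Pre-flag anything that hasn't been touched in two years.
  $result = db_query("SELECT nid FROM {node} WHERE changed < %d", strtotime('-2 years'));
  while ($row = db_fetch_object($result)) {
    $flag->flag('flag', $row->nid);
  }
}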

Options C and D are great, because you're not reinventing the wheel every time you review your content inventory.

      Content inventory for files and other non-node content

For files, such as images or PDFs attached to nodes, you might want to create a separate View called file_content_inventory. You could combine it with your primary node content audit, but the headings are different.
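If Views' file support doesn't give you everything you need, a direct query against Drupal 6's {files} table gets you the raw list to work from. A sketch, assuming attachments made with the core Upload module (CCK filefields live in their own tables):

// Each row: file ID, name, path, size, and the node it's attached to (if any).
$result = db_query("SELECT f.fid, f.filename, f.filepath, f.filesize, u.nid
  FROM {files} f LEFT JOIN {upload} u ON f.fid = u.fid");
while ($file = db_fetch_object($result)) {
  // Feed these rows into your file_content_inventory spreadsheet or View.
}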

Other non-node content, such as landing pages created with Panels, Views, or tpl.php files, and things like login pages, needs to be audited manually.

      1, 2, 3... Audit!

Now the fun begins. Read the existing content and check the following (you may want to include extra instructions for your reviewers, if someone other than you is doing the work).

• Does it contain redundant info? (Is this content covered elsewhere? Note down the node ID where it is covered.)
• Is it in the wrong place? (Does another page make more sense?)
• Does the content need restructuring? Provide examples.
• Is it inaccurate? Don't spend time researching the inaccuracies, but if you think the content is inaccurate, say so.
• Is it useful? Does it add value, or does it just waste people's time? Just because it exists doesn't mean it should be kept around.
• Does it need language improvements (grammar, spelling, etc.)?
• Is it written for the right audience?
• Is there any key information that's missing?
• Does the URL path make sense? As a top-level or important page, does it need its own path?
• Could the content be supported by an image or illustration? If yes, what?

      This process could take weeks or months, depending on many factors.

      Summary

      • Content audits. They're important. Do them.
      • If you don't want to do the audit yourself, find someone who can. Hire an information architect or content strategist.
      • Take the time to improve your content based on the audit. People come to your site for the great content.

      Content audit process: Start > Decide what needs auditing > Configure tools for auditing > Evaluate content > ??? > Profit!!

      The '???' is where you (or your content strategist and writers) do the magic of identifying and fixing content problems. After that, your site will reap the benefits.

Now, I'm interested to hear if anyone has done content audits using Views or other modules, and what your experiences were! Please drop some knowledge in the comments below.

      Additional Reading

      Content Strategy for the Web, by Kristina Halvorson, particularly Chapters 4 and 12
