Jun 29 2017
Jun 29

Recently I was working on a Drupal 8 project where we were using the improved Features module to create configuration container modules for some special purposes. Due to client architectural needs, we had to move the /features folder into a separate repository. We basically needed to make it available to many sites in a way that let us keep doing active development on it, and we did so by making the new repo a composer dependency of all our projects.

One of the downsides of this new direction was its effect on CircleCI builds for individual projects, since installing and reverting features was an important part of them. For example, to make a new feature module available, we’d push it to this ‘shared’ repo, but to actually enable it we’d need to push the corresponding change in the core.extension.yml config file to our project repo. Yes, we were using a mixed approach: both features and conventional configuration management.

So a new pull request would be created in both repositories. The problem for Circle builds, given the approach previously outlined, is that builds generated for the pull request in the project repository would require the master branch of the ‘shared’ one. So, for the pull request in the project repo, we’d try to build a site by importing configuration that says a particular feature module should be enabled, but that module wouldn’t exist yet (it would likely not be in shared master at that time, still sitting in a pull request), so the build would crash.

There is probably no straightforward way to solve this problem, but we came up with a solution that is half code, half strategy. Beyond technical details, there is no practical way to determine which branch of the shared repo should be required for a pull request in the project repo, unless we assume conventions. In our case, we assumed that the correct branch to pair with a project branch was one named the same way. So if a build was the result of a pull request from branch X, we could try to find a PR from branch X in the shared repo and, if it existed, that’d be our guy. Otherwise we’d keep pulling master.

So we created a script to do that:

<?php

$branch = $argv[1];
$github_token = $argv[2];
$github_user = $argv[3];
$project_user = $argv[4];

$shared_repos = array(
  'organization/shared',
);

foreach ($shared_repos as $repo) {
  print_r("Checking repo $repo for a pull request in a '$branch' branch...\n");
  $pr = getPRObjectFromBranch($branch, $github_token, $github_user, $project_user, $repo);
  if (!empty($pr)) {
    print_r("Found. Requiring...\n");
    exec("composer require $repo:dev-$branch");
    print_r("$repo:dev-$branch pulled.\n");
  }
  else {
    print_r("Nothing found.\n");
  }
}

/**
 * Queries the GitHub API for open pull requests coming from the given branch.
 */
function getPRObjectFromBranch($branch_name, $github_token, $github_user, $project_user, $repo) {
  $ch = curl_init();
  curl_setopt($ch, CURLOPT_URL, "https://api.github.com/repos/$repo/pulls?head=$project_user:$branch_name");
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
  curl_setopt($ch, CURLOPT_USERPWD, "$github_user:$github_token");
  curl_setopt($ch, CURLOPT_USERAGENT, "$github_user");
  $output = json_decode(curl_exec($ch), TRUE);
  curl_close($ch);
  return $output;
}

As you probably know, Circle builds are connected to the internet, so you can make remote requests. What we’re doing here is using the GitHub API in the middle of a build in the project repo to connect to our shared repo with cURL and try to find a pull request whose branch name matches the one we’re building against. If the request returns something, we can safely say there is a branch named the same way as the current one, with an open pull request in the shared repo, and we can require it.

What’s left for this to work is actually calling the script:

- php scripts/require_feature_branch.php "$CIRCLE_BRANCH" "$GITHUB_TOKEN" "$CIRCLE_USERNAME" "$CIRCLE_PROJECT_USERNAME"

We can do this at any point in circle.yml, since composer require will actually update the composer.json file, so any other composer interaction after executing the script will take the requirement into consideration. Notice that the shared repo will be required twice if you also keep the requirement in your composer.json file. You could safely remove it from there if you make the script require the master branch whenever no matching branch has been found, although this could have unintended effects in other types of environments, like local development.
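If you go that route, the fallback is just a matter of extending the script’s else branch so that it always requires something. A minimal sketch, using the same variables as the script above:

  else {
    print_r("Nothing found. Falling back to master...\n");
    exec("composer require $repo:dev-master");
  }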

Note: A quick reference about the parameters passed to the script:

$GITHUB_TOKEN: # Generate from https://github.com/settings/tokens
$CIRCLE_*: # CircleCI vars, automatically available

[Editor’s Note: The post “Running CircleCI Builds Based on Many Repositories” was originally published on Joel Travieso’s Medium blog.]

Web Chef Joel Travieso

Joel focuses on the backend and architecture of web projects, and seeks to constantly improve by following the latest developments in the field.

Jun 26 2014
Jun 26

TimeBanks USA is a 501(c)(3) nonprofit organization that promotes and supports timebanking. Timebanking was created by Dr. Edgar S. Cahn, who founded TimeBanks USA in 1995.

Timebanking is a tax-exempt alternative currency system that works like this: if I spend one hour helping you build your website, I earn one credit, or time dollar. You can then turn around and exchange that time dollar by giving it to someone who fixes your refrigerator, coaches you on your resume, or gives you a ride to the airport.

“The possibilities are endless,” according to TimeBanks USA. “An hour of gardening equals an hour of childcare equals an hour of dentistry equals an hour of home repair equals an hour of teaching someone to play chess.” It’s different from bartering, because this type of timebanking is based on services (and not goods) between members of a network.

This wasn’t the first time Exaltation of Larks has worked with alternative currencies. We created a virtual economy for Digital Dollhouse, a casual game where girls are empowered to become their own interior designers. In this virtual world, it’s possible to trade or regift items like dolls, plants and pets, and work with an in-game currency named ddCoins.

In addition to our work with TimeBanks USA, our experience with timebanking includes working as volunteers with two Los Angeles-area timebanks: Arroyo S.E.C.O. Time Bank and the West LA timebank cleverly named Our Time Bank. Our Time Machine project is an experimental Drupal installation profile for communities and organizations looking for turnkey timebanking software for their members and participating businesses and organizations.

TimeBanks USA founder Dr. Edgar S. Cahn has spent more than four decades striving for social justice. He began his career working for the Kennedy administration, focusing on alleviating poverty and hunger. He then opened the Citizens Advocate Center, an organization dedicated to protecting the rights of community groups as they interacted with the government. In 1972, Dr. Cahn founded the Antioch School of Law, whose curriculum was designed to teach students to practice law for the greater good of society.

Here at Exaltation of Larks, we have enormous respect for Dr. Cahn: at the age of 80, he is still a rabble-rouser and hell-raiser who is fighting to change the world, and we’re proud to provide him with the technical assistance to further this goal. Dr. Cahn is a true visionary and we hope to work with — and write about — him and his partner, Chris Gray, TimeBanks USA’s CEO, more in the future.

TIMEBANKS USA’s ROLE IN TIMEBANKING

TimeBanks USA supports timebanking in myriad ways, including offering onsite trainings nationwide; organizing an annual timebanking conference; hosting webinars and teleconference calls; and consulting individually with clients. The organization helps members connect with local timebanks or create their own.

One part of the TimeBanks USA infrastructure is a large-scale social networking platform named Community Weaver, which has a software-as-a-service subscription model. More than 400 timebanking websites around the world rely on it to help manage and organize their timebanking processes, community activities and other needs.

TIMEBANKS USA’s NEEDS

Exaltation of Larks performed a substantial security and performance audit on Community Weaver, a complex Drupal multisite system. We helped TimeBanks USA fix critical issues affecting one of their essential online organizational tools — their Community Weaver software. This software platform runs a quickly evolving and iterating network of Drupal websites, so it was vital that the software could be updated and developed sustainably and seamlessly, yet without overriding the autonomous decision-making processes of each chapter website.

In addition, we worked with TimeBanks USA to develop a project plan for version 3.0 of Community Weaver and raise the funds to build it; we addressed problems arising from the system’s simultaneous use of both WordPress and Drupal; and we helped streamline the organization’s decision-making process.

TimeBanks USA needed extensive rework on their Community Weaver software, specifically with regard to security, performance and usability issues. Community Weaver is an online organizing and tracking tool for timebank members: it records time exchanged, displays service offers and requests, keeps track of memberships, and displays announcements for the community. Any local timebank can subscribe to TimeBanks USA’s software-as-a-service (SaaS) system to manage their members’ work. TimeBanks USA hired Exaltation of Larks to audit and rework Community Weaver 2.0, with plans to eventually migrate all their technology, online memberships and e-commerce data to version 3.

TimeBanks USA was also experiencing security problems with its self-hosted WordPress website, which was outside our original scope of work. TimeBanks USA used our emergency support system and we quickly mobilized to resolve this new issue. We determined that security had been compromised and implemented several solutions to tighten it up, from checking the code integrity to updating MySQL access and hardening file permissions.

In addition to our work with TimeBanks USA, we worked with the Arroyo S.E.C.O. Time Bank, one of the many timebanks affiliated with TimeBanks USA. Arroyo S.E.C.O. serves neighborhoods in the eastern and northeastern Los Angeles area, which meant the Larks who were in the Downtown Los Angeles area could work with them one-on-one.

OUR SOLUTION

We began by tackling the security issues found in Community Weaver. Fortunately, TimeBanks USA had an in-house Drupal developer, with whom we worked on a massive infrastructure audit focusing on security and performance. This multisite installation had been built by its previous developer with development practices that were common in 2007, before Features and configuration-in-code became popular. We identified which multisite instances had been modified by their local chapters’ coordinators — which meant examining data structures, views, and content types across hundreds of Drupal websites — and which had unsafe code or configuration. We found security vulnerabilities throughout the entire stack, from the Drupal systems and websites down to the server operating system, all of which we documented, prioritized and/or resolved.

This was an extensive audit that had both technical and political ramifications. Each chapter is run by its coordinators and volunteers and sometimes in completely different ways than other chapters. In a multisite environment, making technical decisions for the entire fleet of hundreds of websites would impact all local chapter websites that had been modified for their own business cases.

We worked in conjunction with TimeBanks USA to devise policies and joined them on many global community conference calls — open to all coordinators of all the timebanks in the world — to describe our technical approach and to solicit feedback. Our task was to provide technical leadership for the entire organization. We needed a set of standards for sustainable development of this enormous network, but we also needed to respect each individual chapter’s right to make its own decisions.

The project plan we provided included time estimates to address the security problems we found. TimeBanks USA’s tech coordinators reviewed our list of most-needed fixes and then we consulted with a local timebank coordinator and Community Weaver user to make sure these fixes matched their timebank’s list of essential tasks.

We worked with several popular web hosting providers, including Drupal-as-a-service platform companies, to negotiate competitive pricing on behalf of TimeBanks USA. Due to their unique system and web application architecture, we recommended SoftLayer based on their features and pricing.

TimeBanks USA Community Weaver

The unfortunate multisite architecture the prior developers had devised created exponential complexity that precluded any proper maintenance and further development of the system. We navigated our way through thousands of lines of uncommented custom code. We also found that the Linux server environment was an abandoned and unsupported custom distro. In both cases, we replaced as many unknown components as possible with stable, peer-reviewed alternatives and documented the rest. We also stabilized the system by locking down the kinds of changes that individual coordinators could make to their individual timebank chapter websites, thus reducing future maintenance costs.

We fixed several security issues in the system by altering file permissions, MySQL accounts, and text input filters. We used PHP Filter Lock, a module we developed that disables the text form fields that contain PHP code, thereby mitigating the risk of CSRF and XSS security threats on websites that have the core PHP Filter module enabled.

On the same server as the Drupal multisite network was a WordPress marketing website. This in itself is not a problem. Exaltation of Larks’ position is that WordPress is great for simple websites and Drupal is great for complex systems and web applications. Having both on the same server created unnecessary security issues, however. The WordPress installation was technically able to overwrite anything on the Drupal side as well as access the Drupal database. We changed all MySQL usernames and passwords and locked down the file permissions so that the WordPress website could no longer be overwritten or be a risk to other software on the server, including Community Weaver.

Next, we worked with TimeBanks USA to develop the requirements for the next version of Community Weaver. The materials we developed included specifications for a fully featured mobile app, a business plan with financials and pitch deck, and more, and were designed to help TimeBanks USA secure additional funding. In the meantime, we trained a member of their community to maintain the software so they could further reduce their total cost of ownership.

Exaltation of Larks also provided TimeBanks USA with communications strategy consulting services. We performed a 360-degree organizational audit and came up with a more streamlined decision-making process. We created flowcharts of all the key players and stakeholders at TimeBanks USA and highlighted the points at which they had both strengths and weaknesses, and made recommendations where more efficiency was needed.

COMMUNITY INVOLVEMENT

Timebanking has evolved very differently in different parts of the world, in ways that no one could have predicted. Nowhere is this emergent behavior more apparent than in highly populated cities, where the numbers, density, and different practices around timebanking create vastly different needs. One such advanced timebank is the Arroyo S.E.C.O. Time Bank in Los Angeles, which has thousands of members across dozens of separate neighborhoods. They needed several custom workflows implemented on their individual timebanking website to manage the scale that had resulted from their impressive growth. By its very nature, the timebank had no money for further development of its individual website.

Barnraisings are a concept taken from Amish culture, where the community comes together to build a barn for a newly married couple who wouldn’t be able to afford the time or expense of building a barn on their own. In the context of web development, barnraisings are like code sprints where the programming community gets together with a deserving nonprofit, and works with them to create or improve their software. For the development community, this is a teaching experience, and newer developers get to learn from seasoned veterans about client relationships, requirements gathering, project planning and the tools used for effective teamwork. The nonprofit brings food — usually excellent food — and everyone benefits.

Starting in April, 2012, the Larks partnered with Droplabs and arranged three separate barnraisings to build new features for the Arroyo S.E.C.O. Time Bank. Not only was a good time had by all, the team built functionality that the Larks turned into Features-based modules that could then be securely distributed to the other timebanks, to be turned on, or not, according to the wishes of each individual timebank coordinator. Features built included a custom registration workflow, neighborhood-specific blogs, and structured data types for content, among others.

PROJECT OUTCOME

Before Exaltation of Larks came on board, TimeBanks USA had been working with a different development company. The Community Weaver software proved challenging to rework, but over the two years we worked together we ensured that key security and performance problems with the software were resolved.

TimeBanks CEO Chris Gray says of the project: “Given the importance of the software for the mission and vision of TBUSA, and given how much we had to learn, this was a very intense experience for us.”

In addition, with the help of the volunteers at the barnraisings, we added several new features to the Community Weaver software, including a blog post content type and RSVP feature that integrates with the Signup module. These features directly benefit all the hundreds of TimeBanks chapters around the world that use the same Drupal distribution of Community Weaver.

“All members of the Larks team, from the principals to the project leader to the programmers, demonstrated that they cared deeply about the quality of the work undertaken,” Chris Gray said. “[They] provided many hours of consultation to this endeavor. We are truly grateful for those contributions. Under challenging circumstances, they provided highly professional services to TBUSA. We greatly appreciate the professionalism of the Larks and the ongoing willingness to go above and beyond.”

Jun 06 2012
Jun 06

Recently we had to create a Drupal 7 multisite install that used subfolders of a single domain, rather than the standard sub domain approach.

This proved to be quite the challenge but we found the following solution courtesy of some lengthy discussion on the #drupal-uk IRC channel.

We were surprised by the lack of documentation or blog posts around such a set-up, so we're documenting the process here for future reference.

Setting up each sub-site

In your Drupal document root create a symlink for each of your sites to the same document root folder:

cd /var/www/drupal
ln -s . site_1
ln -s . site_2
ln -s . site_3

Repeat for however many sites you wish to run as a subfolder; we've done it for three subsites.

Mapping to a Drupal site install

Each of your Drupal subsites should be installed as normal, with its own folder under the document root's sites/ folder, each directory containing its own settings.php pointing to a dedicated database and so on.

Now within the new Drupal 7 global sites/sites.php mapping file use the following code:

<?php
$prefix = $_SERVER['SERVER_NAME'];
$sites = array(
  "8080.{$prefix}.site_1" => "site_1",
  "8080.{$prefix}.site_2" => "site_2",
  "8080.{$prefix}.site_3" => "site_3",
);
?>

Note: in the above code '8080' is prefixed to the path because our local development environment web server is running on port 8080. If you're using port 80 (standard) this bit can be removed. It's also worth noting that if your vhost has FollowSymLinks in its options then this will not work; it has to be commented out.

Your Drupal sites should now be visible at http://SERVER_NAME/site_1, http://SERVER_NAME/site_2 and http://SERVER_NAME/site_3.

Apr 27 2012
Apr 27

Organic Groups as a Multisite Implementation

Posted on: Friday, April 27th 2012 by Raphael

Drupal's Organic Groups module allows users to create and manage their own groups. With these groups, you can manage access to your content by associating content with a group. This association, combined with the Context module, gives us the foundation to create a multisite geared towards communities, a foundation that solutions such as Drupal Commons use to provide community-driven content.

Groups as a Site

With organic groups, each group is treated as a content type. As a result, you can use each node of that content type as one of your sites. For example, if you had a sports-focused site, you could create a Group node called Hockey. Using this group node, you can utilize the Context module and treat it as a section of your website. With your groups dividing your site into sections, we can use each node as a context condition, allowing us to:

  • change the navigation of the site
  • change the page layout
  • retrieve content solely belonging to a group.

These actions will be the result of triggers that act upon the group node condition. As a result, deploying new sites is as simple as creating a group node and applying the necessary context conditions and triggers.
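To make the idea concrete, here is a hypothetical sketch of detecting the active group on a node page. The module name, the 'group' content type and the global variable are illustrations only; in practice the Context module's Organic Groups integration does this kind of work for you.

/**
 * Implementation of hook_init() in a hypothetical custom module.
 */
function mymodule_init() {
  // menu_get_object() returns the node being viewed on node/% pages.
  $node = menu_get_object('node');
  if ($node && $node->type == 'group') {
    // Remember the active group so blocks, themes and views can react to it.
    $GLOBALS['active_group'] = $node;
  }
}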

Single Database Installation with Multisite Features

Approaching multisite implementations with organic groups allows us to keep a single database. A single database saves us from such issues as

  • moving content across sites
  • sharing users across sites
  • applying update scripts across sites

As a result, our data becomes easier to maintain as we simply need to assign content to a group. Moving content between sites is a matter of ticking a checkbox for a group to create the necessary link. This single database solution also eases update scripts that need to be run for modules as there is only a single database to act upon.

Organic groups and contexts allow us to provide community-driven multisite solutions. With each group as its own site, we can deploy and take down sites by simply adding or removing a group node. As well, maintaining the site becomes easier, as we only need to deal with a single database compared to a traditional multisite implementation.

Organic groups and context: providing the simplicity of a single database solution with the functionality of a multisite.

Aug 04 2011
Aug 04

I’ve been waiting a long time to tell you something.

Several months ago, Treehouse was signed on by (Top 100 Federal Prime Contractor) Energy Enterprise Solutions as the development partner for Energy.gov’s massive, eleven-site Drupal relaunch.

Today, I am ecstatic to announce that the platform is live!

The Energy.gov relaunch represents a huge step in the ongoing progression of major government agencies moving to Drupal; these agencies have included the US House of Representatives, The White House, and the Department of Commerce, among others. As with other federal and state projects, the Department of Energy has proven to be an enthusiastic supporter of the Drupal community, with the project yielding numerous patches and even a new module or two to be contributed back to the community.

New Energy.gov homepage

The project was executed by an all-star team that included not only Treehouse but also the design and usability gurus at HUGE and Drupal support mavens at Acquia, all brought together by Energy Enterprise Solutions. Cloud-based hosting is provided by BlackMesh.

Running on Drupal 7, the multisite platform gives the Department of Energy the tools they need to both provide a more engaging, enjoyable user experience and realize operational efficiencies.

Highlights of the project include:

  • Easy sharing of content between sites on the platform. Editors can now write content that is easily deployed to multiple sites, rather than each site having its own completely distinct resources.

  • Data visualizations and mapping. Our devs cooked up some very cool things in rendering data visualizations and maps (using the super-slick MapBox). You’ll see more on this in the next week or so as they put together blog content about the technology implemented.

  • Major editor empowerment. Along with creating content that can be applied anywhere within the platform’s network, it is now actually possible for editors to create new sites without relying on the development team.

  • Drupal breakthroughs. Innovations in using Drupal include sophisticated workflows using a state machine, and the new Bean module (currently available as an alpha release). As with the data renderings, you’ll hear more about these innovations in the coming weeks.

  • Independence. The Department of Energy now owns their site platform; its open source code means that they will no longer have to rely on a single vendor for support or site updates.

Finally, a big thank you and congratulations to Cammie Croft and Liz Meckes at the Department of Energy. It has truly been a pleasure working with the DOE team on this project.

As the CEO of Treehouse, I couldn’t be more proud of our team, nor more awestruck by what they have achieved. This project is as much of a milestone for us as I hope it is for the Department of Energy, and I am very hopeful that it will lead to more such projects with our public agencies in the future.

Keep an eye out for tech details from our team in the coming weeks. Trust me, it will be worth the read. ;) In the meantime, check out Energy.gov and join us on Twitter (@treehouseagency)!
Jan 12 2011
Jan 12

A previous article covered some basic groundwork for mobile sites in Drupal. This article goes on to look at different ways to setup a mobile site in Drupal. It covers single site, multisite, single site with settings.php tweak and the Domain Access module. Caching strategies, redirect rules and other server side settings are also discussed.

RESTful design of URLS

REST defines the architecture of the World Wide Web. One of the principles of REST is that a single URI represents a resource and that resource is conceptually different from the representations returned to the client.

Representational State Transfer: “Representational State Transfer (REST) is a style of software architecture for distributed hypermedia systems such as the World Wide Web. The term Representational State Transfer was introduced and defined in 2000 by Roy Fielding in his doctoral dissertation. Fielding is one of the principal authors of the Hypertext Transfer Protocol (HTTP) specification versions 1.0 and 1.1.”

Here’s the passage from Roy Fielding’s thesis (emphasis added) which discusses the differences between resource and representation:

“This abstract definition of a resource enables key features of the Web architecture. First, it provides generality by encompassing many sources of information without artificially distinguishing them by type or implementation. Second, it allows late binding of the reference to a representation, enabling content negotiation to take place based on characteristics of the request. Finally, it allows an author to reference the concept rather than some singular representation of that concept, thus removing the need to change all existing links whenever the representation changes (assuming the author used the right identifier).”

A resource is named by a URI. The server chooses the best representation to provide to the client based on headers sent by the client. In this case we are looking at the User Agent.

If we were to follow RESTful principles then the mobile site should indeed be served from the same domain as the desktop site, i.e. one resource, different representations. In this scenario the HTML returned to the mobile client is just a different representation from that provided to the desktop client. This is a natural way to design a web app as it means that there is only one “canonical” URI for the resource, with no chance of nasty duplicate content issues. From an SEO point of view this is desirable. However…

Caching, the fly in the ointment

We’ve just seen that serving different representations from a single URI is a good thing from many perspectives: mobile first, progressive enhancement, REST and SEO. However, there is one reason why we may decide to go down the path of using two domains instead of one: caching.

Caching mechanisms, such as Drupal core and Boost, use the fully qualified domain name of a URI to determine cache keys. This allows the cache to quickly serve content to different clients without knowing the criteria that decide the representation received by the client, i.e. the cache just has to know about the URI, it doesn’t need to decipher the user agent. Currently, if different representations are served for the same resource then the cache will likely become populated with a mix of different representations, leading to chaos. For this reason it is generally accepted that having a separate mobile site on a sub domain is a good way to go, i.e. we would have two sites: example.com for desktop and m.example.com for mobile.

Related links: Cache by theme for mobile sites, where mikeytown2 offers some great advice on Apache and Boost rules; and .htaccess Mobile Browser Redirect, which covers User Agent processing in Apache to redirect to mobile.

Some users have solved the caching problem AND manage to serve different representations from the same URI. Going mobile with a news site that Just Works describes how browser detection can be done in the caching layer, in this case Squid, before redirecting the request invisibly to another domain. This is the perfect setup as RESTful principles are maintained and the site is scalable. Hats off. Unfortunately not everyone is running a reverse proxy which allows for this kind of setup. A request looks like this:

  1. mobile client does GET http://example.com/about,
  2. Squid (port 80) looks at User Agent, determines device and sends to http://m.example.com/about,
  3. Boost finds “about” in /cache/normal/m.example.com/ -> Static HTML returned OR,
  4. Drupal serves from multisite -> Dynamic HTML returned.

mikeytown2 claims that it should be easy enough to add some logic into the Boost rules based on user agent; he just needs to know what the user agents are. So there is a good chance that Boost users will be able to serve both mobile and desktop from one URI space. From my understanding of the proposed approach, it looks like a single domain will be all that is required.

  1. mobile client does GET http://example.com/about,
  2. Boost looks at User Agent, determines device and uses a different "mobile" cache directory rather than "normal",
  3. Boost finds “about” in /cache/mobile/example.com/ -> Static HTML returned OR,
  4. Drupal serves from single site -> Dynamic HTML returned.

A slightly different approach has been described in Mobile Detection with Varnish and Drupal where Varnish sets a header which can then be read in the webserver or Drupal. This is a neat approach as it means that device logic needn’t be repeated in settings.php. The flow described by Morten Fangel is as follows:

  1. mobile client does GET http://example.com/about,
  2. Varnish looks at User Agent, determines the device and appends it to the hash for the cache key,
  3. Varnish also sets an X-Device header identifying the device,
  4. Varnish finds "about" in cache -> Static HTML returned OR,
  5. Drupal serves from single site -> Dynamic HTML returned.
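On the Drupal side, picking up that header is simple. Below is a minimal sketch of reading it in settings.php; the X-Device header name and its values are assumptions that follow the Varnish setup described above, and the theme name is the one used in the multisite example later in this article.

// settings.php: a minimal sketch, assuming the reverse proxy sends
// "X-Device: mobile" for mobile clients (the header name and values are
// whatever you configure in Varnish).
$device = isset($_SERVER['HTTP_X_DEVICE']) ? $_SERVER['HTTP_X_DEVICE'] : 'normal';
if ($device == 'mobile') {
  $conf['theme_default'] = 'mw_nokia_mobile';
}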

Assuming you don’t have Squid, Varnish or a patched Boost to hand you will probably have a setup as follows:

  1. mobile client does GET http://example.com/about,
  2. Apache rewrite looks at User Agent, determines device and redirects to http://m.example.com/about,
  3. Drupal Core or Boost finds “about” in cache -> Static HTML returned OR,
  4. Drupal serves from multisite -> Dynamic HTML returned.

Sub domain vs Different Domain

If you are going to use a separate site to host the mobile site then you are free to choose whatever domain you like, e.g. example.mobi. However, it is generally recommended to stick with using a sub domain of the desktop site. This confuses users less and it is possible to share cookies across sites on the same domain.

Different theme

As discussed in the previous article, it is possible to serve the same default theme to both mobile and desktop sites and then progressively enhance the desktop site with some extra CSS. The method proposed in Rethinking the Mobile Web at slide 106:

<link href='default.css' type='text/css' rel='stylesheet'
media='screen' />
<link href='desktop.css' type='text/css' rel='stylesheet'
media='screen and (min-device-width:1024px) and (max-width:989px)' />
This is a very cool way to design a site as it keeps things very simple. Mobile is first and then comes the progressive enhancement. However, this isn’t a pattern adopted by most Drupal themes, where the presumption is for the desktop theme. If we did take this approach it would preclude us from using the majority of themes designed for Drupal so far. Given this, I would say that our Drupal site will support two separate themes, one for desktop and one for mobile. The general approach is to use a multisite setup: define a desktop theme as default in the GUI and then override that theme via a tweak in settings.php for the mobile site.

Multisite setup

Assume we are using two domains due to caching requirements. How do we serve this content? Drupal does have a multisite feature built in where a single Drupal “platform” can support many different site instances. These sites can share all data, no data or partial data, depending on how they are setup in settings.php. In the case of a mobile site we would want to share all data between the sites.
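For the record, sharing all data between the two sites boils down to pointing both settings.php files at the same database. A minimal Drupal 6 sketch follows; the database name and credentials are placeholders.

// Used verbatim in both sites/example.com/settings.php and
// sites/m.example.com/settings.php so the two sites share every table.
$db_url = 'mysqli://dbuser:dbpass@localhost/shared_drupal';
$db_prefix = '';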

One possible setup is to create a directory for the desktop and mobile versions under sites/

sites/

  • all/
    • modules/
      • contrib/
      • custom/
        • mw_mobile/
    • themes/
      • base_desktop/
      • base_mobile/
      • mw_desktop/
      • mw_mobile/
  • default/
  • example.com/
    • settings.php
  • m.example.com/
    • settings.php

The only trick to get this to work is to manually set the default theme for the mobile site in the sites/m.example.com/settings.php file. For every page request, the config in settings.php will override the default variables defined in the variables table in the database.

$conf = array(
  'theme_default' => 'mw_nokia_mobile',
);

If you manually set a value like this you won’t be able to change it in the UI, naturally enough. Make sure the theme is active in the GUI.

Alternative 1: Single site with settings.php logic

The above multisite setup will work; however, there is something wrong with it. It will stop you from hosting a true multisite setup where the sites share code but have different databases. This may not worry you if you are only hosting a single site on the platform, but it could be important if you want multisites. Imagine a site for Company X served on example.com and Company Y on example.net. You couldn’t use multisites with the above setup because of the reliance on shared files in default/files.

However, you can achieve a very similar effect with a single site by using a bit of conditional logic in settings.php for example.com and example.net. The idea is to set the theme based on the domain, so that only a single site is needed to support desktop and mobile. Add this to sites/example.com/settings.php:

$parts = explode('.', $_SERVER['HTTP_HOST']);
if ($parts[0] == 'm') {
  $conf = array(
    'theme_default' => 'company_a_mobile',
  );
}

You could then support mobile with a pure sites setup with shared code and different databases/files. This is a good way to go.

sites/

  • all/
    • modules/
      • contrib/
      • custom/
    • themes/
      • base_desktop/
      • base_mobile/
  • default/
    • files/ -> empty
  • example.com/ -> company A
    • files
    • modules
    • themes
      • company_a_mobile
      • company_a_desktop
    • settings.php -> with conditional setting of default theme
  • example.net/ -> company B
    • files
    • modules
    • themes
      • company_b_mobile
      • company_b_desktop
    • settings.php -> with conditional setting of default theme
Related link: multi site a) standard theme, site b) mobile theme – same code and same tables?, a discussion of multisite setups.

Alternative 2: Domain Access

The Domain Access module, discussed later, can set a lot of this up for you including sub domains, domain aliases and themes. You may prefer to use it for convenience, especially if you like configuring stuff in a GUI rather than settings.php or custom modules.

Mobile global variable

Modules are going to want to access a global variable which tells them the device accessing the site: mobile or desktop. There are a variety of ways to do this, some set the variable early, others late:

  1. Custom “X-Device” header set in a reverse proxy
  2. Conf variable set in settings.php
  3. Global variable set by a module during hook_init()
  4. API function offered by a module

It is possible to do this through the use of hook_init() in a custom module. I tried this but ran into problems with timing and module weight. Sometimes you will want the mobile module to be heavy, sometimes light :) In the end I went with an “api” function in my mw_mobile module which stores a static variable. It should be pretty fast and not too cumbersome. Other contrib modules take a similar approach.

/**
 * An API function in a custom module.
 * Efficiently returns whether the site is mobile.
 * Other modules should call it as follows:
 * $mobi = module_exists('mw_mobile') && mw_mobile_is_mobile();
 */
function mw_mobile_is_mobile() {
  static $out;
  if (isset($out)) {
    return $out;
  }
  // Set and return.
  if (substr($_SERVER["SERVER_NAME"], 0, 2) == 'm.') {
    $out = TRUE;
  }
  else {
    $out = FALSE;
  }
  return $out;
}

This approach is perhaps not the best. It may be better to set a global variable very early in the bootstrap process, in settings.php, so that it could be reliably used by all other Drupal code.
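A minimal sketch of that alternative follows; the $is_mobile global and the reliance on an "m." sub domain are illustrations only.

// settings.php: set the flag as early as possible in the bootstrap so that
// all other Drupal code can rely on it. settings.php is included inside
// conf_init(), hence the explicit global declaration.
global $is_mobile;
$is_mobile = (substr($_SERVER['HTTP_HOST'], 0, 2) == 'm.');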

Cross site authentication

It is possible to set cookies up so that they will be sent no matter what the sub domain. In settings.php uncomment the $cookie_domain variable and set it to the domain, excluding the sub domain. Please note that this will not work if you are using different domains.

$cookie_domain = 'example.com';

Redirecting the user to mobile

When a mobile user hits the desktop version of the site you want them to be redirected to the mobile site. There’s at least three ways to do this:

  • PHP
  • JS
  • Apache

The first inclination may be to go with PHP because, after all, we are PHP developers. However, this has the shortcoming of requiring that Drupal be bootstrapped before the PHP can run, destroying the chance to safely cache the page for anonymous users. It’s slow and ineffective. Doing it in PHP therefore isn’t an option. This is the approach some of the mobile modules take, but I think it’s something to be avoided.

You could of course do a client-side check in Javascript for the client’s user agent. This will allow for caching but has the downside of forcing a full page download. Also, not every client will have JS enabled. Not really an option.

The final option of doing it in Apache (or your webserver) is the only viable alternative. I went with a recipe similar to the following in my .htaccess.

# Mobile: force mobile clients across to the mobile site
RewriteCond %{HTTP_HOST} !^m\.(.*)$
RewriteCond %{HTTP_USER_AGENT} !ipad [NC]
RewriteCond %{HTTP_ACCEPT} "text/vnd.wap.wml|application/vnd.wap.xhtml+xml" [NC,OR]
RewriteCond %{HTTP_USER_AGENT} "acs|alav|alca|amoi|audi|aste|avan|benq|bird|blac|blaz|brew|cell|cldc|cmd-" [NC,OR]
RewriteCond %{HTTP_USER_AGENT} "dang|doco|erics|hipt|inno|ipaq|java|jigs|kddi|keji|leno|lg-c|lg-d|lg-g|lge-" [NC,OR]
RewriteCond %{HTTP_USER_AGENT} "maui|maxo|midp|mits|mmef|mobi|mot-|moto|mwbp|nec-|newt|noki|opwv" [NC,OR]
RewriteCond %{HTTP_USER_AGENT} "palm|pana|pant|pdxg|phil|play|pluc|port|prox|qtek|qwap|sage|sams|sany" [NC,OR]
RewriteCond %{HTTP_USER_AGENT} "sch-|sec-|send|seri|sgh-|shar|sie-|siem|smal|smar|sony|sph-|symb|t-mo" [NC,OR]
RewriteCond %{HTTP_USER_AGENT} "teli|tim-|tosh|tsm-|upg1|upsi|vk-v|voda|w3cs|wap-|wapa|wapi" [NC,OR]
RewriteCond %{HTTP_USER_AGENT} "wapp|wapr|webc|winw|winw|xda|xda-" [NC,OR]
RewriteCond %{HTTP_USER_AGENT} "up.browser|up.link|windowssce|iemobile|mini|mmp" [NC,OR]
RewriteCond %{HTTP_USER_AGENT} "symbian|midp|wap|phone|pocket|mobile|pda|psp" [NC]
RewriteCond %{HTTP_USER_AGENT} !macintosh [NC]
RewriteRule ^(.*)$ http://m.%{HTTP_HOST}/$1 [L,R=302]

Related link: .htaccess Mobile Browser Redirect, which outlines the approach taken above.

SSL issues

If you are using SSL on the desktop version of your site then you have a couple of extra hurdles to jump in order to get it working on the mobile site.

Firstly, as it isn’t possible to set up SSL for two different domains on the same IP address, you will probably need to rent a new IP address for the mobile version of the site. Sort this out with your ISP. It should cost you between $1 and $3 a month for another IP. They may have instructions for setting up A records, static routing for your IP addresses, etc.

Secondly, you will also need to sort out another certificate for the mobile site. You could purchase a wildcard certificate for the domain and all sub domains. These cost a fair bit more and will save you from buying a new cert. However, it is probably cheapest to get another cert for the mobile site along with a new IP. You will then need to install the certificate on your server and tweak your site config for the mobile site. This certainly is one of the pains of having two separate sites.

Example prices: PositiveSSL from Comodo, $10 p.a.; Gandi, free the first year and then 12 Euro p.a.

Custom version of robots.txt

A corollary of having a shared database and file system with a multisite install is that you can’t have custom versions of key files such as robots.txt, which sits in the root of your Drupal platform. In the simple case I don’t believe there is any need for a different version; however, if you do need to support different versions then you can do it with a bit of .htaccess magic. Place the following code under the mobile redirect rule. Just be sure to add a robots.txt file at sites/%{HTTP_HOST}/files/robots.txt for each site.

# robots.txt: solve multisite problem of only one robots.txt
# redirects to the file in sites/<domain>/files/robots.txt
RewriteRule ^robots\.txt$ sites/%{HTTP_HOST}/files/robots.txt [L]

Related link: multi-site robots.txt, a GDO discussion of this approach.

Duplicate content, Canonical URLs and robots.txt

You now have two sites where the mobile site replicates all the content of the desktop site. This is a major issue, as search engines such as Google will treat it as duplicate content, leading to declines in ranking. We need to sort this out. Google came up with the concept of a canonical URL which can be defined in a link element in the head of the HTML page. In our case the link points back to the desktop site.

<link rel="canonical" href="http://example.com/about" />

Related link: Specify your canonical, Google's documentation on how to define a canonical URL.

We need every page in the mobile site to support this tag. This can be set in your mobile module:

/**
 * Implementation of hook_init().
 */
function mw_mobile_init() {
  if (!mw_mobile_is_mobile()) {
    return;
  }
  // Add a canonical URL back to the main site. We just strip "m." from the
  // domain. We also change the https to http. This allows us to use a standard
  // robots.txt. ie. no need to noindex the whole of the mobile site.
  $atts = array(
    'rel' => 'canonical',
    'href' => str_replace('://m.', '://', _mw_mobile_url(FALSE)),
  );
  drupal_add_link($atts);
}

/**
 * Current URL, considers https.
 * http://www.webcheatsheet.com/PHP/get_current_page_url.php
 */
function _mw_mobile_url($honour_https = TRUE) {
  $u = 'http';
  if ($_SERVER["HTTPS"] == "on" && $honour_https) {
    $u .= "s";
  }
  $u .= "://";
  if ($_SERVER["SERVER_PORT"] != "80") {
    $u .= $_SERVER["SERVER_NAME"] . ":" . $_SERVER["SERVER_PORT"] . $_SERVER["REQUEST_URI"];
  }
  else {
    $u .= $_SERVER["SERVER_NAME"] . $_SERVER["REQUEST_URI"];
  }
  return $u;
}

The final thing to resolve is whether to set “noindex” on the mobile site. This is definitely an area where there is some confusion on the web. After sniffing around I came to the conclusion that it is OK to allow Google to index the mobile site, so long as the canonical links have been specified. This means that any page rank given to the mobile site will flow to the desktop site and you won’t be punished for duplicate content.

The outcome is that you can go with the same robots.txt for both sites, ie. robots are free to index the mobile site. There is no need to specify a different robots.txt for mobile. You want the same stuff indexed for the mobile as you do with the desktop.

The one exception to this would be the files/ directory. A recent core update to 6.20 allowed files/ to be indexed. Fair enough, you want your public images to be indexed. However, you could raise the case that files/ shouldn’t be indexed in the mobile site, given that there is no way to specify a canonical link for these binary files. So, you may well want to support a different robots.txt for each site by blocking access to files on the mobile site. This is a very minor issue and probably not worth worrying about.

Dec 23 2010
Dec 23

Drush + Bash + Cron: Database Backup Goals

  • Scan the sites directory of a given Drupal install
  • Find all multisite folders/symlinks
  • For each multisite:
  • Use Drush to clear the cache - we don't want the cache tables bloating up the MySQL dump file
  • Use Drush to delete the watchdog logs - we don't want the watchdog table bloating up the MySQL dump file
  • Use Drush to back up the database to a pre-assigned folder
  • Use tar to compress and timestamp the Drush-generated SQL file
  • Set up crontab to run the above commands periodically as a bash file

Assumptions and Instructions

You will need to adjust the Bash file if any of these are not the same on your server

  • Drupal is installed in /var/www/html/drupal
  • Multisites are setup in the /var/www/html/drupal/sites folder
  • Backup folder exists in /var/www/backup/sqldumps
  • Drush is already installed in /root/drush/drush. If drush is not installed follow this Drush installation guide
  • AWK is already installed, if not, type: sudo yum install gawk

Drush Backup BASH file

Copy and paste the code below into a new bash file, ideally in your /root home folder. Make the bash file executable.

#!/bin/bash
#
 
# Adjust to match your system settings
DRUSH=/root/drush/drush
ECHO=/bin/echo
FIND=/usr/bin/find
AWK=/bin/awk
 
# Adjust to match your system settings
docroot=/var/www/html/drupal
backup_dir=/var/www/backup/sqldumps
 
multisites=$1
 
START_TIME=$(date +%Y%m%d%H%M);
 
# Add all multisites for a given docroot into a list. Detects every directory (or symlink) under sites/ that isn't named all or default and doesn't end in .local.
if [ "${multisites}" = "all" ];then
        # If multisites are folders change -type d
        # If multisites are symlinks change -type l
        # Adjust $8 to match your docroot; it needs to be the name of the multisite folder/symlink
        multisites_list="`$FIND ${docroot}/sites/* -type l -prune | $AWK -F \/ '$8!="all" && $8!="default" && $8!~/\.local$/ { print $8 }'`"
else
        multisites_list=$multisites
fi
 
 
# Must be in the docroot directory before proceeding.
cd $docroot
 
for multisite in $multisites_list
do
        # Echo to the screen the current task.
        $ECHO
        $ECHO "##############################################################"
        $ECHO "Backing up ${multisite}"
        $ECHO
        $ECHO
 
        # Clear Drupal cache
        $DRUSH -y -u 1 -l "${multisite}" cc all
 
        # Truncate Watchdog
        $DRUSH -y -u 1 -l "${multisite}" wd-del all
 
        # SQL Dump DB
        $DRUSH -u 1 -l "${multisite}" sql-dump --result-file="${backup_dir}"/"${multisite}".sql
 
        # Compress the SQL Dump
        tar -czv -f "${backup_dir}"/"${START_TIME}"-"${multisite}".tar.gz -C "${backup_dir}"/ "${multisite}".sql
 
        # Delete original SQL Dump
        rm -f "${backup_dir}"/"${multisite}".sql
 
        $ECHO
        $ECHO
        $ECHO "Finished backing up ${multisite}"
        $ECHO
        $ECHO "##############################################################"
 
done

Setup Crontab

Assuming your bash file containing the code above is saved as /root/drush_backup.sh, you can set up a crontab for the root user. The script takes the multisite list as its first argument; pass all to back up every detected multisite.

crontab -e
1 1 * * * /root/drush_backup.sh all

Feb 14 2010
Feb 14

Since the release of Aegir 0.3 I've been using Drupal's multisite configuration like crazy, mainly because Aegir uses it but also because it just makes sense. Starting a site in multisite works great, but moving an existing site to a multisite configuration can cause some problems.

For example, on a regular Drupal install the images and file uploads are stored in the "sites/default/files" directory. On a multisite install, the files/images are stored in the "sites/example.com/files" directory. This causes a problem because the links to most images/files that are in content will still point to the old location, so you end up with a site full of broken images.

If I only have 4 images on the whole site that's not really a big deal, but if I have hundreds of images in hundreds of nodes I don't really want to fix each one individually.

Using Search and Replace in phpMyAdmin to Quickly Fix Links

The command for mysql that allows you to a search and replace is pretty simple:

update [table_name] set [field_name] = replace([field_name],'[string_to_find]','[string_to_replace]')

This isn't completely dummy-proof, as you need to know which table and field you want to search/replace.

Most of my images are in the node body, which is stored in the "node_revisions" table in the "body" field. I am going to replace "sites/default/files" with "sites/example.com/files" wherever it occurs.

UPDATE node_revisions set body = replace (body, 'sites/default/files', 'sites/example.com/files');

Simple enough. I also want to update the stored file paths in the files table:

UPDATE files set filepath = replace (filepath, "sites/default/files", "sites/example.com/files");

I can run both of these at the same time: in phpMyAdmin, select your database, click the SQL tab, and run the commands there.

That's it for the basics. You may need to do other tables besides these two depending on what else you've got going on, but this should help you out.
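If you'd rather run the same replacements from code (say, in an update hook or via drush php-eval) instead of phpMyAdmin, here is a minimal Drupal 6 sketch; the table names and paths are just the example values used above:

<?php
// A minimal sketch using the Drupal 6 database API to run the same
// search and replace as the phpMyAdmin queries above.
$old = 'sites/default/files';
$new = 'sites/example.com/files';
db_query("UPDATE {node_revisions} SET body = REPLACE(body, '%s', '%s')", $old, $new);
db_query("UPDATE {files} SET filepath = REPLACE(filepath, '%s', '%s')", $old, $new);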

Imagecache Issues

To fix these issues, I had to do something like the following:

WARNING: I have not tested this code on D5 (all of the above was tested on D6), but it should be similar on D5.

UPDATE files set filepath = replace (filepath, 'sites/old.example.com/file/', 'sites/new.example.com/file/');

I had to remove all the sites/example.com/file/ prefixes (don't forget the trailing slash on this one) from all the links.

Test this first

Always do this on a test site first and have backups. Don't just start doing search and replace across your sites. That = bad.

Other things to think about

  • site logo
  • images in comments
  • images in things other than comments and nodes

Am I missing anything?

May 03 2006
May 03

One of the great features of Drupal is its ability to run any number of sites from one base installation, a feature generally referred to as multisites. Creating a new site is just a matter of creating a settings.php file and (optionally) a database to go with your new site. That's it. More importantly, there's no need to set up complicated Apache virtual hosts, which are a wonderful feature of Apache but can be very tricky and tedious, especially if you're setting up a large number of subsites.

The one downside is that, with every subsite served from the same virtual host, all of your sites share a single Apache access log, which makes per-site log analysis awkward. No worries, there is a solution.

Create a new LogFormat

Copy the LogFormat of your choice, prepend the HTTP host field, and give it a name:

LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vcombined 

Get the script

Next, download the attached script (split-logfile) and store it somewhere like /usr/bin (don't forget to chmod 755 that baby!)

Now, tell Apache to pipe logfiles to your script rather than writing them directly to disk:

CustomLog "| /usr/bin/split-logfile" vcombined 

Restart Apache

/etc/rc.d/init.d/httpd restart

That's it.

Naturally, you may have to modify split-logfile if you don't store your logfiles in the default location.
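For reference, here is a minimal sketch of what such a split-logfile script might do, written in PHP; the original attached script may differ, and the log directory below is an assumption.

#!/usr/bin/php
<?php
// Minimal sketch of a piped-log splitter: Apache writes each "vcombined"
// line to our stdin, and the first field is the Host header we prepended
// in the LogFormat. Strip it and append the rest to a per-site log file.
$log_dir = '/var/log/httpd';  // Assumption: adjust to your log location.
while (($line = fgets(STDIN)) !== FALSE) {
  $parts = explode(' ', $line, 2);
  if (count($parts) < 2) {
    continue;
  }
  list($host, $rest) = $parts;
  // Basic sanitising so a bogus Host header can't write outside $log_dir.
  $host = preg_replace('/[^a-zA-Z0-9._-]/', '', $host);
  if ($host === '') {
    $host = 'unknown';
  }
  file_put_contents("$log_dir/$host-access_log", $rest, FILE_APPEND);
}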

 

 

Apr 21 2006
Apr 21

This article explains a practical implementation of a technique outlined in the article "Sharing Drupal tables between databases using MySQL5 Views".

Problem

You have multiple (multisite) Drupal sites and you would like to manage the content for all of these sites through a single interface. Depending on the nature of a given piece of content, you may want the content published on one, several or all of your subsites, but you do not want to have to create copies of the same content for each site.

Solution

Taxonomy plus MySQL5 views. (NOTE: this solution will not work with versions of MySQL prior to 5.)

Assuming you have your subsites properly set up and running, the first step is to create a special vocabulary which you will use to target content.

Go to [your site's baseurl]/admin/taxonomy/add/vocabulary and create a vocabulary. We'll call it simply "sites".

Next, go back to your taxonomy page (/admin/taxonomy) and select "edit vocabulary" for the "sites" vocabulary.

Add a name for each of the subsites you would like to manage. For our example, we'll have two subsites, "foo" and "bar", and one master site, "master".

Now add at least three pieces of test content. Target one piece of content for each of foo, bar and both.

Next, we're going to create a node view for each of our subsites that we'll use to replace the actual node table.

The SQL is as follows:

CREATE VIEW [subsite, eg. "foo"]_node AS
SELECT n.*
FROM node n, term_data td, term_node tn, vocabulary v
WHERE v.name = '[vocabulary name, eg. "sites"]'
  AND td.vid = v.vid
  AND td.name = '[subsite vocab term, eg. "foo"]'
  AND td.tid = tn.tid
  AND n.nid = tn.nid;

Because the terms that serve as our subsite labels may very well exist within other vocabularies, we also need to join on the vocabulary table to ensure our solution works reliably.

Finally, we need to have our subsites use the views we have created instead of our master nodes table, which only the "master" site will have access to directly.

In your drupal's sites directory, you should have directories that correspond to each of your drupal sites (both master and subsites). Edit the settings.php file for each of your subsites, and use the db_prefix variable to point the site to your view. So sites/foo.example.com/settings.php would contain the following:

$db_prefix = array( 'node' => 'foo_', );

At this point, you'll want to disable creation of content from within each of your subsites. You can do this from the admin/access page. If you attempt to create content from within the subsites, you'll likely get a 'duplicate key' error.

I hope that explanation is clear; these articles are written rather hastily. If you have questions or suggestions regarding this solution, please leave a comment.

Jan 07 2006
Jan 07

After I was back from a short vacation without Internet access, I was surprised to see a post by Robert Douglass titled "Uwe Hermann's Drupal article in PHP Solutions is out" on the drupal.org main page. At that point I didn't even know that the article had been published ;-)

Robert wrote a short review of the article which outlines the main content. In contrast to my last article which was a general introduction to Drupal, this one concentrated on some specific (probably more advanced) topics instead: multi-site install, l10n+i18n, search engine optimization, and AJAX(-like) user interfaces of the upcoming Drupal 4.7 release.

The printed article should be available in multiple languages (German, French, Polish and Italian, at least). There might be a free online version as soon as the next issue of PHP Solutions is published, but I'm not entirely sure.

Thanks a lot to Robert Douglass for the review and to Drupal's Benevolent Dictator for Life Dries Buytaert who reviewed an early draft of my article and provided helpful suggestions!

About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
