Feb 21 2017

My last post talked about how Docker microcontainers speed up the software development workflow. Now it’s time to dive into how all this applies to Drupal.

I created a collection of Docker configuration files and scripts to make it easy to run Drupal. If you want to try it out, follow the steps in the README file.

The repository is designed around the microcontainers concept, so each Drupal site ends up with three containers of its own (Apache, MySQL, and Drush), which are linked together to run the application. If you want to serve a new site, you create a separate set of containers.

In theory you could re-use containers for different web applications. In practice, however, Docker containers are resource-cheap and easy to spin up, so it’s less work to run separate containers for separate applications than it is to configure each application to play nicely with the other applications running in the same container (e.g. configuring VirtualHosts and port mappings). Or at least this is what my colleague M Parker believes.

Plus, configuring applications to play nice with each other in the same container kind of violates the “create once, run anywhere” nature of Docker.

How it works

My repository uses the docker-compose program. Docker-compose is controlled by the docker-compose.yml file, which tells Docker which containers to start, how to network them together so they serve Drupal, and how to connect them to the host machine: sharing the Drupal repository filesystem with the containers, and mapping a port on the host machine to a port in one of the containers.
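
To make this concrete, here is a minimal sketch of what such a docker-compose.yml might look like. The image names, service names, ports, and credentials below are illustrative assumptions, not the exact contents of my repository:

# Illustrative docker-compose.yml sketch; images, names, and credentials
# are examples only.
version: '2'
services:
  web:
    image: php:7-apache        # Apache + PHP container that serves Drupal
    ports:
      - "8080:80"              # map host port 8080 to port 80 in the container
    volumes:
      - .:/var/www/html        # mount the Drupal codebase from the host
    links:
      - db
  db:
    image: mysql:5.7           # MySQL container
    environment:
      MYSQL_DATABASE: drupal
      MYSQL_USER: drupal
      MYSQL_PASSWORD: drupal
      MYSQL_ROOT_PASSWORD: root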

A useful tip to remember is that docker-compose ps will tell you the port mappings as shown in the screenshot below. This is useful if you don’t map them explicitly to ports on the host machine.

[Screenshot: docker-compose ps output showing the port mappings]

Networking

If you’ve ever tried setting up a bunch of containers manually (without docker-compose), it is worth noting (and not very well documented in the Docker docs, unfortunately) that you don’t need to explicitly map port 3306:3306 for the MySQL container. Docker-compose sets up a miniature network for containers run from the same docker-compose.yml, and it also sets up hostnames for each container in that file. This means the web container can refer to the MySQL container by its service name (say, mysql-server), and even if port 3306 is only mapped implicitly to some random port on the host machine, web can still talk to mysql-server on port 3306 over that internal network.

Note that in this case the container running MySQL is named db, so when you’re installing Drupal, on step 4 (“Database configuration”) of the Drupal 7 install script, you have to expand “Advanced options” and change “Database host” from localhost to db!
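
If you want to verify that the containers can actually see each other, a quick sanity check from the host might look like the following. This is a sketch that assumes the service names from the example above, a web image that includes getent, and (for the second command) a MySQL client installed in the web container:

# Check that the hostname "db" resolves on the Compose network from inside
# the web container (assumes getent is available in the image).
docker-compose exec web getent hosts db

# If a MySQL client is installed in the web container, you can connect
# directly on port 3306 using the service name as the host.
docker-compose exec web mysql -h db -u drupal -p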

Filesystem

It is possible to put the Drupal filesystem into a container (which you might want to do if you wanted to deploy a container to a public server). However, it doesn’t really make sense for development, because most of the time, you’re changing the files quite frequently.

To get around this requirement for a development environment, we mount the current folder (‘.’) to /var/www/html in all three containers, using the ‘volumes’ directive in the docker-compose.yml file. The ‘working_dir’ directive says “when you run the Drush command in the Drush container, pretend it’s running from /var/www/html”, which is the equivalent of running ‘cd /var/www/html’ before you run a Drush command.
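
For reference, the relevant fragment of the Drush container’s service definition might look something like this sketch (the details are illustrative, not the exact file):

# Fragment of the services: section of docker-compose.yml (sketch only).
  drush:
    image: mparker17/mush
    volumes:
      - .:/var/www/html          # same mount point as the web container
    working_dir: /var/www/html   # run drush as if from the Drupal root
    links:
      - db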

So when you run the Drush command in the Drush container, Drush sees that it’s currently in a Drupal directory and loads the database connection information from sites/default/settings.php, which tells it how to connect to MySQL on the db container with the correct credentials. (Recall that the links directive makes sure the drush container can reach the db container, so it can connect to it on port 3306.)
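
For a Drupal 7 site, the relevant part of sites/default/settings.php would look something like the following sketch; the credentials are placeholders and must match whatever your MySQL container is configured with:

// Sketch of the database settings in sites/default/settings.php (Drupal 7).
// The credentials are examples; the important part is that 'host' is the
// MySQL container's service name ('db'), not 'localhost'.
$databases['default']['default'] = array(
  'driver'   => 'mysql',
  'database' => 'drupal',
  'username' => 'drupal',
  'password' => 'drupal',
  'host'     => 'db',
  'port'     => '3306',
  'prefix'   => '',
);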

The Drush container

The drush container is a bit special because it runs a single command, and is re-created every time a Drush command is used.

If you look at step 9 of my Docker configuration files, you’ll see it says…

  • Run Drush commands with:
USER_ID=$(id -u) docker-compose run --rm drush $rest_of_drush_command

… i.e.: docker-compose run --rm drush starts the container named drush and passes it $rest_of_drush_command.

[Screenshot: running a Drush command in its own container]

If you look at mparker17’s Dockerfile, you’ll see it contains the line ‘ENTRYPOINT [“drush”]’. ENTRYPOINT is a variant of the CMD instruction which passes all of the remaining ‘docker run’ arguments to the command specified by the ENTRYPOINT line.
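
A Dockerfile built around this pattern might look roughly like the following sketch; the base image and the (omitted) Drush installation step are assumptions, not the contents of the actual Dockerfile:

# Sketch of a drush-only image; the base image and the way Drush is
# installed are assumptions.
FROM php:7-cli

# ... install Drush 8 here, e.g. by downloading drush.phar and making it
# executable ...

WORKDIR /var/www/html

# ENTRYPOINT makes the container behave like the drush executable: any
# arguments passed to `docker run` (or `docker-compose run drush ...`)
# are appended to this command.
ENTRYPOINT ["drush"]

# CMD supplies the default arguments used when none are given.
CMD ["--version"]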

So what happens when you run that ‘docker-compose run’ line is that it creates a new container from the ‘mparker17/mush’ image, with all the configuration from the ‘docker-compose.yml’ file. When that container runs, it automatically runs the ‘drush’ command, and docker-compose passes ‘$rest_of_drush_command’ to it. When the ‘drush’ command is finished, the container stops, and the ‘--rm’ flag we specified deletes the container afterwards.

Running USER_ID=$(id -u) before a command sets an environment variable that persists only for that command; i.e. when docker-compose runs, an environment variable $USER_ID exists, but it goes away once docker-compose has finished. You can leave out the USER_ID=$(id -u) prefix if you add that line to your shell’s configuration. Essentially, that environment variable sets the user account that the Drush command runs as. If you don’t specify a user account, Docker defaults to root.
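
For example, assuming the docker-compose.yml passes the variable through to the Drush container (e.g. via a user directive), the setup might look like this:

# Add this to your shell configuration (e.g. ~/.bashrc or ~/.zshrc) so the
# variable is always set and you can drop the USER_ID=$(id -u) prefix.
export USER_ID=$(id -u)

# With that in place, running a Drush command becomes simply:
docker-compose run --rm drush status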

The main reason I do this is so that when I ask Drush to make changes to the filesystem (e.g. download a module, run drush make, etc.), the files are owned by me rather than by root (i.e. so I don’t have to go around fixing ownership and permissions after I run the drush command).

It may only be necessary on Windows/Macintosh, because the virtual machine that Docker runs in on Win/Mac uses different user IDs. I think if you run a Docker command from a Linux machine, your user ID is already correct; but because a Docker command on Mac/Win runs with your Mac/Win user ID (e.g. 501) and gets passed to the Docker VM’s ‘docker’ user (which runs as user 1000), problems arise unless you’re explicit about it.

Acknowledgements

Lastly, I would like to thank Matt Parker, who has been mentoring me since I started setting up Docker and showing me better ways to do it. He also recommends reading the Docker book if you want to explore this further.

Feb 15 2017

We use Docker for our development environments because it helps us adhere to our commitment to excellence. It ensures an identical development platform across the team while also achieving parity with the production environment. These efficiency gains (among others we’ll share in an ongoing Docker series) over traditional development methods enable us to spend less time on setup and more time building amazing things.

Part of our workflow includes a mechanism to establish and update the seed database which we use to load near-real-time production content to our development environments as well as our automated testing infrastructure. We’ve found it’s best to have real data throughout the development process, rather than using stale or dummy data which runs the risk of encountering unexpected issues toward the end of a project. One efficiency boon we’ve recently implemented and are excited to share is a technique that dramatically speeds up database imports, especially large ones. This is a big win for us since we’re often importing large databases multiple times a day on a project. In this post we’ll look at:

  • how much faster data volume imports are compared to traditional database dumps piped to mysql
  • how to set up a data volume import with your Drupal Docker stack
  • how to tie in this process with your local and continuous integration environments

The old way

The way we historically imported a database was to pipe a SQL database dump file into the MySQL command-line client:

mysql -u{some_user} -p{some_pass} {database_name} < /path/to/database.sql

An improvement upon the default method above which we’ve been using for some time allows us to monitor import progress utilizing the pv command. Large imports can take many minutes, so having insight into how much time remains is helpful to our workflow:

pv /path/to/database.sql | mysql -u{some_user} -p{some_pass} {database_name}

On large databases, though, MySQL imports can be slow. If we look at a database dump SQL file, we can see why. For example, the 19 MB database dump file we use in one of the test cases later in this post contains instructions like these:

--
-- Table structure for table `block_content`
--

DROP TABLE IF EXISTS `block_content`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `block_content` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `revision_id` int(10) unsigned DEFAULT NULL,
  `type` varchar(32) CHARACTER SET ascii NOT NULL COMMENT 'The ID of the target entity.',
  `uuid` varchar(128) CHARACTER SET ascii NOT NULL,
  `langcode` varchar(12) CHARACTER SET ascii NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `block_content_field__uuid__value` (`uuid`),
  UNIQUE KEY `block_content__revision_id` (`revision_id`),
  KEY `block_content_field__type__target_id` (`type`)
) ENGINE=InnoDB AUTO_INCREMENT=12 DEFAULT CHARSET=utf8mb4 COMMENT='The base table for block_content entities.';
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Dumping data for table `block_content`
--

LOCK TABLES `block_content` WRITE;
/*!40000 ALTER TABLE `block_content` DISABLE KEYS */;
set autocommit=0;
INSERT INTO `block_content` VALUES (1,1,'basic','a9167ea6-c6b7-48a1-ac06-6d04a67a5d54','en'),(2,2,'basic','2114eee9-1674-4873-8800-aaf06aaf9773','en'),(3,3,'basic','855c13ba-689e-40fd-9b00-d7e3dd7998ae','en'),(4,4,'basic','8c68671b-715e-457d-a497-2d38c1562f67','en'),(5,5,'basic','bc7701dd-b31c-45a6-9f96-48b0b91c7fa2','en'),(6,6,'basic','d8e23385-5bda-41da-8e1f-ba60fc25c1dc','en'),(7,7,'basic','ea6a93eb-b0c3-4d1c-8690-c16b3c52b3f1','en'),(8,8,'basic','3d314051-567f-4e74-aae4-a8b076603e44','en'),(9,9,'basic','2ef5ae05-6819-4571-8872-4d994ae793ef','en'),(10,10,'basic','3deaa1a9-4144-43cc-9a3d-aeb635dfc2ca','en'),(11,11,'basic','d57e81e8-c613-45be-b1d5-5844ba15413c','en');
/*!40000 ALTER TABLE `block_content` ENABLE KEYS */;
UNLOCK TABLES;
commit;

When we pipe the contents of the MySQL database dump to the mysql command, the client processes each of these instructions sequentially in order to (1) create the structure for each table defined in the file, (2) populate the database with data from the SQL dump, and (3) do post-processing work, like creating indices, to ensure the database performs well. The example here processes pretty quickly, but if your site has a lot of historic content, as many of our clients’ sites do, then the import process can take enough time that it throws a wrench into our rapid workflow!

What happens when mysql finishes importing the SQL dump file? The database contents (often) live in /var/lib/mysql/{database}, so for example for the block_content table mentioned above, assuming you’re using the typically preferred InnoDB storage engine, there are two files called block_content.frm and block_content.ibd in /var/lib/mysql/{database}/. The /var/lib/mysql directory will also contain a number of other directories and files related to the configuration of the MySQL server.
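
If you are curious, you can peek at those files from the host with a one-off command against the database container; drupal_database (the container name used in the examples below) and {database} are placeholders for your own names:

# List the per-table files MySQL keeps for the database inside the
# container; drupal_database and {database} are placeholders.
docker exec drupal_database sh -c 'ls -lh /var/lib/mysql/{database}/'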

Now, suppose that instead of sequentially processing the SQL instructions contained in a database dump file, we were able to provide developers with a snapshot of the /var/lib/mysql directory for a given Drupal site. Could swapping in that snapshot be faster than the traditional database import methods? Let’s have a look at two test cases to find out!

MySQL import test cases

The table below shows the results of two test cases, one using a 19 MB database and the other using a 4.7 GB database.

Method                       Database size   Time to drop tables and restore (seconds)
Traditional mysql            19 MB           128
Docker data volume restore   19 MB           11
Traditional mysql            4.7 GB          606
Docker data volume restore   4.7 GB          85

In other words, the MySQL data volume restore completes, on average, in about 11% of the time a traditional MySQL dump import takes, or roughly 9 times faster!

Since a GIF is worth a thousand words, compare these two processes side-by-side (both are using the same 19 MB source database; the first is using a data volume restore process while the second is using the traditional MySQL import process). You can see that the second process takes considerably longer!

[GIF: Docker data volume restore]

[GIF: Traditional MySQL database dump import]

Use MySQL volume for database imports with Docker

Here’s how the process works. Suppose you have a Docker stack with a web container and a database container, and that the database container has data in it already (your site is up and running locally). Assuming a database container name of drupal_database, to generate a volume for the MySQL /var/lib/mysql contents of the database container, you’d run these commands:

# Stop the database container to prevent read/writes to it during the database
# export process.
docker stop drupal_database
# Now use the carinamarina/backup image with the `backup` command to generate a
# tar.gz file based on the `/var/lib/mysql` directory in the `drupal_database`
# container.
docker run --rm --volumes-from drupal_database carinamarina/backup backup \
--source /var/lib/mysql/ --stdout --zip > db-data-volume.tar.gz

With the 4.7 GB sample database above, this process takes 239 seconds and results in a 702 MB compressed file.

We’re making use of the carinamarina/backup image produced by Rackspace to create an archive of the database files.

You can then distribute this file to your colleagues (at Savas Labs, we use Amazon S3), or make use of it in continuous integration builds (more on that below), using these commands:

# Copy the data volume tar.gz file from your team's AWS S3 bucket.
if [ ! -f db-data-volume.tar.gz ]; then aws s3 cp \
s3://{your-bucket}/mysql-data-volume/db-data-volume.tar.gz db-data-volume.tar.gz; fi
# Stop the database container to prevent read/writes during the database
# restore process.
docker stop drupal_database
# Remove the /var/lib/mysql contents from the database container.
docker run --rm --volumes-from drupal_database alpine:3.3 rm -rf /var/lib/mysql/*
# Use the carinamarina/backup image with the `restore` command to extract
# the tar.gz file contents into /var/lib/mysql in the database container.
docker run --rm --interactive --volumes-from drupal_database \
carinamarina/backup restore --destination /var/lib/mysql/ --stdin \
--zip < db-data-volume.tar.gz
# Start the database container again.
docker start drupal_database

So, not too complicated, but it does require a change in how you generate the seed databases you distribute to your team for local development and to your CI builds. Instead of using mysqldump to create the seed database file, you’ll use the carinamarina/backup image to create the .tar.gz file for distribution. And instead of mysql {database} < database.sql, you’ll use carinamarina/backup to restore the data volume.

In our team’s view this is a small cost for the enormous gains in database import time, which in turn boosts productivity to the tune of faster CI builds and refreshes of local development environments.

Further efficiency gains: integrate this process with your continuous integration workflow

The above steps can be manually performed by a technical lead responsible for generating and distributing the MySQL data volume to team members and your testing infrastructure. But we can get further productivity gains by automating this process completely with Travis CI and GitHub hooks. In outline, here’s what this process looks like:

1. Generate a new seed database SQL dump after production deployments

At Savas Labs, we use Fabric to automate our deployment process. When we deploy to production (not on a Docker stack), our post-deployment tasks generate a traditional MySQL database dump and copy it to Amazon S3:

def update_seed_db():
    run('drush -r %s/www/web sql-dump \
    --result-file=/tmp/$(date +%%Y-%%m-%%d)--post-deployment.sql --gzip \
    --structure-tables-list=cache,cache_*,history,search_*,sessions,watchdog' \
    % env.code_dir)
    run('/usr/local/bin/aws s3 cp /tmp/$(date +%Y-%m-%d)--post-deployment.sql.gz \
    s3://{bucket-name}/seed-database/database.sql.gz --sse')
    run('rm /tmp/$(date +%Y-%m-%d)--post-deployment.sql.gz')

2. When work is merged into develop, create a new MySQL data volume archive

We use git flow as our collaboration and documentation standard for source code management on our Drupal projects. Whenever a developer merges a feature branch into develop, we update the MySQL data volume archive dump for use in Travis CI tasks and local development. First, there is a specification in our .travis.yml file that calls a deployment script:

deploy:
  provider: script
  script:
    - resources/scripts/travis-deploy.sh
  skip_cleanup: true
  on:
    branch: develop

And the travis-deploy.sh script:

#!/usr/bin/env bash

set -e

make import-seed-db
make export-mysql-data
aws s3 cp db-data-volume.tar.gz \
s3://{bucket-name}/mysql-data-volume/db-data-volume.tar.gz --sse

This script: (1) imports the traditional MySQL seed database file from production, and then (2) creates a MySQL data volume archive. We use a Makefile to standardize common site provisioning tasks for developers and our CI systems.
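
The two Makefile targets referenced by travis-deploy.sh could look something like the sketch below. The target names match the script above, but the recipes are an illustrative sketch rather than our actual Makefile (note that Makefile recipe lines must be indented with tabs):

# Sketch of the Makefile targets used by travis-deploy.sh (illustrative).
import-seed-db:
	# Fetch the latest traditional SQL dump from production and load it
	# into the running database container.
	aws s3 cp s3://{bucket-name}/seed-database/database.sql.gz database.sql.gz
	gunzip -f database.sql.gz
	pv database.sql | docker exec -i drupal_database \
		mysql -u{some_user} -p{some_pass} {database_name}

export-mysql-data:
	# Stop the database container and archive /var/lib/mysql from it.
	docker stop drupal_database
	docker run --rm --volumes-from drupal_database carinamarina/backup backup \
		--source /var/lib/mysql/ --stdout --zip > db-data-volume.tar.gz
	docker start drupal_database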

3. Pull requests and local development make use of the MySQL data volume archive

Now, whenever developers want to refresh their local environment by wiping the existing database and re-importing the seed database, or when a Travis CI build is triggered by a GitHub pull request, these processes can make use of an up-to-date MySQL data volume archive file which is super fast to restore. This way, we ensure we’re always testing against the latest content and configuration, and we avoid the costly troubleshooting that comes from inconsistencies with production.

Conclusion

We’ve invested heavily in Docker for our development stack, and this workflow update is a compelling addition to that toolkit, since it has substantially sped up MySQL imports and boosted productivity. Try it out in your Docker workflow; we welcome comments with any questions and would love to hear about your successes. Stay tuned for further Docker updates!

Feb 11 2017

Docker, a container-based technology I recently came across, is great for setting up environments. It was first introduced to the world by Solomon Hykes, founder and CEO of dotCloud, at the Python Developers Conference in Santa Clara, California, in March 2013. The project was quickly open-sourced and made available on GitHub, where anyone can download it and contribute to it.

Containers vs. Virtual Machines

You might be wondering, “What is the difference between Containers (like Docker) and Virtual Machines”?

Well, virtual machines (VMs) work by creating a virtual copy of a computer’s hardware and running a full operating system on that virtual hardware. Each new VM you create results in a new copy of that virtual hardware, which is computationally expensive. Many people use VMs because they let you run an application in a separate environment with its own versions of software and settings, different from those on the host machine.

Container technologies like Docker, on the other hand, isolate the container’s environment, software, and settings in a sandbox, but all sandboxes share the same operating-system kernel and hardware as the host computer. Each new container simply creates a new sandbox. This lets us pack many more applications onto a single physical server than we could with virtual machines.

Docker containers are isolated enough that the root process in a container cannot see the host machine’s processes or filesystem. However, it may still be able to make certain system calls to the kernel that a regular user could not, because in Docker the kernel is shared with the host machine. This is also why Docker containers are not virtual machines, and why they are a lot faster.

Note, however, that Docker relies on technology which is only available in the Linux kernel. When you run Docker on a Windows or Macintosh host machine, Docker and all its containers run inside a virtual machine.

That said, there are two projects trying to bring Docker-style containers natively to OS X: Dlite and Xhyve. But last I heard, these projects were still very experimental, so consider yourself warned.

When you are done with a container on a Mac host machine, it’s probably good to suspend it, because the containers run inside a virtual machine, which carries a lot of overhead. On a Linux host machine there is no need to suspend containers, because they don’t create much additional overhead (no more than, say, MAMP).

Docker is a tool that promises to scale into any environment, streamlining the workflow and responsiveness of agile software organizations.

Docker’s Architecture

This diagram explains the basic client-server architecture that Docker uses.

[Diagram: Docker architecture]

Image source: http://www.docker.com

Important Terminology

  • Docker daemon: the Docker engine, which runs on the host machine as shown in the image above.
  • Docker client: the Docker CLI, which is used to interact with the daemon.

Workflow components

  • Docker image: a read-only disk image in which the environment and your application reside.
  • Docker container: a read/writeable instance of an image, which you can start, stop, move, and delete.
  • Docker registry: a public or private repository for storing images.
  • Dockerfile: a set of instructions for building a single image. You can think of a Dockerfile as a kind of Vagrantfile, a single Chef cookbook, an Ansible script, or a Puppet script (see the example below).
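
To see how these pieces fit together in practice, here is a minimal sketch of the day-to-day commands that connect them; the image, container, and registry names are placeholders:

# Build an image from the Dockerfile in the current directory and tag it
# (my-org/my-app is a placeholder name).
docker build -t my-org/my-app:latest .

# Start a container: a read/writeable instance of that image.
docker run -d --name my-app-container my-org/my-app:latest

# Stop and delete the container when you are done with it.
docker stop my-app-container && docker rm my-app-container

# Push the image to a registry so others can pull it.
docker push my-org/my-app:latest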

Microservices

Because Docker allows you to run so many containers at the same time, it has popularized the idea of microservices: a collection of containers, each of which contains a single program, all of which work together to run a complex application (e.g. Drupal).

Taking Drupal as an example, every Drupal site has at least two dependencies: an HTTP server (Apache, Nginx, etc.) running PHP, and MySQL. The microservices approach packages Apache+PHP separately from MySQL, whereas most Drupal virtual machine images bundle them together in the same VM. For more complicated setups, you could add another container for Solr, another container for LDAP, and so on.

For me, the main advantage of using microservices is that it’s easier to update or swap one dependency of an application without affecting the rest of it. Another way of looking at this is that microcontainers make it easier to modify one piece without waiting a long time for the virtual machine to rebuild.

When I was using a virtual machine on a particularly complex project, if I needed to make a change to a setting, I had to make that change in the Puppet config, then run vagrant destroy && vagrant up and wait two hours for it to tell me that the new configuration wasn’t compatible with some other piece of the system. At which point I had to repeat the two hour process, which wasted a lot of time.

If I had been using Docker (properly), I could have just changed the setting for that one program, rebuilt that program’s container (about 5 seconds), and not had to worry that one piece of the machine needed at least Java 6 while another piece could not work without Java 5.

Now that you know the possibilities with Docker, watch this space to find out how all this applies to Drupal.

Nov 09 2013

Imagine how a new Drupalist feels trying to work in the Drupal core issue queue, or even just to catch up with the changes, when a dozen comments are posted every day. It takes a while to get used to the issue queue, and you’d better check for updates every day if you’re following a hundred issues, or else you’ll quickly feel overwhelmed.

This community is so vibrant and awesome that many issues easily gather several hundred comments, ideas, suggestions, and opinionated debates over a preferred architecture. All of this is obviously great, but it also makes things a major pain to track.

We already have the fantastic dreditor to review patches efficiently, and it did solve an outstanding productivity and usability issue. But we don’t really have a way to remember what was said in comment #32 when an issue has 400+ comments, do we? One could argue that we do, and that it’s called the issue summary. I’d have to disagree. The issue summary is excellent and enables anybody to quickly grasp what any given issue is all about. But obviously, to keep it manageable and short, you have to make compromises and only summarize things. Very often, though, I read thoughtful comments that are simply lost at some point because they don’t necessarily need to be captured in the issue summary. Should we simply forget about them completely? I don’t think so, as they often teach you something you didn’t know and open up your horizons to new tools and methodologies.

I’ve experimented with the Diigo annotation tool to create my very own issue queue management system, and I’d like to explain why I can’t live without it now. First things first: Diigo is not mainly an annotation tool. They actually call it a “multi-tool for knowledge management”, as it enables you to build a personal library of links, images, and notes by archiving web pages via a browser extension. With Diigo’s browser tools, you can use either a browser extension or a bookmarklet, depending on your preferences and habits.

[Screenshot: the Diigo Chrome extension pop-up menu]

But what I’d like to show you today is how annotations can dramatically increase your productivity and awareness on any given d.o. issue. Annotations give you an on-page highlighting tool that will immediately catch your eye the next time you visit the page and the annotations are displayed again. Needless to say, when an issue has several hundred comments and you took the time to read them all three months ago, you’re certainly not going to read them all over again, and you definitely won’t remember many of the important discussions that took place. Context is key, and with annotations you can create your very own issue summary, one that is relevant to YOUR way of tracking things. The next time you come back to an issue, you’ll quickly remember what was of interest and what the next steps are to move forward.

[Screenshot: Diigo annotations and sticky notes]

You can highlight text with different colors and thus come up with some sort of standardization: for instance, yellow could be for informational text only, red to outline problems, and green for potential solutions. It’s a perfect visual reminder of what you need to read first and what is especially important to you. But there’s more. With Diigo’s annotation tool, you can also create sticky notes wherever you like on the page. When you hover over a sticky note, it shows up immediately; it can serve as a repository for an upcoming comment you want to make on the issue, or simply give context to any text you’ve highlighted.

Diigo changed the way I interact with the issue queue, and I strongly encourage you to give it a try. Hope you like it!

Sep 18 2013

Now that everybody is looking into virtualization, VMs, and even containers to speed up development, I wanted to step back a little and go back to basics by creating a simple bash script to set up a dummy Drupal 8 environment on Linux or Mac OS. Since only Drush 8 is compatible with Drupal 8 anyway, this is by far the easiest and quickest way to set up a sandbox site without installing any additional packages or tools.

Check out the drupal-8-tools repo (under provisioning) and feel free to fork or contribute. Enjoy!

Jan 05 2011

It is the new year and I decided I needed to clean off my desk; since it looked so clean, I snapped a quick picture. I am currently working from home, as the company I work for has gone virtual, which is why I have this setup in the first place.


[Photo: my desk setup]

Items:
*Lenovo T61 ThinkPad 15.4" 1680x1050 screen (company provided w/ docking station)
*24" Dell Monitor 1920x1200 screen (company provided)
*Apple Macbook 14" 1280x800
*Microsoft Ergonomic Keyboard
*Logitech Thumbscroll mouse
*TrendNet KVM (allows the 24" monitor, keyboard and mouse to be shared by the Thinkpad and MacBook)
*iPad (hard to see, used for Drupal powered iOS app testing, other stuff)
*iPhone 4 (taking the picture)
*iPhone 3 (not pictured, keeping for backward compatibility of iOS apps)
*Brother Printer/Fax/Scanner w/ Drum ink (b/w only but uses less ink long term)
*Whiteboard and PostIts used to keep different projects and priorities in front of me.

The laptop screen is where I keep my mail program and other readable items that I might need as I code.

The main 24" monitor is where the main coding and testing happens.

Hidden behind my Komodo IDE window is a Firefox browser with Drupal Planet open. There are a ton of Drupal 7 release announcements right now, in case you missed that.

Other items of interest:
*I am a hockey fan in San Jose, CA, thus the Sharks poster.
*The nearly ever present coffee mug is actually missing.
*Hidden under the laptop stand are snacks and TicTacs.
*Super cheap speakers don't give any bass.
