Feb 15 2020
Feb 15

This month for SC DUG I gave a talk on the importance of self-directed learning for professional development as a developer — or really any other modern career. It was an extension and revision of my December blog post on the same topic. The presentation runs a hair over 30 minutes, and parts of the discussion are included as well.

[embedded content]

We frequently use these meetings to practice new talks, try out heavily revised versions, and test new ideas with a friendly audience. If you want to see a polished version, check out our group members’ talks at camps and cons. So if some of the content of these videos seems a bit rough, please understand we are all learning all the time, and we are open to constructive feedback.

If you would like to join us, please check out our upcoming events on MeetUp for meeting times, locations, and remote connection information.

Dec 23 2019
Dec 23

From time to time conversations come up among developers, and other fellow travelers, about being self-taught vs. getting formal training. Over time I’ve come to realize that the further you get into your career, the less the distinction means anything; eventually we are all mostly self-taught.

I’ve written before about the value of my liberal arts education and I stand by my assertion that what I learned in that setting was, and is, valuable to my life and work. But just because something was useful to life does not mean it was the only way to acquire the skills. It’s a good way for many people, but far from the only way.

For anyone in a technical field, and most professional fields really, to succeed over time you need to learn new tools, skills, and techniques. The tools I knew when I graduated college are all largely outmoded or significantly upgraded, and I’ve had to learn a variety of technologies that didn’t exist in 2001.

Within the Drupal community lots of people talk about being self-taught, sometimes with pride, sometimes with embarrassment, but in truth very few people were formally trained on the platform. Lots of very successful developers in the Drupal community (and beyond) have degrees in fields like religion and art history, not computer science, and have taught themselves how to do awesome things. In fact, I’ll argue that just about every Drupaler taught themselves most of what they know about Drupal. How they did that can vary widely, but we are a community with few formal training programs and lots of people who stumbled into Drupal trying to solve a non-technical problem. Even advanced workshops at conferences dig deep into one small area and expect you to generalize that knowledge to your projects, which I count as self-teaching. For example, I had a friend ask the other day about how to control the PDO connection settings in Drupal 7 — which I didn’t know how to do, but knew they were similar to Drupal 8 — so I sent him my Drupal 8 instructions and he figured it out from there. He’s now taught himself how to do what he needed for that project and in the process generalized the approach for whatever he may need next time.

So it is important for all of us to find, and hopefully share, techniques for self-teaching — even for those who have some kind of formal training. Here are my suggestions for people who are starting out and haven’t yet found the pattern that works for them:

  1. Assume your first solution is wrong. Most of us have, or will, stumble our way through a project where we don’t really know what we’re doing without a lot of support. We usually learn a great deal in the process, and launching those projects can feel pretty good because you’ve succeeded at something hard. It is easy to get into the habit of assuming the solutions from that project were correct because they worked. In truth those projects are really rough around the edges, and just because we got something to work does not mean the solution was good. Assuming the first solution is good enough forever is how you become an expert beginner, which then takes a lot of effort to undo. Once you have a working solution, step back and see if you can think of a better one, or see if you can now guess better search terms to find someone else’s write-up of a different solution to the same problem. Admit your work could be better and try to improve it.
  2. Learn a few more programming languages. Most people who are self-taught from the start, and even some who have a BA/BS in Computer Science, only know 2 or 3 programming languages (PHP, JS, and CSS+HTML are often the only languages new people learn at first). One of the courses I took by chance in college forced me to learn 8 in 16 weeks. It was grueling, miserable, and darned useful. I can still learn a new language in just a couple weeks, and rarely do I hit a language construct I don’t recognize. You don’t need to go that far. When I first started out, a mentor told me you should learn a new language every year, and for several years I did. Some of those (not the languages I learned in college) are the ones I use most day-to-day. All told I’ve spent time writing code in more than twenty different languages. That many isn’t terribly useful, but the more languages you learn, the better you understand the elements of your primary language.
  3. Learn basic algorithms and how to measure complexity. The kind of thinking that goes into formal algorithms will help you be a better developer overall; badly thought-through processes are where I tend to see the largest gaps between developers with and without formal training. Any college-level CS program will put you through an algorithms course that teaches a variety of specific algorithms and forces you to understand their structures. If you didn’t go through one of those programs, this is probably the course that will help you the most. On the one hand, most of us rarely rewrite these algorithms, as on modern platforms some library or another will provide a better version than we are likely to craft for our project. But learning what they are, when they are used, and how to understand their performance is useful for any project that involves lots of data or processing. MIT has a version of their algorithms course from 2011 online, or find one through another provider. Even if you just watch the lectures (really watching, not just vaguely having them on while cooking and cleaning), you can learn a great deal of useful information. I learned a lot watching those lectures, as it refreshed and updated my understanding of the topics.
  4. Find and learn from mentors. Notice I used a plural there; you should try to find a few people willing to help you learn your profession, and more generally help you learn to advance in your field. Most of us benefit from learning from the experiences of multiple people, and who we need to learn from changes over time. I had the great experience of having a few wonderful mentors when I was first starting out, and much of the advice they gave me still serves me well. Some of that advice was contradictory, and resolving those contradictions forced me to learn to do things my own way and find my own solutions.
  5. Learn other platforms. This is both a protection against future shifts in the market, and also a way to see how things work from outside your current professional bubble. Drupal developers can learn a lot from writing a WordPress plugin, or better yet an add-on for a platform in another language (think about Plone, Gatsby, or Hugo). Or try to learn to work with a platform like Salesforce or AWS. Other platforms have different communities, different learning styles, and different patterns. Like understanding additional languages, different platforms help you broaden your understanding and provide insights you can bring back to your main work.
  6. Learn to give and take criticism. Part of learning is getting feedback on your work, and part of being on a team is sharing feedback with others. If you took art or music classes in high school or college you probably learned some of the basic lessons you need here, but if you didn’t, consider taking one now at your local community college or art center. The arts are wonderful for getting experience with criticism. For all that art is often open to interpretation, it also requires specific skills. If you play off-key, it sounds wrong. If your sculpture collapses under its own weight, the project failed. If your picture’s subject is out of focus, you need to re-shoot it. Sure, there are brilliant artists who can violate all the rules, but if you have never experienced an art critique you are not one of those artists. The experience of getting direct, blunt, and honest feedback will help you understand its value and how to give that feedback yourself.
  7. Share what you think you know. We learn a great deal when we teach others, both because it forces us to refine our thinking and understanding so we can explain it, and because learners ask questions we cannot answer off the top of our heads. This can be user group or conference presentations, internal trainings for your team, mentoring junior developers, writing a blog, or anything else that gets you from learning to teaching. It’s okay if you’re not 100% right; that’s part of how we learn. A few years ago I was doing a joint project with a junior developer who asked me a lot of questions, and pushed hard when she thought I was making mistakes. When she asked why I was selecting a solution or setting a pattern, she was never satisfied with “because that’s the best way to do it.” She wanted me to explain why that was the best way. If I couldn’t walk her through it right away, I went back and hunted for reference material to explain it, or if that failed I tested her counter-ideas against my plans to see if I was missing something. While I was usually right, I wasn’t always, and we did make changes based on her feedback. More importantly, it forced me to show my work in fine detail, which was a good exercise for me and gave her insights to help her do better work.
  8. Find your own patterns. At the start I said this list was for people who didn’t have their own patterns yet. In the long-run of your career you need to figure out what you need to know to get to where you want to go next. Eventually you will need to find a pattern that works for you and the life you are living. No one can tell you what that is, nor how to learn it all yourself. Experiment with learning styles, areas of work, roles, and types of projects as much as you are able until you feel your way to the right solutions for you.
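As a tiny illustration of the kind of thing an algorithms course drives home, here is a sketch (my own example, not from any course material) comparing how much work linear search, O(n), and binary search, O(log n), do on the same sorted data:

```python
# Counts comparisons for linear search (O(n)) vs binary search (O(log n))
# over a sorted list. All names here are illustrative.

def linear_search(items, target):
    """Return (index, comparisons) scanning left to right."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

def binary_search(items, target):
    """Return (index, comparisons) halving the sorted range each step."""
    low, high, comparisons = 0, len(items) - 1, 0
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1
        if items[mid] == target:
            return mid, comparisons
        if items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, comparisons

data = list(range(1_000_000))  # one million sorted integers
_, linear_steps = linear_search(data, 999_999)
_, binary_steps = binary_search(data, 999_999)
print(linear_steps)  # prints 1000000
print(binary_steps)  # prints a number around 20
```

The point is not the code itself but the habit of asking "how does the work grow with the data?" before a project gets big enough to make the answer painful.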
Jun 25 2019
Jun 25

I recently had reason to switch over to using Docksal for a project, and on the whole I really like it as a good, easy solution for getting a project-specific Drupal dev environment up and running quickly. But like many dev tools, the docs I found didn’t quite cover what I wanted because they made a bunch of assumptions.

Most assumed either that I was starting a generic project or that I was starting a Pantheon-specific project – and that I already had Docksal experience. In my case I was looking for a quick emergency replacement environment for a long-running Pantheon project.

Fairly recently Docksal added support for a project init command that helps set up for Acquia, Pantheon, and Platform.sh, but pull init isn’t well documented and requires a few preconditions.

Since I had to run a dozen Google searches and ask several friends for help to make it work, I figured I’d write it up.

Install Docksal

First follow the basic Docksal installation instructions for your host operating system. Once that completes, if you are using Linux as the host OS, log out and log back in (the installer just added your user to a group, and you need that access to start up Docker).

Add Pantheon Machine Token

Next you need to have a Pantheon machine token so that terminus can run within the new container you’re about to create. If you don’t have one already, follow Pantheon’s instructions to create one and save it someplace safe (like your password manager).

Once you have a machine token you need to tell Docksal about it. There are instructions for that (though they aren’t part of the pull init setup instructions); basically you add the token to your docksal.env file:

SECRET_TERMINUS_TOKEN="HASH_VALUE_PROVIDED_BY_PANTHEON_HERE"

Also, if you are using Linux, note that the instructions linked above say the file goes in $HOME/docksal/docksal.env, but you really want $HOME/.docksal/docksal.env (note the dot in front of docksal, which hides the directory).

Setup SSH Key

With the machine token in place you are almost ready to run the setup command, just one more precondition. If you haven’t been using Docker or Docksal, they don’t know about your SSH key yet, and pull init assumes it’s around. So you need to tell Docksal to load it by running:
fin ssh-key add  

If the whole setup is new, you may also need to create your key and add it to Pantheon. Once you have done that, if you are using a default SSH key name and location it should be picked up automatically (I have not tried this yet on Windows, so mileage there may vary – if you know the answer please leave me a comment). It is also a good idea to make sure the key itself is working by getting the git clone command from your Pantheon dashboard and trying a manual clone on the command line (delete the clone once it’s done; this is just to prove you can get through).
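As a sketch of that manual test (the clone URL below is a placeholder pattern, not a real site; copy the actual command from your Pantheon dashboard):

```shell
# Paste the real SSH clone command from your Pantheon dashboard; it has this shape:
git clone ssh://codeserver.dev.SITE-UUID@codeserver.dev.SITE-UUID.drush.in:2222/~/repository.git key-test
# If the clone succeeds, your key is working; remove the throwaway copy.
rm -rf key-test
```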

Run Pull Init

Now finally you are ready to run fin pull init: 

fin pull init --hostingplatform=pantheon --hostingsite=[site-machine-name] --hosting-env=[environment-name]

Docksal will now set up the site, maybe ask you a couple questions, and clone the repo. It will leave out a couple of things you may need: database settings and a .htaccess file.

Add .htaccess as needed

Pantheon uses nginx. Docksal’s default stack uses Apache. If you don’t keep a .htaccess file in your project (and while there is no reason not to, some Pantheon setups don’t keep extra files around) you need to put it back. If you don’t have a copy handy, copy and paste the content from the Drupal project repo: https://git.drupalcode.org/project/drupal/blob/8.8.x/.htaccess

Finally, you need to tell Drupal where to find the Docksal copy of the database. For that you need a settings.local.php file. Your project likely has a default version of this, which may contain settings you may or may not want, so adjust as needed. Docksal creates a default database (named “default”) and provides a user named “user” with a password of “user”. The host’s name is “db”. So your settings.local.php file needs to include at least the database settings:

<?php
$databases['default']['default'] = [
  'database' => 'default',
  'username' => 'user',
  'password' => 'user',
  'host' => 'db',
  'port' => '',
  'driver' => 'mysql',
  'prefix' => '',
];

With the database now fully linked up to Drupal, you can now ask Docksal to pull down a copy of the database and a copy of the site files:

fin pull db

fin pull files

In the future you can also pull down code changes:

fin pull code

Bonus points: do this on a server.

On occasion it’s useful to have all this set up on a remote server, not just a local machine. There are a few more steps to do that safely.

First you may want to enable HTTP Basic Auth just to keep away the prying eyes of Googlebot and friends. There are directions for that step (you’ll want the Apache instructions). Next you need to make sure that Docksal is actually listening to the host’s requests and forwarding them into the containers. Lots of blog posts say DOCKSAL_VHOST_PROXY_IP=0.0.0.0 fin reset proxy. But it turns out that fin reset proxy has been removed; instead you want:

DOCKSAL_VHOST_PROXY_IP=0.0.0.0 fin system reset

Next you need to add the vhost to the docksal.env file we were working with earlier:

VIRTUAL_HOST="test.example.org"

Run fin up to get Docksal to pick up the changes (this section is based on these old instructions).

Now you need to add either a DNS entry someplace, or update your machine’s /etc/hosts file to look in the right place (the public IP address of the host machine).
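For the /etc/hosts route, the entry is a single line mapping the virtual host name to the host machine’s public IP (both values below are placeholders; 203.0.113.10 is from the documentation address range):

```
# /etc/hosts on the machine you browse from
203.0.113.10    test.example.org
```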

Anything I missed?

If you think I missed anything, feel free to let me know. Windows users in particular: feel free to let me know what changes are needed there. I’ll try to work those in if I don’t get to figuring it out on my own in the near future.

Feb 04 2019
Feb 04

This fall the South Carolina Drupal User’s Group started using Zoom as part of all our meetings. Sometimes the technology has worked better than others, but when it works in our favor we record the presentations and share them when we can.

In November Kaylan Wagner gave a draft talk on using experiences in the world of online gaming to be a better remote team member.

[embedded content]

We frequently use these presentations to practice new talks and test out new ideas. If you want to see a polished version, hunt out group members at camps and cons. So if some of the content of these videos seems a bit rough, please understand we are all learning all the time, and we are open to constructive feedback.

If you would like to join us, please check out our upcoming events on Meetup for meeting times, locations, and connection information.

Nov 30 2018
Nov 30

In software just about all project management methodologies get labeled one of two things: Agile or Waterfall. There are formal definitions of both labels, but in practice few companies stick to those definitions, particularly in the world of consulting. For people who really care about such things, there are actually many more methodologies out there, but largely for marketing reasons we call any process that’s linear in nature Waterfall, and any that is iterative Agile.

The classic cartoon of a tree swing built poorly because every team saw it differently applies here. Failure within project teams leading to disasters is so common and basic that not only is there a cartoon about it, there is a web site dedicated to generating your own versions of that cartoon (http://projectcartoon.com/).

Among consultants I have rarely seen a company that is truly 100% agile or 100% waterfall. In fact I’ve rarely seen a shop that’s close enough to the formal structures of those methodologies to accurately claim to be one or the other. Nearly all consultancies are some kind of blend of a linear process with stages (sometimes called “a waterfall phase” or “a planning phase”) followed by an iterative process with lots of non-developer input into partially completed features (often called an “agile phase” or “build phase”). Depending on the agency, they might cut up the planning into the start of each sprint or they might move it all to the beginning as a separate project phase. Done well, it can allow you to merge the highly complex needs of an organization with the predefined structures of an existing platform. Done poorly, it can look like you tried to force a square peg into a round hole. You can see evidence of this around the internet in the articles trying to help you pick a methodology, and in the variations on Agile that have been attempted to adapt the process to the reality many consultants face.

In 2001 the Agile Manifesto changed how we talk about project management. It challenged standing doctrine about how software development should be done and moved away from trying to mirror manufacturing processes. As the methodology around agile evolved, and proved itself impressively effective for certain projects, it drew adherents and advocates who preach Agile and Scrum structures as rigid rules to be followed. Meanwhile older project methodologies were largely relabeled “Waterfall” and dragged through the mud as out of date and likely to lead to project failure.

But after all this time Agile hasn’t actually won as the only truly useful process, because it doesn’t actually work for all projects and all project teams. Particularly among consulting agencies that work on complex platforms like Drupal and Salesforce, you find that regardless of the label the company uses, they probably mix linear planning with iterative development – or they fail a lot.

Agile works best when you start from scratch and you have a talented team trying to solve a unique problem. Anytime you are building on a mature software platform you are at least a few hundred thousand hours into development before you have your first meeting. These platforms have large feature sets that deliver lots of the functionality needed for most projects just through careful planning and basic configuration – that’s the whole point of using them. So on any enterprise scale data system you have to do a great deal of planning before you start creating the finished product.

If you don’t plan ahead enough to have a generalized, but complete, picture of what you’re building you will discover very large gaps after far too many pieces have been built to elegantly close them, or your solution will have been built far more generically than needed – introducing significant complexity for very little gain. I’ve seen people re-implement features of Drupal within other features of Drupal just to deal with changing requirements or because a major feature was skipped in planning. So those early planning stages are important, but they also need to leave space for new insights into how best to meet the client’s need and discovery of true errors after the planning stage is complete.

Once you have a good plan the team can start to build. But you cannot simply hand a developer the design and say “do this” because your “this” is only as perfect as you are and your plan does not cover all the details. The developer will see things missed during planning, or have questions that everyone else knows but you didn’t think to write down (and if you wrote down every answer to every possible question, you wrote a document no one bothered to actually read). The team needs to implement part of the solution, check with the client to make sure it’s right, adjust to mistakes, and repeat – a very agile-like process that makes waterfall purists uncomfortable because it means the plan they are working from will change.

In all this you also have a client to keep happy and help make successful – that’s why they hired someone in the first place. Giving them a plan that shows you know what they want reassures them early in the project that you share their vision for the final solution. Being able to see that plan come together, while having chances to refine the details, allows you to deliver the best product you are able.

Agile was supposed to fix all our problems, but didn’t. The methodologies used before were supposed to prevent all the problems that agile was trying to fix, but didn’t. But by using waterfall-like planning at the start of your project with agile-ish implementation, you can combine the best of both approaches, giving you the best chance of success. We all do it; it is about time we all admit it is what we do.

Cartoon from CommitStrip: a developer reviews all the things he’s done (technical specs, unit tests, configuration, permissions, API updates) and then says “Just one small detail: I need to code it.”
Apr 26 2015
Apr 26


I didn’t come to Drupal for the code 14 years ago; I came for the community and stayed for the functionality. That is part of why I never liked the “Come for the code, stay for the community” slogan. Sure, it is a perfectly cheesy slogan. If all you want to attract are coders, it is even a perfect slogan. For a perfect community of perfect, happy coders.

We have got to learn to address humans, not just humans who can code. That is, if we want to be a true community around a product. A product that is well designed and attracts both the business and the user to participate in the product, the process, and hence the community.

Leaders. Entrepreneurs. Visionaries. Testers. Documentation writers. Project managers. Marketers. To name just a few. Of course developers can also have the skills to do these jobs, an often overlooked fact. But someone who is “just” a marketer will not come for the code. They might come for the job at hand, the money that might be involved, or the functionality, but the best reason for a non-developer to come to the community to help out is the community that is helping them out. Not clean lines of code, but helping hands of love.


This is why I am active in the Drupal community: to help get others on board. With a rocking team (Marja, Imre, Rolf, Peter, and others) we are organising the DrupalJam event in the low lands. The DrupalJam started with 20+ people and pizzas in a room and is now a big event with over 300 people attending, over 25 sessions, and a budget in the tens of thousands.

DrupalJam (organised by the Dutch Drupal foundation) will be held in Utrecht on April 30, and it really represents the helping hands, not just the lines of code, of the community. With keynotes from Bruce Lawson (HTML fame) and Marco Derksen (digital strategist, entrepreneur), and featured speakers like Jeffrey “jam” McGuire (moustache fame, D8), Anton VanHouke (leading design agency in the NL, introduced scrum into strategy and design), Stephan Hay (designer, writer), and Ben van ’t Ende (community manager for the TYPO3 Association).

And like last year, Dries will do a virtual Q&A. If you want to ask him nearly anything, do so via this form.

The event will be held in an old industrial complex, as can be seen in these shots.

I am really looking forward to this event; it has a long tradition and has always strengthened the community and brought in new blood. People who “come for the business and stay for the community”. Those who come out of a need for design and stay for the love. Or who love the functional and stay to organise the next DrupalJam.

PS: Now this head has rolled, it is time we decide what we do with the body. If you have 5 minutes of spare time, read this post, and if you have one minute more, see this one from 2008 as well.

Sep 22 2014
Sep 22

A star in a network
It may differ per country and continent, but in most of the regions I know of, Corporate Social Responsibility (CSR) has become a standard within corporations as a way of buying, selling, and producing goods and services. We all know that resources are scarce and hence should be put to the best possible use and, more importantly, reused when possible.

By reusing resources to produce new goods or services, we make optimal use of what is there. This is no longer a “left” or “green” political statement; it is being executed by all parties in the political and economic arena, simply because it is in the interest of the person doing it as well as everyone else. It makes economic sense to reuse resources and to be good to people, the community, and the environment, even if only from a tragedy-of-the-commons or prisoner’s dilemma point of view. For those interested in how doing good or bad impacts the group, this academic PDF might be a good start. If you master Dutch, this TED-quality keynote by my friend Yoast, given during a DrupalJam conference and available on Vimeo, is truly something to watch.

Garden city of to-morrow

So it is my opinion that CSR has moved beyond empty platitudes and has truly become part of the genes of people and companies. Many people think that CSR started as corporate philanthropy, a way for the rich to donate to the poor. I don’t think this is true; in every revolution there have been powers doing good for the environment, the people, and the community. For example, during the Industrial Revolution there was a very strong new socialist trend of taking care of the housing, communities, and villages of the workers: “The garden cities of to-morrow”. Not because “the Rich” wanted to do good per se (“philanthropy”), but because it made sense economically: fewer deaths and diseases (less risk) and a richer and happier workforce (and new business models around this growth).

Urban gardening

Most of the definitions of CSR I have seen have in common that it is an integral vision of sustainable business, with social responsibility in business decisions to balance the social and economic impact of each decision. That by itself is an excellent definition, and one that will be supported by anyone who has been doing business. The implementation most often seen, however, is a policy on a company’s carbon footprint, or buying only agricultural products that are produced in a sustainable way, without pesticides. All fine.

But it seems that there is a very easy way to implement CSR: by using a product that is produced to be reused, made with the knowledge of thousands and with the world as its target audience. A product that is not wasting a single second of the future and not wasting a drop of the past. Indeed, I am talking about using open source software (OSS)!
OSS is by definition made with CSR in mind: it is produced by different people all over the globe to be reused by you, and your knowledge becomes direct input for making the product better, iterating on its development and implementation.

And hence, a company that uses open source has a sustainable competitive advantage by using a valuable, rare resource in its most optimal form. Therefore I dare any company that uses software to produce goods to take using open source software into account in its Corporate Social Responsibility policy. For by using open source software, we can truly make a better world by using more knowledge and fewer resources.

A very healthy situation for any company.


PS: if you want more information on this vision, do visit the “12 Best Practices from Wunderkraut” session at DrupalCon Amsterdam. Or visit Wunderkraut at booth number 1 in the sponsor lounge, right by the coffee! We are part of the community that uses and makes open source software. With passion.

Aug 20 2013
Aug 20

Robert M. White
TL;DR

  1. Performance matters for all websites
  2. Performance is not just (80%) frontend
  3. SPDY kills 80% of your frontend problems

What
In the Drupal and broader web community, there is a lot of attention towards the performance of websites.

While "performance" is a very complex topic on its' own, let us in this posting define it as the speed of the website and the process to optimize the speed of the website (or better broader, the experience of the speed by the user as performance.

Why
This attention to speed exists for two good reasons. On one hand, sites are getting bigger and hence slower: databases grow with more content, and the codebase of the website gains new modules and features. On the other hand, more money is being made with websites for business, even if you are not selling goods or running ads.

Given that most sites run on the same hardware for years, this results in slower websites, leading to a lower pagerank, less traffic, fewer pages per visit, and lower conversion rates. And in the end, if you have a business case for your website, lower profits. Bottom line: if you make money online, you are losing some of it to a slow website.
UFO's
When it comes to speed there are many parameters to take into account; it is not "just" the average page-loading time. First of all, the average is a rather useless metric without taking the standard deviation into account. But apart from that, it comes down to what a "page" is.

A page can be just the HTML file (which can be served in 50 ms).
A page can be the complete webpage with all its elements (for many sites, around 10 seconds).
A page can be the complete webpage with all elements including third-party content. Hint: did you know that to display the Facebook Like button, more JavaScript is downloaded than the entire jQuery/Backbone/Bootstrap app of this website, non-cacheable!
And a page can be anything "above the fold".

Moon Retro future
And there are more interesting metrics than these, for example the time to first byte from a technological point of view. But it is not just a technical PoV: there is a website one visits every day that optimizes its renderable HTML to fit within 1500 bytes.
So, ranging from "first byte to glass" to "round trip time", there are many elements to be taken into account when one measures the speed of a website. And that is the main point: web performance is not just for the frontenders like many think, nor just for the backenders like some of them hope, but for all the people who control elements in the chain involved in that speed, all the way down to the networking guys (m/f) in the basement (hint, sysadmins: INITCWND has a huge performance impact!). Speed should be at the core of your team, not just with those who enable gzip compression, aggregate the JavaScript, or make the sprites.

Steve Souders (the web performance guru) once stated in his golden rule that 80-90% of the end-user response time is spent on the frontend.

Speedy to the rescue?
This 80% might be a matter of debate in the case of a logged-in user in a CMS. But even if it is true, this 80% can be reduced by 80% with SPDY.
SPDY is an open protocol introduced by Google to overcome the problems with HTTP (up to 1.1, pipelining included, defined in 1999!) and the absence of HTTP/2.0. It speeds up HTTP by using a single connection between the client and the server for all the elements in the page served by that server. Originally only built into Chrome, many browsers now support this protocol, which will be the base of HTTP/2.0. Think about it and read about it: a complete webpage with all its elements, regardless of minifying and sprites, served in one stream with only one TCP handshake and one DNS request. Most of the rules of traditional web performance optimization (CSS aggregation, preloading, prefetching, offloading elements to different hosts, cookie-free domains), all this wisdom is gone, even false, with one simple install. 80% of the 80% gone with SPDY; now one can focus on the hard part: the database and the codebase. :-)
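To make "one simple install" concrete: on servers that shipped SPDY support (nginx was an early one), enabling it was mostly a matter of adding the protocol to the TLS listener. A hedged sketch, with all paths and names invented, assuming an nginx build compiled with SPDY support:

```nginx
server {
    # SPDY implementations run over TLS, so a certificate is required.
    listen 443 ssl spdy;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    root /var/www/drupal;
}
```

Browsers without SPDY support negotiate plain HTTPS on the same port, which is what makes the rollout low-risk.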

The downside of SPDY, however, is that it is hard to troubleshoot and not yet available in all browsers. It is hard to troubleshoot because most implementations use SSL and the protocol is multiplexed and zipped by default, not made to be read by humans the way HTTP/1.0 was. There are some tools that make it possible to test SPDY, but most if not all of the tools you use every day (ab, curl, wget) will fail to use SPDY and fall back, as defined in the protocol, to HTTP/1.0.

Measure
So can we test whether SPDY is really faster, and how much faster?
Yes, see Evaluating the Performance of SPDY-Enabled Web Servers (a Drupal site :-)
SPDY performance

So more users, fewer errors under load, and a lower page load time. What is there not to like about SPDY?

Drupal
That is why I would love Drupal.org to run with SPDY; see the issue at d.o/2046731. I really do hope that the infra team will find some time to test this and, once it is accepted, install it on the production server.

Performance as a Service
One of the projects I have been active in lately is ProjectPAAS (bonus points if you find the Easter egg on the site :-) ). ProjectPAAS is a startup that will test a Drupal site, measure it on 100+ metrics, analyse the data, and give the developer an opinionated report on what to change to get better performance. If you like these images around the retro-future theme, be sure to check out the Flickr page, like us on Facebook, follow us on Twitter, but most of all, see the moodboard on Pinterest.

Pinterest itself is doing some good work when it comes to performance as well. Not just speed, but also the perception of speed.

Pinterest lazyloading with color
Pinterest lazyloads images but also displays each image's prominent color as the background of its cell before the image is loaded, giving the user a sense of what is to come. For a background on this, see webdistortion.

Congratulations, you just saved 0.4 seconds
If you are lazyloading images to give your users faster results, be sure to check out lazypaas, a module we made, currently a sandbox project awaiting approval. It extracts the dominant (most used) color of an image and displays the box where the image will be placed in that color. And if you use it and have done a code review, be sure to help it become a full Drupal module.
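The dominant-color idea is easy to sketch. A minimal illustration in plain Python (this is not the lazypaas code, just the concept): treat the image as a bag of RGB tuples and take the most frequent one.

```python
from collections import Counter

def dominant_color(pixels):
    """Return the most frequent (R, G, B) tuple in an iterable of pixels."""
    return Counter(pixels).most_common(1)[0][0]

# Toy "image": mostly red, with a few blue and white pixels.
pixels = [(255, 0, 0)] * 6 + [(0, 0, 255)] * 3 + [(255, 255, 255)]
print(dominant_color(pixels))  # → (255, 0, 0)
```

In the module, a color found this way would be written into the placeholder box's background style until the real image arrives.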

From 80% to 100%
Lazyloading like this leads to a better user experience. Because even when 80% of the end-user response time is spent on the frontend, 100% of that time is spent in the client, most often the browser: the only place where performance can really be measured and the only place where it matters. Hence, all elements that deliver this speed should be optimized, including the webserver and the browser.

Now say this fast after me: SPDY FTW. :-)

Aug 14 2013
Aug 14

King Eddie's Restaurant, 1954
As long as open source exists (and maybe as long as software exists) people have been coding for money ("professionally") and for free ("gratis"). This is how open source works: scratching an itch, where an itch can also be making sure you can pay for your food. There are many ways to earn a living coding open source, from the "give away the recipe, open up a restaurant" mantra to crowdfunded support of a core chef.

Bike path through the Polder

An interesting new tendency is visible in the lowlands: open source implementors, direct competitors, working together for *free* on marketing and code. In some ways that has been happening worldwide, but I do think that there is a trend in The Netherlands that might become bigger worldwide.

There are many Drupal agencies in the Netherlands, and many of the people working at these agencies are active in the local and/or global Drupal community. Recently, however, two dozen of these agencies joined forces in a foundation, Drupal companies Netherlands, where they work together on promoting Drupal on platforms where none of them could be present alone. This is to my understanding unique in the world, and part of the reason why the session "growing the pie" was selected by the business track chairs. For a background interview, see the Acquia TV site.

Now another interesting concept has come up. The Dutch national government bodies have a corporate identity and style guides describing how a site should look and work. They also have an obligation to comply with certain accessibility rules based on the WCAG. There are over 100 Dutch government sites using Drupal, and most of them have the same template. It would be a waste of resources if every implementer of a national government site created their own template: a waste of tax money and a waste of time and talent. That is why Sogeti and others made a standard D7 theme at d.o/rijkshuisstijl. There are of course some issues with rights to the logo, but these have been resolved.

kansen

The good news now is that in a recent "post-it" session, Ordina, Limoengroen, Dutch Open Projects, Sogeti, and many others together decided that they will build upon this template to make it responsive, something the theme is not right now.

The additional goal is to have a Drupal 8 theme for the Dutch government the moment D8 hits stable. That would be the ultimate goal and would do wonders for the adoption of open source and Drupal in the Dutch government: a future-ready site with a validating responsive theme and the best CMS under the hood!

In case you want to join this movement, please do sign up for the LinkedIn group and get active in the issue queue of Rijkshuisstijl. And I do encourage you to visit the SDBN session at DrupalCon Prague.

I think that working together like these two initiatives do is truly a sign of a mature market, one where competitors let the market grow together, in true open source style.

Mar 29 2013
Mar 29

The Outer Limits ... 'Cold Hands, Warm Heart'
A couple of weeks ago we launched the website of a service we have been working hard on for over half a year. The project started as a SAAS about performance, and hence the internal project name was "ProjectPAAS". As it goes with internal project names, it became the name of the service itself.

12 seconds start now

I still have trouble explaining what the service does in an elevator pitch. But basically: one installs a module (from d.o, with the funky URL /project/paas) on the staging site to be tested, configures the service on the projectpaas.com portal, and then waits an hour or two. We start a service that measures your site from the outside and from the inside, analyses the data, and makes a report; when you check your mail, you get an in-depth report on all the elements of the chain that are relevant to the performance of the website.

1964 ... orbital assembly

We measure from one or more selectable (EC2) locations in the world with over 150 metrics, and we only report on real data, no YSlow wisdom. We know what influences speed, we see how it is configured on your site (with the module or from the outside), and we simulate to find what the optimal value would be for your use case.

The cliché, for example, that one needs parallel downloads (images[1-4].example.com) to bypass the maximum number of connections a browser can have to a host, is just that: a cliché. When one takes DNS lookups, TCP slow start, and the sliding window into account, for certain use cases having images[x].example.com might actually be slower. So we are opinionated; we measure, we analyse, we report, you gain speed.

Easter egg

ProjectPAAS report 0.6
I really like retro future, so we used it as a theme around the site and Facebook. And since Easter (Dutch: "pasen") is coming up,
do check the projectpaas.com website, find the Easter egg, and tweet about it. :-)

This posting isn't as much about the ProjectPAAS service as it is about why we made the service: to share our experience and to get feedback from you. There are two reasons we made it, one internally driven and one external.

The internal reason is that we have been building some of the most visited Drupal sites and webapps in the Netherlands. So after some time we got good at performance; we understood what to do and what not to do across the complete stack of elements that define speed: HTML, CSS, Linux, Apache, MySQL and, yes, Drupal. Word got out that we were good, and site owners who had had their site built at another company came to us for advice on how to get more speed out of their site.
Once we had done a dozen of these reports, we wanted to make the reports more easily accessible for the site owners and website builders. This is part of why we started the performance reporting service.

Land here

The external reason might be more interesting for you. We made the SAAS because we think that the CMS landscape will change, and our business with it.

The landscape will change. Ten years ago everybody had his or her own CMS; there seemed to be more CMSes than websites. Five years ago it was clear who the winners of the consolidation were going to be: 80% of the proprietary "solutions" were gone, and open source was no longer a dirty word in enterprises. Within the open CMSes, a global top 5 was visible, though especially in Europe there were still many local open source CMSes. This consolidation per se was good for open source and especially for Drupal shops.

1962 ... 'Planet Of Storms' (USSR)
However, the market won't stop here. Most Drupal websites are not complex: they don't have any connections to backend systems, they get less than 10k pageviews per day, and they are relatively expensive to build and, most of all, expensive to maintain. Here is the business case for open source SAAS: solutions based on open source software, like those Acquia and Wordpress.com offer. These solutions, with standard modules and a customisable template, are good enough right now for 20% of the Drupal sites out there and will cost a fraction of what building a site "by hand" costs.

The users of these open source SAAS hosting solutions will only grow. Good for the parties offering these services, bad for the Drupal shops that have been building relatively simple portfolio sites. By itself, this trend might have a big impact on those coding Drupal core or modules, or working in, for example, the security team. This is not meant in a bad way, but with most of the sites moving towards a smaller group of SAAS companies, the number of "independent" individuals adding to core or writing modules might actually drop; they might have another itch. It will be very interesting to see how this develops; I might be completely wrong here.

Performance takes time

Traditionally most Drupal shops do projects, maintenance, and consulting. Some have found a nice niche: a place geographically apart, a specific vertical, or a certain service like migration from another CMS. However, most Drupal shops build relatively simple websites for SOHO-plus clients. I know there are many shops that work for high-end enterprises, but not all of the 280,000 Drupal sites fit in the Alexa top 100,000. So I do think that if you are a Drupal shop, you will have to find your sweet spot in the next couple of months. On the one hand there is operational excellence (a SAAS to host sites, like Gardens, or a service like ProjectPAAS itself); on the other hand customer intimacy (the complex sites with lots of integration with backend systems and complex workflows). There might be space between these two, but the portfolio-site area will get very crowded, and Drupal will not be the best tool to serve it, in my opinion. This is part of the reason why we built our first SAAS around a product we understand that is close to our core business. We are already planning the next services, which might still be built in Drupal but will target a broader audience.

ProjectPAAS logo
For the moment, if you are interested in our product, don't be shy and talk to us on Twitter or Facebook. Potential resellers or users are welcome to fill out our form. We really do hope that our product can help you build faster websites and thereby push Drupal even further ahead of the curve.

Mar 13 2012
Mar 13

One of the unexpected challenges in raising money to grow your business is keeping mum about the deal until it's time. Time is likely among the many terms you'll find defined in your contract, and between the day you sign the papers and the day that time actually arrives, you're glowing inside because your investors believe in the potential of your business and want to see you do more.

Investors don't magically make a business plan succeed, nor do they single out the sole source of success behind a business or an idea. This is certainly the case with Commerce Guys' raise announced last week. We know for a fact that our investors get open source as much as they do eCommerce. Even as they evaluated us on our ability to execute our business plan, they evaluated us on how well we work within and alongside the larger Drupal community. When they took a close look, they saw the strengths of the community and the caliber of developers collaborating with us to build Drupal Commerce.

That's what makes it so exciting to share the news - investors and developers who have grown their own businesses and, in the case of the team at Open Ocean with MySQL, their own open source projects have looked closely into both Commerce Guys and Drupal Commerce and felt confident enough to front some serious cash for us to kick our efforts up a notch. Many of these guys have built their own eCommerce systems and understand the challenges we're in a unique position to solve through Drupal 7, Views, Rules, and Commerce, and they're guys who understand the importance of the community in the success of any open source project.

So, we're not crazy after all, and what we've been trying to build with our friends at Commerce Guys and in the Drupal community isn't crazy. Ambitious, sure, but achievable. Our vision for Drupal Commerce remains the same - to see Drupal Commerce become the world's leading open source eCommerce framework. For the last two and a half years, my time has been set aside by Commerce Guys to develop the code (with plenty of help from other brilliant Commerce Guys and community contributors) and grow the community needed to make it happen. Now we've sold the vision to some very smart people with deep pockets outside our normal circles and are eager to see what happens next.

Their affirmation is much appreciated, but so is the money that will let us hire and set aside even more developers to "scale me" out a bit. We need to address immediate concerns pertaining to documentation and community support on DrupalCommerce.org. We'll need to make sure we follow through on our longstanding 2.x strategy to bring some sanity to the user experience for administrators, even as 1.x has privileged developers. All the while, there will be more than enough module maintenance and distribution work to go around!

Addressing these needs for Drupal Commerce should only require a fraction of the money we've raised, but it's a good start that will have an immediate positive impact on the thousands of people already using Drupal Commerce to power their online businesses. If you think you can stomach working with me on a daily basis and have the chops to help us succeed, be sure to get in touch.

Feb 01 2012
Feb 01

El solo de guitarra
If you own a Mac, you use Spotlight daily. Or even better, you use Alfred. A great way to navigate faster with the keyboard to the app or data you need.

A very long time ago, a Boy Wonder made something like this for Drupal 5, navigating through the /admin pages using a simple Spotlight-like interface; see the menuscout module. Unfortunately, Boy Wonder wandered off and the module gathered dust.

Then some time ago, co-worker Michael Mol showed me a module he had been working on. Since he was doing D6 and D7 development at the same time, and since the URLs changed between versions and developers use URLs for navigation as well, he decided to do a Spotlight-like search on the admin pages so he could remember the name, not the URL. It ended up being Coffee, for now a sandbox project but any day now a real project on d.o. To see it in action, take a look at this screencast.

Think of Coffee as Spotlight for the admin interface. And if you want an Easter egg like the "do a barrel roll" one, get active in this issue.

Jan 31 2012
Jan 31

“Why is your window transparent?” a coworker asked me when she noticed my screen. I told her about how I do my CSS theming, and she pulled another coworker over and made me repeat the explanation. Since that seems like something other people might find handy, here it is.

Sass: Syntactically Awesome Stylesheets

I rarely do CSS/front-end theming work, but when I do, I try to make it as fun and easy as back-end development. I use Sass (Syntactically Awesome Stylesheets) so that I can use nested selectors, variables, and mixins. This makes my code cleaner and easier to write. You’ll need Ruby in order to install Sass, but the tool will give you CSS that you can use on any web platform.
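To show what those three features buy you, here is a tiny illustrative snippet in Sass's SCSS syntax (the selectors and the $brand variable are made up for this example, not from any real project):

```scss
$brand: #2a6496;               // variable: one place to change the color scheme

@mixin rounded($radius: 4px) { // mixin: a reusable chunk of declarations
  border-radius: $radius;
}

.nav {
  background: $brand;
  @include rounded;
  a {                          // nested selector: compiles to ".nav a"
    color: lighten($brand, 40%);
    &:hover { text-decoration: underline; }
  }
}
```

Running this through the Sass compiler (e.g. `sass nav.scss nav.css`) produces plain CSS, so nothing downstream needs to know you used Sass.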

Browser-based tools

I prefer doing the initial tweaking in Google Chrome, because I like the way that the developer tools make it easy to modify the stylesheet. The Chrome CSS Reloader extension is handy, too. Most of the time, I make my CSS changes in the text editor, then use the CSS Reloader to reload the stylesheet without refreshing the page. This makes it easy to manually toggle the display of some elements while allowing me to refresh style rules. If I want to figure out the values for a few simple changes, I’ll sometimes make the changes directly in Chrome (you can use arrow keys to adjust values), then copy the values to my Sass source file.

Colors, sizes, and spaces

A second monitor is totally awesome and well worth it.

Designs rarely specify all the colours, sizes, and spacing needed. To quickly get the color of a pixel, I use WhatColor. This shows the hex code for colors, and allows me to quickly copy the code with the F12 shortcut key. If you want to change the shortcut key, the source is available as an AutoHotkey script.

To make it easier to match sizes and spaces, I use WinWarden to make my browser window 20% translucent. Then I carefully position it over my design reference until the important features match. Magnifixer makes it easier to line things up because it can magnify a fixed portion of the screen. By focusing Magnifixer on the part I’m working on, I can tweak CSS without straining my eyes.

When I know I’m going to be making a lot of changes, I use AutoHotkey to map a shortcut so that I can refresh the CSS with one keystroke instead of several. When I happen to have my USB foot pedal handy, I rig it up to refresh my stylesheet.

Regression testing

Sometimes my CSS changes modify other rules. Instead of laboriously checking each page after changes, I’ve figured out how to use Selenium WebDriver to write a Java program that loads the pages in Mozilla Firefox and Internet Explorer, capturing screenshots and numbering them according to the pages in my design reference. This means that I can run the program in the background or start it before taking a break, and then flip through all the screenshots when I get back.

Cross-browser testing

What’s CSS theming without the requirement of browser compatibility? Someday, when I need to deal with more browsers, I might look into Selenium RC. In the meantime, I develop in Chrome, my Selenium-based program makes it easier to test in Firefox and IE, and it’s easy enough to try the URLs in Safari as well. Virtual machines handle the rest of the requirements. 

So that’s how I’ve been doing CSS theming on this project. What are your favourite tips?

Aug 10 2011
Aug 10

I’m wrapping up a Drupal 6 project which was funded by one of IBM’s corporate citizenship grants. The Snake Hill folks we’ve been working with will continue working with the client until they’re ready to launch. For my part, I’ve been in user acceptance testing and development mode for almost a month, rolling out new features, fixing bugs, and getting feedback.

The project manager has shuffled some hours around and made sure that I’ve got some “support” hours for follow-up questions after we turn the project over.

What worked well

Hey, I can do this stuff after all! I gathered requirements, estimated the effort, negotiated the scope, communicated with the clients and other team members, and generally did other technical-lead-ish stuff. I’ve done that on other projects, but usually that was just me working by myself and talking to clients. This one was more complex. It was fun figuring out what would fit, how things were prioritized, whether or not we were on track, and how to make things happen. I’d love to do it again. (And with the way the world works, I will probably get an opportunity to do so shortly!)

Understanding a project deeply: I was on the first phase of this project as well, and the experience really helped. We didn’t have any disruptions in technical leadership on our part, unlike in the first phase. See, last year, the IBM technical lead who had been talking to the client ended up leaving the country, so we had to repeat a few discussions about requirements. This time, I could draw on my experience from the first phase and our ongoing discussions about the client’s goals for the project. That was fun.

I’ll be turning the project over to the other development company, and the client’s concerned about whether they’ll be able to pick things up and run with it. I’ve tried to write down as many notes as I can, and I also invested time in briefing the other developers on the overall goals as well as the specific work items. Hope that works out!

Externally-accessible issue tracking: In the previous phase of this project, issue tracking consisted of e-mailing spreadsheets around. It was painful. One of the first things I did when we started this phase of development was to set up a Redmine issue tracker on the client’s server. After we gathered and prioritized requirements, I logged them as features in Redmine and split them up over the different phases. I reviewed our list of outstanding work and filed them as bugs, too. As feedback came in, I tracked bugs. I took advantage of Redmine-Git integration and referred to issue numbers in my commit messages. When people e-mailed me their feedback or posted messages on Basecamp, I created issues and added hyperlinks.

Having an externally-accessible issue tracker helped me worry less about missing critical bugs. I also shared some reporting links with the clients and the project manager so that they could track progress and review the priorities.

On future projects, I would love to get to the point of having clients and testers create issues themselves. Wouldn’t that be nifty?

Git for version control: I’m so glad I used Git to manage and share source code between multiple developers. The other developers were fairly new to Git, but they did okay, and I figured out how to clean up after one of the developers wiped out a bit of code after some commit confusion. Git stash and git branch were really helpful when I was trying lots of experimental code.
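The stash/branch workflow described above can be sketched in a few commands. This is a self-contained demo in a throwaway repository (branch and file names invented for the example):

```shell
set -e
repo=$(mktemp -d)                      # throwaway repo just for this demo
cd "$repo"
git init -q
git config user.email demo@example.com # identity only for this sandbox
git config user.name  Demo
git commit -q --allow-empty -m "init"
base=$(git symbolic-ref --short HEAD)  # master or main, depending on git version

git checkout -qb experiment            # try risky changes on a branch
echo "half-done work" > notes.txt
git stash -u                           # shelve the half-done work, untracked files too
git checkout -q "$base"                # back to a clean tree for a quick check
git checkout -q experiment
git stash pop                          # pick up exactly where we left off
cat notes.txt                          # prints "half-done work"
```

The branch isolates the experiment; the stash lets you hop away mid-change without committing anything you are not sure about.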

Developing with a non-default theme: We had a lot of development items to work on while the No.Inc creative team got their Drupal theme together. Once No.Inc sent the theme, I layered it on top of the site, fixed the usual problems, and had even more fun working on a site that looked halfway done. Definitely recommend getting a reliable theme in place sooner rather than later.

Mentoring people: I helped a new developer start to get the hang of Drupal. It was a slow process (must raise estimates even more when dealing with newbies), but I hope the investment pays off. I wrote (and updated!) documentation. I identified small tasks that he could work on first. I checked on him every so often. I successfully resisted the urge to just do things myself. Slowly getting there…

Decision log: I used a wiki to keep track of the design decisions I needed to make, the alternatives I considered, and what I eventually chose. That was helpful for me. I hope it will help future developers, too.

Linux VM on a Microsoft Windows host, XMing, and Plink: I’ve tried lots of different configurations in the course of this project. Doing my development inside a virtual machine has saved me so much time in terms of restoring from backup or being able to tweak my operating environment. I started with a Linux VM on a Windows host, using Samba to access my files and either Eclipse or Emacs to edit them. That was a bit slow. Then I shifted to a Linux VM on a Linux host, SSHing to the VM and using Emacs from the VM itself. That was great for being able to do Linux-y stuff transparently. But then I found myself wanting to be back on Microsoft Windows so that I could use Autodesk Sketchbook Pro (Inkscape and MyPaint aren’t quite as awesome). I ran XMing to create an X server in my Windows environment, used plink to connect, and then started a graphical Emacs running on my virtual machine. Tada! I could probably make this even better by upgrading to 64-bit Microsoft Windows, adding more RAM, and upgrading to a bigger hard disk. (Alternatively, I could host the VM somewhere else instead of on my laptop…)

What I’m going to work on improving next time

Better browser testing, including cross-browser: I’m getting slightly better at testing the actual site, motivating myself with (a) interest in seeing my new code actually work, (b) the remembered embarrassment of gaping bugs, and (c) the idea of slowing down and getting things right. Juggling multiple browsers still doesn’t make me happy, but maybe I can turn it into a game with myself. Selenium might be useful here as well.

Continuous integration: I set up Jenkins for continuous integration testing, but it fell by the wayside as I wasn’t keeping my tests up to date and I wanted more CPU/memory for development. I ran into a number of embarrassing bugs along the way, though, so it might be worth developing stricter discipline around this. I’m still envious of one of the Drupal projects I heard about in IBM, which got through UAT without identified defects thanks to lots of manual and automated testing. If I add more power to my development machine or offload testing to another machine, that might be a good way to stick to this process.

Closer communication with clients and external developers: We set up short daily meetings for the developers, but sometimes people still felt a little lost or out of touch. On future projects, I’ll make sure the clients have it on their calendar as an optional meeting, and maybe see about getting e-mail from people who can’t join on the phone. If I’m the tech lead on a future project, I’ll sit in on all client status update meetings, too. We found out about some miscommunications only when I handled one of the status calls. Fortunately, it was early enough that we could squeeze in the critical functionality while reprioritizing the others. Tense moment, though!

Better vacation planning: I realized we had a 4-day weekend the week before we had it, and we forgot about some people’s vacations too. Heh. I should get better at looking at the entire project span and listing the gaps up front.

Earlier pipeline-building: I nudged some project opportunities about a month before our projected end date, but that wasn’t long enough to deal with the paperwork lag. Oh well! Next time, I’ll set aside some time each week to do that kind of future pipeline-building, and I’ll set myself reminders for two months and a month before the project ends. Not a big problem.

My manager’s been lining up other Drupal and Rails projects for me to work on. Looking forward to learning all sorts of lessons on those as well!

Other Drupal lessons learned:

2011-08-10 Wed 17:08

Aug 05 2011
Aug 05

I know, I know. I shouldn’t allow IFRAMEs at all. But the client’s prospective users were really excited about images and video, and Drupal’s Media module wasn’t going to be quite enough. So I’ve been fighting with CKEditor, IMCE, and HTML Purifier to figure out how to make it easier. I’m hoping that this will be like practically all my other Drupal posts and someone will comment with a much better way to do things right after I describe what I’ve done. =)

First: images. There doesn’t seem to be a cleaner way than the “Browse server” – “Upload” combination using CKEditor and IMCE. I tried using WYSIWYG, TinyMCE and IMCE. I tried ImageBrowser, but I couldn’t get it to work. I tried FCKEditor, which looked promising, but I got tangled in figuring out how to control other parts of it. I’m just going to leave it as CKEditor and IMCE at the moment, and we can come back to that if it turns out to be higher priority than all the other things I’m working on. This is almost certainly my limitation rather than the packages’ limitations, but I don’t have the time to exhaustively tweak this until it’s right. Someday I may finally learn how to make a CKEditor plugin, but it will not be in the final week of this Drupal project.

Next: HTML Purifier and Youtube. You see, Youtube switched to using IFRAMEs instead of Flash embeds. Allowing IFRAMEs is like allowing people to put arbitrary content on your webpage, because it is. The HTML Purifier folks seem firmly against it because it's a bad idea, which it also is. But you've got to work around what you've got to work around. Based on the "Allow iframes" thread in the HTML Purifier forum, this is what I came up with:

Step 1. Create a custom filter in htmlpurifier/library/myiframe.php.

<?php
// Iframe filter that does some primitive whitelisting in a
// somewhat recognizable and tweakable way
class HTMLPurifier_Filter_MyIframe extends HTMLPurifier_Filter
{
  public $name = 'MyIframe';
  public function preFilter($html, $config, $context) {
    $html = preg_replace('/<iframe/i', '<img class="MyIframe"', $html);
    $html = preg_replace('#</iframe>#i', '', $html);
    return $html;
  }
  public function postFilter($html, $config, $context) {
    $post_regex = '#<img class="MyIframe"([^>]+?)>#';
    return preg_replace_callback($post_regex, array($this, 'postFilterCallback'), $html);
  }
  protected function postFilterCallback($matches) {
    // Whitelist the domains we like
    $ok = (preg_match('#src="http://www.youtube.com/#i', $matches[1]));
    if ($ok) {
      return '<iframe ' . $matches[1] . '></iframe>';
    } else {
      return '';
    }
  }
}

Step 2. Include the filter in HTMLPurifier_DefinitionCache_Drupal.php. I don’t know if this is the right place, but I saw it briefly mentioned somewhere.

// ... rest of file
require_once 'myiframe.php';

Step 3. Create the HTML Purifier config file. In this case, I was changing the config for “Filtered HTML”, which had the input format ID of 1. I copied config/sample.php to config/1.php and set the following:

function htmlpurifier_config_1($config) {
  $config->set('HTML.SafeObject', true);
  $config->set('Output.FlashCompat', true);
  $config->set('URI.DisableExternalResources', false);
  $config->set('Filter.Custom', array(new HTMLPurifier_Filter_MyIframe()));
}
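If more hosts ever need to get through (or YouTube’s https embeds), the single hard-coded pattern in postFilterCallback could be generalized into a list. A sketch, assuming the same filter class; the extra Vimeo domain is purely illustrative:

```php
// Sketch: check src against a list of allowed embed hosts.
// The Vimeo entry is illustrative, not part of the original setup.
protected function postFilterCallback($matches) {
  $allowed = array(
    '#^https?://www\.youtube\.com/#i',
    '#^https?://player\.vimeo\.com/#i',
  );
  if (preg_match('#src="([^"]+)"#i', $matches[1], $src)) {
    foreach ($allowed as $pattern) {
      if (preg_match($pattern, $src[1])) {
        return '<iframe ' . $matches[1] . '></iframe>';
      }
    }
  }
  return ''; // anything unrecognized is dropped entirely
}
```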

Now I can switch to the source view in CKEditor, paste in my IFRAME code from Youtube, and view the results. Mostly. I still need to track down why I sometimes need to refresh the page in order to see it, but this is promising.

2011-08-05 Fri 16:34

Aug 04 2011
Aug 04

Drupal 6’s drupal_json function encodes ampersands incorrectly for jQuery 1.5, causing the rather cryptic error:

Uncaught Syntax error, unrecognized expression: ...

(If you’re lucky.)

The way to fix this is to borrow the JSON-handling code from Drupal 7. Here’s something you might be able to use:

function yourmodule_json_encode($var) {
  return str_replace(array('<', '>', '&'), array('\u003c', '\u003e', '\u0026'), $var);
}

// Fix Drupal JSON problems from http://witti.ws/blog/2011/03/14/jquery-15-json-parse-error
function yourmodule_json($var) {
  drupal_set_header('Content-Type: text/javascript; charset=utf-8');
  if (isset($var)) {
    echo yourmodule_json_encode(json_encode($var));
  }
}

Use yourmodule_json instead of drupal_json wherever applicable.
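On the client side nothing is lost, because strict JSON parsers (which jQuery 1.5 delegates to when the browser provides JSON.parse) decode the \u0026 escape back to a plain ampersand. A quick sketch with made-up data:

```javascript
// A \u0026 escape inside a JSON string decodes back to '&',
// so the escaped payload round-trips to the original value.
const payload = '{"url": "/search?q=drupal\\u0026page=2"}';
const parsed = JSON.parse(payload);
console.log(parsed.url); // → /search?q=drupal&page=2
```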

Hat tip to Greg Payne (Witti) for pointing me in the right direction!

2011-08-04 Thu 14:01

Jun 30 2011
Jun 30

Context-switching among multiple projects can be tough. I’m currently:

  • working full-time on one project (a Drupal 6 non-profit website)
  • consulting on another (helping an educational institution with Drupal 7 questions)
  • supporting a third (a Ruby on Rails site I built for a local nonprofit, almost done), and
  • trying to wrap up on a fourth (PHP/AJAX dashboard for a call center in the US).

I’m doing the Drupal 6 development in a virtual machine on my system, with an integration server set up externally. Consulting for the second project is done on-site or through e-mail. The Rails site is on a virtual server. The dashboard project is now on the company’s servers (IIS6/Microsoft SQL Server), which I can VPN into and use Remote Desktop to access. I’m glad I have two computers and a standing desk (read: kitchen counter) that makes it easy to use both!

Today was one of those days. I helped my new team member set up his system so that he could start working on our project. He’s on Mac OS X. It took us some time to figure out some of the quirky behaviour, such as MySQL sockets not being where PHP expected them to be. Still, we got his system sorted out, so now he can explore the code while I’m on vacation tomorrow.

In between answering his questions, I replied to the consulting client’s questions about Drupal and the virtual image we set up yesterday. That mainly required remembering what we did and how we set it up. Fortunately, that part was fairly recent, so it was easy to answer her questions.

Then I got an instant message from the person I worked with on the fourth project, the call-center dashboard. He asked me to join a conference call. They were having big problems: the dashboard wasn’t refreshing, so users couldn’t mark their calls as completed. It was a little nerve-wracking trying to identify and resolve the problem on the phone. There were two parts to the problem: IIS was unresponsive, and Microsoft SQL Server had stopped replicating. The team told me that there had been some kind of resource-related problem that morning, too, so the lack of system resources might’ve cascaded into this. After some hurried searching and educated guesses about where to nudge the servers, I got the database replication working again, and I set IIS back to using the shared application pool. I hope that did the trick. I can do a lot of things, but I’m not as familiar with Microsoft server administration as I am with the Linux/Apache/MySQL or Linux/Apache/PostgreSQL combinations.

I felt myself starting to stress out, so I deliberately slowed down while I was making the changes, and I took a short nap afterwards to reset myself. (Coding or administering systems while stressed is a great way to give yourself even more work and stress.)

After the nap, I was ready to take on the rest. The client for the Rails project e-mailed me a request to add a column of output to the report. I’d archived my project-related virtual machine already, so I (very carefully) coded it into the site in a not-completely-flexible manner. I found and fixed two bugs along the way, so it was a good thing I checked.

Context-switching between Drupal 6, Drupal 7, and Rails projects is weird. Even Drupal 6 and Drupal 7 differ significantly in terms of API, and Rails is a whole ‘nother kettle of fish. I often look things up, because it’s faster to do that than to rely on my assumptions and debug them when I’m wrong. Clients and team members watching me might think I don’t actually know anything by myself and I’m looking everything up as I go along. Depending on how scrambled my brain is, I’d probably suck in one of those trying-to-be-tough job interviews where you have to write working code without the Internet. But it is what it is, and this helps me build things quickly.

On the bright side, it’s pretty fun working with multiple paradigms. Rails uses one way of thinking, Drupal uses another, and so on. I’ve even mixed in Java before. There were a few weeks I was switching between enterprise Java, Drupal, Rails, and straight PHP. It’s not something I regularly do, but when the company needs it, well… it’s good exercise. Mental gymnastics. (And scheduling gymnastics, too.)

I like having one-project days. Two-project days are kinda okay too. Four-project days – particularly ones that involve solving a problem in an unfamiliar area while people are watching! – are tough, but apparently survivable as long as I remember to breathe. =)

Here are tips that help me deal with all that context-switching. Maybe they’ll help you!

Look things up. It’s okay. I find myself looking up even basic things all the time. For example, did you know that Ruby doesn’t have a straightforward min/max function the way PHP does? The canonical way to do it is to create an array (or other enumerable) and call the min or max member function, like this: [x,y].max. Dealing with little API/language quirks like that is part of the context-switching cost. Likewise, I sometimes find myself wishing I could just use something like rails console in my Drupal sites… =)

Take extensive notes. Even if you’re fully focused on one project and have no problems remembering it now, you might need to go back to something you thought you already finished.

Slow down and take breaks. Don’t let stress drag you into making bad decisions. I felt much more refreshed after a quick nap, and I’m glad I did that instead of trying to force my way through the afternoon. This is one of the benefits of working at home – it’s easy to nap in an ergonomic and non-embarrassing way, while still getting tons of stuff done the rest of the day.

Clear your brain and focus on the top priority. It’s hard to juggle multiple projects. I made sure my new team member had things to work on while I focused on the call center dashboard project so that I wouldn’t be tempted to switch back and forth. Likewise, I wrote the documentation I promised for that project before moving on to the Rails project.

Breathe. No sense in stressing out and getting overwhelmed. Make one good decision at a time. Work step by step, and you’ll find that you’ll get through everything you need to do. Avoid multi-tasking. Single-task and finish as much as you can of your top priority first.

I prefer having one main project, maybe two projects during the transition periods. This isn’t always possible. Programming competitions helped me learn how to deal with multiple chunks of work under time pressure, and I’m getting better at it the more that work throws at me.

What are your tips for dealing with simultaneous projects?

2011-06-30 Thu 16:19

Jun 10 2011
Jun 10

One of our clients asked if we had any tips for documenting and managing Drupal configuration, modules, versions, settings, and so on. She wrote, “It’s getting difficult to keep track of what we’ve changed, when, and for what reason, and which settings need to be moved to production versus which settings are there for testing purposes.” Here’s what works for us.

Version control: A good distributed version control system is key. This allows you to save and log versions of your source code, merge changes from multiple developers, review differences, and roll back to a specified version. I use Git whenever I can because it allows much more flexibility in managing changes. I like the way it makes it easy to branch code, too, so I can start working on something experimental without interfering with the rest of the code.

Issue tracking: Use a structured issue-tracking or trouble-ticketing system to manage your to-dos. That way, you can see the status of different items, refer to specific issues in your version control log entries, and make sure that nothing gets forgotten. Better yet, set up an issue tracker that’s integrated with your version control system, so you can see the changes that are associated with an issue. I’ve started using Redmine, but there are plenty of options. Find one that works well with the way your team works.

Local development environments and an integration server: Developers should be able to experiment and test locally before they share their changes, and they shouldn’t have to deal with interference from other people’s changes. They should also be able to refer to a common integration server that will be used as the basis for production code.

I typically set up a local development environment using a Linux-based virtual machine so that I can isolate all the items for a specific project. When I’m happy with the changes I’ve made to my local environment, I convert them to code (see Features below) and commit the changes to the source code repository. Then I update the integration server with the new code and confirm that my changes work there. I periodically load other developers’ changes and a backup of the integration server database into my local environment, so that I’m sure I’m working with the latest copy.

Database backups: I use Backup and Migrate for automatic trimmed-down backups of the integration server database. These are regularly committed to the version control repository so that we can load the changes in our local development environment or go back to a specific point in time.

Turning configuration into code: You can use the Features module to convert most Drupal configuration changes into code that you can commit to your version control repository.

There are some quirks to watch out for:

  • Features aren’t automatically enabled, so you may want to have one overall feature that depends on any sub-features you create. If you are using Features to manage the configuration of a site and you don’t care about breaking Features into smaller reusable components, you might consider putting all of your changes into one big Feature.
  • Variables are under the somewhat unintuitively named category of Strongarm.
  • Features doesn’t handle deletion of fields well, so delete fields directly on the integration server.
  • Some changes are not exportable, such as nodequeue. Make those changes directly on the integration server.

You want your integration server to be at the default state for all features. On your local system, make the changes you want, then create or update features to encapsulate those changes. Commit the features to your version control repository. You can check if you’ve captured all the changes by reverting your database to the server copy and verifying your functionality (make a manual backup of your local database first!). When you’re happy with the changes, push the changes to the integration server.
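In drush terms, that round trip looks roughly like this; the feature name and paths are hypothetical, and the commands assume the Features module’s drush integration is available:

```shell
# Local environment: capture configuration changes into code
drush features-update -y my_site_feature
git add sites/all/modules/features/my_site_feature
git commit -m "Capture configuration changes in my_site_feature"

# Integration server: pull the code, then make the database match it
git pull
drush features-revert -y my_site_feature
drush cc all
```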

Using Features with your local development environment should minimize the number of changes you need to directly make on the server.

Documenting specific versions or module sources: You can use Drush Make to document the specific versions or sources you use for your Drupal modules.
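As a sketch, a minimal make file for pinning versions might look like this; the module list and version numbers are illustrative only:

```
; example.make - pins core and module versions (numbers illustrative)
core = 6.x
api = 2
projects[views][version] = 2.12
projects[cck][version] = 2.9
; modules from other sources can be documented too:
projects[mymodule][type] = module
projects[mymodule][download][type] = git
projects[mymodule][download][url] = git://example.com/mymodule.git
```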

Testing: In development, there are few things as frustrating as finding you’ve broken something that was working before. Save yourself lots of time and hassle by investing in automated tests. You can use Simpletest to test Drupal sites, and you can also use external testing tools such as Selenium. Tests can help you quickly find and compare working and non-working versions of your code so that you can figure out what went wrong.

What are your practices and tips?

2011-06-09 Thu 12:25

Feb 14 2011
Feb 14



The excellent DrupalDevDays in Brussels, with over 500 developers attending, had a mystery sponsor: DrupalPond.

DrupalPond is an initiative of Krimson and DOP to get more Drupalistas together and give them the knowledge and the tools both companies have and use.

As in many countries, if not all, the demand for Drupalistas in the Low Countries is much higher than the supply. While some Drupal shops might think this is a good thing, since they can charge higher fees, it is actually a bad thing. Any market where supply and demand are only matched by pushing prices toward extremes will become unstable.

In the short term it will lead to gold diggers: shops and independent consultants who claim to do Drupal as well, while knowing little or nothing about Drupal, the code, the license, or the community. They just use it as a tool or, even worse, abuse the tool, because hacking core is so much easier than overriding things or using hooks to build a module. Sure, in the short term that is a "good" solution, but the moment you want to update....

To get more independent Drupal developers together, Krimson and DOP started DrupalPond: for companies seeking talent, and for consultants who have outgrown mom-and-pop websites and need a collective to land bigger opportunities. Because we need more Drupalistas, in both quantity and quality.



Each Drop added, even the smallest one, helps fill the pond, leading to more developers becoming more active. And many Drops will create interference: new patterns that are only possible when two or more Drupal developers (Drupalistas) work together. So far many people have shown interest in this new collective, and we hope it will add value to the Drupal community. For now our scope is the Netherlands and Belgium, but given demand from outside this area, we might consider expanding to the rest of Europe. The patchwork of cultures, languages, and habits that is Europe is not present when it comes to Drupal.

A strong European open source / Drupal movement is clearly happening. And while the proprietary software vendors are undergoing a shakeout (every country in the EU has a handful of "own" proprietary CMSes), the competition from these closed-source vendors will expand Europe-wide as well, so it is time to join forces. The DrupalDevDays is the best example of that!

Note: Drupal is a registered trademark of Dries Buytaert. DrupalPond uses the term "Drupal" under license.

Nov 11 2010
Nov 11

Setting up Simpletest and Drush on Drupal 6.x:

  1. Download and enable Simpletest with drush dl simpletest; drush en -y simpletest
  2. Download simpletest.drush.inc to your ~/.drush/drush_extras directory. This version allows you to run a single test from the command-line.
  3. Create a custom module with a tests/ subdirectory, and write your tests in it. (See this Lullabot Simpletest tutorial.)

We’re starting another Drupal project. While the IT architect is working on clarifying the requirements, I volunteered to implement the risky parts so that we could get a better sense of what we needed to do.

The first major chunk of risk was fine-grained access control. Some users needed to be able to edit the nodes associated with other users, and some users needed to have partial access to nodes depending on how they were referenced by the node. Because there were many cases, I decided to start by writing unit tests.

SimpleTest was not as straightforward in Drupal 6.x as it was in Drupal 5.x. There were a few things that confused me before I figured things out.

I wondered why my queries were running off different table prefixes. I didn’t have some of the data I expected to have. It turns out that Simpletest now works on a separate Drupal instance by default, using a unique table prefix so that it doesn’t mess around with your regular database. I’m doing this on a test server and I want to be able to easily look up details using SQL, so I needed to add this to my test case:

class ExampleTestCase extends DrupalWebTestCase {
  // Deliberately skip parent::setUp() so tests run against the regular
  // database instead of a sandboxed copy with a unique table prefix.
  function setUp() {
    global $base_url;
    $this->originalPrefix = $GLOBALS['db_prefix'];
  }
  function tearDown() { }
}

I also didn’t like how the built-in $this->drupalCreateUser took permissions instead of roles, and how it created custom roles each time. I created a function that looked up the role IDs using the {role} table, then added the role IDs and roles to the $edit['roles'] array before creating the user.
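A sketch of what that helper might look like as a method on the test case; the method name and the exact query are mine, not from the original:

```php
// Hypothetical helper for a DrupalWebTestCase subclass: create a test
// user, then attach existing site roles looked up from the {role} table
// instead of the throwaway roles drupalCreateUser() generates.
protected function createUserWithRoles($permissions, $role_names) {
  $account = $this->drupalCreateUser($permissions);
  $edit = array('roles' => $account->roles);
  foreach ($role_names as $name) {
    $rid = db_result(db_query("SELECT rid FROM {role} WHERE name = '%s'", $name));
    $edit['roles'][$rid] = $name;
  }
  user_save($account, $edit);
  return $account;
}
```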

Lastly, I needed to add the Content Profile operations to my custom user creation function. I based this code on content_profile.test.

$this->drupalLogin($account);
// create a content_profile node
$edit = array(
  'title' => $account->name,
  'body'  => $this->randomName(),
);
$this->drupalGet('node/add');
$this->drupalPost('node/add/' . str_replace(' ', '-', $role), $edit, t('Save'));

It would’ve been even better to do this without going through the web interface, but it was fine for a quick hack.

I had the setup I wanted for writing test cases that checked user permissions. I wrote functions for checking if the user could accept an invitation (must be invited, must not already have accepted, and must be able to fit). SimpleTest made it easy to test each of the functions, allowing me to build and test blocks that I could then put together.

The code in content_permission.module turned out to be a good starting point for my field-level permissions, while the Drupal node access API made it easy to handle the user-association-related permissions even though I used node references instead of user references.

It was a good day of hacking. I wrote tests, then I wrote code, then I argued with the computer until my tests passed. ;) It was fun seeing my progress and knowing I wasn’t screwing up things I’d already solved.

If you’re writing Drupal code, I strongly recommend giving SimpleTest a try. Implementing hook_node_access_records and hook_node_grants is much easier when you can write a test to make sure the right records are showing up. (With the occasional use of node_access_acquire_grants to recalculate…) Otherwise-invisible Drupal code becomes easy to verify. The time you invest into writing tests will pay off throughout the project, and during future work as well. Have fun!

Nov 09 2010
Nov 09

One of the best things about building websites with Drupal is that there are thousands of modules that help you quickly create functionality.

To set things up, you need to download Drush and add it to your path. For example, you might unpack it into /opt/drush and then add the following line to your ~/.bashrc:

PATH=/opt/drush:$PATH
export PATH

Reload your ~/.bashrc with source ~/.bashrc, and the drush command should become available. If you’re on Microsoft Windows, it might need some more finagling. (Or you can just give up and use a virtual image of Linux to develop your Drupal websites. You’ll probably end up much happier. ;) )

Once you’ve installed Drush, what can you do with it?

Drush is a huge time-saver. For example, I install dozens of modules in the course of building a Drupal website. Instead of copying the download link, changing to my sites/all/modules directory, pasting the download URL into my terminal window after wget, unpacking the file, deleting the archive, and then clicking through the various module enablement screens, I can just issue the following commands to download and enable the module.

drush dl modulename
drush en -y modulename

(The -y option means say yes to all the prompts.)

So much faster and easier. You can use these commands with several modules (module1 module2 module3), and you can use drush cli to start a shell that’s optimized for Drush.

Drush is also useful if you’ve screwed up your Drupal installation and you need to disable themes or modules before things can work again. In the past, I’d go into the {system} table and carefully set the status of the offending row to 0. Now, that’s just a drush dis modulename.

Drush has a bucketload of other useful commands, and drush help is well worth browsing. Give it a try!

Read the original or check out the comments on: How to use Drush to download and install Drupal modules (Sacha Chua's blog)

Dec 10 2008
Dec 10

My employer, OpenBand (an M.C. Dean company), is going to be a Gold sponsor of DrupalCon DC in March 2009, and a number of our team members will be attending the conference.

We have a few presentations to give, and will be keen to see many of the other sessions that are going to be given.

In the Powering collaboration in a distributed enterprise session we'll be giving an overview of the work that we do, the collaboration platform we've been building (largely on Drupal) for our customers over the past three years or so, and some of the modules that we've contributed back to the Drupal community during that time.

Miglius Alaburda will be presenting a session titled Introducing a new File Framework about a new and powerful way of handling files in Drupal.

Darren Ferguson will be talking about Drupal with XMPP Integration and all the functionality that he has built up around the XMPP framework, allowing users of a Drupal site to use instant messaging capabilities.

With all the sessions that have been proposed by attendees, this is shaping up to be a great conference!

Updated: Added Darren's XMPP talk

Jul 24 2008
Jul 24

So back in April I started talking to Keiran about doing a media and files sprint... well, it's finally happening. aaronwinborn is in Portland and dopry is going to be helping remotely. Aaron posted a great writeup on what we're hoping to accomplish, so I'll blockquote at length:

Andrew Morton (drewish), Darrel O'Pry (dopry, remotely), and I are heading up a Media Code Sprint in Portland this week! Come help, in person or remotely, if you're interested in multimedia and Drupal! It has now officially started, and as I've volunteered to help keep folks updated, here goes...

First the reasons.

Number One: Better Media Handling in Core

Dries conducted a survey prior to his State of Drupal presentation at Boston Drupalcon 2008, and number one on the top ten (or 11) list of what would make THE KILLER DRUPAL 7 Release was "Better media handling".

Let me repeat that. Better media handling.

People have done really amazing stuff in contrib, but it is difficult (if not impossible in many cases) for developers to coordinate the use of files, as there is no good means for file handling in the core of Drupal. Thus, we have several dozen (or more) media modules doing some small part, or even duplicating functionality, sometimes out of necessity.

We need (better) media and file handling in Drupal core. In particular, there has been a patch for a hook_file in the queue for over a year, which has been in the Patch Spotlight (for the second time, no less) since May! (And has been RTBC several times during that process...) Come on folks.

One of the powers of Drupal is its system of hooks. We have hooks to modify nodes, to notify changes to user objects, to alter nearly any data (such as forms and menus). Noticeably absent is a consistent handling for files or any sort of notification. We need hook_file.

So goal Number One: get media handling in core. The means? Add hook_file and make files into a 1st class Drupal object. We'll be creating a test suite for functionality in the hook_file patch to validate it and "grease the wheels" to get it committed.

The other goals of this sprint pale in comparison to the first in utility, but are still highly desirable and worthwhile.

Number Two: Refactor File Functionality in Core

As an extension to the first goal, there is a lot of inconsistency with how Drupal currently handles files. For instance, in some areas a function may return an object, and in others a string. Additionally, some functions are misnamed, or try to do too much to be useful as a file API.

Some specific examples: for what it does, file_check_directory may be better suited as something like file_check_writable, or maybe even split into that and file_check_make_writable. Also, for instance, file_scan_directory needs to return file objects, rather than the current associative array (keyed on the provided key) of objects with "path", "basename", and "name" members corresponding to the matching files. (The function does what it needs to, but the returned objects have keys not corresponding to anything else used in core.)

So goal Number Two: refactor file functionality in core. The means? Go through and check for (and fix!) existing file functionality for documentation and consistency.

Number Three: Spruce up Existing Contributed Media Modules

There are several much-needed multimedia modules that have not yet been upgraded to Drupal 6 (or which are still heavily in progress). This includes (but is not limited to) Image Field, Image API, and Embedded Media Field. Additionally, some major improvements can be made both to these and to other essentials such as the Image module, for instance creating a migration path from Image to Image Field (once that module is stable).

So goal Number Three: spruce up existing contributed media modules. The means? Get these modules upgraded!

I want to recognize the valiant and heroic efforts made by everyone to date, as fortunately, there has already been significant progress on all these fronts. That makes our job (relatively) easy. In some respects, we just need to finish up the jobs that have already been started.

Thus, drewish declared this week the Media Code Sprint!

We need you to help. If you are a developer, or want to be a developer, jump on in! If you aren't ready to develop, or consider yourself too new for that, you can still help test patches and functionality. Jump on in! And please, even if you don't know how to apply a patch, you can still help with documentation and other small (but important) tasks. Jump on in!

If you're in Portland, You Have No Excuse®. If not, you can jump into #drupal in IRC any time you're available.

The official dates for the sprint are today (Wednesday July 23, 2008) through Saturday (the 26th). We'll be online and working most of that time. I'll make sure we continue to post progress as the week develops.

Of course, as is the wonderful nature of Drupal, this is an ongoing process. Even if we achieve our stated goals, there will always be more.

Thanks,
Aaron Winborn
