Jul 11 2018

Someone recently asked the following question in Slack. I didn’t want it to get lost in Slack’s history, so I thought I’d post it here:

Question: I’m setting a CSS background image inside my Pattern Lab footer template which displays correctly in Pattern Lab; however, Drupal isn’t locating the image. How is sharing images between PL and Drupal supposed to work?

My Answer: I’ve been using Pattern Lab’s built-in data.json files to handle this lately. e.g. you could do something like:


...
{% set footer_background_image = footer_background_image|default('/path/relative/to/drupal/root/footer-image.png') %}
...

This makes the image load for Drupal, but fails for Pattern Lab.

At first, to fix that, we used the footer-component.yml file to set the path relative to PL. e.g.:


footer_background_image: /path/relative/to/pattern-lab/footer-image.png

The problem with this is that on every Pattern Lab page where we included the footer component, we had to add that line to the page's yml file. e.g.:


...
{% include '/whatever/footer-component.twig' %}
...


...
footer_background_image: /path/relative/to/pattern-lab/footer-image.png
...

Rinse and repeat for each example page… That’s annoying.

Then we realized we could take advantage of Pattern Lab's global data files.

So with the same footer-component.twig file as above, we can skip the yml files, and just add the following to a data file.

theme/components/_data/paths.json: (* see P.S. below)

{ "footer_background_image": "/path/relative/to/pattern-lab/footer-image.png" }

Now, we can include the footer component in any example Pattern Lab pages we want, and the image is globally replaced in all of them. Also, Drupal doesn’t know about the json files, so it pulls the default value, which of course is relative to the Drupal root. So it works in both places.

We did this with our icons in Emulsify.



End of the answer to your original question… Now for a little more info that might help:

P.S. You can create as many json files as you want here. Just be careful you don't run into namespacing issues. We accounted for this in the header.json file by namespacing everything under the "header" array. That way the footer nav doesn't pull our header menu items, or vice versa.
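For instance, a namespaced header data file might be shaped like this (the key names and menu items below are made-up placeholders for illustration, not Emulsify's actual structure):

```json
{
  "header": {
    "menu_items": [
      { "title": "Home", "url": "/" },
      { "title": "About", "url": "/about" }
    ]
  }
}
```

Because everything lives under "header", a footer component reading its own top-level keys never collides with it.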

An example homepage (home.twig) that pulls menu items for the header and the footer from data.json files.



How to share Drupal content via Facebook with correct images

Jul 16 2015

Over the past couple of years, the Drupal community has been hard at work trying to confront a variety of challenges concerning Drupal core development, especially issues like burnout and funding. People like heyrocker, YesCT, and Alex Pott have been especially outspoken. I have been paying close attention to these discussions as well as engaging in conversations with some of the key individuals involved.

Until last week I worked in the nonprofit sector, and I have been struck by the similarities between code contributions to Drupal core and monetary contributions to nonprofit organizations. I do not think it is a coincidence that nonprofit fundraising departments are also called development departments, or that both Drupal and nonprofits require contributions.

What follows is a broad discussion of Drupal core development through the lens of current fundraising practices. It will unfortunately gloss over the important contributions of countless individuals and organizations, especially in the contributed module space. It is my hope that through this rubric we might develop an increased appreciation of the current state of Drupal core development.


For many nonprofits, individual gifts are the lifeblood of the organization and collectively are often the largest single source of revenue.

Drupal's "membership department" (my phrase, not theirs) consists of individuals like YesCT, Cottser, xjm, and ZenDoodles. These people take it upon themselves to help other people contribute to Drupal core by organizing frequent code sprints, helping folks climb the Drupal ladder, answering questions and offering assistance during core office hours, and much more. Like many of the people I know that work in nonprofit membership, these people often project a seemingly endless supply of patience.

Much like the many nonprofits shifting their strategy from getting one-time contributors to attracting sustainers -- donors who contribute monthly -- the concerted effort to grow the Drupal community of core contributors is succeeding. Not only is the number of contributors to Drupal 8 already more than twice the number that contributed code to Drupal 7, but the number of people who have contributed more than one patch to Drupal 8 has also grown considerably.

Corporate Support

For nonprofits, corporate support can sometimes mean getting nice things, such as a new building or an endowed professorship. Corporations like to be recognized for their contributions.

In Drupal, it would seem that we are rather healthy in this area, too. We have created a situation in which companies want to contribute back. In the contributed module space, for example, we have nice things like Drupal Commerce, Workbench, Backup and Migrate, and so much more. We also have numerous companies that encourage their employees to work on Drupal for a certain percentage of their time each week. Further, we have a somewhat unique situation with Acquia's Office of the CTO (OCTO), which, with the involvement of Drupal's project lead, helps tremendously in pushing Drupal core development forward.

I would guess that much of this corporate support in code comes from the Drupal Association's excellent work in securing financial contributions. However, because the Drupal Association has no authority over the planning, functionality and development of the Drupal software, their involvement in this area is necessarily limited.

Major Giving

Nowadays, nonprofits are increasingly involved in cultivating relationships with major donors. They are called "major gifts" because they have a major influence on an organization's ability to succeed (or not). For example, when I worked for the Minnesota Orchestra, I was shocked to find out that a single donor would pay for all of the costs to send the orchestra on a tour of Europe.

In Drupal, we have people like Daniel Wehner, Tim Plunkett, and many others that devote significant time and code to Drupal core. We have Alex Pott, who will forever hold a place in Drupal lore as the guy who quit his job to work full time on Drupal.

In many nonprofits the role of the top executive is increasingly becoming that of chief fundraiser. Presidents, Executive Directors, and General Managers are just as likely, if not more likely, to be experts in finance or fundraising as they are experts in their organizational subject matter.

This is one of the areas where my analytical tool begins to break down. For Drupal, the equivalent would be for Dries to be spending his time flying around the world, taking top contributors out to lunch to talk about the future of the Drupal project. While that model proves successful for nonprofits, it hardly seems like a good use of Dries's time. Alex Pott, for example, is a visionary leader in his own right and does not need to be cultivated. Alex needs people to help him reach his goal on Gittip.

Overall, it seems like the Drupal community recognizes the contributions of its major givers and does its best to take care of them. When yched's laptop started melting, the community bought him a new one. DrupalCons and numerous DrupalCamps, such as BADCamp and Twin Cities Drupal Camp (TCDC), bring core contributors together in real life through scholarships and other support. (I like to brag that Views in Drupal Core was born at TCDC.) We are also starting to see some success with Drupalfund and the Drupal community on Gittip.

Parting Thoughts

While my goal here has been to re-imagine Drupal contributions rather than to solve some of the very real problems we face, I cannot help but wonder if some folks have already figured out the magic formula: patronage. This formula allows organizations to get what they need while supporting Drupal core development. While not perfect, the patronage system worked for the Esterházy family and Haydn, the Medicis and da Vinci, and countless others. Nowadays, we have Stanford University and Tim Plunkett. Tim's employer requires him to work on Drupal sites, while also giving him a generous amount of time to work on Drupal core. His employer benefits, Drupal benefits, and Tim benefits.

Even if patronage is not the answer, then I think we will benefit by thinking about Drupal core like nonprofits approach fundraising. We need to grow contributors and do whatever it takes to keep folks coming back for more. We need to make it beneficial for businesses to support work on Drupal core and recognize those contributions. And we need to keep our major givers happy.

P.S. If you found this article helpful, please consider looking at these two issues that I'm responsible for that will help get CMI to beta: https://drupal.org/node/2108813 and https://drupal.org/node/2148199.

Dec 13 2012

There are a lot of approaches for resizing images (ImageCache, Image, IMCE, and many others). However, when building a site for a simple user (no previous web experience), I found that all of these approaches required too much effort on the part of the end user. Why can't a user just resize an image in their WYSIWYG and not worry about the size of the image at all? That's the goal accomplished by the Image Resize Filter. Despite its extremely techie-sounding name, it's ridiculously easy to use. It provides inline resizing of images to match any <img> tag in any HTML inserted in any Drupal textarea that supports filtering.

Jul 24 2010

Posted by admin


This post describes a method to take a picture with a digital camera (preferably a DSLR) and have it instantly uploaded to a Drupal website, where it will be imported as an image node. All these steps happen automatically; you only need to press the shutter button of your camera to start the whole process. Continuous shooting is also supported.

Camera to Drupal workflow: Trigger the whole process only by clicking the shutter of your camera.


You will need to set up some tools in order to have all the software necessary to make this work.

  1. Gphoto2

    Gphoto2 is the command line utility that we will use to connect the computer to the camera. It supports tethered capture, which is the name for the process of taking a picture and having it instantly downloaded to your computer. Other well-known applications like Aperture and Lightroom support tethering, but they are not command line scriptable, not open source, and too bloated for what we intend to do here.
    Most popular Linux distributions have packages for installing gphoto2, including all its dependencies. On the Mac you can install it using MacPorts with the command "sudo port install gphoto2".

  2. Drupal and Image module

    A Drupal site with the image module will be needed. You must enable at least the image and image_import modules. They are easy to install and configure; I leave setting them up for importing images from a folder as an exercise to the reader.

Capturing images with Gphoto2

Some previous tips that will save you troubleshooting time:

  • Use the most recent version of gphoto2; the software is under pretty active development, with new camera drivers and bugfixes added frequently. I wasn't able to write this post until a recent version of gphoto fixed a bug with my camera.

  • If your computer has any software that automatically launches when you connect your camera, it will need to be disabled or quit, since it will prevent gphoto from communicating with the camera. There may even be a daemon process that launches in the background and prevents gphoto from working.

  • For example, Mac users will have to kill the process called PTPCamera, or it will prevent gphoto from communicating with the camera. The PTPCamera process launches in the background each time you connect your camera to the computer. You can see if the process is running using the "ps ax | grep PTPCamera" command in a terminal. To kill this process, execute the command "killall PTPCamera". Do this every time you connect the camera in order for gphoto to work.

  • Disable the camera's auto-off feature. Cameras usually have this energy-saving feature preconfigured to something like 30 seconds. I recommend turning it off when capturing images from the computer: if you don't do anything within the configured time, the camera will turn off, and you will have to turn it on again or reconnect it, kill the PTPCamera process (or equivalent), and launch gphoto again. With this feature disabled, you can take pictures without interruptions for as long as you (or your batteries) last.

Once you connect the camera to the computer and turn it on, you can check if gphoto detects it with:

$ gphoto2 --auto-detect
Model                          Port                                            
Canon EOS 450D (PTP mode)      usb:            
Canon EOS 450D (PTP mode)      usb:036,003

If you don't see your camera listed, you should review the tips mentioned above. To see whether your camera is supported by gphoto2, you can use:

$ gphoto2 --list-cameras
Number of supported cameras: 1245
Supported cameras:
        "Achiever Digital Adc65"
        "AEG Snap 300"
        "Agfa ePhoto 1280"

A long list of supported camera models will be shown (grep is your friend).
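For example, you can pipe the list through grep to find your model. The sketch below uses a printf stand-in for the real `gphoto2 --list-cameras` output, so it runs without gphoto2 or a camera attached:

```shell
# Filter the supported-camera list for a specific model.
# printf stands in for `gphoto2 --list-cameras` output here.
printf '%s\n' \
  '"Achiever Digital Adc65"' \
  '"AEG Snap 300"' \
  '"Canon EOS 450D (PTP mode)"' \
  | grep -i 'eos 450d'
# prints: "Canon EOS 450D (PTP mode)"
```

With a real camera list you would simply run: gphoto2 --list-cameras | grep -i 'eos 450d'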

Gphoto has several commands that you can test:

  • gphoto2 --capture-image
    Will capture an image and leave it on the memory card in the camera.
  • gphoto2 --capture-image-and-download
    Will capture an image and download it immediately to the current folder.

But the most interesting command is:

  • gphoto2 --wait-event-and-download

renamed in recent versions to:

  • gphoto2 --capture-tethered

This command waits for you to take a picture by pressing the shutter button on the camera and then the image is automatically downloaded to the current folder. The command keeps waiting for more events (more pictures) until you interrupt it with CTRL-C (you can also pass the amount of time to wait as an argument).

Uploading the pictures to the server

The gphoto option that makes the magic happen here is --hook-script. This option lets you specify a script that will be executed after each image has been downloaded, receiving the file name as an argument.

So if you do something like this:

$ gphoto2 --capture-tethered --hook-script=test-hook.sh

gphoto will wait for you to take pictures with the camera and after each picture is downloaded to the computer it will execute the test-hook.sh script passing the filename as an argument.
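Under the hood, gphoto2 communicates with the hook script through environment variables: $ACTION holds the event type (init, start, download, stop) and $ARGUMENT holds the file name. Here is a minimal sketch of that protocol; hook-demo.sh is a hypothetical stand-in, so this runs without gphoto2 or a camera:

```shell
# Create a tiny hook script that reacts to the "download" event.
cat > hook-demo.sh <<'EOF'
#!/bin/sh
case "$ACTION" in
  download) echo "would upload: $ARGUMENT" ;;
  *)        echo "event: $ACTION" ;;
esac
EOF
chmod +x hook-demo.sh

# Simulate what gphoto2 does right after saving capt0000.jpg:
ACTION=download ARGUMENT=capt0000.jpg ./hook-demo.sh
# prints: would upload: capt0000.jpg
```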

There is a stub test-hook.sh script in the gphoto documentation folder (e.g. /usr/share/doc/gphoto2). We will copy this stub script to the current folder and modify it to upload the picture to the server where our Drupal site is installed. More precisely, the file must be uploaded to the folder the image_import module is configured to take images from (configured at admin/settings/image/image_import).

Depending on how you wish to upload the files to the server, the commands to include in the script will differ. I will use scp with public key authentication to upload the files. You may opt for FTP using wget, curl, or ncftpput.

The final test-hook.sh script will look like this:

#!/bin/sh
self=`basename $0`
case "$ACTION" in
  init)
    echo "$self: INIT"
    # exit 1 # non-null exit to make gphoto2 call fail
    ;;
  start)
    echo "$self: START"
    ;;
  download)
    echo "$self: DOWNLOAD to $ARGUMENT"
    # Upload the image file to the server
    DEST=example.com:/var/www/images.example.com/files/tmp/image/
    scp $ARGUMENT $DEST
    # Call the image importing script (URL reconstructed here;
    # point it at wherever you save the import script)
    wget -O - -q \
      "http://images.example.com/image-cap.php?filename=`basename $ARGUMENT`"
    ;;
  stop)
    echo "$self: STOP"
    ;;
  *)
    echo "$self: Unknown action: $ACTION"
    ;;
esac
exit 0

Importing the uploaded images

Now I will explain the wget part of the previous script; this is the trickiest part of the whole process.

When you upload the files to your server, you need some way to tell the Drupal site: "hey, I've uploaded these files, import them as nodes". You can do this from the user interface of the image_import module, but we want to make it automatic, so we need another way.

The simplest way to do this is to write a php script that will invoke the image module function for importing images. The script will look something like this:

<?php
// Bootstrap Drupal so the image module API is available to this script.
require_once './includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

$filepath = 'sites/images.example.com/files/tmp/image/'. $_GET['filename'];

global $user;
// Impersonate a user with the 'create images' permission
$uid = 3;
$user = user_load($uid);

if (!empty($filepath) && file_exists($filepath)) {
  if ($node = image_create_node_from($filepath)) {
    echo "NODE: $node->nid : $node->title : CREATED\n";
  }
}

The $uid variable must be set to the uid of a user with the 'create images' permission.

The $filepath variable assignment must also be modified to point to the directory where the images were uploaded.

You can also pass other arguments to the image_create_node_from function, like the taxonomy term of the gallery where you want the pictures archived (see image.module).

Save this script in the Drupal installation root directory (where index.php is). I named it image-cap.php but you should use another name so no one except you knows how to execute it.
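One way to generate an unguessable name for the script (the naming scheme here is just an illustration):

```shell
# Build a random, hard-to-guess name for the import script so the URL
# cannot be found by guessing.
name="image-cap-$(od -An -N8 -tx1 /dev/urandom | tr -d ' \n').php"
echo "$name"
# e.g. image-cap-9f2c41d07a33b6e8.php (16 random hex characters)
```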

And this is where the wget part of the test-hook.sh script comes in:

wget -O - -q \

This command just calls the image-cap.php script on your server, passing the filename of the image that was just uploaded. The image-cap.php script will take this file and convert it into an image node.

Launching the whole process

Once you have configured everything as explained above, you should be able to launch:

$ gphoto2 --capture-tethered --hook-script=test-hook.sh

and each time you press the shutter button of your camera a picture will be taken, downloaded to the computer, uploaded to the server and imported into the Drupal site as an image node.

Gphoto queues pictures nicely, so you can shoot as fast as you like (including continuous shooting) without waiting for the previous picture to finish processing; they will be processed in sequence.

The output should look something like this:

$ gphoto2 --capture-tethered --force-overwrite --hook-script=test-hook.sh 
test-hook.sh: INIT                                                             
test-hook.sh: START
Waiting for events from camera. Press Ctrl-C to abort.
Saving file as capt0000.jpg                                                    
test-hook.sh: DOWNLOAD to capt0000.jpg
capt0000.jpg                     100%  544KB 543.8KB/s   00:00    
NODE: 141 : capt0000.jpg : CREATED
Deleting 'capt0000.jpg' from folder '/'...

TIP: You may be interested in setting your camera's resolution to a low value: publishing pictures for the web doesn't require high resolution, and bigger files take longer to upload, which is bad if you want instant publishing.


Sports events, conference sessions, keynote live blogging, you name it... There are several situations in which instant publishing of your pictures may be interesting and give you a competitive advantage if you are a professional.

With the method described here you can achieve instant distribution of your pictures by using only opensource software.

The process can be adapted to other CMS choices and extended for more extensive processing of images (watermarking, rotation) or further distribution of the created media.

About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
