May 25 2018

Illustration of search function on a website.

Search is an important facet of any large website these days. We’d talked previously about why you want to take full control of your site search. Bombarding your users with a mess of links won’t do anyone any favors. One of our favorite solutions for this problem is Apache Solr and recently we had the opportunity to set it up on Drupal 8. Let’s take a moment to go through a bit of what that solution looked like and some thoughts along the way.

Setting the stage

Before we dive too far into the how, we really ought to give a bit of time to the why. More specifically, we need to cover why we didn't simply use one of the existing modules contributed to Drupal by the community. At the time, there was one prominent module group for implementing Solr search in Drupal 8: the Search API module, in tandem with the Search API Solr Search module, was really the most direct way to implement this kind of advanced search on your site. These are great modules and, for a different situation, would have worked just fine. Unfortunately, the requirements we were working with for the project were more specific than these modules were equipped to handle.

There were three key things we needed control over, and we aren't keen on hacking a module to get something like this done. First, we needed specific control over what was indexed into Solr. The Search API module lets you specify generically how fields are translated to the Solr index, but if you need different handling you would either need multiple indexes or have to sacrifice some of that customization. Second, the site needed to make use of a fairly complicated feature of Solr: the more like this query. (Warning, incoming search jargon!) This query allows you to search the index for content relevant to another indexed piece of content. Relevancy is determined by fields you specify in the query, and results can be limited to content that meets a certain relevancy score threshold.
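For the curious, a raw more like this request against Solr's HTTP API looks roughly like this (the core name, document id, and field list here are made up for illustration; mlt.fl, mlt.mintf and mlt.mindf are the real parameter names):

```text
GET /solr/mycore/mlt?q=id:123&mlt.fl=title,body&mlt.mintf=1&mlt.mindf=1&rows=5
```

Solr finds the document matching the q parameter, extracts interesting terms from the fields listed in mlt.fl, and returns the most similar documents in the index.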

The last thing we had to have was the ability to manage how often content was indexed. The existing modules allowed indexing to happen on a periodic cron run, but couldn't update the index as soon as changes were made to content. This project was going to have a lot of content updated each day, and that meant we couldn't afford to wait for things to be indexed and updated. With these three hurdles to getting Solr implemented in this project, it seemed like we were going to have to go another way, but after looking at some documentation we determined that creating our own implementation would not be so difficult.


Solr search with Solarium

Before we get too far ahead of ourselves, we should note that this wasn't done with a contributable module in mind. That isn't because we don't like giving back to the community (we totally do); it's because this was created for a very specific client need. There will likely be a more generic version of this coming down the road if demand is high enough. Also, we are under the impression that most use cases are covered by the modules mentioned above, so that is where most people should start. Enough with the disclaimers; let's talk Solarium.

We went with Solarium as the Solr client to use for this. That is what most of the existing Drupal modules use and it seemed to be the most direct way to do this with PHP. Installing Solarium is pretty simple with Composer and Drupal 8. (If you aren’t using Composer yet, you really should be.) Using a client for communicating with a Solr instance isn’t specifically required. Ultimately, the requests are just simple HTTP calls, but the client saves you from having to memorize all of the admittedly confusing query language that comes with using Solr.

Installing Solarium can be done as simply as composer require solarium/solarium. You could also do this by adding "solarium/solarium": "3.6.0" to the require section of your composer.json file. Your approach on this part may vary, but this should be done from the root of your Drupal site so that the library goes into the global dependencies for the project. These instructions are detailed a bit more in the official Solarium docs, which also have a bunch of example code that will help if you dive into this like we did.
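If you go the composer.json route, the relevant section would look something like this (using the 3.6.0 version mentioned above):

```json
{
  "require": {
    "solarium/solarium": "3.6.0"
  }
}
```

After editing the file, a composer update solarium/solarium will pull the library in.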

For this implementation, we opted to create a Solr PHP class to do the heavy lifting and made use of a Drupal service for calls to it from the rest of the app.


namespace Drupal\my_module\Solr;

use Solarium\Core\Client\Client;

class SolrExample {

  /**
   * Connection to the Solr server.
   *
   * @var Client
   */
  protected $solr;

}

The heart of the class is going to be the connection to Solr which is done through the Solarium client. We will make use of this client in our constructor by setting it up with the credentials and default settings for connection to our Solr instance. In our case, we used a config form to get the connection details and are passing those to the client. We wanted to use the configuration management system so that we could keep those settings consistent between environments. This allowed more accurate testing and fewer settings for developers to keep track of.


/**
 * Solr constructor.
 */
public function __construct() {
  // Normally we'd inject this, but for this example we'll ignore that.
  $config = \Drupal::config('example.solr_config');
  $settings = [
    'endpoint' => [
      'default' => [
        'host' => $config->get('host'),
        'port' => $config->get('port'),
        'path' => $config->get('path'),
        'scheme' => $config->get('protocol'),
        'http_method' => 'AUTO',
        'site_hash' => TRUE,
      ],
    ],
  ];
  $this->solr = new Client($settings);
}


We are doing this in the constructor so that we don’t have to create a new client connection multiple times during a given call. In our case, we ended up using this as a Drupal service which allows us to only have the Client object created once per call and gives a simple way to use this class throughout the app.
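As a sketch of that service wiring (the module and service names here are placeholders, not the real project's), the services.yml entry is about as simple as it gets:

```yaml
# my_module.services.yml (hypothetical module name)
services:
  my_module.solr:
    class: Drupal\my_module\Solr\SolrExample
```

With that in place, the rest of the app can call \Drupal::service('my_module.solr') or, better yet, use constructor injection, and the Client object is only built once per request.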

The next part is the actual search method. It does a lot, and that may not be clear from the code below. In this method, we take the parameters passed in and build a Solr query. A helper function does some specific formatting of the search terms to put them in the right query syntax. For most sites, this code would serve fine for generic searching of the whole index, or you could have multiple versions for searching with specific filters.

/**
 * General search functionality.
 *
 * @param array $params
 *
 * @return array
 */
public function search($params = []) {

  $query = $this->solr->createSelect();

  $default_params = [
    'start' => 0,
    'rows' => 20,
    'sort' => 'score',
    'sort_direction' => 'DESC',
    'search' => '*:*',
    'time' => '*',
  ];

  $params = array_merge($default_params, $params);

  // Build a proper Solr search string from the search params.
  $search_string = $this->getTextSearchString($params['search'], $params['time']);

  $query->setQuery($search_string);
  $query->setStart($params['start'])->setRows($params['rows']);
  $query->addSort($params['sort'], $params['sort_direction'] == 'ASC' ? $query::SORT_ASC : $query::SORT_DESC);

  // HttpException here is Solarium's \Solarium\Exception\HttpException.
  try {
    $results = $this->solr->select($query);
    return ['status' => 1, 'docs' => $results->getData()['response']['docs']];
  }
  catch (HttpException $e) {
    \Drupal::logger('custom_solr')->warning('Error connecting to Solr while searching content. Message: @message', ['@message' => $e->getMessage()]);
    return ['status' => 0, 'docs' => [], 'message' => 'Unable to reach search at this time. Try again later.'];
  }
}

The code we’ve presented so far isn’t breaking new ground and for the most part does a similar job to the existing search modules available from the Drupal community. What really made us do something custom was the more like this feature of Solr. At the time that we were implementing this, we found that piece to be not quite working in one module and impossible to figure out in another, so we put our own together. 

Thankfully, with Solarium this was a pretty simple query to tackle, and we were able to have related content on the site without much other setup. We create a new more like this query and submit an id so Solr knows which content to compare against for similarity. The rest behaves very similarly to the search method presented previously. The results are returned the same way, and we are able to do some other filtering to change the minimum relevancy score or number of rows.

$query = $this->solr->createMoreLikeThis();
$query->setQuery('id:' . $id);
$query->setRows($params['rows']);
$query->setMltFields($params['mltfields']);
$query->setMinimumDocumentFrequency(1);
$query->setMinimumTermFrequency(1);
$query->createFilterQuery('status')->setQuery($params['queryfields']);

We didn’t share all of the code used for this here, obviously. The point of this post isn’t to help others create an exact duplicate of this custom implementation of Solarium in Drupal 8. At the time of this writing, it seems that the existing Solr modules might be in great shape for most use cases. We wanted to point out that if you have to dip into code for something like this, it can certainly be done and without an insane amount of custom code.

MIKE OUT


Jan 31 2018

Illustration of person scratching head at fork in the road

This is the first part in a series on how not to ruin your life on your next Drupal project. Sound extreme? Well, if you’ve ever suffered the crushing defeat of working your tail off on a lengthy project only to sit there at the end after launch feeling like you just came out of the opening night of Star Wars: The Phantom Menace (ie: severely disappointed and a bit confused), then you know that it is indeed extreme. We spend a majority of our day at work and when it’s not rewarding or energy-giving, it’s a real drag.

So what is the formula? Well, a blog post isn’t going to solve all your problems - but - there are certainly key approaches that we have taken that have helped us avoid catastrophe time and time again. Translation? We’ve managed an extremely high customer satisfaction rate for over two decades. What’s been happening here seems to be working so we pay a lot of attention to what it is exactly that we are doing and assess why we think it’s working. If you want a high-level bird's-eye view, check out our process page. We are going to get a bit downer and dirtier here though.

Ultimately, we want you to go home to your family at the end of the day saying “GUESS WHAT I DID AT WORK TODAY EVERYONE!!” (like we do) instead of “Can we just order pizza and go to bed at 7?”.

 We’ve identified 3 essential components to kicking a project off right, the first of which will be covered in this post. They are the following:

  1. Aggressive and Invested Requirements Gathering
  2. Relentless Ideation
  3. Atomic Preparation

So let’s start with Aggressive and Invested Requirements Gathering. We spent a lot of time thinking about this and I realized it comes down to the adjectives. Everyone knows (mostly) about requirements gathering, but it’s a minefield of unasked questions, unanswered questions, misconceptions, forgetfulness, and chaos. The solution? Take ownership of this baby from the beginning and treat it like it’s your project - it’s your passion - and do what it takes to nail it down. Getting answers that make your life easier, despite your suspicions that the client is maybe not thinking it through, doesn’t help anyone. Take no shortcuts and care about everything.

“Take ownership of this baby from the beginning.”

Here are 3 specific goals:

Assess priorities (theirs and yours!)

Priorities are key because we can easily get hung up on things that ultimately aren’t that important. On the flip side, there are things that are tremendously important to one of the two parties, and hence, it must be important to both. So the client says I care most about X, then Y, then Z. In your head you’re thinking “Yikes, Z has a huge unknown element that I’d like to solve quickly to understand the implications.” So talk about it. Repeat their priorities back to them and state your own and find that happy middle ground where you can pursue the project in an efficient and effective way while also focusing on what matters. It sounds simple, but unspoken expectations or concerns are a plague in project management.

Determining constraints (time, money, features, personnel)

I still love the age-old project management triangle that says that for any given project, you can choose 1 of the 3 key priorities in a project: time, money or features. This means that you can’t simply dictate the budget and the schedule and also expect a very rigid set of requirements. The problem is that despite even stating this, there is a lot of pressure from the client to set the expectation on all three and that simply isn’t possible. So it’s critical early on to sort out what the real constraints are. Ok, you would like this to stay under $50k. Is that a hard cap or could you go over if you felt it was worth it? So you want this launched by January 1st. Is that more of a clean-sounding date or is this tied to a fiscal year, or some other real deadline? Ok, so you want features X, Y and Z. Which of those would be deal breakers to not have? This kind of questioning is very helpful because early on in the build phase, you can make intelligent decisions about how and when to collaborate with the client since you know the significance of obstacles or changes of directions that impact these things.

The last thing I’m throwing on top of this triangle is the concept of personnel. We’ve found that knowing who your stakeholders are, who your end users are, who your editors and admins are - early on - is critical. I’ve literally had meetings where we’re deep into requirements and then I meet the person who has veto power over everything, and the thing goes sideways. We’ve learned as well that there is a repeating sales cycle when new stakeholders arrive, because convincing the last three people doesn’t mean you’ve convinced the next three. I’ve also had times where a stakeholder makes some critical decisions, but then after talking to the people “on the ground”, I find that he was simply wrong about some of the day-to-day operations. It’s good to talk to everyone, but also find out each person’s role in the big picture. Oftentimes we’ve found ourselves advocating on behalf of lower-level employees who bring up important and practical issues that decision-makers are overlooking. It’s a delicate balance, but if the system isn’t welcomed and adopted by its primary users, the project will sink even if the ones writing the checks are getting what they think they want.

Reading between the lines

This is tied to the item above in a lot of ways, but it stands on its own as an important point. When you’ve done this long enough, you learn that what a potential client asks for is not always really the point. Often there is a hidden goal or motivation behind the formation of a feature request. Even if that request perfectly solves the need, it’s still important to discover the need, because it can affect the implementation and guide the specifics. For example, say a request is made to let users download an export of tracking data, but you dig and find out that they’re just using this tool to turn around and upload the export into a remote system, and it’s a bit of a pain. Maybe it’s better to build a web service where their system can talk directly to ours and users can step out of the daily grind.

Conclusion

So in summary - gather requirements the same way you’d date someone you’re thinking of marrying. Care about it and pursue it as if it’s the most important thing you’ve got going, with an end goal of a lifetime of happiness.

Up Next: Running a Drupal project the right way: Part 2 - Relentless Ideation 



Jun 14 2013

For a current Drupal 7 project that uses Ubercart and Ubercart Recurring to provide a subscription service, I needed the ability for an admin user to cancel a user's ongoing recurring fee when a subscription level is changed. I accomplished this with the following PHP rule:

<?php
// Load all recurring fees for a user.
$recurring_fees = uc_recurring_get_user_fees($user_uid);
// Loop through the fees and cancel each one.
foreach ($recurring_fees as $fee) {
  uc_recurring_fee_cancel($fee->rfid);
}
?>

I tied this in with a rule to clear the user's cart and another rule to add the new subscription to the user's cart.

Anyway ... I have included an export of the component for ease of import.

Attachment: cancelusersubs.txt (614 bytes)
Jun 10 2013

On a current site in development I am using Ubercart to provide a renewable subscription service. To make the user experience clean, I wanted to protect the user from going 'shopping' to add their subscription. To do this I decided to use a rule to add the product to the user cart when the user is created by an administrator or when the subscription is cancelled or fails payment. I tried the Ubercart Rules module, but this is mainly for dealing with orders and not carts, and did not contain the needed add to cart rule.

Luckily, it is easy to make one yourself using the PHP action in Rules. The following is the code needed for adding a nid to a specified user's cart:

<?php
// uc_cart_add_item($nid, $qty = 1, $data = NULL, $cid = NULL, $msg = TRUE, $check_redirect = TRUE, $rebuild = TRUE)
uc_cart_add_item($product_nid, 1, NULL, $user_uid, FALSE);
?>

Please note the commented out add to cart call with all the bells and whistles. I have included an export of the component for ease of import.

Peace!

Attachment: addtocartrule.txt (615 bytes)
Sep 26 2012

I was playing with Navin, an Omega sub-theme, and wanted to make some minor adjustments to the CSS, and maybe have a custom template.php file to work with. The best way to do this is to create a sub-theme of the sub-theme (though I kind of wish there were a way to just add a plain old CSS file somewhere - and I feel like maybe there's a way to do that I'm spacing on).

To get this to work required a few steps. My use case is that I'm working within the Open Enterprise Drupal distribution to get a fairly simple Drupal-based blog set up. It comes with Navin as the default theme.

  1. Downloaded Omega Tools - a module that adds a Drush command to create quick and easy Omega sub-themes.
  2. Created a sub-theme using drush omega-subtheme nameofmythemehere on the command line (though the manual instructions would have worked using the HTML5 version, I believe).
  3. Copied the code from /themes/navin/navin.info (everything below the settings[alpha_css][navin-footer.css] = 'navin-footer.css' line) and pasted it into nameofmythemehere.info.
  4. Copied the code from comment 1 on a helpful Navin issue queue post and pasted it in above what I just added to nameofmythemehere.info. The idea here (if you're using a different sub-theme) is to pull the sub-theme's CSS into the sub-sub-theme.
  5. Changed the base theme from omega to navin.
  6. Enabled the new sub-theme on the Appearance settings page.
  7. Cleared my caches for this to take, because I had done some messing around in the .info file and wanted to clear out any stray settings.

That was enough to do the trick.


Sep 01 2012

I'm not sure how it happened, but today I noticed that Drupal's menus were behaving very oddly. After upgrading to Drupal 6 and installing several additional modules, I noticed duplicate menu entries as well as other disturbing oddities. Items I was placing into the menu were not showing up. Menu items that I moved around were apparently saved but they did not appear properly in a dropdown context. 

Looking further into it via the absolutely awesome SQLYog tool, I verified that there were dozens of duplicate entries. Some items were duplicated up to six times. It was a real mess.
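If you want to see the damage for yourself before resorting to a reset, a query along these lines will surface the duplicates (column names here follow Drupal 6's menu_links schema):

```sql
SELECT link_path, menu_name, COUNT(*) AS copies
FROM menu_links
GROUP BY link_path, menu_name
HAVING COUNT(*) > 1;
```

Any row with copies greater than 1 is a menu item that has been registered more than once.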

The database tables involved here are menu_links and menu_router. These are two of the more mysterious tables in Drupal. I haven't had the need to spend much time with them in the past, and I know now that this is a good thing. Fortunately, you do not have to know anything about them to fix this problem. While I spent a couple of hours carefully deleting items from these tables, ultimately I gave up. I was able to remove all the duplicates, but the menus were still misbehaving. At this point, I just wanted to do a factory reset on the menus, but it's not as simple as flushing the cache. However, that is not far from the solution.

This solution will do a 'factory reset' on your menus. You will lose any customizations you have made. However, all core and contrib module entries will be restored very nicely.

Please backup your entire database before doing any destructive database manipulation. 

Step one is to empty the corrupted tables:

In your favorite SQL client, run the following commands:

truncate menu_links;
truncate menu_router;

At this point, your site will be completely unusable. But not for long. To complete the final step, you will need to be comfortable with the Drupal admin's best friend, drush.

Simply run the following commands from your terminal (Viva tcsh!):

drush php-eval 'menu_router_build();'
drush cc menu

Now my menus are as fresh as the day they were installed.

Though I could not clearly identify the cause of this problem, I would suggest backing up your database before installing the Taxonomy Menu module.

Jul 17 2012
ao2

There are cases when you want to export some content from a website, in order to import the same content at some later point into a similar website in an automated fashion.

In my case I want some “default content” to be imported every time I rebuild a Drupal site with drush make; this is particularly useful if you are building a Drupal distribution or a Drupal installation profile, or basing your product on these concepts.

I am going to show how to set up a source site and a destination site, and one possible way to export and import the content from and into Drupal.

In the following text there are some assumptions:

  • The Operating System is Debian GNU/Linux, in particular the group of the web server is www-data.
  • The web server has per-user web directories configured, and the user name is username; you have to substitute your own in URLs when you follow the instructions below.
  • The DBMS is MySQL (BTW, when drush sql-create becomes available I may update the article and drop this assumption).
  • In the code sections below, lines starting with $ are supposed to be commands to be written in a command line shell.
  • The reader has some Drupal knowledge, especially with regard to modules installation and content creation, knowing what the Features and Deploy modules do is a plus.

I used the OpenOutreach distribution because it is simple enough and provides a default content type which supports images, which are not trivial to export.

Let's get started.

A script to set up test sites

Here is a script to make it easier to build test sites under $HOME/public_html/, let's call it create_test_site.sh:

#!/bin/sh
 
set -e
 
[ $# -eq 5 ] || { echo "usage: $(basename $0) <db_name> <db_user> <db_password> <site_path> <site_name>" 1>&2; exit 1; }
 
DB_NAME="$1"
DB_USER="$2"
DB_PASS="$3"
 
echo -n "MySQL root user. "
mysql -u root -p <<EOM
CREATE DATABASE IF NOT EXISTS ${DB_NAME};
USE ${DB_NAME};
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, LOCK TABLES \
ON ${DB_NAME}.*
TO '${DB_USER}'@'localhost' IDENTIFIED BY '${DB_PASS}';
FLUSH PRIVILEGES;
EOM
 
SITE_PATH="$4"
SITE_NAME="$5"
 
DISTRIBUTION=openoutreach
INSTALLATION_PROFILE=standard
WEB_SERVER_GROUP="www-data"
 
(
  cd $HOME/public_html
  drush dl "$DISTRIBUTION" --drupal-project-rename="$SITE_PATH"
  cd "$SITE_PATH"
  drush site-install "$INSTALLATION_PROFILE" \
    --site-name="$SITE_NAME" \
    --db-url="mysql://${DB_USER}:${DB_PASS}@localhost/${DB_NAME}"
  sed -i "s@^# RewriteBase /drupal\$@RewriteBase /~${USER}/${SITE_PATH}@" .htaccess
  chmod 775 sites/default/files && \
  sudo chgrp -R "$WEB_SERVER_GROUP" sites/default/files
)
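The sed line is the one piece of the script worth double-checking, since it edits .htaccess in place: it uncomments Drupal's RewriteBase line and points it at the per-user site path. Here is a quick sanity check of that substitution on a throwaway file (the user and site path are made up):

```shell
# Simulate the commented-out RewriteBase line Drupal ships in .htaccess.
printf '# RewriteBase /drupal\n' > /tmp/htaccess_demo

# The substitution the script performs, with example values filled in.
sed -i "s@^# RewriteBase /drupal\$@RewriteBase /~alice/mysite@" /tmp/htaccess_demo

cat /tmp/htaccess_demo
# → RewriteBase /~alice/mysite
```

Using @ as the sed delimiter avoids having to escape all the slashes in the paths.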

Set up the source site, create and export some content

Creating a new site is as simple as:

$ cd ~/public_html/
$ ./create_test_site.sh drupal_test_src user_test_src pass_test_src openoutreach_src "Source site"

The admin password will be printed on the standard output.

Log in to http://localhost/~username/openoutreach_src, go to node/add and create some content with images.

Install the modules needed to export the content (NOTE: development versions are needed for now):

$ cd ~/public_html/openoutreach_src
$ drush dl --dev ctools deploy entity entity_dependency features uuid
$ drush en deploy deploy_ui entity entity_dependency features uuid
$ drush cc all

Go to admin/structure/deploy/plans/add and create a new deployment plan selecting these options:

  • Managed aggregator
  • Fetch-only

Go to admin/content, select some content and add it to the deployment plan using the “Update Options” dropdown menu.

Go to admin/structure/features/create in order to export the content to a Feature:

  1. Choose a name (e.g. Default Content) and a version (e.g. 7.x-0.1).
  2. Select the option “Deployment: deploy_plans” from the “Edit components” dropdown menu; you will see the deployment plan defined before.
  3. Check it.
  4. Select the option “UUID entities: uuid_entities” from the “Edit components” dropdown menu; you will see an entry with the same name as the deployment plan defined before.
  5. Check that entry.
  6. Click on “Download feature”.

Let's say that the new feature module is called default_content-7.x-0.1.tar.

NOTE

The resulting feature seems to lack dependencies: not only are the needed modules from above missing, but the content type of the exported nodes, for instance, is not mentioned anywhere either.

Anyhow, as long as you move content between site installations with the same configuration, the default_content feature will work fine.
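
As a workaround, one could declare the missing dependencies by hand in the feature's .info file. This is only a sketch: the exact module list depends on your content, and default_content.info is the file generated inside the downloaded tarball:

```
; Hypothetical additions to default_content.info, naming the modules
; installed earlier so the feature pulls them in itself.
dependencies[] = deploy
dependencies[] = entity_dependency
dependencies[] = uuid
```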

Let's verify that by installing a destination site.

Set up a destination site and import the content

$ cd ~/public_html/
$ ./create_test_site.sh drupal_test_dest user_test_dest pass_test_dest openoutreach_dest "Destination site"
$ cd ~/public_html/openoutreach_dest
$ drush dl --dev ctools deploy entity entity_dependency features uuid
$ drush en deploy entity entity_dependency features uuid
$ drush cc all

Install the exported feature and enable it:

$ cp default_content-7.x-0.1.tar ~/public_html/openoutreach_dest/sites/all/modules
$ cd ~/public_html/openoutreach_dest/sites/all/modules
$ tar xvf default_content-7.x-0.1.tar
$ cd ~/public_html/openoutreach_dest/
$ drush en default_content
$ drush cc all

Enjoy the imported content at http://localhost/~username/openoutreach_dest.

Final words

I know that a push deployment plan can be used to exchange content between two actual sites, but remember that my aim is to import the content from code when rebuilding a site from scratch; that's why I exported the content to a “feature module”. In my use case, the destination site in this article can easily be seen as a future incarnation of the source site itself.

Someone may also wonder whether it is right at all to export content as code; well, in my case I really see this “default content” as configuration, so having it stored as PHP code in a feature module makes total sense to me.

Jan 19 2012
Jan 19

Pantheon (heart)s Drush. We took care when assembling our DROPs infrastructure to maintain Drush access for developers, and we'll be building more command-line power tools over time. The magic that allows us to offer Drush (as well as rsync and sftp) without traditional shell access will be the subject of a longer "Inside Pantheon" post coming up, but the end-result is there for you now.

When you log in, your account overview screen will let you snag a compiled drushrc.php file:

Download your drushrc file

On Linux/Mac OS you can drop this file into a .drush directory in your home directory. You can also put it in the aliases directory of your local Drush installation. Then run drush sa and you should get a list of aliases like so:

# drush sa
@pantheon.open-public-demo.dev
@pantheon.open-public-demo.test
@pantheon.open-public-demo.live
@pantheon.drupal-7-sandbox.dev
@pantheon.drupal-7-sandbox.test
@pantheon.drupal-7-sandbox.live

You're now ready to use these aliases to run all your favorite Drush commands on your remote sites!

# drush @pantheon.drupal-7-sandbox.dev status
 Drupal version         :  7.10                                                 
 Site URI               :  dev.drupal-7-sandbox.gotpantheon.com                          
 Database driver        :  mysql                                                
 Database hostname      :  50.57.231.252                                        
 Database username      :  pantheon                                             
 Database name          :  pantheon                                             
 Database               :  Connected                                            
 Drupal bootstrap       :  Successful                                           
 Drupal user            :  Anonymous                                            
 Default theme          :  waves                                                
 Administration theme   :  seven                                                
 PHP configuration      :  /srv/bindings/12bda2a361064ce680fb4218871a33e5/php.ini                                                   
 Drush version          :  4.5                                                  
 Drush configuration    :  /srv/bindings/12bda2a361064ce680fb4218871a33e5/drushrc.php                                               
 Drush alias files      :                                                       
 Drupal root            :  .                                                    
 Site path              :  sites/default                                        
 File directory path    :  sites/default/files                                  
 Private file           :  sites/default/files/private                          
 directory path         

More power to you!

Jan 13 2012
Jan 13

Some of the biggest questions that come up for developers with a next-generation platform like Pantheon are "how do I get my database synced?" or "what about migrating large amounts of files?"

Upload where you download

You've asked and we've listened. In the past week we've deployed a couple of updates to make moving your data around much easier, including new command-line capabilities for advanced users.

First, as the initial release of a series of updates we're making to the control panel, we've added the ability to directly import database dumps and file archives using the web interface. This gives people the ability to re-run the import process any time they like, for just the pieces they need. This feature is accessed in the same place as database/file downloads (see image at right).

We've also added an option to "Wipe content" for an environment. This will clear out the database and files area completely, and can be extremely helpful for developers working on installation profiles. After running this option you'll be back to a fresh install.php when you load the site.

Power Tools for Power Users
For many developers, a terminal is better than any web UI. As part of the work to improve our data-hauling workflow in the control panel, we've also been improving our command-line features. We pre-generate Drush alias files for all your sites — grab this from your account screen, add your SSH key and you're cleared to Drush.

You can now use Drush to directly import/export your database and files. For instance, getting your SQL credentials is as easy as:

drush @pantheon.my-site-name.dev sql-connect

That will give you access to the database instance for that site/environment. You can use this to pull or push database dump files directly, or to connect a local client, including GUI clients.

You can also now use the drush rsync command successfully:

drush rsync @pantheon.my-site-name.dev:sites/default/files/ ./local-files/

That will sync the remote files directory to your desktop. Reversing the arguments will do the reverse. When using rsync it's important to be careful to use that trailing slash or else you'll end up copying one directory into another, rather than syncing their contents.

There's more technical documentation of these capabilities in the wiki, including how to use rsync directly if you prefer. If you are truly savvy, you can even make an SFTP connection to transfer files!

We'll be expanding and improving these features in the coming weeks, as well as providing more customized copy/paste snippets directly from the dashboard to get you Drushing in no time. Let us know what you think and what else you'd like to see.

Also, we do stuff like this all day and it's rad. If you're into this kind of work, we're hiring.

Apr 25 2011
Apr 25

When a client asks for a way to pull content onto a site through RSS, the obvious choice is Drupal's Feeds module. I've never been really in love with this module, but it does the job well. We recently had an interesting case that required extending the normal functionality of Feeds to interact with custom content types.

The scenario was as follows:

  • The site lists many organizations and the events of those organizations.
  • Site users connect with and follow events and news from the various organizations.
  • Each organization may run several websites each having its own set of RSS feeds.
  • The organizations desired the ability to have their content added to their pages dynamically and without effort.
  • The user would visit the organization's page and see a list of news and articles from one or more of that organization's feeds.

The idea is fairly straightforward, but Feeds does not support this kind of association by default.

If you're not familiar with feeds here's a brief rundown of how it works:

  1. Feeds allows you to designate a content type to be the source of a feed, or it will create a feed content type for you.
  2. You then create new nodes of this content type, adding the URL of the feed to be imported to each node.
  3. At designated times, the feed importer fetches information from the designated RSS URL. The data is added to Drupal as nodes; you can choose what kind of node you would like the imported data to be.

This was a Drupal 7 site and our idea was to use references to reference each feed importer to the organization in question. For example the feed importers for developer.apple.com would reference the organization node for Apple as would the feed importer for news.apple.com while the feed importers for developer.microsoft.com and news.microsoft.com would reference the Microsoft organization node.

Make sense so far?

We then created a new content type for partner news called Partner News. To this we added the normal title, body, date, and ID information, and also another reference field for organization. What we really wanted was for the Partner News nodes to automatically inherit the reference field from the importer that created them. So, playing off the example above, we wanted each news item added to Drupal from the feed at news.apple.com to inherit the reference to the Apple organization node, so that we could later create a view on the Apple organization node displaying all the imported feed items associated with Apple.

The trick is that this function didn't exist. Lucky for us Feeds module provides some useful hooks to extend its base functionality.

The custom module we created is fewer than 50 lines if you ignore all the comments.

The following screen shots show a typical Feeds importer setup.

In this case I am attaching my feed importer to content type called importer. This means that when I create a new importer node, I will see that feeds has added a new field to the content type giving me a place to add the URL of the feed to be imported.

Under settings I designate that imported nodes should be Partner News nodes. That means that each item in an imported RSS feed will become its own Partner News node. I also set Feeds to update nodes rather than replacing them or creating new ones if it finds duplicate data.

Finally, we designate the mapping. The mapping tells Feeds which elements from an imported RSS item should be added to which part of the new node. Some of these are pretty obvious: we map title to title, date to date, description to body, and GUID to GUID. This last one (GUID) provides a unique identifier for updating feed data and is required if you want the nodes to update rather than duplicate.

But what we want doesn't exist. We want to see a source element that says something like “Feed Importer's Organization Reference” and a target that says something like “Organization Reference”, so that we can map from one to the other.

To do this start a custom module in the standard fashion (http://drupal.org/node/1074360). I'll call my module feedmapper. In feedmapper.module add the following function:

<?php
/**
* Implements hook_feeds_parser_sources_alter().
*/
function feedmapper_feeds_parser_sources_alter(&$sources, $content_type) {
  $sources['field_importer_reference'] = array(
    'name' => t('Organization\'s NID'),
    'description' => t('The node ID of the partner.'),
    'callback' => 'feedmapper_get_organization_nid',
  );
}

This adds a new source to the dropdown on the feed importer configuration.

The callback names a function that will actually handle the data processing. You should prefix it with the name of your module, but it can be named anything that makes sense. I haven't written this function yet; we'll get to that shortly.

Next I'll add a function that specifies a new target.

/**
* Implements hook_feeds_processor_targets_alter().
*/
function feedmapper_feeds_processor_targets_alter(&$targets, $entity_type, $bundle_name) {
  $targets['field_importer_reference'] = array(
    'name' => 'Organization Reference',
    'description' => 'the node reference for the partner',
    'real_target' => 'field_importer_reference', // Specify real target field on node. This is on the content type.
    'callback' => 'feedmapper_feeds_set_target',
  );
}

Note that the source and the target both reference the same field, field_importer_reference, because I am reusing the field across content types. If you had different field names for each content type, you would need to make the target and source point to the specific field names you created.

Now you can assign this mapping. Of course it doesn't do anything yet because we haven't written the appropriate callbacks.

The set method is actually pretty easy because Feeds handles that for us, provided we pass the correct data in the first place. We need to focus on retrieving the correct node ID from the feed importer. To do this we access a property of the feed object, feed_nid, which, as you can guess, returns the value of the feed's node ID. Once we have the nid, retrieving another field's data is fairly trivial; we just need to make sure we're dealing with the correct type of node, so we run a check on the node type and then get the field in question:

/**
 * Find the node ID of the feed source and use it to find the associated organization.
 */
function feedmapper_get_organization_nid(FeedsSource $source) {
  $nid = $source->feed_nid;
  $feed = node_load($nid);
  if ($feed->type == 'importer') { // This needs to be the name of the importer content type.
    $partner_nid = $feed->field_importer_reference;
  }
  else {
    $partner_nid = NULL;
  }
  return $partner_nid;
}

/**
 * Implements hook_feeds_set_target().
 */
function feedmapper_feeds_set_target($source, &$entity, $target, $value) {
  $entity->$target = $value;
}

And that's it. Now the creation of a feed importer is tied to an organization, and every time news is imported via Feeds, the incoming news item is automatically linked to its parent organization.

I've attached a working version of the module outlined here along with a Feature that should get you started.

Good luck.

May 31 2010
May 31

After being alerted to Google Fonts, the Google Font API, and the Google Fonts module in a recent Drupal Planet post (http://acquia.com/blog/robert/google-fonts-api-time-drupal-market-one-day), I dropped my lunch and said, "Rad!" Then I rolled up my sleeves and dropped a few fonts into my blog as easily as the dog drops logs on the lawn. What follows are usage notes and examples on getting this all going for yourself:

Here's a walkthrough:

  1. Download the Google Fonts Module from http://drupal.org/project/Google_fonts
  2. Enable your new modules (admin/build/modules/list). Make sure you enable both Google Fonts and Google Fonts UI.

  3. Enable your desired fonts on the Google Fonts admin page (admin/settings/Google_fonts). Here you will find a list of all the available Google Fonts with a checkbox to select each desired font. For my example I am enabling Droid Sans Mono and Lobster. When you have made your choices, click 'Save configuration'.

  4. Add the font to an element via CSS in one of two ways:
    • use the font directly in your stylesheet (.node h2 { font-family: "Droid Sans Mono"; }). I have not tried this because the next way is easier;
    • or add a rule via the 'Add rules' tab (admin/settings/Google_fonts/rules). Here you will find textareas for each font that you previously selected. Enter your CSS selectors here to get going. In my example I am selecting the dateline and the custom-made tag marquee view at the bottom of my page.

Now check out my handiwork in the dateline on all posts, and at the bottom of the page using the funky Lobster font on my Tag Marquee.

May 07 2010
May 07

I frequently use a third-party designer to help with the tedious task of going from PSD to final theme. If you haven't realized it yet, a lot of designers have problems setting up a local MAMP install with Drupal in which to fuck with CSS. To deal with this without giving the designer any command-line access, my shop uses what we call CZI on all Drupal installs. This stands for CSS Injector, Zen theme, IMCE; it allows a designer to upload images and apply CSS rules to a development site they have been given permissions on, using the Zen theme, which provides all the classes and IDs anyone would need.

After my shop, the designer, and the client are satisfied, CSS Injector and its external files become dead weight and need to be removed. Below I detail the process of using Zenophile (http://drupal.org/project/zenophile) to create a Zen subtheme in which to wrap up all your CSS Injector files:

Create a subtheme using Zenophile

  1. Enable module Zenophile
  2. create a new zen subtheme (site building > themes > create zen subtheme)
    • name appropriately according to site URL
    • set site directory to installs folder unless you want it available to other installs
    • create a fresh css file
    • Submit (you may need to chown the target directory to have appropriate permissions)
  3. disable module zenophile
  4. manage blocks for new theme (site building > blocks > list > newtheme)
    • save each block individually to have titles set appropriately
  5. Duplicate theme settings (site building > themes > configure > zen & newtheme)
    • make sure your newtheme has the exact same settings as zen
    • pay special attention to logo and favicon paths
    • save theme settings

pack up and move css injector files

  1. merge all css injector files ( site configuration > css injector )
    • copy all css injector files into single file
    • delete originals, leaving you with one merged file
  2. copy content of merged file into newtheme-fresh.css
    • search and replace any filepaths in css code

switch themes

  1. set newtheme as primary (site building > themes)
  2. remove last css_injector file (site configuration > css injector)
  3. test site

cleanup

  1. disable module css_injector
  2. uninstall modules zenophile and css_injector

Peace out and remember to get a good lunch.

Feb 24 2010
Feb 24

They will load faster, it's easier for deployment, and you'll be able to have them under version control.

We all know that you can load views from code, and it is even recommended. But what about Panels pages? It is also possible. And actually, it's quite easy.

Let's say that you have reached a beta stage for these pages, and you are ready to start having these under version control so you can sleep better at night.

1. Create the module that will load these pages

panels_default_pages_file_structure.png
We'll call our module ctools_defaults. On the right we see the structure and files that we will need in our module.
Inside of the pages directory, we'll be inserting each of our exported pages.

2. Implement hook_ctools_plugin_api()

This is done in order to let ctools know that our module has something to say. We'll do this inside ctools_defaults.module.

<?php
/**
 * Implementation of hook_ctools_plugin_api().
 */
function ctools_defaults_ctools_plugin_api($module, $api) {
  if ($module == 'page_manager' && $api == 'pages_default') {
    return array('version' => 1);
  }
}
?>

3. Implement hook_default_page_manager_pages()

The name of the file (this is very important) will be MODULENAME.pages_default.inc, in our case ctools_defaults.pages_default.inc. Because we are clever, we'll do a little trick inside it to make our lives easier and keep each page in its own separate file. This allows us to have independent version control per page, and also makes re-exporting/editing existing pages easier.

<?php
/**
 * Implementation of hook_default_page_manager_pages().
 */
function ctools_defaults_default_page_manager_pages() {
  $pages = array();
  $path = drupal_get_path('module', 'ctools_defaults') . '/pages';
  $files = drupal_system_listing('.inc$', $path, 'name', 0);
  foreach ($files as $file) {
    // Each exported PAGENAME.inc file defines a $page object.
    include_once $file->filename;
    $pages[$page->name] = $page;
  }
  return $pages;
}
?>


So what's special? Not much: this code looks for .inc files in the directory called pages within our module, and for each file found it tells Panels that we have a default page. Each new file we place in there will be loaded automatically.

4. Export our panel pages

The process is pretty simple, but in case someone gets lost, the button is right here when you are editing a panel page:
panel_page_export_button.png

Which will get you to a page with all the code ready to be copied, like:
panel_page_export_code.png

5. Create our PAGENAME.inc

We create a file called PAGENAME.inc inside our pages directory, within our module's directory.

<?php
// paste below the code exported from the panels UI
?>


Inside this file, we'll open PHP and paste the code copied in the previous screen. Just like that; don't be shy, and save the file.

6. Empty the cache

Panels caches (thankfully) the default pages that third party modules provide, so we must clear the cache when we create a new default page, and when we modify the code of an existing one.

Conclusion

This technique lets us sleep better at night. If someone ever touches a panel page and breaks things, we can always revert to the default code. We'll be able to create pages based on existing ones, if they are very similar, just by copying and modifying the original code, reducing our development time and improving our personal relationships as an unexpected bonus.

If you have any comments, you know what to do. If you see a mistake in my technique, please do let me know and I'll fix it right away.

Attachment Size ctools_defaults.tar.gz 926 bytes
Oct 10 2009
Oct 10

If you've ever used the Drupal Views module, chances are at some point you've needed to suppress any output until AFTER the user has made a selection from one of your exposed filters. Views actually DOES make this possible, but it's not exactly self-evident. I'm going to run you through a quick "howto" on this, as I'm sure many people have needed it at some point.

As I mentioned above, this is possible but not particularly self-evident. Views has a number of different "global" group items. The most common of these is probably the random sort. Within arguments you also have another member of the global group, the global NULL argument. This is basically a way of attaching your own rudimentary argument to a view. Through the use of the default value (as custom PHP) and custom validation (again through PHP) you could cook up just about anything.

With our global NULL argument in place, the following settings are about all we need to make this really work:

1.) Provide a default argument
2.) Argument type -> Fixed entry (leave the default argument field blank, as what gets passed is irrelevant to our needs; we simply need to make it to the next level, which is validation)
3.) Choose PHP code as your validator
4.) Check through the $view->exposed_input array. I recommend using the devel module's dsm() function here because it will respond over the AJAX that Views is using (unlike drupal_set_message()).
5.) Set "Action to take if argument does not validate:" to "Display empty text"

You can get as fancy in step 4 as you need, but it's just down to good old php if statements at that point.
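
As a sketch of step 4, the validation logic might look like the following. Note this is a hypothetical example, not code from the original post: the check is shown as a plain helper function, with $exposed_input standing in for $view->exposed_input and 'keyword' as an assumed exposed filter identifier; in the Views UI you would paste only the body of the check into the PHP validation box.

```php
<?php
// Hypothetical helper mirroring the validation check: fail validation
// (and thus suppress output) until the user has submitted a value for
// the 'keyword' exposed filter. 'keyword' is an assumed identifier.
function mymodule_exposed_filter_submitted(array $exposed_input) {
  // Validation passes only once a non-empty value was submitted.
  return !empty($exposed_input['keyword']);
}
```

Returning FALSE from the validator triggers the "Display empty text" action configured in step 5, so the view stays empty until a selection is made.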

I hope this howto helps other people.  We've found it rather useful, and since it's sort of arcane, I wanted to share it.

Thanks to Earl Miles (merlinofchaos) for pointing me in the right direction on this one!

Oct 06 2009
Oct 06

Creating Panels styles can be very powerful. You can define certain styles for your client to choose from, so they can decide what type of display the panel pane will have. This way you keep the workflow clean, your code under revision control, your themer keeps his sanity, and your conscience stays clear.

This article assumes you know about running Panels, and more or less what the nomenclature is. You should also know that Panels now uses ctools, which is primarily a set of APIs and tools to improve the developer experience.

So, what we'll be doing here is actually creating a ctools plugin to implement a new Panels style. Sorry if I'm confusing you already; don't worry, it's actually quite straightforward. We want to be able to do this:

choose_style.png
... and then this:
choose_style2.png

OK, now to the meat of it. We'll call our module ctoolsplugins.

1. Create a new module, and tell ctools about our plugins

What you need is very basic, an info file and a module file. So far, nothing new.

1.1 Declare our dependencies

We obviously need the ctools module on our site, and, well, the plugin wouldn't make much sense without panels and page_manager, so:

; $Id:
name = Ctools Plugins
description = Our custom ctools plugins
core = 6.x
dependencies[] = ctools
dependencies[] = panels
dependencies[] = page_manager
package = Chaos tool suite

1.2 Implement hook_ctools_plugin_directory to tell ctools about our plugins

In our module file, of course, we'll implement this function, which checks whether ctools is looking for style plugins and, if so, lets it know where ours are:

<?php
/**
 * Implementation of hook_ctools_plugin_directory().
 */
function ctoolsplugins_ctools_plugin_directory($module, $plugin) {
  if ($module == 'panels' && $plugin == 'styles') {
    return 'plugins/' . $plugin;
  }
}
?>

2. Prepare our file structure for the plugins, and create our plugin file.

This image is pretty self explanatory:

ctoolsplugin_directories.png
We'll call our plugin 'collapsible', so we create inside ctoolsplugins/plugins/styles/ a file called collapsible.inc.

3. Implement our style plugin in collapsible.inc

3.1 Define your style goals and necessities.

OK, here you should think about what you are going to do with the plugin.

  • Is it just for markup?
  • Will you be offering different options?
  • Will you be implementing javascript on it?

In our case, we'll take the opportunity to teach you another thing that ctools has: the collapsible div utility. Our style will basically convert any panels pane into a collapsible panels pane:
collapsed_pane.png

And because we are friendly developers (or would rather not be bothered ever again after developing it), we'll give the user a chance to configure whether they want the pane to start opened or closed. That means an extra settings form, so we can have this:
ctools_style_options.png

3.2 Implement hook_panels_style_info

The naming is very important here: it should be modulename_stylename_panels_styles. You basically return an array defining your style:

<?php
/**
 * @file
 * Definition of the 'collapsible' panel style.
 */

/**
 * Implementation of hook_panels_style_info().
 */
function ctoolsplugins_collapsible_panels_styles() {
  return array(
    'title' => t('Collapsible div'),
    'description' => t('Display the pane in a collapsible div.'),
    'render pane' => 'ctoolsplugins_collapsible_style_render_pane',
    'pane settings form' => 'ctoolsplugins_collapsible_style_settings_form',
  );
}
?>

'title' and 'description' are pretty self-explanatory.
'render pane' specifies the theme function we'll be providing for rendering the pane. Watch the naming convention.
'pane settings form' specifies the callback function that provides the extra settings form, which we'll use for our startup options. Watch the naming convention.

3.3 Define the settings form callback.

The name of the function will be what you specified in 'pane settings form' earlier. Just provide a new array inside $form for each setting you want the user to specify. See the FAPI documentation for reference.

<?php
/**
 * Settings form callback.
 */
function ctoolsplugins_collapsible_style_settings_form($style_settings) {
  $form['collapsed'] = array(
    '#type' => 'select',
    '#title' => t('Startup behaviour'),
    '#options' => array(
      0 => t('Start opened'),
      1 => t('Start collapsed'),
    ),
    // Default to 0 ('Start opened') when no setting has been saved yet.
    '#default_value' => isset($style_settings['collapsed']) ? $style_settings['collapsed'] : 0,
    '#description' => t('Choose whether you want the pane to start collapsed or opened'),
  );

  return $form;
}
?>

This is pretty straightforward: in our case we provide two options, one to start opened and one to start collapsed. If collapsed is chosen, the value will be 1.

3.4 Define the render callback function.

The name of the function will be what you previously specified in 'render pane'. This is just a theme function, and it is where you get the chance to alter what will be shown to the user when viewing the page.

<?php
/**
 * Render callback.
 *
 * @ingroup themeable
 */
function theme_ctoolsplugins_collapsible_style_render_pane($content, $pane, $display) {
  // Good idea for readability of code if you have a ton of settings.
  $style_settings = $pane->style['settings'];
  // We can do this because the only values possible are 0 or 1.
  $start_settings = $style_settings['collapsed'];
  $pane_content = $content->content;

  if ($content->title) {
    $pane_title = '<h2 class="pane-title">' . $content->title . '</h2>';
    // theme('ctools_collapsible', $handle, $content, $collapsed);
    $result = theme('ctools_collapsible', $pane_title, $pane_content, $start_settings);
  }
  else {
    // If we don't have a pane title, we just print out the content as
    // normal, since there's no handle.
    $result = $pane_content;
  }

  return $result;
}
?>

Important to note here is that our user's chosen settings live in $pane->style['settings']. In this example we check whether a title is available, and if so we use the ctools_collapsible theme function to get our collapsible panes. Otherwise we don't have a handle, and we just return the content as normal.

And that is it. Hope you found the article useful, and if you'd like me to write up some more articles about writing plugins for panels/ctools, drop a comment with your question/suggestion!

UPDATE: You can also provide a style plugin in your theme, as shown in this fine tutorial.

Attachment Size ctoolsplugins.tar.gz 1.3 KB
Jul 30 2009
Jul 30

CCK formatters are pieces of code that let you render a CCK field's content however you want. In Drupal 6 this basically means a theme function.

As an example, we will build a formatter for the field type 'nodereference'.
This type of field, which is part of the standard CCK package, allows you to "reference" a node inside another.
The formatter that nodereference has by default prints a standard link to the referenced node.

We are going to give users other options, allowing them to choose whether they want the link to open in a new window or, if they have the Popups module enabled, in a jQuery modal window.

Let's call our module 'formattertest'.

Step 1: Declare our CCK formatters

To do this, the only thing needed is to implement our hook_field_formatter_info() in our module:

<?php
/**
 * Implementation of hook_field_formatter_info().
 *
 * Here we define an array with the options we will provide on the
 * display fields page. The array keys will be used later in hook_theme()
 * and the theme_ functions.
 */
function formattertest_field_formatter_info() {
  $formatters = array(
    'newwindow' => array(
      'label' => t('Open in new window link'),
      'field types' => array('nodereference'),
      'description' => t('Displays a link to the referenced node that opens in a new window.'),
    ),
  );
  if (module_exists('popups')) {
    $formatters['popup'] = array(
      'label' => t('Open in a popup window'),
      'field types' => array('nodereference'),
      'description' => t('Displays a link to the referenced node that opens in a jQuery modal window.'),
    );
  }
  return $formatters;
}
?>

In this function, you return an array of arrays, one defining each formatter the module provides.
  • label: The name the user will choose on the display fields configuration page.
  • field types: An array of the CCK field types the formatter supports.

It's important to remember that the array keys you use, in our case 'newwindow' and 'popup', will be used later on to construct our hook_theme() and theme_ functions.
Note that for the second formatter, we first check whether the Popups module is enabled, and only then add the formatter array that makes use of it.

2. Implement hook_theme

In hook_theme() you also return an array of arrays, defining the theme_ functions that will take care of rendering the CCK field content. 'element' holds the content of the CCK field and is passed as the parameter to our theme function.

<?php
/**
 * Implementation of hook_theme().
 *
 * We declare our theme functions according to the array keys in
 * hook_field_formatter_info().
 */
function formattertest_theme() {
  $theme = array(
    'formattertest_formatter_newwindow' => array(
      'arguments' => array('element' => NULL),
    ),
  );
  if (module_exists('popups')) {
    $theme['formattertest_formatter_popup'] = array('arguments' => array('element' => NULL));
  }
  return $theme;
}
?>

'formattertest_formatter_newwindow' and 'formattertest_formatter_popup' will be used to build our functions in the next step.

3. Build our theme functions.

Remember that you can call dsm($element); (if you have the Devel module installed) to see what you have to play with ;)

<?php
/**
 * Theming functions for our formatters.
 *
 * And here we do our magic. You can use dsm($element) to see what you
 * have to play with (requires the Devel module).
 */
function theme_formattertest_formatter_newwindow($element) {
  $output = '';
  if (!empty($element['#item']['nid']) && is_numeric($element['#item']['nid']) && ($title = _nodereference_titles($element['#item']['nid']))) {
    $output = l($title, 'node/' . $element['#item']['nid'], array('attributes' => array('target' => '_blank')));
  }
  return $output;
}

/**
 * Theme function for popup links.
 */
function theme_formattertest_formatter_popup($element) {
  $nid = $element['#item']['nid'];
  // We want a unique id for each link so we can tell the Popups API to
  // only process those links. =)
  $link_id = 'popup-' . $nid;
  $output = '';
  if (!empty($nid) && is_numeric($nid) && ($title = _nodereference_titles($nid))) {
    $output = l($title, 'node/' . $nid, array('attributes' => array('id' => $link_id)));
  }
  popups_add_popups(array('#' . $link_id));
  return $output;
}
?>

In the first function, we start from the formatter that nodereference provides by default, and we just add target="_blank" so that the browser opens the link in a new window.

In the second function, we first store the nid of the referenced node in $nid, which we use to build the id we'll put on the link ($link_id). We need this so we can tell Popups to attach its JavaScript only to those specific links. That way we avoid having it scan the whole document for popup links, making the front end of our site faster.

Conclusion.

Imagine, for example, that your module also provides a default view. You could then use that view to pull out information based on the content of any CCK field that uses your formatter. No longer would you have to write complex, hard-to-maintain code in your template.php. You could just assign your formatter to any new field you create on any content type, reusing the same code.

Attachment Size formattertest.tar.gz 1.22 KB
Apr 01 2009

Developers are all familiar with the default behavior of the Drupal menu system's "local tasks" (aka tabs). These appear throughout most Drupal sites, primarily in the administration area, but also on other pages like the user profile.

Generally, developers are pretty good about creating logical local tasks, meaning only those menu items which logically live under another menu item (like view, edit, revisions, workflow, etc... live under the node/% menu item).

But sometimes, these tabs either don't really make sense as tabs or you simply want to have the flexibility of working with the items as "normal menu items", or those menu items which appear under admin/build/menu.

I recently wanted to move some of the tabs on the user profile page (user/UID) into the main menu so that I could include them as blocks.

For some reason, developers think the user profile page is a great place to put tabs for user related pages such as friendslist, tracker, bookmarks, notifications and so on. But these types of items are less a part of the user's account information than they are resources for specific users. Personally, I would not think to look at my account information on a site to find stuff like favorites or buddies. I'd expect those items to be presented somewhere much more obvious like a navigation block.

Initially, this may seem like a trivial task. My first thought was to simply use hook_menu_alter() and change the 'type' value of the menu item from MENU_LOCAL_TASK to MENU_NORMAL_ITEM. However, for reasons I don't understand well enough to explain in detail, this does not work.

In order to achieve the desired result, you must change the path of the menu item and incorporate the '%user_uid_optional' argument, replacing the default '%user' argument.

All very confusing, I know. Let's look at an example.

The notifications module (which provides notifications of changes to subscribed-to content) uses the user profile page rather heavily. I don't want its links there; I want them in the sidebar where users can always see them.

<?php
/**
 * Implementation of hook_menu_alter().
 */
function MODULENAME_menu_alter(&$callbacks) {
  // NOTIFICATIONS MODULE
  $callbacks['notifications/%user_uid_optional'] = $callbacks['user/%user/notifications'];
  $callbacks['notifications/%user_uid_optional']['type'] = MENU_NORMAL_ITEM;
  unset($callbacks['user/%user/notifications']);
  <SNIP>
}
?>

So I have moved the notifications menu into my own menu, changed the type, used %user_uid_optional instead of %user, and unset the original menu item.

This works fine except for the fact that you'll lose all of the other menu items under user/%user/notifications! You need to account for all menu items in the hierarchy to properly reproduce the tabs in the main menu system, so we add the following:

<?php
  $callbacks['notifications/%user_uid_optional/thread'] = $callbacks['user/%user/notifications/thread'];
  unset($callbacks['user/%user/notifications/thread']);
  $callbacks['notifications/%user_uid_optional/nodetype'] = $callbacks['user/%user/notifications/nodetype'];
  unset($callbacks['user/%user/notifications/nodetype']);
  $callbacks['notifications/%user_uid_optional/author'] = $callbacks['user/%user/notifications/author'];
  unset($callbacks['user/%user/notifications/author']);
?>

And of course, we don't want this code executing at all if our module is not enabled, so you'd want to wrap the whole thing in:

<?php
if (module_exists('notifications')) {
  <SNIP>
}
?>

Keep in mind that not all modules implement menu items using hook_menu(). It's becoming more and more common for developers to rely on the views module to generate menu items, and this is a wise choice. Menus generated using views (ala bookmark module) can be modified to get the desired result without any custom code.

Mar 06 2009

I have had a seemingly constant battle with WebDAV for some reason over the years. We use it to hold Kate's graduate work and temporary work storage, so it is constantly needed. To its credit, the problems I've had have never been of its own making, but rather a conflation of issues from installing so many services on top of each other (Subversion, Drupal, WebDAV). My most recent battle involved not being able to write to the WebDAV folder that existed as a subdirectory of a Drupal site. The READ, DELETE and other actions worked fine, but the PUT action failed because Drupal's mod_rewrite routine jumped in and stole the show. The result was a 403 error in my WebDAV client, and for all you Google searchers out there, the error log parroted out something like this:

[Fri Mar 06 10:19:56 2009] [error] [client XXX.XXX.XXX.XXX] Unable to PUT new contents for /dav/filename.txt.  [403, #0]
[Fri Mar 06 10:19:56 2009] [error] [client XXX.XXX.XXX.XXX] (2)No such file or directory: An error occurred while opening a resource.  [500, #0]

The solution was simple if not intuitive. Disable the mod_rewrite for this directory and all will be fixed. This can be done by adding an exclusion to the drupal .htaccess file in the mod_rewrite section or by disabling it completely in a .htaccess file that resides right in webdav. Both solutions are shown below but only one needs to be used:

Initial Organization and Setup

I have drupal installed to the root directory (httpdocs) of my virtual host such that www.example.com pulls up drupal. I have created a folder in this directory called "webdav" and mapped an alias from "webdav" to "dav". Like so:

Alias /dav /var/www/vhosts/example.com/httpdocs/webdav
<Location /dav/>
  DAV on
</Location>

This was necessary for some old MS Windows WebDAV clients, if I remember correctly, but the mod_rewrite part only cares about the functioning WebDAV directory I have defined. If you have an otherwise functioning WebDAV directory that doesn't use the <Location> tags in your vhost file, don't worry about it and just use that directory's name.

Parent Directory method

File: .htaccess of parent directory which is also the root directory in this case
# Rewrite URLs of the form 'x' to the form 'index.php?q=x'.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !=/favicon.ico
RewriteCond %{REQUEST_URI} !^/dav/(.*)$
RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

I only added the fourth condition to the ones already in the Drupal .htaccess file. Together, these four rewrite conditions basically say:

If the requested URI isn't a real file (line 1), and it isn't a real directory (line 2), and it isn't "/favicon.ico" (line 3), and finally (line 4) it doesn't (!) start with "/dav/" (^/dav/) followed by any number of other characters ( (.*) ) until the end ($), then rewrite it to index.php. Here is a useful cheatsheet for mod_rewrite that I like to use if your matching conditions vary from mine.

Webdav directory method

File: .htaccess in the webdav directory itself
#Disable mod_rewrite
RewriteEngine off

This one needs much less explanation, I think, and it is also slightly less insecure and messy, in my opinion. Either way you go, you'll have better luck than you did before, I promise. Good luck, and comment if you have a question or if this was useful to you.

Sep 01 2008

I eagerly awaited the videos for the Boston Drupalcon. They never seemed to appear. 

However, the videos for Szeged are already available here! The quality is very good. The audio is good, and while the presentation screens are fuzzy, you can get the general idea.

Very impressive. There are many hours of must watch video here if you were unable to attend the conference live.

Nov 25 2006
Nostradamus

I began the Devbee website back in March as a way to help others by way of documenting what I have learned about Drupal and also to drum up a little bit of business for myself. The content of this site is extremely targeted, and I don't ever expect to see more than a few hundred visits a day. This definitely does not reflect the expectations, or at least hopes, of most website owners. It's typically all about bringing in as many visitors as possible to generate money through advertising or purchases. Sites interested in bringing in large numbers of visitors typically do this by spending a lot of time focusing on "search engine optimization" (SEO). Absolutely nothing can drive traffic to a site like a top placement in the search results on one of the major search engines.

Back in the day (way back during the last millennium), all one needed to do was have a simple HTML page containing relevant words or phrases and he was fairly likely to make a decent showing in results pages. In fact, this is exactly how I shifted from studying literature to building websites. I built my first homepage (don't laugh!) for fun. It was found by an employer, and I got a cool job at a major search engine. Today, it is not so simple.

Fortunately for us, as Drupal users, we have a secret weapon, Drupal itself. Drupal SEO does not require any witchcraft or elaborate HTML trickery. It's simple, and in this article, I'm going to explain how I get consistent premium search placement with very little effort.

Stumbling upon Drupal SEO

Today I discovered that an article I wrote recently is the top result for the query "opcode cache" on Google. I almost feel guilty about it. There are countless pages out there with much more information on the topic than my article, yet I'm at the top. I guess I'll just have to deal with it.

This is not unusual. I find myself on "the first page" of many searches for terms relevant to my site. And when I'm not seeing a premium placement (top-ten), it's either because the search term is very broad (e.g. "Drupal") or there are simply much more relevant pages pushing my placement down. Just like the old days.

And more than half of my very modest traffic comes through these search results.

What's the Secret?

Now comes the mysterious part. I make no claims of expertise in the area of SEO. It's mostly voodoo as far as I'm concerned. The search engines are necessarily very secretive about their methods, trying to stay ahead of search engine spammers. And what works today may be detrimental tomorrow. What I'm going to describe below is entirely based on my own, very subjective, experience with various techniques and modules. These are the things that I believe are resulting in my accidental SEO success.

Drupal SEO

Drupal itself is well-known for its search-engine friendliness. Its markup is clean and standards-compliant. It creates all the tags the engines are looking for. And unlike so many other CMSs, Drupal creates search engine friendly URLs. Using Drupal is the first step in this process, but presumably you're already doing this, so let's move on.

The Right Path

Here's an example of the URL to a Joomla forum topic: http://forum.joomla.org/index.php/topic,65.0.html

And here's an example of a URL to a Drupal forum topic: http://drupal.org/drupal-5.0-beta1

Do you notice a difference? Can you tell me anything about the Joomla article without going to the page? In fact you can, sort of: you might conclude that the page covers a topic, a fact of dubious value. The URL really provides no useful information to you. Nor does it provide anything useful to a search engine. This is key. Unless you're searching for "index topic 65.0 html", this URL isn't going to help you find the information on this page.

Looking at the Drupal URL is another story. Based on that URL, one can assume that it has something to do with "drupal 5.0 beta1", and so can a search engine. If that's what you're looking for, this page will come up #1.

Most SEO "experts" agree that the search-engine-friendly URLs are critical to a page's search ranking.

Drupal allows you complete control of the path of any page. Creating short, clean and informative paths will improve your rankings. And the Pathauto module automates the process of generating relevant paths. But be extremely careful when experimenting with Pathauto, particularly on sites with existing content. Using Pathauto without first understanding how to use it properly can result in all of the URLs on your site changing, and thereby breaking existing links to your content. If you are going to introduce Pathauto on an existing site, play it safe and enable the Create a new alias in addition to the old alias option in Pathauto's settings. But keep in mind that having multiple URLs pointing to the same page on your site may result in a search engine penalty for "duplicate content".

Sitemaps

Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional meta data about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URLs in the site) so that search engines can more intelligently crawl the site.
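As a rough sketch (the URL and values here are hypothetical; see sitemaps.org for the full format), such a file looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/my-article</loc>
    <lastmod>2006-11-25</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

One <url> entry per page; everything but <loc> is optional.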

I've seen no solid evidence that implementing a sitemap will directly improve search rankings. However, even if search engines do not use your sitemap to adjust the ranking of your pages (which I doubt), it does help them index your site more efficiently, thereby increasing the likelihood of your pages being included in search results. This one's a no-brainer.

Sitemaps would be virtually impossible to maintain by hand. And this is where the excellent XML Sitemap (formerly Google Sitemap) module comes in. Installation is simple, and the module comes with reasonable default settings that don't require changing unless you want to fine-tune your sitemap. After you've installed and enabled this module, you'll need to tell search engines about your sitemap. At this point, I'm only familiar with Google Sitemaps, though other major companies are beginning to adopt this concept as a new open standard.

Leaving Comments

Another common method used by search engines to determine the importance of your pages is the number of other sites that link to them. A simple way to continually promote your site while helping improve your search rankings is to make regular comments on other sites like Drupal.org. Take the time to create an account on sites similar to yours and complete your public profile. Then leave useful comments where appropriate. Do not post comments simply to include a link back to your site. This is in very poor taste and may get you blocked. Instead, post comments where you have something to contribute to the topic being discussed. If you have nothing useful to add, don't post a comment. I'm a regular participant over at Drupal.org, and I'm confident this helps the "relevance" of my own site.

Page Title

By default, Drupal will use the title of your node as the page's HTML title (the bit that appears in the <title></title> tags of the HTML and shows up in the title bar of your browser). This is very reasonable behavior. However, if you want to give your page that extra SEO boost, you may want to allow for two different page titles: one that appears at the top of the page in <h1> tags and another that appears in the head of the HTML document in the <title> tag. The <h1> and <title> tags are both pieces that search engines consider when reviewing your page. If they are identical, you're missing out on an opportunity to further promote the page!

So how do you manage to control the <title> tag contents if Drupal automatically sets it based on the node title? The Page Title module does this. Install and enable this module, and you will see an additional field on the node edit form called "page title". Use this field to configure the phrase that you think will most likely attract users to the page. Use something eye catching and alluring, something the user will feel he has to read. If you're writing about an article you found on another site, don't title the page "cool link!", instead, something more enticing: "Fascinating study of the Indonesian spotted tadpole". Follow that up with a relevant <h1> title: "National Geographic looks at one of nature's most mis-understood wonders".
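Using the tadpole example, the resulting markup would look roughly like this (hypothetical values, of course):

```html
<head>
  <!-- Set via the Page Title module's "page title" field: -->
  <title>Fascinating study of the Indonesian spotted tadpole</title>
</head>
<body>
  <!-- The node title, rendered by the theme: -->
  <h1>National Geographic looks at one of nature's most mis-understood wonders</h1>
</body>
```

Two distinct, relevant titles give the search engines twice as much to chew on.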

The Prophecy

Search result placement was not a top concern of mine when I built this site. But it has become a bit of an obsession now. I have no need to drive thousands of visitors seeking information on opcode caching to my site, but hitting that number one position for a query is a bit of a rush! Thanks Drupal!

Lastly, I asked myself a question as I wrote this article: is there anything at all to what I'm saying? Well, I think there is, and I'm willing to make a bold prediction based on this belief. Within three days of posting this article, I believe it will appear in the top-ten search results for "Drupal SEO" on Google. If I'm right, that should serve as some pretty solid evidence that there's something to all this. There are currently 1,090,000 pages competing for placement on this results page. The odds of making it into the top ten by sheer luck are about 1 in 109,000.

And if I'm wrong, well, I can always come back and edit out this prediction to save face %^)

The Revelation

Update: Mon Nov 27 23:19:42 2006

A search for "Drupal SEO" now shows this article as the second result out of 1,080,000 pages. I come in just below an article on Drupal.org.

So as you now see, there is not a lot of work involved in getting premium search placement if you are using Drupal. Of course, the broader your topic, the more difficult it will be to hit the top ten. While you can almost certainly hit number one for surfers searching for a certain rare antiquity, you're less likely to see much success attracting surfers hunting for the term "sex".

Nov 16 2006

Until the mid-90s, spam was a non-issue. It was exciting to get email. The web was also virtually spam-free. Netizens respected one another and everything was very pleasant. Those days are long gone. Fortunately, there are some pretty amazing tools out there for fighting email spam. I use a combination of SpamAssassin on the server side and Thunderbird (with its wonderful built-in junk mail filters) on the desktop. I am sent thousands of spam messages a day that I never see thanks to these tools.

But approximately five years ago, a new type of spam emerged which exploited not email but the web. Among this new wave of abuse, my personal favorite, comment spam.

I love getting comments on my blog. I also like reading comments on other blogs. However, it's not practical to simply allow anyone who wants to leave a comment to do so, as within a very short period of time blog comments will be overrun with spam generated by scripts that exploit sites with permissive comment privileges. To prevent this, most sites require that you log in to post a comment. But this may be too much to ask of someone who just wants to leave a quick comment as they pass through. I often come across blog postings I would like to contribute to, but I simply don't bother because the site requires me to create an account (which I'd likely use only once) before posting. Not worth it. Another common practice is the use of "captchas", which require that a user enter some bit of information to prove they are human and not a script. This works fairly well; however, it is still a hurdle that must be jumped before a user can post a comment. And as I've personally learned, captchas, particularly image-based ones, are prone to problems that may leave users unable to post a comment at all.

As email spam grew, there were various efforts to implement similar types of protection, requiring the sender to somehow verify he was not a spammer (typically by resending the email with some special text in the subject line). None of these solutions are around anymore because they were just plain annoying. SpamAssassin and other similar tools are now used on most mail servers. Savvy email users will typically have some sort of junk mail filter built into their email client, or perhaps as part of an anti-virus package. And spam is much less of a nuisance as a result.

What we need for comment spam is a similar solution. One that works without getting in the way of the commenter or causing a lot of work for the blog owner. Turn it on, and it works. I've recently come across just such a solution for blogs which also happens to have a very nice Drupal module so you can quickly and easily put this solution to work on your own Drupal site.

Enter Akismet

It's called Akismet, and it works similarly to junkmail filters. After a comment (or virtually any piece of content) has been submitted, the Akismet module passes it to a server where it is analyzed. Content labeled as potential spam is then saved for review by the site admin and not posted to the blog.

Pricing

Akismet follows my absolute favorite pricing model. It's free for workaday Joes like me and costs money only if you're a large company that will be pumping lots of bits through the service. They realize that most small bloggers are not making any money on their sites, and they price their service accordingly. Very cool.

Installation

In order to use Akismet, you need to obtain a WordPress API key. I'm not entirely sure why, but it is free, and having a collection of API keys is fun. So get one if you have not already.

The Akismet Drupal module is appropriately named Akismet. It's not currently hosted on Drupal.org, but hopefully the author will eventually host it there as that is where most people find their Drupal modules. Instead, you will need to download the Akismet module from the author's own site. The installation process is standard. Unzip the contents into your site's modules directory, go to your admin/modules page and enable it. There is no need for additional Akismet code as all the spam checking is done on Akismet's servers.

Configuration

After installing Akismet, I was immediately impressed at how professional the module is. There were absolutely no problems after installation. Configuration options are powerful and very well explained. The spam queue is very nice and lets you quickly mark content as "ham" (ie not spam) and delete actual spam. As you build up a level of trust with the spam detection, you can configure the module to automatically delete spam after a period of time.

Spam filtering can be enabled on a per node type basis, allowing you to turn off filtering for node types submitted by trusted users (such as bloggers) and on for others (eg forums users). Comment filtering is configured separately.

Another sweet feature is the ability to customize responses to detected spammers. In addition to being able to delay the response by a configurable number of seconds, you can also configure an alternate HTTP response to the client, such as 503 (Service Unavailable) or 403 (Access Denied). Nice touch.

One small problem

I've only been working with Akismet for several days now. And I'd previously been using captcha, which I imagine got me out of the spammers' sights for a while (spammers seem to spend most of their effort on sites where their scripts can post content successfully). So far, Akismet has detected 12 spams, 2 of which were not actually spam. These were very short comments, and I imagine Akismet takes the length of the content into consideration. I assume that as the Akismet server processes more and more pieces of content, it will become more accurate at distinguishing spam from legitimate content. Each time a piece of flagged content is marked as "ham", it is sent back to Akismet, where it helps refine their rule sets and make the service more accurate.

Perhaps Akismet could provide an additional option that allows users to increase or decrease tolerance for spam. I would prefer to err on the side of caution and let comments through.

Nov 13 2006

PHP is an interpreted language. This means that each time a PHP generated page is requested, the server must read in the various files needed and "compile" them into something the machine can understand (opcode). A typical Drupal page requires more than a dozen of these bits of code be compiled.

Opcode cache mechanisms preserve this generated code in a cache so that it need only be generated a single time to serve hundreds or millions of subsequent requests.

Enabling opcode cache will reduce the time it takes to generate a page by up to 90%.

Vroom! PHP is known for its blazing speed. Why would you want to speed up your PHP applications even more? Well, first and foremost is the coolness factor. Next, you'll increase the capacity of your current server(s) many times over, thereby postponing the inevitable need to add new hardware as your site's popularity explodes. Lastly, high bandwidth, low latency visitors to your site who are currently seeing page load times in the 1-2 second range will be shocked to find your vamped up site serving up pages almost instantaneously. After enabling opcode cache on my own server, I saw page loads drop from about 1.5 seconds to as low as 300ms. Now that's good fun the whole family can enjoy.

Opcode Cache Solutions

There are a number of opcode caching solutions. For a rundown on some of them, read this article. After a bit of research and a lot of asking around, I concluded that Eaccelerator was the best choice for me. It's compatible with PHP5, is arguably the most popular of its kind, and is successfully used on sites getting far more traffic than you or I are ever likely to see.

Implementing Eaccelerator

This is the fun and exciting part. Implementing opcode cache is far easier than you might imagine. The only thing you'll need is admin (root) access to your server. If you're in a shared hosting environment, ask your service provider about implementing this feature if it is not in place already. These instructions apply to *nix environments only.

Poor Man's Benchmarking

If you would like to have some before and after numbers to show off to your friends, now is the time to get the 'before' numbers. Ideally, you will have access to a second host on the same local network as your server so that the running of the test does not affect the results. For those of us without such access, we'll just have to run the test on the actual webserver, so don't submit these results in your next whitepaper:

Apache comes with a handy benchmarking tool called "ab". This is what I use for quick and dirty testing. From the command line, simply type in the following:

ab -n 1000 -c 10 http://[YOURSITE.COM]/

Here is a portion of the results I got on my own test:

Concurrency Level:      10
Time taken for tests:   78.976666 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      13269256 bytes
HTML transferred:       12911899 bytes
Requests per second:    12.66 [#/sec] (mean)
Time per request:       789.767 [ms] (mean)
Time per request:       78.977 [ms] (mean, across all concurrent requests)
Transfer rate:          164.07 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    7   51.3      0     617
Processing:    77  725 1704.4    300   21390
Waiting:        0  673 1697.5    266   21383
Total:         77  732 1706.2    307   21390

Percentage of the requests served within a certain time (ms)
  50%    307
  66%    468
  75%    625
  80%    639
  90%    805
  95%   3808
  98%   6876
  99%   8529
 100%  21390 (longest request)

The single most useful number is 'Requests per second', which in my case was 12.66.

Download, Build and Install

First, download the source code.

Copy it to your server and run the following (I'm assuming you have gcc on your system; if not, install it):

tar jxvf eaccelerator-0.9.5.tar.bz2
cd eaccelerator-0.9.5
phpize
./configure
make
make install

Configure Apache and Restart

If you have an /etc/php.d directory, create the file /etc/php.d/eaccelerator.ini for your new settings. Alternatively, you can put them in your php.ini file. Your configuration should look something like this:

zend_extension="/usr/lib/php/modules/eaccelerator.so"
eaccelerator.shm_size="32"
eaccelerator.cache_dir="/var/cache/eaccelerator"
eaccelerator.enable="1"
eaccelerator.optimizer="1"
eaccelerator.check_mtime="1"
eaccelerator.debug="0"
eaccelerator.filter=""
eaccelerator.shm_max="0"
eaccelerator.shm_ttl="0"
eaccelerator.shm_prune_period="0"
eaccelerator.shm_only="0"
eaccelerator.compress="1"
eaccelerator.compress_level="9"
eaccelerator.log_file = "/var/log/httpd/eaccelerator_log"
; eaccelerator.allowed_admin_path = "/var/www/html/control.php"

Adjust values according to your particular distribution. For more details on configuring eAccelerator, see the settings documentation.

See eAccelerator in Action

The eaccelerator.allowed_admin_path setting, if enabled, should point to a web-accessible directory containing a copy of control.php (which comes with the eAccelerator source code). Edit this script, changing the username and password. You can then access this control panel and see exactly what eAccelerator is caching.

See the Results

After enabling eAccelerator on devbee.com, I ran my benchmark again, and here are the results:

Concurrency Level:      10
Time taken for tests:   10.472143 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      13129000 bytes
HTML transferred:       12773000 bytes
Requests per second:    95.49 [#/sec] (mean)
Time per request:       104.721 [ms] (mean)
Time per request:       10.472 [ms] (mean, across all concurrent requests)
Transfer rate:          1224.30 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0    0.1      0       4
Processing:    20  103   52.1     96     345
Waiting:       17   92   50.1     83     342
Total:         20  103   52.1     96     345

Percentage of the requests served within a certain time (ms)
  50%     96
  66%    122
  75%    137
  80%    147
  90%    176
  95%    201
  98%    225
  99%    248
 100%    345 (longest request)

We are now serving 95.49 requests per second. That's roughly a 7.5-fold jump in server capacity, a 654% increase. Had I been able to run the tests from another machine on the same network, I believe the numbers would be even more dramatic.
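As a quick sanity check on that figure, the speedup can be computed directly from the two throughput numbers reported by ab (percent increase is (new − old) / old × 100). A one-liner, using awk for the floating-point arithmetic:

```shell
# Throughput before (12.66 req/s) and after (95.49 req/s) the opcode cache.
awk 'BEGIN { b = 12.66; a = 95.49; printf "%.2fx throughput, %.0f%% increase\n", a/b, (a-b)/b*100 }'
# prints: 7.54x throughput, 654% increase
```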

May 03 2006

One of the great features of Drupal is its ability to run any number of sites from one base installation, a feature generally referred to as multisites. Creating a new site is just a matter of creating a settings.php file and (optionally) a database to go with your new site. That's it. More importantly, there's no need to set up complicated Apache virtual hosts, which are a wonderful feature of Apache but can be very tricky and tedious to configure, especially if you're setting up a large number of subsites.
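In concrete terms, adding a subsite amounts to creating a hostname-keyed directory under sites/ with its own settings.php. A sketch of that step, where the installation path and the foo.example.com hostname are placeholder assumptions:

```shell
# DRUPAL_ROOT is a placeholder for wherever your Drupal code lives.
DRUPAL_ROOT="${DRUPAL_ROOT:-/var/www/html/drupal}"

# Drupal matches subsites by hostname; start from a copy of the default settings.
mkdir -p "$DRUPAL_ROOT/sites/foo.example.com"
cp "$DRUPAL_ROOT/sites/default/settings.php" \
   "$DRUPAL_ROOT/sites/foo.example.com/settings.php"
# Then edit the copy so it points at the new site's database.
```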

The catch is logging: because every subsite is served by the same virtual host, Apache writes all of their access logs to a single file, which makes per-site log analysis awkward. No worries, there is a solution.

Create a new LogFormat

Copy the LogFormat of your choice, prepend the HTTP host field, and give it a name:

LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vcombined 

Get the script

Next, download the attached script (split-logfile) and store it somewhere like /usr/bin (don't forget to chmod 755 that baby!)
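The script itself isn't reproduced here, but the idea is simple: read each piped log line, peel off the leading host field that the vcombined format prepends, and append the rest to a per-host log file. A minimal sketch of that logic (the attached script may differ in its details; the LOGDIR path is an assumption):

```shell
#!/bin/sh
# Each incoming line looks like:
#   www.example.com 1.2.3.4 - - [date] "GET / HTTP/1.1" 200 ...
# The first field is the %{Host}i value; the remainder is a normal combined entry.
LOGDIR="${LOGDIR:-/var/log/httpd}"
while read -r host rest; do
    echo "$rest" >> "$LOGDIR/${host}_access_log"
done
```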

Now, tell Apache to pipe log entries to your script rather than writing them directly to disk:

CustomLog "| /usr/bin/split-logfile" vcombined 

Restart Apache

/etc/rc.d/init.d/httpd restart

That's it.

Naturally, you may have to modify split-logfile if you don't store your logfiles in the default location.

Apr 21 2006

This article explains a practical implementation of a technique outlined in the article "Sharing Drupal tables between databases using MySQL5 Views".

Problem

You have multiple (multisite) Drupal sites and you would like to manage the content for all of these sites through a single interface. Depending on the nature of a given piece of content, you may want the content published on one, several or all of your subsites, but you do not want to have to create copies of the same content for each site.

Solution

Taxonomy plus MySQL5 views. (NOTE: this solution will not work with versions of MySQL prior to 5.)

Assuming you have your subsites properly set up and running, the first step is to create a special vocabulary which you will use to target content.

Go to [your site's baseurl]/admin/taxonomy/add/vocabulary and create a vocabulary. We'll call it simply "sites".

Next, go back to your taxonomy page (/admin/taxonomy) and select "edit vocabulary" for the "sites" vocabulary.

Add a name for each of the subsites you would like to manage. For our example, we'll have two subsites, "foo" and "bar", and one master site, "master".

Now add at least three pieces of test content: tag one with foo, one with bar, and one with both.

Next, we're going to create a node view for each of our subsites that we'll use to replace the actual node table.

The SQL is as follows:

CREATE VIEW [subsite, e.g. "foo"]_node AS
SELECT n.*
FROM node n, term_data td, term_node tn, vocabulary v
WHERE v.name = '[vocabulary name, e.g. "sites"]'
  AND td.vid = v.vid
  AND td.name = '[subsite vocab term, e.g. "foo"]'
  AND td.tid = tn.tid
  AND n.nid = tn.nid;

Because the terms that serve as our subsite labels may very well exist within other vocabularies, we also need to join on the vocabulary table to ensure our solution works reliably.

Finally, we need to have our subsites use the views we have created instead of our master nodes table, which only the "master" site will have access to directly.

In your Drupal sites directory, you should have directories that correspond to each of your Drupal sites (both master and subsites). Edit the settings.php file for each of your subsites, and use the $db_prefix variable to point the site at your view. So sites/foo.example.com/settings.php would contain the following:

$db_prefix = array(
  'node' => 'foo_',
);

At this point, you'll want to disable content creation from within each of your subsites, which you can do from the admin/access page. If you attempt to create content from within a subsite, you'll likely get a 'duplicate key' error.

I hope that explanation is clear. These articles are written rather hastily, so if you have questions or suggestions regarding this solution, please leave a comment.
