May 12 2010

It has been a bit of an adventure getting the SSO module to work with a site that already utilizes the Domain Access module. I got this all working yesterday... hopefully this clarifies things for people. I got a lot of help by reading the UPGRADE.txt that comes with SSO and by reading this issue: http://drupal.org/issues/595802

May 12 2010

I've been building the community pages for a client's site where one of the most popular areas is the forum. The design called for the forum listing to be the default menu tab with a second tab for searching the forum. I wanted to use a view with an exposed filter to search the site index and limit the results to this specific forum.

In Drupalese the default tab is the MENU_DEFAULT_LOCAL_TASK in the menu definition, and any additional tabs are MENU_LOCAL_TASK(s).

The best way to see how tabs work is to look at the core node module. The default tab is the node "view", but you only see that tab if you can also see the "edit" tab or any other tabs you have access to. These other tabs are the local tasks. Node sets this up in hook_menu() by making the "view" not only a MENU_DEFAULT_LOCAL_TASK, but also a MENU_NORMAL_ITEM, which is how a "normal" page is indicated in Drupal's menu system. The "edit" tab is a MENU_LOCAL_TASK.

The problem with tabs and the forum module

The forum module uses a hierarchical taxonomy to categorize forum containers, forums and forum topics. Let's ignore forum containers. When you navigate to a forum (say example.com/forum/11) you're really trying to navigate to a taxonomy term page. But taxonomy offers a hook, hook_term_path(), that lets other modules take over that navigation. Forum uses this hook to direct the term's path to a path created by forum's hook_menu() implementation, which then returns a listing of topics (topics are nodes, but that's not important here).

The point is that the forum listing page isn't a MENU_NORMAL_ITEM, it's a MENU_SUGGESTED_ITEM. This means (as far as I can tell) that the menu path is all ready to go, but only after you've created a forum (remember, it's really just a taxonomy term). And there is no default local task defined. The bottom line is that we can't simply give a views tab display the path forum/11/search and expect the tab to show up. First we need to alter the forum module's menu.

This all goes in a custom module, here named custom_forum.

<?php

/**
 * Implementation of hook_menu_alter().
 */
function custom_forum_menu_alter(&$items) {
  $items['forum/11'] = array(
    'page callback' => 'forum_page',
    'page arguments' => array(1),
    'access arguments' => array('access content'),
    'type' => MENU_NORMAL_ITEM,
    'module' => 'forum',
    'file' => 'forum.pages.inc',
  );
  $items['forum/11/view'] = array(
    'title' => 'Forums',
    'type' => MENU_DEFAULT_LOCAL_TASK,
    'weight' => -10,
  );
}
?>

You can see that first we duplicate the forum item at forum/11 and make it a MENU_NORMAL_ITEM, then, just like in the node module, add a forum/11/view item as the MENU_DEFAULT_LOCAL_TASK. The weight is set low so it's always on the left edge of the tabs.

Note that the forum ID (really a term ID in the forum vocabulary) is hard-coded here. I could not get this to work with wildcards, but I didn't need to for this site, which will only ever have a couple of forums. I bet wildcards could be made to work by first unsetting $items['forum'] in the hook_menu_alter().

Now, when the views display (as a menu normal tab) is added to the existing path forum/11/search, Drupal knows what to do, and both tabs appear. This all works great with path aliases too.

Another note: This client wanted to use the forumthread module, which makes Drupal forums look like it's 1995 instead of 2003. So instead of altering forum's menu, I altered forumthread's menu, but I did not show that here.

May 11 2010

We are in the middle of our development cycle on a Brightcove module for Drupal and are currently discussing the following issue: the module would be even cooler if it provided Imagecache support for thumbnails and still images coming from the Brightcove Media API (still images are single frames from the video).

Basic Imagecache support is very easy; it's a matter of creating a simple formatter and a field in Views. Keeping that in mind, the situation gets much more complicated when you have to do this with external images - our video thumbnails are sourced from Brightcove, and the Imagecache module does not support external images. You cannot do something like this:

$image = theme('imagecache', 'some_preset', 'http://www.dynamiteheads.com/file.jpg');

In order to do that, you have to download the image to the files/ directory first, then point Imagecache to it.
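A minimal sketch of that approach using Drupal 6 file API calls (the helper name and the brightcove subdirectory are made up for illustration): fetch the remote image into the files directory once, then hand the local path to Imagecache.

<?php
/**
 * A hypothetical helper: copy a remote image into files/brightcove/
 * so that imagecache presets can be applied to the local copy.
 */
function mymodule_local_copy($url) {
  $directory = file_directory_path() . '/brightcove';
  file_check_directory($directory, FILE_CREATE_DIRECTORY);
  $destination = $directory . '/' . basename(parse_url($url, PHP_URL_PATH));
  if (!file_exists($destination)) {
    $response = drupal_http_request($url);
    if ($response->code == 200) {
      file_save_data($response->data, $destination, FILE_EXISTS_REPLACE);
    }
  }
  return $destination;
}

// Usage: theme the cached local copy instead of the remote URL.
// $image = theme('imagecache', 'some_preset', mymodule_local_copy('http://www.dynamiteheads.com/file.jpg'));
?>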

There is an even bigger problem - what if the image changes? We should be able to detect a filename change and reload the image, but that requires more checks and might be a performance hit.

What is your opinion about this? Give us your ideas - we are very happy to listen to the Drupal community!


May 11 2010

I've added some new code to my Drupal IRC bot.module and Druplicon, the official Drupal IRC bot, is now running with these features. First up is feed aggregation: bot.module now integrates with aggregator.module to provide IRC announcements of new feed items. Feeds can be configured per channel or the items can be sent to multiple channels at once. If you run a channel on Freenode that currently has Druplicon and you'd like it to announce relevant news as it happens, don't hesitate to let me know.

A new bot_potpourri.module has been added, and its first feature is timezone display and conversion. From the integrated help: Display timezones with "BOTNAME: timezone BST". Convert timezones with "tz 10AM MST to EST" or "tz 14:27 UTC in Europe/London". Timestamps are allowed if combined and with no spaces: "tz 2010-10-23T10:00 EST to UTC". All returned dates are DST-aware.

Some examples:

<Morbus> find out what time it is somewhere:
<Morbus> Druplicon: tz EST
<Druplicon> 2010-05-11 12:05 EDT.

<Morbus> or convert from one timezone to another:
<Morbus> timezone 14:27 EST to Europe/London
<Druplicon> 2010-05-11 19:27 BST.
<Morbus> tz 10 A.M. America/New_York to MST
<Druplicon> 2010-05-11 08:00 MDT.
<Morbus> tz 6 pm EST in EST
<Druplicon> 2010-05-11 18:00 EDT.

<Morbus> or dates in the future. note that DST is always considered:
<Morbus> tz 2010-10-23T10:00 EST in UTC
<Druplicon> 2010-10-23 14:00 UTC.
<Morbus> timezone 2010-01-23T10:00 EST in UTC
<Druplicon> 2010-01-23 15:00 UTC.

I hope to squeeze in some more features later this week too.

May 10 2010

In environments where there are many databases running on the same machine (e.g. shared hosting), or in high-traffic environments (e.g. enterprise sites), it is a common problem that unterminated connections to the database linger around indefinitely until MySQL starts spitting out the "Too many connections" error. The fix for this is to decrease wait_timeout from the default of 8 hours to something more in the range of 1-3 minutes. Make this change in your my.cnf file. This means the MySQL server will terminate any connection that has been sitting around doing nothing for 1-3 minutes.
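For reference, a minimal my.cnf sketch of the idea (the 90-second value is illustrative; pick something in the 1-3 minute range that suits your traffic):

[mysqld]
# Drop idle connections after 90 seconds instead of the 8-hour default.
wait_timeout = 90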

But this can lead to problems on the other side, where MySQL now terminates connections that are merely idle but will be called on to do something later. This results in the "MySQL has gone away" error. The problem is unlikely to happen with stock Drupal, but it is much more frequent with CiviCRM (or any other time you connect to more than one database). The issue is that sometimes an intensive process happens in one database, then action needs to return to the other database - but, oops, MySQL has terminated that connection. This is most likely to happen in anything that takes a long time, like cron, contact imports, deduping, etc.

There's a little-known trick for changing wait_timeout on the fly. You can do this because wait_timeout is both a global and a per-session variable: each connection starts with the global wait_timeout value, but the session value can be changed at any time, affecting only the current connection. You can use this little function to do it:

<?php
/**
 * Increase the MySQL wait_timeout.
 *
 * Use this if you are running into "MySQL has gone away" errors. These can
 * happen especially during cron and anything else that takes more than 90
 * seconds.
 */
function my_module_wait_timeout() {
  global $db_type, $db_url;

  watchdog('my_module', 'Increasing MySQL wait timeout.', array(), WATCHDOG_INFO);
  if (is_array($db_url)) {
    $current_db = db_set_active();
    foreach ($db_url as $db => $connection_string) {
      db_set_active($db);
      db_query('SET SESSION wait_timeout = 900');
    }
    if ($current_db) {
      db_set_active($current_db);
    }
  }
  else {
    db_query('SET SESSION wait_timeout = 900');
  }

  if (module_exists('civicrm')) {
    civicrm_initialize();
    require_once('CRM/Core/DAO.php');
    CRM_Core_DAO::executeQuery('SET SESSION wait_timeout = 900', CRM_Core_DAO::$_nullArray);
  }
}
?>

Then call this function before anything that might take a long time begins.
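As a concrete (hypothetical) usage sketch, you could bump the timeout at the start of your module's cron run, before the long-running work kicks off:

<?php
/**
 * Implementation of hook_cron().
 *
 * A minimal usage sketch for an imaginary my_module: raise the session
 * wait_timeout before doing anything slow.
 */
function my_module_cron() {
  my_module_wait_timeout();
  // ... long-running imports, deduping, etc. go here ...
}
?>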

There's also an issue in the CiviCRM issue queue to make CiviCRM do this before any of its long internal operations.

May 10 2010

Over the last couple of years we have integrated a lot of interactive Flash content on our sites.
Flash gives the end user a level of interactive richness that is nearly impossible to reproduce with the other tools we have today.
The emergence of HTML5 is very exciting and we can't wait for it to gain more traction - but with all that said, in its current state it is still hard to deliver cool solutions to clients who require logic or UI beyond the current capabilities of jQuery UI.

To integrate Flash/Flex content we use the highly recommended swftools module. It gives us the ability to add Flash content the "Drupal way" - simply by calling the swf() function.

With our goal of extending our capabilities to the mobile industry, we wanted to make sure we could offer a nice user experience on devices that don't support Flash. This is easily achievable using a built-in feature of swftools that allows you to replace the Flash content with supported HTML markup (like an image or an HTML5 canvas).

Here is how we do that:

<?php
// Insert the replacement markup here.
$flash_replacement = '<img src="http://www.linnovate.net/blog/swftools-replacement/image-replacement"/>';
// Insert the path of the SWF here.
$file = 'http://...';
$args = array(
  'params' => array('width' => '100%', 'height' => '100%'),
  'othervars' => array(
    'html_alt' => $flash_replacement,
  ),
);
print swf($file, $args);
?>

As you can see this is pretty straightforward - you just need to add a value to the 'html_alt' key inside the 'othervars' array, and that's about it.
You can insert any markup you like (even the output of a theme function).

Check out our home page flash banner as an example.

May 10 2010

Update: Note that Drupal 7 already gives you useful body classes out of the box.

I recently developed a Drupal site where each page in the site was based on a custom content type and needed some very specific theming. Although my custom theme was based on 960.gs, I decided to borrow from the zen theme its custom body class implementation to give more meaningful CSS classes to leverage for theming.

What's a body class?

The body classes rendered on the <body> tag in Drupal look something like this:

<body class="front page-node no-sidebars section-blog">

That's nice, but to theme specific sections of a site as I mentioned above, I needed a little more in the body class. You can do this by adding some PHP code to your theme's template.php file. If your theme does not have this file you can create it at /sites/all/themes/[your theme]/template.php. Note that if you are using the Zen theme or a Zen sub-theme, you do not need to do this. Also, replace any instances of "zen" below with the name of your theme.

Add this php code to your template.php file:

function phptemplate_preprocess_page(&$vars, $hook) {

  // Classes for body element. Allows advanced theming based on context.
  // (home page, node of certain type, etc.).
  $body_classes = array($vars['body_classes']);
  if (!$vars['is_front']) {
    // Add unique classes for each page and website section.
    $path = drupal_get_path_alias($_GET['q']);
    list($section, ) = explode('/', $path, 2);
    $body_classes[] = zen_id_safe('page-' . $path);
    $body_classes[] = zen_id_safe('section-' . $section);
    if (arg(0) == 'node') {
      if (arg(1) == 'add') {
        if ($section == 'node') {
         // Remove 'section-node'.
          array_pop($body_classes);
        }
         // Add 'section-node-add'.
        $body_classes[] = 'section-node-add';
      }
      elseif (is_numeric(arg(1)) && (arg(2) == 'edit' || arg(2) == 'delete')) {
        if ($section == 'node') {
         // Remove 'section-node'.
          array_pop($body_classes);
        }
         // Add 'section-node-edit' or 'section-node-delete'.
        $body_classes[] = 'section-node-' . arg(2);
      }
    }
  }
   // Concatenate with spaces.
  $vars['body_classes'] = implode(' ', $body_classes);
}

function zen_id_safe($string) {
  if (is_numeric($string{0})) {
    // If the first character is numeric, add 'n' in front
    $string = 'n'. $string;
  }
  return strtolower(preg_replace('/[^a-zA-Z0-9_-]+/', '-', $string));
}

Once you add this you just need to modify the <body> tag in your page.tpl.php file. Change it to read:

<body class="<?php print $body_classes; ?>">

Once you have done this you should clear your site cache and voila, you now have meaningful advanced body classes being rendered on every page in your site. The possibilities are endless of what you can do with these advanced classes.

For example this page now shows:

<body class="not-front not-logged-in page-node node-type-blog one-sidebar sidebar-left
page-blog-custom-body-class-php-advanced-theming-and-css-drupal-6 section-blog">

Theoretically I could custom theme a blog page by using code such as this:

.node-type-blog {
/** your custom CSS here **/
}

It's now fairly granular for theming, so you can get really specific about what and how you want to theme.

Update: 01-03-2011

If you are using Drupal's Domain Access Module, you can also render domain body classes with the following code:

global $_domain;
$body_classes[] = 'domain-' . $_domain['domain_id'];

Insert this just after this line of code:

$body_classes = array($vars['body_classes']);

The body class output will simply be of the format domain-numerical_id, representing the domain you are currently on, which you can leverage in your CSS and theming. You could also use the domain site name via check_plain($_domain['sitename']).


May 09 2010
tom

When working with AJAX, it's a common thing to inject dynamic content into an already loaded page. The idea is that it's possible to allow the user to interact with a page in a way that doesn't constantly require loading a new page like clicking on a normal link does. This is nothing new; people have been doing it for years. With jQuery, this is often done by using the .load() or .ajax() functions, which can be used to load the content of a URL into a div or some other page element. This article looks at a technique to allow Drupal to serve content in an AJAX-friendly way.

What do I mean by 'an AJAX friendly way'? Well, consider your typical page that is served by Drupal. It probably contains a header, footer, navigation, references to CSS and JavaScript files - basically, a complete HTML page. That's all fine, but how often are you going to want to insert an entire HTML page into a div? Probably never, or at least very rarely. It is more likely that you are just interested in a small fragment of the page, perhaps the output of a View or a Panel.

jQuery makes it very easy to select a subset of the returned HTML, and inject only that into your page. For example:

// Drupal behavior to attach our AJAX click handler to links with
// the .our-links class.
Drupal.behaviors.OurModule = function(context) {
  $('a.our-links:not(.our-module-processed)', context)
    .addClass('our-module-processed').each(function() {
      $(this).click(function(e) {
        // Remove the default click handler.
        e.preventDefault();

        // Get the URL of the clicked link.
        var toLoad = $(this).attr('href');

        // Now load the content.
        xhr = $.ajax({
          url: toLoad,
          success: function(data) {
            // Inject the new content into <div id="our-div"></div>.
            $('#our-div').html($('#our-new-content', data));

            // Call attachBehaviors to ensure that any behaviors in the
            // injected content are fired.
            Drupal.attachBehaviors(context);
          }
        });
      });
    });
};

I'm not going to go into the full details of what the above is doing - I'll save that for another day. But the bit that is relevant to this article is the following line.

$('#our-div').html($('#our-new-content', data));

After loading the new content, we're using the .html() method to inject the new content into the div with the id 'our-div'. Of special note is that we are actually only injecting a small fragment of the newly loaded page - namely, the content of the #our-new-content div. This is the div that contains the content we are interested in, and it is this HTML that gets inserted into our page.

So, what is the problem with that? Well, for simple purposes, nothing really. It has achieved exactly what we set out to do, which was to load a new URL, and grab a little fragment of the content from that URL and insert it into the current page. Pretty cool I'd say.

NOTE: You can see an example of this technique in our portfolio. All that fancy AJAX stuff - was done in exactly this way.

However, I'm not going to stop there since this article was meant to be about getting Drupal to serve up AJAX friendly content. By this, I mean that I want Drupal to serve up only the content I'm interested in - no header, no footer, no menus, no surrounding page - just the content of a view.

As an example, let's say that I have a view that outputs a simple unordered list of taxonomy terms. This view has a page display that is accessible at the URL /lists/taxonomy.

Now, if I visit /lists/taxonomy, what I get is a complete HTML page, with my unordered list as the main bit of content. So, how do I get Drupal to serve this content without any of the surrounding noise? Well, by doing the following:

Create a new page.tpl.php file

The page.tpl.php file is the template file that is responsible for outputting most of the standard HTML elements of a page. Take a look at the excellent Zen theme's page.tpl.php file. It should be pretty easy to see what is going on there - it outputs the <head> and <body>, prints the sidebars, the navigation, the footer - everything that we don't want in this case!

So what we need is a new template file that is dedicated to our purpose. I usually call this template page-ajax.tpl.php. The content of this file is about as basic as it gets:

<content>
  <?php print $content; ?>
</content>

It simply prints the $content variable, which will hold the actual page content - in our case, the output of our view.

Set up an AJAX page handler

How do we make /lists/taxonomy use this new AJAX page template rather than the standard page.tpl.php file? You can do this using the preprocess function, preprocess_page() in your theme's template.php file.

/**
 * Override or insert variables into the page templates.
 *
 * @param $vars
 *   An array of variables to pass to the theme template.
 * @param $hook
 *   The name of the template being rendered ("page" in this case.)
 */
function ourtheme_preprocess_page(&$vars, $hook) {
  if (isset($_GET['ajax']) && $_GET['ajax'] == 1) {
    $vars['template_file'] = 'page-ajax';
  }
}

In this example, we are telling Drupal to use the page-ajax.tpl.php file if the URL has ?ajax=1 in the query string.

So, after adding this to the template.php file and clearing the theme registry, I can now request the URL /lists/taxonomy?ajax=1 and our new page template is used to render the content - giving us a very simple page output.

Tidying up the view output

Views is great, there are no two ways about it. However, Views does have a tendency to create quite a lot of HTML markup. Usually this is a good thing, as it can be really useful when theming. But for our purposes, all we want is a simple HTML list - nothing more, nothing less.

To do this, we can override some of the views templates. In our example, we just want an unordered list, but what we actually get is something more like this:

<div class="view view-list view-id-list view-display-id-page_1 view-dom-id-1 view-list view-id-list view-display-id-page_1 view-dom-id-1">
  <div class="view-content">
    <div class="item-list">
      <ul>
        <li class="r">
          <div class="views-field-name">
            <span class="field-content">Beaches</span>
          </div>
        </li>
        <li class="a">
          <div class="views-field-name">
            <span class="field-content">Houses</span>
          </div>
        </li>
      </ul>
    </div>
  </div>
</div>

That is quite a lot of extra markup, and we are really only interested in the actual list. In the views UI, you can see all the template files that have been used to render this view by clicking on the 'Theme: information' link in the left column. In our case there are 4 template files: views-view.tpl.php, views-view-list.tpl.php, views-view-fields.tpl.php and views-view-field.tpl.php.

Again, by looking at these files (found in the /sites/all/modules/views/theme directory) you should be able to see where all the extra markup is coming from.

So what we want to do is override these templates and make them as basic as possible. To do this, simply copy each of the relevant .tpl.php files from the views module into your theme directory. You may want to check out the Views theming handbook page. These are the simplified versions of the templates for our example.

Display style: views-view.tpl.php

<?php print $rows; ?>

Style output: views-view-list.tpl.php

<<?php print $options['type']; ?>>
<?php foreach ($rows as $id => $row): ?>
  <li><?php print $row; ?></li>
<?php endforeach; ?>
</<?php print $options['type']; ?>>

Row style output: views-view-fields.tpl.php

<?php foreach ($fields as $id => $field): ?>
  <?php print $field->content; ?>
<?php endforeach; ?>

Field Taxonomy: Term (ID: name): views-view-field.tpl.php

There is no need to override this template, since it doesn't produce any extra markup.

TIP: each time you add a views template override file to your theme directory, click the 'rescan template files' button in the Views UI (or clear your theme registry) - this forces Views to clear its cache and pick up the new template file.

The result

The result is the most basic HTML unordered list, available at the URL /lists/taxonomy?ajax=1 - perfect for inserting into a page as part of some AJAX scripting.
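Tying it back to the jQuery side, here is a minimal sketch of consuming that URL ('#term-list' is a hypothetical target element in your page):

// Pull the stripped-down list straight into the current page.
$('#term-list').load('/lists/taxonomy?ajax=1', function() {
  // Re-attach Drupal behaviors to anything in the injected markup.
  Drupal.attachBehaviors(this);
});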

May 09 2010

Amitai Burstein

In the past few months, since Drupalcon Paris, I have been busy upgrading Organic groups (a.k.a. OG) to Drupal 7. I'd like to give a quick overview of what has been done, what still needs to be done, and the changes that came with the upgrade.

The first noticeable thing is that, just as Ubercart became Commerce, Organic groups has changed its name to Group. The second thing you will notice is that Group is a complete rewrite of OG! Why was that done? OG is a great module, and it has been around for a long time. Long enough to be very popular and feature-rich, but at the same time, concepts and implementations that were right in earlier Drupal versions became outdated. Drupal 7's new features - especially the Field API - were too hard to resist.

Here are the Group’s main concepts, by importance:

  1. Allow associating entities (e.g. nodes, users, etc.) with other entities - in plain English, it means you can have posts related to a group. Nothing more, nothing less
  2. Introduce the concept of roles and permissions at the group level
  3. Provide integration with superior modules such as Views. We don't need to reinvent the wheel - we just need to know how to hook into an existing one

Writing those concepts down made it easier to determine what should be in the core of Group and what should be a contrib module. The UI was separated into another module as well, leaving us with a Group API module that makes as few assumptions as possible about how it will be used.

Time for the quick demo: (As always – The screencast script)

So, what is left to do? In a nutshell:

  • Upgrade path and add missing fields (group language and group theme)
  • Finish Views integration
  • Reach 100% test coverage
  • Document everything
  • Probably lots of other things I hope the community will bring up

Let's work on making Group the best module for groups in Drupal 7!


May 07 2010
May 07

As many of you may have heard, Ning recently eliminated its free networks and laid off 40% of its staff. Currently, Ning supports exporting users; for those of you interested in migrating your community members from Ning into Drupal, you have two well-developed options at your disposal:

If Ning ever opened up their APIs to allow content export, then importing the content would be equally straightforward. Both the Feeds module and the Node Import module would get the job done.

Another option for data import is a combination of the Table Wizard and the Migrate module.

If you are launching a Ning site now, setting up a Drupal site to import that content via RSS using the Feeds module wouldn't be a bad idea. Should Ning go out of business, or change their terms again, or if your organization makes the decision to take control of your web presence, your data (and the intellectual capital of your community) won't be in the hands of a content silo.

May 07 2010

Our development of the Brightcove module is steadily progressing and I thought it would be useful to explain a bit about our development process. For version control we use Git on all of our internal projects, so for Brightcove we decided to utilize the same tool with just one small change - using github.com instead of our own private repositories.

Why Git?

At Dynamite Heads, we started a few years ago with minimal software version control (mostly CVS, since drupal.org was using it), then progressed through Subversion and Bazaar and finally adopted Git. The experience is always a little bit different with every system, but only Git (with helper applications) provides all the functionality we need, and provides it the way we want it - easy repository management with detailed branch-level access control for contributors and an advanced branching workflow.

Unlike Bazaar and SVN, Git's branching model is an integral part of the natural development workflow - once mastered, it really provides a lot of benefits. There is a great description of a common Git branching model at nvie.com; check it out. It's very similar to the one we use ourselves.

The Drupal community has recently made the decision to move the entire drupal.org infrastructure to Git, further supporting the decision to use Git for our software development.

Git for Brightcove Video

Since the Brightcove module will be publicly available on drupal.org under the GPL license, we decided to make all stages of development publicly accessible by hosting the repositories on github.com. Clone the repository and you can follow our progress developing the module and even make your own suggestions and contributions. You'll find it at Github - Brightcove Drupal.

There is of course the issue that drupal.org still uses the old CVS repository, so we'll also be pushing incremental releases to drupal.org during the development process.

Current status

While we're actually just at the beginning of the development phase, the module is already in a very usable state. Try it out if you have a Brightcove account and let us know what you think! We will post a demo video of our progress next week, demonstrating what has been done and what is left. Stay tuned, more to come soon.


May 07 2010

I frequently use a 3rd-party designer to help with the tedious task of going from PSD to final theme. If you haven't realized it yet, a lot of designers have problems setting up a local MAMP install with Drupal in which to fuck with CSS. To deal with this without giving the designer any command-line access, my shop uses what we call CZI on all Drupal installs. It stands for CSS Injector, Zen theme and IMCE, and it allows a designer to upload images and apply CSS rules to a development site they have been given permissions for, on a theme (Zen) that provides all the classes and IDs anyone would need.

After my shop, the designer, and the client are satisfied, CSS Injector and its external files become dead weight and need to be removed. Below I detail the process of using Zenophile (http://drupal.org/project/zenophile) to create a Zen subtheme in which to wrap up all your CSS Injector files:

Create a subtheme using Zenophile

  1. Enable module Zenophile
  2. create a new zen subtheme (site building > themes > create zen subtheme)
    • name appropriately according to site url
    • set site directory to installs folder unless you want it available to other installs
    • create a fresh css file
    • Submit (you may need to chown the target directory to have appropriate permissions)
  3. disable module zenophile
  4. manage blocks for new theme (site building > blocks > list > newtheme)
    • save each block individually to have titles set appropriately
  5. Duplicate theme settings (site building > themes > configure > zen & newtheme)
    • make sure your newtheme has the exact same settings as zen
    • pay special attention to logo and favicon paths
    • save theme settings

pack up and move css injector files

  1. merge all css injector files ( site configuration > css injector )
    • copy all css injector files into single file
    • delete originals, leaving you with one merged file
  2. copy content of merged file into newtheme-fresh.css
    • search and replace any filepaths in css code

switch themes

  1. set newtheme as primary (site building > themes)
  2. remove last css_injector file (site configuration > css injector)
  3. test site

cleanup

  1. disable module css_injector
  2. uninstall modules zenophile and css_injector

Peace out and remember to get a good lunch.

May 06 2010

Part of improving communication with the Drupal community is writing about what is done within the DA.
Here are three posts that you may have missed:

Thanks to the people working on those.


May 05 2010

I was working with the date module today and wanted to remove the (All Day) that appeared after the date when the field was rendered to the screen. I found a great post here... http://www.pridedesign.ie/content/drupal-date-field-remove-all-day.

To summarise, it seems that a good way to turn it off (altogether) is just to override a theme function.

Edit your template.php and add the following code:
 

function _date_all_day_label() { return ''; }

However, I realized there was another option too - one that doesn't require editing the template.php file at all and could make some people quite happy. You only need to add a new custom date format.


From there you can either use the "Short" format and set it to your new custom format, or you can add a format style and set it there.

May 04 2010

I'm actually posting this as a question. If you're looking for the answer, sorry I don't have it yet.

How can we reasonably handle large file uploads? I'm talking in the >100MB range; YouTube, for instance, now supports 2GB files, and this will become increasingly the norm. I don't think that most servers are up to that yet, particularly if you need an application to scale.

Elephant on a Bike

Currently, using PHP, you need to set memory_limit to more than twice the upload_max_filesize, which as you can see would be prohibitive in the example of 2GB uploads; you'd need to set your PHP memory to >4GB (adding the buffer of 64M or whatever you need to run Drupal). EDIT: Looks like I was incorrect in my assumption; if you're not going to process the file, you don't need a huge memory footprint just to handle the raw uploads. Thanks Nate and Jamie!

Even if you manage to have that kind of resource available, you can probably expect things to splode with concurrent uploads...

So I spent some time yesterday looking at SWFUpload (module here), as I'd misunderstood its claims. Yes, it handles large file uploads (from the browser's standpoint), but you still need to set PHP memory accordingly. Not suitable for what I'm looking for, but it is a really nice way to handle multiple uploads. WARNING: I also learned from experience and much head-scratching that it doesn't work if you have Apache authentication on your server...

Now I'm looking at node.js as a possibility. This looks really great, and might do the job. Basically, it's a JavaScript application that sits on your server. Yes, you heard that right. Turns out that as JS has evolved, it's turned into a really tight language, and should be quite suitable for concurrent tasks.

Sorry if you came to this post looking for answers; I've simply postulated more questions. But I'm hoping that someone with more experience with this issue might be able to comment, and we'll all benefit from it. Additionally, this might turn out to be a handy addition to the Media suite, perhaps as a fancy stream wrapper for handling large files? And I'll definitely follow up when I figure out how best to tackle this.

Thanks,
Aaron

May 04 2010

I'm working on a project to convert a big ASP website into Drupal. On Windows OSes there is not really a distinction between upper- and lower-case characters in filenames. At first I thought I would just leave the capitals, but on the web pages the links sometimes used capitals and sometimes lowercase. So I added some stuff to the Apache configuration, usually inside the VirtualHost directive (this does not work in .htaccess):

        
        RewriteEngine on
        RewriteMap lc int:tolower
        RewriteCond %{REQUEST_URI} [A-Z]
        RewriteRule (.*) ${lc:$1} [R=301,L]

But that wasn't all. For the project I'm parsing thousands of ASP files, and the plan is to do this every day while the existing website is updated. We're rsyncing the files to be parsed, and the recursively-rename-to-lowercase script I found is nice, but it's not very efficient to copy and rename almost 3 GB every day. I wrote a little Python script in the train today to create lowercase symbolic links to file and directory names with uppercase letters. The hardest part was changing directories properly - it's important to change into the directory where the symlink is created, but at the end of each os.walk iteration it's necessary to get back to the original working directory.

I'm sure it can be useful to other people dealing with similar issues...

#!/usr/bin/env python
"""
lowlink.py

Recursively creates lower case symlinks to filenames with uppercase letters.

Created by Kasper Souren in 2010, consider this public domain, copy and re-use freely.
"""

import os

def lowlink(path = '.'):
    cwd = os.getcwd()  # os.walk can't handle path changes very well 
    for root, dirs, files in os.walk(path):
        # change directory for symlink creation
        os.chdir(os.path.join(cwd, root))
        print root
        for name in files + dirs:
            lowname = name.lower()
            if name != lowname and not os.path.exists(lowname):
                os.symlink(name, lowname)
        os.chdir(cwd) # restore path for os.walk

lowlink()
May 04 2010


I remember when I first started with Drupal. There was just a handful of modules compared to today. I want to say something like 600 or so...

In fact, there were few enough, I had printed the whole list of all top Drupal modules and took a stack of paper home to read about each one. Fast forward to today's module count and it's coming close to 6,000!

Can you say "Holy metric ton load of Drupal modules to search through Batman!" How do you really know which are good and which are bad?

To start, keep this in mind: Drupal is open source, and pretty much anyone can contribute back to the project. While Drupal's future version control setup will place a few more barriers to entry against just anyone submitting potentially duplicate modules (therefore preventing the wonderful module-of-shame issue), you still don't really know which modules are coded well and which are not - unless you dig into their code.

So, the quick answer to this problem about knowing which are the top Drupal modules, is to use Drupal.org's /project/usage stats. This is the one place you can see which modules are used the most by the most sites (at least those reporting back to Drupal.org).

In this video, I start at number one and head down to module 50 within the usage stats. I've not worked with all of them, but I'm familiar with what most do. I provide an overview of the modules and what they do within Drupal.

If this approach is a good one, and you'd like me to keep going, then please leave a comment on this posting!

May 03 2010

Here at Fuse we argue from time to time about the best ways to do things in Drupal. Do we go with module 'x' or module 'y'? Do we use Views or custom code, etc.? One of the more heated "debates" we've had lately was over which base theme to use. We had been using Zen for a while, but Niall, a themer who joined the Fuse team a while back, swears by Basic. Basic, being a branch of Zen from a while back, already had a lot of what we needed and less of what we didn't, so it seemed like a good progression for us. So, instead of just mashing up the best features of the two starter themes, we decided to use this opportunity to streamline our future development, and hopefully yours, while avoiding an all-out theme war. Nobody likes a cranky Nerf-gun-wielding themer, so Fuse Basic was born.

Since our wonderful designer prefers the use of a 975 pixel grid (versus the somewhat standard 960), this was to become the foundation of the new layout. The primary benefits of a 975 pixel base grid are added flexibility with the column structure, and slightly loftier margins that help keep visual page elements from running into one another.

Now, you may argue that the popular 960 pixel grid can do all of the above and wash your car if you massage it just right. While that argument has its merits, I have found that once tweaked to our liking, what was 960 ends up being 975.

The second item that sets the Fuse Basic branch apart from the pack is the nifty implementation of the #grid tool/script. This tool allows us to painlessly bring in our 975 pixel photoshop grid and overlay it over the page at the press of a button.

The features of #grid:

  • Adaptable for all layout widths and alignments
  • Adaptable for any vertical rhythm value
  • Set keyboard shortcuts to show and hide the grid, hold it in place, toggle it to the foreground and background, and jump between multiple grids
  • Uses a single JavaScript file, a little CSS, and an image (for the vertical lines)

The grid is bound to your F, G, H, J keys, and can be overlaid by pressing (and holding) the G key. For a full list of keystrokes and features, please visit #grid.

There you have it. Our very own theme to change the way Drupal sites are put together, or at the very least ease the transition from those pesky inDesign Photoshop mockups. If you get a chance to use Fuse Basic, please do let us know what you think.

May 03 2010
Attendees at the Drupalcon SF Drupal Learning BoF

One of the more exhilarating and valuable Drupalcon San Francisco experiences for me was participating in a self-organized BoF (birds of a feather) session convened around the topic of Learning Drupal.

Drupal learning, training, curriculum development, certification -- these were hot topics buzzing both inside sessions and out. Even before the conference proper began, 495 people participated in 20 full-day training workshops. The need for Drupal education and knowledge sharing is strong and growing. As a community it is critical for us to come together to share strategic thinking around how to address this need.

About 50 people showed up on the last day of the conference to do just that.

At the brilliant suggestion of Heather James, we utilized the Open Space Technology method to organize the session. This optimized the small amount of time we had and enabled substantive simultaneous discussions on 6 important topic areas defined by the participants.

After being introduced to the "rules" of Open Space, every participant wrote down and spoke aloud the topic they wished to discuss. Heather and I then combined like topics into 6 groups:

  • Long Term Training and Training Developers
  • Marketing & Selling Training
  • Developing curriculum
  • Methods for training
  • Finding and Improving Teaching and Learning Materials and Programs
  • Higher-ed training

Scribes were designated for each group to capture discussion notes and people chose a group to join for the next 45 minutes of discussion. Everyone was encouraged to drift as they felt inspired from one discussion to another -- such is one of the Open Space principles.

At the end of the hour, each group's scribe took two minutes to summarize the main outcomes of their discussion to the full room.

I encourage you to at least skim the session's notes. The ideas were flowing freely and set an inspirational framework for continuing as a community of Drupal educators and learners to collaborate on training strategy.

I also encourage trainers and meeting organizers to explore the concept and methods of Open Space Technology as an alternative to the traditional one-track lecture/discussion format for conducting successful meetings. It certainly beats the all-introduction, no-room-for-fruitful-discussion sessions we've all experienced far too often.

May 03 2010

We were recently selected by Brightcove to make a module to easily integrate Brightcove video services into Drupal sites. Brightcove is a well known white label video hosting platform. While there are existing modules which currently allow for video content from their site to be embedded in Drupal, none of the current modules has the complete functionality that Brightcove wants its customers to have.

At Drupalcon 2010 San Francisco we held a BOF session about the new Brightcove integration module that we will be making. Unfortunately, the volcano issue made it impossible for Jakub and some of the engineers from Brightcove to come. The session was held by me and Matt, one of the Brightcove representatives. Everyone provided a lot of useful feedback including potential use cases and the importance of DRM support.

The module will provide a CCK field which you can put onto any content type you want. It will be configurable both on a sitewide level and on a field level. The first release will be a Drupal 6 module.

The first release will focus on browsing and embedding. This will let you browse the videos that you have uploaded to Brightcove studio and select them for display in your nodes. You will be able to also select formatting that should be applied to the video, if allowed. We will create Views integration for the video and its meta data. There is rich meta data in Brightcove Studio about each video, and you'll be able to pull this information to show in your views.

The second stage will incorporate uploading to Brightcove through your Drupal site. There are already efforts in the community to create this; we hope to be able to work together with this effort to create a high-quality upload mechanism.

As Drupal 7 and the Media module mature, we will develop a D7 version of the Brightcove integration module. Since there are still outstanding issues with the Media module, it's not possible to firmly say when this will be done.

We are really excited to be making a module to support Brightcove users in Drupal. It's a module we could have used in the past on a number of projects. What could be better than to be hired to make a module that you've been wanting yourself? We'll be developing the module publicly on Github and welcome all feedback and suggestions. You'll find new blog posts here as we continue to make progress.


May 03 2010

This was originally intended to be a BoF session at DrupalCon San Francisco, but after talking with Sam (sdboyer) about it, it seemed a better idea to make a little screencast instead.

So, as I mentioned in the post about DrupalCon SF, this is my first attempt to start documenting the 6.x-2.x branch version.

I hope this can help people to get inside the code of the versioncontrol module.

I'm also attaching here the presentation I used in the video.

hint: looks better at fullscreen

May 03 2010

My client wanted to be able to search their list manager archives (which use mailman) with Solr. We already had a pretty major investment in Drupal, with about 80K PDF files. In the past, each of the different databases was managed by a separate dtSearch index. With the new Drupal system, we are now able to consolidate everything into one master index. With the 'faceting' that is provided by Solr/Drupal, it becomes very easy to drill from a general request down to the specifics.

Well, this article is going to get a bit specific on the why and how of the integration we did between mailman data and Drupal.

Mailman keeps its archives in a directory structure that provides a single file <listname>.mbox and a directory <listname>. I selected the directory as my driver for getting all the files across. After I got everything written, some more research indicated that I might have done better to use the <listname>.mbox file, as this is 'authoritative' for each list that mailman handles. But I have working code now, so I will live with this decision for the time being.

The general process is as follows:

A) One Time Procedures

1) create a directory under sites/default/files. I called mine mailman. This is where all the list subdirectories will live.

2) create a Content Type in Drupal using just Title and Body.

3) install my Python script in the directory from step A.1 above.

4) Make sure that a current release of drush is installed

5) install my drush script in the sites/default directory

B) Repeat procedures

1) rsync all of your lists that you want from the mailman server to the A.1 directory

2) run the Python script. This will create a list of all the eligible archive files, and then call the drush script once for each file found.

May 02 2010

In this screencast I plan to show you a new module we are creating: http://drupal.org/project/form_extended. This screencast is brought to you using the Sparkeo video platform, which we helped to build.

The Drupal form API comes with default, generic properties.
The Form Extended module lets other modules extend the form API and add their own custom properties.
To create a new form property, a module needs to define a pluggable handler. Each module can then use those new properties in its forms, or manipulate other forms using hook_form_alter() and Rules form support, in the same way you can manipulate a native property.
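For context, setting a native property from hook_form_alter() looks like the core-only sketch below (module and form names are illustrative); a custom property added through Form Extended would presumably be assigned to elements in the same way.

<?php
/**
 * Implementation of hook_form_alter().
 *
 * A core-only sketch: a custom property would be set on an element
 * just like the native '#disabled' property is set here.
 */
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'user_register') {
    $form['name']['#disabled'] = TRUE;
  }
}
?>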

This module is still in development and I'll be happy to hear from the community any thoughts and comments they have on the idea itself and its implementation.

Apr 30 2010

Flags at the Moscone Center entrance

Third and last day of Drupalcon, we continue to see interesting things.

Instant Dynamic Forms with #states

Presenter:
Konstantin Käfer

This is one of the most interesting improvements around the form API in Drupal 7.

It provides a method to create dynamic forms without writing JavaScript, and here 'dynamic' does not refer to the AHAH features already present in Drupal 6; we are now talking about things like making new elements appear in the form when a checkbox is selected, without making any request to the server.

In code, the basic usage takes the form of dependencies. You describe which fields depend on others, what kind of dependency they have, and what action to run when the dependency is met.
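A minimal Drupal 7 #states sketch of such a dependency (field names are illustrative): a text field that is only visible while a checkbox is checked, with no round trip to the server.

$form['ship_to_different_address'] = array(
  '#type' => 'checkbox',
  '#title' => t('Ship to a different address'),
);
$form['shipping_address'] = array(
  '#type' => 'textfield',
  '#title' => t('Shipping address'),
  // Only show this field while the checkbox above is checked.
  '#states' => array(
    'visible' => array(
      ':input[name="ship_to_different_address"]' => array('checked' => TRUE),
    ),
  ),
);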

It also serves to dynamically validate the entered data, for example phone or credit card number formats - all without making AJAX calls, with JavaScript but without having to write JavaScript, just by specifying regular expressions in the form API.

You can also change the 'required' property of fields depending on the values of others.

There is also a new concept called 'triggers' (not related to the triggers functionality in Drupal 6) which allows you to do more advanced things, but which requires writing some JavaScript. For example, disabling a field when a certain string is typed into it.

Session page: http://sf2010.drupal.org/conference/sessions/instant-dynamic-forms-states

Advanced drush

Advanced drush session

Presenters:
Moshe Weitzman
Owen Barton
Adrian Rossouw
Greg Anderson

In this session we are introduced to a series of new features that are being developed around the drush command.

Aliases, very useful. Now you can execute drush commands against any Drupal install without having to be inside the site directory (sites/...) or having to specify a drushrc.php file. For example you can do: drush @dev status, where @dev is an alias defined in a global location which contains the config data for all your sites, so drush knows which site you are referring to.
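A minimal sketch of such an alias definition (paths and hostnames are illustrative), e.g. in a global ~/.drush/aliases.drushrc.php file:

<?php
// Local site: "drush @dev status" now works from any directory.
$aliases['dev'] = array(
  'root' => '/var/www/dev.example.com',
  'uri'  => 'dev.example.com',
);
// Remote site: drush reaches it over ssh (see the next point).
$aliases['live'] = array(
  'root' => '/var/www/example.com',
  'uri'  => 'example.com',
  'remote-host' => 'example.com',
  'remote-user' => 'deploy',
);
?>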

Remote sites. We can execute drush commands on Drupal sites located on other servers. Drush automatically accesses the server using ssh and executes the command. For authentication it's recommended (and it's the default) to use ssh public keys, although you can also force the use of passwords.

You can now run commands in several sites at the same time, using the alias @sites or a list of site aliases.

Aliases for commands. For example, the alias cc for 'cache clear'; you can even specify which cache you want to clear (menu, theme, page, etc.), instead of all of them as is done in current versions.

An interactive drush shell, with aliases for the most-used directories in a Drupal install. Inside of it you have specific commands to jump to the different directories that you can find in every Drupal install.

A new 'hook' command that allows you to see the hook code. It first presents a list of modules that implement a given hook; you select one and it shows you the code.

They continue with a fast tutorial on how to write drush commands, the hooks that you can implement, etc.

The first example is a command 'drush make.me-a-sandwich', which ends up showing an ascii sandwich on the terminal.

Session page: http://sf2010.drupal.org/conference/sessions/advanced-drush

The story of RDF in Drupal 7 and what it means for the Web at large

Videoconferencing the session

Presenters:
Stéphane Corlosquet
Lin Clark

One of the presenters of this session could not make it to Drupalcon, having got stuck in Europe because of the volcano, so she joined by videoconference and presented her slides remotely.

RDF may be hard to grasp when you are first introduced to it, but the applications are very interesting.

She starts by explaining some basic notions about RDF, introducing 'resources' (e.g. a document, a company, a person), the different kinds of resources, and the possibility of describing properties of each one.

The presenters use a comparative table to show the advantages of RDF over other meta-markup formats like Microformats.

Drupal 7 has RDF capabilities integrated into core, and some of the more popular RDF schemas are supported 'out of the box', like FOAF, SIOC and DC.

Session page: http://sf2010.drupal.org/conference/sessions/story-rdf-drupal7-and-what-...

Closing session and next Drupalcons

Morten announcing Drupalcon Copenhagen

They now review the numbers behind the conference - even the gallons of coffee have been counted. Financially they have moved over a million dollars, obtaining a profit of over $300,000 that goes to the Drupal Association.

In the closing session they spend some time announcing where the next drupalista meetups will be.

Morten is in charge of announcing that the next Drupalcon will be in Copenhagen, six months from now. The website is at http://cph2010.drupal.org. You can register starting today.

Drupalcon Chicago 2011 logo

And for the next year, again in the United States, Drupalcon will be in Chicago. The announcement is made by Tiffany Farris, from palantir.net, who will lead the organization. It seems that they are preparing a very original Drupalcon: they are going to lease several entire floors of a hotel in the middle of downtown, and Drupalcon will take place there. Web: http://chicago2011.drupal.org

Party (aka. more networking)

San Francisco skyline

Like every night, Drupalcon has organized a private party, this time at Mission, a little farther away than on previous days, so they have set up free buses so people can get there and back. The club has three floors and an amazing view of the city skyline.

Related posts

http://oitdesign.ncsu.edu/2010/04/22/drupalcon-2010-day-three/

Apr 30 2010

One of the mantras for scalability on the web is "divide and conquer". For Drupal and other PHP-based sites, it's common to eschew mod_php and run PHP as a CGI process managed by mod_fcgid. This allows Apache to keep its processes lightweight and nimble so that they can efficiently handle static requests for images, CSS, and JavaScript files. This concept takes on many forms, including replacing Apache with an alternative web server such as nginx, or preventing those static requests from reaching the web server at all by placing a proxy cache such as Squid or Varnish on the front lines.

These implementations vary but the philosophy is the same: not all requests are created equal, and when scalability is the name of the game you should handle different types of requests with core competency solutions. Beyond static vs. dynamic, there's a significant benefit to tailoring your installation based on the nature of your PHP requests.

We use the Ad module for many of our ad-supported Drupal sites. It's a good fit over working with a third-party advertising service because site administrators can control ads and content in one place. It's easier to honor your privacy policy when your visitors' data isn't being farmed out to parties unknown. You can control the appearance of ads based on specific and fine-grained content relationships, and you can enable features like selling an ad directly from your site.

But serving these ads can put some serious load on your server. In order to support ad counters for tracking statistics, "something smart" must be in control of delivering an ad to the user and tracking the request. Based on the current architecture of the ad module, "something smart" means passing each ad through a PHP script. This means that each visit to every page of the site causes several, sometimes dozens of concurrent PHP requests to your site. Because they're all PHP, none of these requests can be intercepted by Apache, nginx, Varnish, or Squid.

One site in particular is the Twin Cities Daily Planet, which is a growing news network for the Minneapolis and St. Paul Metro area. Each day, the Daily Planet publishes original articles and blog entries and also republishes several articles and blog entries from our 100+ community media partners. Their traffic has steadily doubled year by year, and they continue to expand their reach by implementing new features. We try to keep a lean and mean PHP installation, but new features often mean enabling PHP extensions for json, xml, image handling, hashing and file manipulation, which add resource consumption to every PHP process. There's also a whole lot of Drupal going on: APC caching loaded modules and data, plus whatever data is left behind after bootstrapping Drupal and handling its request.

Now, the ad counters aren't served up by Drupal directly. That is, they don't go through index.php, but rather through the serve.php file that is included in the ad module. serve.php serves and tracks an ad without necessitating the call to drupal_bootstrap() that takes on the resource impact of loading up Drupal.

That's great and all, but we're still burdened by the same problem that a mod_fcgid configuration is designed to address: lots of bloated processes are hanging around in the same pool, waiting to serve up a mixture of lightweight and resource-hungry requests. We're spawning hundreds of PHP processes, and each process can reach 100MB of resident memory during its lifetime. Every page request is a rallying cry for a dozen of these processes to spin up and start lumbering towards those ad requests.

My first thought for this was to set up a separate VirtualHost for ad serving, but that sounded like a huge pain for an already-running site. And then it dawned on me what the design goals for mod_fcgid actually are: it's not about separating PHP from static requests, it's about designating different process groups for different applications. That's exactly what I want!

My original configuration has the following, per mod_fcgid's documentation:


<VirtualHost *:80>
...
FcgidWrapper /www/tcdailyplanet.net/bin/php .php
</VirtualHost>

And the wrapper script at /www/tcdailyplanet.net/bin/php looks something like this, also per the documentation:


#!/bin/sh
CONF="path/to/php.ini"
exec /usr/local/bin/php-cgi -c $CONF

This has us running every request for anything that uses PHP through the same configuration. Instead, we want to run ad requests through a separate PHP wrapper. So we add this to the end of our apache configuration file:


<Files ~ serve.php$>
FcgidWrapper /www/tcdailyplanet.net/ad/bin/php .php
</Files>

Now, any request for the file serve.php uses the alternative php wrapper script that lives in the 'ad' folder instead of the default wrapper. Automatically, this reduces overhead because the PHP processes for serve.php remain unencumbered by the baggage of handling a lot of Drupal page requests.

I can go even further: because we're using a different wrapper script, I can also specify a separate php.ini file. In the php.ini file for serve.php, I have omitted any PHP extensions that aren't pertinent to counting ads. Gone are json, geoip, xml, zip, and anything else that's not applicable to incrementing an integer.
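
A quick way to confirm the split is working (a throwaway sketch of my own, not part of the ad module): drop the script below into the docroot twice, once under a name that matches the serve.php pattern and once under a name that doesn't, then request both and compare the output. Delete both copies afterwards.

<?php
// Save as extcheck.php (handled by the default wrapper) and again as
// check_serve.php (matched by the <Files ~ serve.php$> rule, so it runs
// under the trimmed wrapper), then compare the two responses.
header('Content-Type: text/plain');
echo 'php.ini      : ' . php_ini_loaded_file() . "\n";
echo 'memory_limit : ' . ini_get('memory_limit') . "\n";
echo 'extensions   : ' . implode(', ', get_loaded_extensions()) . "\n";
?>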

As a result, each PHP process in the serve.php pool consumes only 3-5MB of resident RAM. Occasionally, when it's time for the ad process to dump its statistics to the database, it executes a drupal_bootstrap(). This results in one process growing to about 24MB of RAM until it eventually dies off. A small price to pay!

This radically changed the way the site performs. We freed approximately 1.5GB of RAM overall, and our anonymous page generation times went from a 2-second average down to about 50 milliseconds. As for load average? The included graph shows a pretty clear indication of "before" and "after".

Partitioning our requests in this manner required no new software and only a few lines of Apache configuration, which is great for growing organizations that can't just throw hardware at a problem.

Apr 30 2010
Apr 30

There has been a lot of press regarding how many Drupal jobs are currently available. Many people have been commenting on the lack of experienced Drupal developers. Well, I am here to help.

As many people may be aware, I work full time for Mark Boulton Design as a key member of their development team. In my spare time, however, I have around 10 hours a week to take on freelance projects.

Please take a look at my work page to see some of the sites I have worked on. These have included custom module and theme development as well as integration with Salesforce, Paypal, Twitter, Flickr, jQuery and Adobe Air.

Feel free to contact me if you have any potential projects or questions.

Apr 30 2010
Apr 30

I was lucky enough to meet quite a few local organizers at Drupalcon, and a few folks who were looking to grow their local communities as well. I promised them that I would share some of the things I've learned over the years. This is the first of several posts.

Lone Drupalistas

Recently, two guys from Bryan/College Station (BCS) came to the Austin Drupal Dojo. They came on different nights. The two of them didn't know each other.

I asked the second fellow if he knew the first, and he didn't. I asked him if there was a Drupal meet in BCS, and he replied that he only knew 2 other guys in town who were working with Drupal. That's why he came to Austin -- to hang out with other Drupal users.

I already knew that there were at least ten people in BCS who were interested in Drupal -- to the extent that they considered it worth mentioning in their LinkedIn profiles. I suggested that he search LinkedIn for Drupal in the BCS zip codes and start a group. Then he wouldn't have to drive a few hours to Austin just to meet other Drupal folks.

Drupal on LinkedIn

It was through LinkedIn that I started to connect with my local Drupal community. As an outsider new to Drupal, I didn't know about groups.drupal.org (GDO). There were no local Facebook groups for Drupal, and Meetup stinks for anything other than organizing a monthly meet.

The number of Drupal people on LinkedIn was actually one of the factors in my decision to migrate GeekAustin from SlashCode to Drupal. I had been running Slash since 2000, but by 2006, public Slash contributions had all but come to a standstill -- which meant that many of the newer features I wanted simply would never be available. I wanted my next CMS to have a large user/developer community -- so there would be plenty of people who could help me come up to speed.

These two LinkedIn searches pretty much told me what I needed to know:

LinkedIn profiles mentioning Drupal: 17,087

LinkedIn Austin profiles mentioning Drupal: 209

There were 209 people within 100 miles of me who considered Drupal important enough to mention in their LinkedIn profile. Judging by this number, it was a safe bet that, no matter what my problem was, I'd be able to find someone to help me.

Searching elsewhere for Drupalistas

Meetup.com has had a strong presence in Austin for many years, so the next place I checked was the local meetup group:

Austin Drupal Meetup: 170 members.

Jerad Bitner tells me that Austin may be a bit unique in that the local meetings (for the last few years at least) have been organized mostly through meetup.com. In other cities he has visited, the habit is to organize the get-togethers through the local GDO group. My own search showed nine cities with larger Drupal meetup groups than Austin.

A quick search of Austin + Drupal on Google led me finally to GDO Austin:

http://groups.drupal.org/austin

The GDO Austin group has 284 members. If this group was a typical online group, I knew that most of these folks would not be active, but even if only a fraction were, there would be ample folks with whom to share the experience of learning Drupal.

But where is everyone?

When I first went to the local Austin Drupal Meetup, there were about 35 people in attendance -- little more than a tenth of what I located online. After a few Meetups, it was clear that this was not the fault of the organizer. Former Austinite and now Bigtime Bay Area Drupalist, Lauren Roth (@laurennroth), was doing a great job bringing speakers every month. So where was everyone?

Having been involved in, and hosted events for, numerous local tech groups, I knew several things about the membership numbers for these groups:

  • Recruiters and hiring managers join Meetup and LinkedIn groups when looking for talent -- and they remain in the group. In some cases, like Java user groups, I've seen nearly 30% enrollment by recruiters.
  • Some of the members may have joined when they started working with Drupal, but either ran out of free time or hit a learning block -- and are still members.
  • Some members may once have been working on a Drupal project and, although that time may have long since passed, they remain in the group.
  • Some of the older members may be longtime Drupal experts who simply haven't been motivated recently to attend meetup-style presentations.

I also knew from experience that there would be people in the LinkedIn and Facebook groups who were not members of the GDO group, and so on. So the combined number of Drupalistas from these sources could be as many as 400 people or more.

So, how many people really are in my local Drupal community? And how do I meet them?

Everyone likes a party. Throw a party or a happy hour. That's what I did. I called it Drinks and Drupal. It's a catchy name. If you like it, feel free to use the name for your event too.

The key thing is to get people together in the same room, sharing experiences and having a good time. Knowledge exchange will happen and new friendships will be struck. The Drupal community needs all kinds of people -- programmers, designers, and local gadflies too. You can take the initiative to make an event like this happen.

In a follow-up, I'll share some of the dos and don'ts I've learned from hosting tech events over the last few decades.

-Lynn

Apr 29 2010
Apr 29

Earlier this week, I received an email from someone at an international NGO with ties to Minnesota, where our company, Advantage Labs, is based. His question: "If my goal is to quickly set up sites that allow Volunteers to run their own projects, add content, manage user access, etc., without having to learn any code...is Drupal the way to go?" My answer, more often than not these days, is "Yes."

However, making a platform decision should not be predicated on generalities. Learning more about the specific nature of his project confirmed my recommendation to pursue Drupal:

"With [multiple centers] all around the country, and significant cultural barriers to collaboration, I want/need to select online technologies that will allow Volunteers to collaborate on projects and offer distance education."

Essentially, he's looking to set up an online "community of practice" around technology transfer, potentially one that will need to utilize multiple languages. Drupal's flexibility as an application framework makes it well-suited for this project. I need to look no further than our local community to provide an example:

Advantage Labs works with the Center for Victims of Torture on their New Tactics in Human Rights project site. They use Drupal to facilitate practitioners around the world working together to "promote tactical innovation and strategic thinking within the international human rights community." While the nature of the work may differ, the concept of an online community of practice is the same.

As with the New Tactics online community site, success for this new project will depend on the care taken in architecting the sites, having a solid infrastructure (including hosting and support), and choosing and configuring an appropriate set of contributed modules. New Tactics is part of our Catalyst Program -- itself a community of practice focused on Drupal learning -- which has helped them navigate Drupal and continue to innovate and evolve. Connecting this new project coordinator with New Tactics' community manager could well benefit both projects in strategic as well as technical ways.

For this new project, as I always do, I recommended taking advantage of research and efforts already in place in the Drupal community to serve as a guide rather than trying to go it alone. With a little searching, I was able to quickly gather some resources to start them off:

There has been quite a bit of research and work done on using Drupal as a "learning management system." One prominent leader in this area is Bill Fitzgerald. His site, http://funnymonkey.com/, serves as a resource, and he has written Drupal for Education and E-Learning about configuring Drupal for this purpose.

The Drupal group, Drupal in Education, focuses on exploring the use of Drupal in educational environments. One recent post to this group collates an abundance of valuable research. Digging into these resources and contacting people engaged in exploring these strategies will help the project leapfrog many potential startup hurdles.

I'm continually amazed by the breadth, depth and generosity of the Drupal community. Research into one email inquiry led me to find a wealth of information and inspiration. It is essential we continue to share our developing knowledge, experience and ideas with each other to be successful and innovative. I'm happy to be part of this effort.

Apr 29 2010
Apr 29

Install the APC package for PHP5:

$ sudo aptitude install php-apc

The memcached server:

$ sudo aptitude install memcached

Memcache for PHP5

The default php5-memcache package has bugs, and according to CVS it looks like they will be fixed in the next version.

So it needs to be compiled by hand:

$ wget http://pecl.php.net/get/memcache-2.2.5.tgz
$ tar -zxvf memcache-2.2.5.tgz
$ cd memcache-2.2.5
$ phpize && ./configure --enable-memcache && make
$ sudo cp modules/memcache.so /usr/lib/php5/20060613+lfs/
$ cd /etc/php5/apache2/conf.d/
$ sudo su
$ echo 'extension=memcache.so' > memcache.ini
$ /etc/init.d/apache2 restart

Now configure the parameters:

$ sudo nano /etc/php5/apache2/php.ini

And add:

; Memcache
memcache.hash_strategy="consistent"

Cache configuration for our site, in settings.php. This is taken from Lullabot.

<?php
$conf = array(
  'cache_inc' => './sites/all/modules/memcache/memcache.inc',
  // or:
  // 'cache_inc' => './sites/all/modules/memcache/memcache.db.inc',
  'memcache_servers' => array(
    'localhost:11211' => 'default',
    'localhost:11212' => 'content',
    'localhost:11213' => 'filter',
    'localhost:11214' => 'menu',
    'localhost:11215' => 'page',
    'localhost:11216' => 'views',
  ),
  'memcache_bins' => array(
    'cache' => 'default',
    'cache_content' => 'content',
    'cache_filter' => 'filter',
    'cache_menu' => 'menu',
    'cache_page' => 'page',
    'cache_views' => 'views',
  ),
);
?>
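
To check that PHP's memcache extension can actually reach each instance, here is a quick standalone sketch of mine (not from the original post); it assumes the localhost ports listed above and that the daemons from the init script below are already running.

<?php
// Connect to each memcached instance from the 'memcache_servers' list above
// and print a couple of basic stats.
$ports = array(11211, 11212, 11213, 11214, 11215, 11216);
foreach ($ports as $port) {
  $mc = new Memcache();
  if (@$mc->connect('localhost', $port)) {
    $stats = $mc->getStats();
    printf("localhost:%d OK (items: %s, uptime: %ss)\n", $port, $stats['curr_items'], $stats['uptime']);
    $mc->close();
  }
  else {
    printf("localhost:%d NOT reachable\n", $port);
  }
}
?>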

Edit the memcached init script and set it up with the following custom parameters:

$ sudo nano /etc/init.d/memcached

These parameters will vary depending on your hardware and application. Note that the ports and cache roles below don't exactly match the settings.php example above, so adjust one or the other until they agree.

#!/bin/bash
prog="memcached"
start() {
    echo -n $"Starting $prog "
    # Sessions cache.
    memcached -m 16 -l 0.0.0.0 -p 11211 -d -u nobody
    # Default cache.
    memcached -m 32 -l 0.0.0.0 -p 11212 -d -u nobody
    # Block cache.
    memcached -m 32 -l 0.0.0.0 -p 11213 -d -u nobody
    # Content cache. Holds fully loaded content type structures.
    memcached -m 16 -l 0.0.0.0 -p 11214 -d -u nobody
    # Filter cache. Usually the busiest cache after the default.
    memcached -m 32 -l 0.0.0.0 -p 11215 -d -u nobody
    # Form cache.
    memcached -m 32 -l 0.0.0.0 -p 11216 -d -u nobody
    # Menu cache.
    memcached -m 32 -l 0.0.0.0 -p 11217 -d -u nobody
    # Page cache. Bigger than most other caches.
    memcached -m 128 -l 0.0.0.0 -p 11218 -d -u nobody
    # Views definition cache.
    memcached -m 1 -l 0.0.0.0 -p 11219 -d -u nobody
    # Views data cache (may need to be increased if heavily used).
    memcached -m 32 -l 0.0.0.0 -p 11220 -d -u nobody

    # More caches that might be added later:
    # Users table.
    #/usr/bin/memcached -m 24 -l 0.0.0.0 -p 11219 -d -u nobody
    # Path source cache.
    #/usr/bin/memcached -m 4 -l 0.0.0.0 -p 11220 -d -u nobody
    # Path destination cache.
    #/usr/bin/memcached -m 6 -l 0.0.0.0 -p 11221 -d -u nobody
    RETVAL=$?
    echo
    return $RETVAL
}

stop() {
    if test "x`pidof memcached`" != x; then
        echo -n $"Stopping $prog "
        killall memcached
        echo
    fi
    RETVAL=$?
    return $RETVAL
}
case "$1" in
        start)
            start
            ;;
        stop)
            stop
            ;;

        restart)
            stop
            start
            ;;
        condrestart)
            if test "x`pidof memcached`" != x; then
                stop
                start
            fi
            ;;
        *)
            echo $"Usage: $0 {start|stop|restart|condrestart}"
            exit 1
esac
exit $RETVAL
Apr 29 2010
Apr 29

The last few days have been an incredible experience for me. Let's start at the beginning :-)

I was lucky that my application was one of those chosen to receive a scholarship to travel to this year's first DrupalCon, in San Francisco, California, USA. After dealing with visa stuff (thanks for the letter, Cary Gordon!), which was much easier than it was for France, I was on a ten-hour flight to the Con!

Stephanie Canon was the person in charge of coordinating the scholarship, and everything went smoothly, mainly because of her great work. Thanks for taking care of so many details!

So, what happened there?

Volunteering

I also registered as a volunteer for the conference, and this time it was great! I could help a lot the day before the start, moving things and getting them ready, and during the first two days, mainly behind the registration table (I can't believe we registered almost 3,000 people!), but this time I still had plenty of time to spend at the conference.

The sessions

There were a lot of sessions on a lot of topics in many rooms at the same time, and all of them were recorded with really good quality :-), so there is no excuse not to watch them: they are all on the original schedule posts at the official site (uploaded to archive.org and embedded there).

Here is a mini-list of the sessions I recommend (only sessions I was in; I still need to watch the other ones):

Many productive sprints!

Every day after the conference sessions, and also the day before and the day after the conference, there was space for sprints; during the conference days they were called the ChX coder lounge.

I tried to participate in them every day, and it was great to share ideas with the community; it was also pretty productive! The main things I did there were:

  • Vote up/down documentation: finally, after procrastinating on this for too long, I found the time to do it!
  • Version Control API introduction documentation: the real plan was to hold a BoF session about it, but after talking with Sam, the co-maintainer of the module, it seemed a better idea to make a little screencast of the presentation I made. So I'll be posting it on this blog, and probably on an official channel :-p, in the next few days.
  • Core bugs! Yep, we all can fix our beloved beast :-)
    • #776178 PHP 5.3 warning for command line install: this was an amazing coincidence. I was lucky that Moshe was sitting to my left at the table, and after hearing about a PHP 5.3 problem with drush, I tried to help. After realizing it was a core bug, apparently in the Drupal installer, I rolled a little patch to fix it, and I could not believe it: he RTBC'd it, and then I was talking to chx about the solution. Magic happens :-) .. it was the quickest core patch I ever got in. The next day, DavidXXXX proposed a better solution, which I rerolled to include a rollback of my original patch (no longer necessary). It was a really tricky bug in the Form API, and it was the first time I read the Drupal installer code, but anyway it was a _great_ experience, and it showed me again that you always learn awesome things in FLOSS development.
    • #237566 Automated JavaScript unit testing framework
    • #507502 Provide Locale support for jQuery UI
  • Git migration planning: At the CVS exodus session, there was an invitation to join the real work behind migrating to Git. The session started out kind of ethereal, but suddenly everything clicked: we could communicate with each other, and we all now have real assignments. I'll do all I can to improve the versioncontrol* modules to help.
  • Closed a lot of issues in the vote up/down module with the help of Pratul (lut4rp) and Simon (lyricnz), and there was a big OMG on our faces when Greg deployed it to g.d.o!

Great time, we love community

I really appreciate having had the opportunity to meet a lot of people from the community. Again, a lot of new nick-to-face connections for my head.

Finally I could meet: Sam Boyer (sdboyer), one of my mentors at Google Summer of Code 2009; Derek Wright (dww), one of the Drupal infrastructure members who, as he says, maintains the (necessary and awesome!) modules nobody wants to maintain; Earl Miles (merlinofchaos), the creator of Views and other modules you always need, who recently wrote an awesome CTools integration patch for vote up/down; Damien Tournoud (DamZ), one of the top D7 contributors and the maintainer of git.drupalfr.org; Randy Fay (rfay), a great core contributor who actually spent some time in Peru; Andrew Berry (deviantintegral); Jeremy Andrews (Jeremy), a great FLOSS contributor with whom I'm lucky to share co-maintenance of the Xapian integration module; Benjamin Doherty (bangpound); Jeff Robbins (jjeff); HedgeMegde; Simon Roberts (lyricnz); Joshua Rogers (JoshuaRogers).

I also met many people I had already met before:
Angie Byron (webchick), the awesome omnipresent D7 branch core co-maintainer; Moshe Weitzman (moshe), core contributor and the maintainer of modules like devel and og; Károly Négyesi (ChX), great core contributor and one of the people behind MongoDB integration for Drupal; Pratul Kalia (lut4rp), the co-maintainer of vote up/down; Antoine Beaupré (anarcat), one of the maintainers of aegir; Chach Sikes (chachasikes), to talk about Rosa María's (assoritam) interest in DrupalChix; Lisa Rex (lisarex); Charlie Gordon (cwgordon7); Dmitri Gaskin (dmitrig01); Jim Caruso (jimcaruso) and many more!

I also met two Drupal Peru people: Fernando (develCuy) and Nancy (joyita).

Thanks for the opportunity to be at DrupalCon again!

Apr 29 2010
Apr 29

Brice Lenfant

29 April 2010

DrutNet is a .NET API for creating client applications that connect to a Drupal site and allow file upload, node save/load, retrieving views, and more.

Introduction

I've created this simple API to connect Drupal with .NET applications easily and quickly. It is based on an API I wrote for one of our projects, where we had to create a few client applications to upload files, connect to a desktop application, and update content. I used two different interfaces to cover all my needs: cURL, to upload files; and the Services module, to create and update nodes.

Features Snapshot

The DrutNet API has a Services class:

  • Services.Login - Login to Drupal
  • Services.NodeGet - Load a node
  • Services.UserGet - Load a user
  • Services.NodeSave - Save a node

To upload files using the API use the CURL class:

  • Curl.Login - Login to Drupal with cURL, which is required in order to upload a file. This login is _not_ the same as the Services login
  • Curl.UploadFile - Upload a file to a CCK file/image field. The "File form" module provided in the ```/Drupal Module``` folder must be enabled on the Drupal site

Source

The source code is hosted on GitHub (we didn't publish it on drupal.org, as apparently it goes against some of the CVS guidelines).

Example

In the source there is a sample project that demonstrates using the API to load/save a node and to upload a file. To test the API, follow the instructions below (no programming or compiling is required when using the precompiled sample):

  1. Place both Drupal modules from the ```/Drupal Module``` folder in your Drupal installation (e.g. under ```sites/all/modules```): the DrutNet sample module, which creates a ```DrutNet sample``` content type for testing the system with DrutNETSample; and File form, which is required for file upload with cURL
  2. Download and enable the required modules ``` drush dl cck views features filefield services ```
  3. Compile the sample project, or use the already compiled program in ```/Dlls/DrutNETSample.exe```
  4. Enter your Drupal username and password and the Drupal site URL
  5. Click on "Login to services", and then click "Login to cURL".
  6. Update an existing node by entering its node ID, loading it, changing the text, and clicking "Save".
  7. To test file upload, switch to the Upload tab, choose a file, fill in the node ID and the CCK field name to attach the file to (in our example we use ```field_file```), and hit the Upload button

For feature requests or bug reports, use the GitHub project page.

Apr 28 2010
Apr 28

We recently received a report by "ZeroDayScan", about a "Full path disclosure bug in Drupal 6.16".

You can read the story @ http://blog.zerodayscan.com/2010/04/full-path-disclosure-bug-in-drupal-6.... As my short comment was removed from the post, I have to resort to a blogpost. My apologies for polluting the Planet.

Summary of the issue: If you set error reporting to the default value "Write errors to the log and to the screen", the installation path is displayed on the ...*drumroll*... screen.

Which is of course the point.

Calling the setting a "workaround", and the default a "bug" and a "vulnerability", is either idiocy or insincerity. Now that the comments have been removed, we know which: insincerity, and at the same time a great way to highlight the impotence of the ZeroDayScan scanner.

My last message to ZeroDayScan: if there's an SQL injection on a Drupal site, you can simply take over the site as uid 1 (root); there's no need to find out the full path via an obscure error message.

Apr 28 2010
Apr 28

I recently presented a session titled Case Studies in Academia: Drupal at ASU & Johns Hopkins Knowledge for Health at DrupalCon San Francisco 2010. The presentation went really well. It was great to meet with all the other universities that are using Drupal and talk about the wildly varying ways Drupal is being used in academia.

If you'd like to watch the video recording, it's available on Archive.org or on the DrupalCon presentation page. The slides for the presentation are attached below.

Apr 28 2010
Apr 28

Stay up to date, follow me on twitter or identi.ca!

Apr 28 2010
Apr 28

The apachesolr module

Download and uncompress the apachesolr module: http://drupal.org/project/apachesolr

Install JSON

$ sudo aptitude install php5-json

If it is not available in the repositories, it needs to be installed by hand:

$ sudo pecl install json 
$ sudo nano /etc/php5/conf.d/json.ini

Add the line "extension=json.so" (without the quotes).

The solr-php-client library

Get the required library from the following project: http://code.google.com/p/solr-php-client/. Go to the apachesolr module directory and run the following command:

$ svn checkout -r22 http://solr-php-client.googlecode.com/svn/trunk/ SolrPhpClient

Install Apache Solr

The installation can be done in two different ways.

One option is to use the bundled example application with the Drupal configuration; the other is based on Tomcat.

Download Solr 1.4 from:

http://www.apache.org/dyn/closer.cgi/lucene/solr/

Uncompress the file into a path that is not web-accessible and that is not the Drupal directory.

Solr configuration based on the Jetty example

Go to the apache-solr-1.4.0/example folder, which contains an example server suitable for demos, testing, development, and small production sites.

Go into the apache-solr-1.4.0/example/solr/conf/ folder and rename the files schema.xml and solrconfig.xml to schema.back and solrconfig.back.

Now copy the schema.xml and solrconfig.xml files from the apachesolr module into the apache-solr-1.4.0/example/solr/conf/ folder.

Go up one level, to apache-solr-1.4.0/example, and run the following command:

$ java -jar start.jar

Visit the following URL to check that it is running:

http://localhost:8983/solr/admin/

Solr configuration based on Tomcat

Tomcat can be installed from the repositories or downloaded from the web.

$ sudo aptitude install tomcat6 tomcat6-admin

Or download it from:

http://tomcat.apache.org/download-60.cgi#6.0.26

Once that is done, configure apachesolr:

As in the Jetty example, go into the apache-solr-1.4.0/example/solr/conf/ folder, rename schema.xml and solrconfig.xml to schema.back and solrconfig.back, and copy the schema.xml and solrconfig.xml files from the apachesolr module into that folder.

Now copy the solr folder from apache-solr-1.4.0/example/ into our tomcat6 folder:

$ sudo cp -r apache-solr-1.4.0/example/solr/ /nuestra_ruta/tomcat6/solr

And copy the application war:

$ sudo cp apache-solr-1.4.0/dist/apache-solr-1.4.0.war /nuestra_ruta/tomcat6/webapps/solr.war

Create the solr.xml file at /nuestra_ruta/tomcat6/conf/Catalina/localhost/solr.xml:

$ sudo nano /nuestra_ruta/tomcat6/conf/Catalina/localhost/solr.xml

And add the following:

Now go to the following URL:

http://localhost:8080/solr/admin/

Now we have to enable the "Apache Solr framework" and "Apache Solr search" modules. Check that Drupal can connect to Solr at ?q=admin/settings/apachesolr. Cron must run for content to be indexed; the indexing status can be monitored at ?q=admin/settings/apachesolr/index.

Depending on which server we set up, port 8983 (Jetty example) or 8080 (Tomcat) will be used.
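
To confirm that PHP can reach Solr through the SolrPhpClient library checked out earlier, here is a quick standalone check (a sketch of mine, not from the original post; use port 8983 for the Jetty example or 8080 for Tomcat):

<?php
// solr_ping.php: run from the apachesolr module folder, where the
// SolrPhpClient checkout from above lives.
require_once 'SolrPhpClient/Apache/Solr/Service.php';

$solr = new Apache_Solr_Service('localhost', 8983, '/solr/');
if ($solr->ping() !== false) {
  echo "Solr is responding\n";
}
else {
  echo "Solr is NOT responding\n";
}
?>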

Apr 28 2010
Apr 28

This post is a follow-up to the "Use Google Analytics Instead of the Statistics Module" post. If you want to use Google Analytics for all of your site statistics, you may need to track clicks on links that the google_analytics module can't handle. The google_analytics module is great and handles almost everything you may need, including clicks on external links. In some cases, however, it has no way to track an external click.

I was recently presented with the problem of tracking clicks on an "Add This" dropdown. The drop-down handles everything in javascript, so the "links" don't even have an anchor tag. Each one is made up of a div with an onClick event attached. Fortunately, we can add an event listener to the AddThis drop-down in jQuery, then add our share click with one line of code. Here is an example:

$(function() {
  var add_button = null;
  addthis.addEventListener('addthis.menu.share', shareEventHandler);
  $('.addthis_button').mouseover(function() {
    add_button = $(this);
  });
  function shareEventHandler(evt) {
    if (evt.type == 'addthis.menu.share') {
      pageTracker._trackPageview(add_button.attr('addthis:url') + '/share/' + evt.data.service);
    }
  }
});

In the above example, an event listener is being added on 'addthis.menu.share', which is when a user clicks on one of the share links. I've also added a mouse-over event to all buttons with the '.addthis_button' class, since we have more than one share button on a page. This is where I set the button the user is currently using.

Since AddThis requires a special attribute (addthis:url), I can simply grab that attribute in the event handler, and add it to the Google Analytics tracker. In this case, we created a fake share link for the node types being shared (for example, blog/22/share or blog/22/share/facebook). When someone tries to go to the link directly, it will redirect them back to the home page, so the share is only tracked when someone actually clicks on one of the links in the drop-down.
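
The original post doesn't show how that redirect is implemented; one way it could be wired up in Drupal 6 is with a small menu callback (a sketch of my own, using a hypothetical module named mymodule):

<?php
// Sketch only: send direct visits to the "fake" share paths back to the
// front page, so the paths exist for Analytics but never render anything.
function mymodule_menu() {
  $items['blog/%/share'] = array(
    'page callback' => 'mymodule_share_redirect',
    'access arguments' => array('access content'),
    'type' => MENU_CALLBACK,
  );
  return $items;
}

function mymodule_share_redirect() {
  // Any trailing service name (e.g. blog/22/share/facebook) is passed in as
  // an extra argument and is simply ignored here.
  drupal_goto('<front>');
}
?>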

The next step would be to use the techniques outlined in the last blog post to properly track these links in a useful format. In the case above, we've taken an external click and transformed it into an internal click. For example, a click to share the link blog/22 on Facebook would result in an internal click to the link blog/22/share/facebook. If we wanted to see how many people shared any blog on any site, we could set our filter to the following:

$filter = 'pagePath =~ /blog/[0-9]+/share';

and if we wanted to see how many people shared any blog post on Facebook, we could set our filter to the following:

$filter = 'pagePath =~ /blog/[0-9]+/share/facebook';

But what about clicks to actual external links? Let's say our site has user profiles, and users are able to enter a website URL to display on their profile. We can tell the google_analytics module to track clicks to external links, but how do we track these clicks using the techniques outlined in the last blog post? Well, the google_analytics module tracks these clicks as an "Event". So if we wanted to see how many times someone clicked on the external website link for a particular profile (using a content_profile node), we would write something like the following:

$website = db_result(db_query("SELECT field_website_value FROM {content_field_website} WHERE nid = %d", $nid));
// Always add an ending slash
if (substr($website, -1) !== '/') {
  $website .= '/';
}
$request = array(
  '#metrics' => array('uniqueEvents'),
  '#filter' => 'eventLabel == ' . $website,
  '#start_date' => $start_date,
  '#end_date' => $end_date,
  '#start_index' => 1,
  '#max_results' => 1,
);
try {
  $entries = google_analytics_api_report_data($request);
}
catch (Exception $e) {
  return $e->getMessage();
}
if (!empty($entries)) {
  foreach ($entries as $entry) {
    $metrics = $entry->getMetrics();
    $stats['more info'] = $metrics['uniqueEvents'];
  }
}

The code above will get the total number of events involving the external website. Since the google_analytics module only tracks click events for these links, there's no reason to further narrow the events to look at click events. If you wanted to track all external click events, however, you would set your filter to the following:

$filter = 'eventCategory==Outgoing links && eventAction==Click';
