Jan 23 2015

Google Summer of Code 2015 is approaching, and a few people have started asking me how to get selected for GSoC 2015 and where to start. So I thought I'd go ahead and write a blog post so that others can also benefit. This post targets students who have never participated in GSoC before and want to know how to get started with the application process and open source in general.

Google Summer of Code 2015 logo

What is Google Summer of Code? How does it work?

The GSoC FAQ page should suffice to answer most of your queries, and I strongly suggest going through it before looking anywhere else for answers.

Google Summer of Code is a program that offers student developers stipends to write code for various open source projects. We work with many open source, free software, and technology-related groups to identify and fund projects over a three month period. Since its inception in 2005, the program has brought together over 8,500 successful student participants and over 8,000 mentors from 109 countries worldwide to produce over 55 million lines of code.

So, basically this is how it works:

  • Different orgs (open source organizations) submit their applications to be part of the program and Google chooses about 190 of those based on their application and past record.
  • Once the orgs are selected, the list will be available on Melange. Each org will have an ideas list and a homepage.
  • You need to choose one of the ideas from the list on the ideas page and submit your proposal. (Details on this below)
  • Then you wait for Google to announce the list of selected proposals. If you find your proposal there, then the hardest part is over and now you code with your org for about three months and complete the proposed project.
  • If everything goes smoothly, you'll get a handsome paycheck for your contribution, and you'll have learned a lot about your project, your org, and open source.

There are so many orgs, which one do I choose?

This is probably the single most asked question every year around this time. The answer is pretty straightforward: if you're already involved with an open source organization and want to continue working with it, go for that one. If not (which might be the case for most of you reading this post), you need to choose a few orgs from the list of all accepted orgs. Although you will ultimately work with only one org, it might be a nice idea to select 1-3 orgs to which you may submit your proposals. You can shortlist the orgs based on tags; for example, if you're familiar with C++, you can filter the orgs which have the C++ tag on Melange.

If this year's org list is not out yet, you can look at the orgs which participated in GSoC in previous years. For instance, you can take a look at the lists of orgs which took part in 2014 and 2013. Filter the orgs based on the tags you're either familiar with or want to work on. Orgs which participated in previous years and took in more than a couple of students are more likely to get accepted again this year. Based on this and your favorite tags, you can narrow the list down to 1-3 orgs.

After this, the next task is to go through the ideas lists for those orgs and decide which ideas interest you most. If you don't fully understand an idea, that's completely fine; the next step will be to get your doubts cleared up by contacting the org and/or the task mentor (more on this in the next section).

Okay, I've decided on an org and a project idea. What do I do next?

Once you've decided which project idea interests you most, if some parts of the description are unclear or you want to clarify a few details, you should get in touch with the task mentor and the organization in general. All the orgs have a contact section on Melange which will tell you how to contact them. Most orgs prefer communication via IRC or mailing lists, so you can get in touch through those channels. You can also ping the task mentor on IRC or email them to clarify any doubts you might have regarding the project.

Although it's not compulsory, it's usually a good idea to contribute to the org before sending your proposal. In order to do that, you can ask questions like "Hey, I'm new here, can anyone help me get started on how to contribute?" either on IRC or the mailing lists. Since orgs get asked such questions very frequently, many of them have a 'Getting Started' page, and it will be very helpful if you find that page and follow the instructions. If you have any doubts, don't hesitate to ask. Mentors are generally nice people and will help you through.

How to start contributing

Contributing to an org means helping to fix bugs (issues), writing documentation, testing, and so on. All orgs use an issue tracker to keep track of their issues/bugs, and most of them have a novice/beginner/quick-fix tag which lists tasks that are easy for beginners to fix. You can get more info on that by contacting the org. Contributing to open source is fun, and if you're not having fun, you're doing it wrong.

Writing a good proposal

Once you've finalized the project idea and have started contributing to the org, the next and most important step is to write a proposal. Many orgs have an application template of sorts, and if your org has one, you need to follow it. Otherwise, you can start by specifying your personal information and then move on to the project description. Following are a few tips for writing your project proposal:

  • Include a detailed timeline based on how you intend to complete the project.
  • Make sure to list any bugs you've worked on and/or links to your contributions.
  • Double, actually triple check for spelling mistakes.
  • Don't forget to mention your contact info.
  • Last but not the least, don't forget to update Melange with your latest proposal.

Once your proposal is ready, you can ask the task mentor (and/or the org admin) to review it before you finally submit it to Melange. Ask them whether any parts of it could be explained better, and follow up on their feedback. The most important part is really understanding the project idea and reflecting that in your proposal.

Some Do's and Don'ts

Following are some miscellaneous tips for communicating with your org more effectively:

  1. Don't ask to ask: Don't hesitate to ask questions, but avoid openers like "Hello! I ran into an issue, can anyone help me?" You're much more likely to get a helpful answer by asking your real question directly instead of asking for permission to ask it.

  2. Be patient and don't spam: Once you've asked your question, wait for some time for someone to answer it. It's not a good idea to spam the channel again and again with the same question at short intervals.

  3. Mentors are humans (and volunteers): After emailing a mentor, wait at least 48 hours for them to reply. You need to understand that they are humans, and most of them contribute in their volunteer time.

  4. Use proper English: It's really not a good idea to use SMS language while communicating on IRC or mailing lists. Also, note that excessive use of question marks is frowned upon. And although you need to be respectful, addressing mentors as Sir/Ma'am is not such a great idea.

Final words

If you follow the steps mentioned above sincerely, you'll have a great chance of getting selected for GSoC this year. If you have any doubts, feel free to ask them in the comments below.

PS: A little background about me

I was a Google Summer of Code student with Drupal in 2014 and org admin for Drupal in Google Code-In 2014.

Jan 23 2015

DrupalCamp Delhi

While we know there are over 33,000 Drupal developers around the globe, I had no idea how strong Drupal was in India until I was there with Rachel Friesen, scouting locations for a possible DrupalCon Asia. By meeting with the community at camps, meetups, and dinners, we saw first hand how strongly India is innovating with Drupal and contributing back to the Project.

When it comes to geographic referrals, India is second in driving traffic to Drupal.org. They aren't second in contributions yet, but things are changing. I was especially impressed with the relationship between Tata Consultancy Services (TCS) and Pfizer, a $51.5B life sciences company. Pfizer allows TCS to contribute their code, which is often not allowed for legal reasons. Since contributing back is one of Pfizer's top values, they asked TCS to make contribution part of their culture - and they did. At TCS, Rachit Gupta has created contribution programs that teach staff how to contribute and give them time during work hours each week to contribute code. With a staff of several hundred developers, this can make TCS a mighty contribution engine for the Project.

I’m equally impressed by other Indian web development consulting agencies that I met like Axelerant, Blisstering Solutions, Kellton Tech, and Srijan, who also have a contribution culture in their organizations. They even set up KPIs around staff contributions to make sure they are keeping this initiative top of mind.

While India celebrates Republic Day on January 26, it's a time to celebrate its growth as a nation, and, in its own way, Drupal has a hand in the country's prosperity. Shine.com, a Drupal job search site, shows there are over 15,000 Drupal jobs in India. All of the companies I talked to are growing their teams to meet that demand. Imagine if this contribution culture were fully embraced by Indian web development companies: the impact on the Project would be significant.

Individuals are also stepping up to support the Project and there is a passion for contribution that is spreading. I keynoted DrupalCamp Delhi, where over 1,000 people registered and 575 people attended. I saw first hand how dedicated the organizers were to make the event informative and fun. Several sprint mentors were on hand to lead more than 75 people through a full day sprint. Plus, the following weekend was Global Sprint Weekend and sprints popped up all over India in Bangalore, Chennai, Delhi, Goa, Hyderabad and Pune.

Not only are Drupalers in India helping the Project, but they are also using Drupal to create change in India with leapfrog solutions that give Indians access to more digital services. For example, many villages don’t have access to products found in major cities due to lack of infrastructure. The village stores simply can’t scale to buy and hold large quantities of inventory.

Iksula, an Indian eRetail consulting agency,  created a headless Drupal solution for Big Bazaar, India’s largest hypermarket, which provides lightweight tablets for store owners throughout India. Using those tablets, villagers can go into their local store and buy their goods online. The products are delivered to the shop owner, who hand delivers products to the consumer, giving people easier access to goods that can improve their quality of life.

As another example, we can look at IIT Bombay, India's top engineering university, which uses Drupal at the departmental level. Professors P Sunthar and Kannan are taking Drupal to the masses by creating a MOOC in conjunction with MIT's edX platform. The work is funded by a government initiative called FOSSEE (Free and Open Source Software for Education), and through it, Indian university students can watch videos on several open source technologies, including Drupal.

The initiative bridges learning divides by providing the training in several languages found throughout India and provides low cost tablets for students who do not have a personal computer. This well thought-out program can help students learn the tools faster to meet the needs of future employers.

India has clearly embraced Drupal. They are making innovative solutions with the software and they are learning to contribute that back to the Project. It's for these reasons that we want to host DrupalCon Asia. It will be a chance to highlight India's Drupal talent and accelerate their adoption of a contribution culture.

A huge thank you to Chakrapani R, Hussain Abbas, Rahul Dewal, Jacob Singh, Mayank Chadha, Parth Gohil, Ankur Gupta, Piyush Poddar, Karanjit Singh, Mahesh Bukka, Vishal Singhal, Ani Gupta, Rachit Gupta, Sunit Gala, Professor P Sunthar and all the other community members who helped organize our trip to India. I’m personally moved and professionally inspired by all that you do.

Image credit to DrupalCamp Delhi

Jan 23 2015

So, here at Lucius HQ we are planning on building a RESTful API (web services) on top of our Drupal distribution OpenLucius.

We want to do this so that all third-party programmers, and thus third-party applications, can integrate with OpenLucius, not only Drupal developers and Drupal modules.

For example: integrating time tracking with Toggl, invoicing with FreshBooks, or connecting with issue trackers like Jira, Asana or Basecamp. And there are a lot more apps out there with huge potential you can tap into.

So, here's a brief intro to web services in Drupal:

What is a web service API

W3C defines a web service as follows:

‘A Web service is a software system designed to support interoperable machine-to-machine interaction over a network’.

In other words: a web service is a documented and defined way for two computers to communicate with each other over the internet. A computer can be anything connected to the internet: even a PlayStation, a smart watch or a thermostat. Think of 'the internet of things'.

Application Programming Interface

API means: Application Programming Interface. APIs define how software, and thus computers, can communicate with each other. We are constantly using APIs without noticing it. For example, when you attach your laptop to an external monitor, the two systems 'communicate' with each other through an API, without any human intervention.

Standardized and documented

An API can also be seen as a standardized and documented way to get access to content and functionality of an application. For example, as an independent developer you can get access to data from Facebook using the Facebook API. An example of Facebook’s ‘Graph API explorer’ that everyone can use:

API documentation

API Documentation is very important, otherwise nobody will know how to use the API to obtain the correct information. An API is worthless without proper documentation.

How does it basically work

An external application makes a request for data through a Drupal web service API. Drupal passes the data back in an appropriate structured way (e.g. JSON), so that the external application can use it. The external program can also create users, create nodes, reset passwords, etc.
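
As a rough illustration, the sketch below shows an external PHP application requesting a node this way. It is a sketch under stated assumptions: the /api endpoint path and the response fields are made up, and both depend on how the web service module on the Drupal site is configured.

<?php
// Minimal sketch of an external application requesting a node from a
// hypothetical Drupal web service endpoint at /api.
$response = file_get_contents('http://example.com/api/node/1.json');
if ($response === FALSE) {
  die('Request failed.');
}
// Drupal passes the data back as structured JSON.
$node = json_decode($response, TRUE);
echo $node['title'];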

Why web services

In most cases web services are used to provide mobile applications with data. Take, for example, the nu.nl app: its news items have to be managed somewhere. The same news items are posted on the website, but also in Android apps, and in the future maybe on smart TVs, smart watches and anything else yet to be invented.

Future proof

Web services are future-oriented: whatever comes after iOS or Android, the new application platform will also be able to retrieve and modify data via the desired web service API.

In other words: the internet of things can be centrally provided with content, users, etc.

Web services in Drupal

There are several modules in Drupal that can facilitate web services, the most famous are Restws and Services.

These two modules ensure that data and internal functions are openly served to other applications through a Drupal web services API. An external application can communicate with these modules and receive structured data that it can use. Examples of external applications: an iOS or Android app, but also a PlayStation, smart TV, smart watch or even a thermostat. In other words, all the things in the internet of things.

Drupal Module: Restws

Restws handles RESTful web services well, including the necessary CRUD actions for all Drupal entities, but it offers no additional web service protocols like SOAP, XML-RPC, etc. It is also not possible to define and configure 'service endpoints'.

Drupal Module: Services

The Services module can do everything that Restws can do, and more. It is a complete toolkit for providing Drupal with web services. It knows Drupal's node, entity and CRUD systems and provides the ability to create and configure service endpoints yourself. The module also supports multiple interfaces like REST, XML-RPC, JSON, JSON-RPC, SOAP, AMF and more.

It also provides a number of standard features, allowing you to quickly get standard web services up and running, for example requesting node content details. This can be done within 10 minutes. Specific use cases obviously require more effort, but even for custom needs the Services module facilitates a large part of the required functions, such as creating users, creating nodes, resetting passwords, etc.
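
From the consumer's side, creating a node through such an endpoint could look roughly like the sketch below. Again a sketch under stated assumptions: the /api endpoint path is made up, and authentication (session cookie, CSRF token or OAuth), which Services requires for write operations, is omitted for brevity.

<?php
// Hedged sketch: creating a node through a hypothetical Services REST
// endpoint at /api with PHP cURL. Authentication is omitted but is
// required in practice for write operations like this one.
$data = json_encode(array(
  'type' => 'article',
  'title' => 'Created through the web service API',
));
$ch = curl_init('http://example.com/api/node.json');
curl_setopt_array($ch, array(
  CURLOPT_POST => TRUE,
  CURLOPT_POSTFIELDS => $data,
  CURLOPT_HTTPHEADER => array('Content-Type: application/json'),
  CURLOPT_RETURNTRANSFER => TRUE,
));
$result = curl_exec($ch);
curl_close($ch);
print_r(json_decode($result, TRUE));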

Drupal Module: Views data sources

This is a module that lets you create endpoints through Views and serve data through those endpoints, all without having to code a line. The relevant data can be configured in the View. Note that this is still an alpha version; it can be handy for standard lists, for example the last 10 news items.

More complex use cases

But when the query becomes more complex, this module does not work satisfactorily yet. You will then have to create a custom endpoint in code and write your own queries. These custom endpoints do let you hook into the Services module, which provides many of the required functions, so there is no need to code a Drupal web service from scratch.
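
To give an idea of what such a custom endpoint involves: Services lets a module expose its own resource via hook_services_resources(). The sketch below follows the Services 3.x pattern, but the module name, callback and query are made up, and the exact definition keys can differ between Services versions, so check the module's own API documentation.

<?php
/**
 * Implements hook_services_resources().
 *
 * Exposes a custom 'recent_news' resource so the complex query lives in
 * our own code, while Services handles the endpoint, the response
 * formats and the access checking.
 */
function mymodule_services_resources() {
  return array(
    'recent_news' => array(
      'index' => array(
        'help' => 'Returns the latest news nodes.',
        'callback' => '_mymodule_recent_news_index',
        'access callback' => 'user_access',
        'access arguments' => array('access content'),
      ),
    ),
  );
}

/**
 * Callback: runs the custom query and returns a structure that Services
 * serializes (e.g. to JSON) for the client.
 */
function _mymodule_recent_news_index() {
  $result = db_query_range(
    'SELECT nid, title, created FROM {node} WHERE type = :type AND status = 1 ORDER BY created DESC',
    0, 10, array(':type' => 'news')
  );
  return $result->fetchAll();
}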

Web services in Drupal 8

Drupal 8 incorporates web services in the Drupal core, so modules will not be needed anymore!

Wrap up

Ok, that's it for now. Since we are currently working enthusiastically on a major Drupal web services project, more blog posts will follow with specific use cases.

-- Cheers!

Sources

Source header image

This video is a very good resource; thanks, Mkorostoff.

Oh yeah, and don't forget to check out other YouTube videos on Drupal web services.

Jan 23 2015

Start: 2015-03-28 09:00 - 17:00 America/Chicago
Organizers: http://www.drupalcampnola.com

Drupalcamp New Orleans

Join us for the second annual Drupalcamp New Orleans on Saturday, March 28, 2015. Visit www.drupalcampnola.com for more information, to register and to submit a session.

Drupalcamp New Orleans
Saturday, March 28, 2015 - 9 am - 5 pm
Launch Pad
643 Magazine St
New Orleans, LA
www.drupalcampnola.com

Jan 23 2015

In a recent blog post, Drupal 8 co-maintainer Alex Pott highlighted a seismic shift in Drupal that's mostly slipped under the radar. In Drupal 8, he wrote, "sites own their configuration, not modules".

To see why this change is so far-reaching, it's useful to back up a bit and look at where exportable configuration comes from and what's changed.

In Drupal 7, a lot of site configuration (views, rules, and so on) can be exported into files. There are two main use cases for exportable configuration:

  • To share configuration among multiple sites.
  • To move configuration between multiple versions of a single site.

By and large, the two use cases serve different types of users. Sharing configuration among multiple sites is of greatest benefit to smaller, lower resourced groups, who are happy to get the benefits of expertly developed configuration improvements, whether through individual modules or through Drupal distributions. Moving configuration between different instances of the same site fits the workflow of larger and enterprise users, where configuration changes are carefully planned, managed, and staged.

In Drupal 7, both use cases are supported. An exported view, for example, can be shared between multiple sites or between instances of the same site. The Views module will treat it identically in either case.

If a site admin chooses to customize exported configuration in Drupal 7, the customized version is saved into the site database and overrides the module-provided version. Otherwise, though, the site is on a configuration upgrade path. When the site is upgraded to a new release of the module that provided the configuration, it receives any changes that the module author has made--for example, refinements to a view. At any time, a site admin can choose to toss out changes they've made and get the module-provided view--either the one they originally overrode or a new, updated version.

If anything, the multiple site use case was a driving force behind the development and management of configuration exports. The Features module and associated projects - Strongarm, Context, and so on - developed configuration exporting solutions specifically for supporting distributions, in which configuration would be shared and updated among tens or hundreds or thousands of sites. Yes, Features could be and is used for staging changes between instances of a single site; but the first and foremost use case was sharing configuration across sites.

For Drupal 8, however, the entire approach to configuration was rewritten with one use case primarily in mind: staging and deployment. The configuration system "allows you to deploy a configuration from one environment to another, provided they are the same site."

In Drupal 8, module-provided configuration is imported once and once only--when the module is installed. The assumption is that, from that point onward, the configuration is "owned" by the site. Updated configuration in modules that have already been installed is, by design, ignored. Importing those updates, as Pott notes, might lead to "a completely new, never-seen-before (on that site) state." As he puts it, "Fortunately, Drupal 8 does not work this way."
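
To make the ownership model concrete, here is a hedged sketch using the Drupal 8 configuration API; 'mymodule.settings' is a hypothetical config object. Once the module is installed, the site edits its own active copy, and the module's shipped file is never consulted again unless you read it yourself.

<?php
// The active configuration is owned by the site: editing it touches only
// the site's config storage, never the module's config/install file that
// seeded it at install time.
$config = \Drupal::configFactory()->getEditable('mymodule.settings');
$config->set('items_per_page', 25)->save();

// The module's shipped default is imported once, on install. To compare
// against it later, you have to read the file storage yourself.
$install_storage = new \Drupal\Core\Config\FileStorage(
  drupal_get_path('module', 'mymodule') . '/config/install'
);
$shipped = $install_storage->read('mymodule.settings');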

It's indeed a fortunate outcome if you're building an enterprise site and place a premium on locking down and controlling every detail of configuration.

But for most current Drupal sites and for distributions? The benefits are not so clear cut.

On the plus side, much of what previously was unexportable in Drupal core (content types, fields, variables, and so on) is now supported natively. No more heavy-handed workarounds in the Features module for so-called "faux exportables"--components like user roles, content types, and fields that Drupal 7 core stores only in the database.

But, with Drupal core firmly on the "single site" configuration management side, users wanting to benefit from module-provided configuration updates and developers of distributions may be left fighting core every step of the way.

It's hard not to conclude that Drupal 8 ties configuration management to a (primarily, enterprise-focused) single-site staging model, and in the process, neatly undermines the use cases that largely brought us exported configuration in the first place.

That said, there are emerging initiatives including Configuration Revert that may help. More on those in future posts.

Jan 22 2015

A couple of weeks ago I hacked together a quick proof of concept of editing the same template for use on the client side and the server side with Drupal 8. It looked like this:

Sunday hack. Make #headlessdrupal use #twig for client side templates http://t.co/OQiVya0cu8 #drupal #drupaltwig.

— eiriksm (@orkj) January 4, 2015

If you click the link you can see an animated gif of how I edit the Bartik node template and it reflects in a simple single page app. Or one of these hip headless Drupal things, if you want.

So I thought I should do a quick write-up on what it took to make it work, what disadvantages come with it, what does not actually work, and so on. But then I thought to myself: why not make a theme that incorporates my thoughts from my last post, "Headless Drupal with head fallback"? So I ended up making a proof of concept that is also a live demo of a working Drupal 8 theme, with the first page request rendered on the server and the subsequent requests rendered fully client side. Both use the same node template for full views and for the node listing on the front page. So if you are eager and want to see that, this is the link.

Next, let's take a look at the inner workings:

Part 1: Twig js

Before I even started this, I had heard of twig.js. So my first thought was to just throw the Drupal templates at it and see what happened.

Well, some small problems happened.

The first problem was that some of the filters and tags we have in Drupal are not supported out of the box by twig.js. Some of these are probably Drupal specific, and some are extensions that are not supported out of the box. One example is the tag {% trans %} for translating text. But in general, this was not a big problem. Except that I did as I usually do when doing a POC: I just quickly threw together something that worked, resulting, for example, in the trans tag just returning the original string. Which obviously is not the intended use for it. But at least now the templates could be rendered. Part one, complete.

Part 2: Enter REST

Next I needed to make sure I could request a node through the REST module, pass it to twig.js and render the same result as Drupal does server side. This turned out to be the point where I ended up with the worst hacks. You see, ideally I would just have a JSON structure that represents the node and pass it to twig.js. But there are a couple of obvious problems with that.

Consider this code (the following examples are taken from the Bartik theme):

<a href="{{ url }}">{{ label }}</a>

This is unproblematic. If we have a node.url property and a node.label property on the object we send to twig.js, this would just work out of the box. Neither of these properties is available like that in the default REST response for a node, however, but a couple of assignments later, that problem went away as well.

Now, consider this:

{{ content|without('comment', 'links') }}

Let's start with the filter, "without". Well, at least that should be easy. We just need a filter that will make sure the comment and links properties on the node.content object are not printed here. No problem.

Now to the problem. The content variable here should include all the rendered fields of the node. As was the case with label and url, .content is not actually a property in the REST response either. This makes the default output from the REST module not so usable to us, because to make it generic we would also have to know which fields to compose together into this .content property, and how to render them. So what then?

I'll just write a module, I thought. As I often do. Make it return more or less the render array, which I can pass directly to twig.js. So I started looking into what this looks like now in Drupal 8, and at how I could tweak the render array to contain more or less the minimum of data I needed to render the node. I saw that I needed to recurse through the render array 0, 1 or 2 levels deep, depending on the properties. I would get, for example, node.content with markup in all its children, but also node.label without children, just the actual title of the node. Which again made me start to hardcode things I did not want in the response, just as I had started hardcoding things I wanted from the REST response.

So I gave up the module. After all, this is just a hacked-together POC, so I'll be frank about that part. And I went back to hardcoding it client side instead. Not really the most flexible solution, but at least: part two, complete.
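
For the curious: below is a minimal sketch of the kind of controller I had in mind before giving up. All the names are hypothetical, the routing file is omitted, and the Drupal 8 APIs are still shifting between betas, so treat it as an illustration of the idea rather than working code from the demo.

<?php

namespace Drupal\render_json\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\node\NodeInterface;
use Symfony\Component\HttpFoundation\JsonResponse;

/**
 * Hypothetical controller returning the pieces the node template needs.
 */
class NodeJsonController extends ControllerBase {

  /**
   * Builds a JSON version of a node for client-side twig.js rendering.
   */
  public function view(NodeInterface $node) {
    $build = $this->entityManager()->getViewBuilder('node')->view($node, 'full');
    return new JsonResponse(array(
      'label' => $node->label(),
      'url' => $node->url(),
      // Render everything to markup; a real implementation would recurse
      // selectively through the render array, as described above.
      'content' => (string) \Drupal::service('renderer')->render($build),
    ));
  }

}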

Part 3: Putting the pieces together

Now, this was the easy part. I had a template function that could accept data. I had transformed the REST response into the pieces I needed for the template. The rest was just adding a couple of AJAX calls and some pushState for the history (which reminds me: this probably does not work in all browsers). And then bundling things together with some well-known front-end tools. Of course, this is all in the repo if you want all the details.

Conclusions

Twig on the server and on the client. Enough said, right? 

Well, in its current form this demo is not something you would just start to use. But hopefully it gives you some ideas. Or inspiration. Or maybe it will inspire (and inform) me about the smartest way to return a "half-rendered render array".

Also, I would love to get some discussion going regarding how to use this approach in the most maintainable way.

Some thoughts on how I would improve this if I would actually use it:

  • Request templates via ajax.
  • Improve escaping.
  • Incorporate it into a framework (right now it is just vanilla JS).
  • Remove hacks, actually implement all the filters.

Finally: the code is up on GitHub. There is a demo on a test site on Pantheon. And huge props go out to both the Twig and twig.js authors. Just another day standing on the shoulders of giants.

I'm going to end this blog post with a classy gif from back in the day. And although it does not apply in the same way these gifs were traditionally used, I think we can say that nothing said in this blog post is set in stone, with regard to either construction or architectural planning.

Jan 22 2015

Berkeley approached us to not only build a website for an exciting new project but to also develop its brand identity from scratch.

The project was the Berkeley Institute for Data Science (BIDS), a new initiative to provide a common collaborative space for research fellows, faculty and anyone at Berkeley working with data science in some way.

The White House hosted an event to announce the initiative, which is funded by a $37.8 million grant from the Gordon and Betty Moore Foundation and the Alfred P. Sloan Foundation. Berkeley is one of three institutions to receive this funding, in addition to New York University and the University of Washington.

Now that the site is live, I’d like to share some of the processes I used to develop the identity and site design.

BIDS Final Logo

BIDS Homepage

Mood Boards

I started with some exploration. What is big data? What does data science look like? How far can we push a visual motif that implies big data without being too literal or narrow in focus?

With some smart keyword guessing I hunted down images from Designspiration and Google Image searches, and used Niice to collect a moodboard of related reference images. This got me familiar with some data visualization techniques and started turning the gears in my brain.

BIDS Niice Moodboard

Time to sketch! I like to limit my creative output to paper early in the process to explore more ideas quickly. I don’t expect much from the sketches at this point; I tried a lot of different things and hunted for visual clues that could lead me in new directions.

BIDS Sketches

Getting Digital

Sketching yielded a few interesting ideas; I was ready to digitize some of them and start to explore typography in Illustrator. This included some pretty rough options (shown below) that I never presented to the client, and a handful of acceptable options that I refined into our first deliverable.

BIDS Design Exploration 1

BIDS Design Exploration 2

BIDS Design Exploration 3

Presentation

I boiled these rough explorations down to three distinct design directions. I had a good feeling about using a dot-and-line motif in some form. Here are a few of the highlights from the first round:

BIDS Logo Round 1

Refinement

From here we were able to narrow to two possible directions for refinement:

BIDS Round 2

We also needed to account for an acronym version of the logo to be used in some situations:

BIDS Acronym Logos

The client selected the second option with concentric, curved lines. For the final logo we moved the lower dot to the bottom-right to function as visual punctuation. We were already doing this in the acronym version and it kept things a little cleaner:

BIDS Logo Final

Honoring the Brand

I knew we would need to comply with Berkeley's brand guidelines, which are conveniently available on this public website. However, the guidelines provided a lot of flexibility in both color and typography, which gave me the opportunity to explore.

I used Freight Sans Pro for both the logo and throughout the entire site design. Some of our early concepts included Freight Text Pro (a serifed typeface), but we opted for the more contemporary feel of all sans-serif typography.

BIDS Homepage

The core color palette was lifted directly from the brand guidelines, with some minor modifications made for hover states or secondary UI elements.

Fortunately, the brand guidelines are well designed and already very functional for use on the web! This made it easy to design a site that both matches the Berkeley identity and stands on its own.

BIDS About Page

BIDS Project Page

Jan 22 2015

I was hired by the Drupal Association in October 2014 to develop a new revenue stream from advertising on Drupal.org. For some time we’ve been trying to diversify revenue streams away from DrupalCon, both to make the Association more sustainable and to ensure that DrupalCons can serve community needs, not just our funding needs. We’ve introduced the Drupal Jobs program already and now, after conversations with the community, we want to put more work into Drupal.org advertising initiatives.

This new revenue stream will help fund various Drupal.org initiatives and improvements including better account creation and login, organization and user profile improvements, a responsive redesign of Drupal.org, issue workflow and Git improvements, making Drupal.org search usable, improving tools to find and select projects, and the Groups migration to Drupal 7.

We spent time interviewing members of the Drupal Association board, representatives of the Drupal Community, Working Groups, Supporting Partners, and Drupal Businesses, both large and small, to help develop our strategy and guidelines. Our biggest takeaways are:

  • Advertising should not only appeal to advertisers, but also be helpful to our users and/or our mission.
  • When possible, only monetize users who are logged out and not contributing to the Project. If you’re on Drupal.org to do work and contribute, we don’t want you to see ads.
  • Don’t clutter the site, interfere with navigation or disrupt visitors, especially contributors.
  • Do not put ads on pages where users are coming to work, like the issue queue.
  • Advertising products should be inclusive, with low cost options and tiered pricing. We want to make sure that small businesses without huge marketing budgets have the opportunity to get in front of the Drupal Community.
  • Create high impact opportunities for Partners that already support the Community.
  • Address the industry-wide shift to Programmatic Advertising, which is the automated buying and selling of digital advertising.

There are already advertising banners on Drupal.org; however, we need to expand their reach to hit our goals. We're trying to address challenges for our current advertisers, including a relatively low number of views on pages with ads, which makes it difficult for them to reach their goals.

We’re also facing industry-wide challenges in Digital Advertising. Advertisers are looking for larger, more intrusive ads that get the users’ attention, or at the very least use standard Interactive Advertising Bureau (IAB) ad sizes, which are larger than the ads we offer on Drupal.org.

We came up with a new line of products that we feel will help us reach our goals, but not disrupt the Drupal.org experience, or the Drupal Association Engineering Team roadmap. We want our Engineering Team to fix search on Drupal.org, not spend time developing and supporting major advertising platforms.

2015 Advertising Initiatives:

  • The ongoing development of curated content with banner ads, including resource guides, content by industry and, in the future, blog posts.
  • Continued display of banner ads on high profile pages like the Homepage, Marketplace and Case Studies Section.
  • Sponsored listings from Supporting Technology Partners (similar to Hosting Listings).
  • Opt-in email subscriptions with special offers from our Supporters.
  • Audience Extension: a secure, anonymous, non-interruptive way to advertise to Drupal.org visitors. It allows advertisers to programmatically reach the Drupal.org audience while on other websites through Ad Networks and Exchanges.

I wanted to spend most of my time explaining Audience Extension, since it's unlike anything we've done in the past, and it may prompt questions. This product makes sense because it addresses all of the challenges we're facing:

  • It’s affordable for small businesses; they can spend as little as $200 on a campaign
  • We don’t need to flood the site with ads and disrupt the user experience.
  • It’s relatively easy to implement - we won’t interrupt the engineering team or their efforts to improve Drupal.org.
  • We will only target anonymous (logged out) users.
  • We will support “Do Not Track” browser requests.
  • This is an industry-wide standard that we’re adopting.
  • Anonymous users will have the option to opt-out.
  • This improves the ad experience on other sites with more relevant, useful ads that also support the community.

How does Audience Extension Work?

We’re partnering with Perfect Audience, a company that specializes in retargeting, and offers a unique audience extension solution called Partner Connect.  We add a Perfect Audience JavaScript tag to the Drupal.org source code. This tag will be loaded on the page to logged out users. The tag places a Perfect Audience cookie in the visitor's browser that indicates that they recently visited Drupal.org. Once that cookie is in place, an advertiser looking to reach out to the Drupal.org community can advertise to those visitors on Facebook, Google's ad network, and many other sites that participate in major online ad networks. Advertisers create and manage these campaigns through their Perfect Audience accounts. They pay for the ads through Perfect Audience and we split the revenue with Perfect Audience and the ad networks that serve the ads.

  • The program is anonymous. No personally identifiable information (such as email address, name or date of birth) is gathered or stored.
  • No data is sold or exchanged; this merely gives advertisers the opportunity to buy a banner ad impression within the Perfect Audience platform.
  • It's easy to opt-out. You can just click over to the Perfect Audience privacy page and click two buttons to opt out of the tracking. Here's the link.
  • Drupal.org will support “Do Not Track” browser requests and only users who have not logged in (anonymous) will be included in the program.
  • It does not conflict with EU privacy rulings. Advertiser campaigns for Partner Connect can only be geotargeted to the United States and Canada right now.
  • Only high quality, relevant advertisers who have been vetted by an actual human will be able to participate in this program. Some good examples of Perfect Audience advertisers would be companies like New Relic and Heroku.
  • Perfect Audience is actually run by a Drupaler! The first business started by founder Brad Flora back in 2008 was built on Drupal. He spent countless hours in the IRC channel talking Drupal and posting in the forums. He understands how important it is to keep sensitive pages on Drupal.org an ad-free experience and he’s very excited to be able to help make that happen.
  • This program has the potential to generate significant revenue for the Drupal Association and Project over time as more advertisers come on board.


It’s important that we fund Drupal.org improvements, and that we do so in a responsible way that respects the community. We anticipate rolling out these new products throughout the year, starting with Audience Extension on February 5th.  Thanks for taking the time to read about our initiatives, and please tell us your thoughts!

Roy
Jan 22 2015

On January 31 and February 1 the 15th edition of the FOSDEM event will be held in Brussels, Belgium. FOSDEM (Free and Open Source Software Developers’ European Meeting) is the largest gathering of open source community members in Europe. More than 5000 people will come from all parts of the world to meet, share ideas and collaborate.

As the name says, the event is highly developer centric, so the main focus has always been on the technology and the code. But open source software has graphical user interfaces too! Buttons to click, sliders to drag, forms to fill out, boxes to check, screens to swipe and what have you.

Useful software attracts users. To keep them around and attract more users, the useful has to be made usable. Which means uncovering and prioritising user goals and needs and doing the work to find out how to best serve those. That’s where design comes in.

This year FOSDEM will have its first ever "devroom" dedicated to the topic of open source design. User experience architects, interaction designers, information architects, usability specialists and designer/coder unicorns will share experiences and discuss the good and bad of design in open source environments.

Open source software is a driving force behind all things online. As more aspects of business, culture, society, humanity as a whole move into the digital domain, it becomes all the more important to ensure that people don't get left behind because of the sheer complexity of it all. There's a lot that the craft of design can contribute to ensuring this.

I’ll deliver a short talk about how we started, grew and maintain a user experience design team within the Drupal project. Otherwise, the schedule is looking great. I’m looking forward to meet my open source designer colleagues.

See you there?

Jan 22 2015

A somewhat common request for projects, especially intranets, is to provide a single sign-on (SSO) system. At a rudimentary level, an SSO system allows one site to handle all logins (authentication) for a group of sites, rather than a visitor needing a separate login for each one. For an organization with several sites it can greatly reduce the headache for its clients, customers, employees, etc., increase visitor satisfaction, reduce maintenance costs, and potentially increase sales.

Another common use of an SSO system is to transparently log in visitors who are also logged into a local network where a directory service is being used to manage access across the network, e.g. the open LDAP standard, Microsoft’s Active Directory, Novell Open Enterprise Server, etc. This is most often used for local network sites and services where a level of physical security is assumed, i.e. the only people on the network are supposed to be there.

Digging deeper

At a technical level, most SSO services start with the visitor logging into one central service, the authentication site/service. After that, the visitor can browse to other sites and services and is transparently logged in. When the visitor does connect to one of the other sites, the new site radios back to the central server to confirm the authentication. In practice this means that the visitor may be bounced back 'n forth between a series of pages in order for the authentication to be confirmed. More commonly, though, the confirmation is handled behind the scenes on the actual servers themselves, thus greatly reducing the chance of foul play.

Either which way, these are well known and understood problems that have been solved several times before. Because of this there are many known solutions for adding SSO to a site or a group of sites. Some options for connecting to an SSO system as a client, including LDAP, SAML, CAS and OAuth, already have stable modules available for Drupal. Should there be a need to connect multiple Drupal sites together there's even a custom solution available called the Bakery module, which has been in use on drupal.org itself for many years and serves hundreds of thousands of users across its variety of sites.

And here’s one I made earlier

Because of all this, it rarely makes sense to write a custom solution; instead, an existing solution should be sought that can fit both the server and the secondary / client side of the equation. When researching SSO options, the most important aspect is what options are already available on the central login server, maybe as an optional extra; an existing but unused system may already be in place that could save tens or even hundreds of hours of development time.

Using an existing single sign-on system provides a wealth of benefits:

  • A greatly reduced amount of custom code, thus less code to manage.
  • Publicly available code results in much higher security standards as anyone can audit the code.
  • Known APIs result in easier integration and easier maintenance than custom code.
  • Known APIs increase the likelihood of being able to find someone with experience in working with the API, rather than having to start from scratch.
  • Once the pieces are in place, it’ll usually Just Work™.

... But not the custom module!

Writing a custom SSO solution comes with many disadvantages:

  • SSO systems have been built many times before; there are lots of options out there, so a custom build reinvents the wheel.
  • More custom code for the site maintainers to maintain.
  • It requires architecting a custom security algorithm to match the specific requirements.
  • Without paying for a 3rd party service, there's no "automatic" vetting of the security algorithm outside of the immediate development team, thus a greater chance of security holes existing.
  • They result in more time (and budget) spent writing solutions that already exist.
  • More time (and budget) is then also spent maintaining the custom code in the future.

Hot! Code slowly!

That said, there can be a few (very few!) reasons why a custom SSO solution is required:

  • There may not be an existing SSO system available for the systems that are being connected.
  • One or more portions of the system may be behind custom firewalls that cannot be gotten past, which would stop the server-side account confirmation.
  • Existing systems may not support unusual custom requirements that are outside of the project's control, e.g. integration with physical card or thumb readers that are mandated for use, etc.

If a project does end up requiring a custom SSO solution, a few things should be kept in mind:

  • See if there's any way to use an existing codebase and just write a plugin to handle the unique requirements; this would reduce the amount of custom code that would need to be written and maintained.
  • As this opens up the castle's front gate to invaders, so to speak, secure code is of paramount importance, so make sure the code follows the Drupal security best practices.
  • Include a timestamp in the algorithm and the necessary logic to ensure that there's an automatic timeout / expiration of the SSO (see the sketch after this list); this will help avoid scenarios of someone using a link from a cached page on someone else's computer.
  • Ensure that all traffic is secured via HTTPS; while not perfect (the IT world is constantly uncovering details that show how it can be gotten past), it still adds a reasonable base layer of security that’s much more difficult for someone (e.g. at a coffee shop) to snoop than an unsecured connection.
  • Have both the authentication logic / algorithm and the code itself vetted by a 3rd party.
  • Listen to all feedback regarding the system's security and make every possible effort to remove potential avenues of attack.
  • Do not attempt to write new encryption algorithms; there are plenty of highly secure algorithms supported by PHP's mcrypt library that will work with plenty of other systems.
  • Use the strongest encryption algorithms supported by each platform, don't skimp on something this important, especially when there would be negligible difference in terms of system / site responsiveness for the end user.
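
As an illustration of the timestamp bullet above, here is a minimal sketch of a signed, expiring SSO token in PHP. It is a sketch, not a vetted design: the function names are made up, key management and transport are ignored, and any real implementation should still be reviewed by a 3rd party as noted above.

<?php
// Shared secret known only to the authentication server and the sites.
define('SSO_SHARED_SECRET', 'replace-with-a-long-random-secret');
// Tokens older than five minutes are rejected.
define('SSO_MAX_AGE', 300);

function sso_build_token($username) {
  $payload = $username . '|' . time();
  $signature = hash_hmac('sha256', $payload, SSO_SHARED_SECRET);
  return base64_encode($payload . '|' . $signature);
}

function sso_validate_token($token) {
  $parts = explode('|', base64_decode($token));
  if (count($parts) !== 3) {
    return FALSE;
  }
  list($username, $timestamp, $signature) = $parts;
  $expected = hash_hmac('sha256', $username . '|' . $timestamp, SSO_SHARED_SECRET);
  // hash_equals() (PHP 5.6+) compares signatures in constant time.
  if (!hash_equals($expected, $signature)) {
    return FALSE;
  }
  // Enforce the automatic timeout / expiration described above.
  if (time() - (int) $timestamp > SSO_MAX_AGE) {
    return FALSE;
  }
  return $username;
}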

Don't design a new mouse trap

In summary – known and trusted solutions exist for adding a single sign-on system to a website and writing a custom system should always be the last resort.

Additional Resources

Best Practices for Custom Modules | Mediacurrent Blog Post
Introducing the Mediacurrent Contrib Committee | Mediacurrent Blog Post

Jan 22 2015

I've been privileged to attend almost every DrupalCon since Barcelona in 2007. I missed Paris in 2009, but I had a good excuse - my wife was due to give birth to our first child around the same time.

The relocation of the Commerce Guys headquarters to Paris has given me plenty of time to catch up on the missed sightseeing, but I still need to figure out how to get to Sydney after missing that one.

Without access to those hundreds of Drupal developers and enthusiasts in 2007, I never would have known anyone was even using Ubercart. I didn't know how to engage other developers remotely (my early forays into IRC were similar to webchick's, I believe), and there wasn't much going on in Louisville, KY, which I called home. Meeting others in the Drupal community, learning from my peers, and being mentored directly by many of the same has grown me personally and professionally in ways I never would have expected.

That's why I'm excited about the opportunity to travel to Bogotá, Colombia for the first DrupalCon in Latin America, February 10-12. I can't wait to hear the keynotes from both Dries and Larry, two of my Drupal heroes, and to learn more about the latest developments in Drupal 8 core and contributed modules.

I'll personally be addressing two topics: Drupal Commerce 2.x for Drupal 8 (on behalf of bojanz) and growing a Drupal based product business. I also look forward to the conversations, shared meals, and sprints that make the conference so rewarding.

I strongly encourage you to come if you're in a position to do so!

With the help of Carlos Ospina, I've recorded a personal invitation in Spanish that I trust doesn't have me saying anything embarrassing. I'm sure my Spanish will be better for at least a week after spending time at the conference.

Jan 22 2015

Shifting to a content-driven commerce focus is a daunting challenge.

Whether you are a media company adding commerce to your site or a retail site wanting to add richer editorial, there are very different skillsets required to sell product versus those needed for writing and curating content. How do you successfully blend these skillsets — much less these seemingly disparate websites — into a single, cohesive whole?

It ain’t easy, but it’s worth it.

From Media to Commerce

Adding commerce to a media site is tricky. On the one hand, product recommendations can add a new dimension of value to both you and your readers. Just like advertising, though, (and maybe more so), you run the risk of corrupting a brand that your readers have come to trust.

Promotion!

If you are making the step into content-driven commerce, you must be willing to promote products on your site. Sounds like a no-brainer, right? But integrity is one of the things that readers value from media sites. And if they feel like they are being pushed toward a bad product (or even an unrelated product), they will likely revolt.

Now, the promotions don’t have to be in your face, “everything must go”, car-sales promotions. In fact, those are the exact promotions that will spark revolution. But you must be willing to add tasteful product descriptions and honest reviews and recommendations. This means putting your trusted brand behind a product that you like — and, more importantly, one that you think your readers will like.

Not selling out

There is a fine line between promoting product and selling out. Sometimes it’s easy to find. Don’t like a product? Think a product is cheaply made? Don’t recommend it no matter how sweet that affiliate commission looks.

But what about a product you love versus one that you like? The one you love, right? But what if that second product has a much better affiliate program?

It’s tricky. But you can probably find a way to promote both. The approach The Wirecutter (and their sister site, The SweetHome) takes to product reviews is a great example of this. They write in-depth product reviews for different categories of gadgets. Each review has a recommended product along with explanations of why they did and didn’t like some of the other options they reviewed. Each product is a link to Amazon (and other stores) and every link has their affiliate code.

It’s a smart, if intense, solution that allows them to promote a lot of different products without selling out. In fact, it’s quite the opposite. Readers trust the site more because they go into so much detail about so many options.

From Commerce to Media

Now, if you are going in the opposite direction (adding content to your commerce site), then you’ll experience a range of other issues that can be even more challenging. In many respects, they run counter to much of the marketing culture that permeates most retail shops — unless those shops have come to value content-marketing and storytelling as a way to increase online sales.

Content Production

Editorial content is a whole new world. Marketing content goes through a series of edits and reviews. It’s often bland and boring. Intentionally so. You need to put the best foot forward for every product you sell — no matter how much that description might gloss over hard truths.

With a content-driven commerce approach, though, using your marketing style for your editorial content will sabotage your efforts. You need something with a voice and style that captures people’s attention and engages them on a personal level. Something that product descriptions almost never do.

Willingness to curate

Once you start producing content, you need to start curating it. What products are going to make it onto your top 10 list? Which set of widgets are you going to include in your how-to article? You know the items you promote are going to get more views and more clicks — even a bump in brand perception — than the products you leave out.

After you’ve written the piece, then you need to decide what content you’re going to promote on the homepage and throughout the site. Another tough decision. This one, though, fits closely in line with your sales planning process — which sale you’re promoting and when.

Treating content as a first-class citizen

Another aspect of content-driven commerce that may seem anathema to many commerce sites: treat your content like a first-class citizen. Specifically: give it equal weight on your homepage, which means treating it the same as you would a sale or other promotion. The challenge for many is that this feels like you are losing sales. But you’re trading a bump in short-term sales for long-term engagement.

There are many companies that have seemingly embraced content-driven commerce as a strategy. Big brands like Home Depot, Lowe's, and Brooks Brothers are producing some amazing content. A quick glance at their homepages, though, and the only hint at this content is behind a single link. Everything on these pages is focused on the latest sale and other product promotions. This may be a strategic decision or a technological limitation. Regardless, these websites have yet to really embrace content as a cornerstone of their brand.

Admittedly, there are many ways to enter a website—from Google to social media. But what a company includes on their homepage speaks volumes about what a brand values.

Gaining trust

Does your audience see you as an expert on the products you sell (Crutchfield)? Or just as a fancy storefront (Best Buy)? In either case, gaining and maintaining the trust of your audience is critical — and, depending on your current relationship with your customers, may be an uphill slog.

Are you willing to write a bad review of a product? Are you willing to pull a product if there are no redeeming qualities? Are you willing to write content that doesn’t directly sell the product?

Imagine if Best Buy started producing content that actually helped their audience better understand and use the technology they were selling. As it is, the store (and by extension, website) has limited audience engagement and does nothing to pull anyone to their site — other than offer product promotions and discounts.

One of the fundamental requirements to succeeding with any kind of content-driven strategy is audience trust. You need to build trust with your audience and you can’t do that if they feel like you are selling them anything and everything.

The move to content-driven commerce

Making the decision to integrate content and commerce has its challenges. The exact challenges you face will really depend on the culture of your organization as well as the abilities and mindset of your staff. But if you’re willing to make the necessary changes to engage your audience and build their trust, you can make the transition.

If you’re moving from a media site into commerce, the key will be maintaining your readers' trust and your own integrity. If you’re moving in the opposite direction, the challenge will be gaining your readers' trust, which means making some pretty big organizational and cultural changes.

In both cases, though, you’ll find the move well worth the effort.

Jan 22 2015
Jan 22

By Steve Burge 21 January 2015

One of our members has been watching our video class on Drupal's Workbench module and setting it up on their site.

They ran into one problem: how to use the new moderation states they added.

They wanted to add a tab so that people could easily see the content in a particular moderation state.

In this tutorial, we'll show you how to make that happen.

Note: we are going to assume some knowledge of Workbench. If you're new to or struggling with Workbench, watch our video class.

First, if you don't have one, set up a new state that content can be assigned to.

  • Go to Configuration > Workbench Moderation
  • Add a new state. In this example, we added "Final Editor Approval":
  • Go to Structure > Views
  • Find the "Workbench Moderation: Content" view. This is the view that creates a default tab for the "Needs Review" state.
  • Click Clone:
  • Click Continue.
  • Now, we need to create a custom menu link for our "Final Editor Approval" state. Click "Tab: Needs review":
  • Change the title of the menu link:

Now we need to modify the content that is being shown when people click this menu link.

  • Under Filter Criteria, click "Workbench Moderation: State (= Needs Review)".
  • Choose the moderation state that you created earlier:
  • Save your view.
  • Go to your Workbench dashboard and you'll see your new tab.
Jan 21 2015
Jan 21

We're very pleased to announce that the new Drupal Console project is now multilingual!

We put a lot of hours into this effort because we felt it was so important to broaden the base of Console users and help them get off to a great start developing modules for Drupal 8. It will be the last major feature to be added before an upcoming code freeze that will allow us to work on Console documentation - another effort to broaden the base of users that can benefit from the project.

Here are a few reasons why we felt it was so important to add multilingual capabilities to the Console project:

  • Drupal is multilingual, and Drupal 8 even more so than ever. Take a look at D8MI at http://www.drupal8multilingual.org/. We want to ship this project with capabilities like these.
  • It feels good and more natural to use a tool in your mother tongue. Most of our project contributors are not native English speakers.
  • Separating messages from code makes updating text easier: there's no need to know or learn PHP, or to use an IDE, in order to contribute.
  • David Flores and I will be presenting a session in Spanish related to this project at DrupalCon Latino in Bogota, and we knew making the project multilingual would be interesting for the event audience.

As I mentioned on twitter:

Code looks a little hacky but got a translatable version of #Drupal Console commands, feature will be available on the next release. #drupal8

— Jesus Manuel Olivas (@jmolivas) January 2, 2015

But we needed a starting point.

Talking about code, this is what was required:

Adding the Symfony Translation Component to the composer.json file


"require": {
    ...
+   "symfony/config": "2.6.*",
+   "symfony/translation": "2.6.*",
    ...
},

For more information about the Translation component, see the excellent Symfony documentation here.

Add translation files and messages


# extract of config/translations/console.en.yml
command:
  cache:
    rebuild:
      description: Rebuild and clear all site caches.
      options:
        cache: Only clean a specific cache.
      messages:
        welcome: Welcome to the cache:rebuild command.
        rebuild: Rebuilding cache(s), wait a moment please.
        completed: Done cleaning cache(s).
        invalid_cache: Cache "%s" is invalid.
      questions:
        cache: Select cache.

Four language files are currently available (en, es, fr, and pt); you can find them here. Note that these files are just copies of console.en.yml with a few overrides for testing purposes.

Create a new Helper class

To handle translation, a TranslatorHelper class was added (see the code here). The helper is registered in bin/console.php (see the code here).
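
To give a sense of what such a helper involves, here is a minimal sketch built on the Symfony Translation component. The method names and details are illustrative assumptions; the real class in the repository is more elaborate.

use Symfony\Component\Console\Helper\Helper;
use Symfony\Component\Translation\Translator;
use Symfony\Component\Translation\Loader\YamlFileLoader;

class TranslatorHelper extends Helper
{
  protected $translator;

  // Illustrative: load config/translations/console.LANGUAGE.yml.
  public function loadResource($language, $directory)
  {
    $this->translator = new Translator($language);
    $this->translator->addLoader('yaml', new YamlFileLoader());
    $this->translator->addResource(
      'yaml',
      $directory . '/console.' . $language . '.yml',
      $language
    );
  }

  // Nested YAML keys are flattened, so a message is addressed as e.g.
  // trans('command.cache.rebuild.messages.welcome').
  public function trans($key)
  {
    return $this->translator->trans($key);
  }

  public function getName()
  {
    return 'translator';
  }
}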

Inject the TranslatorHelper

For this task it was necessary to modify the RegisterCommandsHelper class, obtaining the TranslatorHelper from the helper set and injecting it via the constructor when creating and registering a new instance of each command.


// $cmd is a \ReflectionClass of the command being registered.
// Commands whose constructor takes arguments get the translator injected.
if ($cmd->getConstructor()->getNumberOfRequiredParameters() > 0) {
  $translator = $this->getHelperSet()->get('translator');
  $command = $cmd->newInstance($translator);
}
else {
  $command = $cmd->newInstance();
}
$this->console->add($command);

You can see the full class here.

How can you help

Feel free to take a look at the messages in the GitHub repo and send us fixes.

How to override the default language

It's as simple as creating a new YAML file at ~/.console/config.yml in your home directory and overriding the language value.


#file path ~/.console/config.yml
application:
  language: es

How to make a console.phar

We are using and recommend this great project: http://box-project.org/


$ curl -LSs https://box-project.github.io/box2/installer.php | php
$ mv box.phar /usr/local/bin/box

# Run this inside your project directory to create a new console.phar file
$ box build

Feel free to try this new multilingual feature in the latest release, v0.6.0, and as usual feel free to ask any questions by commenting on this page, or by adding a new issue on the Drupal project page or the GitHub repository.

I mentioned earlier that we are moving toward a code freeze so we can focus on documentation. The freeze is expected to be in place for about 4 weeks. I'll also use the time to prepare my DrupalCon Latino Console presentation with David Flores, aka @dmouse.

Stay tuned!

This post has been adapted from my personal blog.

Jan 21 2015
Jan 21

This tutorial will showcase how we have made Bootstrap 3, and especially its responsive grid system, an integral part of the platform, and will show you how to use some easy tools to make any website component or content mobile friendly!

About Bootstrap 3 in CMS Powerstart

The Drupal CMS Powerstart distribution has made Bootstrap 3 an integral part of the platform. The main reason we did this is to leverage the Bootstrap 3 responsive grid system. This grid system is not just functional, practical, and effective: it's also widely used, widely understood, and very well documented. On top of that, Bootstrap 3 is an active open source project, like Drupal, and is also well supported in Drupal through a base theme and various modules. This tutorial will teach you about these integrations and how to use them to create awesome responsive websites with ease. It will focus more on the Drupal integration than on the grid system itself. For a quick introduction to the grid system, check out this tutorial. For real-life examples, check out our Drupal themes.

2.1 Bootstrap on blocks

Forget about themes with 16 regions, or 25 regions. If you're using Bootstrap, you really only need full-width regions that stack on top of one another. The horizontal division is provided by block classes, with responsive layout switching that is customized for your content, not for your theme (or theme designer) or for an outdated wireframe.

In Drupal CMS Powerstart I added the block_class module, along with a patch that assists in our responsive design labours by auto-completing the Bootstrap 3 grid system classes.

2.2 Bootstrap in Views

To use Bootstrap 3 in views we will use the views_bootstrap Drupal module. Let's take a look at how this module is used to create a portfolio grid page for the Drupal CMS Powerstart Portfolio component.

Live demo of portfolio grid.

The views_bootstrap module provides an array of new Views display plugins:

  • Bootstrap Accordion
  • Bootstrap Carousel
  • Bootstrap Grid
  • Bootstrap List Group
  • Bootstrap Media Object
  • Bootstrap Tab
  • Bootstrap Table
  • Bootstrap Thumbnails 

This grid of portfolio thumbnails uses the Bootstrap Grid views display plugin. The Bootstrap Grid plugin allows you to output any content in a grid using Bootstrap's grid html markup. A current shortcoming in the module is that it only allows you to select the number of columns for the 'large' media query. Fortunately, there is a patch for that:

https://www.drupal.org/node/2203111

The Drupal CMS Powerstart distribution includes this patch and uses it in views to create truly responsive grids, where you can set the number of columns per media query. It works quite well out of the box. Here is the views format configuration used for the portfolio:

As you can see, it's really easy to create responsive views with this Views Bootstrap 3 integration! Without writing any code you can leverage the tried and tested responsive systems that Bootstrap provides. The views_bootstrap module gives you a whole set of tools that help you build responsive layouts and widgets using your trusted Views backend interface. This means site builders can rely less on themers and programmers and get work done quicker.

Using custom markup in views

The Views Bootstrap module is great at organizing rows of data into responsive layouts, but it doesn't have the same level of support for fields inside a row of data. Here is what we did to create a responsive events listing for the Drupal CMS Powerstart events component:

Live demo of events view.

The events view uses the 'Unformatted list' plugin that is provided by the Views module itself. This prints each row of data in a div container. There are two ways to make the contents of these rows responsive. One would be to generate node teasers inside the rows and configure the content type's teaser display mode to use grid classes on the fields; this method will be covered in the next part of this tutorial. For the events view we don't use teasers: we are building a fields view because it gives us more flexibility in which fields we show. Luckily the Views interface makes it easy for us to add grid classes right where we need them. First, we add a row class to each views row by clicking Settings under Format and entering row in the Row class field:

Now we can add responsive column classes to our fields and they will be organized within each row. We simply add classes by clicking each field and editing the Style Settings CSS class field:

The only thing we need to do here is check the Create a CSS class checkbox, and a textbox will appear that allows us to add grid classes to the field. This field uses the class col-sm-6, which makes our event title use 50% of its parent container's width (because Bootstrap uses a 12-column grid) on small devices and up. On an extra-small device no grid class is active, so the title uses 100% of its parent container's width, as you can see in the mock-up above. We can't say this method is as easy as the point-and-click method discussed earlier, but if you are already familiar with the Views interface it will become intuitive with a little practice, and it gives you very fine-grained control over responsive behaviors in your views.

2.3 Bootstrap in Fields

Often you want to organise content fields in a layout. A module that can help here is Display Suite, but even with the ds_bootstrap_layouts extension it gives you only a limited set of layouts. We can build any layout by simply adding Bootstrap grid classes to fields. This is not to say I don't like Display Suite, but since CMS Powerstart focuses on simplicity I will choose the simplest solution. 'Big' tools like Panels and Display Suite are definitely more appropriate for larger Drupal projects.

As an example, I will start building a new Drupal CMS Powerstart component. There was a feature request for a 'shop' component, so we will be building a content type as part of a simple component that helps brick-and-mortar shops display their inventory. First we create a new content type called Object. Since Bootstrap columns need to be wrapped in row classes, we are adding the field_group module. Once you have downloaded and enabled field_group, you will have a new option, 'Add new group', under the Manage Fields tab of your Object content type. We add a group called Bootstrap row using the default fieldset widget. Now drag the image and body fields to the indented position under the Bootstrap row field group. This creates a visual indication in the node/add and node/edit interface that the fields belong to the same group. Your Manage Fields interface should now look like this:

Next we go to the Manage Display tab of the Object content type. This is where the Bootstrap magic happens. Our goal is to display the body text and image field beside each other on big devices and above one another on small devices. First, we have to create our Bootstrap row group again; this time we add a group named Bootstrap row and give it the 'Div' format. Give the field group the following configuration settings:

  • Fieldgroup settings: open
  • Show label: no
  • Speed: none
  • Effect: none
  • Extra CSS classes: row (you can remove the default classes)

Next we will drag the Body and Image fields to the indented position under the field group. Now we simply configure the field formatters to use the Bootstrap grid classes of our choice. To add these classes in the Manage Display interface we are going to install another module: field_formatter_class. Once you have downloaded and enabled this module, you can go back to the Manage Display interface and you will see an option to add a class to each field. Set both the Body and Image fields to have the field formatter class col-sm-6. This will create a 2-column layout on devices wider than 768px and a stacked layout on smaller devices. If you are using Drupal CMS Powerstart, you can set the Image style of your image field to Bootstrap 3 col6. This will resize the image to exactly fit the 6-column grid container.

Your Manage Display tab should now look like this: 

Now if you create a node using your new content type it should look similar to this:

Using our new fieldgroup tool we can easily add Bootstrap rows and columns to any content type, and since classes are listed and edited in the Manage Fields interface, it's relatively quick and easy to manage per-node layouts. At least it's a step up from managing a ton of node templates.

2.4 Bootstrap in Content: Shortcodes

Sometimes you (or a client) just want to create a special page that needs more attention than other pages of the same type. Unfortunately there aren't any free tools that give our clients a true WYSIWYG experience for creating responsive Bootstrap grids. If you know one, please let me know! Our fallback option is the bs_shortcodes module that I ported from a WordPress plugin. This module lets you add nearly all Bootstrap components, including grid elements, using a WYSIWYG-integrated form.

To see the power and flexibility of what you can do with these shortcodes, check out this demo page:

http://glazed-demo.sooperthemes.com/content/responsive-columns

This system leverages the Drupal Shortcode API, which is a port of the WordPress shortcode API. The Drupal CMS Powerstart distribution ships with a WYSIWYG component that includes CKEditor 4 with the necessary Shortcode API and shortcode-provisioning submodules. Since the configuration of this setup is complex and beyond the scope of this article, I'm just going to assume you are using Drupal CMS Powerstart and are ready to use the WYSIWYG with Shortcodes integration.

To create a simple 2-column layout like the one in the previous examples, we first add a row shortcode:

Then we select the column shortcode and find the code that corresponds to 6 columns on small devices:

Now if we use two 6-column shortcodes and put in the same content used in the Field and Field Group tutorial, it will look like this in the editor:

After saving the page it will look exactly like the Test Object page we created in the previous tutorial. I admit that shortcodes are a rather crude tool for a complex problem, but anyone who is willing to learn the basic principles of a 12-column grid system will gain a huge amount of flexibility and capability in creating responsive content. When you combine the Bootstrap 3 grid documentation, the WYSIWYG integration, and, for emergencies, the documentation of the WordPress plugin, you already have a fully documented tool for savvy clients who don't want to deal with raw HTML code. Shortcodes don't seem like the most user-friendly tool, but I've seen clients pick them up quickly and appreciate the flexibility they give them in organising their most important pages. In the future we might see improvement in this area from tools like Visual Composer and the Drupal-compatible alternative Azexo Composer.

In Part 3 of this tutorial series I will write about using shortcodes as a site building tool and demonstrate what you can do with shortcodes in a real life Drupal CMS project. To get a sneak preview of the shortcode elements I will be using, check out our Drupal themes.

Jan 21 2015
Jan 21

It isn't just about Drupal here at ActiveLAMP -- when the right project comes along that diverges from the usual demands of content management, we get to use other cool technologies to satisfy more exotic requirements. Last year we had a project that presented us with the opportunity to broaden our arsenal beyond the Drupal toolbox. In short, we had to build a website that handles a growing amount of vetted content coming in from the site's community and two external sources; the whole catalog is available through a rich search tool and also through a RESTful web service that our client's partners can use to search for content to display on their respective websites.

Drupal 7 -- more than just a CMS

We love Drupal and we recognize its power in managing content of varying types and complexity. We at ActiveLAMP have solved a lot of problems with it in the past, and have seen how potent it can be. We were able to map out many of the project's requirements to Drupal functionality and we grew confident that it is the right tool for the project.

We implemented the majority of the site's content-management, user-management, and access-control functionality with Drupal, from content creation and revision to display and printing. We relied heavily on built-in functionality to tie things together. Did I mention that the site's content and theme components are bilingual? Yeah, the wide array of i18n modules took care of that.

One huge reason we love Drupal is its thriving community, which drives to make it better and more powerful every day. We leveraged open source modules that the community has produced over the years to satisfy project requirements that Drupal does not provide out of the box.

For starters, we based our project on the Panopoly distribution of Drupal, which bundles a wide selection of modules that gave us great flexibility in structuring our pages and saved us precious time in site-building and theming. We leveraged a lot of modules to solve more specialized problems. For example, we used the Workbench suite of modules to implement the review-publish-reject workflow that was essential to maintaining the site's integrity and quality. We also used the ZURB Foundation starter theme as the base for our site pages.

What vanilla Drupal and the community modules could not provide, we wrote ourselves, thanks to Drupal's uber-powerful "plug-and-play" architecture, which easily allowed us to write custom modules that tell Drupal exactly what we need it to do. The amount of work that can be accomplished via the architecture's hook system is phenomenal, and it elevates Drupal from a mere content management system to a content management framework. Whatever your problem, there most probably is a Drupal module for it.

Flexible indexing and searching with Elasticsearch

A large aspect of our project is that the content we handle is subject to a search tool available on the site. The search criteria demand not only support for full-text searches, but also filtering by date range, categorizations ("taxonomies" in Drupal), and, most importantly, geo-location queries and sorting by distance (e.g., within n miles from a given location). It was readily apparent that SQL LIKE expressions or full-text search queries with MySQL's MyISAM engine just wouldn't cut it.

We needed a full-fledged full-text search engine that also supports geo-spatial operations. And surprise! -- there is a Drupal module for that (a confession: not really a surprise). The Apache Solr Search modules readily provide the ability to index all our content straight from Drupal into Apache Solr, an open-source search platform built on top of the famous Apache Lucene engine.

Despite the comfort that the module provided, I evaluated other options which eventually led us to Elasticsearch, which we ended up using over Solr.

Elasticsearch advertises itself as:

“a powerful open source search and analytics engine that makes data easy to explore”

...and we really found this to be true. Since it is basically a wrapper around Lucene that exposes its features through a RESTful API, it is readily available to any app, no matter which language the app is written in. Given the wide proliferation and usage of REST APIs in web development, it puts a familiar face on a not-so-common technology. As long as you speak HTTP, the lingua franca of the Web, you are in business.

Writing (indexing) documents into Elasticsearch is straightforward: represent your content as a JSON object and POST it to the appropriate endpoint. If you wish to retrieve it on its own, simply issue a GET request with the unique ID that Elasticsearch assigned it during indexing. Updating it is also just a PUT request away. It's all RESTful and nice.
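
For illustration, indexing a hypothetical document into the same index used by the search example below could look like this (the field values are made up):

POST /volsearch/toolkit_opportunity/ HTTP/1.1
Host: localhost:9200
{
  "title":"Fight hunger in your community",
  "body":"Help organize a local food drive.",
  "partner":"Acme Volunteers",
  "location":{
    "coordinates":{
      "lat":34.493311,
      "lon":-117.30288
    }
  }
}

Elasticsearch responds with an auto-generated ID for the new document; a subsequent GET /volsearch/toolkit_opportunity/{id} retrieves it, and a PUT to the same path updates it.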

Searches are done through API calls as well. Here is an example of a query containing a Lucene-like text search (grouping conditions with parentheses and ANDs and ORs), a negation filter, basic geo-location filtering, and results sorted by distance from a given location:

POST /volsearch/toolkit_opportunity/_search HTTP/1.1
Host: localhost:9200
{
  "from":0,
  "size":10,
  "query":{
    "filtered":{
      "filter":{
        "bool":{
          "must":[
            {
              "geo_distance":{
                "distance":"100mi",
                "location.coordinates":{
                  "lat":34.493311,
                  "lon":-117.30288
                }
              }
            }
          ],
          "must_not":[
            {
              "term":{
                "partner":"Mentor Up"
              }
            }
          ]
        }
      },
      "query":{
        "query_string":{
          "fields":[
            "title",
            "body"
          ],
          "query":"hunger AND (financial OR finance)",
          "use_dis_max":true
        }
      }
    }
  },
  "sort":[
    {
      "_geo_distance":{
        "location.coordinates":[
          34.493311,
          -117.30288
        ],
        "order":"asc",
        "unit":"mi",
        "distance_type":"plane"
      }
    }
  ]
}

Queries are written in Elasticsearch's own DSL (domain-specific language), which takes the form of JSON objects. The fact that queries are represented as trees of search specifications in the form of dictionaries (or “associative arrays” in PHP parlance) makes them a lot easier to understand, traverse, and manipulate, without needing the third-party query builders that Lucene's string-based query syntax practically requires. It is this syntactic sugar that helped convince us to use Elasticsearch.

What makes Elasticsearch flexible is that it is, to a degree, schema-less. That made it quite quick for us to get started and get things done. We just hand it documents with no pre-defined schema, and it does its job of guessing the field types, inferring from the data we provide. We can specify new text fields and filter against them on the fly. If you decide to use richer queries like geo-spatial and date-range searches, then you should explicitly declare fields as having richer types like dates, date ranges, and geo-points, to tell Elasticsearch how to index the data accordingly.
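
For example, the geo_distance filter shown earlier only works because we told Elasticsearch that the coordinates field holds geo-points. Declaring that is a single mapping call; this is an illustrative sketch for the hypothetical index above:

PUT /volsearch/toolkit_opportunity/_mapping HTTP/1.1
Host: localhost:9200
{
  "toolkit_opportunity":{
    "properties":{
      "location":{
        "properties":{
          "coordinates":{
            "type":"geo_point"
          }
        }
      }
    }
  }
}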

To be clear, Apache Solr also exposes Lucene through a web service. However, we think Elasticsearch's API design is more modern and much easier to use. Elasticsearch also provides a suite of features that lends itself to easier scalability. Visualizing the data is also really nifty with the use of Kibana.

The Search API

Because of the lack of built-in access control in Elasticsearch, we cannot just expose it to third parties who wish to consume our data. Anyone who can see the Elasticsearch server would invariably have the ability to write and delete content from it. We needed a layer that firewalls our search index away from the public. Not only that, it would also have to enforce our own simplified query DSL that API consumers will use.

This is another aspect where we looked beyond Drupal. Building web services isn't exactly within Drupal's purview, although it can be accomplished with the help of third-party modules. However, our major concern was the operational cost of involving Drupal in the web service solution: we felt that the overhead of Drupal's bootstrap process is just too much for responding to API requests. It would be akin to swatting a fruit fly with a sledgehammer. We decided to implement all search functionality, and the search API itself, in a separate application written with Symfony.

More details on how we introduced Symfony into the equation and how we integrated the two will be the subject of my next blog post. For now, we'd just like to say that we are happy with our decision to split the project's scope into smaller discrete sub-problems, because it allowed us to target each one with a more focused solution and expand our horizons.

Jan 21 2015
Jan 21

The default contact form in Drupal has quite basic settings. With the default admin UI you can only create categories and set the receiving email addresses. To change other preferences, such as the form title or the form's destination page, we have to implement override hooks.

In this article, we present some tricks to customize the contact form in Drupal. More tricks will be added regularly.

1. Edit the contact form title

To change the title, add this function to the template.php file in your theme folder (/sites/all/themes/your-theme/template.php):


// Replace "mytheme" with your theme's machine name.
function mytheme_form_contact_site_form_alter(&$form, &$form_state) {
  // Override the default "Contact" page title.
  drupal_set_title('Contact ABC Media');
}

2. Redirect form result

By default, users are redirected to the front page after submitting the form. This is confusing behavior, because users may not be sure what is going on or whether the message has been sent.

To redirect the contact form to a page of your choice, add these two functions to your theme's template.php file, as in section 1 above. I learned this from a tip by Postrational.


// Replace "mytheme" with your theme's machine name.
function mytheme_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'contact_site_form') {
    // Append our own submit handler; it runs after the default one.
    $form['#submit'][] = 'contact_form_submit_handler';
  }
}

function contact_form_submit_handler(&$form, &$form_state) {
  // Send users to this path or alias instead of the front page.
  $form_state['redirect'] = 'thank-you-page-alias';
}

Do you have other tricks for contact forms in Drupal? Please share them and we will post them here with acknowledgement to you.

Jan 21 2015
Jan 21

2015 is poised to be a great year for nonprofit technology and the adoption of digital tools to advance the causes we love. While I can’t say that I see too many groundbreaking innovations on the immediate horizon, I do believe that this will be a year of implementation and refinement. Building upon trends that we saw arise last year in the consumer industry and private sector, 2015 will be the year that many nonprofits leap into digital engagement strategies and begin to leverage new tools that will create fundamental change in the way that they interact with their constituencies.

Of course, as always happens when a growing sector first embraces new tools, the nonprofit technology world will see more than its fair share of awkward clunkiness this year, mainly as "software as a service" product companies rebrand their offerings for nonprofits and flash shiny objects at the earnest and hungry organizations we all support.

But as a more general and appealing trend, I believe that we’ll see a slimming down and a focus on polish this coming year. Visual storytelling and "long form" journalism are hopefully on the rise in the nonprofit digital world. We should see more, and better, integrations between web applications, data management systems, and social networks. These integrations will power more seamless and personalized user experiences. Rather than tossing up an incongruent collection of web interfaces and forms delivered by different paid service platforms, nonprofits will be able to present calls-to-action through more beautiful and less cumbersome digital experiences.

Below are some additional thoughts regarding the good stuff (and some of the bad) that we’re likely to see this year. If you have any additional predictions, please share your thoughts!

Visual Storytelling and the Resurgence of Long-Form Journalism

I don’t know about you, but I’m tired of my eyeballs popping out of my head every time I visit a nonprofit’s homepage that attempts to cram 1,000 headlines above the fold. I’m tired of the concept of "a fold" altogether. And don’t get me started about slideshow carousels as navigation: It’s 2015, folks!

Fortunately, we are seeing an elegant slowdown in the pace of writing for the web. Audiences are getting a little more patient, particularly when presented with clean design, pleasing typography, and bold imagery. We’re also seeing nonprofits embrace visual storytelling, investing in imagery and content over whistles and bells and widgets.

Medium and Exposure are my two favorite examples of impactful long-form journalism and visual storytelling on the web. These deceptively simple sites leverage cutting-edge JavaScript and other complex technologies to get out of the way and let content and visuals speak for themselves.


As an added benefit, adopting this more long-form storytelling approach may help your SEO. Google took bold steps in late 2014 to reward websites that focus on good content. With the release of Panda 4.1, its new search algorithm, nonprofits that prioritize long-form writing and quality narrative will start to see significant benefits.

We’re already seeing nonprofits adopt this approach, including one of my new favorites, The Marshall Project. This site cuts away the usual frills and assumes an intelligent audience that will do the work to engage with the content. Don’t get me wrong: The Marshall Project website is slick and surprisingly complex from an engineering and user experience perspective – but its designers have worked hard to bring the content itself to the surface as the most compelling call-to-action.

Interconnectivity

2015 will be a big year for APIs in the CMS space. Teasing out those acronyms: we will see content management systems, like Drupal and WordPress, release powerful tools allowing them to talk with other web applications and tools. Indeed, the new web services layer is a central and much-anticipated feature of the upcoming Drupal 8 release. WordPress made similar strides late last year with the early release of its own REST API.


Leveraging these APIs, 2015 will bring the nonprofit sector more mobile applications that share data and content with these organizations’ websites. The costs for developing these integrations should decrease relative to the usefulness of such solutions, which will hopefully lead to more experimentation and mobile investment among nonprofits. And as mentioned previously, because these new applications will have access to more constituent data across platforms, they will lend themselves to more robust and personalized digital experiences.


On the less technical, more DIY front, 2015 will be marked by the maturation of third-party services that allow non-developers to integrate their online tools. In its awesome post about technology trends in 2015, the firm Frog Design refers to this development as the "emergence of the casual programmer." Services like Zapier, and my new favorite, IFTTT, will allow nonprofits to make more out of social networks and services like Google Apps, turn disparate data into actionable analytics, see the bigger picture across networks, and make more data-driven decisions.

More Big (And Perhaps Clunky) Web Apps

If you’ve been following ThinkShout for a while now, you probably know that we are big fans of Salesforce because of its great API and commitment to open data. We maintain the Salesforce Integration Suite for Drupal. At this point, the majority of our client work involves some sort of integration between the Drupal CMS and the Salesforce CRM.

As proponents of data-driven constituent engagement, we couldn’t be more excited to see the nonprofit sector embrace Salesforce and recognize the importance of constituent relationship management (CRM) and CRM-CMS integration. Because of the power of the Salesforce Suite, we can build powerful, gorgeous tools in Drupal that sync data bidirectionally and in real time with Salesforce.


That said, part of the rise of Salesforce in the nonprofit sector over the last two years has been driven by the vacuum created by Blackbaud’s purchase of Convio. And now, with the recent releases of Salesforce’s NGO Connect and Blackbaud’s Raiser’s Edge NXT, both "all-in-one" fundraising solutions with limited website integration potential (in my opinion…), we’re going to see more and more of an arms race between these two companies as they try to “out-featurize” each other in marketing to nonprofits. In other words, in spite of the benefits of integrating Drupal and Salesforce, we’re going to see big nonprofit CRM vendors like Salesforce and Blackbaud push competing solutions that try to do everything in their own proprietary and sometimes clunky ecosystems.

The Internet of Things

The Internet of Things (IoT), or the interconnectivity of embedded Internet devices, is not a new concept for 2015. We’ve seen the rise of random smart things, from TVs to refrigerators, over the last few years. While the world’s population is estimated to reach 7.7 billion in 2020, the number of Internet-connected devices is predicted to hit 26 billion that same year. Apple’s announcement of its forthcoming Watch last year heralded the first meaningful generation of wearable technology. Of course, that doesn’t necessarily mean that you’ll want to wear this stuff just yet, depending upon your fashion sense...


(Image from VentureBeat’s coverage of the 2015 Consumer Electronics Show last week. Would you wear these?)

However, the advent of the wearable Internet presents many opportunities to the nonprofit sector, both as a delivery device for micro-campaigns and targeted appeals, and as a tool for collecting information about an organization’s constituency. Our colleagues at BlueSpark Labs recently wrote about how these technologies will allow organizations to build websites that are really "context-rich systems." For example, with an Internet-connected watch synced up to a nonprofit’s website, that organization could potentially monitor a volunteer athlete’s speed and heart rate during a workout. These contextualized web experiences could drive deeper feelings of commitment among donors and other nonprofit supporters.


(Fast Company envisions how the NY Times might cover election results on the Apple Watch.)

Privacy and Security

While not exactly a trend in nonprofit technology, I will be interested to see how the growing focus on Internet privacy and security will affect online fundraising and digital engagement strategies this year.


(A poster for the film The Interview, which, as most of you probably know, incited a major hack of Sony Studios and spurred international dialogue about cyber security.)

We are seeing more and more startups providing direct-to-consumer privacy and security offerings. This last year, Apple released Apple Pay, which adds security, as well as convenience, to both online and in-person credit card purchases. And Silent Circle just released Blackphone, an encrypted cell phone with a sophisticated and secure operating system built on top of the Android platform.


How might this focus on privacy and security affect the nonprofit sector? It’s hard to say for sure, but nonprofits should anticipate the need to pay for more routine security audits and to follow best practices in maintaining their web properties, especially as these tools begin to collect and leverage more constituent data. They should also consider how their online fundraising tools will support new online payment formats, such as Apple Pay, as well as virtual currencies like Bitcoin.

And Away We Go…

At ThinkShout, we’ve already rolled up our sleeves and are excitedly working away to implement many of these new strategies and approaches for our clients in 2015. What are you looking forward to seeing in the world of nonprofit tech this year? What trends do you see on the horizon? Let us know. And consider swinging by the "Drupal Day for Nonprofits" event that we’re organizing on March 3rd in Austin, TX, as part of this year’s Nonprofit Technology Conference. We hope to dream with you there!

Jan 20 2015
Jan 20

Start: 

2015-08-12 (All day) - 2015-08-15 (All day) America/Chicago

Organizers: 

This is a placeholder to get MWDS on the calendar.
More details soon.

Will be hosted at http://palantir.net/
Wednesday-Saturday all sprint days. No sessions.
Focus on getting Drupal 8 released (and some key contrib ports to Drupal 8).

Jan 20 2015
Jan 20

One of the things I've blogged about recently when talking about my upcoming book Model Your Data with Drupal is domain-driven design. While domain-driven design is important and something that I hope to touch on in the future, I've decided it's too much to cover in one book, and I'm refocusing Model Your Data with Drupal on basic object-oriented principles.

If you want to know more about this decision and what will be covered in Model Your Data with Drupal, read on.

On studying complex subjects

There are a lot of ways to approach learning a complex, layered subject. Some people just dive right in, find the gaps in their foundational knowledge, and work backwards to fill them in. Others (like myself) like to figure out the best starting place before diving in, working incrementally to build a solid base of foundational knowledge that can serve as a platform for deeper learning. There are advantages and disadvantages to both methods, but I often find that even though I'm inclined to work slowly and start at the beginning, I get farther more quickly when I dive into advanced subjects and see what I'm lacking along the way.

Punching above your weight isn't a new idea, and if I had the time and patience to work through it, that's exactly how I would structure Model Your Data with Drupal. But it's not a perfect world, and I ran into some problems along the way. Basically I was trying to leave out too many layers in the software development layer cake. Most people really like cake, so that seemed like a pretty bad idea.

A tasty layer cake of software development

"What is this software development layer cake?" you may ask. It's simply the layers of foundational knowledge that build upon one another. Take this example from my recent presentation on Object Oriented (OO) Design Patterns for Drupal 8:

OO principles layer cake

In this example, more foundational material is at the bottom (even though basics such as understanding syntax, data structures, control structures, etc. have been left out):

OO Basics: The features that define what makes a system or program object oriented: Abstraction, Encapsulation, Polymorphism, Inheritance

OO Principles: The axioms or best practices; these are to OO programming what principles like Don't Repeat Yourself are to procedural or functional programming: encapsulate what varies, program to interfaces, favor composition over inheritance, strive for loosely coupled designs, depend on abstractions, etc.

OO Patterns: Finally, all of the above basics and principles give us a series of patterns that come naturally, such as decorator, factory, observer, strategy, facade, singleton, etc.

When you put these all together you get something like this:

OO principles layer cake with examples
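
To make the middle layers a little more concrete, here is a tiny, hypothetical PHP sketch showing two of those principles at work: program to interfaces, and favor composition over inheritance.

// Program to interfaces: callers depend on this abstraction,
// not on any concrete notifier.
interface NotifierInterface {
  public function notify($recipient, $message);
}

class EmailNotifier implements NotifierInterface {
  public function notify($recipient, $message) {
    mail($recipient, 'Notification', $message);
  }
}

// Favor composition over inheritance: Order *has* a notifier
// rather than extending one, so the behavior can be swapped freely.
class Order {
  protected $notifier;

  public function __construct(NotifierInterface $notifier) {
    $this->notifier = $notifier;
  }

  public function complete() {
    $this->notifier->notify('buyer@example.com', 'Your order is complete.');
  }
}

Swap EmailNotifier for an SmsNotifier or a test double and Order never notices; that is the loose coupling these principles aim for.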

Enter domain-driven design

This seems simple enough, even though it covers lots of ground. But you may be asking, where does domain-driven design come into this? Is it foundational, or another layer on the top of the cake?

Yes. It's a bit of both: domain-driven design is a process that informs patterns and principles, but it's also something that builds on all of the above, since it requires familiarity with the patterns and principles before you can speak fluently about a project. That doesn't seem like such a hard requirement until you consider that one of the main tenets of domain-driven design is a ubiquitous language for bringing developers, engineers, analysts, and domain experts together.

The present

The problem I kept running into while working on Model Your Data with Drupal was that it was hard to explain the high-level concepts of domain-driven design while also covering the low-level nitty-gritty of writing PHP for Drupal. Putting myself in the reader's position, it seemed like there was plenty of material for the top and bottom layers of the cake, without much in between.

Because of this, I've decided to put the discussion of domain-driven design and Drupal on hold for now. Model Your Data with Drupal will instead focus on basic object-oriented principles and a few patterns, where applicable. Reading this book, you should get something that you can easily apply to real-world projects without too much extra fluff; refocusing it this way will make the book clearer, easier to understand, and easier to put into practice.

The future

The other major factor in this change is that Drupal 8 is looming in the future, with massive changes coming for every Drupal developer. You may have heard about its sweeping changes, with tons of object-oriented systems and dependency injection everywhere. These changes are going to make it easier than ever to apply object-oriented principles to your Drupal projects and modules. Because of this, it will be easier to describe OO principles and domain-driven design in the context of Drupal 8.

Want to learn more?

If you're interested in learning more about the book, you can read more or sign up for the mail list at Model Your Data with Drupal.

Jan 20 2015
Jan 20


Drupal 8 represents a radical shift, both technically and culturally, from previous versions. Perusing the Drupal 8 code base, you may find many parts unfamiliar. One bit in particular, though, is especially unusual: a new directory named /core/vendor. What is this mysterious place, and who is vending?

The "vendor" directory represents Drupal's largest cultural shift. It is where Drupal's 3rd party dependencies are stored. The structure of that directory is a product of Composer, the PHP-standard mechanism for declaring dependencies on other packages and downloading them as needed. We won't go into detail about how Composer works; for that, see my article in the September 2013 issue of Drupal Watchdog, Composer: Sharing Wider.

But what 3rd party code are we actually using, and why?

Crack open your IDE if you want, or just follow along at home, as we embark on a tour of Drupal 8's 3rd party dependencies. (We won't be going in alphabetical order.)

Guzzle

Perhaps the easiest to discuss is Guzzle. Guzzle is an HTTP client for PHP; that is, it allows you to make outbound HTTP requests with far more flexibility (and a far, far nicer API) than using curl or some other very low-level library.

Drupal had its own HTTP client for a long time... sort of. The drupal_http_request() function has been around longer than I have, and served as Drupal's sole outbound HTTP utility. Unfortunately, it was never very good. In fact, it sucked. HTTP is not a simple spec, especially HTTP 1.1, and supporting it properly is difficult. drupal_http_request() was always an afterthought, and lacked many features that some users needed.

What's more, it was a single function – one single 304 line function with a cyclomatic complexity of 41 and an N-Path complexity of over 25 billion. That's a fancy way of saying "completely and utterly impossible to unit test before the heat death of the universe." For a modern web platform, that's simply not good enough. (For more on cyclomatic complexity and N-Path complexity, see Anthony Ferrara's talk from DrupalCon Portland, “Development by the Numbers.”)

As we said, though, writing a good HTTP client is quite hard, and we already had plenty of hard tasks to do in Drupal 8. So instead, we outsourced it. After conducting a comparison survey of over a half-dozen different HTTP clients for PHP, we settled on Guzzle as the most feature-rich. The developer's decision to refactor Guzzle itself – to make it easier for Drupal to use just the portions we wanted – helped, too.
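
As a quick, hedged illustration (the URL and module name are hypothetical, and the exact response API depends on which Guzzle version your checkout ships), an outbound request in Drupal 8 looks roughly like this:

// Drupal 8 exposes a preconfigured Guzzle client as a service.
$client = \Drupal::httpClient();

try {
  $response = $client->get('https://example.com/api/items');
  $status = $response->getStatusCode();
  $body = (string) $response->getBody();
}
catch (\Exception $e) {
  // Network failures and HTTP error codes arrive as exceptions,
  // not as silently malformed result objects.
  watchdog_exception('mymodule', $e);
}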

Guzzle actually has a lot of other capabilities that we're not using in core, but can be downloaded quite easily. One of the most interesting is the ability to auto-map RESTful services to PHP classes. (See the Guzzle documentation for more information.)

That same thought process applies to much of Drupal 8: “This is going to be really hard to do, but someone already did it. Let’s just save time and use theirs. Open Source is cool like that.”

Doctrine Annotations

Doctrine is a large project. It's best known for its database abstraction layer (DBAL) – for which Drupal already has "DBTNG" – and for the Doctrine ORM object-relational mapper – for which the Entity and Field system already serves Drupal well, even if it doesn't have as cool-sounding a name. So what's Doctrine doing in Drupal?

The new plugin system in Drupal 8 makes use of "annotations". Annotations are a way to define special Docblock tags that can be parsed at runtime to provide metadata for a class or method. Essentially, they serve a similar purpose to "info hooks" in previous versions of Drupal, but keep that metadata right next to the class they describe. Of course, parsing those annotations into useful information takes work. (Some languages have native annotation support, but PHP does not; we have to rely on the docblock.)

As with Guzzle, why do that work when it's already been done? Doctrine's flavor of annotations is one of the more commonly used in PHP, so we adopted that and its annotations library. We even managed to submit some work back upstream to improve the efficiency of its parsing engine.
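
If you have not seen one yet, a typical Drupal 8 plugin annotation looks like the docblock below (a block plugin; the module, id, and label are placeholders, and namespaces shifted a bit during the 8.x cycle):

namespace Drupal\mymodule\Plugin\Block;

use Drupal\Core\Block\BlockBase;

/**
 * Provides an example block.
 *
 * @Block(
 *   id = "mymodule_example",
 *   admin_label = @Translation("Example block")
 * )
 */
class ExampleBlock extends BlockBase {

  public function build() {
    // The @Block annotation above is what the plugin system discovers;
    // no info hook required.
    return array('#markup' => $this->t('Hello from an annotated plugin.'));
  }

}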

Easy RDF

The story with Easy RDF is much the same. Managing RDF graphs is complicated: It's better for there to be a few really good libraries for it than lots of mediocre ones. So we just adopted an existing one that worked. (Notice a pattern emerging?)

Zend Feed

“Zend? I thought Drupal was using Symfony!”
Open source isn't partisan. As with many other libraries, Drupal had an old and half-implemented RSS parser in the Aggregator module that left much to be desired. (Actually, we had two; Views has an RSS-generating routine.)

The most robust RSS and Atom parser right now in PHP is the Feed library out of Zend Framework 2. After some discussion with the Zend Framework maintainers, they were able to remove a few dependencies from it, which made it small enough for Drupal to leverage. Out with the Aggregator RSS parser, in with Zend Feed. As a bonus, although core isn't using it, we now have a full-featured Atom parser ready to go for any module that wants to use it. As of this writing Views isn't leveraging it yet, but there's an open issue to do so. (Volunteers welcome.)
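
Using the library standalone is pleasantly terse; a minimal sketch (the feed URL is hypothetical):

use Zend\Feed\Reader\Reader;

// Reader detects whether the feed is RSS or Atom on its own.
$feed = Reader::import('http://example.com/feed.xml');

echo $feed->getTitle(), PHP_EOL;
foreach ($feed as $entry) {
  echo '- ', $entry->getTitle(), ' (', $entry->getLink(), ')', PHP_EOL;
}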

Twig

The Twig template engine is worthy of its own article (see Morten DK’s article in this issue). Its genesis was – you guessed it – much the same.

By the time of DrupalCon Denver, in early 2012, most core developers had concluded that PHPTemplate was no longer viable and needed to be put out to pasture. We needed something to replace it and, as we were already leveraging Symfony by that point, “let's look elsewhere” was a viable strategy. What really sold core developers on Twig was simple: Front-end developers demanded it. Twig offers them a far nicer experience than PHPtemplate ever did, so core developers went “Okay, you want it, you got it!” That said, moving Drupal to Twig has taken a sizeable army of front-end developers, many working in core for the first time.

Assetic

Assetic is an asset management library; that is, it helps manage CSS and JavaScript files. As is the trend, it is replacing much of Drupal's home-grown CSS/JS compression and aggregation logic. Work is still happening in this area, and it's highly unlikely that module developers will ever deal with it directly, but it's there.

PHPUnit

PHPUnit is the industry standard testing framework for PHP. Although it has not fully replaced Drupal's home-grown testing framework yet, it is slowly supplanting it. For most code written for Drupal 8, if it cannot be tested with PHPUnit, then your code is flawed. (See Sam Boyer’s article in this issue, PHPUnit and Drupal.)
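
A minimal sketch of a Drupal 8 unit test (the module and class under test are hypothetical):

namespace Drupal\Tests\mymodule\Unit;

use Drupal\Tests\UnitTestCase;
use Drupal\mymodule\Slugifier;

/**
 * @group mymodule
 */
class SlugifierTest extends UnitTestCase {

  public function testSlugifyReplacesSpaces() {
    // No database, no installed site: just the class under test.
    $slugifier = new Slugifier();
    $this->assertEquals('hello-world', $slugifier->slugify('Hello World'));
  }

}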

PSR Logger

The tiny Psr\Log library is just a collection of interfaces released by the Framework Interoperability Group (PHP-FIG) to standardize logging. As of this writing, it is only included because it's a dependency for some Symfony components, but there is active work to replace the watchdog() function with a new logger that uses the same standard interface. (With due apologies to the editors.)
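
The payoff of a shared interface is that any class can type-hint against it without caring which logger is wired in behind it; a hypothetical sketch:

use Psr\Log\LoggerInterface;

class ReportMailer {

  protected $logger;

  // Any PSR-3 implementation will do: Drupal's logger, Monolog, a test stub.
  public function __construct(LoggerInterface $logger) {
    $this->logger = $logger;
  }

  public function send($address) {
    // PSR-3 defines the standard levels (debug through emergency)
    // and this {placeholder} syntax for context values.
    $this->logger->info('Report sent to {address}', array('address' => $address));
  }

}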

Gliph

Gliph is an interesting case. It was written by Drupal developer Sam Boyer to solve a Drupal problem, but there is nothing Drupal-specific about it. Sam simply decided to build it outside of Drupal as an MIT-licensed library that Drupal, or anyone else, could then import. That's an approach that I expect will become increasingly popular in coming years, both for core and contrib. Gliph is a graph management library; that is, it handles mathematical graphs such as dependency trees. It will be used to complement Assetic, and again it's unlikely that module developers will ever use it directly. (But if you need dependency resolution logic, it's there – go for it!)

Symfony

Last but not least, there's Symfony. Drupal 8 is not using all of Symfony by any means; in fact, we're using less than a third of the component libraries it offers and none of the fullstack framework. Nonetheless, it shares the same core pipeline with many other projects in the Symfony family. Many of these can and do have their own articles, so for now we'll just give a cursory review of them.

HttpFoundation / HttpKernel

These are the libraries that started it all. HttpFoundation was the first significant 3rd party library added to Drupal 8, followed soon after by HttpKernel. HttpFoundation abstracts the HTTP Request and Response concepts, replacing PHP's native superglobals and the "print, but hope you don't have cookies" mess. HttpKernel essentially provides an interface for mapping a request to a response; a simple concept, but one that is fundamental to what any web application does. It also includes many powerful standard implementations that Drupal is leveraging, including the default HttpKernel itself.
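As a rough illustration of what that buys you (generic Symfony usage, not Drupal-specific code):

use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

// One object wrapping $_GET, $_POST, $_COOKIE, $_SERVER, and friends.
$request = Request::createFromGlobals();

// Instead of reaching into $_GET directly; note the built-in default value.
$name = $request->query->get('name', 'world');

// Instead of a bare print: the status code and headers stay attached to the body.
$response = new Response('Hello ' . $name, 200);
$response->send();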

Once again, the need was to replace Drupal's page-only routing system with a pipeline that could handle the full power of HTTP, and do so with a more self-documenting API. After designing one in the abstract, the Web Services Initiative found that Symfony had already implemented essentially what we had concluded we needed.
Open Source – For The Win!

Routing / CMF Routing

Routing is the process of mapping an incoming request to the code that will handle that particular request. In Drupal 7, it was hook_menu, menu_get_item(), and page callbacks. In Drupal 8, it's Symfony's Routing component with enhancements from the Symfony CMF project.

The Symfony CMF Routing component was actually a close collaboration between Drupal and Symfony CMF; despite nominally being competitors, both projects saw the value in working together to build one really solid routing framework.

EventDispatcher

The HttpKernel library makes use of the Symfony EventDispatcher library as well. Events are, essentially, object-oriented testable hooks. The low-level parts of Drupal 8 are using events, while many older systems are still using hooks. For Drupal 8 contributed modules, it's a good idea to focus on using events (as well as plugins) over hooks in most cases. There is serious talk of removing hooks as redundant in Drupal 9, so get a head start on that transition (and make your code more testable to boot).
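For a feel of the difference, here is a minimal sketch of an event subscriber; the class is made up, but the interfaces and the event name are Symfony's:

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

class MySubscriber implements EventSubscriberInterface {

  // Declare which events this class listens to, and with which methods.
  public static function getSubscribedEvents() {
    return array(KernelEvents::RESPONSE => 'onResponse');
  }

  // Runs just before the response is sent back to the client.
  public function onResponse(FilterResponseEvent $event) {
    $event->getResponse()->headers->set('X-Generator', 'Drupal 8');
  }

}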

DependencyInjection

The last big library is the Dependency Injection component. It provides the Dependency Injection Container, or simply "container", that ties all of Drupal's loosely coupled libraries (both 3rd party and home grown) together into a cohesive system. Almost all developers will be interacting with it, but mostly through a services.yml file rather than dealing with its low-level APIs directly.
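In practice, a class simply declares what it needs in its constructor and the container does the wiring. A sketch (the class and service name here are hypothetical):

use Drupal\Core\Database\Connection;

class BlogStats {
  protected $database;

  // The container reads this service's arguments from services.yml and
  // passes them in; the class never calls "new" on its dependencies.
  public function __construct(Connection $database) {
    $this->database = $database;
  }
}

// Wherever needed, the fully wired object is fetched from the container:
$stats = \Drupal::service('mymodule.blog_stats');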

Serializer

The Serializer component is a simple framework for managing the serialization and deserialization of objects to various string formats. Drupal 8 is using it as part of the REST framework, as it provides a common way to convert Entities to and from different formats like JSON-HAL (our default), XML, etc. If you want to support a new format for Entities (say, JSON-LD or Collection or some XML format), you'd write new services for the Serializer, and the rest would wire itself up automatically.
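The round trip looks roughly like this (a sketch using the 'serializer' service; check the current API before relying on the details):

// Turn a node entity into a string in a given format...
$json = \Drupal::service('serializer')->serialize($node, 'hal_json');

// ...and turn such a string back into an entity.
$node = \Drupal::service('serializer')->deserialize($json, 'Drupal\node\Entity\Node', 'hal_json');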

Validator

This little library helps structure data validation rules. It is used deep within the Plugin and Entity systems, and most module developers won't be interacting with it directly.

YAML

Finally there is YAML. YAML is a text file format that Drupal 8 is using for many configuration files. Symfony has a YAML parser, we needed one, you know the drill by now – Open Source FTW.

Reuse All the Things!

All of that is found in one simple directory. That's the advantage of decoupled libraries and easy sharing between projects: Do less work, reuse more code, get more done faster.

That's the power of /vendor.

Image: ©profotokris/123RF.COM

Jan 20 2015
Jan 20

One tool for stylesheets

Laziness tends to get in the way of progress, but it doesn’t have to! There is now a tool to help out with all of those steps in the CSS process that we don’t want to take. This is especially important now that web development is mobile-first: we need to optimize our code to decrease page loading time and make our users happy. New useful tools are created every day and staying up to date with all of them is really hard. That is exactly why programmers try to collect as many tools as possible in one. Pleeease is the perfect example of this kind of tool; it is a web-development Swiss Army knife.

What is it?

Pleeease is a CSS post-processor based on Node.js that bundles a lot of CSS tools into one. It can really do magic to make your stylesheets better for production usage, and it also cleans up the output of CSS pre-processors (Sass, Less, or Stylus). You can use it as a separate tool from the Node prompt or as a Gulp.js plugin in your tasks. To use it you must install Node.js on your system, plus Gulp.js if you prefer to run Pleeease automatically as a project task. It has a simple JSON-like configuration file: if you need to change some defaults, just create a .pleeeaserc file in your project folder, or declare the settings directly in your gulpfile if you use one.

What can it do?

The first thing you can do is set the source and destination in the config file using the “in” and “out” parameters. Example:

{
  "in": "*.css",
  "out": "app.min.css"
}

So let’s explore all of the amazing features that Pleeease has in its arsenal.

Autoprefixer

It sets the correct browser prefixes for CSS3 properties according to the browser support that you need, and with the help of the CanIUse database it sets only the necessary prefixes, unlike tools such as Compass that set all of them. You can specify the version of a browser, a min or max version, browsers above some global usage percentage, and much more! All available settings can be found on the official GitHub page. By default Pleeease uses all browsers with global usage of more than 1%, the last 2 versions of all browsers, Firefox ESR, and Opera 12.1. Example of post-processing with defaults:

background: linear-gradient(red, blue);

in the output file:

background: -webkit-gradient(linear, left top, left bottom, from(red), to(blue));

background: -webkit-linear-gradient(red, blue);

background: linear-gradient(red, blue);

Filters

CSS filter effects are part of a CSS3 draft, and only WebKit-based browsers support them natively right now. Firefox uses an SVG fallback and old IE uses its own filters; modern IE (10-11), Opera Mini, and the native Android browser have no support at all. But you can use them now in parts of your project where they will not affect functionality, adding some wow-effect to your pages for users with “good” browsers.

Pleeease makes them simple to declare in your CSS: just use the standard syntax and the tool will add prefixes, create the SVG fallback, and emit the old IE filter syntax if you set “filters”: {“oldIE”: true}, since by default it is set to false.
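For example, a .pleeeaserc that turns on the old IE fallback would contain:

{
  "filters": {"oldIE": true}
}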

Example with blur filter:

Before

filter: blur(3px);

After

filter: url('data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg"><filter id="filter"><feGaussianBlur stdDeviation="3" /></filter></svg>#filter');
-webkit-filter: blur(3px);
filter: blur(3px);
filter: progid:DXImageTransform.Microsoft.Blur(pixelradius=3);

rem units

CSS3 provides us with root em (rem) units, which are similar to standard em except that only the root element's font-size affects them, instead of the parent's as with em. Unfortunately, browser support for them is still lackluster. Pleeease finds all rem declarations in our files and adds a pixel fallback to them, so users of old browsers are still happy.

Before

h1 {
  font-size: 2rem;
}

After

h1 {
 font-size: 32px;
 font-size: 2rem;
}

Pseudo-elements

It converts the CSS3 syntax for declaring pseudo-elements to the old one for backwards compatibility with old browsers like IE8; modern browsers support both syntaxes. For example, ::after will be converted to :after to avoid bugs.

Opacity effect

Old versions of IE have their own filter property for an opacity effect. It does not have a friendly syntax, but Pleeease will help you with it: the tool automatically supplements all opacity properties with the filter. Let's see how it works in practice:

Before

h2 {
 opacity: .25;
}

After

h2 {
 opacity: .25;
 filter: alpha(opacity=25);
}

Media query packer

Pre-processor tools give us the opportunity to write media query breakpoints directly alongside other CSS properties. This makes our code more readable and maintainable, since we can see all of an element's changes in one place instead of scattering breakpoints around. But the CSS generated by writing code this way is not as pretty as we’d like it to be. Our tool can help us in this situation again: it will analyze the CSS, find all @media declarations, match them, and concatenate them into one, so our CSS will look great again and performance will grow too. Here is a simple example of how it works:

Before:

h1 {
  font-size: 2em;

  @media screen and (min-width: 768px) {
    font-size: 1.5em;
  }
}

h2 {
  font-size: 1.75em;

  @media screen and (min-width: 768px) {
    font-size: 1.25em;
  }
}

After:

h1 {
  font-size: 2em;
}

h2 {
  font-size: 1.75em;
}

@media screen and (min-width: 768px) {
  h1 {
    font-size: 1.5em;
  }

  h2 {
    font-size: 1.25em;
  }
}

Source maps, imports, minifier

And finally, Pleeease can inline all of your @import declarations, generate source maps that let you edit directly in browser debugging tools, and minify your CSS to decrease the output file size. I hope that it will gain a lot more useful features in the future.

Why should I use it?

There are several reasons. First: it is one tool for all of your stylesheet needs except the compilation of Sass, Less, Stylus, or whatever, and it works alongside those specialized pre-processing tools. Second: there are even more features to come, because this list of functions is not complete; you can read on the Pleeease website about experimental features that will soon be added, including support for native CSS variables, color functions, and a lot of other new CSS features. Also, this tool can shrink your gulpfile if you build a project with Gulp, because it replaces a lot of separate gulp plugins: you can process your styles with only two actions, pre-processing the Sass, Less, or Stylus code and post-processing the CSS. So enjoy your virtual all-in-one web development tool! I think it’s pretty cool! If you have any questions about how it all works, contact us, we'd like to help!

Jan 20 2015
Jan 20

Code structure is something most Drupal developers wrestle with. There are tons of modules out there that make our lives easier (Views, Display Suite, etc.) but managing database configuration while maintaining a good workflow is no easy challenge. Today I'm going to talk about a few approaches I use in my work here at Echo. We will be using a simple use case of creating a paginated list of blog posts. To start, we're going to talk about the workflow from a high level, then we'll get into the modules that leverage Drupal in a way that makes sense. Finally, we'll have some code samples to help guide things along.

Workflow

This will vary a bit based on what you need, but the idea behind this is we never want to redo our work. Ideally we'd like to design a View or functionality once on our local, and then package it and push it up. Features is a big driving force behind this. Beyond that, we want things like page structures and custom code to have a place to live that makes sense. So, for this example we will be considering the idea of a paginated list of Blog Posts. This is a heavy hammer to be swinging at such a solved task, but we will get into why this is good later on.

  • Create a new Feature that requires ctools and panels (and not views!)
  • Open up the generated .module file and declare the ctools plugin directory
  • Create the plugins/content_types/blog_posts.inc file
  • Define the needed functions within blog_posts.inc to make it work
  • Add the newly created content type to a page in Page Manager
  • Add everything we care about to the Feature and export it for deployment

Installation

This only assumes that you have a working Drupal installation and some knowledge of how to install modules. In this case, we will be using drush to accomplish this, but feel free to pick your poison here. Simply run the following commands and answer yes when prompted.

drush dl ctools ds features panels strongarm
drush en ctools ds features panels strongarm page_manager

What we have done here is install and enable a strong foundation on which we can start to scaffold our site. Note that I won't be getting into folder structure too much, but there are some more steps before this you would have to take to ensure contrib, custom, and features all make it to their own place. We wave our hands at this for now.

Features

The first thing we're going to do is generate ourselves a Feature. Simply navigate to Structure -> Features -> Create Feature and you will see a screen that looks very similar to this. Fill out a name, and have it require ctools and panels for now.

Features screen

This will generate a mostly empty feature for us. The important part we want here is the ability to turn it on and off in the Features UI, and the structure (that we didn't have to create manually!) which includes a .module and .info file is ready to go for us. That being said, we're going to open it up and tell it where to find the plugins. The code to do that is below, and here is a screenshot of the directory structure and code to make sure you're on the right track. Go ahead and create the plugins directory and associated file as well.

function blog_posts_ctools_plugin_directory($owner, $plugin_type) {
  // Only respond for ctools-owned plugin types such as content_types.
  if ($owner == 'ctools') {
    return 'plugins/' . $plugin_type;
  }
}

Chaos Tools

Known more commonly as ctools, this is a module that allows us this plugin structure. For our purposes, we've already made the directory and file structure needed. Now all we have to do is create ourselves a plugin. There are three key parts to this: plugin definition, render function, and form function. These are all defined in the .inc file mentioned above. There are plenty of resources online that get into the details, but basically we're going to define everything that gets rendered in code and leverage things like Display Suite and the theme function for pagination. This is what we wind up with:

/**
* Plugin definition
*/
$plugin = array(
  'single' => TRUE,
  'title' => t('Blog Post Listing'),
  'description' => t('Custom blog listing.'),
  'category' => t('Custom Views'),
  'edit form' => 'blog_post_listing_edit_form',
  'render callback' => 'blog_post_listing_render',
  'all contexts' => TRUE,
);
 
/**
* Render function for blog listing
* @author Austin DeVinney
*/
function blog_post_listing_render($subtype, $conf, $args, &$context) {
  //Define the content, which is built throughout the function
  $content = '';
 
  //Query for blog posts
  $query = new EntityFieldQuery();
  $query->entityCondition('entity_type', 'node', '=')
    ->entityCondition('bundle', 'blog_post', '=')
    ->propertyCondition('status', NODE_PUBLISHED, '=')
    ->pager(5);
 
  //Fetch results, and load all nodes
  $result = $query->execute();
 
  //If we have results, build the view
  if(!empty($result)) {
    //Build the list of nodes
    $nodes = node_load_multiple(array_keys($result['node']));
    foreach($nodes as $node) {
      $view = node_view($node, 'teaser');
      $content .= drupal_render($view);
    }
 
    //Add the pager
    $content .= theme('pager');
  }
 
  //Otherwise, show no results
  else {
    $content = "No blog posts found.";
  }
 
  //Finally, we declare a block and assign it the content
  $block = new stdClass();
  $block->title = 'Blog Posts';
  $block->content = $content;
  return $block;
}
 
/**
* Function used for editing options on page. None needed.
* @author Austin DeVinney
*/
function blog_post_listing_edit_form($form, &$form_state) {
  return $form;
}

Some things to note here. We're basically making a view by hand using EntityFieldQuery. It's a nifty way to write entity queries a bit easier and comes with some useful how to's on Drupal.org. We also offload all rendering to work with Display Suite and use the built-in pagination that Drupal provides. All things considered, I'm really happy with how this comes together.

Panels

Finally, we need to add this to the page manager with panels. Browse to Structure -> Pages -> Add custom page and it will provide you with a step by step process to make a new page. All we're going to do here is add our newly created content type to the panel, as shown here.

Panel screen

And now, we're all ready to export to the Feature we created. Go back and recreate the feature, and you're ready to push your code live. After everything is said and done, you should have a working blog with pagination.

Blog screen.

Motivation

Obviously, this example is extremely basic. We could have done this in a View in far less time. Why would we ever want to use this? That's a great question and I'd like to elaborate on why this is important. Views are great and solve this problem just as well. They export nicely with Features and can even play with Panels (if you want to use Views as blocks or content panes). That being said, this is more for the layout of how we would have custom code that works with a lot of Drupal's best practices. Imagine instead if we have a complicated third party API we're trying to query and have our "view" react to that. What if we want a small, code-driven block that we can place discretely with panels? The use cases go on, of course.

There are many ways to solve problems in Drupal. This is just my take on a very clean and minimal code structure that allows developers to be developers and drive things with their code, rather than being stuck clicking around in menus.

Jan 20 2015
Jan 20

When building a Drupal 7 site, one oft-used technique is to keep the entire Drupal root under git (for Drupal 8 sites, I favor having the Drupal root one level up).

Starting a new project can be done by downloading an unversioned copy of D7, and initializing a git repo, like this:

Approach #1

drush dl
cd drupal*
git init
git add .
git commit -am 'initial project commit'
git remote add origin ssh://me@mygit.example.com/myproject

Another trick I learned from my colleagues at the Linux Foundation is to get Drupal via git and have two origins, like this:

Approach #2

git clone --branch 7.x http://git.drupal.org/project/drupal.git drupal
cd drupal
git remote rename origin drupal
git remote add origin ssh://me@mygit.example.com/myproject

This second approach lets you push changes to your own repo, and pull changes from the Drupal git repo. This has the advantage of keeping track of Drupal project commits, and your own project commits, in a unified git history.

git push origin 7.x
git pull drupal 7.x

If you are tight for space though, there might be one inconvenience: Approach #2 keeps track of the entire Drupal 7.x commit history, for example we are now tracking in our own repo commit e829881 by natrak, on June 2, 2000:

git log |grep e829881 --after-context=4
commit e8298816587f79e090cb6e78ea17b00fae705deb
Author: natrak 
Date:   Fri Jun 2 18:43:11 2000 +0000

    CVS drives me nuts *G*

All of this information takes disk space: Approach #2 takes 156Mb, vs. 23Mb for approach #1. This may add up if you are working on several projects, and especially if for each project you have several environments for feature branches. If you have a continuous integration server tracking multiple projects and spawning new environments for each feature branch, several gigs of disk space can be used.

If you want to streamline the size of your git repos, you might want to try the --depth option of git clone, like this:

Approach #3

git clone --branch 7.x --depth 1 http://git.drupal.org/project/drupal.git drupal
cd drupal
git remote rename origin drupal
git remote add origin ssh://me@mygit.example.com/myproject

Adding the --depth parameter here reduces the initial size of your repo to 18Mb in my test, which interestingly is even less than approach #1.
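One caveat with shallow clones: if you later decide you want the full Drupal history after all, you can convert the shallow repository into a complete one:

git fetch --unshallow drupal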

Jan 20 2015
Jan 20

If you are not already using Git on your Drupal websites or projects, now is the time to learn. Over the next week or two, I will be going over a brief introduction to Git in 5 parts. In the following post, I will provide a quick overview of Git and Git hosting services. In subsequent parts, I will walk through examples of Git commands and what they do. In the 5th and final part I will bring it all together with examples of how Git is commonly used with Drupal.

Git is one of the secrets from my 5 Secrets to Becoming a Drupal 7 Ninja ebook, and much of the content in the following posts is from this ebook. To learn more about Git and the other secrets, please consider purchasing the ebook or signing up for the 5 secrets email newsletter, which will give you more information about the 5 secrets.

VERSION CONTROL (WITH GIT)... NEVER LOSE YOUR CODE OR YOUR MIND AGAIN

You have probably heard of Version Control or Git before. If you are not already using a Version Control system now is the time to start. According to the Git website:

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

So what exactly is a Version Control System? To keep things simple, it is basically a way to track changes that you have made to files over a period of time. It gives you the ability to not only track those changes, but roll back to a previous point in time if something goes bad. A version control system also makes it much easier for multiple developers to work on a single project without stepping on each other’s toes.

Git is a Distributed Version Control System, which means that every developer working on a project has a full copy of the repository. A repository is just another name for how the project and its files are stored in the version control system. Generally when working with Git you will have some type of server that you push your changes to. Often this will be a third party service like GitHub, Bitbucket, or one of the many other alternatives.

Choosing the Right Service to Host Your Git Repository

There are a lot of options to consider when choosing where (and if) you want to use a third party service to host the Git repository for your project. These services provide a lot of useful tools that make working with your Git repository easier. Some standard tools to keep an eye out for include:

  • Ability to view the code of your Git repository
  • Issue or Bug tracking
  • Create and manage Git branches of code
  • Built in Code Review Tools
  • Collaboration tools to make building a software project with a team easier

There are typically many more features, but that is a basic list that almost all Git hosting services offer. It is best to do your own research here, as opinions tend to vary on which is the best. The most popular one is probably GitHub. It provides a great interface and great collaboration tools, and is especially popular in the open source software market. GitHub is free to use as long as you make your Git repository public; it charges for private repositories, basing its fees on the number of private repositories you require.

Bitbucket is another popular choice. Bitbucket has free Git project hosting for teams of 5 or fewer and allows an unlimited number of public or private Git repositories. All of the fees for Bitbucket are based on the number of people on the team (not the number of repositories). This distinct difference between Bitbucket and GitHub often helps you decide based on the type of project you are building and the team size (assuming you are basing your decision only on price). There are many other options out there, but these are the most widely used that I am aware of.

So which Git project hosting service do I use? Well... both, actually. I prefer using GitHub for any type of open source project. GitHub’s interface and collaboration tools are slightly better than Bitbucket’s in my opinion. I do, however, use Bitbucket much more than I use GitHub. Because I often work on projects in small teams, and I need private repositories for much of my work, Bitbucket is the logical choice. I also don’t want to discount the tools in Bitbucket, as they too are really good (GitHub is just slightly more user friendly).

Ninja Lesson: All Git Hosting services will follow the same constructs. Learn Git and you can easily adapt to the hosting service of your choosing.

Getting Started with Git

So how do you go about getting started with Git if you have never worked with a Version Control System before? The first step is to download Git for your operating system. Once you have Git downloaded and installed, you may be tempted to download a Git GUI client. You can browse for one of your choosing and try one out (I have used GitEye with some success in the past, as it provides a Linux version). I won’t be covering Git GUI clients because frankly I don’t like using them and I think they shroud what is actually happening (sometimes making it seem more confusing than it has to be). Even if you do want to use a Git GUI client, I highly suggest learning the basics from the command line first. This will give you a much deeper understanding of what various commands are doing and how the entire Git process works.
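If you want a small head start before part 2: once Git is installed, the very first thing to do from the command line is tell Git who you are. These two commands are all the initial setup most people need:

git config --global user.name "Your Name"
git config --global user.email "you@example.com"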

Intro to Git Part 1 Summary

In the subsequent 4 parts, you will be able to follow along to create your first Git repository, learn the basics of Git commands, create a larger Git repository for your Drupal website, and learn how to pull down external Git repositories (like those on Github or Bitbucket).

So start out by following the instructions for your Operating system and getting Git installed. In the introduction to Git part 2, we will get started with some basic Git commands and configuration.

xjm
Jan 20 2015
Jan 20

The next beta for Drupal 8 will be beta 5! Here is the schedule for the beta release.

Tuesday, January 27, 2015: Only critical and major patches committed.
Wednesday, January 28, 2015: Drupal 8.0.0-beta5 released. Emergency commits only.
Jan 19 2015
Jan 19

Recently, we were debugging some performance issues with a client's Drupal Commerce website. After doing the standard optimizations, we hooked up New Relic so we could see exactly what else could be trimmed.

The site is using different line item types to differentiate between products that should be taxed in different ways. Each line item type has a field where administrators can select the tax code to use for that line item type. The options for the select list are populated via an API call to another service provider. The call for the list was using the static cache because it was thought that the list would only be populated when needed on the line item type configuration page. In reality, that's not the case.

When an Add to Cart form is displayed in Drupal Commerce, it also loads the line item type and the line item type's fields. When loading the fields, it loads all of the options even if the "Include this field on Add to Cart forms for line items of this type" option is not enabled for that field. In this case, it resulted in 90 HTTP calls to populate the list of tax codes every time someone viewed a page with an Add to Cart form.

The solution was to actually cache those results using Drupal's Cache API, and the improvement was immediately visible in New Relic.
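The post doesn't include the code, but the pattern looks roughly like this in Drupal 7 (the function names and cache ID below are hypothetical):

function mymodule_get_tax_codes() {
  $codes = &drupal_static(__FUNCTION__);
  if (!isset($codes)) {
    if ($cache = cache_get('mymodule_tax_codes')) {
      $codes = $cache->data;
    }
    else {
      // The expensive part: the HTTP calls to the tax service, made once.
      $codes = mymodule_fetch_tax_codes_from_api();
      // Keep the list for six hours so Add to Cart forms stay fast.
      cache_set('mymodule_tax_codes', $codes, 'cache', REQUEST_TIME + 21600);
    }
  }
  return $codes;
}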

Jan 19 2015
Jan 19

INTRODUCTION

Almost all Drupal websites have multiple Views displays containing output of various content, but your options to sort this content are usually limited. Most Views displays can only practically be sorted by creation date or node title. This works well in many cases, but if you need to implement user-friendly, manually controlled sorting then you will need to extend Views.

There’s a page on Drupal.org comparing various node ordering modules, but our favorite is DraggableViews, and in this article we’ll show you how to use it to create a drag and drop sortable image gallery.

INSTALLATION

  • Install the latest 7.x-2.x branch of DraggableViews from Drupal.org. For Drush users the project name is 'draggableviews'
  • Module dependencies: Views, Chaos tools, Entity API

SETUP

DraggableViews will allow you to make rows of a Drupal View "draggable" which means that they can be rearranged using Drag and Drop interface. For this example we've created a Content Type called Images that contains an Image Field that will be used for a photo gallery page on our website.

We then created multiple Image nodes containing stock photography images. Our View is currently limited to a few sorting options like creation date and title, but we want to allow our editors to easily reorder the images, so now we’ll set up DraggableViews and create a sorting interface.

  1. Edit your existing View that you want to be sortable (in this example it’s our image page View).
  2. Add a new display to your existing View. This new display should normally be a Page display type and this is what will be used as the sorting interface.
    Drupal View for Drag Drop sorting
  3. Set up your new View's display similar to the following (this will vary depending on your specific needs).
    • IMPORTANT: Be sure to override this sorting display for all applicable settings so you’re not also changing the main View display when we edit the sorting display's options in the following steps.
    • Set Display and Title to reflect your sorting display. Example, “Sort Images”.
    • IMPORTANT: The display format must be set to Table.
    • Add only the minimal amount of fields needed to be visible for your node’s sorting purposes. For example, only display the node titles or, as in this example, small thumbnails of the node’s image.
      • Add a title, image thumbnail, or some other visual reference field.
      • Add a NID (Node ID) field and be sure to select ‘Exclude from Display’.
      • Add the “DraggableViews” field, leaving the default settings.
    • Remove any sorting criteria that may already be in your View and add the “DraggableViews Weight” field as the sort criterion for both this sorting View and the main display View. The parent View’s DraggableViews sort field needs to be set to use the new sorting View in its “Display Sort as” setting.
    • Give your sorting View a page path. I like to use something like “admin/content/sort/photos” so a menu link will be available in the administration menus. Make this a “Normal Menu Entry” and be sure to set the View's menu path to “Management”.

Drupal DraggableViews Views Setup

Once you save your View, you should have a page containing the content of your View with handles to the right of the rows for drag and drop sorting.

drag drop sorting interface drupal views

PERMISSIONS

Be sure to set the appropriate permissions for your sorting View. There is also an “Access draggable views” permission that must be granted. If a user has access to the sorting View but does not have the “Access draggable views” permission, then they will see the View without the drag and drop handles.

Drupal Draggable Views Permissions

MENU LINKS

If you set your sortable View's path to something like 'admin/content/sort/photos' as described above and also set a "Normal menu entry" for the "Management" menu, then you will now have a menu link to your sorting page from your administration menu.

Your new sorting page will also be accessible via the View's contextual link, allowing for direct and quick access to sorting the View right from the main page.

Draggable Views sort menu

NOTES

You should not rely on Views' live preview, as it may differ from the actual output.

The reordering may not work if you have caching turned on for your View. The drag and drop may work, but upon saving your ordering it will revert back to the previous order. If you need caching, then you will need to create a separate display for sorting and turn caching off for that display only.

Jan 19 2015
Jan 19

Any results of the color alterations, once they’re made, can be observed in the view block.

The tips are displayed after clicking on the text field.

After all the necessary settings are done, save them. Let’s look at the result.

The user login form has been tuned to match the Bartik theme:

The login form:

The authorization block, which is situated in the left column by default:

The module itself is available here.

Jan 19 2015
Jan 19

In this article I am going to show you how to create a custom Views field in Drupal 8. At the end of this tutorial, you will be able to add a new field to any node based View which will flag (by displaying a specific message) the nodes of a particular type (configurable in the field configuration). Although I will use nodes, you can use this example to create custom fields for other entities as well.

So let's get started by creating a small module called d8views (which you can also find in this repository):

d8views.info.yml:

name: Drupal 8 Views Demo
description: 'Demo module that illustrates working with the Drupal 8 Views API'
type: module
core: 8.x

In Drupal 7, whenever we want to create a custom field, filter, relationship, etc. for Views, we need to implement hook_views_api() and declare the version of Views we are using. That is no longer necessary in Drupal 8. What we do now is create a file called module_name.views.inc in the root of our module and implement the views related hooks there.

To create a custom field for the node entity, we need to implement hook_views_data_alter():

d8views.views.inc:

/**
 * Implements hook_views_data_alter().
 */
function d8views_views_data_alter(array &$data) {
  $data['node']['node_type_flagger'] = array(
    'title' => t('Node type flagger'),
    'field' => array(
      'title' => t('Node type flagger'),
      'help' => t('Flags a specific node type.'),
      'id' => 'node_type_flagger',
    ),
  );
}

In this implementation we extend the node table definition by adding a new field called node_type_flagger. Although there are many more options you can specify here, these will be enough for our purpose. The most important thing to remember is the id key (under field) which marks the id of the views plugin that will be used to handle this field. In Drupal 7 we have instead a handler key in which we specify the class name.

In Drupal 8 we have something called plugins and many things have now been converted to plugins, including views handlers. So let's define ours inside the src/Plugin/views/field folder of our module:

src/Plugin/views/field/NodeTypeFlagger.php

Jan 19 2015
Jan 19

Start: 2015-01-21 (All day) America/New_York

The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, January 21.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix release on this date; the next window for a Drupal core bug fix release is Wednesday, February 4.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Jan 18 2015
Jan 18

Solving the Drush Segmentation Fault 11 error

I've been doing a lot lately with Grunt and LibSass within my drupal.org contrib theme, Gratis. Yesterday, I updated my Node modules locally. Shortly thereafter, I started getting a nasty Drush error.

line 1: 48475 Segmentation fault: 11  
/opt/local/bin/php /Users/danny/.composer/vendor/drush/drush/drush.php
--php=/opt/local/bin/php --backend=2
--root=/Users/danny/Sites/Drupal/gratis2/gratis2-site
--uri=http://default pm-updatestatus 2>&1

or sometimes just:

Segmentation fault: 11

Not only that but my local site's admin UI started WSODing. I didn't immediately connect the Node NPM update to the drush error. So I looked in my MacPorts Apache log and saw hundreds of these streaming down every few seconds:

[Sat Jan 17 13:03:56 2015] [notice] child pid 49312 exit signal Segmentation fault (11)

No joy

Doing a Google search led me to some varied and vague issues with regard to Apache and MySQL, but none of them really rang true to what I was experiencing. I decided to check some of my other local sites and they all seemed fine; no errors, WSODs, or otherwise. Bizarre! I worked on it for about an hour with no joy; I was headed down a rabbit hole. That being said, I let it rest for a while. I always let a problem sit for a bit if I can't fix it right away or ask for help. More often than not, I'll come back later and end up fixing it.

The search

I got out for some air and went to downtown San Diego to take some photos. That usually gets my mind off things and is relaxing. Arriving back later in the day, I got back into it and decided to search for drush cache clear segmentation fault theme. Bingo! (and 50 browser tabs later). I don't know why I didn't search for this earlier in the day; I had just been searching for the raw Apache log error, which knows nothing of drush.

Sure enough, it's an error related to Node modules (from the node_modules folder) having .info files. Drush sees those and thinks they're supposed to be part of Drupal. The problem is that, in a Drush world, these files are malformed; thus the errors. Right about now, I was wishing there was some kind of .drushignore file along the same lines as .gitignore.

The posts this new search turned up led me to the main issue, Themes should not crash when .info file appears inside node_modules.

It turns out there is a proposed patch for core to prevent this error. I somehow don't see this getting in anytime soon but there are some workarounds on the Node / Grunt end of things.

Custom script

Here is the fix that I arrived at based on all the suggestions and comments in that issue. First, we need to write a Node NPM cleanup Bash script. The script will find any .info files and rename them to .inf0 (with a zero). This has no negative effects, as you don't commit the node_modules folder to your repo and the .info files are not actually needed for Grunt to run properly. So we'll call our script npm_post.sh:

#!/bin/sh
# npm_post.sh

# This script finds any .info files in the node_modules directory and renames them so they don't
# conflict with drush. package.json runs this on completion of npm install.
# These files, if any, are not actually needed to run grunt and compile LibSass.
# See this issue for more info: https://www.drupal.org/node/2329453

find -L ./node_modules -type f -name "*.info" -print0 | while IFS= read -r -d '' FNAME; do
    mv -- "$FNAME" "${FNAME%.info}.inf0"
done

Once you have this in the same folder as your package.json file (in my case the root of my theme), you'll need to call it with a postinstall method from your package.json file.

  "scripts": {
    "postinstall": "sh npm_post.sh"
  },

One caveat here is that you may run into an error that the script won't run. To solve this you can either run sudo npm install --unsafe-perm or alternatively create an .npmrc file with the code:

unsafe-perm = true

and then run sudo npm install as usual.

Conclusion

Running into errors like this is definitely not fun but I learned a lot in the process. I am not sure if this is the best fix in the world but it seems to work fine for my use case. It also shows us to not get tunnel vision when trying to fix a development problem and to avoid those rabbit holes if possible.

Jan 18 2015
Jan 18

It is frequent that customers approach us asking for help to rescue their projects from site builders. Sometimes they have technological issues (mainly slow sites), but sometimes it's just plain bad usability or some wrong marketing concepts.

We recently were asked for help by a site that gets about 5,000 unique visitors a day. Despite the not-so-bad visitor numbers for their niche, the site was getting very low user interaction; they barely got a handful of comments.

Among the many changes we made, there was something new a member of our team came up with: linking node comments and forum posts. We noticed in Google Analytics that, although they had very little activity, the forums got some attention; but just like in a bar, if no one is dancing you are probably not going to be the first one. On the other hand, users commented on the site's contents, and these comments got lost among the more than 20,000 content nodes this site has.

The idea was simple: for each comment thread on a node, there should be a forum post, and they must be synchronized (if someone comments on the forum it should appear on the node and vice versa).

All the magic can be easily implemented through hook_comment_insert:

function mymodule_comment_insert($comment) {
  // Create a forum post!
  $node = node_load($comment->nid);
  // Need a flag to prevent recursive behaviour.
  static $executed = FALSE;
  if (!$executed) {
    $executed = true;
    if ($node->type != 'forum') {
      // Find a forum topic with the same title.
      $query = new EntityFieldQuery();
      $query->entityCondition('entity_type', 'node')
        ->entityCondition('bundle', 'forum')
        ->propertyCondition('status', NODE_PUBLISHED)
        ->propertyCondition('title', $node->title, '=')
        ->addMetaData('account', user_load(1)); // Run the query as user 1.

      $result = $query->execute();
      
      $forum = NULL;
      
      if (isset($result['node'])) {
        $news_items_nids = array_keys($result['node']);
        $forum = node_load(reset($news_items_nids));
        
        // Add it as a new comment.
        unset($comment->cid);
        $comment->nid = $forum->nid;
        
        comment_submit($comment);
        comment_save($comment);
      } 
      else {
        $forum = new stdClass(); // Create a new node object
        $forum->type = "forum"; // Or page, or whatever content type you like
        node_object_prepare($forum); // Set some default values
        $forum->title = $node->title;
        $forum->language = $node->language; // Or e.g. 'en' if locale is enabled
        $forum->uid = $comment->uid; // UID of the author of the node; or use $node->name
        
        $value = "

Opinión sobre el artículo: " . $node->title . "

" . $node->field_entradilla[LANGUAGE_NONE][0]['value'] . "" . $comment->comment_body[LANGUAGE_NONE][0]['value']; $forum->body[$node->language][0]['value'] = $value; $forum->body[$node->language][0]['summary'] = ''; $forum->body[$node->language][0]['format'] = 'filtered_html'; $forum->taxonomy_forums[$node->language][0]['tid'] = 475; if($forum = node_submit($forum)) { // Prepare node for saving node_save($forum); } } } else { // Aplicamos a la inversa, el comentario del foro // lo pasamos al nodo para que las conversaciones // estén sincronizadas. // Buscar un tema de foro con el mismo título. $query = new EntityFieldQuery(); $query->entityCondition('entity_type', 'node') ->entityCondition('bundle', 'forum', '') ->propertyCondition('status', NODE_PUBLISHED) ->propertyCondition('title', $node->title, '=') ->addMetaData('account', user_load(1)); // Run the query as user 1. $result = $query->execute(); if (isset($result['node'])) { $news_items_nids = array_keys($result['node']); $forum = node_load(reset($news_items_nids)); unset($comment->cid); $comment->nid = $forum->nid; comment_submit($comment); comment_save($comment); } } } }

Along with this change we also made some very basic adjustments such as:

  • Allowing anonymous comments and remove the need for registering
  • Reducing the number of fields in their subscribe form from 5 to 2.
  • Adding subscribe pop-ups
  • Etc.

The result? Conversions (newsletter subscriptions in this case) were up from 1-2 per day to 25-50 in less than 2 weeks, and user activity in forums has been growing steadily day after day. 

Our customer was sad that he had lost a year (since the site was re-launched using Drupal) of user conversions and engagement, but happy to have now found the right partner to make his project succeed.

NOTE: This site has no options to comment because we are still setting it up! Launching before finishing is not usual for us, but we wanted to try something new here and do some "agile deployment" with an iterative approach.

Jan 18 2015
Jan 18

Objective

Select like-minded users from a local community website.

Pre-requisites

  1. A Drupal website with the votingapi module enabled and at least a few dozen votes by registered users.
  2. A working installation of the R language.

Extract data

For each user, select all other users that voted on same node and comments:

SELECT v1.uid uid1, v2.uid uid2, u.name name2,
  v1.value value1, v2.value value2
FROM votingapi_vote v1
JOIN (votingapi_vote v2, users u)
  ON (v1.uid != v2.uid AND v1.entity_id = v2.entity_id
    AND v1.entity_type = v2.entity_type AND v2.uid = u.uid)
WHERE v1.uid > 0 AND v1.uid < v2.uid;

This produces a table

uid1    uid2    name2   value1  value2
1       2       Bob     100     100
1       2       Bob     20      20
1       2       Bob     40      40
1       2       Bob     100     100
1       2       Bob     20      100
1       2       Bob     100     100
1       2       Bob     100     100
1       2       Bob     100     100
1       2       Bob     100     100
1       2       Bob     80      80
1       2       Bob     100     20
1       2       Bob     20      20
1       2       Bob     60      60
1       2       Bob     100     100
1       2       Bob     100     100

with five columns:

  1. first user id
  2. second user id
  3. second user's name
  4. vote of the first user
  5. vote of the second user

The important parts in the SQL are

  1. the JOIN on the same table, which generates all permutations of uid1 and uid2
  2. the WHERE clause on v1.uid, which reduces the permutations to combinations.

The uid of 0 is skipped, because it is the uid of the anonymous user. Every anonymous vote is attributed to it.

Calculate similarity

It can be done in PHP, but why bother? Here's a handy R script that takes the above table as in.tsv and produces, for each user, a file with the following columns:

  1. id of the other user
  2. username
  3. number of votes in common
  4. Pearson's correlation coefficient between votes
  5. a p-value that indicates how certain was the algorithm.
#!/usr/bin/env Rscript
# The assignment arrows and comparison operators below were lost when this
# post was published; this is a reconstruction based on the description above.
d <- read.delim("in.tsv")
for (pair in split(d, list(d$uid1, d$uid2), drop = TRUE)) {
  if (nrow(pair) > 7) {
    correlation <- cor(pair$value1, pair$value2)
    test <- cor.test(pair$value1, pair$value2)
    if (!is.na(test$p.value) && test$p.value < 0.05) {
      line <- paste(pair$uid2[1], as.character(pair$name2[1]), nrow(pair),
                    correlation, test$p.value, sep = "\t")
      cat(line, "\n", sep = "", file = paste0("pearsons/", pair$uid1[1]),
          append = TRUE)
    }
  }
}

Notice the use of the cor(x, y) function, which calculates the correlation, and cor.test(x, y), which produces additional metrics for the correlation, including the p-value. By convention, everything with a p-value above 0.05 is considered uncertain, so we only print lines where the p-value is below 0.05.

Here's the output from the above data:

2       Bob     15      0.6039604       0.01710946

Display results

The rest is fairly obvious. I've chosen to display the data as a tag cloud on user profiles. First, hook into the user menu with hook_menu():

/**
 * Hook into the user menu
 */
function mymodule_menu() {
  $items['user/%user/likeminded'] = array(
    'access callback' => TRUE,
    'access arguments' => array(1),
    'page callback' => 'mymodule_likeminded', // function defined below
    'page arguments' => array(1),
    'title' => 'Likeminded',
    'weight' => 5,
    'type' => MENU_LOCAL_TASK,
  );
  return $items;
}

I fetch the user's data file as generated by the R script above and display the data from it as a cloud of usernames in varying sizes:

/**
 * Display likeminded users
 */
function mymodule_likeminded($arg){

  if (is_object($arg) && !$arg->uid) {
    return;
  }
  # this is my path to the results, your path may be different
  $path =  drupal_get_path('module', 'mymodule') . '/pearsons/' . $arg->uid; 
  $lines = array();
  $min = 0; $max = 0;

  if ($handle = @fopen($path, 'r')) {
    while($line = fgets($handle)) {
      $line = explode("\t", $line);
      if ($line[2] >= $max) { $max = $line[2]; }
      if ($line[2] <= $min || !$min) { $min = $line[2]; }
      $lines[] = $line;
    }
    fclose($handle);
  }

  // NOTE: the HTML markup in this function was stripped when the post was
  // published; the spans below are an approximation of the original output.

  // Like-minded users.
  $output = '<h3>' . t('Like-minded') . '</h3>';
  $output .= '<div class="likeminded">';
  foreach ($lines as &$line) {
    if ($line[3] > 0) {
      $size = mymodule_font_size($min, $max, $line[2]);
      $opacity = $line[3];
      $output .= '<span style="font-size: ' . $size . 'px; opacity: ' . $opacity . ';">';
      $output .= l($line[1], 'user/' . $line[0]);
      $output .= '</span> ';
    }
  }
  $output .= '</div>';

  // Adversaries.
  $output .= '<h3>' . t('Adversaries') . '</h3>';
  $output .= '<div class="adversaries">';
  foreach ($lines as &$line) {
    if ($line[3] < 0) {
      $size = mymodule_font_size($min, $max, $line[2]);
      $opacity = -$line[3];
      $output .= '<span style="font-size: ' . $size . 'px; opacity: ' . $opacity . ';">';
      $output .= l($line[1], 'user/' . $line[0]);
      $output .= '</span> ';
    }
  }
  $output .= '</div>';

  return $output;
} 

/**
 * calculate the font size in proportion to the maximum and minimum of common votes
 */
function mymodule_font_size($min_count, $max_count, $cur_count,
  $min_font_size=11, $max_font_size=36) {
  if ($min_count == $max_count) # avoid DivideByZero exception
  {
    return $min_font_size;
  }
  return (
    ($max_font_size - $min_font_size)
    /
   ($max_count - $min_count)
   *
   ($cur_count - $min_count) + $min_font_size);
}

That's it.

The algorithm scales fairly well. It takes around one minute to extract the data and around 10 minutes to calculate similarity on a database of 100 000 users, 1 000 000 posts and 4 500 000 votes, all on the same server that runs the website.

The lead image shows a real user profile page with a selection of like-minded users and adversaries.

P.S. If there's enough interest, I will rewrite the above code as a Drupal module.

P.P.S. Want to datamine your own data and receive an understandable explanation afterwards? Drop me a line.

Jan 17 2015
Jan 17

Drupal Form Ajax Example

Why reload the whole page when you can update just certain parts of the DOM? Ajax allows you to do exactly this: dynamically update content. One of the many great uses of Ajax is form validation. In this example, we will see how to implement it.

We will be making a simple form which will contain a text field that will validate if the username entered exists, and a button that will replace the text field value with a random existing username.

Building The Form

First, we need to define our two form elements:

$form['user_name'] = array(
  '#type' => 'textfield',
  '#title' => 'Username',
  '#description' => 'Please enter in a username',
);

$form['random_user'] = array(
  '#type' => 'button',
  '#value' => 'Random Username',
);

Next, to start using Ajax in Drupal, all you need to specify is the “callback”, or function to call, when the “event”, or trigger, is fired on a given form element, in an array under the “#ajax” key:

$form['user_name'] = array(
  '#type' => 'textfield',
  '#title' => 'Username',
  '#description' => 'Please enter in a username',
  '#ajax' => array(
    // Function to call when event on form element triggered.
    'callback' => 'Drupal\ajax_example\Form\AjaxExampleForm::usernameValidateCallback',
    // Javascript event to trigger Ajax. Currently for: 'onchange'.
    'event' => 'change',
  ),
);

In the “callback”, include the full namespaced class and function you want to call. The event can be any Javascript event without the “on”. A list of Javascript events can be found here.

Once you have added these two keys, you can add extra options such as “effect” and “progress”. More options can be found in the Ajax API documentation. Here are the finished elements:

$form['user_name'] = array(
  '#type' => 'textfield',
  '#title' => 'Username',
  '#description' => 'Please enter in a username',
  '#ajax' => array(
    // Function to call when event on form element triggered.
    'callback' => 'Drupal\ajax_example\Form\AjaxExampleForm::usernameValidateCallback',
    // Effect when replacing content. Options: 'none' (default), 'slide', 'fade'.
    'effect' => 'fade',
    // Javascript event to trigger Ajax. Currently for: 'onchange'.
    'event' => 'change',
    'progress' => array(
      // Graphic shown to indicate ajax. Options: 'throbber' (default), 'bar'.
      'type' => 'throbber',
      // Message to show along progress graphic. Default: 'Please wait...'.
      'message' => NULL,
    ),
  ),
);

$form['random_user'] = array(
  '#type' => 'button',
  '#value' => 'Random Username',
  '#ajax' => array(
    'callback' => 'Drupal\ajax_example\Form\AjaxExampleForm::randomUsernameCallback',
    'event' => 'click',
    'progress' => array(
      'type' => 'throbber',
      'message' => 'Getting Random Username',
    ),
  ),
);

Creating The Callbacks

After creating our form elements, it is time to create the callback functions, which return a response describing what to update on the page.

These callbacks will return an instance of \Drupal\Core\Ajax\AjaxResponse. Each AjaxResponse instance will contain jQuery commands that will execute on the form. You can use the “addCommand()” method on AjaxResponse to add commands that implement \Drupal\Core\Ajax\CommandInterface.
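Before the full callbacks below, here is a minimal sketch of the pattern; the '#my-wrapper' selector is hypothetical:

use Drupal\Core\Ajax\AjaxResponse;
use Drupal\Core\Ajax\HtmlCommand;

// Inside an Ajax callback: build a response with one command that
// replaces the contents of a (hypothetical) wrapper element.
$response = new AjaxResponse();
$response->addCommand(new HtmlCommand('#my-wrapper', 'New content'));
return $response;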

Some commands such as CssCommand and ChangedCommand did not work. Thankfully, there is InvokeCommand, which allows you to run any jQuery command. You can construct it with a jQuery selector, method, and arguments:

public InvokeCommand::__construct($selector, $method, array $arguments = array())
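For example, a quick sketch of invoking an arbitrary jQuery method through InvokeCommand (the 'error' class name here is just an illustration):

// Invoke jQuery's addClass() on the username field to flag it.
$ajax_response->addCommand(new InvokeCommand('#edit-user-name', 'addClass', array('error')));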

Here are the two callbacks for our form:

public function usernameValidateCallback(array &$form, FormStateInterface $form_state) {
  // Instantiate an AjaxResponse Object to return.
  $ajax_response = new AjaxResponse();

  // Check if Username exists and is not Anonymous User ('').
  if (user_load_by_name($form_state->getValue('user_name')) && $form_state->getValue('user_name') != false) {
    $text = 'User Found';
    $color = 'green';
  }
  else {
    $text = 'No User Found';
    $color = 'red';
  }

  // Add a command to execute on form, jQuery .html() replaces content between tags.
  // In this case, we replace the description with whether the username was found or not.
  $ajax_response->addCommand(new HtmlCommand('#edit-user-name--description', $text));

  // CssCommand did not work.
  //$ajax_response->addCommand(new CssCommand('#edit-user-name--description', array('color', $color)));

  // Add a command, InvokeCommand, which allows for custom jQuery commands.
  // In this case, we alter the color of the description.
  $ajax_response->addCommand(new InvokeCommand('#edit-user-name--description', 'css', array('color', $color)));

  // Return the AjaxResponse Object.
  return $ajax_response;
}

public function randomUsernameCallback(array &$form, FormStateInterface $form_state) {
  // Get all User Entities.
  $all_users = entity_load_multiple('user');

  // Remove Anonymous User.
  array_shift($all_users);

  // Pick Random User.
  $random_user = $all_users[array_rand($all_users)];

  // Instantiate an AjaxResponse Object to return.
  $ajax_response = new AjaxResponse();

  // ValCommand does not exist, so we can use InvokeCommand.
  $ajax_response->addCommand(new InvokeCommand('#edit-user-name', 'val', array($random_user->get('name')->getString())));

  // ChangedCommand did not work.
  //$ajax_response->addCommand(new ChangedCommand('#edit-user-name', '#edit-user-name'));

  // We can still invoke the change command on #edit-user-name so it triggers Ajax on that element to validate username.
  $ajax_response->addCommand(new InvokeCommand('#edit-user-name', 'change'));

  // Return the AjaxResponse Object.
  return $ajax_response;
}

Finished Form

Here is our finished Ajax Example Form:

Drupal Form Ajax Example

This blog post was created for Google Code-In 2014 to learn about a Drupal Core System.

Full Module Code

Jan 17 2015
Jan 17

INTRODUCTION

The Drupal Address Field Module is a great tool that we use often. There are, however, many times when the default output causes some issues for us. By default, Address Field places all of its individual field components inside a Fieldset wrapper. This is usually a nice feature, but there are times when you may want to remove this Fieldset wrapper for aesthetics. Or, perhaps, you'd like to place additional fields within the Address Field's Fieldset. We'll show you how to do both.

REMOVING FIELDSETS FROM ALL ADDRESS FIELDS

As usual, Drupal provides a handy Hook Function that allows us to override the Address Field's output to remove its Fieldset wrapper.

We can use hook_field_widget_WIDGET_TYPE_form_alter to alter widget forms for a specific widget provided by another module. To remove the Fieldset wrapper from all Address Field output we simply use this hook in our own custom module to change the element type from 'fieldset' to 'container'.

NOTE:

The Address Field's 'Widget Type' name is 'addressfield_standard'. This can be discovered by examining the module's code or by using the dpm() function from the Devel module to examine the $form output returned to a hook_form_alter() function.
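If you want to poke around yourself, a throwaway sketch like this (the function prefix is a placeholder for your module name) will dump the whole form for inspection:

/**
 * Implements hook_form_alter().
 */
function MY_CUSTOM_MODULE_form_alter(&$form, &$form_state, $form_id) {
  // Requires the Devel module; inspect the address field's widget settings.
  dpm($form);
}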

Create a function similar to the following in your custom module to remove the Fieldsets from all address fields.

/**
 * Implements hook_field_widget_WIDGET_TYPE_form_alter().
 */
function MY_CUSTOM_MODULE_field_widget_addressfield_standard_form_alter(&$element, &$form_state, $context) {
  $element['#type'] = 'container';
}

REMOVING FIELDSETS FROM SPECIFIC ADDRESS FIELDS

If you'd like to remove the Fieldsets from specific Address Field output instead of all of them then we can simply use the $context variable that is provided to our hook to only act upon certain conditions. In the example below we're checking the $context array for a specific bundle to act upon.

function MY_CUSTOM_MODULE_field_widget_addressfield_standard_form_alter(&$element, &$form_state, $context) {
  if ($context['instance']['bundle'] == 'student_registration') {
    $element['#type'] = 'container';
  }
}

CUSTOM FIELDSETS

We often have a situation that requires an Address Field to have additional fields within its Fieldset, for example, email or phone number fields. To achieve this we use the method above to remove Address Field's default Fieldset wrapper and then simply add our own using Drupal's Field Group Module.

  1. Remove Address Field's Fieldset out from all or specific output using method above
  2. Create custom Drupal fields within your entity. For example, Phone Number or Email fields
  3. Create a custom Fieldset in your entity using the Field Group module's provided field groups
  4. Place the Address Field and your custom fields within the field group you created

Your output should now be similar to the 'after' image in the screenshot below.

Drupal Address Field Remove Fieldset

Jan 16 2015
Jan 16

On a recent project I was using the combination of Field Collection, Entity Reference, Taxonomy Terms, and Context to make a reusable set of references to terms on various content types. Then, based on the referenced term, I wanted to satisfy a context condition.

Due to the somewhat complex structure, the context was not aware of the term referenced through entity reference and the field collection.

In a case like this, creating a custom context plugin was a good solution.

I got started by reading a couple of helpful posts by others: Custom Context Conditions and Extending Drupal's Context Module: Custom Condition Based on Field Value.

The plugin must be included in a custom Drupal module. This involves setting up the .info and .module files, which is documented elsewhere. I will go through the needed functions in the necessary files.
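For reference, a minimal .info file for the example module might look something like this (the name and description are placeholders; the context dependency is the important part):

name = Context Plugin Example
description = Context condition for terms referenced through field collections.
core = 7.x
dependencies[] = context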

There were four parts to the module I created:

  1. Letting Context know about the plugin.
  2. Describing the plugin for the Context UI.
  3. Giving Context the correct condition.
  4. Telling Context when to check for the condition.

Letting Context know about the plugin

The first function I implemented was hook_context_plugins(). It tells Context in which directory to find the plugin file, the name of the plugin file, the name of the class for the plugin, and the name of the parent class. I put this function in the .module file. (The string contextpluginexample is the name of the example module I am using in this post. You would replace it with your own module name when writing your own code.)

/**
 * Implements hook_context_plugins().
 */
function contextpluginexample_context_plugins() {
  $plugins = array(
    'contextpluginexample_field_collection_entityreference_term' => array(
      'handler' => array(
        'path' => drupal_get_path('module', 'contextpluginexample') . '/plugins/context',
        'file' => 'contextpluginexample_field_collection_entityreference_term.inc',
        'class' => 'contextpluginexample_field_collection_entityreference_term',
        'parent' => 'context_condition_node_taxonomy',
      ),
    ),
  );
  return $plugins;
}

To make things easy, I set the filename for the plugin to be the same as the class name. I also made the name descriptive so I could remember what it does by looking at it. The parent class name is the class I wanted to extend for the plugin. The most generic class is context_condition, but I used context_condition_node_taxonomy because I needed most of the functionality of the included Taxonomy condition. (Look through the code in the plugins directory in the Context module for more classes that can be extended.)

Describing the plugin for the Context UI

The next function I implemented was hook_context_registry(). It tells Context what to display for the title and description of the plugin in the UI. I also put this function in the .module file.

/**
 * Implements hook_context_registry().
 */
function contextpluginexample_context_registry() {
  $registry = array(
    'conditions' => array(
      'contextpluginexample_field_collection_entityreference_term' => array(
        'title' => t('Field Collection Entity Reference Term'),
        'description' => t('Set this context based on whether or not the node has a Taxonomy Term referenced by Entity Reference in a Field Collection.'),
        'plugin' => 'contextpluginexample_field_collection_entityreference_term',
      ),
    ),
  );
  return $registry;
}

The title text displays in the select list or vertical tabs for Context conditions and the description text displays when the vertical tab for the selected context is active.

Giving Context the correct condition

This is where the action is for the plugin. Here is where I supply the values that Context uses to determine if the condition is met.

I created a new file named contextpluginexample_field_collection_entityreference_term.inc in the plugins/context directory of my module to hold the class I needed to extend.

Since my goal here was to mimic the way the Taxonomy condition worked by having a multi-select list with all the taxonomy terms in the site available for selection, I looked at the context_condition_node_taxonomy.inc file for examples.

The nice thing about extending the class of context_condition_node_taxonomy in a custom module is that I automatically got all its functions, which includes those that make the form, etc.

The execute() function was the only one I needed to override in this case. Looking at that function in the file from the Context module helped me understand what I needed to do.

function execute($node, $op) {
  // build a list of each taxonomy reference field belonging to the bundle for the current node
  $fields = field_info_fields();
  $instance_fields = field_info_instances('node', $node->type);
  $check_fields = array();
  foreach ($instance_fields as $key => $field_info) {
    if ($fields[$key]['type'] == 'taxonomy_term_reference') {
      $check_fields[] = $key;
    }
  }
 
  if ($this->condition_used() && !empty($check_fields)) {
    foreach ($check_fields as $field) {
      if ($terms = field_get_items('node', $node, $field)) {
        foreach ($terms as $term) {
          foreach ($this->get_contexts($term['tid']) as $context) {
            // Check the node form option.
            if ($op === 'form') {
              $options = $this->fetch_from_context($context, 'options');
              if (!empty($options['node_form'])) {
                $this->condition_met($context, $term['tid']);
              }
            }
            else {
              $this->condition_met($context, $term['tid']);
            }
          }
        }
      }
    }
  }
}

I could see what happens in the function: it checks for taxonomy_term_reference fields, and then gets the term IDs for the referenced terms.

In my execute() function I knew I would have to do a lot more traversing of fields, because the term IDs I was after were a few hops away from the node itself, behind the field collection and entity references.

Also, since I was using Entity Reference, the array key for the reference was target_id and not tid, so that part of the function had to be updated, too.

/**
 * Use Field Collections with Entity References to Taxonomy Terms as
 * Context conditions.
 */
class contextpluginexample_field_collection_entityreference_term extends context_condition_node_taxonomy {
  function execute($node, $op) {
    // build a list of each taxonomy reference field belonging to the bundle for the current node
    $fields = field_info_fields();
    $instance_fields = field_info_instances('node', $node->type);
    $check_fields = array();
    foreach ($instance_fields as $key => $field_info) {
      if ($fields[$key]['type'] == 'field_collection') {
        // get field collection field name
        $field_collection_name = $fields[$key]['field_name'];
        // Get field collection item IDs (allowing for multiple).
        $field_collection_values = field_get_items('node', $node, $field_collection_name);
        $field_collection_item_ids = array();
        if ($field_collection_values) {
          foreach ($field_collection_values as $field_collection_value) {
            $field_collection_item_ids[] = $field_collection_value['value'];
          }
        }
        // Load (multiple) field collection entities.
        $field_collection_items = field_collection_item_load_multiple($field_collection_item_ids);
        foreach ($field_collection_items as $field_collection_item) {
          // Get the list of fields in the field collection.
          $field_collection_instances = field_info_instances('field_collection_item');
          $field_collection_fields = $field_collection_instances["$field_collection_name"];
          // Get the field info for each field in the field collection.
          foreach ($field_collection_fields as $field_collection_field_key => $field_collection_field) {
            $field_collection_field_info = field_info_field($field_collection_field_key);
            // Check for entityreference fields referencing taxonomy terms.
            if ($field_collection_field_info['type'] == 'entityreference' && $field_collection_field_info['settings']['target_type'] == 'taxonomy_term') {
              // Get the term ID values.
              $check_fields[$field_collection_field_info['field_name']] = field_get_items('field_collection_item', $field_collection_item, $field_collection_field_info['field_name']);
            }
          }
        }
      }
    }
 
    if ($this->condition_used() && !empty($check_fields)) {
      foreach (array_filter($check_fields) as $terms) {
        foreach ($terms as $term) {
          foreach ($this->get_contexts($term['target_id']) as $context) {
            // Check the node form option.
            if ($op === 'form') {
              $options = $this->fetch_from_context($context, 'options');
              if (!empty($options['node_form'])) {
                $this->condition_met($context, $term['target_id']);
              }
            }
            else {
              $this->condition_met($context, $term['target_id']);
            }
          }
        }
      }
    }
  }
}

My function first changed when I checked for the field type. I needed to find fields that were of the type field_collection, not taxonomy_term_reference.

Once I found a field_collection field, I got its name and then had to get the field_collection_item IDs for the instances of that field collection on the node.

Then, once the instances were identified, I loaded the field collection entities so I could get the names of the fields in those entities and retrieve information about those fields.

At this point, I had to check whether each field in the field collection was an entityreference field that referenced a taxonomy term. If it was, then I stored the values of those references for later use.

Once I had all the values stored, then it came time to distill the target_id from those values to send to Context for it to compare against the term values set in the condition in the UI.

It took a lot of work navigating through the field collections, but thanks to the Devel module and the dpm() function I was able to figure out what I needed to know along the chain of references. (I was able to test a lot of this function by adding aspects of it to page.tpl.php within PHP tags and using dpm() on the variables I needed information about.)

Telling Context when to check for the condition

The last function I needed to implement was harder for me to figure out. My first couple of tries were from examples I had seen in blog posts, but they didn’t quite work for my situation. I turned to the code itself.

The context.api.php file contains information about all the hooks that Context provides. By looking through that file, I was able to find the hook_context_node_condition_alter() function, which made sense to use for what I was doing. I knew it was the right one to implement when I saw that the parameters it passes to execute() were $node and $op, which were included in the execute() function I was implementing from the context_condition_node_taxonomy class.

/**
 * Implements hook_context_node_condition_alter().
 */
function contextpluginexample_context_node_condition_alter(&$node, $op) {
  if ($plugin = context_get_plugin('condition', 'contextpluginexample_field_collection_entityreference_term')) {
    $plugin->execute($node, $op);
  }
}

Once I had added this last function to the .module file, everything came together and my context was active on nodes that had the terms referenced through an entity reference field in a field collection.

Jan 16 2015
Jan 16

Now that Drupal 8 is in beta, I’ve been trying to spend some more time with it. Reading articles and watching presentations are good ways to keep up with where things are (or are going), but nothing beats actually using it. Simplytest.me, Pantheon, and Acquia Cloud all now provide free ways to spin up an instance of the latest version (beta 4 as of this writing), so there’s no excuse not to try it out, even if a local setup seems daunting.

After clicking around a bit and admiring some of the administration interface improvements, I set to work on putting a test site together.

Arguably the most essential site building tool, Views is now part of Drupal 8 core. In being integrated, the module has also been leveraged to power most of the default lists and blocks (think content admin page, front page, taxonomy term pages, user admin page, recent content block, etc.). You can use your Views knowledge to modify these site elements or use them as starting points for your own creations.

Credit goes to the VDC (Views in Drupal Core) team for doing an excellent job of porting the module and converting to the new core plugin system. Although VDC wasn’t one of the original initiatives, it was one of the first ones ready, and the team was then able to use what it learned in the process to help out on other initiatives too.

The Views refactoring has brought many improvements, but in this post I’m going to focus on some new Displays functionality. A common task when putting a new site together is to customize the out-of-the-box pages (particularly the home page and content admin page), so I headed to Structure -> Views to copy a default view and get started.

After realizing that everything was mostly the same, one of the first differences I spotted was that you can now clone a display as a different type, so the block you’ve been working on can easily be turned into a page. Each display has its own “View” button that now also allows you to “duplicate as”, which is slightly different from the old way of doing things. Technically, Views still uses the concept of a “Master” display that can be overridden. You can see it if you create a view with no display type, but it goes away after you create your first display. It pretty much disappears into the UI and is only present in the various settings’ “Apply” buttons, where you can save your changes by display or universally (“this display” vs. “all displays”).

Examining the “duplicate as” options in my test view, I noticed three new display types:

Embed

In the Views module settings, you can choose to “Allow embedded displays”, and they can be used in code via views_embed_view().
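For example, a quick sketch of embedding such a display from code (the view machine name and display ID here are hypothetical):

// Returns a render array for the 'embed_1' display of view 'my_view'.
$output = views_embed_view('my_view', 'embed_1');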

Entity Reference

In your Entity Reference field settings, you can choose to “filter by an entity reference view” and use a view with this display type to determine what’s referenceable.

REST export (with RESTful Web Services enabled)

You can convert the output of a view into whatever format is requested, such as JSON or XML, and easily create a REST API for an application.

These Views improvements represent a few differences coming in D8, but are just a small taste of some of the exciting new functionality we have to look forward to in the near future. What Drupal 8 updates interest you the most?
