Drupal 7 - Data Migration Through Feeds

Working with Drupal, we often need to import a large set of data into a site. There are various ways to import data; the Feeds module is simple and easy to use. Out of the box, Feeds supports many features that cover a variety of scenarios. The module is well documented, and you can access the documentation here.

In this part, we are going to explain the basics of importing data using Feeds.

To use the Feeds module, we need the Chaos Tools (ctools) and Job Scheduler modules enabled.

The first step in the process is to enable the required modules: Feeds and Feeds Admin UI. If you want example node and user importers, also enable Feeds Import.
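
If you prefer to enable these modules from code (for example, in an update hook), a minimal Drupal 7 sketch looks like the one below; the module machine names assume the standard 7.x packages.

  // Enable ctools, Job Scheduler, Feeds, Feeds Admin UI and the example importers.
  // module_enable() resolves module dependencies automatically in Drupal 7.
  module_enable(array('ctools', 'job_scheduler', 'feeds', 'feeds_ui', 'feeds_import'));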

The next step is to add an importer, which lets you import data. To do this, go to "admin/structure/feeds" and click "Add importer". If you have enabled Feeds Import, you will see the example importers here.

The next step is to configure the importer. There are four parts to this:

  1. Basic Settings
  2. Fetcher
  3. Parser
  4. Processor

Each is explained in detail below:

Basic Settings

These are basic settings for importers: 

Name and description are what you specified while creating the importer. "Attach to content type" determines whether the importer is attached to a content type form or uses the standalone form.

We use the standalone form if we want to import through the form the module provides at the /import path. If we attach to a content type, the import runs when a node of the selected content type is created.

Periodic updates run the import on a schedule. A properly configured cron is required for periodic updates to run. Keep periodic updates off if you are importing data only once.
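
The import period can also be set from code. The following is a rough sketch, assuming a Feeds importer with the hypothetical machine name article_csv and the 7.x importer configuration API; it switches the importer to hourly updates.

  // Hypothetical importer id 'article_csv'; adjust to your importer's machine name.
  $importer = feeds_importer('article_csv');
  // Run the import every hour (the value is in seconds).
  // Setting 'import_period' to FEEDS_SCHEDULE_NEVER turns periodic updates off.
  $importer->addConfig(array('import_period' => 3600));
  $importer->save();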

Import on submission starts the import as soon as the form is submitted. There is also an option to run the process in the background, which is particularly helpful when importing large data sets.

Fetcher

The fetcher determines where your feed is fetched from. We have two options here: File upload and HTTP fetcher.

With file upload, we upload a file from the local system; with the HTTP fetcher, we provide the content's URL. Each fetcher has its own settings. For this example, we are going with file upload.

For the file upload fetcher, you can decide which file extensions are allowed and where uploaded files should be saved.
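
Imports can also be triggered from code, which is handy for drush scripts or custom cron runs. The following is a rough sketch, assuming a standalone importer with the hypothetical machine name article_csv that uses the file upload fetcher, and a CSV file already saved under the public files directory.

  // Load the standalone source for the importer (no feed node attached).
  $source = feeds_source('article_csv');
  // Point the file fetcher at an already saved CSV file.
  $source->addConfig(array(
    'FeedsFileFetcher' => array('source' => 'public://feeds/articles.csv'),
  ));
  $source->save();
  // Run the import to completion without the batch UI.
  while (FEEDS_BATCH_COMPLETE != $source->import());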

Parser

The parser determines the format of the incoming data. Out of the box, the options include the common syndication parser (for RSS and Atom feeds), the CSV parser, the OPML parser, and the sitemap parser.

We are going with the CSV parser for this example.

Processor

The processor determines what type of content is created from the imported data. The options include the following:

Node processor: creates nodes upon import
Taxonomy term processor: creates taxonomy terms upon import
User processor: creates users upon import

For this example, we are using the node processor.

Node processor settings

These settings decide how the processor works. Each setting here affects what happens when the import runs.

Bundle: This is the type of node to create when the import runs. Based on the selected content type, we can create a field mapping in the next step.

Insert new nodes: Depending on the selection, new nodes are inserted based on the unique key in the mappings.

Update existing nodes: If the unique key already exists, existing nodes are updated, replaced, or left untouched, depending on the selection. This is particularly handy when the same importer is run more than once.

Text format: This determines the text format of text fields in the selected node type. If you have HTML content, it's better to go with the Full HTML format.

Action to take when previously imported nodes are missing in the feed: This determines what happens to previously imported nodes when they no longer appear in the current feed (for example, keeping, unpublishing, or deleting them).

Author: The author of the imported nodes. Leaving this empty makes the anonymous user the author. With the Authorize checkbox, you can also ensure that the selected user has permission to create this type of content.

Expire nodes: Keep this set to "never" if you do not want imported nodes to be deleted. Otherwise, nodes are deleted after the selected time.

Mapping

This is where we determine which CSV column maps to which field of the content type. In custom Drupal development, matching the correct fields is very important. For this example, we are mapping the title and body; the title is set as the unique field. Based on this and the settings in the previous step, new nodes will be created or existing ones updated.
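
For reference, a minimal CSV matching the title and body mapping above could look like the following; the column headers must match the mapping sources exactly.

  title,body
  "First imported article","<p>Body text of the first article.</p>"
  "Second imported article","<p>Body text of the second article.</p>"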

The importer is ready, and we can start importing nodes. Since we are using the standalone form, we need to go to /import and upload a CSV file as per our settings.

Select the importer we just created.

This page provides a sample template that helps with constructing the CSV file. Upload the CSV file in the required format and import the content. Upon completion, a status message is displayed.

If we want to delete all the nodes we imported, we can do so from the "Delete items" tab on the import page.
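
The same cleanup can be done from code. The sketch below again assumes the hypothetical article_csv importer and uses the FeedsSource clear operation, which mirrors what the "Delete items" tab does.

  // Remove every item previously imported by this source.
  $source = feeds_source('article_csv');
  while (FEEDS_BATCH_COMPLETE != $source->clear());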

The Feeds module provides many features out of the box and is very powerful. There are also various other modules available that extend its functionality.

Hope this helps you. Feel free to share your feedback and inputs.
