@shedd
Created April 21, 2012 00:13
Import Pivotal Tracker into Kanbanery - CSV export/import translation

Pivotal Tracker CSV export to Kanbanery CSV import translator

This script is designed to take a CSV input file, generated by Pivotal Tracker's CSV export process, and translate it into the CSV import format used by Kanbanery.

The fields used by the two tools don't map exactly, but this script adapts them as well as it can. For instance, Pivotal makes heavy use of labels, while Kanbanery has no tags or labels; instead, the script appends each story's labels to its title so they can still be found by searching in Kanbanery.
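
For example (the title and labels below are invented for illustration), a story's labels end up appended to its Kanbanery title:

story = Story.new
story.title  = "Fix login timeout"
story.labels = "auth, refactoring"
story.full_title   # => "Fix login timeout (auth, refactoring)"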

Additionally, the Kanbanery import format doesn't accept tasks or comments as explicit data elements, so the script merges them into the Kanbanery description field, separated by ASCII horizontal rules.
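
The resulting Kanbanery description is laid out roughly like this (the story text, comment, and task shown are invented for illustration):

The original Pivotal story description

---------------------------------

A comment imported from Pivotal

---------------------------------

A task imported from Pivotal

Original URL: <the Pivotal story URL>
Original Creation Date: <the Pivotal "Created at" value>
Pivotal Tracker ID: <the Pivotal story Id>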

Hopefully, this is useful to you!

Configuration

To configure the script and run it for yourself:

  • Change the input and output filenames to suit
  • Edit the creator's email to your own
  • You may wish to customize the Story.destination_column method (see the sketch after this list). We were directing stories with a label of "refactoring" into a Refactoring column in Kanbanery; similarly, we sent our bugs into a specific column for legacy bugs.
  • As discussed, we were importing all of our bugs into a "Legacy Bugs" column. The script is configured to map all Pivotal stories with a type of 'bug' into the column defined by @@bug_column. If you want bugs to go into the backlog instead, just change this to "Backlog".
  • Want everything to go into the icebox instead of the backlog or other columns? Set @@icebox to true - this overrides all of the other column detection.
  • We weren't using priorities, so Story.priority defaults to 0. You could modify it to detect high-priority stories via specific labels or other criteria. Kanbanery requires priority to be an integer of 0, 1, or 2.
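
As an illustration, here is one way the destination_column and priority methods in the Story class could be customized; the "urgent" label and the "Backlog" fallback column are assumptions for this example, not part of the original script:

def destination_column
  return "Icebox" if @@icebox                            # icebox overrides everything else
  return "Refactoring" if self.labels.include? "refactoring"
  return @@bug_column if self.type == "bug"
  "Backlog"                                              # send everything else to the backlog
end

def priority
  return 2 if self.labels.include? "urgent"              # hypothetical high-priority label
  0
end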

Notes

The script was written and tested on Ruby 1.9.3-p125.

#!/usr/bin/env ruby
require "csv"
#########################################################################################
# Translate Pivotal Tracker's CSV export into Kanbanery's CSV Import
# ---------------------------------------------------------------------------------------
# Pivotal's CSV fields:
#
# Id
# Story
# Labels
# Iteration
# Iteration Start
# Iteration End
# Story Type
# Estimate
# Current State
# Created at
# Accepted at
# Deadline
# Requested By
# Owned By
# Description
# URL
# Comment
# ...
# Task
# Task Status
# ...
#
# ---------------------------------------------------------------------------------------
# Kanbanery CSV fields:
#
# title
# type
# estimate
# priority
# description
# column_name
# creator_email
#
#########################################################################################
# configure the input (Pivotal) and output (Kanbanery) csv files here
input_filename = "pivotal.csv"
output_filename = "kanbanery.csv"
# configure the email address for the task creator
@@creator_email = 'email@mail.com'
# bug column name (map all bugs into this column)
@@bug_column = "Legacy Bugs"
# load into icebox? (overrides the bug column and all other column names)
@@icebox = false
#########################################################################################
stories = []
@@separator = "\n\n---------------------------------\n\n"
class Story
  attr_accessor :title, :labels, :type, :description, :tasks, :comments, :url, :created_at, :id

  # story title with the Pivotal labels appended so they remain searchable
  def full_title
    self.title + (self.labels == "" ? "" : " (#{ self.labels })")
  end

  # merge the description, comments, tasks, and original Pivotal metadata into one field
  def full_description
    output = ""
    output << self.full_title + @@separator if self.full_title.size > 255
    output << self.description + @@separator
    output << @@separator unless self.comments.size.zero?
    output << self.comments.join(@@separator)
    output << @@separator unless self.tasks.size.zero?
    output << self.tasks.join(@@separator)
    output << "Original URL: " + self.url
    output << "\nOriginal Creation Date: " + self.created_at
    output << "\nPivotal Tracker ID: " + self.id
    output
  end

  # pick the Kanbanery column this story should land in
  def destination_column
    return "Icebox" if @@icebox
    return "Refactoring" if self.labels.include? "refactoring"
    return @@bug_column if self.type == "bug"
    "Imported"
  end

  def priority
    0
  end

  # row in the Kanbanery CSV import column order
  def to_kanbanery
    [self.full_title[0,255], self.type, '', self.priority, self.full_description, self.destination_column, @@creator_email]
  end
end
# parse the pivotal tracker csv & assign it headers
# pivotal csv is a mess of multi-line fields - this parse style is based on: https://gist.github.com/894624
raw_stories = CSV.read(input_filename)
headers = raw_stories.shift
revised_headers = []
# we need unique headers since there is more than one comment typically
headers.each_with_index do |header, index|
  if ["Comment", "Task", "Task Status"].include?(header)
    revised_headers << header + " " + index.to_s
  else
    revised_headers << header
  end
end
# take the headers and map them to the csv lines to produce a hash
# the following is based on: http://snippets.dzone.com/posts/show/3899
string_data = raw_stories.map {|row| row.map {|cell| cell.to_s } }
array_of_hashes = string_data.map {|row| Hash[*revised_headers.zip(row).flatten] }
# create objects from the hash
array_of_hashes.reverse.each do |row|
  story = Story.new
  story.id = row['Id']
  story.title = row['Story']
  story.labels = row['Labels']
  story.type = row['Story Type']
  story.created_at = row['Created at']
  story.description = row['Description']
  story.url = row['URL']
  # pull all of the comment columns; Hash#select returns a Hash in Ruby 1.9+,
  # so take its values directly instead of flattening key/value pairs
  comments = row.select{ |k,v| k.include? 'Comment' }.values
  comments.delete_if { |a| a.nil? or a == "" }
  story.comments = comments
  # pull all of the task columns (but not the "Task Status" columns)
  tasks = row.select{ |k,v| (k.include? 'Task') and (!k.include? 'Task Status') }.values
  tasks.delete_if { |a| a.nil? or a == "" }
  story.tasks = tasks
  stories << story
end
# output the kanbanery csv
CSV.open(output_filename, "wb") do |csv|
  stories.each do |story|
    csv << story.to_kanbanery
  end
end
@hymerman

Running this under Ruby 1.9.3-p392 I get this error:

gistfile1.rb:146:in `block in <main>': undefined method `flatten!' for {}:Hash (NoMethodError)
        from gistfile1.rb:133:in `each'
        from gistfile1.rb:133:in `<main>'

Looks like flatten! doesn't exist in 1.9.3. Replacing "comments.flatten!" with "commentsarray = comments.flatten" and fixing up references to comments, and the same for tasks, is a way around the problem (probably not the most elegant way but this is the first Ruby code I've had to type!).
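
For context, the error comes from Hash#select returning a Hash in Ruby 1.9+; Hash responds to flatten but not flatten!. Taking the selected values directly avoids flattening altogether. A minimal illustration with a made-up row:

row = { "Comment 16" => "first comment", "Comment 17" => "" }
comments = row.select { |k, v| k.include? 'Comment' }   # still a Hash in Ruby 1.9+
comments.values                                         # => ["first comment", ""]
comments.flatten                                        # => ["Comment 16", "first comment", "Comment 17", ""]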
