Audible Goodreads Importer
# This is a somewhat manual process, but far less manual than it could be.
# First, follow the instructions here: https://www.themodernnomad.com/audible-statistics-extractor/
# Copy and paste the results (however many pages there are) into Excel
# (it will auto-format, though you will need to remove the header row).
# Before running:
#   1. Install the required libraries: pandas and isbntools.
#   2. Point the read_excel argument at your file.
#   3. Point the to_csv argument at wherever you want to export.
# Then go to https://www.goodreads.com/review/import, upload the CSV file, and sit back and enjoy.
import pandas as pd
from isbntools.app import isbn_from_words

df = pd.read_excel(r'/path/to/the/saved/excel.xlsx')

# Fetch the ISBN from title and author using isbntools
df['isbn'] = df.apply(lambda x: isbn_from_words(x.Title + " " + x.Author), axis=1)

# Convert dates to the YYYY-MM-DD format Goodreads expects
df["Buy Date"] = pd.to_datetime(df["Buy Date"], format="%m-%d-%y", exact=False).dt.strftime('%Y-%m-%d')

# Put everything on an "audible" shelf
df["Bookshelves"] = "audible"

# Map Date Read to the Buy Date as a loose proxy (only for finished books)
df["Date Read"] = df.apply(lambda x: x["Buy Date"] if x["Time Left"] < 30 else None, axis=1)

# Goodreads' own export uses Exclusive Shelf for read status, but the importer
# doesn't actually register it...
df["Exclusive Shelf"] = df.apply(lambda x: "read" if x["Time Left"] < 30 else "currently-reading" if x["Time Left"]/x["Minutes"] < .8 else "to-read", axis=1)

# ...so work around it by also appending the status to Bookshelves: if less than
# 30 minutes are left, it's read; if you've listened to more than 20% of it,
# it's currently-reading; otherwise it's to-read.
df["Bookshelves"] = df.apply(lambda x: x["Bookshelves"] + " read" if x["Time Left"] < 30 else x["Bookshelves"] + " currently-reading" if x["Time Left"]/x["Minutes"] < .8 else x["Bookshelves"] + " to-read", axis=1)

# Rename Buy Date to Date Added
df.rename(columns={"Buy Date": "Date Added"}, inplace=True)

df.to_csv(r'/path/to/export/audible-goodreads-import.csv')
print("Done")
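The shelf-mapping rule above can be sketched as a standalone function, which makes the thresholds easier to see (function name is mine, not part of the script; both arguments are in minutes):

```python
def shelf_for(time_left, minutes):
    """Map remaining listening time to a Goodreads shelf.

    Under 30 minutes left counts as finished; more than 20% listened
    (i.e. less than 80% remaining) counts as in progress.
    """
    if time_left < 30:
        return "read"
    if time_left / minutes < 0.8:
        return "currently-reading"
    return "to-read"
```

For example, a 10-hour book with 5 hours left maps to "currently-reading", while one barely started maps to "to-read".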
# Audible Library Extractor, minified, from https://www.themodernnomad.com/audible-statistics-extractor/
var includeImage=!1,includeShows=!1,convertToMinutes=function(t){var e=t.match(/.*?(\d+)\s*?h.*?/);return 60*(null!=e?parseInt(e[1]):0)+(null!=(e=t.match(/.*?(\d+)\s*?m.*?/))?parseInt(e[1]):0)},headerRow=[$(document.createTextNode("Title")),$(document.createTextNode("Author")),$(document.createTextNode("Minutes")),$(document.createTextNode("Buy Date")),$(document.createTextNode("Rating")),$(document.createTextNode("Performance")),$(document.createTextNode("Story")),$(document.createTextNode("Time Left"))];includeImage&&headerRow.unshift($(document.createTextNode("Image")));var tableArray=[headerRow],getAuthor=function(t){var e=t.find("td:nth-of-type(3) .bc-list a");if(e.length>1){var n=t.find("td:nth-of-type(3) .bc-list").text().replace(/\s+/g," ").trim(),a="/search?searchAuthor="+encodeURIComponent(n);e=$('<a href="'+a+'">'+n+"</a>")}return e};function tableCreate(t,e){var n=document.createElement("table");n.style.width="70%",n.border="1";for(var a=0;a<e.length;++a)for(var o=n.insertRow(),r=0;r<e[a].length;++r){var d=o.insertCell();e[a][r].each(function(){$(this).clone().appendTo(d)})}t.appendChild(n)}jQuery('tr[class*="adbl-library-row"]').each(function(t){var e=$(this);if(includeShows||"View all episodes"!=e.find("td:nth-of-type(2) .bc-list-item:nth-of-type(2)").text().trim()){var n=e.find("td:nth-of-type(1) img").clone().attr("width","90"),a=[e.find("td:nth-of-type(2) .bc-list-item:nth-of-type(1)").contents(),getAuthor(e),$(document.createTextNode(convertToMinutes(e.find("td:nth-of-type(4)").text()))),$(document.createTextNode(e.find("td:nth-of-type(5)").text())),$(document.createTextNode(e.find("*[data-star-count]:first").attr("data-star-count"))),$(document.createTextNode(e.find("*[data-star-count]:eq(1)").attr("data-star-count"))),$(document.createTextNode(e.find("*[data-star-count]:eq(2)").attr("data-star-count"))),$(document.createTextNode(convertToMinutes(e.find("td:nth-of-type(1) .bc-col:nth-of-type(1) .bc-row:nth-of-type(3)").text().trim().split("\n")[0])))];includeImage&&a.unshift(n),tableArray.push(a)}}),tableCreate($("body").empty()[0],tableArray);
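For reference, the extractor's `convertToMinutes` helper turns Audible duration strings like "4h 32m" into a total minute count. A rough Python equivalent of the same regex logic (function name is mine):

```python
import re

def convert_to_minutes(text):
    """Parse durations like "4h 32m", "2h", or "55m" into total minutes,
    mirroring the JS convertToMinutes in the extractor above."""
    h = re.search(r"(\d+)\s*h", text)
    m = re.search(r"(\d+)\s*m", text)
    return 60 * (int(h.group(1)) if h else 0) + (int(m.group(1)) if m else 0)
```

So `convert_to_minutes("4h 32m")` gives 272, and missing components simply contribute zero.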
@PHDPeter commented Feb 5, 2020

So I have a library of 300-ish books, so I ran into some more exceptions than I guess you did: "Audible Sessions: FREE Exclusive Interview" or "Audible Original" titles, as well as any radio dramas etc., which will not have an ISBN, but also some other things. So I would recommend a "try" statement. I have not worked with pandas.read_excel, so I will have a look at where the try would have to be put in, but for now I just ran it in debug mode and removed all the titles it had issues with.

Thanks for this code btw :-) Really good thinking.

@readywater (owner, author) commented Feb 7, 2020

> So I have a library of 300-ish books, so I ran into some more exceptions than I guess you did: "Audible Sessions: FREE Exclusive Interview" or "Audible Original" titles, as well as any radio dramas etc., which will not have an ISBN, but also some other things. So I would recommend a "try" statement. I have not worked with pandas.read_excel, so I will have a look at where the try would have to be put in, but for now I just ran it in debug mode and removed all the titles it had issues with.
>
> Thanks for this code btw :-) Really good thinking.

Appreciate the feedback! I suspect the try statement would be wrapped around the isbn_from_words function, though could you share the error message you got? I don't think I've listened to any of the Audible Originals, so that must be how I missed it. I'll implement a fix, though most likely by omitting those entries, since my guess is that they won't show up in Goodreads' catalog anyway.

@spadgett commented Feb 12, 2020

I had to make the following changes for it to work:

def get_isbn(title, author):
    try:
        return isbn_from_words(title + " " + author)
    except Exception:
        return ""

# Fetch the ISBN from title and author using isbntools
df['isbn'] = df.apply(lambda x: get_isbn(x.Title, x.Author), axis=1)

and

df.to_csv(r'/Users/sam/audible-goodreads-import.csv', encoding='utf-8')

I don't know Python, so it was interesting trying to debug :) Thanks for the script.
