Download all photos and videos from your Class Dojo account
"""
Download all ClassDojo photos and videos in your timeline.
by kecebongsoft
How it works:
1. Fetch list of items in the timeline, if there are multiple pages, it will fetch for all pages.
2. Collect list of URLs for the attachment for each item
3. Download the files into local temporary directory, and also save the timeline activity as a json file.
How to use:
1. Modify the session cookie in this script, check your session cookie by opening classdojo in browser and copy
the following cookies: dojo_log_session_id, dojo_login.sid, and dojo_home_login.sid
2. Run this script and wait for it to finish.
If error happens:
1. I ran this script in windows, make sure your path is correct if you are on linux
2. Make sure "classdojo_output" directory exists in the same folder as this script
3. Make sure you have a correct session cookies set in this script.
4. Make sure you can open the FEED_URL listed in this script from within your browser (assuming you can open ClassDojo website)
"""
import json
import os
import tempfile

import requests
print('Starting')
FEED_URL = 'https://home.classdojo.com/api/storyFeed?includePrivate=true'
os.makedirs('classdojo_output', exist_ok=True)  # create the output parent directory next to this script if needed
DESTINATION = tempfile.mkdtemp(dir='classdojo_output')
SESSION_COOKIES = {
'dojo_log_session_id': '<insert here>',
'dojo_login.sid': '<insert here>',
'dojo_home_login.sid': '<insert here>',
}
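# Optional sanity check, not part of the original script: a minimal sketch that
# verifies the cookies above before downloading anything. Call it by hand if
# you want an early failure (e.g. add `check_session()` above the final
# `download_urls(...)` line at the bottom).
def check_session():
    resp = requests.get(FEED_URL, cookies=SESSION_COOKIES)
    if resp.status_code != 200:
        raise SystemExit('Session check failed (HTTP %s); re-copy the cookies from your browser.' % resp.status_code)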
def save_json(json_content):
    # os.path.join builds the right path separator on both Windows and Linux.
    with open(os.path.join(DESTINATION, 'data.json'), 'w') as f:
        f.write(json.dumps(json_content, indent=4))
def get_items(feed_url):
    print('Fetching items: %s..' % feed_url)
    resp = requests.get(feed_url, cookies=SESSION_COOKIES)
    data = resp.json()
    prev = data.get('_links', {}).get('prev', {}).get('href')
    return data['_items'], prev
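# The fields accessed in get_items() and get_urls() assume the storyFeed
# response is shaped roughly like this (inferred from what this script reads;
# the real payload carries more fields and may change):
#
# {
#     "_links": {"prev": {"href": "https://home.classdojo.com/api/storyFeed?..."}},
#     "_items": [
#         {"contents": {"attachments": [{"path": "https://..."}]}}
#     ]
# }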
def get_urls(feed_url):
    items, prev = get_items(feed_url)
    # Page backwards through the feed until there is no previous page
    # (or the link loops back to the URL we started from).
    while prev and feed_url != prev:
        prev_items, prev = get_items(prev)
        items.extend(prev_items)
    save_json(items)
    urls = []
    for item in items:
        attachments = item['contents'].get('attachments', [])
        for attachment in attachments:
            urls.append(attachment['path'])
    return urls
def get_name_from_url(url):
    # Flatten the URL path into a single file name.
    parts = url.split('/')
    return '_'.join(parts[3:]).replace('-', '_')
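# For example (hypothetical URL, for illustration only):
#   'https://cdn.classdojo.com/uploads/abc-123/photo.jpg'
#   -> 'uploads_abc_123_photo.jpg'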
def download_urls(urls):
    total = len(urls)
    for index, url in enumerate(urls, start=1):
        name = get_name_from_url(url)
        print('Downloading %s/%s %s -> %s' % (index, total, url, name))
        resp = requests.get(url, cookies=SESSION_COOKIES)
        with open(os.path.join(DESTINATION, name), 'wb') as f:
            f.write(resp.content)
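# For large videos you may prefer streaming the response instead of holding the
# whole file in memory. A sketch using requests' stream mode (not in the
# original script):
#
#     resp = requests.get(url, cookies=SESSION_COOKIES, stream=True)
#     with open(os.path.join(DESTINATION, name), 'wb') as f:
#         for chunk in resp.iter_content(chunk_size=1 << 20):
#             f.write(chunk)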
download_urls(get_urls(FEED_URL))
print('Done!')