@freimanas
Last active March 29, 2022 22:37
Get twitter user's photo url's from tweets - download all images from twitter user
#!/usr/bin/env python
# encoding: utf-8

import tweepy  # https://github.com/tweepy/tweepy
import csv
import sys

# Twitter API credentials
consumer_key = ""
consumer_secret = ""
access_key = ""
access_secret = ""


def get_all_tweets(screen_name):
    # Twitter only allows access to a user's most recent 3240 tweets with this method

    # authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    # initialize a list to hold all the tweepy Tweets
    alltweets = []

    # make the initial request for the most recent tweets (200 is the maximum allowed count)
    new_tweets = api.user_timeline(screen_name=screen_name, count=1)

    # save the most recent tweets
    alltweets.extend(new_tweets)

    # save the id of the oldest tweet, less one
    oldest = alltweets[-1].id - 1

    # keep grabbing tweets until there are no tweets left to grab
    while len(new_tweets) > 0:
        print "getting tweets before %s" % (oldest)

        # all subsequent requests use the max_id param to prevent duplicates
        new_tweets = api.user_timeline(screen_name=screen_name, count=200, max_id=oldest)

        # save the most recent tweets
        alltweets.extend(new_tweets)

        # update the id of the oldest tweet, less one
        oldest = alltweets[-1].id - 1

        print "...%s tweets downloaded so far" % (len(alltweets))

    # go through all found tweets and skip the ones with no images
    outtweets = []  # master list to hold our ready tweets
    for tweet in alltweets:
        # not all tweets will have a media url, so skip those that don't
        try:
            print tweet.entities['media'][0]['media_url']
        except (NameError, KeyError):
            # we don't want any entries without the media_url, so do nothing
            pass
        else:
            # got media_url - add it to the output
            outtweets.append([tweet.id_str, tweet.created_at, tweet.text.encode("utf-8"), tweet.entities['media'][0]['media_url']])

    # write the csv
    with open('%s_tweets.csv' % screen_name, 'wb') as f:
        writer = csv.writer(f)
        writer.writerow(["id", "created_at", "text", "media_url"])
        writer.writerows(outtweets)


if __name__ == '__main__':
    # pass in the username of the account you want to download
    get_all_tweets("WansteadWomble")
@Praveenms91

I get an error in Python 3.x because the buffer does not support strings. Help me encode it. https://twitter.com/praveen_ms91/status/605967731876167680

@hub2git

hub2git commented Jun 12, 2015

Dear freimanas,
I saw you comment on https://gist.github.com/yanofsky/5436496#comment-1461997.
I ran your file and a CSV was created. The fourth column in the spreadsheet contains URLs to the images, in a format like http://pbs.twimg.com/media/ABCDEFG6789A3qp3.jpg.

Is there a way I could either embed the JPGs themselves to the spreadsheet, or download all the JPEGs listed in Column 4 onto my hard drive?

@freimanas
Author

@hub2git
sure you can
the easiest way, if you don't know python, is:
change line 61 to outtweets.append([tweet.entities['media'][0]['media_url']])
remove line 66 - so no headers are written into the csv.

you will get a list of links to images in the file, one per line.

then you just download all of them with your favourite downloader... for example:
wget -i filename.csv

which will read each line and download each image.

hope that helps
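
For readers who would rather stay in Python than use wget, here is a minimal download sketch under the same assumption (the modified script writes one media URL per line); `filename_for` and `download_images` are hypothetical helpers, not part of the gist:

```python
import os
import urllib.request


def filename_for(url):
    """Derive a local filename from a media URL, e.g. the 'ABC.jpg' part."""
    return url.rsplit("/", 1)[-1]


def download_images(list_path, out_dir="images"):
    """Read one URL per line from list_path and save each image into out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    with open(list_path) as f:
        for line in f:
            url = line.strip()
            if url:
                # network call: fetch the image and write it to disk
                urllib.request.urlretrieve(url, os.path.join(out_dir, filename_for(url)))
```

This does the same job as `wget -i filename.csv`, but lets you rename files or filter URLs in the loop if you need to.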

@mihirjha27

Hi,

This works well. Is there any way I can get more details like the following?

Bio_Of_Person_to_whom_reply_is_made
No_Of_Tweets_of_Person_interacting
No_Of_FavoritesCount_of_Person_interacting
No_of_FollowersCount_of_Person_interacting
No_of_FollowingCount_of_Person_interacting
Location_of_Person_interacting

Regards
Mihir

@layrshah

runfile('C:/Users/layrshah/Desktop/Python/twitter1.py', wdir='C:/Users/layrshah/Desktop/Python')
getting tweets before 855708577541087231
Traceback (most recent call last):

File "", line 1, in
runfile('C:/Users/layrshah/Desktop/Python/twitter1.py', wdir='C:/Users/layrshah/Desktop/Python')

File "C:\Users\layrshah\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
execfile(filename, namespace)

File "C:\Users\layrshah\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 87, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)

File "C:/Users/layrshah/Desktop/Python/twitter1.py", line 74, in
get_all_tweets("ValencyNetworks")

File "C:/Users/layrshah/Desktop/Python/twitter1.py", line 40, in get_all_tweets
new_tweets = api.user_timeline(screen_name = screen_name,count=200,max_id=oldest)

File "C:\Users\layrshah\Anaconda2\lib\site-packages\tweepy\binder.py", line 239, in _call

File "C:\Users\layrshah\Anaconda2\lib\site-packages\tweepy\binder.py", line 226, in execute
if is_rate_limit_error_message(error_msg):

File "C:\Users\layrshah\Anaconda2\lib\site-packages\tweepy\parsers.py", line 88, in parse
if method.payload_type is None:

File "C:\Users\layrshah\Anaconda2\lib\site-packages\tweepy\parsers.py", line 54, in parse
raise TweepError('Failed to parse JSON payload: %s' % e)

TweepError: Failed to parse JSON payload: Unterminated string starting at: line 1 column 521982 (char 521981)

I'm getting this error. What should I do?

@JackBuggins

Awesome stuff!

Thanks for this

@dukuiran

Useful practice case for me, thanks a lot.

@dukuiran

But can we get all the pictures in one tweet?

@kamalikap

Hi,
Is there any way I can get the hashtags as well, in a separate column?

@mustafaAhmed93

doesn't save in csv

@Furtim

Furtim commented Apr 26, 2020

doesn't save in csv

just remove the 'b' from 'wb'
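
To illustrate the fix: in Python 3 the csv module expects a text-mode file, so `'wb'` raises a TypeError. A minimal sketch (the filename and row here are just examples, not from the gist); passing `newline=''` also stops the csv module from writing blank rows on Windows:

```python
import csv

# Python 3: open in text mode ('w', not 'wb'); newline='' avoids blank
# rows on Windows, and encoding='utf-8' keeps tweet text intact.
with open('example_tweets.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(["id", "created_at", "text", "media_url"])
    writer.writerows([["1", "2015-06-12", "hello", "http://pbs.twimg.com/media/a.jpg"]])
```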

@Rihamhanny

Can we use this to get the URL of a link shared in a tweet?

@chahatraj

Hey, will this media method also download video URLs, or does it work only for images?

@oycyc

oycyc commented Jan 13, 2021

Hey, will this media method also download video URLs, or does it work only for images?

Looks like it won't. The script the author provided only shows the thumbnail of the video. To get the link of the video itself, you would have to go under tweet.extended_entities["media"][0]["video_info"].
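
A sketch of that lookup, picking the highest-bitrate MP4 variant; `best_video_url` is a hypothetical helper, and the dict layout assumed below follows the v1.1 extended_entities structure described above (in Tweepy, `tweet._json` gives the raw dict for a Status):

```python
def best_video_url(tweet_json):
    """Return the URL of the highest-bitrate MP4 variant in a tweet, or None.

    Video variants live under extended_entities -> media -> video_info ->
    variants; streaming (m3u8) variants carry no 'bitrate' key, so only
    'video/mp4' entries are compared here.
    """
    best = None
    for m in tweet_json.get("extended_entities", {}).get("media", []):
        for v in m.get("video_info", {}).get("variants", []):
            if v.get("content_type") == "video/mp4":
                if best is None or v.get("bitrate", 0) > best.get("bitrate", 0):
                    best = v
    return best["url"] if best else None
```

For an image-only tweet (no video_info at all) this simply returns None, so it can sit alongside the existing media_url handling.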

@kaamilmirza

Now I am getting this error:
AttributeError: 'Status' object has no attribute 'entities'
Has there been a change in the API or something?

@remmark

remmark commented May 8, 2021

then you just download all of them with your favourite downloader... for example:
wget -i filename.csv

Where and how do I run the command "wget -i '%s_tweets.csv'" to download the pictures?

I'm a newbie, and it does not work.

Thanks in advance

@remmark

remmark commented May 9, 2021

My edits to the code; it downloads the images:

#!/usr/bin/env python3
# encoding: utf-8

import tweepy #https://github.com/tweepy/tweepy
import csv
import sys
import wget 


#Twitter API credentials
consumer_key = ""
consumer_secret = ""
access_key = ""
access_secret = ""


def get_all_tweets(screen_name):
        #Twitter only allows access to a users most recent 3240 tweets with this method

        #authorize twitter, initialize tweepy
        auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
        auth.set_access_token(access_key, access_secret)
        api = tweepy.API(auth)

        #initialize a list to hold all the tweepy Tweets
        alltweets = []

        #make initial request for most recent tweets (200 is the maximum allowed count)
        new_tweets = api.user_timeline(screen_name = screen_name,count=1)

        #save most recent tweets
        alltweets.extend(new_tweets)

        #save the id of the oldest tweet less one
        oldest = alltweets[-1].id - 1

        #keep grabbing tweets until there are no tweets left to grab
        while len(new_tweets) > 0:
                print ("getting tweets before %s" % (oldest))

                #all subsequent requests use the max_id param to prevent duplicates
                new_tweets = api.user_timeline(screen_name = screen_name,count=200,max_id=oldest)

                #save most recent tweets
                alltweets.extend(new_tweets)

                #update the id of the oldest tweet less one
                oldest = alltweets[-1].id - 1

                print ("...%s tweets downloaded so far" % (len(alltweets)))

        #go through all found tweets and remove the ones with no images 
        outtweets = [] #initialize master list to hold our ready tweets
        for tweet in alltweets:
                #not all tweets will have media url, so lets skip them
                try:
                        print (tweet.entities['media'][0]['media_url'])
                        wget.download(tweet.entities['media'][0]['media_url'])
                except (NameError, KeyError):
                        #we dont want to have any entries without the media_url so lets do nothing
                        pass
                else:
                        #got media_url - means add it to the output
                        outtweets.append([tweet.id_str, tweet.created_at, tweet.text, tweet.entities['media'][0]['media_url']])

        #write the csv  
        with open('%s_tweets.csv' % screen_name, 'w', encoding='utf-8') as f:
                writer = csv.writer(f)
                writer.writerow(["id","created_at","text","media_url"])
                writer.writerows(outtweets)
                

        pass


if __name__ == '__main__':
        #pass in the username of the account you want to download
        get_all_tweets("nameofparse")
