Download all your photobucket images in bulk via CLI

Backstory

On Jul 4, 2017, theverge.com posted an article about Photobucket (silently) no longer allowing their users to embed their images on 3rd party websites for free, thus leaving websites all over the web broken, displaying the following image in its place:

Me being one of those individuals, I went into my photobucket account to download my content, as I now have my own hosting I can store those images on; however, the only way to bulk download (on desktop) is by downloading albums through their interface. Doing so gave me the following error message: "Hmmm. Something didn't click. Want to give it another shot? Try again now."

I tried this several times, in different browsers (Chrome, Firefox and Safari) and after disabling all my addons and extensions (including ad blockers); it still didn't work.

At this point, doing anything on their website would flood my screen with popups and 3+ ads. I swear I haven't seen anything like it since AOL. But I took screenshots of the errors, did my due diligence and went to create a ticket for their support team to investigate. Only one problem: their ticket creation system's image uploader doesn't work. Interesting, considering their whole business model is designed around you being able to upload images to their interface and serve them on the web...

At this point, I was just under the impression everything is broken and the site is run down, hoping it would get fixed soon. But after talking to their support, they pretty much shrugged their shoulders and said "if you don't have the Android app, that's the only way to download them" - HENCE why I am writing this how-to on getting all your images!

Requirements

  • Ability to run bash commands
  • Terminal application to run bash on
  • Text editor
  • Patience to collect all the links via the ad-cluttered Photobucket website

Steps to download your files in bulk

Fun fact: unlike Photobucket's interface, this also works with your video files and GIFs.

  1. Navigate to the album or folder you want to download
  2. Check one of the boxes of the images you want to download
  3. A select all option will pop up in the buttons above; click that button - this will select all the images in that album, not just the images in your current page view.
  4. At the bottom of your screen, a menu bar will show up with your selections. WAIT for the number count of selected items to update before moving on to the next step!
  5. Navigate to the next album or folder you want to download and select those images as well. The selections you made in the previous album(s) will continue to accumulate as you move from album to album.
  6. Once you have collected all the items you want to download, click the link option in the menu displayed at the bottom of the screen
  7. Photobucket will populate a box with all the direct links to the content you've selected; once you click on this, it will copy them to your computer's clipboard. NOTE: I had to do this in Firefox, because the button to copy the links didn't work in Chrome (might have been a popup blocker or something of that nature)
  8. Create a folder on your desktop titled photobucket
  9. Open a text editor you have on your computer (I use Sublime, which is available for Mac and PC), paste your links into a new blank document and save it in the photobucket folder on your desktop as a txt file (e.g. photobucket_files.txt)
  10. Open up a terminal application and run the following commands:
cd ~/Desktop/photobucket
cut -d\/ -f 7 photobucket_files.txt | grep "\." | while read file; do grep "${file}$" photobucket_files.txt; done | while read file; do curl -O --referer "http://s.photobucket.com/" ${file}; done
cut -d\/ -f 7 photobucket_files.txt | grep -v "\." |  sort -u | while read dir; do mkdir ${dir}; cd ${dir}; grep "/${dir}/" ../photobucket_files.txt | while read file; do curl -O --referer "http://s.photobucket.com/" ${file}; done; cd -; done

What this does is trick Photobucket into thinking the request for the image file is coming from Photobucket itself, by defining --referer "http://s.photobucket.com/" - allowing you to download the file without dealing with the redirect to their upgrade/update image.
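To see the trick in isolation, this is the same idea applied to a single file (the URL here is just a made-up example):

curl -O --referer "http://s.photobucket.com/" "http://i123.photobucket.com/albums/abc/yourusername/example.jpg"

Without the --referer flag, Photobucket serves the "3rd party hosting" placeholder instead of the real file.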

According to the time stamps of my commands, it took roughly 8 minutes for all of the content to download (~347M worth of content, over 1300 files).

NOTE: These commands will create the albums for you on your personal computer; however, they only create albums one level deep. I feel like this is more than enough to get away from them if you need to download your content and don't want to pay.
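If your library nests deeper than one level, a rough sketch along these lines might rebuild the full folder path from each URL instead - treat it as an untested starting point that assumes the /albums/&lt;id&gt;/&lt;user&gt;/... URL shape seen in this thread:

while read -r url; do
  path="${url#*/albums/*/*/}"    # strip everything through the username, leaving e.g. folder/subfolder/pic.jpg
  dir="$(dirname "$path")"       # the nested folder portion ("." for loose files)
  mkdir -p "$dir"                # create the nested directories locally
  (cd "$dir" && curl -O --referer "http://s.photobucket.com/" "$url")
done < photobucket_files.txt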

These one-liners aren't the cleanest code, but this isn't something I have any interest in cleaning up or improving, as it did the job it was intended to do for me.

Hope this helps you as it did me.

@jorgimello

commented Nov 1, 2017

Dude, thank you so much for your help.
The commands you explained worked like a charm. I don't have many pictures in my photobucket album but I wanted to save them somewhere else.
Great job man


@Maranello2017

commented Nov 7, 2017

Nice approach, but have they now stopped the ability to "Direct Link"? You can use IMG codes and just strip the [IMG] and [/IMG].

@philipjewell

commented Nov 10, 2017

@Maranello2017 That is exactly the reason for this posting... To allow people to download their content because direct linking is disabled.

@philipjewell

commented Nov 10, 2017

Glad to help! @jorgimello

@None1230

commented Nov 15, 2017

Thanks for your guide. However, for Windows users, this might not be so easy.
So here's what I did...
I downloaded the Photobucket mobile (Android) app, and it had the option to save an album to the device.
And it really worked well... it finished downloading all the files and put them in an album called Photobucket_Album1.

Those who do not have an Android device, or who want to do it on a PC, can:
  1. Get the "Photobucket Hotlink Fix" Chrome extension
  2. Export all your picture HTML links and save them to a text file.... photos.htm
  3. Open the file in Chrome and press CTRL+S (or Save page)...
  4. Select the option Webpage, COMPLETE
Hope it helps others trying to download their memories from these greedy website owners.

@gatorman22

commented Nov 20, 2017

As Maranello2017 noted above, you'll need to strip the IMG tags before processing. On a BASH terminal I used:

sed -i 's/\[IMG]//g; s/\[\/IMG]//g' photobucket_files.txt
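If you want to sanity-check what that sed does, here's a quick demo on a single made-up line:

echo '[IMG]http://i123.photobucket.com/albums/abc/user/pic.jpg[/IMG]' | sed 's/\[IMG]//g; s/\[\/IMG]//g'
# prints: http://i123.photobucket.com/albums/abc/user/pic.jpg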

Have fun backing up your images!

@ArcAIN6

commented Dec 18, 2017

This no longer works; even the IMG tags now use the same URL as everywhere else, so there's no direct way to download them. Attempting either trashes the image or replaces it with the stock "you have to pay for 3rd party" image.

@naahthx

commented Jan 2, 2018

I followed your instructions step by step, and when I run the code in the Terminal application on my Mac it says "no such file or directory". What am I doing wrong? I can't figure out any other way to mass-export my albums from Photobucket to my desktop.

@nr1q

commented Jan 3, 2018

I made it a little bit simpler, on a per-album basis (because I only have 5 albums with 2.5K photos each); I use fish shell, but for bash you only have to edit the while command:

 cat album_photo_links.txt | while read line; wget --wait=1 --referer 'http://s.photobucket.com/' "$line"; end
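For anyone on bash rather than fish, the equivalent loop would look something like this (same flags, just bash's while syntax):

while read -r line; do wget --wait=1 --referer 'http://s.photobucket.com/' "$line"; done < album_photo_links.txt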

Update: Actually this is a better command:

 cat album_photo_links.txt | xargs wget --referer 'http://s.photobucket.com/' --wait=1 --random-wait --input-file=-
@Jakeukalane

commented Jan 10, 2018

Worked like a charm. Just used the first line as I didn't want subfolders. No need to use the [IMG] version; just do what he said.

244 images downloaded in little time. Thank you so much.

@PaulNerd

commented Jan 15, 2018

Or, just copy the direct link text and give it to JDownloader2's linkgrabber.
Works fine, no console involved.

@giannisergio

commented Jan 24, 2018

I tried both philipjewell's and nr1q's commands, and I get the "bwe" 3rd party hosting image for all but a few of my images.
Same thing with JDownloader2's linkgrabber suggested by PaulNerd.
Do you guys have any updated method?
Thanks!

@jamesoncollins

commented Jan 28, 2018

If you are on a Windows box and you are using Cygwin, don't forget to get rid of the carriage returns in your file using something like dos2unix.
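If dos2unix isn't installed, plain tr or GNU sed should have the same effect (a sketch, assuming the only problem is trailing \r characters):

tr -d '\r' < photobucket_files.txt > photobucket_files_unix.txt
# or, in place:
sed -i 's/\r$//' photobucket_files.txt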

@dubistdu

commented Feb 1, 2018

Thanks for this post. I was pretty bummed that they got rid of the download album option. Not to mention their awful new UI (I haven't been to the site for over 3 years). I don't know what's going on with Photobucket. Seems like they are losing their minds.
I used @nr1q solution.
Worked great!

@schleir

commented Feb 5, 2018

Works great! Thank you so much for this script.

@bash64

commented Feb 14, 2018

This worked for me and did not make mistakes...

cat photobucket.txt | xargs -I url basename url | while read file; do grep "${file}$" photobucket.txt; done | while read file; do curl -O --referer "http://s.photobucket.com/" ${file}; echo ${file}; done

@raaomoka

commented Feb 26, 2018

This doesn't work directly in the Windows 10 command prompt, probably need to enable bash shell on your Windows 10 PC:

https://www.howtogeek.com/249966/how-to-install-and-use-the-linux-bash-shell-on-windows-10/

Alternatively, you could just use the Android app on your phone to download each of your albums, like I did, then transfer them via Wi-Fi Direct or USB to your Windows PC. You may run into the issue of their half-baked app continuing to fail to download big albums; what you can do in this case, like I did, is as follows:

  1. Download the album using the Photobucket app and watch it fail, keep note of the album's name

  2. Open your phone's file manager, navigate to Internal SD Card > Pictures > Photobucket, and rename the corresponding album folder to Photobucket-AlbumName2

  3. Repeat steps 1 and 2 about 4 times, incrementing the number at the end of the corresponding album folder each time

  4. Now set your file manager to sort the album photos by size in ascending order. The ones that randomly failed to download will appear at the top of the list; delete these ones from each of the Photobucket-AlbumName up to Photobucket-AlbumName4 folders

  5. Now move and merge all the photos from the Photobucket-AlbumName2 to 4 folders into the Photobucket-AlbumName folder.

  6. Voilà, you should now have all your full-res photos saved in the Photobucket-AlbumName folder 😃 Double-check by looking at the number of files your file manager says is in the folder; if it doesn't match the number of pictures the Photobucket app states is in the album, you'll need to repeat steps 2 to 5 above until the numbers match. Otherwise, you'll be missing pictures.

@NikkiDelRosso

commented Mar 6, 2018

Why not use the -i or --input-file argument in wget?

Read URLs from a local or external file. If ‘-’ is specified as file, URLs are read from the standard input.

You should be able to download all of the files by running wget --referer 'http://s.photobucket.com/' -i photobucket_files.txt

Additionally, you can prevent accidental download of any of the "3rd party hosting disabled" image by adding --max-redirect=0, and if you have to run this multiple times (I did, because some of them were randomly 404ing), you can add -nc to have it ignore files that are already in the directory. Here's the full command that I used:

wget --referer 'https://s.photobucket.com/' --wait=1 --random-wait --max-redirect=0 -nc -i photobucket.txt

@NefariozBG

commented Apr 17, 2018

For anyone who finds this, I found that the original script kept giving me an error when I tried to download my files,
curl: (3) Illegal characters found in URL

Turns out, this can be easily fixed if you change one thing in the commands. The last command should be:
cut -d\/ -f 7 photobucket_files.txt | grep -v "\." | sort -u | while read dir; do mkdir ${dir}; cd ${dir}; grep "/${dir}/" ../photobucket_files.txt | while read file; do curl -O --referer "http://s.photobucket.com/" ${file%$'\r'}; done; cd -; done

Notice the "%$'\r'" that was added after the last {file}.
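For anyone wondering, ${file%$'\r'} is standard bash parameter expansion: it strips one trailing carriage return (the invisible Windows line-ending character that triggers the "Illegal characters" error). A quick way to see it in action:

file=$'http://example.com/pic.jpg\r'   # URL with a trailing CR, as read from a Windows-saved file
echo "${#file}"                        # 27 - the invisible \r is counted
clean="${file%$'\r'}"                  # remove the trailing CR if present
echo "${#clean}"                       # 26 - CR gone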

@jhuseby82

commented Apr 20, 2018

Hoping someone can help. I'm on Windows 10, I've installed Ubuntu, running bash.exe. I've tried copying/pasting nr1q command (after mounting to my desktop/photobucket folder where the file photobucket_files.txt is).
cat photobucket_files.txt | xargs wget --referer 'http://s.photobucket.com/' --wait=1 --random-wait --input-file=-

But I'm getting a "404 not found" error, and I notice after the URL's that pop up, it's appending a %0D to the end of each one, so I'm assuming that's why it can't find the URL. I've tried copying/pasting directly from here to Notepad++ (instead of notepad in Windows), but it's still not working. I've tried manually typing in the above code in bash, but get the same results (appending %0D to the end of the URLs).

I've tried the original code from philipjewell and the edit from NefariozBG, but get an error: curl: (3) Illegal characters found in URL

Any help would be greatly appreciated (I have about 2000 images in Photobucket, haven't uploaded to the site in probably 5-6 years due to not being able to easily retrieve them, but man would it be nice to bulk pull these images out and move to another service).

@BCE75

commented May 13, 2018

I just got this to work! It kept failing on me but I saw that my library was set as private. I had to make my library public long enough for this to run. I'm going to delete my photobucket account when this is done. Thank you for this!

@estevesds12

commented May 26, 2018

I'm on a Windows 10 PC and it only worked the way None1230 explained it. But is there a way I can download the photos AND the tags?

@JamesHagerman

commented Jun 3, 2018

This method worked on 2018-06-03.

@tromlet

commented Jun 11, 2018

It also worked on 11 June 2018. Thank you so, so much, this was super, super helpful.

@kenme243

commented Jun 20, 2018

Works wonders on 20 June 2018, thank you very much. Anyone having a problem with the file directory should check whether the text file is actually saved as .txt; you can search for how to save a plain txt file on Mac and follow the instructions, pretty easy :)

BTW, any suggestions for the best free photo hosting right now, anyone?

@kredzsays

commented Aug 14, 2018

@NikkiDelRosso Thanks so much for posting your script variation. Worked like a charm for me! I was getting the "curl: (3) Illegal characters found in URL" error using the OP's code.

I'm running Ubuntu shell (1604.2018.510.0) on a windows 10 machine, for those in a similar pickle.

@samanthaimx

commented Aug 18, 2018

I am having so much trouble with this. I've tried all the commands above but I'm sure I'm doing something wrong. Would anyone mind assisting me?

@Rocketman69

commented Oct 4, 2018

Well... I've played hell all morning trying to get this to work. Finally got it to at least do something after adding the "%$'\r'" after the last {file} per @NefariozBG (although I did this to the first command too, as I kept getting the "curl: (3)…" error). Now I keep getting an error on each file that basically says: curl: (7) Failed to connect to i38.photobucket.com port 80: Connection refused

Any ideas what I'm doing wrong? Bear with me, I'm new to Linux bash stuff.

@CaptainBadass

commented Oct 11, 2018

Used philipjewell's original commands on a Mac, stock Terminal. Worked perfectly!

@Protobikes

This comment has been minimized.

Copy link

commented Oct 12, 2018

I am not seeing the toolbar with numbers in steps 4 and 5. Is it still there for others? Using Firefox; just installed it for this. I guess what I am trying to do is get a list of links. Maybe I can copy one and increment a sequence? Trying to make sure I have copied all my pics off Photobucket so I can close the account. Don't have an Android phone.

@Protobikes

commented Oct 12, 2018

Should this run in Command Prompt? It does not recognise cut as a command.

@mynameiskaren

commented Oct 23, 2018

What worked for me using Windows 10:

  1. Enabled the Bash shell using raaomoka's method.
  2. Used a combination of everyone else's comments:
cd /mnt/c/users/karen/desktop/photobucket

cut -d\/ -f 7 photobucket_files.txt | grep "\." | while read file; do grep "${file}$" photobucket_files.txt; done | while read file; do curl -O --referer "http://s.photobucket.com/" ${file}; done

wget --referer 'https://s.photobucket.com/' --wait=1 --random-wait --max-redirect=0 -nc -i photobucket_files.txt

Finally got it working after troubleshooting. Worked as of 10.23.18

Thanks everyone!

@Dennist03

commented Nov 1, 2018

Worked on Mac Mojave with the original commands, 10.28.18.

@mellymel88

commented Nov 16, 2018

Hi... sorry to be annoying, but I am not very computer savvy. Is there a way to do this for dummies? Haha. I'm using a Mac computer and I have an iPhone... no Android. I would really appreciate any help you guys could give in layman's terms. Thank you so much for your time. Photobucket has all of my images from the past 6 years. It would be virtually impossible to download them one at a time :-(

@djc020799

commented Nov 23, 2018

I'm completely new to linux commands. I followed the steps to enable Linux in Win10 and downloaded and installed the Ubuntu app. The app runs but I cannot seem to get the command prompt to work. It appears the first command is to set the directory to the location where the text file is located. I cannot seem to get that to work correctly. Would someone either give me some pointers or point me to a good source to educate myself? Much thanks. So far I've spent a couple hours on this simply because I wanted to download my 3100 images and 18 videos off Photobucket and then close my account.

@MareinK

commented Nov 24, 2018

 cat album_photo_links.txt | xargs wget --referer 'http://s.photobucket.com/' --wait=1 --random-wait --input-file=-

This worked perfectly for me, thanks @nr1q.

The first day I tried this, I found that none of the scripts worked. The full images would not load on the website, only the thumbnails, and the script was downloading empty image files. It's a couple of days later and the images are now showing on the website and downloading using the script. So if anyone else finds their images appear to be broken, just wait a couple days and try again.

@coops1967

commented Nov 28, 2018

To melly mell...

This all works just fine - just downloaded all my pics etc...

BUT

  1. Make a folder on your desktop called 'photobucket' - all lowercase
  2. Open a text editor (I used TextWrangler) and paste the links from the above instructions from the Photobucket Link box into the text editor (try one Photobucket folder first). If the Link won't copy, it's because your browser may have pop-ups etc. disabled, so turn that off for a moment while you do this - I had to turn off 1Blocker, for example
  3. Save the text file as a plain text file (!) named photobucket_files.txt - make sure you use that underscore character and make sure you save it IN the photobucket folder you've made on your desktop
  4. Select all the commands above after step 10 - you'll need to scroll down and sideways to make sure you select ALL the commands, then press Command & C to copy them to your clipboard...
  5. Open your Mac Terminal from the Utilities folder and paste (Command & V) in those commands...
  6. Press Return/Enter and you should see the downloads start... you can see them in Finder in that photobucket folder as they complete
@Scrivener07

commented Jan 14, 2019

Excellent! This saved my butt big time. Thanks for putting this together for everyone.

@Cassidy13K

commented Jan 17, 2019

(Quoting @mynameiskaren's Windows 10 steps above.)

Commenting for visibility that the above worked perfectly fine on first try for all my >1000 pictures in different folders. It took me less than 10 minutes to download all. To keep the folder structure I added folders and separate "photobucket_files" files in each folder and then ran the bash command in multiple folders at the same time. Thank you very much for posting this solution.

@Annfrances2

commented Feb 1, 2019

Thank you PaulNerd, I used JDownloader2's linkgrabber and it worked. I was unable to figure out how to use the command-line code; it kept saying path not found.

@dschwa

commented Feb 1, 2019

The Chrono Download Manager Chrome extension works perfectly in Windows 7 with the direct links

@tmayville

commented Mar 1, 2019

With PowerShell, I was able to just do this - granted, you need to first build the txt file with the list of links.
Pre-reqs: 1. Disk space, 2. A C:\Temp folder, 3. Modify where your text file [photobucket_files.txt] is located or run PS from there.
Keeps the file names - if a file has the same name, it will prompt to overwrite, or you can add -Force and blow it away.
One could use Split-Path again and build local folders, but I just wanted off Photobucket.

#-------------
$client = new-object System.Net.WebClient
$photos=Get-Content .\photobucket_files.txt
md c:\temp -ErrorAction SilentlyContinue
$filelocation = "C:\temp"
$photos|%{$filename=Split-Path -Path $_ -Leaf;"File from $_ - Going here:$filelocation$filename";$client.DownloadFile($_,"$filelocation$filename")}
#----------------

Gives you a nice on-screen list of what's coming down - probably pipe it to a log if needed.

@hammrocks

commented Mar 1, 2019

tmayville - I have it working somewhat, but don't you get the watermark on the pics? I know I'm getting it...

(Quoting @tmayville's PowerShell script above.)

@dwrolvink

commented Mar 22, 2019

The Direct link option did not give me a full list of links. I chose to pick HTML instead, and used some regex to get the img src, so I got a list like this:

http://i240.photobucket.com/albums/ff67/wormmanager/ogame/w4.jpg    
http://i240.photobucket.com/albums/ff67/wormmanager/ogame/w3.jpg    
http://i240.photobucket.com/albums/ff67/wormmanager/ogame/w2.jpg    
http://i240.photobucket.com/albums/ff67/wormmanager/ogame/w1.jpg

Then, the command below was enough to download the images. Note that I changed the referer to the domain name in my list. It didn't work for me otherwise.

cat pb.txt | while read file; do curl -O --referer "https://i240.photobucket.com/" ${file}; done
@roxgentile

commented Apr 5, 2019

Works perfectly, thank you.

@WelshAL

commented Apr 6, 2019

Complete Edit:

The JDownloader plus Hotfix Chrome extension worked just fine. No need to install Linux on Windows for me!

I just need to figure out how to stop JDownloader creating a folder for each photo now.

Ace help, thanks very much.

Alex.

@streeetlamp

commented Apr 19, 2019

thank you so much

@sethwilsonmd

commented May 2, 2019

#-------------
$client = new-object System.Net.WebClient
$photos=Get-Content .\photobucket_files.txt
md c:\temp -ErrorAction SilentlyContinue
$filelocation = "C:\temp\"
$photos|%{$filename=Split-Path -Path $_ -Leaf;"File from $_ - Going here:$filelocation$filename";$client.Headers.Add("Referer","https://s.photobucket.com/");$client.DownloadFile($_,"$filelocation$filename")}
#----------------

@tmayville, @hammrocks: Thanks for the PowerShell script, tmayville. A couple of changes: I changed the file location to C:\temp\ to stop PowerShell from complaining with Exception calling "DownloadFile" with "2" argument(s): "An exception occurred during a WebClient request." I also added $client.Headers.Add("Referer","https://s.photobucket.com/"); to set the referrer and get rid of the Photobucket watermark, which I also got if the referrer wasn't set. I selected 34 pictures from a few albums and was able to download them all successfully.

I also noticed that for my image link

https://i89.photobucket.com/albums/k237/sethw_photos/comics/Grin_and_Bear_It_20061014.gif

the download link was this

https://s89.photobucket.com/component/Download-File?file=/albums/k237/sethw_photos/comics/Grin_and_Bear_It_20061014.gif

When I used that download link, photobucket also returned the image file with no watermark and without the need to set the referrer, for whatever that might be worth to somebody.
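If you'd rather rewrite a whole list of direct links into that Download-File form at once, a sed one-liner along these lines should do it (an untested sketch; the file names are hypothetical and it assumes links shaped like https://i&lt;NN&gt;.photobucket.com/albums/...):

sed -E 's#https?://i([0-9]+)\.photobucket\.com(/albums/.*)#https://s\1.photobucket.com/component/Download-File?file=\2#' photobucket_files.txt > download_links.txt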

@JayHoltslander

commented May 8, 2019

Re: Backstory.

Dude. Same.

Thank you.

@7Orange7

commented May 12, 2019

Help! I have been trying to fathom this for two hours. I found this link:

https://techgirlkb.guru/2017/12/download-images-held-hostage-photobucket/

Which brought me to your page.

I do have a Mac and a PC so I have a couple of options; the challenge, though, is that when I choose a small album to test (just 7 images) I get the following:

  1. If I use JDownloader - it saves me one image with a watermark
  2. No matter which album I choose I can't get a list that contains more than one image. It says 7 or 9 or even 31 selected, but each time it generates the links, the cupboards are bare!

The screenshot hopefully shows the problem.

My gut feeling on this is that Photobucket (I could use more derogatory terms for these hostage takers) has disabled that as an option.

The only one that seems to work is iOS, doing each one manually. That does not save the date taken etc. and is impractical; it took 20 minutes to get 100 images.

Hopefully not too much of a noob question!!


@Qriist

commented May 13, 2019

I've written an AHK script that takes concepts from the above examples, with some enhancements.

  • Downloads "direct" links from pb.txt, which must be in the same folder as the script
  • Builds local directories, keeping different users distinct (extrapolated from the links)
  • Supports any number of subdirectories
  • Enumerates files per folder (keeping them in the same order as on PB, even if the file names are a mess)
  • Skips existing files
  • Simple informative GUI that details the overall progress


Single-file ahk code (and compiled executable) found in my post on the AHK forums.
https://www.autohotkey.com/boards/viewtopic.php?f=6&t=64495

2019-05-19: Updated to download certain images without watermark that they would have otherwise had.

@Qriist

commented May 13, 2019

@7Orange7 the box is a little wonky... Click on a box (once) to put all the links to the clipboard. It doesn't look like they are in the box.... they are. Bad website design on Photobucket's part.

The auto-clipboard sometimes fails, so click in the box and press Ctrl-A to select everything, then Ctrl-C to copy it.

Finally, save the links to a file.

Dates would need to be set by yet more queries to a given file's surrounding pages, and none of these tools are built to do that. If preserving file order is important I suggest using my tool (linked above) to at least number things sequentially. (Padding is based on the entire batch of files for uniformity, but numbers reset with each directory.)

Hope this helps.

@michaelrobert9

commented May 16, 2019

I am using a Mac and it took me a while to work out why it wasn't working for me.

I was using TextEdit and saving the file. TextEdit saves the file as a .rtf file. I then changed the extension to .txt, thinking it wouldn't make a difference, but it does.
There are two options to fix this:
In TextEdit: select Preferences and there is an option to change the format that files get saved as. You can change it to .txt.

Or you can use Pages and export the file to Plain Text.

Once I made sure that the file wasn't .rtf and it was .txt it worked perfectly.

Thanks

@7Orange7

commented May 16, 2019

(Quoting @Qriist's reply above.)

Hi @Qriist - thank you for both answers. I have had an initial go, and for some reason all that CTRL+A then CTRL+C does is take one image (on a small 7-file album). I will of course try the obvious - empty cache, log off, etc. I am wondering if Photothieves have stopped this working. Have you had the copy and paste function work recently?

Thank you!!

@7Orange7

commented May 16, 2019

(Quoting @michaelrobert9's comment above.)

Hey @michaelrobert9 - I am enthused, as it sounds like if the .txt format was your only issue, then you must have been able to select and copy the direct links. Will see how I get on! Thank you!!

@Qriist

commented May 16, 2019

all that CTRL+A then CTRL+C does is take one image (on a small 7 file album).
Have you had the copy and paste function work recently?

@7Orange7 For whatever reason, I could not successfully press CTRL+A and CTRL+C in the box when I attempted it just now. I was able to right-click to Select All, then right-click again to copy.

These newly gathered links were in the expected format and downloaded as usual.

Try other browsers, sometimes a broken script on one browser will work on another. (No idea if this is your issue.)

If you are on Windows and using my script, you should not have any issues with the line splits that @michaelrobert9 ran into. My script accounts for both line formats.

@traciebri

commented May 18, 2019

I am using a Mac and am able to make the txt file just fine, but when I copy and paste the script I am getting the message below:

cd ~/Desktop/photobucket
Brians-Mac-mini:photobucket brianmolgaard$ cut -d\/ -f 7 photobucket_files.txt | grep "\." | while read file; do grep "${file}$" photobucket_files.txt; done | while read file; do curl -O --referer "http://s.photobucket.com/" ${file}; done
Brians-Mac-mini:photobucket brianmolgaard$ cut -d\/ -f 7 photobucket_files.txt | grep -v "\." | sort -u | while read dir; do mkdir ${dir}; cd ${dir}; grep "/${dir}/" ../photobucket_files.txt | while read file; do curl -O --referer "http://s.photobucket.com/" ${file}; done; cd -; done
mkdir: 2008%20April: File exists
curl: (3) Illegal characters found in URL
/Users/brianmolgaard/Desktop/photobucket
Brians-Mac-mini:photobucket brianmolgaard$

A folder is created in my photobucket folder on my desktop, but it is empty

I am a novice at this, so any help or insight you can provide would be greatly appreciated. I have 5,000+ photos I need to download, and it is pretty critical, since my physical media backup crashed and was unrecoverable.

EDIT: I changed the album URL to remove the %20 from the space, and it resulted in a longer string of the "curl: (3) Illegal characters found in URL" errors.

Edit 2: I had missed the post from NefariozBG; his suggestion fixed everything. Good to go now.

@vadosnaprimer

commented May 19, 2019

Still working, thanks. The copied links would open the image page in a browser, but downloading via a download manager app worked just fine. I put all the links in a text file with a .urls extension and opened it in Download Master.

@Qriist

commented May 19, 2019

Updated my post above with a version that bypasses watermarks.

@ArashRasteh

commented May 21, 2019

Thank you that worked for me. I did have to modify a few things in order to do this on Windows 10. Here is how I was able to do it.

I used Ubuntu 18.04.1 LTS for Windows 10 to do it (under the Windows Subsystem for Linux).

I made the photobucket_files.txt on Windows, but it didn't like to run the command, so it kept giving me an error: "curl: (3) Illegal characters found in URL".

I found out an easy way to fix this is to use dos2unix: https://askubuntu.com/questions/1117623/how-to-install-dos2unix-on-a-ubuntu-app-on-a-windows-10-machine

First of all, I had to make sure I was in the same folder that had the txt file, so the cd THE_LOCATION_OF_YOUR_TXTFILE command was done first.

I didn't have dos2unix installed, so I ran the following commands to install it:
sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) main universe restricted multiverse"
then
sudo apt install dos2unix
then
dos2unix photobucket_files.txt

Then I could run the rest of the commands
cut -d\/ -f 7 photobucket_files.txt | grep "\." | while read file; do grep "${file}$" photobucket_files.txt; done | while read file; do curl -O --referer "http://s.photobucket.com/" ${file}; done
cut -d\/ -f 7 photobucket_files.txt | grep -v "\." | sort -u | while read dir; do mkdir ${dir}; cd ${dir}; grep "/${dir}/" ../photobucket_files.txt | while read file; do curl -O --referer "http://s.photobucket.com/" ${file}; done; cd -; done

Hopefully that was helpful to someone else out there.

@Monalyssa

commented May 26, 2019

(Quoting @ArashRasteh's Windows 10 walkthrough above.)

Hi Arash. I am not sure how to do this. The commands that you have listed:
"sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) main universe restricted multiverse"
then
sudo apt install dos2unix
then
dos2unix photobucket_files.txt"

do not want to work. Because I was having trouble getting it to function, I started manually downloading the pictures I have on Photobucket one by one, but their program keeps crashing. I have only been able to download about 7 pics and videos so far, and I have been at this for hours. I was wondering if you could help, please.
Can you please tell me where I need to run the "sudo apt install dos2unix" command and the rest of the commands? Also, for the cd THE_LOCATION_OF_YOUR_TXTFILE command, which command program do I use?
Thank you so much for your patience.

@Monalyssa

commented May 26, 2019

(Quoting @ArashRasteh's Windows 10 walkthrough above.)

My email is monalyssastudios@gmail.com :)

@ArashRasteh

commented May 26, 2019

(Quoting the exchange with @Monalyssa above.)

@Monalyssa,

I used Ubuntu 18.04.1 LTS for Windows 10, but it could work on other Windows Subsystem for Linux distributions. Basically, I am running Ubuntu Linux's terminal within Windows 10. The reason why I used dos2unix is that Windows files don't natively work with Linux, so even a text file has extra line-ending characters that can mess up the process on Linux.

It would probably be easier to do this in a native Linux terminal, like running a virtual machine of Ubuntu on your Windows computer. But as I demonstrated, it can work without a virtual machine; it just needs some programs installed.

Also, THE_LOCATION_OF_YOUR_TXTFILE is just a placeholder for the folder path. So replace THE_LOCATION_OF_YOUR_TXTFILE with /mnt/c/user/... until you get to the correct folder that has the txt file with all the URLs.
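For example (a hypothetical path - substitute your own Windows username and folder):

cd /mnt/c/Users/YourName/Desktop/photobucket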

I hope that helps.

@leland729

commented May 28, 2019

Thank you for this! Worked for me on May 28, 2019 using instructions in OP.

I had some trouble with getting the direct links. I had turned that off in the settings years ago and only had the IMG code available in step 7. I had to go into the settings and re-enable the direct links (Settings, Albums, under "Links" make sure Direct Link is selected). Once I did that, I was able to follow the instructions in the OP.

Took me about an hour to download 30 albums with ~2500 files in total. Was able to get all albums with no sub-folders in one go. Albums with subfolders took a little bit of work, but still workable.

Thank you!!

@ghost

commented May 31, 2019

I have a simple fix or workaround! Go and highlight multiple images in a folder (provided it isn't toooo huge; use shift when you tap the end!), share, and copy the direct links into a text file. Replace all the .png or .gif or whatever with the same but with a semicolon attached ( .png; .gif; ) and paste it into this icon table generator. Click, and you'll get a page with all your images that you can save!

http://icontable.morning-songs.com

@seruzachan

commented Jun 7, 2019

(Quoting the icon table workaround above.)

This is great, thank you. Do you know if there is any way of downloading all the photos simultaneously?

@seruzachan

commented Jun 7, 2019

I'm still having trouble with this. I've followed coops1967's instructions, but I still receive the 'No such file or directory' error on Mac. Could anyone please help?

@ghost

commented Jun 8, 2019

You can save the resulting webpage! Your computer should then save all the images in it automatically (I only know of Windows; I can't tell if this works on other devices).

@seruzachan

commented Jun 8, 2019

Thanks so much! :)

@7Orange7

commented Jun 10, 2019

Finally! :)

Thank you to @philipjewell for the process, @Qriist helping me capture the images and @NefariozBG for the tweak.

Problems I ran into included illegal characters and using RTF instead of TXT.

This worked for me today 10th June - 319 files.

Mac Mojave version 10.14.5

STEP ONE
Create the photobucket folder on the desktop and ensure it is named exactly photobucket

STEP TWO
Create a new TXT file. Find TextEdit in the Applications folder. By default it creates formatted documents saved in rich text format, but you can use it to create a plain text file on a Mac. To do so, launch the program, click the "Format" menu and select "Make Plain Text."
Call this file photobucket_files.txt and ensure it is saved in your new folder

STEP THREE
Log into Photobucket and add all the links as @philipjewell walks through. This was harder than it should be. Use CTRL+A and then right-click copy. I pasted into an email first to sanity check. Paste this into your photobucket_files.txt and save it

STEP FOUR
Open Terminal and paste the below in

cd ~/Desktop/photobucket
cut -d\/ -f 7 photobucket_files.txt | grep "\." | while read file; do grep "${file}$" photobucket_files.txt; done | while read file; do curl -O --referer "http://s.photobucket.com/" ${file}; done
cut -d\/ -f 7 photobucket_files.txt | grep -v "\." | sort -u | while read dir; do mkdir ${dir}; cd ${dir}; grep "/${dir}/" ../photobucket_files.txt | while read file; do curl -O --referer "http://s.photobucket.com/" ${file%$'\r'}; done; cd -; done

Literally all there is to it!

THANKS FOR ALL THE HELP!!!!!!!!

EDIT - Just noticed that some images were missing despite matching numbers. All the folders were there, but not the loose files. I had to redo it and move the loose ones in manually. This does not read well, but hopefully makes sense.

@nosey74

commented Jun 17, 2019

Not a programmer/coder but created an account to say "Thank You!" I was stressing about losing hundreds of baby pictures of my niece and nephews and customer support at PhotoBucket was of no help but thanks to your tutorial I was able to download all my pictures.

@caslandr

commented Jun 19, 2019

Worked like a charm and saved me a ton of time, thank you.

@hgdagon

commented Jun 19, 2019

Can I kiss you? Like, on the cheek? Not only is this simple and hassle-free, it is also the only solution that works! You have my infinite gratitude.

@Luthien-in-edhil

commented Jun 22, 2019

This is plain, unadulterated awesomeness. Of course it's not a hugely complex script, but you took the effort to publish it and offer help to people who have some issues getting it to work. Thank you so much for that.

I suddenly started to get spammed by Photobucket in their effort to force everyone to purchase a yearly plan, even though I used Photobucket only for a short time, some 10 years ago, and I only had around one hundred pictures and a handful of low-res video files uploaded there. Like everybody else, I was thoroughly fed up with them making it so hard to download my pictures, and left PB to stew in their juices. But now I can download them, and good riddance to them. Finding this script more than made up for the annoyance caused by PB.

@thanksla

commented Jul 11, 2019

Thanks, followed 7Orange7's and philipjewell's instructions. Very reliable, unlike Photofucket, h8 them.
If you're a first-time terminal noob like me: I never knew I was supposed to copy the commands line by line, rather than pasting the 3 lines together.
I used philipjewell's code. The first download was perfect, but the illegal characters error came up on my 2nd try copying the code.
I worked around it by re-copying the direct links and pasting them straight into the txt file. (It worked out fine somehow in the end.)

@rfranklin15

commented Jul 16, 2019

(Quoting @tmayville's PowerShell script above.)

This PowerShell script worked perfectly for me, July 16, 2019.
The only change I made was to add a slash at the end of temp in filelocation: $filelocation = "C:\temp\"

Thank you!!

@compsciteacher

commented Jul 19, 2019

Perfection! This worked great, thanks. I had a little over 1k pictures, so this saved me an insane amount of time. Now to just back it all up to my AWS, even though it is Glacier and takes forever.

@tmayville

commented Jul 19, 2019

Just in case anyone liked the PowerShell option.
That same person has a OneDrive account.

That same person knows about its security etc - App password needed etc..
Here is an awesome Function that will let you map the OneDrive location to a Windows drive. (O Drive)

Just paste this in an elevated PS Prompt and then run OneDrv2Map

NOTE: The password it wants, is called an App Password.

  • You have to go into your OneDrive account, into Security, and select "create new app password" - it looks like gibberish but that is what it uses. ;)

Once it does its magic, you have an O: drive, you can then do anything you need, like robocopy, xcopy or, just drag and drop in Windows Explorer..

The CID info is in the onedrive URL that your browser has. ;) - example added picture.

###########
# Written by Tim Mayville
# Version 0.0001
### Free-OneDrive Mapper
###########
#--------------------------------------
Function OneDrv2Map {
$ID = Read-Host "Please enter your Windows Live CID"
$addr = "https://d.docs.live.net/"
$end = "/"
$map = "$addr$ID$end"
$user = Read-Host "Please enter your Windows Live Email"
$password = Read-Host "Please Enter your Windows Live Account Password" -AsSecureString
$decodedpassword = [Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($password))
net use O: $map $decodedpassword /user:$user
Write-Output "Please allow up to 30 second for this command to complete"
Write-Host "Go to O:, within this PS window."
}

#-------------------------------

Enjoy.
T.

Infrastructure Engineer
Servers & Storage

@Kaostico

commented Jul 22, 2019

(Quoting @tmayville's PowerShell script above.)

Well, I cannot make this PowerShell script work. For starters, I had to execute this before the script:
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
because I got the following error: "The request was aborted: Could not create SSL/TLS secure channel."

Well, now it's throwing "The remote server returned an error: (503) Server Unavailable" for every file. What am I doing wrong?

This is one of the files: https://i272.photobucket.com/albums/jj174/Kreyla_Vaely/IMG_1214.jpg

@rfranklin15

commented Jul 24, 2019

I'm getting the 503 error now, too:

Exception calling "DownloadFile" with "2" argument(s): "The remote server returned an error: (503) Server Unavailable."
At line:1 char:98

+ ... location$filename";$client.DownloadFile($_,"$filelocation$filename")}
+                        ~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : WebException
@rfranklin15

commented Jul 31, 2019

Aaaaaand the powershell script has started working again.

Kaostico, see if you can get it to work now.

@Kaostico

commented Jul 31, 2019

Thanks rfranklin15! It seems it was something temporary on their end. I finally managed to free myself from Photobucket.

edit: Oh crap, I spoke too soon. It seems every image downloaded this way now has a watermark.

@abtabt

commented Aug 7, 2019

These instructions are so clear, but I seem to have an issue along the way. Folder "photobucket" on the desktop..... check; text file "photobucket_files.txt" inside the folder..... check.
Please help:

@Nettyhead1

commented Aug 29, 2019

I seem to be having the same problem as abtabt.

C:\Users\annet>cd ~/Desktop/photobucket
The system cannot find the path specified.


@bush3102

commented Sep 6, 2019

The following works as of 9/6/2019 for Windows 10 and PowerShell.

  1. Create a 'temp' folder under the C:\ (C:\temp)
  2. In Photobucket, select all of your photos and copy all the links under 'Direct'
  3. Paste the 'Direct' URLs in a text file and save it in the C:\temp directory. Call it photobucket_files.txt
  4. In the text document, do CTRL+H (for find and replace).
    • Find What: oi290
    • Replace With: s290
    (your links may start with a different number after the i/oi - use whatever number you have)
  5. Save the text file again so the updated URLs will be saved.
  6. Open PowerShell ISE
  7. Run the following code
#-------------
$client = new-object System.Net.WebClient
$photos=Get-Content c:\temp\photobucket_files.txt
md c:\temp -ErrorAction SilentlyContinue
$filelocation = "C:\temp\"
$photos|%{$filename=Split-Path -Path $_ -Leaf;"File from $_ - Going here:$filelocation$filename";$client.DownloadFile($_,"$filelocation$filename")}
#----------------
@spadgy

commented Sep 6, 2019

The following works as of 9/6/2019 for Windows 10 and PowerShell.

Well - your method just worked for me @bush3102! Thank you.

I did have to improvise a little, though. On searching for 'oi290' I found nothing. In searching for 'i290' I found nothing either.

Then I realised my links started with 'i253'. So I replaced all 'i253' with 's253'. I had no 'oi253' or other numbers starting with 'oi', so I didn't have anything to replace there.

Otherwise I followed the above and it worked like a charm.

Thanks again!

@c7borg

commented Sep 23, 2019

@bush3102 Thank you, this worked for me.

Nice and simple; like @spadgy, I had to change my links from i3 to s3.

@nastynate317

commented Sep 23, 2019

@bush3102 THANK YOU SO MUCH! I have been working on this for weeks and was not able to get it to work. I was able to open the direct links in Notepad++. I had to identify the links to change; for me it was oi298 and il298. I changed them both to s298 and ran the commands. Worked perfectly! Thank you so very much!

@GanonTEK

commented Oct 12, 2019

Hi, I've a method that helped me as a Windows user. It doesn't require a command line. Here are the steps:

  1. Open Photobucket and tick one of your photos
  2. Click on Select All
  3. Click on Link - this will generate a link per photo you have selected
  4. Highlight and copy the HTML list of links (you are given 4 different boxes of links). You can use CTRL+A to highlight everything and CTRL+C to copy
  5. Open Notepad
  6. Paste in your list of HTML links (CTRL+V)
  7. Use "Find & Replace"
  8. Replace "https://i761.photobucket.com" with "https://s761.photobucket.com/component/Download-File?file=". You might have a different number after the i or s; just use the number you have instead (see the command-line equivalent after this list)
  9. Click Replace all
  10. Go to Save As in Notepad and save as a .html file
  11. Click on your new .html file and it will open in your default browser (you can also drag it into any open browser to load it). You will see a webpage with all your photos starting to load on it
  12. Press CTRL+S to bring up Save Webpage, or look for that option in your browser's menu
  13. Choose "Web Page, Complete" as the save type
  14. It will now download all your photos into a folder with the same name as the .html file. The folder may not appear until all the images are downloaded
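For anyone comfortable with a shell, steps 5-9 boil down to one find-and-replace, so something like this sed one-liner should produce the same .html file (a sketch; links.txt is a hypothetical file holding the pasted links, and the 761 needs to match your own links):

sed 's#https://i761\.photobucket\.com#https://s761.photobucket.com/component/Download-File?file=#g' links.txt > photobucket.html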

Hope that helps.
Kind regards,

GanonTEK

@EllyHood

commented Oct 13, 2019

THANK YOU!!!! This is my first time doing any sort of coding since customizing my Myspace page, so thank you for making this easy to follow for those of us who don't know what we're doing. I did have trouble with getting a "no such file or directory" message from Terminal, but that was because I had opened a new command window and pasted the code there. I tried again by just pasting the code into the window that pops up when you open Terminal, and so far it's been running like a champ, except for one gif that it spent 28 min on when my computer went to sleep (I won't walk away again!). Bless you for saving the internet from Photobucket's nonsense!

@mrsthompy

commented Oct 14, 2019

"cut: photobucket_files.txt: Operation Not Permitted"

Perfection! This worked great, thanks I had a little over 1k pictures so this saved me an insane amount of time. Now just backup to my AWS, even though it is glacier and takes forever.

could you please help me? I am on my MacBook and keep getting the message "operation not permitted" when i try to run the code in "Run Bash"

@pjhslaw

commented Oct 17, 2019

Please help, I am not able to create an html file in Notepad.

@GanonTEK

commented Oct 17, 2019

@pjhslaw When you go to Save As in Notepad you get 2 options: .txt or All files. Choose All files and make sure to type .html at the end of the filename.
If all that doesn't work, download the program Notepad++; it has html in its Save As list.

@pjhslaw

commented Oct 18, 2019

Thank you GanonTEK, the file is now an html file, but when I open it, it does not start downloading my pictures. I tried in 2 different browsers.

file:///C:/Users/Owner/Documents/photobucket.html
