Download Google Drive files with WGET
Example Google Drive download link:
https://docs.google.com/open?id=[ID]
To download the file with WGET you need to use this link:
https://googledrive.com/host/[ID]
Example WGET command:
wget -O file https://googledrive.com/host/[ID]
Thanks, this works.
wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O FILENAME
Hey, I'm trying to download a dataset from Google Drive using the terminal in Ubuntu. The dataset is around 89GB.
Could anyone please confirm the exact command to run for the URL "https://drive.google.com/file/d/1WvlAIvuochQn_L_f9p3OdFdTiSLlnnhv/view?usp=sharing"?
wget --load-cookies /tmp/cookies.txt "https://drive.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://drive.google.com/uc?export=download&id=FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=FILEID" -O FILENAME && rm -rf /tmp/cookies.txt
It works like a charm in Nov 2020!! Many thanks!!
@shriyakak
Hi, this one worked for me once for a large file of around 11 GB, but then it didn't work for another large file. You can try it once.
Hi, thanks for the prompt reply. Weirdly enough, now when I tried it again, it worked. I tried it a couple of times to make sure, and after a few successful tries I'm now getting this error:
HTTP request sent, awaiting response... 403 Forbidden 2020-07-18 17:38:04 ERROR 403: Forbidden.
Example of a file I need to download has id 1S43DCZhMV6trYUYstwtZ-vmEmZogvgXF
When you try to download a file anonymously, you receive a notification https://i.imgur.com/A7RNWxR.png.
Once you fix the problem, the script should work correctly.
@beliys: Thanks for your solution, but as you said, the command only works fine if we log in with our user. I'm trying to download remotely and I have added my username and password to the command, but it is still giving me the error "Sorry, you can't view or download this file at this time.
Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator"
The command I tried is "wget --load-cookies /tmp/cookies.txt "https://drive.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate --user=XXXXXX@gmail.com --password='XXXXXXX' 'https://drive.google.com/uc?export=download&id=1WvlAIvuochQn_L_f9p3OdFdTiSLlnnhv' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1WvlAIvuochQn_L_f9p3OdFdTiSLlnnhv" -O images1024x1024 && rm -rf /tmp/cookies.txt"
The data I'm trying to download is from this link: "https://drive.google.com/drive/folders/1WocxvZ4GEZ1DI8dOz30aSj2zT6pkATYS"
The file name is "images1024x1024.zip".
Could you please correct me if I'm wrong?
@praj441: I tried it, but it is not working for me.
It gives me the "Sorry, you can't view or download this file at this time.
Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator" error.
Any idea how to use it with a username and password?
What worked for me at the end was this script:
#!/bin/bash
if [ $# != 2 ]; then
echo "Usage: googledown.sh ID save_name"
exit 0
fi
confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id='$1 -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')
echo $confirm
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$confirm&id=$1" -O $2 && rm -rf /tmp/cookies.txt
I tried to download a file using this command but it failed.
Oh, I also want to download this data file and got exactly the same error log as you. Have you found any method to download this huge file?
Try this, it worked for me a few days ago (5G file):
#!/bin/bash
if [ $# != 2 ]; then
echo "Usage: googledown.sh ID save_name"
exit 0
fi
confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id='$1 -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')
echo $confirm
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$confirm&id=$1" -O $2 && rm -rf /tmp/cookies.txt
Only this worked for a 68G file.
are you referring to the answer using IDM?
I am having the same error as people above. I made a question post on Stack Overflow if you want to follow it.
are you referring to the answer using IDM?
no, the one with OAuth https://www.quora.com/How-do-I-download-a-very-large-file-from-Google-Drive/answer/Shane-F-Carr
Worked for me thank you
pip install gdown
In terminal:
gdown --id FILEID -O FILENAME
In python:
import gdown
url = 'https://drive.google.com/uc?id=0B9P1L--7Wd2vU3VUVlFnbTgtS2c'
output = 'spam.txt'
gdown.download(url, output, quiet=False)
Can be found HERE
How can I use gdown directly from my command line?
When I use gdown I always get "command not found".
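A likely cause (an assumption, since the exact setup isn't shown): pip installed the gdown console script into a per-user bin directory that isn't on PATH. A minimal sketch of the fix on Linux:

```shell
# pip's --user installs put console scripts such as gdown into ~/.local/bin,
# which is not always on PATH; add it for the current session:
export PATH="$HOME/.local/bin:$PATH"

# check whether the shell can find gdown now (nothing is downloaded here)
command -v gdown || echo "gdown still not found; try: python3 -m pip install --user gdown"
```

Adding the export line to ~/.bashrc makes it permanent.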
Thanks, @beliys! It worked!
Command to download any big file from Google Drive (for big files we need to confirm the download):
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=FILEID" -O FILENAME && rm -rf /tmp/cookies.txt
thanks @beliys, works perfectly!
worked for me too (22GB)
It worked for me, but it always gets stuck with the error message curl: (56) TCP connection reset by peer. Can anyone help? Thank you. My file is 77GB.
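One thing to try for mid-download connection resets (a sketch, not the poster's exact setup): wget can resume a partial file and keep retrying, provided the server honours HTTP range requests, which Google Drive does not always do. FILEID and FILENAME are placeholders:

```shell
# -c continues a partially downloaded file instead of starting over;
# --tries=0 retries indefinitely; --waitretry waits up to 30s between retries.
resume_download() {
  wget -c --tries=0 --waitretry=30 --read-timeout=60 \
      --load-cookies /tmp/cookies.txt "$1" -O "$2"
}
# usage (nothing is downloaded until you call it):
# resume_download 'https://docs.google.com/uc?export=download&confirm=CONFIRM&id=FILEID' FILENAME
```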
Thanks @vladalive, still working in 2021, and it's a real time saver.
Thanks everyone,
wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O FILENAME did not work for me for the .rar file, but adding the retry option made it work
That worked. Thanks!
wget --no-check-certificate -r 'https://docs.google.com/uc?export=download&id=FILEID' -O FILENAME
@oguntola2018 perfect!
Hello @maxsnet and members!
I see many members reporting success with the "wget" command.
I’ve been trying for two days and I haven’t succeeded.
The file that is in Google Drive is:
D_mega_2.zip which I also changed to D_mega_2.rar
My shared link to any person is:
https://drive.google.com/file/d/1mZF1PMbzoeUWy6ce_NHpFrRJ9yDz6bou/view?usp=sharing
I made the indicated change and created a batch file conc.bat:
wget --no-check-certificate -r https://docs.google.com/uc?export=download&id=1mZF1PMbzoeUWy6ce_NHpFrRJ9yDz6bou -O D_mega_2.zip
When executing I have the following answer:
--2021-05-27 20:06:31-- https://docs.google.com/uc?export=download
Resolving docs.google.com (docs.google.com)... 2800:3f0:4001:802::200e, 142.250.219.14
Connecting to docs.google.com (docs.google.com)|2800:3f0:4001:802::200e|:443... connected.
HTTP request sent, awaiting response... 400 Bad Request
2021-05-27 20:06:31 ERROR 400: Bad Request.
Could you tell me where I'm going wrong?
If I open the same URL in the browser, the file always downloads:
https://docs.google.com/uc?export=download&id=1mZF1PMbzoeUWy6ce_NHpFrRJ9yDz6bou
Thank you all
Marcos Paris
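Regarding the 400 Bad Request above: this is most likely a quoting problem, not a Drive problem. Unquoted, the "&" in the URL is treated as a command separator, so wget only ever requests uc?export=download with no id, which is exactly what the log shows. A sketch of the fix in a Unix shell; in a Windows .bat file, put double quotes around the whole URL for the same reason:

```shell
# Quote the whole URL so the &id=... parameter actually reaches wget:
url='https://docs.google.com/uc?export=download&id=1mZF1PMbzoeUWy6ce_NHpFrRJ9yDz6bou'
echo "$url"
# wget --no-check-certificate "$url" -O D_mega_2.zip
```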
This works for me.
This worked for me by changing https://docs.google.com to https://drive.google.com
Yeah, it saved my life!
I solved the "html" problem with this: https://stackoverflow.com/questions/65312867/how-to-download-large-file-from-google-drive-from-terminal-gdown-doesnt-work
This works for a 188G file! Thanks a lot!
@beliys <3
@LL3RD Worked for me
This works for a 3.3G file with very fast download speed. Thanks a lot!
How can I download the dataset here https://drive.google.com/drive/folders/0BweDykwS9vIoUG5nNGRjQmFLTGM?resourcekey=0-dHhRVxB0LDUcUVtASUIgTQ
I want to download Stanford3dDataset_v1.2.tar
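The link above points to a folder, and the wget confirm-token recipes in this thread only handle single files. A sketch assuming a reasonably recent gdown (its --folder option downloads a shared folder's contents; whether it passes the resourcekey through is an assumption to verify):

```shell
# requires: python3 -m pip install --upgrade gdown
download_drive_folder() {
  gdown --folder "$1"
}
# usage (downloads every file in the shared folder, including the .tar):
# download_drive_folder 'https://drive.google.com/drive/folders/0BweDykwS9vIoUG5nNGRjQmFLTGM?resourcekey=0-dHhRVxB0LDUcUVtASUIgTQ'
```

Alternatively, open the folder in a browser, grab the file id of Stanford3dDataset_v1.2.tar, and use one of the single-file commands above.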
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=FILEID" -O FILENAME && rm -rf /tmp/cookies.txt
Thanks, @beliys! It worked perfectly for me.
Why does this not work? The file is 886MB, but it downloads an HTML file instead...
u0_a227@localhost ~
└─▶ wget --load-cookies $TMPDIR/cookies.txt "https://drive.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies $TMPDIR/cookies.txt --keep-session-cookies --no-check-certificate 'https://drive.google.com/uc?export=download&id=1IPQdFtw6ukMcDfgaxf-qPjtk9Iy34el0' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1IPQdFtw6ukMcDfgaxf-qPjtk9Iy34el0" -O glibc.tar.xz && rm -rf $TMPDIR/cookies.txt
--2024-02-09 10:42:51-- https://drive.google.com/uc?export=download&confirm=&id=1IPQdFtw6ukMcDfgaxf-qPjtk9Iy34el0
Resolving drive.google.com (drive.google.com)... 142.251.220.142, 2404:6800:4017:801::200e
Connecting to drive.google.com (drive.google.com)|142.251.220.142|:443... connected.
HTTP request sent, awaiting response... 303 See Other
Location: https://drive.usercontent.google.com/download?id=1IPQdFtw6ukMcDfgaxf-qPjtk9Iy34el0&export=download [following]
--2024-02-09 10:42:51-- https://drive.usercontent.google.com/download?id=1IPQdFtw6ukMcDfgaxf-qPjtk9Iy34el0&export=download
Resolving drive.usercontent.google.com (drive.usercontent.google.com)... 142.251.220.193, 2404:6800:4017:803::2001
Connecting to drive.usercontent.google.com (drive.usercontent.google.com)|142.251.220.193|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2434 (2.4K) [text/html]
Saving to: ‘glibc.tar.xz’
glibc.tar.xz 100%[=================>] 2.38K --.-KB/s in 0.02s
2024-02-09 10:42:53 (115 KB/s) - ‘glibc.tar.xz’ saved [2434/2434]
I made this code using native wget and sed builds for Windows. Since these binaries are compiled specifically for Windows, the code needs to be modified a little for the Windows command line.
Save the code above as google_down.bat, download wget and sed from the links above (rename sed-4.8-x64.exe to sed.exe), and place them all in one folder. Add that folder to the system/user environment PATH.
Once done, google_down can be executed from anywhere, provided the directory it's executed from has a cookies.txt.
From there, just run this command.
google_down FILEID
Note: I removed -O FILENAME and instead used --content-disposition to resolve the filename directly from the server end. The code is working fine; I have tried it with files of multiple sizes. The only caveat I saw so far was that I couldn't resume a download if it errored, and if the network was lost for too long the command just got stuck.
EDIT: cookies.txt was exported using Get Cookies.txt; really, any extension or method can be used, as long as cookies.txt is in the Netscape format.
Note 2: I've noticed that many people are unable to use this consistently. It would be wise to export cookies.txt after logging into Google first, and then not remove it after the work is done. That way the signed-in cookie is reusable afterwards; that is what I do, and I removed the part
rm -rf /tmp/cookies.txt
from my code for that very purpose. Once the cookies are exported after logging in, there shouldn't be any problems unless the file is restricted. This note is for anyone using the command @beliys wrote at the beginning.
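For anyone on a Unix shell, here is a rough sketch of the approach described above (the actual .bat file isn't reproduced here, so this is an approximation): a Netscape-format cookies.txt exported from a logged-in browser sits in the current directory, --content-disposition replaces -O FILENAME, and the cookie file is deliberately kept for reuse:

```shell
# Assumes cookies.txt (Netscape format, exported while logged in to Google)
# exists in the current directory; the file id is the first argument.
google_down() {
  # first request: fetch the download-warning page and extract the confirm token
  confirm=$(wget --quiet --load-cookies cookies.txt --save-cookies cookies.txt \
      --keep-session-cookies --no-check-certificate \
      "https://docs.google.com/uc?export=download&id=$1" -O- \
      | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1/p')
  # second request: --content-disposition lets the server choose the filename;
  # cookies.txt is NOT deleted, so the signed-in cookie stays reusable.
  wget --load-cookies cookies.txt --no-check-certificate --content-disposition \
      "https://docs.google.com/uc?export=download&confirm=$confirm&id=$1"
}
# usage: google_down FILEID
```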