Clone all repos from a GitHub organization
curl -s https://api.github.com/orgs/twitter/repos?per_page=200 | ruby -rubygems -e 'require "json"; JSON.load(STDIN.read).each { |repo| %x[git clone #{repo["ssh_url"]}] }'

Note: if you need to access private repos, you can modify the command as follows (replace each [[VARIABLE]] with the suitable value):

curl -u [[USERNAME]] -s https://api.github.com/orgs/[[ORGANIZATION]]/repos?per_page=200 | ruby -rubygems -e 'require "json"; JSON.load(STDIN.read).each { |repo| %x[git clone #{repo["ssh_url"]}] }'

git complained that my RSA key did not exist at gitorious-id_rsa, so I simply copied it there. Not sure why it was looking there, but it did the trick. Thanks for posting this up.

I did:

sudo apt-get install libjson-ruby


I was able to get access to private repos by creating an OAuth token in the "Personal access tokens" section of the "Applications" tab under "Personal settings", and modifying the snippet to auth with that token:

curl -u <token>:x-oauth-basic -s https://api.github.com/orgs/<organization>/repos\?per_page\=200 | ruby -rubygems -e 'require "json"; JSON.load(STDIN.read).each { |repo| %x[git clone #{repo["ssh_url"]}] }'

boban-dj commented Aug 7, 2015

Just to add: it worked for me to back up all my public repos from a user account like this:

curl -u [[USERNAME]] -s https://api.github.com/users/[[USERNAME]]/repos?per_page=200 | ruby -rubygems -e 'require "json"; JSON.load(STDIN.read).each { |repo| %x[git clone #{repo["ssh_url"]}] }'

Thanks for the gist!

TomHoss commented Aug 10, 2015

Just tried this, and it looks like the max per_page is 100. If you use 200, it will fail silently and cap the results at 100.

I made a script using Python 3 and the GitHub API v3.
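Since the API silently caps per_page at 100, larger organizations need explicit page= parameters. A minimal sketch (the helper name, org, and page count are illustrative, not from the thread) that only builds the URLs, so it runs without touching the network:

```shell
# Print one API page URL per line; GitHub silently caps per_page at 100,
# so pages 1..N must be fetched separately and the results concatenated.
api_page_urls() {
  org=$1
  pages=$2
  for p in $(seq 1 "$pages"); do
    echo "https://api.github.com/orgs/$org/repos?per_page=100&page=$p"
  done
}
```

Each URL can then be fed to any of the one-liners in this thread, e.g. `for u in $(api_page_urls myorg 3); do curl -s "$u" | ...; done`.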

tiriana commented Oct 7, 2015

Thank you @caniszczyk, and thank you @mattheworiordan.

scanf commented Dec 3, 2015

Thanks, this turned out useful for cloning

I tried:
curl -s https://api.github.com/orgs/[folder_name]/repos?per_page=20&page=81 | ruby -rubygems -e 'require "json"; JSON.load(STDIN.read).each { |repo| %x[git clone #{repo["ssh_url"]}] }'
It gives me the following error:
-e:1:in `<main>': undefined method `each' for nil:NilClass (NoMethodError)
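A likely cause of that error: the shell parses an unquoted `&` as "run the preceding command in the background", so curl's output never reaches ruby, JSON.load gets an empty string, and `.each` is called on nil. A sketch of the fix ([folder_name] is the commenter's own placeholder, kept as-is):

```shell
# Quote any URL containing '?' or '&' so the shell passes it to curl
# verbatim instead of splitting the pipeline at the '&'.
url='https://api.github.com/orgs/[folder_name]/repos?per_page=20&page=81'
echo "$url"
# curl -s "$url" | ruby ...   # the quoted form reaches ruby intact
```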

If people don't have Ruby installed (such as on Windows using the Git Bash prompt), you can use the following:

curl -s https://api.github.com/orgs/[[ORGANIZATION]]/repos\?per_page\=200 | perl -ne 'print "$1\n" if (/"ssh_url": "([^"]+)/)' | xargs -n 1 git clone

fgimian commented Jun 1, 2016

Thanks, great job 😄

Here's a version in Python which should work without any extras on both OS X and Linux (as long as Python 2.6 or newer is installed):

curl -s https://api.github.com/orgs/[[ORGANIZATION]]/repos | python -c $'import json, sys, os\nfor repo in json.load(sys.stdin): os.system("git clone " + repo["ssh_url"])'

erm3nda commented Jun 4, 2016

Thank you @fgimian, I always like to use as few dependencies as possible. Using the built-in Python json module is better than having to install anything extra, and it doesn't require Python 3, since 2.6 is (still) the legacy option.

I found it better to use clone_url instead of ssh_url, which requires you to have the proper rights. You can use clone_url to clone public repos.
curl -s https://api.github.com/orgs/[[ORGANIZATION]]/repos | python -c $'import json, sys, os\nfor repo in json.load(sys.stdin): os.system("git clone " + repo["clone_url"])'

1dal commented Jul 15, 2016

Ok, PHP one-liner here:

php -r 'foreach(json_decode(shell_exec("curl -s https://api.github.com/".readline("Target(\"orgs/twitter\", \"users/onedal88\" etc.): ")."/repos?per_page=31337")) as $r)system("git clone {$r->clone_url}");'

ameygat commented Aug 17, 2016

I have written Python 2 scripts for downloading all repos of a user or an organization: Github Python Scripts

boussou commented Oct 2, 2016

Hey guys, you don't need Python or Ruby for that.
Here's a pure shell version:

for i in `curl -s https://api.github.com/orgs/$ORGANIZATION/repos?per_page=200 | grep html_url | awk 'NR%2 == 0' | cut -d ':' -f 2-3 | tr -d '",'`; do git clone $i.git; done

Of course, if you put it in a file, you should replace $ORGANIZATION with $1.

Here is a script I created from these examples that builds two additional scripts to manage your Organization repos.
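The grep/awk/cut pipeline above depends on the exact pretty-printed layout the API happens to return. A slightly sturdier pure-shell variant (still JSON-naive; the function name is made up for this sketch) pulls the URLs with a single sed expression:

```shell
# Extract every "clone_url" value from pretty-printed JSON on stdin,
# one URL per line. Still not a real JSON parser - just a narrower regex.
extract_clone_urls() {
  sed -n 's/.*"clone_url": *"\([^"]*\)".*/\1/p'
}
# Usage: curl -s "$URL" | extract_clone_urls | xargs -n 1 git clone
```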

derFunk commented Dec 2, 2016

This bash script helped me clone private repos via SSH URLs with two-factor authentication (using an OAuth token - you have to check "Full control of private repositories"):

for i in `curl -u [[USERNAME:TOKEN]] -s "https://api.github.com/orgs/[[ORGANIZATION]]/repos" | grep ssh_url | cut -d ':' -f 2-3 | tr -d '",'`; do git clone $i; done

See here for token generation:

brydavis commented Jan 4, 2017

Thanks for the script!

Ran into permission / SSH key issues on a new computer.

So, I changed script to use "clone_url" instead of "ssh_url".

Here's the full line.

curl -s https://api.github.com/orgs/[[ORGANIZATION]]/repos?per_page=200 | ruby -rubygems -e 'require "json"; JSON.load(STDIN.read).each { |repo| %x[git clone #{repo["clone_url"]}] }'

For jq users, here's what it would look like:

curl -u TOKEN:x-oauth-basic 'https://api.github.com/orgs/[[ORGANIZATION]]/repos' | jq '.[].ssh_url' -r | while read url; do git clone "$url"; done

SISheogorath commented Mar 9, 2017

And let's simplify it even more:

wget -qO- https://api.github.com/orgs/[[ORGANIZATION]]/repos | jq ".[].ssh_url" | xargs -L 1 git clone

I keep getting this

Permission denied (publickey).
fatal: Could not read from remote repository.

Phlosioneer commented Mar 26, 2017

For those having issues with Permission denied (publickey), changing the json key from "ssh_url" to "html_url" will do the trick.

The original line would then become

curl -s https://api.github.com/orgs/twitter/repos?per_page=200 | ruby -rubygems -e 'require "json"; JSON.load(STDIN.read).each { |repo| %x[git clone \"#{repo["html_url"]}\"] }'

I also found adding --depth 1 right after "git clone" helped speed things up a lot, if you're just trying to quickly search or quickly build the repos.

The original line would then become

curl -s https://api.github.com/orgs/twitter/repos?per_page=200 | ruby -rubygems -e 'require "json"; JSON.load(STDIN.read).each { |repo| %x[git clone --depth 1 #{repo["ssh_url"]}] }'

ypid commented Apr 26, 2017

@Phlosioneer "clone_url" is the proper URL for this case. "html_url" has the word "html" in it 😉

gagomes commented Jul 7, 2017

My twist on this, with a parallel factor of 4 (i.e. 4 git clones happening at the same time):

curl -s https://api.github.com/orgs/[[ORGANIZATION]]/repos\?per_page\=200 | grep clone_url | awk -F '"' '{print $4}' | xargs -n 1 -P 4 git clone
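The `-P 4` flag is what provides the parallelism; `-n 1` gives each clone its own invocation. A network-free sketch of the same fan-out, with echo standing in for git clone:

```shell
# xargs -n 1 -P 4 passes one argument per command and keeps up to four
# commands running at once. echo stands in for git clone so this runs
# offline; with -P the output order can vary between runs.
printf '%s\n' repo1 repo2 repo3 | xargs -n 1 -P 4 echo would-clone
```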

arel commented Aug 29, 2017

If anyone is looking for this on an enterprise GitHub installation, replace the URL with:

https://[hostname]/api/v3/orgs/[organization]/repos
I'm trying this on our enterprise Git. Thanks to all - this solution works.
But the total number of repositories is more than a hundred, and even though I pass 'per_page=200' or more, the JSON returned always contains only 100 entries.
Does anyone have a solution?

thomson commented Oct 18, 2017

For private repos and paginating through them, the steps I took were to create an access token and follow their pagination documentation.

Then, making a request to the organization's repo list endpoint and looking at only the headers, you can see the Link header and how to fetch the second and last pages of the organization's repo list.

curl -H "Authorization: token $TOKEN" -I 'https://api.github.com/orgs/<organization>/repos?per_page=100'

From there, using the link for the next page of results found in the Link header, you can grab the rest of the repos - plugging into the ruby solution shared above:

curl -H "Authorization: token $TOKEN" -s 'https://api.github.com/orgs/<organization id>/repos?per_page=100&page=2' | ruby -rubygems -e 'require "json"; JSON.load(STDIN.read).each { |repo| %x[git clone #{repo["ssh_url"]}] }'

This script, btw, is great and covers most cases.
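Instead of reading the Link header by hand, the page loop can also simply run until the API returns an empty array. A sketch with the curl call factored out into a fetch_page stand-in, so the loop itself can run offline ($TOKEN and the org URL are assumptions, as above):

```shell
# Walk pages until fetch_page returns an empty JSON array ("[]").
# fetch_page is a stand-in for:
#   curl -H "Authorization: token $TOKEN" -s \
#     "https://api.github.com/orgs/ORG/repos?per_page=100&page=$1"
all_pages() {
  page=1
  while :; do
    body=$(fetch_page "$page")
    [ "$body" = "[]" ] && break
    printf '%s\n' "$body"
    page=$((page + 1))
  done
}
```

Each emitted page can then be piped into the ruby or jq one-liners above.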

ciacci1234 commented Dec 11, 2017

@caniszczyk Many thanks for taking the time to post this gist 🌸 I and many others were able to preserve our bootcamp's curriculum for personal use because of it 👍
