async fetching of urls using goroutines and channels
```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Placeholder URLs; the gist's original list goes here.
var urls = []string{
	"http://golang.org/",
	"http://www.example.com/",
}

type HttpResponse struct {
	url      string
	response *http.Response
	err      error
}

func asyncHttpGets(urls []string) []*HttpResponse {
	ch := make(chan *HttpResponse, len(urls)) // buffered
	responses := []*HttpResponse{}
	for _, url := range urls {
		go func(url string) {
			fmt.Printf("Fetching %s \n", url)
			resp, err := http.Get(url)
			ch <- &HttpResponse{url, resp, err}
		}(url)
	}
	for {
		select {
		case r := <-ch:
			fmt.Printf("%s was fetched\n", r.url)
			responses = append(responses, r)
			if len(responses) == len(urls) {
				return responses
			}
		case <-time.After(50 * time.Millisecond):
			// Give up on stragglers and return what we have so far.
			return responses
		}
	}
}

func main() {
	for _, result := range asyncHttpGets(urls) {
		if result.err != nil {
			fmt.Printf("%s error: %v\n", result.url, result.err)
			continue
		}
		fmt.Printf("%s status: %s\n", result.url, result.response.Status)
	}
}
```

ismasan commented Sep 29, 2012


Your main() was slightly broken, see:

ismasan commented Sep 29, 2012

I'm trying to understand why you've got that Sleep() in there. Is it because buffered channels are non-blocking? Wouldn't it be more efficient to use a non-buffered channel so your select() gets responses as soon as they're available?

  • (I am just starting with Go, sorry if the answer is obvious!)

mattetti commented Sep 29, 2012

The sleep isn't required; I added it so the loop won't be too tight, since I know that fetching a URL takes some time.

bgentry commented Sep 29, 2012

You should be careful to close your response bodies when you're finished with them. Doing so releases the TCP connection to be reused for future requests.

Obviously it doesn't matter for this simple program but it would cause a leak if this was long running or looping.

kr commented Nov 4, 2012

Hi! It's nice to see more Go examples such as this one showing up in slide decks. :)

Couple of comments on this gist:


mattetti commented Nov 28, 2012

As @kr pointed out, a nicer version that doesn't require returning a slice is available here:

How about fetching HTTP requests with a timeout?
This one does not work; it gives a runtime error.

What I'm basically trying to do is fetch URLs with a timeout (I've kept the timeout very small because I want it to time out deliberately), but on timeout it throws a runtime error.

[I am new to Go]
