@avalanche123
Created November 21, 2012 19:53
A Tour of Go. Exercise: Web Crawler
package main

import (
	"fmt"
)

type Fetcher interface {
	// Fetch returns the body of URL and
	// a slice of URLs found on that page.
	Fetch(url string) (body string, urls []string, err error)
}

// result carries everything a background fetch learned about one URL.
type result struct {
	url, body string
	urls      []string
	err       error
	depth     int
}

// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher) {
	results := make(chan *result)
	fetched := make(map[string]bool)
	fetch := func(url string, depth int) {
		body, urls, err := fetcher.Fetch(url)
		results <- &result{url, body, urls, err, depth}
	}
	go fetch(url, depth)
	fetched[url] = true
	// 1 url is currently being fetched in the background; loop while any fetch is outstanding
	for fetching := 1; fetching > 0; fetching-- {
		res := <-results
		// skip failed fetches
		if res.err != nil {
			fmt.Println(res.err)
			continue
		}
		fmt.Printf("found: %s %q\n", res.url, res.body)
		// follow links if depth has not been exhausted
		if res.depth > 0 {
			for _, u := range res.urls {
				// don't attempt to re-fetch a known url; decrement depth
				if !fetched[u] {
					fetching++
					go fetch(u, res.depth-1)
					fetched[u] = true
				}
			}
		}
	}
	close(results)
}

func main() {
	Crawl("http://golang.org/", 4, fetcher)
}

// fakeFetcher is a Fetcher that returns canned results.
type fakeFetcher map[string]*fakeResult

type fakeResult struct {
	body string
	urls []string
}

func (f fakeFetcher) Fetch(url string) (string, []string, error) {
	if res, ok := f[url]; ok {
		return res.body, res.urls, nil
	}
	return "", nil, fmt.Errorf("not found: %s", url)
}

// fetcher is a populated fakeFetcher.
var fetcher = fakeFetcher{
	"http://golang.org/": &fakeResult{
		"The Go Programming Language",
		[]string{
			"http://golang.org/pkg/",
			"http://golang.org/cmd/",
		},
	},
	"http://golang.org/pkg/": &fakeResult{
		"Packages",
		[]string{
			"http://golang.org/",
			"http://golang.org/cmd/",
			"http://golang.org/pkg/fmt/",
			"http://golang.org/pkg/os/",
		},
	},
	"http://golang.org/pkg/fmt/": &fakeResult{
		"Package fmt",
		[]string{
			"http://golang.org/",
			"http://golang.org/pkg/",
		},
	},
	"http://golang.org/pkg/os/": &fakeResult{
		"Package os",
		[]string{
			"http://golang.org/",
			"http://golang.org/pkg/",
		},
	},
}
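The termination logic in Crawl may look unusual: instead of a WaitGroup, a single fetching counter tracks how many goroutines are still in flight, and the loop exits when it reaches zero. A minimal standalone sketch of that counter pattern (the collect helper and work items here are illustrative, not part of the gist):

```go
package main

import "fmt"

// collect launches one goroutine per work item, then loops until every
// outstanding goroutine has reported back on the channel — the same
// in-flight counter pattern Crawl uses above.
func collect(work []string) int {
	results := make(chan string)
	for _, w := range work {
		go func(w string) { results <- w + "-done" }(w)
	}
	done := 0
	for fetching := len(work); fetching > 0; fetching-- {
		<-results
		done++
	}
	return done
}

func main() {
	fmt.Println("completed:", collect([]string{"a", "b", "c"})) // completed: 3
}
```

Because only the loop's goroutine increments and decrements the counter, no extra synchronization is needed to decide when to stop.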
immortalCockroach commented Nov 20, 2016

Thanks for your solution.
But since we only crawl URLs whose depth is greater than 0, shouldn't the check if res.depth > 0 be if res.depth > 1?

@colm-anseo

One cannot assume this:

L52: fetched[u] = true

while the fetch is still running in the background here:

L51: go fetch(u, res.depth - 1)

The fetch may fail (network error, etc.), so the boolean tracking whether something is truly "fetched" should be recorded within the fetch function, which already has error trapping.
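One race-free way to act on this observation — without letting the fetch goroutines write to the map themselves — is to keep all map writes in the scheduling loop and un-mark a URL when its fetch reports an error, so a later sighting retries it. A sequential sketch of that idea (crawlWithRetry and the fail-once fetch stub are illustrative, not from the gist):

```go
package main

import "fmt"

// crawlWithRetry processes a queue of URLs, un-marking a URL on failure
// so a later occurrence in the queue retries it. fetch is a stand-in
// that fails exactly once per URL, simulating a transient network error.
func crawlWithRetry(queue []string) map[string]bool {
	fetched := make(map[string]bool)
	attempts := make(map[string]int)
	fetch := func(url string) error {
		attempts[url]++
		if attempts[url] == 1 {
			return fmt.Errorf("transient error: %s", url)
		}
		return nil
	}
	for len(queue) > 0 {
		url := queue[0]
		queue = queue[1:]
		if fetched[url] {
			continue
		}
		fetched[url] = true // mark as scheduled
		if err := fetch(url); err != nil {
			fmt.Println(err)
			delete(fetched, url) // un-mark so the requeued copy is not skipped
			queue = append(queue, url)
			continue
		}
		fmt.Println("fetched:", url)
	}
	return fetched
}

func main() {
	crawlWithRetry([]string{"http://a/", "http://b/", "http://a/"})
}
```

In the concurrent crawler the same delete would go in the res.err != nil branch of Crawl's receive loop, which is safe because only that goroutine touches the map.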

@cahitbeyaz

Also, from https://tour.golang.org/concurrency/10: "Hint: you can keep a cache of the URLs that have been fetched on a map, but maps alone are not safe for concurrent use!"

@ashishnegi

@beyazc The map is not accessed concurrently; only the one Crawl goroutine touches it.

@mybluefish

@ashishnegi Inside Crawl, go fetch goroutines run alongside the code that touches the map, and since maps alone are not safe for concurrent use, a cache that contains a sync.Mutex should be added to keep it safe, as @beyazc mentioned.
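For what it's worth, a mutex-guarded cache along the lines the Tour's hint suggests could look like the sketch below; the cache type and Visit method are made-up names for illustration, not part of the gist:

```go
package main

import (
	"fmt"
	"sync"
)

// cache is a concurrency-safe set of visited URLs.
type cache struct {
	mu   sync.Mutex
	seen map[string]bool
}

// Visit reports whether url was already recorded, recording it if not.
// The mutex makes check-and-set atomic across goroutines.
func (c *cache) Visit(url string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.seen[url] {
		return true
	}
	c.seen[url] = true
	return false
}

func main() {
	c := &cache{seen: make(map[string]bool)}
	var wg sync.WaitGroup
	hits := make(chan bool, 100)
	// 100 goroutines race to visit the same URL; exactly one wins.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			hits <- c.Visit("http://golang.org/")
		}()
	}
	wg.Wait()
	close(hits)
	first := 0
	for h := range hits {
		if !h {
			first++
		}
	}
	fmt.Println("first visits:", first) // first visits: 1
}
```

With such a cache, fetch goroutines could safely record URLs themselves, whereas the gist's single-loop design avoids the need for a lock entirely — both are valid answers to the exercise.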
