Last active Dec 9, 2016
Limit parallel goroutines with a buffered channel.
package main

import (
	"log"
	"sync"
	"time"
)

const Limit = 5 // max goroutines running at once

func main() {
	log.SetFlags(log.Ltime) // format log output hh:mm:ss
	var wg sync.WaitGroup
	workers := make(chan struct{}, Limit) // buffered channel used as a semaphore

	doWork := func(i int, j string) {
		defer wg.Done()
		defer func() { <-workers }() // free a slot when this worker finishes
		time.Sleep(2 * time.Second)
		log.Printf("Worker %d working on %s\n", i, j)
	}

	for j := 0; j < 15; j++ {
		work := string(rune('a' + j)) // "a", "b", ...
		log.Printf("Work %s enqueued\n", work)
		workers <- struct{}{} // blocks once Limit slots are taken
		wg.Add(1)
		go doWork(j, work)
	}

	wg.Wait() // wait for all workers before exiting
}

@eduncan911 commented May 25, 2016

With this pattern, if you set your Limit to 1 million, you'd get 1 million goroutines using 4 KB of RAM each, which ends up using 4 GB of RAM. But more urgently, if you are accessing 1 million file descriptors, you'll hit the "too many open files" issue the author described.

The point of the author's blog post is to instead use a pool of workers that you limit to a controlled number. Not 1 million, but 50 or 200 or so.
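For comparison, here is a minimal sketch of the worker-pool pattern that comment describes, reusing the gist's workload; the pool size of 5 and the job payload are illustrative, not taken from the original post. A fixed number of long-lived goroutines drain a shared jobs channel, so at most poolSize goroutines ever exist no matter how many jobs are queued.

package main

import (
	"log"
	"sync"
	"time"
)

const poolSize = 5 // hypothetical; the comment above suggests 50 or 200

func main() {
	log.SetFlags(log.Ltime)
	jobs := make(chan string)
	var wg sync.WaitGroup

	// Start a fixed pool of workers up front.
	for i := 0; i < poolSize; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for job := range jobs { // workers exit when jobs is closed
				time.Sleep(2 * time.Second)
				log.Printf("Worker %d working on %s\n", id, job)
			}
		}(i)
	}

	// Enqueue the same 15 jobs as the gist.
	for j := 0; j < 15; j++ {
		jobs <- string(rune('a' + j)) // blocks until a worker is free
	}
	close(jobs)
	wg.Wait()
}

The design difference: the buffered-channel semaphore above spawns one goroutine per job and only caps how many run concurrently, while the pool creates exactly poolSize goroutines total and feeds them jobs, which keeps goroutine count (and per-goroutine stack memory) bounded.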


This comment has been minimized.

Copy link
Owner Author

@heppu (owner, author) commented Jun 20, 2016

That is true. I just got interested in how one could achieve the same thing using a buffered channel instead of a worker pool, and drifted a little from the original use case. =)
