@nesv
Last active December 11, 2018 21:41
package main

import "fmt"

var WorkerQueue chan chan WorkRequest

func StartDispatcher(nworkers int) {
	// First, initialize the channel we are going to put the workers' work channels into.
	WorkerQueue = make(chan chan WorkRequest, nworkers)

	// Now, create all of our workers.
	for i := 0; i < nworkers; i++ {
		fmt.Println("Starting worker", i+1)
		worker := NewWorker(i+1, WorkerQueue)
		worker.Start()
	}

	go func() {
		for {
			select {
			case work := <-WorkQueue:
				fmt.Println("Received work request")
				go func() {
					worker := <-WorkerQueue
					fmt.Println("Dispatching work request")
					worker <- work
				}()
			}
		}
	}()
}
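This file does not compile on its own: WorkQueue, WorkRequest, and NewWorker are defined in the other files of the gist (the collector and the worker). For readers looking at this file in isolation, here is a minimal sketch of the pieces the dispatcher assumes; the field names and buffer size below are illustrative, not necessarily the gist's exact code.

// Sketch of the supporting pieces the dispatcher assumes; illustrative only.

// WorkRequest carries one unit of work. The Name field is a placeholder.
type WorkRequest struct {
	Name string
}

// WorkQueue is the buffered channel that incoming requests are pushed onto.
var WorkQueue = make(chan WorkRequest, 100)

// Worker owns a Work channel that it re-registers with the dispatcher's
// WorkerQueue each time it is ready for more work.
type Worker struct {
	ID          int
	Work        chan WorkRequest
	WorkerQueue chan chan WorkRequest
	QuitChan    chan bool
}

func NewWorker(id int, workerQueue chan chan WorkRequest) Worker {
	return Worker{
		ID:          id,
		Work:        make(chan WorkRequest),
		WorkerQueue: workerQueue,
		QuitChan:    make(chan bool),
	}
}

// Start pulls work off the worker's own channel until it is told to quit.
func (w Worker) Start() {
	go func() {
		for {
			// Advertise availability, then wait for a request or a quit signal.
			w.WorkerQueue <- w.Work
			select {
			case work := <-w.Work:
				fmt.Println("worker", w.ID, "handling", work.Name)
			case <-w.QuitChan:
				fmt.Println("worker", w.ID, "stopping")
				return
			}
		}
	}()
}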
@jasonwbarnett

+1 line 8 s/but/put/

@schmohlio

Thanks for the great article. You mentioned "The reason we send the work request to the worker in another goroutine, is so that we make sure the work queue never fills up." Just to confirm: if downstream workers get bogged down, this could cause an exorbitant number of goroutines to be created. Could that result in OOM errors on the server? Thanks @nesv
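
The growth schmohlio describes comes from the inner go func() in the dispatcher: every request pulled off WorkQueue gets its own goroutine parked on <-WorkerQueue until a worker frees up. A hedged sketch of the opposite trade-off, assuming the same WorkQueue and WorkerQueue as above (startBoundedDispatcher is a name invented here, not part of the gist):

// Hypothetical variant: dispatch synchronously, so at most one request at a
// time waits for a free worker. The buffered WorkQueue then provides the
// back-pressure, and producers block when it fills up, instead of the
// dispatcher accumulating an unbounded number of goroutines.
func startBoundedDispatcher() {
	go func() {
		for work := range WorkQueue {
			worker := <-WorkerQueue // blocks until some worker re-registers
			worker <- work
		}
	}()
}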

@satarsa

satarsa commented Feb 15, 2018

I have tried this example. What I have found out is that if you stop your workers, then these goroutines

go func() {
	worker := <-WorkerQueue
	fmt.Println("Dispatching work request")
	worker <- work
}()

will be blocked forever and abandoned, i.e. they just leak. If I do not stop the workers, I don't understand how I can cancel the previous work requests.
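
One way to avoid the leak satarsa describes is to give the dispatch goroutines a way to give up. A hedged sketch using context cancellation (startDispatcherWithContext is a hypothetical variant, not part of the original gist; it assumes the standard-library context package is imported):

// Hypothetical cancellable dispatcher: every blocking receive and send also
// watches ctx.Done(), so cancelling the context releases any dispatch
// goroutine still waiting on a worker instead of leaking it.
func startDispatcherWithContext(ctx context.Context, nworkers int) {
	WorkerQueue = make(chan chan WorkRequest, nworkers)
	for i := 0; i < nworkers; i++ {
		worker := NewWorker(i+1, WorkerQueue)
		worker.Start()
	}

	go func() {
		for {
			select {
			case <-ctx.Done():
				return
			case work := <-WorkQueue:
				go func(work WorkRequest) {
					select {
					case <-ctx.Done():
						return // dispatcher cancelled; stop waiting for a worker
					case worker := <-WorkerQueue:
						select {
						case worker <- work:
						case <-ctx.Done():
						}
					}
				}(work)
			}
		}
	}()
}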
