Goroutine producer-consumer pattern panics


I have implemented the goroutine producer-consumer pattern as mentioned in this answer, but it sometimes panics with the error: “panic: sync: negative WaitGroup counter”. My sample code is below:

package main

import (
    "bytes"
    "encoding/gob"
    "log"
    "sync"

    _ "net/http/pprof"
)

// Test ...
type Test struct {
    PropA []int
    PropB []int
}

// Clone deep-copies a to b
func Clone(a, b interface{}) {
    buff := new(bytes.Buffer)
    enc := gob.NewEncoder(buff)
    dec := gob.NewDecoder(buff)
    enc.Encode(a)
    dec.Decode(b)
}

func main() {
    test := Test{
        PropA: []int{211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222},
        PropB: []int{111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124},
    }

    var wg, wg2 sync.WaitGroup
    ch := make(chan int, 5)
    results := make(chan Test, 5)

    // start consumers
    wg.Add(4)
    for i := 0; i < 4; i++ {
        go func(ch <-chan int, results chan<- Test) {
            defer wg.Done()
            for propA := range ch {
                var temp Test
                Clone(&test, &temp)
                temp.PropA = []int{propA}
                results <- temp
            }
        }(ch, results)
    }

    // start producing
    go func(ch chan<- int) {
        defer wg.Done()
        for _, propA := range test.PropA {
            ch <- propA
        }
        close(ch)
    }(ch)

    // start collecting results
    wg2.Add(1)
    go func(results <-chan Test) {
        defer wg2.Done()
        for tt := range results {
            log.Printf("finished propA %+v\n", tt.PropA[0])
        }
    }(results)

    wg.Wait() // Wait for all consumers to finish processing jobs

    // All jobs are processed, no more values will be sent on results:
    close(results)
    wg2.Wait()
}

When I run the above code 4-5 times, it panics at least once. Sometimes the error message is instead “panic: send on closed channel”. I don’t understand how the channel is being closed before the producer finishes sending, or why the WaitGroup counter goes negative. Can someone please explain this to me?

The stack trace for the panic is as below (the filename for the above code is mycode.go):

panic: send on closed channel
    panic: sync: negative WaitGroup counter

goroutine 21 [running]:
sync.(*WaitGroup).Add(0xc420134020, 0xffffffffffffffff)
    /usr/local/go/src/sync/waitgroup.go:75 +0x134
sync.(*WaitGroup).Done(0xc420134020)
    /usr/local/go/src/sync/waitgroup.go:100 +0x34
panic(0x7622e0, 0x80ffa0)
    /usr/local/go/src/runtime/panic.go:491 +0x283
main.main.func1(0xc420134020, 0xc420136090, 0xc420148000, 0xc42014a000)
    /home/mycode.go:45 +0x80
created by main.main
    /home/mycode.go:39 +0x21d
exit status 2


Your bookkeeping on wg is off by one: your producer calls wg.Done(), but there is no matching Add() called to account for it. Which panic you get depends on the variability of the Go scheduler, but once you see the fix I am sure you will see how you could get a “negative WaitGroup counter” and/or a “send on closed channel”, all depending on timing.

The fix is easy: just add a wg.Add(1) before you start your producer.


// start producing
wg.Add(1) // account for the producer's deferred wg.Done()
go func(ch chan<- int) {
    defer wg.Done()
    for _, propA := range test.PropA {
        ch <- propA
    }
    close(ch)
}(ch)

In the future, when you see a “negative WaitGroup counter” it is a guarantee that your calls to Add and Done are not matched 1:1.

Answered By – sberry

Answer Checked By – Cary Denson (GoLangFix Admin)
