Description
Even after calling Close() on a cache, goleak detects 4 goroutines still running:
goleak: Errors on successful test run: found unexpected goroutines:
[Goroutine 37 in state select, with github.com/maypok86/otter/internal/unixtime.startTimer.func1 on top of the stack:
github.com/maypok86/otter/internal/unixtime.startTimer.func1()
/Users/thomas/go/pkg/mod/github.com/maypok86/[email protected]/internal/unixtime/unixtime.go:43 +0xd0
created by github.com/maypok86/otter/internal/unixtime.startTimer in goroutine 35
/Users/thomas/go/pkg/mod/github.com/maypok86/[email protected]/internal/unixtime/unixtime.go:39 +0xa8
Goroutine 38 in state sleep, with time.Sleep on top of the stack:
time.Sleep(0x3b9aca00)
/opt/homebrew/opt/go/libexec/src/runtime/time.go:300 +0xe0
github.com/maypok86/otter/internal/core.(*Cache[...]).cleanup(0x0)
/Users/thomas/go/pkg/mod/github.com/maypok86/[email protected]/internal/core/cache.go:391 +0x28
created by github.com/maypok86/otter/internal/core.NewCache[...] in goroutine 35
/Users/thomas/go/pkg/mod/github.com/maypok86/[email protected]/internal/core/cache.go:188 +0x694
Goroutine 39 in state sync.Cond.Wait, with sync.runtime_notifyListWait on top of the stack:
sync.runtime_notifyListWait(0x14000246018, 0x0)
/opt/homebrew/opt/go/libexec/src/runtime/sema.go:587 +0x154
sync.(*Cond).Wait(0x14000246008)
/opt/homebrew/opt/go/libexec/src/sync/cond.go:71 +0xcc
github.com/maypok86/otter/internal/queue.(*Growable[...]).Pop(0x100e69d60)
/Users/thomas/go/pkg/mod/github.com/maypok86/[email protected]/internal/queue/growable.go:71 +0x9c
github.com/maypok86/otter/internal/core.(*Cache[...]).process(0x100e6e240)
/Users/thomas/go/pkg/mod/github.com/maypok86/[email protected]/internal/core/cache.go:459 +0x44
created by github.com/maypok86/otter/internal/core.NewCache[...] in goroutine 35
/Users/thomas/go/pkg/mod/github.com/maypok86/[email protected]/internal/core/cache.go:191 +0x6f0
Goroutine 40 in state sleep, with time.Sleep on top of the stack:
time.Sleep(0x3b9aca00)
/opt/homebrew/opt/go/libexec/src/runtime/time.go:300 +0xe0
github.com/maypok86/otter/internal/core.(*Cache[...]).cleanup(0x0)
/Users/thomas/go/pkg/mod/github.com/maypok86/[email protected]/internal/core/cache.go:391 +0x28
created by github.com/maypok86/otter/internal/core.NewCache[...] in goroutine 35
/Users/thomas/go/pkg/mod/github.com/maypok86/[email protected]/internal/core/cache.go:188 +0x694
]
This is caused by a lack of synchronization when closing: the background goroutines are signaled to stop (if they are signaled at all) but never waited on.
unixtime/startTimer()
a.k.a. Goroutine 37
func startTimer() {
    done = make(chan struct{})
    atomic.StoreInt64(&startTime, time.Now().Unix())
    atomic.StoreUint32(&now, uint32(0))
    go func() {
        ticker := time.NewTicker(time.Second)
        defer ticker.Stop()
        for {
            select {
            case t := <-ticker.C:
                //nolint:gosec // there will never be an overflow
                atomic.StoreUint32(&now, uint32(t.Unix()-StartTime()))
            case <-done:
                return
            }
        }
    }()
}
A struct{}{} is sent to the done channel to signal the goroutine to stop, but the caller has no way to check or wait for the goroutine to actually exit.
Also, by convention the case <-done should come first in such a for-select loop: handling termination before anything else avoids unnecessary work after shutdown and keeps the loop's exit behavior consistent and predictable. (Note that Go's select picks pseudo-randomly among ready cases, so ordering alone does not enforce priority; waiting on the goroutine is what actually guarantees a clean shutdown.) A waitable variant is sketched below.
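A minimal sketch of such a variant, assuming close(done) replaces the struct{}{} send and a package-level sync.WaitGroup is added (wg and stopTimer are hypothetical names here, not the library's actual API):

import (
    "sync"
    "sync/atomic"
    "time"
)

var (
    startTime int64
    now       uint32
    done      chan struct{}
    wg        sync.WaitGroup // hypothetical: lets the caller wait for the goroutine to exit
)

func startTimer() {
    done = make(chan struct{})
    atomic.StoreInt64(&startTime, time.Now().Unix())
    atomic.StoreUint32(&now, 0)
    wg.Add(1)
    go func() {
        defer wg.Done()
        ticker := time.NewTicker(time.Second)
        defer ticker.Stop()
        for {
            select {
            case <-done: // listed first, per the convention above
                return
            case t := <-ticker.C:
                atomic.StoreUint32(&now, uint32(t.Unix()-atomic.LoadInt64(&startTime)))
            }
        }
    }()
}

func stopTimer() {
    close(done) // closing (rather than sending) unblocks the receiver even mid-tick
    wg.Wait()   // returns only after the goroutine has actually exited
}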
core/cleanup()
a.k.a. Goroutines 38 and 40
func (c *Cache[K, V]) cleanup() {
    for {
        time.Sleep(time.Second)
        c.evictionMutex.Lock()
        if c.isClosed {
            c.evictionMutex.Unlock()
            return
        }
        c.expiryPolicy.DeleteExpired()
        c.evictionMutex.Unlock()
    }
}
This routine, started in NewCache() via go cache.cleanup(), can still be sleeping when Close() is called and the program exits: time.Sleep cannot be interrupted, and Close() never waits for the loop to observe isClosed.
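An interruptible variant can replace time.Sleep with a ticker plus a done channel, so that Close() can both stop the loop and wait for it. A sketch, where the done and wg fields are hypothetical additions to the Cache struct and the real Close() does more than what is shown here:

func (c *Cache[K, V]) cleanup() {
    defer c.wg.Done() // hypothetical sync.WaitGroup field
    ticker := time.NewTicker(time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-c.done: // hypothetical channel, closed by Close()
            return
        case <-ticker.C:
            c.evictionMutex.Lock()
            c.expiryPolicy.DeleteExpired()
            c.evictionMutex.Unlock()
        }
    }
}

func (c *Cache[K, V]) Close() {
    close(c.done) // interrupts cleanup even while it waits on the ticker
    c.wg.Wait()   // does not return until cleanup has exited
}

With the done channel in place, the isClosed flag and its locking are no longer needed in this loop.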
queue/Pop()
a.k.a. Goroutine 39
func (g *Growable[T]) Pop() T {
    g.mutex.Lock()
    for g.count == 0 {
        g.notEmpty.Wait()
    }
    item := g.pop()
    g.mutex.Unlock()
    return item
}
This goroutine is stuck waiting on the sync.Cond signal, which never arrives once the cache is closed, so Pop blocks forever.
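A common fix for a consumer blocked on a condition variable is a closed flag plus a Broadcast. A sketch, where isClosed and Close are hypothetical additions and Pop's signature gains an ok result so the processing loop can detect shutdown:

func (g *Growable[T]) Close() {
    g.mutex.Lock()
    g.isClosed = true // hypothetical field, guarded by the same mutex
    g.mutex.Unlock()
    g.notEmpty.Broadcast() // wakes every goroutine blocked in Pop
}

func (g *Growable[T]) Pop() (T, bool) {
    g.mutex.Lock()
    defer g.mutex.Unlock()
    for g.count == 0 && !g.isClosed {
        g.notEmpty.Wait()
    }
    if g.count == 0 { // closed and drained: tell the caller to stop
        var zero T
        return zero, false
    }
    return g.pop(), true
}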
To Reproduce
Steps to reproduce the behavior:
func TestCacheClose(t *testing.T) {
    defer goleak.VerifyNone(t) // reports the leaked goroutines

    cache, err := otter.MustBuilder[string, net.IP](1000).
        WithTTL(1 * time.Hour).
        Build()
    if err != nil {
        t.Fatal(err)
    }
    cache.Close()
}
This will make only the cleanup() leak show up.
In the package I maintain, the cache is held in a struct and closed only once all access to it is done, and it leaks on every unit test.
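For context, the leak reports above come from go.uber.org/goleak wired into the package's tests; the standard hook looks like this (illustrative, not the actual test file):

import (
    "testing"

    "go.uber.org/goleak"
)

// TestMain fails the package's tests if any goroutine is still running
// once all tests have completed.
func TestMain(m *testing.M) {
    goleak.VerifyTestMain(m)
}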
Expected behavior
No goroutine leaks.
Environment
Personal dev setup
- OS: macOS 14.6.1 (23G93)
- ARCH: arm64
- CPU: Apple M1 Ultra
and
GitHub Actions environment
- OS: Ubuntu 24.04
- ARCH: amd64
- CPU: AMD EPYC 7763 64-Core Processor