:speedboat: a limited consumer goroutine or unlimited goroutine pool for easier goroutine handling and cancellation

Overview

Package pool


Package pool implements a limited consumer goroutine or unlimited goroutine pool for easier goroutine handling and cancellation.

Features:

  • Dead simple to use and makes no assumptions about how you will use it.
  • Automatic recovery from panicking consumer goroutines, returning the panic as an error in the results
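
The recovery feature can be pictured with a stdlib-only sketch (the `safeRun` and `workFunc` names are hypothetical, not this library's code): a deferred recover() turns a panic inside the work function into an ordinary error for the caller, so one bad work unit cannot crash the whole pool.

```go
package main

import "fmt"

// workFunc mirrors the shape of a unit of work: a value or an error.
type workFunc func() (interface{}, error)

// safeRun runs fn and converts any panic into an ordinary error via a
// deferred recover, using Go's named return values.
func safeRun(fn workFunc) (value interface{}, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("work unit panicked: %v", r)
		}
	}()
	return fn()
}

func main() {
	_, err := safeRun(func() (interface{}, error) {
		panic("boom")
	})
	fmt.Println(err) // prints "work unit panicked: boom"
}
```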

Pool v2 advantages over Pool v1:

  • Up to 300% faster due to lower contention (BenchmarkSmallRun used to take 3 seconds, now takes 1)
  • Cancels are much faster
  • Easier to use; you no longer need to know the number of Work Units to be processed
  • Pool can now be used as a long-running/globally defined pool if desired (the v1 Pool was only good for one run)
  • Supports single units of work as well as batching
  • Pool can easily be reset after a Close() or Cancel() for reuse
  • Multiple Batches can be run, and even cancelled, on the same Pool
  • Supports individual Work Unit cancellation
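
Individual Work Unit cancellation is conceptually similar to handing every queued task its own context.Context, so cancelling one unit does not disturb the rest of the pool. A minimal stdlib sketch under that assumption (the `unit` type and its methods are hypothetical, not the library's implementation):

```go
package main

import (
	"context"
	"fmt"
)

// unit pairs a task with its own cancellable context.
type unit struct {
	ctx    context.Context
	cancel context.CancelFunc
}

func newUnit() *unit {
	ctx, cancel := context.WithCancel(context.Background())
	return &unit{ctx: ctx, cancel: cancel}
}

// isCancelled plays the role the library's WorkUnit.IsCancelled() serves:
// a non-blocking check of the unit's cancellation state.
func (u *unit) isCancelled() bool {
	select {
	case <-u.ctx.Done():
		return true
	default:
		return false
	}
}

func main() {
	a, b := newUnit(), newUnit()
	a.cancel() // cancel only the first unit
	fmt.Println(a.isCancelled(), b.isCancelled()) // prints "true false"
}
```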

Pool v3 advantages over Pool v2:

  • Objects are concrete types rather than interfaces, allowing for fewer breaking changes going forward.
  • There are now two Pool types, completely interchangeable: a limited worker pool and an unlimited pool.
  • Simpler usage of Work Units: instead of <-work.Done you can now do work.Wait()
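
The work.Wait() change can be pictured as wrapping the old done channel behind a method. A stdlib-only sketch with hypothetical types, not the library's actual code:

```go
package main

import "fmt"

// workUnit sketches a v3-style unit: the done channel is hidden
// behind Wait() instead of being consumed directly as <-work.Done.
type workUnit struct {
	done  chan struct{}
	value interface{}
}

// Wait blocks until the unit's result is ready.
func (w *workUnit) Wait() { <-w.done }

func main() {
	w := &workUnit{done: make(chan struct{})}
	go func() {
		w.value = "result"
		close(w.done) // completing the work releases Wait()
	}()
	w.Wait()
	fmt.Println(w.value) // prints "result"
}
```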

Installation

Use go get.

go get gopkg.in/go-playground/pool.v3

Then import the pool package into your own code.

import "gopkg.in/go-playground/pool.v3"

Important Information READ THIS!

  • It is recommended that you cancel a pool or batch from the calling function, not inside the Unit of Work. Cancelling from within will work fine, but because of goroutine scheduling and context switching it may not cancel as quickly as a call from outside.
  • When batching, DO NOT FORGET TO CALL batch.QueueComplete(); if you forget, the Batch WILL deadlock.
  • It is your responsibility to call WorkUnit.IsCancelled() to check whether the unit has been cancelled after a blocking operation, such as waiting for a connection from a pool. (optional)
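
The QueueComplete() rule mirrors ordinary channel semantics: ranging over batch.Results() behaves like ranging over a channel, and QueueComplete() plays the role of close(). A stdlib-only analogy (not the library's code) showing why the consumer loop ends only after that call:

```go
package main

import "fmt"

func main() {
	results := make(chan int)

	// Producer: queue work, then signal completion.
	go func() {
		for i := 0; i < 3; i++ {
			results <- i * i
		}
		close(results) // analogous to batch.QueueComplete()
	}()

	// Consumer: this loop terminates only because the channel
	// was closed; without close() it would block forever.
	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println(sum) // 0 + 1 + 4 = 5
}
```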

Usage and documentation

Please see http://godoc.org/gopkg.in/go-playground/pool.v3 for detailed usage docs.

Examples:

Both Limited Pool and Unlimited Pool have the same signatures and are completely interchangeable.

Per Unit Work

package main

import (
	"fmt"
	"time"

	"gopkg.in/go-playground/pool.v3"
)

func main() {

	p := pool.NewLimited(10)
	defer p.Close()

	user := p.Queue(getUser(13))
	other := p.Queue(getOtherInfo(13))

	user.Wait()
	if err := user.Error(); err != nil {
		// handle error
	}

	// do stuff with user
	username := user.Value().(string)
	fmt.Println(username)

	other.Wait()
	if err := other.Error(); err != nil {
		// handle error
	}

	// do stuff with other
	otherInfo := other.Value().(string)
	fmt.Println(otherInfo)
}

func getUser(id int) pool.WorkFunc {

	return func(wu pool.WorkUnit) (interface{}, error) {

		// simulate waiting for something, like TCP connection to be established
		// or connection from pool grabbed
		time.Sleep(time.Second * 1)

		if wu.IsCancelled() {
			// return values not used
			return nil, nil
		}

		// ready for processing...

		return "Joeybloggs", nil
	}
}

func getOtherInfo(id int) pool.WorkFunc {

	return func(wu pool.WorkUnit) (interface{}, error) {

		// simulate waiting for something, like TCP connection to be established
		// or connection from pool grabbed
		time.Sleep(time.Second * 1)

		if wu.IsCancelled() {
			// return values not used
			return nil, nil
		}

		// ready for processing...

		return "Other Info", nil
	}
}

Batch Work

package main

import (
	"fmt"
	"time"

	"gopkg.in/go-playground/pool.v3"
)

func main() {

	p := pool.NewLimited(10)
	defer p.Close()

	batch := p.Batch()

	// for max speed Queue in another goroutine
	// but it is not required, just can't start reading results
	// until all items are Queued.

	go func() {
		for i := 0; i < 10; i++ {
			batch.Queue(sendEmail("email content"))
		}

		// DO NOT FORGET THIS OR GOROUTINES WILL DEADLOCK
	// if you call Cancel() it calls QueueComplete() internally
		batch.QueueComplete()
	}()

	for email := range batch.Results() {

		if err := email.Error(); err != nil {
			// handle error
			// maybe call batch.Cancel()
		}

		// use return value
		fmt.Println(email.Value().(bool))
	}
}

func sendEmail(email string) pool.WorkFunc {
	return func(wu pool.WorkUnit) (interface{}, error) {

		// simulate waiting for something, like TCP connection to be established
		// or connection from pool grabbed
		time.Sleep(time.Second * 1)

		if wu.IsCancelled() {
			// return values not used
			return nil, nil
		}

		// ready for processing...

		return true, nil // everything ok, send nil, error if not
	}
}

Benchmarks

Run on MacBook Pro (Retina, 15-inch, Late 2013) 2.6 GHz Intel Core i7 16 GB 1600 MHz DDR3 using Go 1.6.2

Run with 1, 2, 4, 8 and 16 CPUs to show it scales well; 16 is double the number of logical cores on this machine.

NOTE: Cancellation times CAN vary depending on how busy your system is and how the goroutine scheduler behaves, but the worst case I've seen is 1s to cancel instead of 0ns.

go test -cpu=1,2,4,8,16 -bench=. -benchmem=true
PASS
BenchmarkLimitedSmallRun              	       1	1002492008 ns/op	    3552 B/op	      55 allocs/op
BenchmarkLimitedSmallRun-2            	       1	1002347196 ns/op	    3568 B/op	      55 allocs/op
BenchmarkLimitedSmallRun-4            	       1	1010533571 ns/op	    4720 B/op	      73 allocs/op
BenchmarkLimitedSmallRun-8            	       1	1008883324 ns/op	    4080 B/op	      63 allocs/op
BenchmarkLimitedSmallRun-16           	       1	1002317677 ns/op	    3632 B/op	      56 allocs/op
BenchmarkLimitedSmallCancel           	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedSmallCancel-2         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedSmallCancel-4         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedSmallCancel-8         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedSmallCancel-16        	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedLargeCancel           	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedLargeCancel-2         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedLargeCancel-4         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedLargeCancel-8         	 1000000	      1006 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedLargeCancel-16        	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkLimitedOverconsumeLargeRun   	       1	4027153081 ns/op	   36176 B/op	     572 allocs/op
BenchmarkLimitedOverconsumeLargeRun-2 	       1	4003489261 ns/op	   32336 B/op	     512 allocs/op
BenchmarkLimitedOverconsumeLargeRun-4 	       1	4005579847 ns/op	   34128 B/op	     540 allocs/op
BenchmarkLimitedOverconsumeLargeRun-8 	       1	4004639857 ns/op	   34992 B/op	     553 allocs/op
BenchmarkLimitedOverconsumeLargeRun-16	       1	4022695297 ns/op	   36864 B/op	     532 allocs/op
BenchmarkLimitedBatchSmallRun         	       1	1000785511 ns/op	    6336 B/op	      94 allocs/op
BenchmarkLimitedBatchSmallRun-2       	       1	1001459945 ns/op	    4480 B/op	      65 allocs/op
BenchmarkLimitedBatchSmallRun-4       	       1	1002475371 ns/op	    6672 B/op	      99 allocs/op
BenchmarkLimitedBatchSmallRun-8       	       1	1002498902 ns/op	    4624 B/op	      67 allocs/op
BenchmarkLimitedBatchSmallRun-16      	       1	1002202273 ns/op	    5344 B/op	      78 allocs/op
BenchmarkUnlimitedSmallRun            	       1	1002361538 ns/op	    3696 B/op	      59 allocs/op
BenchmarkUnlimitedSmallRun-2          	       1	1002230293 ns/op	    3776 B/op	      60 allocs/op
BenchmarkUnlimitedSmallRun-4          	       1	1002148953 ns/op	    3776 B/op	      60 allocs/op
BenchmarkUnlimitedSmallRun-8          	       1	1002120679 ns/op	    3584 B/op	      57 allocs/op
BenchmarkUnlimitedSmallRun-16         	       1	1001698519 ns/op	    3968 B/op	      63 allocs/op
BenchmarkUnlimitedSmallCancel         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedSmallCancel-2       	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedSmallCancel-4       	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedSmallCancel-8       	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedSmallCancel-16      	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedLargeCancel         	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedLargeCancel-2       	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedLargeCancel-4       	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedLargeCancel-8       	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedLargeCancel-16      	2000000000	         0.00 ns/op	       0 B/op	       0 allocs/op
BenchmarkUnlimitedLargeRun            	       1	1001631711 ns/op	   40352 B/op	     603 allocs/op
BenchmarkUnlimitedLargeRun-2          	       1	1002603908 ns/op	   38304 B/op	     586 allocs/op
BenchmarkUnlimitedLargeRun-4          	       1	1001452975 ns/op	   38192 B/op	     584 allocs/op
BenchmarkUnlimitedLargeRun-8          	       1	1005382882 ns/op	   35200 B/op	     537 allocs/op
BenchmarkUnlimitedLargeRun-16         	       1	1001818482 ns/op	   37056 B/op	     566 allocs/op
BenchmarkUnlimitedBatchSmallRun       	       1	1002391247 ns/op	    4240 B/op	      63 allocs/op
BenchmarkUnlimitedBatchSmallRun-2     	       1	1010313222 ns/op	    4688 B/op	      70 allocs/op
BenchmarkUnlimitedBatchSmallRun-4     	       1	1008364651 ns/op	    4304 B/op	      64 allocs/op
BenchmarkUnlimitedBatchSmallRun-8     	       1	1001858192 ns/op	    4448 B/op	      66 allocs/op
BenchmarkUnlimitedBatchSmallRun-16    	       1	1001228000 ns/op	    4320 B/op	      64 allocs/op

To put some of these benchmarks in perspective:

  • BenchmarkLimitedSmallRun did 10 seconds worth of processing in 1.002492008s
  • BenchmarkLimitedSmallCancel ran 20 jobs, cancelled on job 6 and ran in 0s
  • BenchmarkLimitedLargeCancel ran 1000 jobs, cancelled on job 6 and ran in 0s
  • BenchmarkLimitedOverconsumeLargeRun ran 100 jobs using 25 workers in 4.027153081s

License

Distributed under MIT License, please see license file in code for more details.

Comments
  • Example don't work


    The "Batch Work" example from the readme fails to build after go get / go build:

    .\main.go:12: undefined: pool.NewLimited
    .\main.go:44: cannot use func literal (type func(pool.WorkUnit) (interface {}, error)) as type pool.WorkFunc in return argument
    .\main.go:50: wu.IsCancelled undefined (type pool.WorkUnit has no field or method IsCancelled)
    
    bug 
    opened by akamajoris 6
  • Understand perf for marshalling a slice using pool


    Hey @joeybloggs, could you please take a look at this and let me know what you think - https://github.com/sudo-suhas/bulk-marshal? I don't have enough expertise to decipher the relatively poor performance of this library compared to goroutines with channels and wait groups. Would very much like to know your take on this.

    opened by sudo-suhas 2
  • Should the pool instance be reused?


    Hey @joeybloggs, thanks for this package. The abstractions make it dead simple to start and manage worker pools.

    How do you recommend I use the pool instance? Should I create and reuse it (by making it package scope) if I am only ever going to create and use a batch inside a function? Here's the code:

    package util
    
    import (
    	"runtime"
    
    	jsoniter "github.com/json-iterator/go"
    	"github.com/pkg/errors"
    	log "github.com/sirupsen/logrus"
    	"gopkg.in/go-playground/pool.v3"
    )
    
    // jsoniter.ConfigFastest marshals the float with 6 digits precision (lossy),
    // which is significantly faster. It also does not escape HTML.
    var json = jsoniter.ConfigFastest
    
    // Unmarshal uses jsoniter for efficiently unmarshalling the byte stream into
    // the struct pointer.
    func Unmarshal(bs []byte, v interface{}) error {
    	// See https://github.com/json-iterator/go/blob/master/example_test.go#L69-L88
    	iter := json.BorrowIterator(bs)
    	defer json.ReturnIterator(iter)
    
    	iter.ReadVal(v)
    	if iter.Error != nil {
    		log.WithError(iter.Error).
    			Error("Got error while trying to unmarshal value into given struct")
    		return errors.Wrap(iter.Error, "util: unmarshal using jsoniter.ConfigFastest failed")
    	}
    	return nil
    }
    
    func unmarshalWorker(bs []byte, val interface{}) pool.WorkFunc {
    	return func(wu pool.WorkUnit) (interface{}, error) {
    		if wu.IsCancelled() {
    			// return values not used
    			return nil, nil
    		}
    
    		return nil, Unmarshal(bs, val)
    	}
    }
    
    // BulkUnmarshal uses worker pool
    func BulkUnmarshal(bytesSlice [][]byte, vals []interface{}) error {
    	if len(bytesSlice) != len(vals) {
    		return errors.New(
    			"util: bulk unmarshal failed: length of bytes slice did not match targets",
    		)
    	}
    
    	p := pool.NewLimited(uint(runtime.NumCPU() * 2))
    	defer p.Close()
    
    	batch := p.Batch()
    
    	go func() {
    		for idx, bs := range bytesSlice {
    			batch.Queue(unmarshalWorker(bs, vals[idx]))
    		}
    
    		// DO NOT FORGET THIS OR GOROUTINES WILL DEADLOCK
    	// if you call Cancel() it calls QueueComplete() internally
    		batch.QueueComplete()
    	}()
    
    	for res := range batch.Results() {
    		if err := res.Error(); err != nil {
    			batch.Cancel()
    			return errors.Wrap(err, "util: bulk unmarshal failed")
    		}
    	}
    
    	return nil
    }
    
    opened by sudo-suhas 2
  • Came a http request, how to open a pool to deal with


    An HTTP request came in; how do I open a pool to deal with it? Can you give me an example? While the pool is running, if an HTTP request suddenly arrives, how do I insert it into the queue?

    opened by vinerr 1
  • FIX: p.cancel might be closed twice


    Cancel() might be invoked several times when worker goroutines get an error at the same time; as a result, p.cancel will be closed more than once.

    I have fixed this issue by checking p.cancelled before closing p.cancel.

     [recovered]
        panic: close of closed channel
    
    goroutine 2033 [running]:
    panic(0x583320, 0xc820279120)
        /usr/local/Cellar/go/1.6.2/libexec/src/runtime/panic.go:481 +0x3e6
    go.planetmeican.com/libra/util/pool.(*Pool).Cancel(0xc82008d3e0)
        /Users/zzz/go/src/go.planetmeican.com/libra/util/pool/pool.go:167 +0x25
    go.planetmeican.com/libra/util/pool.(*Pool).Queue.func1.1.1(0xc82008d3e0)
        /Users/zzz/go/src/go.planetmeican.com/libra/util/pool/pool.go:117 +0x286
    panic(0x670ec0, 0xc820017c80)
        /usr/local/Cellar/go/1.6.2/libexec/src/runtime/panic.go:443 +0x4e9
    go.planetmeican.com/libra/service.BatchMigrateUser.func1(0xc82031ce80)
        /Users/zzz/go/src/go.planetmeican.com/libra/service/migrate.go:125 +0x1a0
    go.planetmeican.com/libra/util/pool.(*Pool).Queue.func1.1(0xc82008d3e0)
        /Users/zzz/go/src/go.planetmeican.com/libra/util/pool/pool.go:136 +0x1b4
    created by go.planetmeican.com/libra/util/pool.(*Pool).Queue.func1
        /Users/zzz/go/src/go.planetmeican.com/libra/util/pool/pool.go:142 +0x49
    exit status 2
    
    bug enhancement 
    opened by zwh8800 1
  • Add Consumer Hook


    • now you can register a ConsumerHook function that will be run while firing up the consumer routines, and its return value will be set/passed to each job. This is particularly useful when creating a saving pool, so the consumer hook can create a database connection for each consumer to reuse instead of creating an additional one for each job.
    enhancement 
    opened by deankarn 0
  • A suggestion about Usage and documentation


    In the Batch Work example, email.Value() will be nil when batch.Cancel() has been called, which will cause a panic as a result of the wrong type assertion.

    for email := range batch.Results() {
        if err := email.Error(); err != nil {
            // handle error
            // maybe call batch.Cancel()
        }

        // use return value
        // email.Value() will be nil when batch.Cancel() has been called,
        // causing a panic from the wrong type assertion
        fmt.Println(email.Value().(bool))
    }
    

    It might be better to write it like this:

    for email := range batch.Results() {
        if err := email.Error(); err != nil {
            // handle error
            // maybe call batch.Cancel()
        } else {
            // use return value
            fmt.Println(email.Value().(bool))
        }
    }
    opened by fzxbl 0
  • fix deadlock when batch canceled, some workUnit's done channel can't be closed


    It happened in this situation:

    1. Batch.Cancel() is called
    2. wu.cancelling is set in cancelWithError()
    3. wu.writing is set in Queue(); now wu.done is never closed and the batch will wait forever
    opened by vvnotw 0
Releases(v3.1.1)
  • v3.1.1(Aug 23, 2016)

    What was Fixed?

    • Go 1.7's race detector got even better and found a potential race that was not detected in Go 1.6.x, so this release fixes it; no breaking changes, just update.
    Source code(tar.gz)
    Source code(zip)
  • v1.2.2(Jul 7, 2016)

  • v3.1.0(Jun 20, 2016)

    What's New?

    • Added a WaitAll() function to the batch, for when you need to wait for all work to be processed but don't need to know the results.

    e.g. when the Work Units handle their own errors, logging etc. and nothing needs to be reported back to the calling program.

    Source code(tar.gz)
    Source code(zip)
  • v3.0.0(Jun 20, 2016)

    Fix gopkg.in pointing to v2.

    Hi all, please update v3 by running go get -u gopkg.in/go-playground/pool.v3. I must have selected the wrong branch while cutting the v3 release initially and it was pointing to v2; apologies for any inconvenience.

    Source code(tar.gz)
    Source code(zip)
  • v3(Jun 16, 2016)

    What's New?

    • the library is no longer just a consumer pool; it now also handles a no-worker/unlimited goroutine option, and best of all the two are completely interchangeable.
    // Limited
    pool.NewLimited(10)
    
    // Unlimited
    pool.New()
    
    • Objects are concrete types rather than interfaces, allowing for fewer breaking changes going forward.
    • Simpler usage of Work Units: instead of <-work.Done you can now do work.Wait()
    Source code(tar.gz)
    Source code(zip)
  • v2.1.0(Jun 16, 2016)

  • v2.0.1(Jun 15, 2016)

    What Changed

    • fixed batch not unlocking mutex before return in rare case.
    • fixed race condition found in the tests, not the pool logic, but the actual test logic.
    • Added race detection testing in CI tests
    Source code(tar.gz)
    Source code(zip)
  • v2(Jun 15, 2016)

    What's New?

    • Up to 300% faster due to lower contention (BenchmarkSmallRun used to take 3 seconds, now takes 1)
    • Cancels are much faster
    • Easier to use; you no longer need to know the number of Work Units to be processed
    • Pool can now be used as a long-running/globally defined pool if desired (the v1 Pool was only good for one run)
    • Supports single units of work as well as batching
    • Pool can easily be reset after a Close() or Cancel() for reuse
    • Multiple Batches can be run, and even cancelled, on the same Pool
    • Supports individual Work Unit cancellation

    Examples

    see here and in the README

    Source code(tar.gz)
    Source code(zip)
  • v1.2.1(Jun 10, 2016)

  • v1.2(Dec 11, 2015)

    Added Consumer Hook

    • now you can register a ConsumerHook function that will be run while firing up the consumer routines, and its return value will be set/passed to each job. This is particularly useful when creating a saving pool, so the consumer hook can create a database connection for each consumer to reuse instead of creating an additional one for each job.

    Example

    https://github.com/go-playground/pool/blob/v1/pool_test.go#L55

    Source code(tar.gz)
    Source code(zip)
  • v1.1(Nov 10, 2015)

  • v1.0.1(Nov 5, 2015)

    • Fixed race bug when recovering consumer function error/panic.
    • Added a custom Recovery error type so that you can differentiate between an error and a recovery error.

    NOTE

    • This is a breaking change, but not a new major version, because v1 wasn't working in the first place for this case.
    Source code(tar.gz)
    Source code(zip)
  • v1.0(Nov 5, 2015)

    Initial Release

    The pool package enables easy use and management of a consumer goroutine pool.

    Why

    I needed a way to boost productivity while sticking as close to the core as possible and keeping flexibility of running how you want.

    Essentially it just abstracts a bunch of the repetitive code that you normally would write to boost your productivity.

    Source code(tar.gz)
    Source code(zip)
Owner
Go Playground
multiple packages, libraries and programs to further the advancement of Go!