High-performance async-io (proactor) networking for Golang.

Overview

gaio

Introduction

中文介绍 (introduction in Chinese)

In a typical Go network program, you first call conn := lis.Accept() to obtain a connection, start a goroutine with go func(net.Conn) to handle the incoming data, allocate a buffer with buf := make([]byte, 4096), and finally block on conn.Read(buf).

For a server holding more than 10K connections exchanging frequent short messages (e.g. < 512B), the cost of context switching dominates the cost of receiving a message (a context switch takes at least 1000 CPU cycles, or about 600ns on a 2.1GHz processor).

By replacing the one-goroutine-per-connection scheme with edge-triggered I/O multiplexing, the 2KB(R)+2KB(W) goroutine stack per connection can be saved. And by using the internal swap buffer, the buf := make([]byte, 4096) allocation can be saved as well (at some cost in performance).

gaio is a proactor-pattern networking library that satisfies both the memory constraints and the performance goals above.

Features

  1. Designed for >C10K concurrent connections, with maximized parallelism and good single-connection throughput.
  2. Read(ctx, conn, buffer) can be called with a nil buffer to make use of the internal swap buffer.
  3. Non-intrusive design: the library works with net.Listener and net.Conn (with syscall.RawConn support), so it is easy to integrate into existing software.
  4. Amortized context-switching cost for tiny messages; able to handle frequent chat-message exchanges.
  5. The application decides when to delegate a net.Conn to gaio; for example, you can delegate after a handshake procedure, or after applying net.TCPConn settings.
  6. The application decides when to submit read or write requests, so per-connection back-pressure can be propagated to the peer to slow down sending. This feature is particularly useful for transmitting data from A to B via gaio when B is slower than A.
  7. Tiny: around 1000 LOC, easy to debug.
  8. Supports Linux and BSD.

Conventions

  1. Once you submit an async read/write request with its net.Conn to gaio.Watcher, that conn is delegated to the gaio.Watcher on first submission. Subsequent direct use of the conn, such as conn.Read or conn.Write, will return an error, but TCP properties set via SetReadBuffer(), SetWriteBuffer(), SetLinger(), SetKeepAlive(), and SetNoDelay() are inherited.
  2. If you decide not to use a connection anymore, you can call Watcher.Free(net.Conn) to close the socket and free related resources immediately.
  3. If you forget to call Watcher.Free(net.Conn), the runtime garbage collector will clean up the related system resources once nothing in the program holds the net.Conn.
  4. If you forget to call Watcher.Close(), the runtime garbage collector will clean up ALL related system resources once nothing in the program holds the Watcher.
  5. For connection load-balancing, you can create multiple gaio.Watcher instances and distribute net.Conn among them with your own strategy.
  6. For acceptor load-balancing, you can use go-reuseport as the listener.
  7. For read requests submitted with a nil buffer, the []byte returned from Watcher.WaitIO() is SAFE to use only until the next call to Watcher.WaitIO() returns.

TL;DR

package main

import (
        "log"
        "net"

        "github.com/xtaci/gaio"
)

// this goroutine waits for all IO events and sends back everything it
// received, asynchronously
func echoServer(w *gaio.Watcher) {
        for {
                // loop wait for any IO events
                results, err := w.WaitIO()
                if err != nil {
                        log.Println(err)
                        return
                }

                for _, res := range results {
                        switch res.Operation {
                        case gaio.OpRead: // read completion event
                                if res.Error == nil {
                                        // send back everything, we won't start to read again until write completes.
                                        // submit an async write request
                                        w.Write(nil, res.Conn, res.Buffer[:res.Size])
                                }
                        case gaio.OpWrite: // write completion event
                                if res.Error == nil {
                                        // since write has completed, let's start read on this conn again
                                        w.Read(nil, res.Conn, res.Buffer[:cap(res.Buffer)])
                                }
                        }
                }
        }
}

func main() {
        w, err := gaio.NewWatcher()
        if err != nil {
                log.Fatal(err)
        }
        defer w.Close()

        go echoServer(w)

        ln, err := net.Listen("tcp", "localhost:0")
        if err != nil {
                log.Fatal(err)
        }
        log.Println("echo server listening on", ln.Addr())

        for {
                conn, err := ln.Accept()
                if err != nil {
                        log.Println(err)
                        return
                }
                log.Println("new client", conn.RemoteAddr())

                // submit the first async read IO request
                err = w.Read(nil, conn, make([]byte, 128))
                if err != nil {
                        log.Println(err)
                        return
                }
        }
}

More examples

Push server

package main

import (
        "fmt"
        "log"
        "net"
        "time"

        "github.com/xtaci/gaio"
)

func main() {
        // to load-balance accepts, simply replace net.Listen with reuseport.Listen; everything else stays the same
        // ln, err := reuseport.Listen("tcp", "localhost:0")
        ln, err := net.Listen("tcp", "localhost:0")
        if err != nil {
                log.Fatal(err)
        }

        log.Println("pushing server listening on", ln.Addr(), ", use telnet to receive push")

        // create a watcher
        w, err := gaio.NewWatcher()
        if err != nil {
                log.Fatal(err)
        }

        // channel
        ticker := time.NewTicker(time.Second)
        chConn := make(chan net.Conn)
        chIO := make(chan gaio.OpResult)

        // watcher.WaitIO goroutine
        go func() {
                for {
                        results, err := w.WaitIO()
                        if err != nil {
                                log.Println(err)
                                return
                        }

                        for _, res := range results {
                                chIO <- res
                        }
                }
        }()

        // main logic loop, i.e. your program's core loop
        go func() {
                var conns []net.Conn
                for {
                        select {
                        case res := <-chIO: // receive IO events from watcher
                                if res.Error != nil {
                                        continue
                                }
                                conns = append(conns, res.Conn)
                        case t := <-ticker.C: // receive ticker events
                                push := []byte(fmt.Sprintf("%s\n", t))
                                // every conn receives the same 'push' content
                                for _, conn := range conns {
                                        w.Write(nil, conn, push)
                                }
                                conns = nil
                        case conn := <-chConn: // receive new connection events
                                conns = append(conns, conn)
                        }
                }
        }()

        // this loop keeps accepting connections and sends them to the main loop
        for {
                conn, err := ln.Accept()
                if err != nil {
                        log.Println(err)
                        return
                }
                chConn <- conn
        }
}

Documentation

For complete documentation, see the associated Godoc.

Benchmarks

Test Case       Throughput test with 64KB buffer
Description     A client keeps sending 64KB blocks to the server; the server keeps reading and echoing back whatever it receives; the client keeps receiving until all bytes have arrived successfully
Command         go test -v -run=^$ -bench Echo
Macbook Pro     1695.27 MB/s    518 B/op    4 allocs/op
Linux AMD64     1883.23 MB/s    518 B/op    4 allocs/op
Raspberry Pi4   354.59 MB/s     334 B/op    4 allocs/op

Test Case       8K concurrent connection echo test
Description     Start 8192 clients; each client sends 1KB to the server; the server echoes back whatever it receives; each client keeps receiving until all bytes have arrived successfully
Command         go test -v -run=8k
Macbook Pro     1.09s
Linux AMD64     0.94s
Raspberry Pi4   2.09s

Regression

regression

X -> number of concurrent connections, Y -> time of completion in seconds

Best-fit values
    Slope           8.613e-005 ± 5.272e-006
    Y-intercept     0.08278 ± 0.03998
    X-intercept     -961.1
    1/Slope         11610

95% Confidence Intervals
    Slope           7.150e-005 to 0.0001008
    Y-intercept     -0.02820 to 0.1938
    X-intercept     -2642 to 287.1

Goodness of Fit
    R square        0.9852
    Sy.x            0.05421

Is slope significantly non-zero?
    F               266.9
    DFn,DFd         1,4
    P Value         < 0.0001
    Deviation from horizontal?  Significant

Data
    Number of XY pairs  6
    Equation        Y = 8.613e-005*X + 0.08278
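
Plugging numbers into the fitted line gives a quick feel for the result; for example, at 10000 concurrent connections the model predicts roughly 0.94s to completion (a sanity-check snippet, not part of gaio):

```go
package main

import "fmt"

// fitted regression line from the table above: Y = 8.613e-05*X + 0.08278
func predictedSeconds(connections float64) float64 {
	const slope = 8.613e-05   // seconds per additional connection
	const intercept = 0.08278 // seconds at X = 0
	return slope*connections + intercept
}

func main() {
	fmt.Printf("%.2f\n", predictedSeconds(10000)) // 0.94
}
```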

License

gaio source code is available under the MIT License.

Status

Stable

Comments
  •  data race issue

    go version 1.13.8, OS: CentOS 7.2

    $/usr/local/go/bin/go test -race .
    2020/02/17 16:56:45 accept tcp 127.0.0.1:35019: use of closed network connection
    2020/02/17 16:56:45 watcher closed
    2020/02/17 16:56:48 accept tcp 127.0.0.1:40316: use of closed network connection
    2020/02/17 16:56:48 accept tcp 127.0.0.1:40504: use of closed network connection
    2020/02/17 16:56:48 watcher closed
    2020/02/17 16:56:48 accept tcp 127.0.0.1:35054: use of closed network connection
    2020/02/17 16:56:48 watcher closed
    2020/02/17 16:56:48 accept tcp 127.0.0.1:43625: use of closed network connection
    2020/02/17 16:56:48 watcher closed
    2020/02/17 16:56:50 accept tcp 127.0.0.1:36666: use of closed network connection
    2020/02/17 16:56:50 watcher closed
    2020/02/17 16:56:50 accept tcp 127.0.0.1:41583: use of closed network connection
    2020/02/17 16:56:50 watcher closed
    ==================
    WARNING: DATA RACE
    Write at 0x00c0003c6070 by goroutine 81:
      github.com/xtaci/gaio.TestReadFull()
          /home/go/gaio/aio_test.go:417 +0x4e2
      testing.tRunner()
          /usr/local/go/src/testing/testing.go:909 +0x199
    
    Previous write at 0x00c0003c6070 by goroutine 90:
      github.com/xtaci/gaio.TestReadFull.func1()
          /home/go/gaio/aio_test.go:410 +0xaa
    
    Goroutine 81 (running) created at:
      testing.(*T).Run()
          /usr/local/go/src/testing/testing.go:960 +0x651
      testing.runTests.func1()
          /usr/local/go/src/testing/testing.go:1202 +0xa6
      testing.tRunner()
          /usr/local/go/src/testing/testing.go:909 +0x199
      testing.runTests()
          /usr/local/go/src/testing/testing.go:1200 +0x521
      testing.(*M).Run()
          /usr/local/go/src/testing/testing.go:1117 +0x2ff
      main.main()
          _testmain.go:112 +0x223
    
    Goroutine 90 (finished) created at:
      github.com/xtaci/gaio.TestReadFull()
          /home/go/gaio/aio_test.go:408 +0x438
      testing.tRunner()
          /usr/local/go/src/testing/testing.go:909 +0x199
    ==================
    2020/02/17 16:56:51 accept tcp 127.0.0.1:37621: use of closed network connection
    --- FAIL: TestReadFull (1.49s)
        aio_test.go:437: written: <nil> 104857600
        aio_test.go:439: read: <nil> 104857600
        testing.go:853: race detected during execution of test
    2020/02/17 16:56:51 watcher closed
    2020/02/17 16:56:52 accept tcp 127.0.0.1:37807: use of closed network connection
    2020/02/17 16:56:52 watcher closed
    2020/02/17 16:56:52 accept tcp 127.0.0.1:41224: use of closed network connection
    2020/02/17 16:56:52 watcher closed
    2020/02/17 16:56:52 accept tcp 127.0.0.1:39491: use of closed network connection
    2020/02/17 16:56:52 watcher closed
    2020/02/17 16:56:53 accept tcp 127.0.0.1:45587: use of closed network connection
    2020/02/17 16:56:53 watcher closed
    2020/02/17 16:56:53 accept tcp 127.0.0.1:33780: use of closed network connection
    2020/02/17 16:56:53 watcher closed
    2020/02/17 16:56:54 accept tcp 127.0.0.1:40090: use of closed network connection
    2020/02/17 16:56:54 watcher closed
    2020/02/17 16:56:56 accept tcp 127.0.0.1:39469: use of closed network connection
    2020/02/17 16:56:56 watcher closed
    2020/02/17 16:56:58 accept tcp 127.0.0.1:39311: use of closed network connection
    2020/02/17 16:56:58 watcher closed
    2020/02/17 16:56:58 accept tcp 127.0.0.1:42949: use of closed network connection
    2020/02/17 16:56:58 watcher closed
    2020/02/17 16:56:58 accept tcp 127.0.0.1:45309: use of closed network connection
    2020/02/17 16:56:58 watcher closed
    2020/02/17 16:56:59 accept tcp 127.0.0.1:36619: use of closed network connection
    2020/02/17 16:56:59 watcher closed
    2020/02/17 16:57:01 accept tcp 127.0.0.1:39795: use of closed network connection
    2020/02/17 16:57:01 watcher closed
    2020/02/17 16:57:02 accept tcp 127.0.0.1:45845: use of closed network connection
    2020/02/17 16:57:02 watcher closed
    2020/02/17 16:57:03 accept tcp 127.0.0.1:37195: use of closed network connection
    2020/02/17 16:57:03 watcher closed
    2020/02/17 16:57:05 accept tcp 127.0.0.1:45655: use of closed network connection
    2020/02/17 16:57:05 watcher closed
    2020/02/17 16:57:06 accept tcp 127.0.0.1:43165: use of closed network connection
    2020/02/17 16:57:06 watcher closed
    2020/02/17 16:57:08 accept tcp 127.0.0.1:33451: use of closed network connection
    2020/02/17 16:57:08 watcher closed
    FAIL
    FAIL    github.com/xtaci/gaio   23.326s
    FAIL
    
    opened by tsingson 2
  • Memory consumption benchmark?

    @xtaci Thanks a lot for the great work as usual! 👍

    Since this project is primarily aimed at reducing memory consumption and context switching, I'd suggest that it would be very helpful to add a side-by-side comparison of memory usage for the two test cases shown in README.

    opened by riobard 2
  • does this support UDP?

    Really interesting projects you do; I'm learning a lot here and really like your coding style!

    But I'm wondering: does this support UDP? And if not, how much work would a UDP version require?

    opened by spexocta 1
  • Question about copy(pending, w.pending) in the loop function

    A question:

    // func (w *watcher) loop()
    if cap(pending) < cap(w.pending) {
        pending = make([]*aiocb, 0, cap(w.pending))
    }
    pending = pending[:len(w.pending)]
    copy(pending, w.pending)
    

    Given that the copy length is len(w.pending), why does the first statement here allocate with capacity cap(w.pending)?

    opened by fumeboy 1
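
One way to read the allocation strategy asked about above: sizing the scratch slice by the source's capacity rather than its length means later, larger copies (up to that capacity) reuse the same backing array instead of reallocating. A minimal stdlib-only illustration of that amortization (an interpretation, not gaio code):

```go
package main

import "fmt"

func main() {
	var pending []int
	src := make([]int, 0, 16) // source slice with spare capacity
	src = append(src, 1, 2, 3)

	// grow the scratch slice to the source's CAPACITY, not its length,
	// so any future copy of up to cap(src) elements needs no new allocation
	if cap(pending) < cap(src) {
		pending = make([]int, 0, cap(src))
	}
	pending = pending[:len(src)] // reslice to the current length
	copy(pending, src)

	fmt.Println(len(pending), cap(pending)) // 3 16
}
```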
  • tls support

    On a whim I polished up nbio again. kqueue support is in (but I don't have a Mac, so I can only run the tests on a CI platform to show it's basically OK; I haven't done much other testing), and Windows uses std/net.

    The timer went back to a heap. With the timing wheel I found that if the epoll_wait interval is too long, timer precision is too low and response gets slow (and performance poor) under heavy event load; if it's too short, ticks are too frequent and CPU spikes when there's no data.

    I copied the standard library's tls and hacked it to support nbio; after simple tests for things like message framing ("sticky packets"), it should be basically stable: https://github.com/lesismal/nbio/tree/master/examples/tls

    This tls hack can be used by other async libraries too, but it requires implementing net.Conn as the under-layer of tls.Conn. I remember you once told me that having nbio.Conn implement net.Conn wasn't really necessary; in hindsight it paid off unexpectedly, haha. Implementing net.Conn is also quite useful for exposing SetDeadline to the application layer. I also wrote an http 1.x parser, so http server is now supported, and then a websocket upgrader, so websocket is supported too.

    I'll slow down for a bit; later I'd like to support http 2.0 as well. The standard library tls is rather wasteful, and when I have time I'd like to rewrite it.

    opened by lesismal 0
  • OOM for 3K websockets

    Hi,

    I am trying to implement a websocket push server using this library, and I constantly run into OOM with a large number of sockets, e.g. 3K. I am wondering why this happens; the code is below. OOM doesn't occur with a small number of sockets, where memory stays stable; it only happens at larger counts.

            clients := map[string]net.Conn{}
    	go func() {
    		for {
    			select {
    			case res := <-chIO: // receive IO events from watcher
    				if res.Error != nil {
    					log.Error().Msgf("Error receiving IO event from watcher: %v", res.Error)
    					delete(clients, res.Conn.RemoteAddr().String())
    					err = w.Free(res.Conn)
    					if err != nil {
    						log.Error().Msgf("error freeing connection: %v", err)
    					}
    					continue
    				}
    			case feed := <-out:
    				f := ws.NewTextFrame(feed)
    				bts := CompileHeader(f.Header)
    				for index, conn := range clients {
    					if conn != nil {
    						err = w.Write(nil, conn, bts)
    						if err != nil {
    							if errors.Is(err, syscall.EPIPE) || errors.Is(err, syscall.ECONNRESET) {
    								delete(clients, index)
    							} else {
    								log.Error().Msgf("unable to write header: %v", err)
    							}
    						}
    						err = w.Write(nil, conn, f.Payload)
    						if err != nil {
    							if errors.Is(err, syscall.EPIPE) || errors.Is(err, syscall.ECONNRESET) {
    								delete(clients, index)
    							} else {
    								log.Error().Msgf("unable to write payload: %v", err)
    							}
    						}
    					}
    				}
    			case conn := <-chConn: // receive new connection events
    				clients[conn.RemoteAddr().String()] = conn
    			}
    		}
    	}()
    

    I did some profiling with pprof and here is what I have

    heap profile: 35098695: 5565869472 [80607043: 16351145656] @ heap/2
    14246148: 2507322048 [14882947: 2619398672] @ 0x84d345 0x475a92 0x84b25a 0x8f7a31 0x8f785b 0x46d041
    	0x84d344	github.com/xtaci/gaio.init.0.func1+0x24			/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:28
    	0x475a91	sync.(*Pool).Get+0xb1					/usr/local/go/src/sync/pool.go:148
    	0x84b259	github.com/xtaci/gaio.(*watcher).aioCreate+0x1b9	/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:272
    	0x8f7a30	github.com/xtaci/gaio.(*watcher).Write+0x7b0		/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:240
    
    14246061: 2507306736 [14882895: 2619389520] @ 0x84d345 0x475a92 0x84b25a 0x8f7845 0x8f75db 0x46d041
    	0x84d344	github.com/xtaci/gaio.init.0.func1+0x24			/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:28
    	0x475a91	sync.(*Pool).Get+0xb1					/usr/local/go/src/sync/pool.go:148
    	0x84b259	github.com/xtaci/gaio.(*watcher).aioCreate+0x1b9	/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:272
    	0x8f7844	github.com/xtaci/gaio.(*watcher).Write+0x5c4		/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:240
    
    6377990: 306143520 [24276054: 1165250592] @ 0x84ca52 0x84ca3e 0x84cb0f 0x84c0b8 0x46d041
    	0x84ca51	container/list.(*List).insertValue+0x4d1		/usr/local/go/src/container/list/list.go:104
    	0x84ca3d	container/list.(*List).PushBack+0x4bd			/usr/local/go/src/container/list/list.go:155
    	0x84cb0e	github.com/xtaci/gaio.(*watcher).handlePending+0x58e	/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:563
    	0x84c0b7	github.com/xtaci/gaio.(*watcher).loop+0x2d7		/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:437
    
    1: 132481024 [1: 132481024] @ 0x84b425 0x8f7a31 0x8f785b 0x46d041
    	0x84b424	github.com/xtaci/gaio.(*watcher).aioCreate+0x384	/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:276
    	0x8f7a30	github.com/xtaci/gaio.(*watcher).Write+0x7b0		/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:240
    
    1: 67821568 [1: 67821568] @ 0x84b425 0x8f7845 0x8f75db 0x46d041
    	0x84b424	github.com/xtaci/gaio.(*watcher).aioCreate+0x384	/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:276
    	0x8f7844	github.com/xtaci/gaio.(*watcher).Write+0x5c4		/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:240
    
    159604: 28090304 [159656: 28099456] @ 0x84d345 0x475a92 0x84b25a 0x8f7545 0x8f74e7 0x46d041
    	0x84d344	github.com/xtaci/gaio.init.0.func1+0x24			/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:28
    	0x475a91	sync.(*Pool).Get+0xb1					/usr/local/go/src/sync/pool.go:148
    	0x84b259	github.com/xtaci/gaio.(*watcher).aioCreate+0x1b9	/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:272
    	0x8f7544	github.com/xtaci/gaio.(*watcher).Free+0x2c4		/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:256
    
    
    opened by indexk09 7
  • go 1.17 fails to build on Linux

    Environment to reproduce: CentOS release 6.3 (Final), go version go1.15.2 linux/amd64


    The error output is as follows:

    github.com/xtaci/gaio

    cgo: gcc did not produce error at completed:1 on input:

    #line 1 "cgo-builtin-prolog" #include <stddef.h> /* for ptrdiff_t and size_t below */

    /* Define intgo when compiling with GCC. */ typedef ptrdiff_t intgo;

    #define GO_CGO_GOSTRING_TYPEDEF typedef struct { const char *p; intgo n; } GoString; typedef struct { char *p; intgo n; intgo c; } GoBytes; GoString GoString(char *p); GoString GoStringN(char *p, int l); GoBytes GoBytes(void *p, int n); char *CString(GoString); void *CBytes(GoBytes); void *_CMalloc(size_t);

    attribute ((unused)) static size_t _GoStringLen(GoString s) { return (size_t)s.n; }

    attribute ((unused)) static const char *_GoStringPtr(GoString s) { return s.p; } #line 5 "/home/batsdk/code/baidu/personal-code/crab-console/vendor/github.com/xtaci/gaio/affinity_linux.go"

    #define _GNU_SOURCE #include <sched.h> #include <pthread.h>

    void lock_thread(int cpuid) { pthread_t tid; cpu_set_t cpuset;

    tid = pthread_self();
    CPU_ZERO(&cpuset);
    CPU_SET(cpuid, &cpuset);
    pthread_setaffinity_np(tid, sizeof(cpu_set_t), &cpuset);
    

    }

    #line 1 "cgo-generated-wrapper" #line 1 "not-declared" void __cgo_f_1_1(void) { typeof(int) *__cgo_undefined__1; } #line 1 "not-type" void __cgo_f_1_2(void) { int *__cgo_undefined__2; } #line 1 "not-int-const" void __cgo_f_1_3(void) { enum { __cgo_undefined__3 = (int)*1 }; } #line 1 "not-num-const" void __cgo_f_1_4(void) { static const double __cgo_undefined__4 = (int); } #line 1 "not-str-lit" void __cgo_f_1_5(void) { static const char __cgo_undefined__5[] = (int); } #line 2 "not-declared" void __cgo_f_2_1(void) { typeof(lock_thread) *__cgo_undefined__1; } #line 2 "not-type" void __cgo_f_2_2(void) { lock_thread *__cgo_undefined__2; } #line 2 "not-int-const" void __cgo_f_2_3(void) { enum { __cgo_undefined__3 = (lock_thread)*1 }; } #line 2 "not-num-const" void __cgo_f_2_4(void) { static const double __cgo_undefined__4 = (lock_thread); } #line 2 "not-str-lit" void __cgo_f_2_5(void) { static const char __cgo_undefined__5[] = (lock_thread); } #line 1 "completed" int __cgo__1 = __cgo__2;

    full error output: cc1: error: unrecognized command line option "-fno-lto"

    opened by lixianmin 0
  • introducing cbList to replace list.List

    While profiling memory I found that gaio internally uses list.List to store reader/writer callback contexts, so every async read/write (writes mostly succeed immediately via tryWrite, so in practice it's mostly reads) dynamically allocates a list.Element.

    In typical applications a connection has only one pending read request, or a handful of write requests, so the number of readers/writers is relatively small and a slice suffices. This PR implements cbList on top of a slice to replace list.List, reducing the number of memory allocations.


    opened by zjx20 0
  • allows WaitIO() to reuse the []OpResult slice

    Every call to WaitIO() allocates a new small []OpResult, which has some performance impact. This PR changes the WaitIO() interface so the caller can pass in a pre-allocated slice, avoiding the allocation on every call.

    buf := make([]gaio.OpResult, 0, 100)
    for {
      results, err := w.WaitIO(buf)
      ...
    }
    

    The caller can even reuse the returned slice directly:

    buf := make([]gaio.OpResult, 0, 100)
    for {
      results, err := w.WaitIO(buf)
      for _, r := range results {
        // ...
      }
      buf = results[:0]
    }
    


    opened by zjx20 0
  • `go get` error

    When I try to download the package using go get -u github.com/xtaci/gaio I receive the following output:

    # github.com/xtaci/gaio
    ..\..\..\..\pkg\mod\github.com\xtaci\[email protected]\aio_generic.go:132:2: undefined: watcher
    
    opened by joaquinodz 2
Releases (v1.2.12)
Owner

xtaci