Introduction

Netpoll is a high-performance, non-blocking I/O networking framework focused on RPC scenarios, developed by ByteDance.

RPC is usually heavy on processing logic, so I/O cannot be handled serially. But Go's standard library net provides a blocking I/O API, which forces an RPC framework into the One Conn One Goroutine design. Under high concurrency this creates a large number of goroutines and wastes significant cost on context switching. Besides, net.Conn has no API to check whether a connection is alive, so it is difficult to build an efficient connection pool for an RPC framework: the pool may accumulate many dead connections.
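
For reference, here is a minimal sketch of the One Conn One Goroutine pattern that the blocking API imposes (illustrative only; the listen address is made up):

    package main

    import (
    	"log"
    	"net"
    )

    func main() {
    	lis, err := net.Listen("tcp", ":8888")
    	if err != nil {
    		log.Fatal(err)
    	}
    	for {
    		conn, err := lis.Accept()
    		if err != nil {
    			continue
    		}
    		// One goroutine per connection: fine at low concurrency, but
    		// context switching between many goroutines becomes costly.
    		go func(c net.Conn) {
    			defer c.Close()
    			buf := make([]byte, 4096)
    			for {
    				n, err := c.Read(buf) // blocking read parks the goroutine
    				if err != nil {
    					return
    				}
    				c.Write(buf[:n]) // echo back
    			}
    		}(conn)
    	}
    }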

On the other hand, the open source community currently lacks Go networking libraries focused on RPC scenarios. Similar projects such as evio and gnet all target scenarios like Redis or HAProxy.

Netpoll was born to solve these problems. It draws inspiration from the designs of evio and netty, has excellent Performance, and is better suited to microservice architectures. Netpoll also provides a number of Features, and it is recommended as a replacement for net in some RPC scenarios.

We developed the RPC framework KiteX and the HTTP framework Hertz (to be open sourced) on top of Netpoll, both with industry-leading performance.

The Examples show how to build an RPC client and server using Netpoll.

For more information, please refer to Document.

Features

  • Already

    • LinkBuffer provides nocopy API for streaming reading and writing
    • gopool provides high-performance goroutine pool
    • mcache provides efficient memory reuse
    • multisyscall supports batch system calls
    • IsActive supports checking whether the connection is alive
    • Dialer supports building clients (see the sketch after this list)
    • EventLoop supports building a server
    • TCP, Unix Domain Socket
    • Linux, Mac OS
  • Future

    • io_uring
    • Shared Memory IPC
    • Serial scheduling I/O, suitable for pure computing
    • TLS
    • UDP
  • Unsupported

    • Windows
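
As a quick illustration of the client-side features above (Dialer, IsActive, and the nocopy Reader/Writer), here is a minimal sketch; it assumes an echo server listening on 127.0.0.1:8888 and abbreviates error handling:

    package main

    import (
    	"fmt"
    	"time"

    	"github.com/cloudwego/netpoll"
    )

    func main() {
    	// Dialer: build a client connection with a dial timeout.
    	conn, err := netpoll.DialConnection("tcp", "127.0.0.1:8888", time.Second)
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// IsActive: check liveness, e.g. before reusing a pooled connection.
    	if !conn.IsActive() {
    		panic("connection is not alive")
    	}

    	// Nocopy write, then flush to actually send.
    	writer := conn.Writer()
    	writer.WriteBinary([]byte("hello"))
    	if err := writer.Flush(); err != nil {
    		panic(err)
    	}

    	// Nocopy read of the 5-byte echo.
    	msg, _ := conn.Reader().Next(5)
    	fmt.Println(string(msg))
    }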

Performance

Benchmarks are not a numbers game; they should first meet the requirements of industrial use. In RPC scenarios, concurrent calls and waiting timeouts are mandatory support items.

Therefore, we require that the benchmark meet the following conditions:

  1. Support concurrent calls and a waiting timeout (1s)
  2. Use a simple protocol: a 4-byte header indicates the total length of the payload (sketched below)
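
For concreteness, a minimal sketch of this length-prefixed framing; the big-endian byte order is an assumption here, and the actual benchmark protocol may differ:

    package main

    import (
    	"encoding/binary"
    	"fmt"
    )

    // encodeFrame prepends a 4-byte header carrying the payload length.
    func encodeFrame(payload []byte) []byte {
    	buf := make([]byte, 4+len(payload))
    	binary.BigEndian.PutUint32(buf[:4], uint32(len(payload)))
    	copy(buf[4:], payload)
    	return buf
    }

    // decodeFrame reads the header back and slices out the payload.
    func decodeFrame(frame []byte) []byte {
    	n := binary.BigEndian.Uint32(frame[:4])
    	return frame[4 : 4+n]
    }

    func main() {
    	frame := encodeFrame([]byte("ping"))
    	fmt.Printf("%q\n", decodeFrame(frame)) // "ping"
    }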

We compared the performance of similar projects (net, evio, gnet) through benchmarks, as shown below.

For more benchmarks, see Netpoll-Benchmark, KiteX-Benchmark and Hertz-Benchmark.

Environment

  • CPU: Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz, 4 cores
  • Memory: 8GB
  • OS: Debian 5.4.56.bsk.1-amd64 x86_64 GNU/Linux
  • Go: 1.15.4

Concurrent Performance (Echo 1KB)


Transport Performance (concurrent=100)


Benchmark Conclusion

Compared with net, Netpoll's latency is about 34% lower and its QPS about 110% higher (under continued load; net's latency becomes too high to serve as a reference).

Document

Comments
  • connection.Reader().Next(n) blocks when n exceeds the data currently buffered; after ReadTimeout the framework layer does not close the connection; and when connection.Reader() holds data the application has not consumed, OnRequest is triggered repeatedly

    server

    package main
    
    import (
    	"context"
    	"fmt"
    	"log"
    	"time"
    
    	"github.com/cloudwego/netpoll"
    )
    
    func main() {
    	network, address := "tcp", "127.0.0.1:8888"
    
    	// create the listener
    	listener, err := netpoll.CreateListener(network, address)
    	if err != nil {
    		panic("create netpoll listener fail")
    	}
    
    	// handle: read and process data from the connection
    	var onRequest netpoll.OnRequest = handler
    
    	// options: custom configuration for EventLoop initialization
    	var opts = []netpoll.Option{
    		netpoll.WithReadTimeout(5 * time.Second),
    		netpoll.WithIdleTimeout(10 * time.Minute),
    		netpoll.WithOnPrepare(nil),
    	}
    
    	// create the EventLoop
    	eventLoop, err := netpoll.NewEventLoop(onRequest, opts...)
    	if err != nil {
    		panic("create netpoll event-loop fail")
    	}
    
    	// run the server
    	err = eventLoop.Serve(listener)
    	if err != nil {
    		panic("netpoll server exit")
    	}
    }
    
    // read-event handler
    func handler(ctx context.Context, connection netpoll.Connection) error {
    	total := 2
    	t := time.Now()
    	reader := connection.Reader()
    	data, err := reader.Peek(reader.Len())
    	log.Printf("before Next, len: %v, data: %v", reader.Len(), string(data))
    	data, err = reader.Next(total)
    	if err != nil {
    		log.Printf("Next failed, total: %v, reader.Len: %v, block time: %v, error: %v", total, reader.Len(), int(time.Since(t)), err)
    		return err
    	}
    
    	log.Printf("after  Next, len: %v, data: %v, timeused: %v", len(data), string(data), int(time.Since(t).Seconds()))
    
    	n, err := connection.Write(data)
    	if err != nil {
    		return err
    	}
    	if n != len(data) {
    		return fmt.Errorf("write failed: %v < %v", n, len(data))
    	}
    
    	return nil
    }
    

    client

    package main
    
    import (
    	"log"
    	"net"
    	"time"
    )
    
    func main() {
    	conn, err := net.Dial("tcp", "127.0.0.1:8888")
    	if err != nil {
    		log.Fatal("dial failed:", err)
    	}
    
    	// For this test, one complete protocol packet is 2 bytes; the server reads it with connection.Reader().Next(2).
    	// The server-side read timeout is set to 5s.

    	// Group 1: sent within the timeout. The server reads a complete packet, but connection.Reader().Next(2) blocks for 2s.
    	conn.Write([]byte("a"))
    	time.Sleep(time.Second * 2)
    	conn.Write([]byte("a"))
    
    	time.Sleep(time.Second * 1)
    
    	// Group 2: the complete packet is split across writes that span the timeout. The server's connection.Reader().Next(2) blocks 5s (the timeout) and then errors, but the connection is not closed.
    	// During the 30s before the client sends the remaining data, the server triggers OnRequest repeatedly; each time, connection.Reader().Next(2) blocks 5s (the timeout) and errors, yet the connection stays open.
    	// After 30s the client sends the rest of the packet, and the server's connection.Reader().Next(2) finally reads a complete packet.
    	conn.Write([]byte("b"))
    	time.Sleep(time.Second * 30)
    	conn.Write([]byte("b"))
    
    	time.Sleep(time.Second * 1)
    
    	// Group 3: only half a packet is sent, and the client goes silent.
    	// The server triggers OnRequest repeatedly; each time, connection.Reader().Next(2) blocks 5s (the timeout) and errors, but the connection is not closed.
    	// In real deployments the server may never receive a TCP FIN (e.g. the client loses power), so it cannot release the connection promptly; a flood of such connections is a denial-of-service risk.
    	conn.Write([]byte("c"))
    
    	<-make(chan int)
    }
    

    logs

    go run ./netpoll.go 
    2021/07/18 07:25:22 before Next, len: 1, data: a
    2021/07/18 07:25:24 after  Next, len: 2, data: aa, timeused: 2
    2021/07/18 07:25:25 before Next, len: 1, data: b
    2021/07/18 07:25:30 Next failed, total: 2, reader.Len: 1, block time: 5005315692, error: connection read timeout 5s
    2021/07/18 07:25:30 before Next, len: 1, data: b
    2021/07/18 07:25:35 Next failed, total: 2, reader.Len: 1, block time: 5017124559, error: connection read timeout 5s
    2021/07/18 07:25:35 before Next, len: 1, data: b
    2021/07/18 07:25:40 Next failed, total: 2, reader.Len: 1, block time: 5009562038, error: connection read timeout 5s
    2021/07/18 07:25:40 before Next, len: 1, data: b
    2021/07/18 07:25:45 Next failed, total: 2, reader.Len: 1, block time: 5008370180, error: connection read timeout 5s
    2021/07/18 07:25:45 before Next, len: 1, data: b
    2021/07/18 07:25:50 Next failed, total: 2, reader.Len: 1, block time: 5011104792, error: connection read timeout 5s
    2021/07/18 07:25:50 before Next, len: 1, data: b
    2021/07/18 07:25:55 after  Next, len: 2, data: bb, timeused: 4
    2021/07/18 07:25:56 before Next, len: 1, data: c
    2021/07/18 07:26:01 Next failed, total: 2, reader.Len: 1, block time: 5009599769, error: connection read timeout 5s
    2021/07/18 07:26:01 before Next, len: 1, data: c
    2021/07/18 07:26:06 Next failed, total: 2, reader.Len: 1, block time: 5017649436, error: connection read timeout 5s
    2021/07/18 07:26:06 before Next, len: 1, data: c
    2021/07/18 07:26:11 Next failed, total: 2, reader.Len: 1, block time: 5015780369, error: connection read timeout 5s
    2021/07/18 07:26:11 before Next, len: 1, data: c
    2021/07/18 07:26:16 Next failed, total: 2, reader.Len: 1, block time: 5013565228, error: connection read timeout 5s
    2021/07/18 07:26:16 before Next, len: 1, data: c
    2021/07/18 07:26:21 Next failed, total: 2, reader.Len: 1, block time: 5004234323, error: connection read timeout 5s
    2021/07/18 07:26:21 before Next, len: 1, data: c
    2021/07/18 07:26:26 Next failed, total: 2, reader.Len: 1, block time: 5014860948, error: connection read timeout 5s
    2021/07/18 07:26:26 before Next, len: 1, data: c
    2021/07/18 07:26:31 Next failed, total: 2, reader.Len: 1, block time: 5009890510, error: connection read timeout 5s
    2021/07/18 07:26:31 before Next, len: 1, data: c
    2021/07/18 07:26:36 Next failed, total: 2, reader.Len: 1, block time: 5009386524, error: connection read timeout 5s
    2021/07/18 07:26:36 before Next, len: 1, data: c
    2021/07/18 07:26:41 Next failed, total: 2, reader.Len: 1, block time: 5009694923, error: connection read timeout 5s
    2021/07/18 07:26:41 before Next, len: 1, data: c
    2021/07/18 07:26:46 Next failed, total: 2, reader.Len: 1, block time: 5006999390, error: connection read timeout 5s
    2021/07/18 07:26:46 before Next, len: 1, data: c
    2021/07/18 07:26:51 Next failed, total: 2, reader.Len: 1, block time: 5016639111, error: connection read timeout 5s
    2021/07/18 07:26:51 before Next, len: 1, data: c
    2021/07/18 07:26:56 Next failed, total: 2, reader.Len: 1, block time: 5004699154, error: connection read timeout 5s
    2021/07/18 07:26:56 before Next, len: 1, data: c
    2021/07/18 07:27:01 Next failed, total: 2, reader.Len: 1, block time: 5003720648, error: connection read timeout 5s
    2021/07/18 07:27:01 before Next, len: 1, data: c
    2021/07/18 07:27:06 Next failed, total: 2, reader.Len: 1, block time: 5013684114, error: connection read timeout 5s
    2021/07/18 07:27:06 before Next, len: 1, data: c
    2021/07/18 07:27:11 Next failed, total: 2, reader.Len: 1, block time: 5008594864, error: connection read timeout 5s
    2021/07/18 07:27:11 before Next, len: 1, data: c
    2021/07/18 07:27:16 Next failed, total: 2, reader.Len: 1, block time: 5016949058, error: connection read timeout 5s
    2021/07/18 07:27:16 before Next, len: 1, data: c
    ### it keeps repeating like this
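
    As a side note, the timeout loop above can be avoided by returning early from OnRequest until a full packet has arrived, instead of calling the blocking Next. A sketch of such a handler (not an official recommendation; it relies on OnRequest firing again when new data arrives):

    func handler(ctx context.Context, connection netpoll.Connection) error {
    	reader := connection.Reader()
    	const total = 2 // full packet size in this test
    	if reader.Len() < total {
    		// Not a full packet yet: return without consuming,
    		// and wait to be triggered again on new input.
    		return nil
    	}
    	data, err := reader.Next(total) // will not block now
    	if err != nil {
    		return err
    	}
    	_, err = connection.Write(data)
    	return err
    }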
    
    question 
    opened by lesismal 12
  • Seems it can't work on Mac OSX

    I tried to write a simple test on Mac OSX (below), but it doesn't work. I use telnet 127.0.0.1 8080 and type hello world, but can't get the response.

    package main
    
    import (
    	"context"
    	"flag"
    	"fmt"
    	"time"
    
    	"github.com/cloudwego/netpoll"
    )
    
    var (
    	addr = flag.String("addr", "127.0.0.1:8080", "server address")
    )
    
    func onRequest(_ context.Context, conn netpoll.Connection) error {
    	println("hello world")
    
    	var reader, writer = conn.Reader(), conn.Writer()
    
    	defer reader.Release()
    	// client send "Hello World", size is 11
    	buf, err := reader.Next(11)
    	if err != nil {
    		fmt.Printf("error %v\n", err)
    		return err
    	}
    
    	alloc, _ := writer.Malloc(len(buf))
    	copy(alloc, buf)
    	err = writer.Flush()
    	if err != nil {
    		fmt.Printf("flush error %v\n", err)
    		return err
    	}
    
    	return nil
    }
    
    func onPrepare(conn netpoll.Connection) context.Context {
    	println("hello prepare")
    	return context.TODO()
    }
    
    func main() {
    	flag.Parse()
    	l, err := netpoll.CreateListener("tcp", *addr)
    	if err != nil {
    		panic("create netpoll listener failed")
    	}
    	defer l.Close()
    
    	println("hello event loop")
    
    	loop, err1 := netpoll.NewEventLoop(
    		onRequest,
    		netpoll.WithReadTimeout(time.Second),
    		netpoll.WithOnPrepare(onPrepare),
    	)
    	if err1 != nil {
    		panic("create event loop failed")
    	}
    
    	println("begin to serve")
    	loop.Serve(l)
    }
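
    For what it's worth, a variant of the handler above that echoes whatever bytes are buffered, rather than a fixed 11, also side-steps the extra \r\n that telnet appends to the typed line:

    func onEcho(_ context.Context, conn netpoll.Connection) error {
    	reader, writer := conn.Reader(), conn.Writer()
    	defer reader.Release()

    	// Echo everything currently buffered instead of a fixed size.
    	buf, err := reader.Next(reader.Len())
    	if err != nil {
    		return err
    	}
    	writer.WriteBinary(buf)
    	return writer.Flush()
    }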
    
    question 
    opened by siddontang 10
  • Problems running the plain server demo

    (screenshot) As shown above: after I listen on port 12345 and try to request that port to observe how netpoll handles the I/O event, the listener never starts and no error is reported. It did briefly start successfully before. The concrete code is below:

    import ( "context" "fmt" "github.com/cloudwego/netpoll" "time" )

    func main() {

    l, err := netpoll.CreateListener("tcp", ":12345")
    if err != nil {
    	panic("监听错误")
    }
    eventLoop, _ := netpoll.NewEventLoop(
    	//这个就是接收到io事件后处理的handler
    	func(ctx context.Context, connection netpoll.Connection) error {
    		fmt.Printf("收到请求了")
    		return nil
    	},
    	netpoll.WithReadTimeout(time.Second))
    fmt.Printf("准备开始监听请求")
    eventLoop.Serve(l)
    

    }

    My machine's configuration: (screenshot)

    bug duplicate fix 
    opened by xj524598 7
  • graceful shutdown failed

    When the connection is actively closed inside the EventLoop's OnRequest, the connection's processing state deadlocks, so the connection can never be shut down.

    OnRequest

    func (s *server) onHandle(_ context.Context, conn netpoll.Connection) error {
    	time.Sleep(time.Second * 3)
    	conn.Close()
    	return nil
    }
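
    A hedged workaround sketch, assuming the deadlock comes from Close re-entering processing state held by the running OnRequest: hand the close to another goroutine so OnRequest can return first.

    func (s *server) onHandle(_ context.Context, conn netpoll.Connection) error {
    	time.Sleep(time.Second * 3)
    	// Closing from a separate goroutine lets this OnRequest return
    	// and release its processing state before the close executes.
    	go conn.Close()
    	return nil
    }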
    

    connection_onevent.go:115 (screenshots)

    bug 
    opened by lazychanger 6
  • When running the example code, netpoll reports these undefined symbols; upgrading the Go version didn't help. What should I do? Thanks

    ....\go1.17.1\pkg\mod\github.com\cloudwego\netpoll@<version>\connection.go:59:18: undefined: OnRequest
    ....\go1.17.1\pkg\mod\github.com\cloudwego\netpoll@<version>\connection_impl.go:30:2: undefined: netFD
    ....\go1.17.1\pkg\mod\github.com\cloudwego\netpoll@<version>\connection_impl.go:40:19: undefined: barrier
    ....\go1.17.1\pkg\mod\github.com\cloudwego\netpoll@<version>\connection_impl.go:41:19: undefined: barrier
    ....\go1.17.1\pkg\mod\github.com\cloudwego\netpoll@<version>\connection_impl.go:252:32: undefined: Conn
    ....\go1.17.1\pkg\mod\github.com\cloudwego\netpoll@<version>\connection_impl.go:252:46: undefined: OnPrepare
    ....\go1.17.1\pkg\mod\github.com\cloudwego\netpoll@<version>\connection_impl.go:272:38: undefined: Conn
    ....\go1.17.1\pkg\mod\github.com\cloudwego\netpoll@<version>\connection_onevent.go:48:39: undefined: OnRequest
    ....\go1.17.1\pkg\mod\github.com\cloudwego\netpoll@<version>\connection_onevent.go:71:40: undefined: OnPrepare
    ....\go1.17.1\pkg\mod\github.com\cloudwego\netpoll@<version>\net_sock.go:41:116: undefined: netFD
    ....\go1.17.1\pkg\mod\github.com\cloudwego\netpoll@<version>\net_sock.go:41:116: too many errors

    question 
    opened by atlarge800 6
  • Why does the onRequest task re-acquire the lock?

    https://github.com/cloudwego/netpoll/blob/5607dcbce465c6c75e53fa346da356df96bfb38b/connection_onevent.go#L111 Wouldn't a single unlock when c.Reader().Len() <= 0 be enough, saving one lock/unlock round trip? What is the reason for re-locking inside the task?

    question 
    opened by heydoer 6
  • About the benchmark code, more parameters, and more metrics

    The benchmark code link is currently broken; please provide the complete test code soon.

    The benchmark charts in the current docs only cover 100 connections (payload unspecified) and a 1 KB payload (connection count unspecified); this tests too little and explains only a narrow range of situations.

    Please provide comparisons against other network libraries and the standard library across different concurrency levels, e.g. 1k, 10k, 100k, and 1000k connections, each with different payloads.

    Also, when I stress-tested an echo server written from the docs, netpoll's memory usage was very high, and memory usage is a key metric under massive concurrency. Please include memory usage and other metrics in the comparison as well.

    opened by lesismal 6
  • What does the "No Copy" in No Copy Buffer refer to?

    // readBinary cannot use mcache, because the memory allocated by readBinary will not be recycled.
    func (b *LinkBuffer) readBinary(n int) (p []byte) {
    	b.recalLen(-n) // re-cal length
    
    	// single node
    	p = make([]byte, n)
    	if b.isSingleNode(n) {
    		copy(p, b.read.Next(n))
    		return p
    	}
    	// multiple nodes
    	var pIdx int
    	var l int
    	for ack := n; ack > 0; ack = ack - l {
    		l = b.read.Len()
    		if l >= ack {
    			pIdx += copy(p[pIdx:], b.read.Next(ack))
    			break
    		} else if l > 0 {
    			pIdx += copy(p[pIdx:], b.read.Next(l))
    		}
    		b.read = b.read.next
    	}
    	_ = pIdx
    	return p
    }
    
    

    The Reader interface seems to return copies of the underlying array. Does "No Copy" refer to growing and shrinking the buffer without copying?
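
    For context, one reading of the nocopy claim (an interpretation, not authoritative): Next and Peek return slices that alias the LinkBuffer's own memory until Release, while ReadBinary deliberately copies so the caller owns a slice that may outlive the buffer. A sketch:

    package main

    import "github.com/cloudwego/netpoll"

    func use(b []byte) { _ = b }

    // demo contrasts the nocopy read path with the copying one.
    func demo(reader netpoll.Reader, n int) {
    	// Nocopy path: p aliases the LinkBuffer's memory and is only
    	// valid until Release recycles the underlying nodes.
    	p, _ := reader.Next(n)
    	use(p)
    	_ = reader.Release()

    	// Copying path: q is freshly allocated and caller-owned,
    	// so it may safely outlive the buffer (hence the copy).
    	q, _ := reader.ReadBinary(n)
    	go use(q)
    }

    func main() {}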

    opened by Ccheers 5
  • A WebSocket server built on netpoll fails to write the response frame when handling control opcodes

    Describe the bug: when the server receives a ping frame (opcode=9) it returns a pong frame (opcode=10; see https://tools.ietf.org/html/rfc6455#section-5.2), but in practice the client never receives the pong. For comparison, the response header is received with github.com/mailru/easygo/netpoll.

    To Reproduce

    // server
    import (
    	"github.com/gobwas/ws"
    	"github.com/gobwas/ws/wsutil"
    )

    func connect(ctx context.Context, conn netpoll.Connection) context.Context {
    	_, err := ws.Upgrade(conn)
    	if err != nil {
    		conn.Close()
    		return ctx
    	}
    	return ctx
    }

    func handle(ctx context.Context, conn netpoll.Connection) error {
    	header, err := ws.ReadHeader(conn)
    	if err != nil {
    		return err
    	}

    	if header.OpCode.IsControl() {
    		// return the matching control frame for this control code
    		if err := wsutil.ControlFrameHandler(conn, ws.StateServerSide)(header, conn); err != nil {
    			return err
    		}
    		return nil
    	}
    	// read
    	// write
    	return nil
    }

    // client
    	d := wsutil.DebugDialer{
    		Dialer: ws.Dialer{
    			Timeout: 1 * time.Second,
    		},
    	}
    	conn, _, _, err := d.Dial(ctx, url)

    	p := ws.NewPingFrame(nil)
    	p.Header.Masked = true
    	if err := ws.WriteFrame(conn, p); err != nil {
    		log.Printf("WriteFrame failed, err=%+v", err)
    	}

    	h, err := ws.ReadFrame(conn)
    	log.Printf("header=%+v, err=%+v", h, err)
    

    Expected behavior: the client expects to receive header={Header:{Fin:true Rsv:0 OpCode:10 Masked:false Mask:[0 0 0 0] Length:0} Payload:[]}, but nothing arrives.

    Go version 1.18; server OS: local WSL Ubuntu 22.04 and a k8s Alpine 3.16 test environment.

    Additional context: reference approach https://www.freecodecamp.org/news/million-websockets-and-go-cc58418460bb and reference example https://github.com/gobwas/ws-examples/tree/master/src/chat

    opened by gh73962 4
  • Is ReadBinary still a faster implementation of Next+copy in the single-node scenario?

    (code screenshots)

    It is easy to see that in the multiple-node scenario, Next involves an additional mcache or slice allocation. I wonder whether ReadBinary is still faster than Next+copy in the single-node scenario, where ReadBinary looks just like Next+copy.

    code analysis 
    opened by WineChord 4
  • Why does waitRead set waitReadSize to n?

    // waitRead will wait full n bytes.
    func (c *connection) waitRead(n int) (err error) {
    	if n <= c.inputBuffer.Len() {
    		return nil
    	}
    	atomic.StoreInt32(&c.waitReadSize, int32(n)) // why not set this to int32(n - c.inputBuffer.Len())?
    	defer atomic.StoreInt32(&c.waitReadSize, 0)
    	if c.readTimeout > 0 {
    		return c.waitReadWithTimeout(n)
    	}
    	// wait full n
    	for c.inputBuffer.Len() < n {
    		if c.IsActive() {
    			<-c.readTrigger
    			continue
    		}
    		// confirm that fd is still valid.
    		if atomic.LoadUint32(&c.netFD.closed) == 0 {
    			return c.fill(n)
    		}
    		return Exception(ErrConnClosed, "wait read")
    	}
    	return nil
    }
    
    // inputAck implements FDOperator.
    func (c *connection) inputAck(n int) (err error) {
    	if n < 0 {
    		n = 0
    	}
    	// Auto size bookSize.
    	if n == c.bookSize && c.bookSize < mallocMax {
    		c.bookSize <<= 1
    	}
    
    	length, _ := c.inputBuffer.bookAck(n)
    	if c.maxSize < length {
    		c.maxSize = length
    	}
    	if c.maxSize > mallocMax {
    		c.maxSize = mallocMax
    	}
    
    	var needTrigger = true
    	if length == n { // first start onRequest
    		needTrigger = c.onRequest()
    	}
    	if needTrigger && length >= int(atomic.LoadInt32(&c.waitReadSize)) {
    		c.triggerRead()
    	}
    	return nil
    }
    

    atomic.StoreInt32(&c.waitReadSize, int32(n)) // Why isn't this set to int32(n - c.inputBuffer.Len())? If what's needed is a large packet, or the remaining bytes are only slightly delayed, won't this block?

    question 
    opened by Ccheers 4
  • Are you using Netpoll?

    The purpose of this issue

    We are always interested in finding out who is using Netpoll, what attracted you to using it, how we can listen to your needs and if you are interested, help promote your organization.

    • We have people reaching out to us asking, who uses Netpoll in production?
    • We'd like to hear what you would like to see in Netpoll and learn about your scenarios
    • We'd like to help promote your organization and work with you

    What we would like from you

    Submit a comment in this issue to include the following information

    • Your organization or company
    • Link to your website
    • Your country
    • Your contact info to reach out to you: blog, email or Twitter (at least one).
    • What is your scenario for using Netpoll?
    • Are you running your application in Testing or Production?
    Organization/Company: ByteDance
    Website: https://bytedance.com
    Country: China
    Contact: [email protected]
    Usage scenario: Using Netpoll as a default net lib in Kitex & Hertz to build large scale Cloud Native applications
    Status: Production
    
    opened by GuangmingLuo 0
  • WIP: Implement the netpoll I/O poller using io_uring

    This issue is opened to track #151. Hold on, please ;)

    io_uring SDKs implemented in Go

    • https://github.com/godzie44/go-uring/
    • https://github.com/Iceber/iouring-go
    • https://github.com/hodgesds/iouring-go
    • https://github.com/dshulyak/uring
    • https://github.com/ii64/gouring

    io_uring forums

    • https://unixism.net/loti/index.html
    • https://lwn.net/Kernel/Index/#io_uring

    io_uring blogs

    • https://cor3ntin.github.io/posts/iouring/#fnref:8
    • https://zhuanlan.zhihu.com/p/380726590
    • https://www.jianshu.com/p/32a3c72da1c1
    • https://developers.mattermost.com/blog/hands-on-iouring-go/
    • https://arthurchiao.art/blog/intro-to-io-uring-zh/#2-io_uring
    • https://wjwh.eu/posts/2021-10-01-no-syscall-server-iouring.html
    • https://man.archlinux.org/man/io_uring.7.en

    other references

    • https://github.com/golang/go/tree/41d8e61a6b9d8f9db912626eb2bbc535e929fefc/src/runtime
    • https://elixir.bootlin.com/linux/v5.19.1/source/fs/io_uring.c
    • https://mp.weixin.qq.com/s/YPiYNPa3xVD9Il1HeB5pTw
    feature 
    opened by Jacob953 0
  • WIP: feat: support for Windows

    What type of PR is this?

    feat


    What this PR does / why we need it (en: English/zh: Chinese):


    en: This PR contains the adaptation changes made to netpoll for the GLCC competition task "providing Windows support for Kitex". The goal is to make netpoll support the Windows platform; it reflects the progress made as of the mid-term review.

    Changes made so far:

    1. The code no longer reports errors on the Windows platform (merged with previous modifications), but the functionality is not fully implemented yet.
    2. Added compatible cross-platform abstractions, such as an fdtype that hides the platform difference between int and handle types. The specific changes cover the handle-type abstraction, the iovec abstraction, and the read and write functions (only recv and send are supported on Windows).
    3. Implemented the sys_exec utilities on the Windows platform, including GetSysFdPairs, writev, readv, sendmsg, etc.
    4. Wrapped WSAPoll into an epoll-like interface similar to the Unix one, which the poll type in netpoll can call as usual.
    5. Adapted the unit tests for sys_exec and poll so they can run on Windows; all of the above changes pass the unit tests.

    What needs to be done next:

    1. Further logic review and problem checking
    2. Cross-platform support for net_listener (the current problem is that net.TCPListener cannot be converted to a raw socket)


    Which issue(s) this PR fixes:

    Fixes https://github.com/cloudwego/kitex/issues/469

    opened by lllbbbyyy 4
  • readv is called without multiple iovec buffers?

    // https://github.com/cloudwego/netpoll/blob/v0.2.6/connection_reactor.go#L74
    func (c *connection) inputs(vs [][]byte) (rs [][]byte) {
    	vs[0] = c.inputBuffer.book(c.bookSize, c.maxSize)
    	return vs[:1]
    }
    

    A question: before readv is called, the method above provides the memory blocks that will hold the data, but only a single block is supplied. What is the point of using readv here, then? Why not just call read?

    code analysis 
    opened by xing393939 1
Releases(v0.2.6)
  • v0.2.6(Aug 3, 2022)

    Refactor:

    • refactor: netpoll can be imported by other components under Windows and compiled without error (#178)

    Bugfix:

    • fix: cliconn check inputbuffer when close (#154)
    • fix: pollDesc.operator.OnWrite fatal address (#184)
  • v0.2.5(Jul 13, 2022)

    Bugfix:

    • fix: barrier memory leak (#158)
    • fix: pollDesc need unused op when detach (#169)
    • fix: flush wrap raw errno to exception (#168)
    • fix: op.detach will data race when calling onClose (#166)
    • fix: SetOnRequest need trigger OnRequest if there is already input data (#165)

    Chore:

    • test: add test case for LinkBuffer.resetTail (#161)

    Doc:

    • doc: public hertz in readme (#164)
  • v0.2.4(May 20, 2022)

  • v0.2.3(May 16, 2022)

  • v0.2.2(Apr 28, 2022)

    Improvement

    • impr: recycle caches when LinkBuffer is closed (#131)
    • impr: mcache bsr use math/bits.Len instead (#133)
    • impr: SetNumLoops no longer resets all pollers (#135)

    Bugfix

    • fix: check IsActive when flush (#118)
    • fix: zcReader.fill lost some data when read io.EOF (#122)
    • fix: send&close ignored by OnRequest (#128)

    Chore

    • doc: fix replace examples url (#119)
    • doc: restate the definition of Reader.Slice (#126)
    • doc: update guide.md (#129)

    Revert

    • revert: "feat: change default number of loops policy (#31)" (#130)
  • v0.2.1(Feb 24, 2022)

  • v0.2.0(Feb 22, 2022)

    Improvement

    • feat: on connect callback (#102)
    • feat: new conn api - Until (#91)
    • feat: support dialing without timeout (#105)

    Bugfix

    • fix: trigger close callback when only set the onConnect callback (#111)
    • fix: add max node size to prevent OOM (#76)
    • fix: FDOperator.reset() not reset op.OnWrite (#104)
    • fix: write panic when conn close (#96)

    Chore

    • docs: update README.md (#99) (#103)
    • fix: unit tests may fail (#106)
  • v0.1.2(Dec 13, 2021)

  • v0.1.1(Dec 9, 2021)

  • v0.1.0(Dec 1, 2021)

    Improvement

    • feature: add mux.ShardQueue to support connection multiplexing (#50)(#67)
    • optimize: input at a single LinkBuffer Node to improve performance (#75)
    • optimize: fix waitReadSize logic bug and enhance input trigger (#74)
    • optimize: reduce timeout issues when waitRead and inputAck have competition (#62)
    • optimize: unify and simplify conn locks (#45)

    Bugfix

    • fix: ensure EventLoop object will not be finalized before serve return (#54)

    Chore

    • doc: update readme (#85)
    • doc: update issue templates (#56)

    Breaking Change

    • remove Append and WriteBuffer returned parameter n (#49)(#83)
  • v0.0.4(Sep 16, 2021)

    Improvement

    • feat: support TCP_NODELAY by default (#39)
    • feat: read && write in a single loop (#38)
    • feat: return real error for nocopy rw (#34)
    • feat: change default number of loops policy (#31)
    • feat: redefine EventLoop.Serve arg: Listener -> net.Listener (#29)
    • feat: add API to DisableGopool (#28)
    • feat: rm reading lock (#19)
    • feat: blocking conn flush API (#9)

    Bugfix

    • fix: set leftover wait read size (#41)
  • v0.0.3(Jul 14, 2021)

  • v0.0.2(Jul 5, 2021)
