netpoll

Introduction

Netpoll is a high-performance, non-blocking I/O networking framework developed by ByteDance, focused on RPC scenarios.

RPC workloads are usually heavy on processing logic and therefore cannot handle I/O serially. But Go's standard library net provides a blocking I/O API, so an RPC framework built on it can only follow the one-connection-one-goroutine design, which wastes a lot of resources on context switching under high concurrency due to the large number of goroutines. Besides, net.Conn has no API to check whether a connection is alive, so it is difficult to build an efficient connection pool for an RPC framework: the pool may accumulate a large number of dead connections.

On the other hand, the open source community currently lacks Go networking libraries focused on RPC scenarios. Similar projects such as evio and gnet all focus on scenarios like Redis and HAProxy.

Netpoll was born to solve these problems. It draws inspiration from the designs of evio and netty, delivers excellent performance, and is better suited to microservice architectures. Netpoll also provides a number of features, and replacing net with it is recommended in certain RPC scenarios.

We developed the RPC framework KiteX and the HTTP framework Hertz (to be open sourced) based on Netpoll, both with industry-leading performance.

The examples show how to build an RPC client and server using Netpoll.

For more information, please refer to the documentation.

Features

  • Already

    • LinkBuffer provides nocopy API for streaming reading and writing
    • gopool provides high-performance goroutine pool
    • mcache provides efficient memory reuse
    • multisyscall supports batch system calls
    • IsActive supports checking whether the connection is alive
    • Dialer supports building clients
    • EventLoop supports building a server
    • TCP, Unix Domain Socket
    • Linux, Mac OS (operating system)
  • Future

    • io_uring
    • Shared Memory IPC
    • Serial scheduling I/O, suitable for pure computing
    • TLS
    • UDP
  • Unsupported

    • Windows (operating system)

Performance

A benchmark is not a numbers game; it should reflect the requirements of industrial use first. In the RPC scenario, concurrent calls and wait timeouts are must-have features.

Therefore, we require the benchmark to meet the following conditions:

  1. Support concurrent calls with a timeout (1s)
  2. Use a simple protocol: a 4-byte header indicating the total length of the payload

We compared the performance of similar repositories (net, evio, gnet) through benchmarks, as shown below.

For more benchmarks, refer to Netpoll-Benchmark, KiteX-Benchmark and Hertz-Benchmark.

Environment

  • CPU: Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz, 4 cores
  • Memory: 8GB
  • OS: Debian 5.4.56.bsk.1-amd64 x86_64 GNU/Linux
  • Go: 1.15.4

Concurrent Performance (Echo 1KB)


Transport Performance (concurrent=100)


Benchmark Conclusion

Compared with net, Netpoll reduces latency by about 34% and improves QPS by about 110% (under continued load; net's latency becomes too high to serve as a reference).

Document

Issues
  • echo test

    I wanted to run an echo test, but your complete example code can no longer be found, so I wrote an echo server myself by following the documentation. Please check whether there is anything wrong with it:

    package main
    
    import (
    	"context"
    	"fmt"
    	"time"
    
    	"github.com/cloudwego/netpoll"
    )
    
    func main() {
    	network, address := "tcp", "127.0.0.1:8888"
    
    	// Create the listener
    	listener, err := netpoll.CreateListener(network, address)
    	if err != nil {
    		panic("create netpoll listener fail")
    	}
    
    	// handler: connection read and processing logic
    	var onRequest netpoll.OnRequest = handler
    
    	// options: custom options for EventLoop initialization
    	var opts = []netpoll.Option{
    		netpoll.WithReadTimeout(1 * time.Second),
    		netpoll.WithIdleTimeout(10 * time.Minute),
    		netpoll.WithOnPrepare(nil),
    	}
    
    	// Create the EventLoop
    	eventLoop, err := netpoll.NewEventLoop(onRequest, opts...)
    	if err != nil {
    		panic("create netpoll event-loop fail")
    	}
    
    	// Run the server
    	err = eventLoop.Serve(listener)
    	if err != nil {
    		panic("netpoll server exit")
    	}
    }
    
    // Read-event handler
    func handler(ctx context.Context, connection netpoll.Connection) error {
    	reader := connection.Reader()
    	buf, err := reader.Peek(reader.Len())
    	if err != nil {
    		return err
    	}
    
    	// n, err := connection.Write(append([]byte{}, buf...))
    	// or
    	n, err := connection.Write(buf)
    
    	if err != nil {
    		return err
    	}
    	if n != len(buf) {
    		return fmt.Errorf("write failed: %v < %v", n, len(buf))
    	}
    
    	return nil
    	// return connection.Writer().Flush()
    }
    
    opened by lesismal 4
  • Broken link in README

    Hi, the link to KiteX is a 404.

    documentation 
    opened by tfzxyinhao 3
  • Performance and memory-usage issues

    The echo server code used for load testing is here: server

    The load-test client uses 100 connections and a 64-byte payload; the code is here: client

    Like the other libraries compared, it appends into a new buffer for writing. The other frameworks could write back the buffer they read directly, but they also write back via a newly appended buffer, so this should not affect the cross-framework comparison.

    I am not sure whether this echo server code is correct now. Testing with it at 100 connections and a 1 KB payload, throughput is only about half of gnet's, and even somewhat slower than easygo.

    If convenient, please provide your benchmark code so I can rerun it and check.

    invalid 
    opened by lesismal 3
  • block read?

    Looking at the connection implementation: if the peer has not yet sent n bytes when Next(n) is called, the read blocks. With a large number m of slow connections, this means m goroutines are required, otherwise the other connections suffer head-of-line blocking.

    https://github.com/cloudwego/netpoll/blob/main/connection_impl.go#L327

    If each connection needs its own goroutine, it may be no better than using the standard library directly.

    invalid 
    opened by lesismal 2
  • feat: add supported net protocol enums

    When I use netpoll to create a server event loop or dial a client, it is unfriendly to write net types by hand, such as "tcp" or "udp". I think an enum is needed.

    opened by wsyingang 1
  • feat: support linux(arm64)

    opened by Hchenn 0
  • Is it necessary to maintain a workerPool?

    If the goal is only to limit the number of goroutines, wouldn't it be enough to maintain a logical workerCnt and use a go func() closure directly? That would also save the overhead of a workerPool Get/Put per task.

    opened by halodou 0
  • docs: Some changes to the external links

    Fix links

    • Fix the links for gopool and mcache.

    Declare links

    • There are many links in the README, several of which are duplicates.
    • So the links can be declared at the bottom of the README.
    • Maybe this is a better way.
    opened by Dup4 0
  • feat: refactor outputting

    • Close the connection gracefully:
      • conn used by a client: closeCallback should wait for Flush && Outputs to finish.
      • conn used by a server: closeCallback should wait for onRequest to finish if it is processing.
    • Flush should do the syscall by itself; the poller only promises to wake up the writer trigger.
    opened by joway 0
  • About the benchmark code, and requests for more parameters and metrics

    The current link to the benchmark code is dead; please provide the complete test-case code soon.

    The benchmark charts in the current documentation only cover 100 connections (payload unspecified) and a 1 KB payload (connection count unspecified). The coverage is too narrow to be conclusive.

    It would help to compare against other networking libraries and the standard library under different concurrency levels, e.g. 1k, 10k, 100k and 1000k connections, each with different payloads.

    Also, while load testing an echo server written from the documentation, I found netpoll's memory usage to be very high, and memory usage is an important metric under massive concurrency. So please also provide comparisons that include memory usage and other metrics.

    opened by lesismal 2
  • connection.Reader().Next(n) blocks when n exceeds the currently buffered data; after ReadTimeout the framework does not close the connection; and while connection.Reader() holds data the application has not consumed, OnRequest keeps firing

    server

    package main
    
    import (
    	"context"
    	"fmt"
    	"log"
    	"time"
    
    	"github.com/cloudwego/netpoll"
    )
    
    func main() {
    	network, address := "tcp", "127.0.0.1:8888"
    
    	// Create the listener
    	listener, err := netpoll.CreateListener(network, address)
    	if err != nil {
    		panic("create netpoll listener fail")
    	}
    
    	// handler: connection read and processing logic
    	var onRequest netpoll.OnRequest = handler
    
    	// options: custom options for EventLoop initialization
    	var opts = []netpoll.Option{
    		netpoll.WithReadTimeout(5 * time.Second),
    		netpoll.WithIdleTimeout(10 * time.Minute),
    		netpoll.WithOnPrepare(nil),
    	}
    
    	// Create the EventLoop
    	eventLoop, err := netpoll.NewEventLoop(onRequest, opts...)
    	if err != nil {
    		panic("create netpoll event-loop fail")
    	}
    
    	// Run the server
    	err = eventLoop.Serve(listener)
    	if err != nil {
    		panic("netpoll server exit")
    	}
    }
    
    // Read-event handler
    func handler(ctx context.Context, connection netpoll.Connection) error {
    	total := 2
    	t := time.Now()
    	reader := connection.Reader()
    	data, err := reader.Peek(reader.Len())
    	log.Printf("before Next, len: %v, data: %v", reader.Len(), string(data))
    	data, err = reader.Next(total)
    	if err != nil {
    		log.Printf("Next failed, total: %v, reader.Len: %v, block time: %v, error: %v", total, reader.Len(), int(time.Since(t)), err)
    		return err
    	}
    
    	log.Printf("after  Next, len: %v, data: %v, timeused: %v", len(data), string(data), int(time.Since(t).Seconds()))
    
    	n, err := connection.Write(data)
    	if err != nil {
    		return err
    	}
    	if n != len(data) {
    		return fmt.Errorf("write failed: %v < %v", n, len(data))
    	}
    
    	return nil
    }
    

    client

    package main
    
    import (
    	"log"
    	"net"
    	"time"
    )
    
    func main() {
    	conn, err := net.Dial("tcp", "127.0.0.1:8888")
    	if err != nil {
    		log.Fatal("dial failed:", err)
    	}
    
    	// For testing, a complete protocol packet is defined as 2 bytes; the server reads it with connection.Reader.Next(2)
    	// The server's read timeout is set to 5s
    
    	// Group 1: sent within the timeout. The server reads the complete packet, but connection.Reader.Next(2) blocks for 2s.
    	conn.Write([]byte("a"))
    	time.Sleep(time.Second * 2)
    	conn.Write([]byte("a"))
    
    	time.Sleep(time.Second * 1)
    
    	// Group 2: the complete packet is sent in separate writes spanning more than the timeout. The server's connection.Reader.Next(2) blocks for 5s (the timeout) and then errors, but the connection is not closed.
    	// During the 30s before the client sends the remaining data, the server triggers OnRequest repeatedly, and in each OnRequest connection.Reader.Next(2) blocks for 5s (the timeout) before erroring, yet the connection stays open.
    	// After 30s the client sends the rest of the packet, and the server's connection.Reader.Next(2) reads the complete packet.
    	conn.Write([]byte("b"))
    	time.Sleep(time.Second * 30)
    	conn.Write([]byte("b"))
    
    	time.Sleep(time.Second * 1)
    
    	// Group 3: only half a packet is sent, and the client takes no further action.
    	// The server triggers OnRequest repeatedly; in each OnRequest connection.Reader.Next(2) blocks for 5s (the timeout) and then errors, but the connection is not closed.
    	// In practice the server may never receive a TCP FIN (e.g. the client device loses power), so it cannot release the connection promptly. A large number of such connections used as an attack risks making the service unavailable.
    	conn.Write([]byte("c"))
    
    	<-make(chan int)
    }
    

    Log

    go run ./netpoll.go 
    2021/07/18 07:25:22 before Next, len: 1, data: a
    2021/07/18 07:25:24 after  Next, len: 2, data: aa, timeused: 2
    2021/07/18 07:25:25 before Next, len: 1, data: b
    2021/07/18 07:25:30 Next failed, total: 2, reader.Len: 1, block time: 5005315692, error: connection read timeout 5s
    2021/07/18 07:25:30 before Next, len: 1, data: b
    2021/07/18 07:25:35 Next failed, total: 2, reader.Len: 1, block time: 5017124559, error: connection read timeout 5s
    2021/07/18 07:25:35 before Next, len: 1, data: b
    2021/07/18 07:25:40 Next failed, total: 2, reader.Len: 1, block time: 5009562038, error: connection read timeout 5s
    2021/07/18 07:25:40 before Next, len: 1, data: b
    2021/07/18 07:25:45 Next failed, total: 2, reader.Len: 1, block time: 5008370180, error: connection read timeout 5s
    2021/07/18 07:25:45 before Next, len: 1, data: b
    2021/07/18 07:25:50 Next failed, total: 2, reader.Len: 1, block time: 5011104792, error: connection read timeout 5s
    2021/07/18 07:25:50 before Next, len: 1, data: b
    2021/07/18 07:25:55 after  Next, len: 2, data: bb, timeused: 4
    2021/07/18 07:25:56 before Next, len: 1, data: c
    2021/07/18 07:26:01 Next failed, total: 2, reader.Len: 1, block time: 5009599769, error: connection read timeout 5s
    2021/07/18 07:26:01 before Next, len: 1, data: c
    2021/07/18 07:26:06 Next failed, total: 2, reader.Len: 1, block time: 5017649436, error: connection read timeout 5s
    2021/07/18 07:26:06 before Next, len: 1, data: c
    2021/07/18 07:26:11 Next failed, total: 2, reader.Len: 1, block time: 5015780369, error: connection read timeout 5s
    2021/07/18 07:26:11 before Next, len: 1, data: c
    2021/07/18 07:26:16 Next failed, total: 2, reader.Len: 1, block time: 5013565228, error: connection read timeout 5s
    2021/07/18 07:26:16 before Next, len: 1, data: c
    2021/07/18 07:26:21 Next failed, total: 2, reader.Len: 1, block time: 5004234323, error: connection read timeout 5s
    2021/07/18 07:26:21 before Next, len: 1, data: c
    2021/07/18 07:26:26 Next failed, total: 2, reader.Len: 1, block time: 5014860948, error: connection read timeout 5s
    2021/07/18 07:26:26 before Next, len: 1, data: c
    2021/07/18 07:26:31 Next failed, total: 2, reader.Len: 1, block time: 5009890510, error: connection read timeout 5s
    2021/07/18 07:26:31 before Next, len: 1, data: c
    2021/07/18 07:26:36 Next failed, total: 2, reader.Len: 1, block time: 5009386524, error: connection read timeout 5s
    2021/07/18 07:26:36 before Next, len: 1, data: c
    2021/07/18 07:26:41 Next failed, total: 2, reader.Len: 1, block time: 5009694923, error: connection read timeout 5s
    2021/07/18 07:26:41 before Next, len: 1, data: c
    2021/07/18 07:26:46 Next failed, total: 2, reader.Len: 1, block time: 5006999390, error: connection read timeout 5s
    2021/07/18 07:26:46 before Next, len: 1, data: c
    2021/07/18 07:26:51 Next failed, total: 2, reader.Len: 1, block time: 5016639111, error: connection read timeout 5s
    2021/07/18 07:26:51 before Next, len: 1, data: c
    2021/07/18 07:26:56 Next failed, total: 2, reader.Len: 1, block time: 5004699154, error: connection read timeout 5s
    2021/07/18 07:26:56 before Next, len: 1, data: c
    2021/07/18 07:27:01 Next failed, total: 2, reader.Len: 1, block time: 5003720648, error: connection read timeout 5s
    2021/07/18 07:27:01 before Next, len: 1, data: c
    2021/07/18 07:27:06 Next failed, total: 2, reader.Len: 1, block time: 5013684114, error: connection read timeout 5s
    2021/07/18 07:27:06 before Next, len: 1, data: c
    2021/07/18 07:27:11 Next failed, total: 2, reader.Len: 1, block time: 5008594864, error: connection read timeout 5s
    2021/07/18 07:27:11 before Next, len: 1, data: c
    2021/07/18 07:27:16 Next failed, total: 2, reader.Len: 1, block time: 5016949058, error: connection read timeout 5s
    2021/07/18 07:27:16 before Next, len: 1, data: c
    ### (keeps repeating like this)
    
    question 
    opened by lesismal 12
  • Broken link for multisyscall

    The link in README is not available https://github.com/cloudwego/multisyscall.

    help wanted 
    opened by jim3ma 3
  • 404 link

    On https://github.com/cloudwego/netpoll/blob/main/README_CN.md, under Features -> supported features, the gopool link is broken; it should probably be: https://github.com/bytedance/gopkg/tree/main/util/gopool

    documentation 
    opened by processzero 1
  • Connection setup is very slow under load; are the listener fd and accepted fds possibly assigned to the same epoll fd?

    Symptom: a load-test client opens 10000 connections, each sending data immediately after connecting. It takes a very long time for all 10000 connections to be established, or they never all complete; netstat shows a large number of connections stuck in SYN_SENT.

    netpoll has quite a few wrapper layers, and I haven't reviewed much of the code yet.

    A blind guess: could the listener's fd and the client fds end up in the same epoll loop? If so, when the client fds are busy, the listener's accept responsiveness degrades. My test client starts sending data as soon as each connection is established, so if the listener shares an epoll loop with the accepted client fds, that would explain the symptom.

    If that is the problem, it would be better to give the listener a dedicated epoll loop. Accept is not a high-frequency operation, so multiple listeners sharing one epoll loop is fine, but preferably not the same epoll loop as client fds.

    question 
    opened by lesismal 3
Releases(v0.0.3)
Owner
CloudWeGo