High-performance, non-blocking, event-driven, easy-to-use networking framework written in Go, supporting TLS, HTTP/1.x, and WebSocket.

Overview

NBIO - NON-BLOCKING IO

Mentioned in Awesome Go



Features

  • linux: epoll
  • macos(bsd): kqueue
  • windows: golang std net
  • nbio.Conn implements a non-blocking net.Conn (except on Windows)
  • writev supported
  • minimal dependencies
  • TLS supported
  • HTTP/HTTPS 1.x
  • WebSocket, passes the Autobahn Test Suite
  • HTTP 2.0

Installation

  1. Get and install nbio
$ go get -u github.com/lesismal/nbio
  2. Import in your code:
import "github.com/lesismal/nbio"

Quick Start

  • start a server
import "github.com/lesismal/nbio"

g := nbio.NewGopher(nbio.Config{
    Network: "tcp",
    Addrs:   []string{"localhost:8888"},
})

// echo
g.OnData(func(c *nbio.Conn, data []byte) {
    c.Write(append([]byte{}, data...))
})

err := g.Start()
if err != nil {
    panic(err)
}
// ...
  • start a client
import "github.com/lesismal/nbio"

g := nbio.NewGopher(nbio.Config{})

g.OnData(func(c *nbio.Conn, data []byte) {
    // ...
})

err := g.Start()
if err != nil {
    fmt.Printf("Start failed: %v\n", err)
}
defer g.Stop()

c, err := nbio.Dial("tcp", addr)
if err != nil {
    fmt.Printf("Dial failed: %v\n", err)
}
g.AddConn(c)

buf := make([]byte, 1024)
c.Write(buf)
// ...

API Examples

New Gopher For Server-Side

g := nbio.NewGopher(nbio.Config{
    Network: "tcp",
    Addrs:   []string{"localhost:8888"},
})

New Gopher For Client-Side

g := nbio.NewGopher(nbio.Config{})

Start Gopher

err := g.Start()
if err != nil {
    fmt.Printf("Start failed: %v\n", err)
}
defer g.Stop()

Custom Other Config For Gopher

conf := nbio.Config{
    // Name describes your gopher name for logging; it's set to "NB" by default.
    Name: "NB",

    // MaxLoad represents the max online num; it's set to 10k by default.
    MaxLoad: 1024 * 10,

    // NListener represents the listener goroutine num on *nix; it's set to 1 by default.
    NListener: 1,

    // NPoller represents the poller goroutine num; it's set to runtime.NumCPU() by default.
    NPoller: runtime.NumCPU(),

    // ReadBufferSize represents the buffer size for reading; it's set to 16k by default.
    ReadBufferSize: 1024 * 16,

    // MaxWriteBufferSize represents the max write buffer size per Conn; it's set to 1m by default.
    // If the connection's Send-Q is full and the data cached by nbio exceeds
    // MaxWriteBufferSize, the connection is closed by nbio.
    MaxWriteBufferSize: 1024 * 1024,

    // LockListener decides whether the listener's goroutine locks its OS thread; it's set to false by default.
    LockListener: false,

    // LockPoller decides whether each poller's goroutine locks its OS thread; it's set to false by default.
    LockPoller: false,
}

SetDeadline/SetReadDeadline/SetWriteDeadline

var c *nbio.Conn = ...
c.SetDeadline(time.Now().Add(time.Second * 10))
c.SetReadDeadline(time.Now().Add(time.Second * 10))
c.SetWriteDeadline(time.Now().Add(time.Second * 10))

Bind User Session With Conn

var c *nbio.Conn = ...
var session *YourSessionType = ... 
c.SetSession(session)
var c *nbio.Conn = ...
session := c.Session().(*YourSessionType)

Writev / Batch Write

var c *nbio.Conn = ...
var data [][]byte = ...
c.Writev(data)

Handle New Connection

g.OnOpen(func(c *Conn) {
    // ...
    c.SetReadDeadline(time.Now().Add(time.Second*30))
})

Handle Disconnected

g.OnClose(func(c *Conn) {
    // clear sessions from user layer
})

Handle Data

g.OnData(func(c *Conn, data []byte) {
    // decode data
    // ...
})

Handle Memory Allocation/Free For Reading

import "sync"

var memPool = sync.Pool{
    New: func() interface{} {
        return make([]byte, yourSize)
    },
}

g.OnReadBufferAlloc(func(c *Conn) []byte {
    return memPool.Get().([]byte)
})
g.OnReadBufferFree(func(c *Conn, b []byte) {
    memPool.Put(b)
})

Handle Memory Free For Writing

g.OnWriteBufferFree(func(c *Conn, b []byte) {
    // ...
})

Handle Conn Before Read

// BeforeRead registers a callback invoked before syscall.Read;
// the handler would be called only on windows
g.BeforeRead(func(c *Conn) {
    c.SetReadDeadline(time.Now().Add(time.Second*30))
})

Handle Conn After Read

// AfterRead registers a callback invoked after syscall.Read;
// the handler would be called only on *nix
g.AfterRead(func(c *Conn) {
    c.SetReadDeadline(time.Now().Add(time.Second*30))
})

Handle Conn Before Write

g.OnData(func(c *Conn, data []byte) {
    c.SetWriteDeadline(time.Now().Add(time.Second*5))
})

Std Net Examples

Echo Examples

TLS Examples

HTTP Examples

HTTPS Examples

HTTP 1M Connections Examples

Websocket Examples

Websocket TLS Examples

Websocket 1M Connections Examples

Bench Examples

Refer to the benchmark tests in the repository, or write your own test cases.

Use With Other STD Based Frameworks

Issues
  • Should I receive one OnClose for every connection created and closed?

    Hi. I am using nbio hard and it is performing very well. But here might be a small issue: I depend on OnClose for properly reverting things to the initial state. For example, I create a bunch of connections (say, 20000) using a synthetic test tool and then close all of them at once. In such cases I often end up with a small percentage of connections not receiving an OnClose event. The problem occurs especially when I just kill the client tool, ending all connections without properly closing them down. The vast majority of the connections still receives its OnClose event on the server side, but some (say, 279 out of 20000) do not. This could very well be an oddity in my app, but I can't find the issue.

    NOTE: I have print statements in OnClose. As a result it often takes a significant amount of time to go through all OnClose events. Could this be causing the issue?

    Same context: what do the different OnClose error messages mean: 1. no error, 2. "EOF", 3. "broken pipe", and 4. "connection timed out"? What can I derive from these error messages? Do any of them demand special (different) action on my part? Currently I am NOT closing any connection for which I receive an OnClose event (assuming they are closed already). Is this the correct handling?

    bug 
    opened by mehrvarz 38
  • How to read messages from *websocket.Connection?

    Hi,

    I am looking at this example https://github.com/lesismal/nbio-examples/blob/master/websocket_1m/server/server.go and I have few questions

    1. I see that in the 1m websocket server you are using mux, but in the readme I see this
    g := nbio.NewGopher(nbio.Config{
        Network: "tcp",
        Addrs:   []string{"localhost:8888"},
    })
    

    which one should I use for a large number of websocket connections, say 1m?

    2. Why doesn't the connection object have OnMessage? Why does only the upgrader have it? What is the way to read messages without allocating more buffers in the application, especially since my goal is to scale to 1m websockets?

    3. I want to send a stream of notifications over 1m socket connections, so what is the best way to approach that using this library? This is the only library I found that can use epoll as well as reuse buffers for a low memory footprint. Please suggest.

    opened by kant777 32
  • Concurrency issues with NewServerTLS()

    In some cases my server may want to disconnect an existing websocket connection. It is a case that does not show up in any of your examples. Imagine you would like to do the following here

    if qps==1000 {
        wsConn.Close()
    }
    
    

    If I do something like this, the connection will get closed - the client will receive an onClose event. But wsConn.Close() then hangs. Any idea what could be causing this?

    opened by mehrvarz 28
  • Keeping many idle clients alive

    In another thread you said:

    which would cost lots of goroutines when there are lots of connections, problems about mem/gc/schedule/stw comes with the huge number of goroutines.

    My current keep-alive implementation uses 1 goroutine per client to send PingMessages at exactly the right frequency. It uses case <-time.After():, which is simple (almost elegant) and works really well. But it creates a lot of goroutines.

    The alternative "1 goroutine for all clients" implementation that I can think of would be way more complex, and I am not certain it would really be so much more efficient. Any thoughts you can share on this?

    opened by mehrvarz 26
  • Auto-closing WS in 2min

    Hi.

    Just got an issue where the websocket automatically closes itself 2 min after the connection was established. The error message tells me: websocket conn is closed [::1]:33776 read timeout. So it looks like a net.Conn.SetReadDeadline(2min) call is made somewhere inside and never called again after a message is received.

    Maybe it's my responsibility to call this method after a message is received, but I guess it should be clearer whether I should call it or not. Moreover, my thought is simple: "If I didn't make the first call, why should I make the next ones?".

    The example is simple. Just open a connection and wait. The connection will be closed in 2 min.

    Example
    ```golang
    package main
    
    import (
    	"context"
    	"fmt"
    	"log"
    	"net/http"
    	"os"
    	"os/signal"
    	"sync"
    	"syscall"
    	"time"
    
    	"github.com/lesismal/nbio/nbhttp"
    	"github.com/lesismal/nbio/nbhttp/websocket"
    )
    
    func OnWebsocket(w http.ResponseWriter, r *http.Request) {
    	upgrader := websocket.NewUpgrader()
    	upgrader.CheckOrigin = func(_ *http.Request) bool {
    		return true
    	}
    	upgrader.OnMessage(func(c *websocket.Conn, messageType websocket.MessageType, data []byte) {
    		fmt.Println(time.Now().String(), "websocket:", messageType, string(data), len(data))
    		c.WriteMessage(messageType, data)
    	})
    	upgrader.OnOpen(func(c *websocket.Conn) {
    		fmt.Println(time.Now().String(), "websocket conn is open", c.RemoteAddr().String())
    	})
    	upgrader.OnClose(func(c *websocket.Conn, err error) {
    		fmt.Println(time.Now().String(), "websocket conn is closed", c.RemoteAddr().String(), err)
    	})
    
    	conn, err := upgrader.Upgrade(w, r, nil)
    	if err != nil {
    		panic(err)
    	}
    	wsConn := conn.(*websocket.Conn)
    	_ = wsConn
    }
    
    func main() {
    
    	ctx, cancelFunc := context.WithCancel(context.Background())
    	wg := new(sync.WaitGroup)
    
    	sig := make(chan os.Signal, 1)
    	defer close(sig)
    	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM, syscall.SIGHUP)
    
    	srv1 := nbhttp.NewServer(nbhttp.Config{
    		Addrs: []string{":13000"},
    		Handler: http.HandlerFunc(OnWebsocket),
    	})
    
    	if err := srv1.Start(); err != nil {
    		log.Fatalln("Failed to start server 1", err)
    	}
    
    	wg.Add(1)
    	go func() {
    		defer wg.Done()
    		<-ctx.Done()
    
    		ctx, cancelFunc := context.WithTimeout(context.Background(), 5 * time.Second)
    		defer cancelFunc()
    
    		if err := srv1.Shutdown(ctx); err != nil {
    			log.Println("Failed to shutdown server 1", err)
    		}
    	}()
    
    	<-sig
    	cancelFunc()
    
    	wg.Wait()
    }
    ```
    
    opened by qioalice 25
  • can not reconnect on the tls websocket example

    when I have the server running in another shell

    $ go run .
    2021/05/13 17:11:43 connecting to wss://localhost:8888/wss
    2021/05/13 17:11:43 write: hello world
    2021/05/13 17:11:43 read : hello world
    2021/05/13 17:11:44 write: hello world
    2021/05/13 17:11:44 read : hello world
    2021/05/13 17:11:45 write: hello world
    2021/05/13 17:11:45 read : hello world
    2021/05/13 17:11:46 write: hello world
    2021/05/13 17:11:46 read : hello world
    ^Csignal: interrupt
    $ go run .
    2021/05/13 17:11:49 connecting to wss://localhost:8888/wss
    2021/05/13 17:11:49 dial:EOF
    exit status 1
    $ go run .
    2021/05/13 17:11:51 connecting to wss://localhost:8888/wss
    2021/05/13 17:11:51 dial:EOF
    exit status 1
    

    the server is still running in the other shell

    I am on a mac if that helps

    opened by acgreek 21
  • after upgrader.Upgrade(w, r, nil), if the client sends a message right away and the wsConn.OnMessage handler is set with a delay, the message will be missed; the handler should be set during upgrader.Upgrade

    ```golang
    func onWebsocket(w http.ResponseWriter, r *http.Request) {
    	upgrader := websocket.NewUpgrader()
    	upgrader.EnableCompression = true
    	conn, err := upgrader.Upgrade(w, r, nil)
    	if err != nil {
    		panic(err)
    	}
    	wsConn := conn.(*websocket.Conn)
    	wsConn.EnableWriteCompression(true)
    	wsConn.SetReadDeadline(time.Time{})
    	wsConn.OnMessage(func(c *websocket.Conn, messageType websocket.MessageType, data []byte) {
    		// echo
    		if *print {
    			switch messageType {
    			case websocket.TextMessage:
    				fmt.Println("OnMessage:", messageType, string(data), len(data))
    			case websocket.BinaryMessage:
    				fmt.Println("OnMessage:", messageType, data, len(data))
    			}
    		}
    		c.WriteMessage(messageType, data)
    	})
    	wsConn.OnClose(func(c *websocket.Conn, err error) {
    		if *print {
    			fmt.Println("OnClose:", c.RemoteAddr().String(), err)
    		}
    	})
    	if *print {
    		fmt.Println("OnOpen:", wsConn.RemoteAddr().String())
    	}
    }
    ```

    bug 
    opened by eric-zhangqi 20
  • support tls and non-tls servers and clients with the same engine/mux

    not sure if all of these together are supported at the same time or if it's better to create multiple servers/engines.

    currently my server is using nbio for the tls connection, the stock go http server for the non-tls connection, and gorilla for the outbound websocket connections.

    I'm wondering if it is a good idea/possible to use a single common nbio engine for all these scenarios and share the mux for the tls and non-tls servers.

    enhancement 
    opened by acgreek 19
  • autobahn tests failures

    seeing a lot of crashes on the autobahn test suite

    [nbio-autobahn]: Running test case ID 9.4.4 for agent non-tls from peer tcp4:127.0.0.1:9998
    [nbio-server]:   2021/08/10 12:37:47.210 [ERR] taskpool call failed: runtime error: invalid memory address or nil pointer dereference
    [nbio-server]:   goroutine 7 [running]:
    [nbio-server]:   github.com/lesismal/nbio/taskpool.call.func1()
    [nbio-server]:          /go/src/github.com/lesismal/nbio/taskpool/caller.go:19 +0xc7
    [nbio-server]:   panic(0x6bfaa0, 0x8ec800)
    [nbio-server]:          /usr/local/go/src/runtime/panic.go:965 +0x1b9
    [nbio-server]:   github.com/lesismal/nbio/nbhttp/websocket.(*Conn).writeFrame(0x0, 0x102, 0xc004243000, 0x1000, 0x1000, 0x0, 0x766d60, 0xc00004ace0)
    [nbio-server]:          /go/src/github.com/lesismal/nbio/nbhttp/websocket/conn.go:318 +0x192
    [nbio-server]:   github.com/lesismal/nbio/nbhttp/websocket.(*Conn).WriteFrame(...)
    [nbio-server]:          /go/src/github.com/lesismal/nbio/nbhttp/websocket/conn.go:274
    [nbio-server]:   github.com/lesismal/nbio/examples/websocket/server_autobahn.newUpgrader.func1(0x0, 0x460002, 0xc004243000, 0x1000, 0x1000)
    [nbio-server]:          /go/src/github.com/lesismal/nbio/examples/websocket/server_autobahn/server.go:23 +0x88
    [nbio-server]:   github.com/lesismal/nbio/nbhttp/websocket.newConn.func6(0x0, 0x50002, 0xc004243000, 0x1000, 0x1000)
    [nbio-server]:          /go/src/github.com/lesismal/nbio/nbhttp/websocket/conn.go:409 +0x6d
    [nbio-server]:   github.com/lesismal/nbio/nbhttp/websocket.(*Upgrader).handleDataFrame.func1()
    [nbio-server]:          /go/src/github.com/lesismal/nbio/nbhttp/websocket/upgrader.go:408 +0x6d
    [nbio-server]:   github.com/lesismal/nbio/nbhttp.(*Parser).Execute.func1()
    [nbio-server]:          /go/src/github.com/lesismal/nbio/nbhttp/parser.go:120 +0x62
    [nbio-server]:   github.com/lesismal/nbio/taskpool.call(0xc003082048)
    [nbio-server]:          /go/src/github.com/lesismal/nbio/taskpool/caller.go:23 +0x5d
    [nbio-server]:   github.com/lesismal/nbio/taskpool.(*fixedRunner).taskLoop(0xc000056300)
    [nbio-server]:          /go/src/github.com/lesismal/nbio/taskpool/fixedpool.go:41 +0x16d
    [nbio-server]:   created by github.com/lesismal/nbio/taskpool.NewFixedPool
    [nbio-server]:          /go/src/github.com/lesismal/nbio/taskpool/fixedpool.go:125 +0x167
    [nbio-server]:   
    [nbio-server]:   
    
    bug 
    opened by acgreek 16
  • auto expand the monitor events

    awesome project! Have you thought about a linear allocation algorithm to manage the memory pool?

    And I want to implement a gateway with this project; do you have any suggestions?

    opened by sunvim 14
  • memory usage explosion with provided examples

    I used your client to stream 1.5 MB messages at high throughput and I am seeing high memory usage. I'm going to look into optimizing for this scenario, but was wondering if you have any thoughts on how I could go about doing this.

    I guess I could also go about creating a simple example, I'll do that first

    documentation 
    opened by acgreek 13
Releases: v1.2.20
Owner: lesismal (less is more.)