A zero-cost, fast, multi-language, bidirectional microservices framework in Go, similar to Alibaba Dubbo but with more features and easy scaling. Try it. Test it. If you find it better, use it! Java has Dubbo; Go has rpcx!

Overview

Official site: http://rpcx.io


Notice: etcd

The etcd plugin has been moved to rpcx-etcd.

Announce

A tcpdump-like tool has been added: rpcxdump. You can use it to debug communication between rpcx services and clients.

Cross-Languages

You can use programming languages other than Go to access rpcx services.

  • rpcx-gateway: You can write clients in any programming language to call rpcx services via rpcx-gateway
  • http invoke: you can use plain HTTP requests to access the rpcx gateway
  • Java Services/Clients: You can use rpcx-java to implement/access rpcx services via the raw protocol.

If you can write Go methods, you can also write rpc services. It is so easy to write rpc applications with rpcx.

Installation

Install the basic features:

go get -v github.com/smallnest/rpcx/...

If you want to use the quic or kcp transport, pass the corresponding build tags to go get, go build or go run. For example, to enable all features:

go get -v -tags "quic kcp" github.com/smallnest/rpcx/...

tags:

  • quic: support quic transport
  • kcp: support kcp transport
  • ping: support network quality load balancing
  • utp: support utp transport

Which companies are using rpcx?

Features

rpcx is an RPC framework like Alibaba Dubbo and Weibo Motan.

rpcx 3.0 has been refactored with these goals:

  1. Simple: easy to learn, easy to develop, easy to integrate and easy to deploy
  2. Performance: high performance (>= grpc-go)
  3. Cross-platform: supports raw slices of bytes, JSON, Protobuf and MessagePack. Theoretically it can be used with Java, PHP, Python, C/C++, Node.js, C# and other platforms
  4. Service discovery and service governance: supports zookeeper, etcd and consul.

It includes the following features:

  • Support raw Go functions. There's no need to define proto files.
  • Pluggable. Features can be extended such as service discovery, tracing.
  • Support TCP, HTTP, QUIC and KCP
  • Support multiple codecs such as JSON, Protobuf, MessagePack and raw bytes.
  • Service discovery. Support peer2peer, configured peers, zookeeper, etcd, consul and mDNS.
  • Fault tolerance: Failover, Failfast and Failtry.
  • Load balancing: Random, RoundRobin, consistent hashing, weighted, network quality and geography.
  • Support Compression.
  • Support passing metadata.
  • Support Authorization.
  • Support heartbeat and one-way request.
  • Other features: metrics, log, timeout, alias, circuit breaker.
  • Support bidirectional communication.
  • Support access via HTTP so you can write clients in any programming languages.
  • Support API gateway.
  • Support backup request, forking and broadcast.

rpcx uses a binary, platform-independent protocol, which means you can develop services in other languages such as Java, Python and Node.js, and you can use other programming languages to invoke services developed in Go.

There is a UI manager: rpcx-ui.

Performance

Test results show that rpcx has better performance than other RPC frameworks, except the standard library's rpc package.

The benchmark code is at rpcx-benchmark.

Listen to others, but test it yourself.

Test Environment

  • CPU: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 32 cores
  • Memory: 32G
  • Go: 1.9.0
  • OS: CentOS 7 / 3.10.0-229.el7.x86_64

Use

  • protobuf
  • the client and the server run on the same machine
  • 581 bytes payload
  • 500/2000/5000 concurrent clients
  • mock processing time: 0ms, 10ms and 30ms

Test Result

mock 0ms process time

(charts: throughput, mean latency, P99 latency)

mock 10ms process time

(charts: throughput, mean latency, P99 latency)

mock 30ms process time

(charts: throughput, mean latency, P99 latency)

Examples

You can find all examples at rpcxio/rpcx-examples.

Below is a simple example.

Server

    // define example.Arith
    // ……

    s := server.NewServer()
    s.RegisterName("Arith", new(example.Arith), "")
    s.Serve("tcp", addr)

Client

    // prepare requests
    // ……

    d := client.NewPeer2PeerDiscovery("tcp@"+addr, "")
    xclient := client.NewXClient("Arith", client.Failtry, client.RandomSelect, d, client.DefaultOption)
    defer xclient.Close()
    err := xclient.Call(context.Background(), "Mul", args, reply, nil)

Contribute

See contributors.

Welcome to contribute:

  • submit issues or requirements
  • send PRs
  • write projects to use rpcx
  • write tutorials or articles to introduce rpcx

License

Apache License, Version 2.0

Issues
  • Help! The gateway exposed over HTTP hangs?

    We expose a group of services through rpcx-gateway, but under load testing it hangs. (image)

    Please help analyze the cause. It appears that any call through XClient or XClientPool over HTTP triggers this, while bypassing the gateway and load-testing the service directly over HTTP does not.

    opened by jiangrzh 16
  • Is there a limit on the number of API methods in one struct?

    rpcx: failed to handle request: internal error: runtime error: invalid memory address or nil pointer dereference (image)

    WaitingForInfo 
    opened by normandie199 16
  • Found a memory-overwrite bug

    When calling the Go interface concurrently, I found a memory-overwrite bug. Scenario: function A issues calls with Go, and function B receives the results from a channel.

    var tempChan = make(chan *client.Call, 10)

    func A() {
        for {
            oneClient.Go(context.Background(), "A", "B", a, r, tempChan)
        }
    }

    func B() {
        for {
            temp := <-tempChan
            _ = temp
        }
    }
    
    type Call struct {
    	ServicePath   string            // The name of the service and method to call.
    	ServiceMethod string            // The name of the service and method to call.
    	Metadata      map[string]string //metadata
    	ResMetadata   map[string]string
    	Args          interface{} // The argument to the function (*struct).
    	Reply         interface{} // The reply from the function (*struct).
    	Error         error       // After completion, the error status.
    	Done          chan *Call  // Strobes when call is complete.
    	Raw           bool        // raw message or not
    }
    

    If function B does not receive messages promptly and the client's input loop receives several messages in a row, the content of the second message overwrites that of the first.

    Root cause: the strings in Call reference the message buffer's bytes directly via SliceByteToString. In func (m *Message) Decode(r io.Reader), when the existing data slice is reused instead of allocating a new one with make, the second decode overwrites the earlier contents:

    if cap(m.data) >= totalL { // reuse data
        m.data = m.data[:totalL]
    } else {
        m.data = make([]byte, totalL)
    }

    4.0 
    opened by ldseraph 15
  • Can XClient.Call take args and reply directly as []byte?

    In my rpcx deployment the data flows client -> agent -> rpc service (protobuf). The agent receives the client request and parses out the target service name, method name and the marshaled args ([]byte). To forward the call, the agent has to unmarshal that []byte back into an args object before calling XClient.Call: Call(ctx context.Context, serviceMethod string, args interface{}, reply interface{}) error. The problem with this approach: the agent must reflectively unmarshal the args object from the service and method names, and every time a new rpc service is added the agent has to be updated too, which is painful.

    My question: could XClient gain an API like Call, but with args and reply typed as []byte?

    BCall(ctx context.Context, serviceMethod string, args []byte, reply []byte, marshalType int32) error

    The rpc client would then pass binary data straight through to the server, and the server-side framework would restore args into the struct the controller needs according to marshalType before running the business logic…

    Advice appreciated!

    opened by JungleYe 14
  • add traceid plugin

    In a call chain client -> server1 -> server2 -> server3, server1 would get a unique traceId that downstream servers can also retrieve, which helps with troubleshooting and performance analysis.

    feature 5.0 
    opened by goodjava 10
  • The client stalls after about 10000 requests

    https://github.com/smallnest/rpcx/issues/5

    I don't know how to solve it; I am using rpcx exactly as the example shows:

    d := client.NewPeer2PeerDiscovery("tcp@"+serverAddress+":"+cast.ToString(port), "")

    oneClient := client.NewOneClient(client.Failtry, client.RandomSelect, d, client.DefaultOption)
    
    

    Only a single client is used globally.

    I saw you mention a connection pool, but I did not find an example of it.

    My server is started like this:

    server := server.NewServer()
    this.server = server
    go server.Serve("tcp", fmt.Sprintf("%s:%d", this.serverAddress, this.port))
    
    

    It seems Serve must be launched in a goroutine to run asynchronously…

    Please take a look. How should I handle this?

    Wontfix 
    opened by ansjsun 10
  • The watch method in xxx_discovery.go can cause 100% CPU usage

    With other service modules shut down, the watch method keeps printing "chan is closed and will rewatch"; the for loop never blocks and spins too fast. On a machine running several services this can pin the CPU at 100% and hang everything. It hung for me once today.

    5.0 NeedsInvestigation 
    opened by jkingben 10
  • rpcx SendRaw response error

    env: go 1.12.1; rpcx: github.com/smallnest/rpcx v0.0.0-20191008054500-6a4c1b1de0fa. Using client.SendRaw with SerializeType protobuf. The response PB schema:

    import "google/protobuf/any.proto";

    // response
    message Resp {
      int32 code = 1;      // return code
      uint64 req_time = 2; // request time
      uint64 time = 3;     // current server time
      string msg = 4;
      google.protobuf.Any data = 5;
    }

    Usage: go func() { client.SendRaw(...) }() from two goroutines (funcA and funcB), requesting the same servicePath and serviceMethod concurrently; occasionally the SendRaw response payload is wrong (memory appears to be overwritten). When input in client/client.go receives the data:

    if call.Raw {
        call.Metadata, call.Reply, _ = convertRes2Raw(res)
        log.Info("call payload: ", call.Reply, "args=", call.Args)
    }

    Here call.Reply is correct. Then call.done() sends the call object to its internal channel, and after receiving from the channel:

    select {
    // .....
    case call := <-done:
        err = call.Error
        m = call.Metadata
        if call.Reply != nil {
            // here call.Reply (the marshaled protobuf) occasionally differs
            // from what client.input received above
            log.Info("chan payload: ", call.Reply, "args=", call.Args)
            payload = call.Reply.([]byte)
        }
    }

    Any guidance appreciated.

    bug 5.0 
    opened by bbqbyte 9
  • Errors when using rpcx with etcdv3

    https://github.com/rpcxio/rpcx-examples/tree/master/registry/etcdv3

    Running this example produces errors. etcd: Version 3.4.9, Git SHA: Not provided (use ./build instead of go build), Go Version go1.14.3, darwin/amd64.

    Error 1, from etcd (repeated many times):

    WARNING: 2020/07/14 15:41:25 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"

    Client-side errors (repeated many times):

    2020/07/14 15:41:25 etcdv3_discovery.go:244: WARN : chan is closed and will rewatch

    Error 3, on the server after it connects, is shut down and started again:

    2020/07/14 15:42:56 etcdv3.go:66: ERROR: cannot create etcd path /rpcx_test: rpc error: code = Canceled desc = grpc: the client connection is closing
    2020/07/14 15:42:56 rpc error: code = Canceled desc = grpc: the client connection is closing
    exit status 1

    How do I fix this?

    opened by 582727501 9
  • Is this the way to access a service over HTTP?

    func ArithClientHttp() {
    	args := &Args{
    		A: 10,
    		B: 20,
    	}

    	argsBytes, err := json.Marshal(args)
    	if err != nil {
    		fmt.Printf("Marshal args error:%v\n", err)
    		return
    	}

    	req, err := http.NewRequest("POST", "http://127.0.0.1:9528", bytes.NewReader(argsBytes))
    	if err != nil {
    		fmt.Printf("http.NewRequest error:%v\n", err)
    		return
    	}

    	// request headers that must be set
    	h := req.Header
    	h.Set(client.XMessageID, "10000")
    	h.Set(client.XMessageType, "0")
    	h.Set(client.XSerializeType, "2")
    	h.Set(client.XServicePath, "Arith")
    	h.Set(client.XServiceMethod, "Mul")

    	// send the HTTP request
    	// HTTP request ===> rpcx request ===> rpcx service ===> rpcx result ===> HTTP response ===> client
    	res, err := http.DefaultClient.Do(req)
    	if err != nil {
    		fmt.Printf("failed to call server, err:%v\n", err)
    		return
    	}
    	defer res.Body.Close()

    	fmt.Printf("res:%v\n", res)
    	// read the result
    	replyData, err := ioutil.ReadAll(res.Body)
    	if err != nil {
    		fmt.Printf("failed to read response, err:%v\n", err)
    		return
    	}

    	fmt.Printf("replyData:%v\n", replyData)

    	// unmarshal
    	reply := &Reply{}
    	err = json.Unmarshal(replyData, reply)
    	if err != nil {
    		fmt.Printf("Unmarshal error:%v\n", err)
    		return
    	}

    	fmt.Printf("ArithClientHttp reply:%+v\n", reply)
    }

    The request fails with: rpcx: failed to handle gateway request: *part1.Args is not a proto.Unmarshaler

    opened by lihuicms-code-rep 9
  • Richer error handling

    Could errors carry more information, with a code and msg as in gRPC? When handling errors it is often hard to act on msg alone; some scenarios need a code.

    enhancement 
    opened by hanyue2020 0
  • support time.Time?

    If I add a time.Time field to the struct, the server cannot handle it.

    type T struct {
      Start time.Time
    }
    
    rpcx: failed to handle request: msgpack: invalid code=b3 decoding ext len (C:/.../[email protected]/log/logger.go:64 github.com/smallnest/rpcx/log.Warnf)
    
    opened by FlyingOnion 2
  • Any plans for application-level service discovery and registration?

    Dubbo 3.0 has switched to application-level discovery, which does seem better than interface-level.

    Most services are not so fine-grained that they expose only a few interfaces, so every time a service goes up or down, a whole batch of interfaces is registered or withdrawn.

    Also, some systems shrink their service granularity gradually as they adopt microservices; the initial granularity can be quite coarse.

    feature 
    opened by fengqi 2
  • Is there a guide for deploying in k8s?

    Any help?

    opened by hunterhug 0
  • A native C# rpcx client based on DotNetty

    opened by deeyi2000 0
  • Enabling ReqRateLimitingPlugin/RateLimitingPlugin may close the connection

    func (s *Server) serveConn(conn net.Conn) {
    	for {
    		// ...
    		req, err := s.readRequest(ctx, r)
    		if err != nil {
    			if err == io.EOF {
    				log.Infof("client has closed this connection: %s", conn.RemoteAddr().String())
    			} else if strings.Contains(err.Error(), "use of closed network connection") {
    				log.Infof("rpcx: connection %s is closed", conn.RemoteAddr().String())
    			} else {
    				log.Warnf("rpcx: failed to read request: %v", err)
    			}
    			return
    		}
    		// ...
    	}
    	// ...
    }
    

    When readRequest returns an error, serveConn breaks out of the for loop and returns, which runs a deferred function that closes the connection:

    s.mu.Lock()
    delete(s.activeConn, conn)
    s.mu.Unlock()
    conn.Close()
    s.Plugins.DoPostConnClose(conn)
    
    bug 7.0 
    opened by westfly 1
  • Graceful restart leaves two processes

    After a restart there are two processes:

    2021/06/16 15:01:05 server.go:184: INFO : server pid:28805
    2021/06/16 15:01:13 server.go:883: INFO : restart a new rpcx server: 28806
    2021/06/16 15:01:13 server.go:883: INFO : restart a new rpcx server: 28807
    2021/06/16 15:01:13 server.go:184: INFO : server pid:28807
    2021/06/16 15:01:13 server.go:184: INFO : server pid:28806
    2021/06/16 15:01:16 server.go:825: INFO : shutdown begin

    501 28806 1 0 3:01PM ttys002 0:00.02 ./server
    501 28807 1 0 3:01PM ttys002 0:00.02 ./server

    opened by dxxjing 0
  • Configuring a circuit breaker seems to have no effect?

    I defined a custom circuit breaker and registered it on the client like this:

    rpcxOption.GenBreaker = func() client.Breaker { return NewClockDriftCircuitBreaker(time.Minute) }

    Logging shows that Call(fn func() error, d time.Duration) is never invoked.

    opened by LeechanX 0
  • When registering a struct, allow specifying which methods to register

    Here is the situation: I plan to use a unified interface format across HTTP handlers, JSON-RPC and rpcx, so I can focus on writing the service layer. For HTTP handlers and JSON-RPC I can already control which methods are exported to each protocol, but rpcx would need interface support for this.

    opened by dengzhaofun 3
  • support grafana tempo for distributed tracing

    support grafana tempo for distributed tracing

    https://grafana.com/oss/tempo/

    opened by smallnest 0