A zero-cost, fast, multi-language, bidirectional microservices framework in Go, similar to Alibaba Dubbo but with more features and easy scaling. Try it. Test it. If you feel it's better, use it! Java has Dubbo; Go has rpcx!

Overview

Official site: http://rpcx.io


Notice: etcd

etcd plugin has been moved to rpcx-etcd

Announcements

A tcpdump-like tool has been added: rpcxdump. You can use it to debug communication between rpcx services and clients.

Cross-Languages

You can use programming languages other than Go to access rpcx services.

  • rpcx-gateway: you can write clients in any programming language and call rpcx services via rpcx-gateway
  • HTTP invoke: you can use plain HTTP requests to access the rpcx gateway
  • Java services/clients: you can use rpcx-java to implement or access rpcx services via the raw protocol

If you can write Go methods, you can write rpc services. Writing rpc applications with rpcx is that easy.

Installation

Install the basic features:

go get -v github.com/smallnest/rpcx/...

If you want to use the quic or kcp transports, pass these build tags to go get, go build or go run. For example, to enable both:

go get -v -tags "quic kcp" github.com/smallnest/rpcx/...

tags:

  • quic: support quic transport
  • kcp: support kcp transport
  • ping: support network quality load balancing
  • utp: support utp transport

Which companies are using rpcx?

Features

rpcx is an RPC framework like Alibaba Dubbo and Weibo Motan.

rpcx 3.0 has been refactored with these goals:

  1. Simple: easy to learn, easy to develop, easy to integrate and easy to deploy
  2. Performance: high performance (>= grpc-go)
  3. Cross-platform: supports raw slices of bytes, JSON, Protobuf and MessagePack. Theoretically it can be used with Java, PHP, Python, C/C++, Node.js, C# and other platforms
  4. Service discovery and service governance: supports ZooKeeper, etcd and consul.

It contains the following features:

  • Support raw Go functions. There's no need to define proto files.
  • Pluggable. Features can be extended such as service discovery, tracing.
  • Support TCP, HTTP, QUIC and KCP
  • Support multiple codecs such as JSON, Protobuf, MessagePack and raw bytes.
  • Service discovery. Supports peer-to-peer, configured peers, ZooKeeper, etcd, consul and mDNS.
  • Fault tolerance: Failover, Failfast, Failtry.
  • Load balancing: supports Random, RoundRobin, consistent hashing, weighted, network-quality and geography selectors.
  • Support Compression.
  • Support passing metadata.
  • Support Authorization.
  • Support heartbeat and one-way request.
  • Other features: metrics, log, timeout, alias, circuit breaker.
  • Support bidirectional communication.
  • Support access via HTTP so you can write clients in any programming languages.
  • Support API gateway.
  • Support backup request, forking and broadcast.

rpcx uses a binary, platform-independent protocol, which means you can develop services in other languages such as Java, Python or Node.js, and use other programming languages to invoke services developed in Go.

There is a UI manager: rpcx-ui.

Performance

Test results show rpcx performs better than the other rpc frameworks tested, except the standard library rpc.

The benchmark code is at rpcx-benchmark.

Listen to others, but test by yourself.

Test Environment

  • CPU: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 32 cores
  • Memory: 32G
  • Go: 1.9.0
  • OS: CentOS 7 / 3.10.0-229.el7.x86_64

Use

  • protobuf
  • the client and the server on the same server
  • 581 bytes payload
  • 500/2000/5000 concurrent clients
  • mock processing time: 0ms, 10ms and 30ms

Test Result

mock 0ms process time

(throughput, mean latency and P99 latency charts omitted)

mock 10ms process time

(throughput, mean latency and P99 latency charts omitted)

mock 30ms process time

(throughput, mean latency and P99 latency charts omitted)

Examples

You can find all examples at rpcxio/rpcx-examples.

Below is a simple example.

Server

    // define example.Arith
    ……

    s := server.NewServer()
    s.RegisterName("Arith", new(example.Arith), "")
    s.Serve("tcp", addr)

Client

    // prepare requests
    ……

    d := client.NewPeer2PeerDiscovery("tcp@"+addr, "")
    xclient := client.NewXClient("Arith", client.Failtry, client.RandomSelect, d, client.DefaultOption)
    defer xclient.Close()
    err := xclient.Call(context.Background(), "Mul", args, reply, nil)

Contribute

See contributors.

You are welcome to contribute:

  • submit issues or requirements
  • send PRs
  • write projects to use rpcx
  • write tutorials or articles to introduce rpcx

License

Apache License, Version 2.0

Comments
  • Found a memory-overwrite bug

    Calling the Go() interface concurrently, I found a bug where memory gets overwritten. Scenario: function A calls Go(), and function B receives results from the channel:

    tempChan := make(chan *client.Call, 10)

    func A() {
    	for {
    		oneClient.Go(context.Background(), "A", "B", a, r, tempChan)
    	}
    }

    func B() {
    	for {
    		temp := <-tempChan
    		_ = temp // consume the completed call
    	}
    }
    
    type Call struct {
    	ServicePath   string            // The name of the service and method to call.
    	ServiceMethod string            // The name of the service and method to call.
    	Metadata      map[string]string //metadata
    	ResMetadata   map[string]string
    	Args          interface{} // The argument to the function (*struct).
    	Reply         interface{} // The reply from the function (*struct).
    	Error         error       // After completion, the error status.
    	Done          chan *Call  // Strobes when call is complete.
    	Raw           bool        // raw message or not
    }
    

    If function B does not receive the messages promptly while the client's input loop receives several messages in a row, the content of the second message overwrites the content of the first.

    Root cause: the strings in Call reference the byte buffer's memory directly via SliceByteToString. In func (m *Message) Decode(r io.Reader), if the existing data buffer is reused instead of allocating a fresh one with make, the next message's content overwrites the previous one:

    if cap(m.data) >= totalL { // reuse data
    	m.data = m.data[:totalL]
    } else {
    	m.data = make([]byte, totalL)
    }

    4.0 
    opened by ldseraph 15
  • Can XClient.Call take raw []byte data for args and reply?

    I ran into this problem using rpcx in practice. The data flow is client -> agent -> rpc (protobuf format). The agent receives the client request and parses out the service name, method name, and the already marshaled args ([]byte data). When forwarding to the rpc service, I have to unmarshal the marshaled args ([]byte) back into an args object before making the rpc call via XClient.Call:

    Call(ctx context.Context, serviceMethod string, args interface{}, reply interface{}) error

    The problem with this approach: the agent has to reflect on the service and method names to unmarshal the args object, and whenever a new rpc service is added, the agent must be updated too, which is a pain!

    My question: could XClient gain an API like Call, but with args and reply typed as []byte?

    BCall(ctx context.Context, serviceMethod string, args []byte, reply []byte, marshalType int32) error

    That way the rpc client would pass binary data straight through to the rpc server; the server-side framework would restore args into the struct the controller needs according to marshalType, and then run the business logic…

    Any advice is appreciated!

    opened by JungleYe 14
  • The client stalls after about 10000 requests

    https://github.com/smallnest/rpcx/issues/5

    I don't know how to solve this. I am using it the same way as the example:

    d := client.NewPeer2PeerDiscovery("tcp@"+serverAddress+":"+cast.ToString(port), "")

    oneClient := client.NewOneClient(client.Failtry, client.RandomSelect, d, client.DefaultOption)

    Only one client is used globally.

    I saw you mention a connection pool, but I couldn't find a related example.

    My server is started like this:

    server := server.NewServer()

    this.server = server

    go server.Serve("tcp", fmt.Sprintf("%s:%d", this.serverAddress, this.port))

    It seems Serve has to be launched with go to run asynchronously...

    Could you take a look? How should I handle this?

    Wontfix 
    opened by ansjsun 10
  • Is this the right way to access a service over HTTP?

    func ArithClientHttp() {
    args := &Args{
    	A: 10,
    	B: 20,
    }

    argsBytes, err := json.Marshal(args)
    if err != nil {
    	fmt.Printf("Marshal args error:%v\n", err)
    	return
    }
    
    req, err := http.NewRequest("POST", "http://127.0.0.1:9528", bytes.NewReader(argsBytes))
    if err != nil {
    	fmt.Printf("http.NewRequest error:%v\n", err)
    	return
    }
    
    //request headers that need to be set
    h := req.Header
    h.Set(client.XMessageID,"10000")
    h.Set(client.XMessageType,"0")
    h.Set(client.XSerializeType,"2")
    h.Set(client.XServicePath,"Arith")
    h.Set(client.XServiceMethod,"Mul")
    
    //send the http request
    //http request ===> rpcx request ===> rpcx service ===> rpcx result ===> converted to http response ===> returned to the client
    res, err := http.DefaultClient.Do(req)
    if err != nil{
    	fmt.Printf("failed to call server, err:%v\n", err)
    	return
    }
    defer res.Body.Close()
    
    
    fmt.Printf("res:%v\n", res)
    // read the result
    replyData, err := ioutil.ReadAll(res.Body)
    if err != nil{
    	fmt.Printf("failed to read response, err:%v\n", err)
    	return
    }
    
    fmt.Printf("replyData:%v\n", replyData)
    
    // deserialize
    reply := &Reply{}
    err = json.Unmarshal(replyData, reply)
    if err != nil {
    	fmt.Printf("Unmarshal error:%v\n", err)
    	return
    }
    
    fmt.Printf("ArithClientHttp reply:%+v\n", reply)
    

    }

    The request fails with the error: rpcx: failed to handle gateway request: *part1.Args is not a proto.Unmarshaler

    opened by lihuicms-code-rep 9
  • Error when using rpcx with etcdv3

    https://github.com/rpcxio/rpcx-examples/tree/master/registry/etcdv3

    Running this example produces errors. etcd: etcd Version: 3.4.9, Git SHA: Not provided (use ./build instead of go build), Go Version: go1.14.3, Go OS/Arch: darwin/amd64

    Error 1, from etcd (this line repeats many times): WARNING: 2020/07/14 15:41:25 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"

    Error 2, from the client (this line repeats many times): 2020/07/14 15:41:25 etcdv3_discovery.go:244: WARN : chan is closed and will rewatch

    Error 3, from the server (after the server connects, is shut down, and is started again): 2020/07/14 15:42:56 etcdv3.go:66: ERROR: cannot create etcd path /rpcx_test: rpc error: code = Canceled desc = grpc: the client connection is closing 2020/07/14 15:42:56 rpc error: code = Canceled desc = grpc: the client connection is closing exit status 1

    How should I fix this?

    opened by 582727501 9
  • rpcx SendRaw response error

    env: go 1.12.1; rpcx: github.com/smallnest/rpcx v0.0.0-20191008054500-6a4c1b1de0fa. Using client.SendRaw with SerializeType protobuf. The response PB format:

    import "google/protobuf/any.proto";

    // response
    message Resp {
      int32 code = 1;               // return code
      uint64 req_time = 2;          // request time
      uint64 time = 3;              // current server time
      string msg = 4;
      google.protobuf.Any data = 5;
    }

    Usage: go func A() { client.SendRaw(....) }() and go func B() { client.SendRaw(....) }() request the same servicepath and servicemethod concurrently. Occasionally the payload in the SendRaw response is wrong (the memory appears to be overwritten). In client/client.go, where input receives data:

    if call.Raw {
    	call.Metadata, call.Reply, _ = convertRes2Raw(res)
    	log.Info("call payload: ", call.Reply, "args=", call.Args)
    }

    Here call.Reply is correct. Then call.done() sends the call object to the call's internal channel, and after the channel receives it:

    select {
    .....
    case call := <-done:
    	err = call.Error
    	m = call.Metadata
    	if call.Reply != nil {
    		// call.Reply here (the protobuf-marshaled data) is occasionally
    		// inconsistent with what client.input received above
    		log.Info("chan payload: ", call.Reply, "args=", call.Args)
    		payload = call.Reply.([]byte)
    	}
    }

    Could someone advise?

    bug 5.0 
    opened by bbqbyte 9
  • server shutdown will wait for all clients to finish processing

    fix #696

    Because closing ln forcibly closes the connections, this change adjusts when ln and the conns are closed: checkProcessMsg() ensures all in-flight processing completes, with a timeout on top.

    time.Sleep(shutdownPollInterval) makes sure that after ln is torn down, clients can finish their shutdown sequence normally; otherwise a read in progress gets force-closed.

    opened by Hanson 8
  • With etcd service discovery, if the server has not started, the client leaks goroutines, and Close() does not help.

    goroutine profile: total 75 56 @ 0x3c2a5 0xs086af 0x82eb 0xb5d0 0x47da1 0xbb25cf github.com/rpcxio/rpcx-etcd/store/etcdv3.New.func1+0x4f /root/xxx/src/github.com/rpcxio/rpcx-etcd/store/etcdv3/etcdv3.go:71

    opened by Matthew-Zong 8
  • client hangs after sync call to server (one in 5000 times)

    client hangs after sync call to server (one in 5000 times)

    As I'm stress testing my server using rpcx on the server, and rpcx on the client, I notice that, for about 1 in 5000 calls, the client will hang in the spot shown below (stack trace from kill -QUIT). The server has received the synchronous request, and replied. However the client never thinks the call is complete, and just hangs forever.

    I wonder if you could advise on how to go about solving this?

    It does seem to be a bug somewhere (probably a race, since it happens infrequently), in the rpcx client implementation.

    I am on ubuntu 18.04 on this commit:

    commit d969a5f620f8383be39d17007b29dfd93983a819 (HEAD -> master, origin/master, origin/HEAD)
    Merge: a54ce65 54101b2
    Author: smallnest <[email protected]>
    Date:   Sat Mar 13 11:21:52 2021 +0800
    
        Merge pull request #562 from fly512/master
    

    the stack trace is always the same:

    SIGQUIT: quit
    PC=0x4732c1 m=0 sigcode=0
    
    goroutine 0 [idle]:
    runtime.futex(0x102b508, 0x80, 0x0, 0x0, 0x0, 0xc00003a800, 0xc000128008, 0x113920e2022c03, 0x7fffcb1fb128, 0x40db5f, ...)
    	/usr/local/go1.15.7/src/runtime/sys_linux_amd64.s:587 +0x21
    runtime.futexsleep(0x102b508, 0x0, 0xffffffffffffffff)
    	/usr/local/go1.15.7/src/runtime/os_linux.go:45 +0x46
    runtime.notesleep(0x102b508)
    	/usr/local/go1.15.7/src/runtime/lock_futex.go:159 +0x9f
    runtime.stopm()
    	/usr/local/go1.15.7/src/runtime/proc.go:1924 +0xc5
    runtime.findrunnable(0xc00003a800, 0x0)
    	/usr/local/go1.15.7/src/runtime/proc.go:2485 +0xa7f
    runtime.schedule()
    	/usr/local/go1.15.7/src/runtime/proc.go:2683 +0x2d7
    runtime.park_m(0xc000000300)
    	/usr/local/go1.15.7/src/runtime/proc.go:2851 +0x9d
    runtime.mcall(0x94fa00)
    	/usr/local/go1.15.7/src/runtime/asm_amd64.s:318 +0x5b
    
    goroutine 1 [select]:
    github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client.(*Client).call(0xc0002be750, 0xaff840, 0xc0002bc900, 0xa558a0, 0x11, 0xa4cadc, 0x5, 0x981d80, 0xc0002bc8a0, 0x981e80, ...)
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client/client.go:240 +0x23c
    github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client.(*Client).Call(...)
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client/client.go:231
    main.(*ClientRpcx).DoSyncCallWithContext(0xc0002be680, 0xaff7c0, 0xc000026170, 0xc0002dc000, 0x0, 0xc0002e4900, 0x2f1, 0x2f1, 0x0, 0x0)
    	/home/jaten/go/src/github.com/glycerine/goq/xc.go:169 +0x219
    main.(*ClientRpcx).DoSyncCall(...)
    	/home/jaten/go/src/github.com/glycerine/goq/xc.go:137
    main.(*Submitter).SubmitJobGetReply(0xc00012e6c0, 0xc0002dc000, 0x1, 0xc00013d860, 0xc0002dc000, 0x0, 0x400, 0x7fc18e1d1700)
    	/home/jaten/go/src/github.com/glycerine/goq/sub.go:86 +0xbb
    main.main()
    	/home/jaten/go/src/github.com/glycerine/goq/main.go:169 +0x221f
    
    goroutine 21 [IO wait]:
    internal/poll.runtime_pollWait(0x7fc18c3d0d48, 0x72, 0xaf7180)
    	/usr/local/go1.15.7/src/runtime/netpoll.go:222 +0x55
    internal/poll.(*pollDesc).wait(0xc000123f18, 0x72, 0xaf7100, 0xfe5680, 0x0)
    	/usr/local/go1.15.7/src/internal/poll/fd_poll_runtime.go:87 +0x45
    internal/poll.(*pollDesc).waitRead(...)
    	/usr/local/go1.15.7/src/internal/poll/fd_poll_runtime.go:92
    internal/poll.(*FD).Read(0xc000123f00, 0xc0002d8000, 0x4000, 0x4000, 0x0, 0x0, 0x0)
    	/usr/local/go1.15.7/src/internal/poll/fd_unix.go:159 +0x1a5
    net.(*netFD).Read(0xc000123f00, 0xc0002d8000, 0x4000, 0x4000, 0x7fc18c1ea878, 0x7fc18c1ea878, 0x60)
    	/usr/local/go1.15.7/src/net/fd_posix.go:55 +0x4f
    net.(*conn).Read(0xc00011c218, 0xc0002d8000, 0x4000, 0x4000, 0x0, 0x0, 0x0)
    	/usr/local/go1.15.7/src/net/net.go:182 +0x8e
    bufio.(*Reader).Read(0xc00010f1a0, 0xc00030a004, 0x1, 0xc, 0x203000, 0x203000, 0x203000)
    	/usr/local/go1.15.7/src/bufio/bufio.go:227 +0x222
    io.ReadAtLeast(0xaf5c00, 0xc00010f1a0, 0xc00030a004, 0x1, 0xc, 0x1, 0x201, 0x1000000000000, 0x7fc18c1ea878)
    	/usr/local/go1.15.7/src/io/io.go:314 +0x87
    io.ReadFull(...)
    	/usr/local/go1.15.7/src/io/io.go:333
    github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/protocol.(*Message).Decode(0xc000318000, 0xaf5c00, 0xc00010f1a0, 0xc000200060, 0x0)
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/protocol/message.go:401 +0x7a
    github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client.(*Client).input(0xc0002be750)
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client/client.go:502 +0xd0
    created by github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client.(*Client).Connect
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client/connection.go:56 +0x1ff
    
    rax    0xca
    rbx    0x102b3c0
    rcx    0x4732c3
    rdx    0x0
    rdi    0x102b508
    rsi    0x80
    rbp    0x7fffcb1fb0f0
    rsp    0x7fffcb1fb0a8
    r8     0x0
    r9     0x0
    r10    0x0
    r11    0x286
    r12    0x3
    r13    0x102ae80
    r14    0x4
    r15    0x11
    rip    0x4732c1
    rflags 0x286
    cs     0x33
    fs     0x0
    gs     0x0
    

    This is where the client is making the call. The source code is open source:

    https://github.com/glycerine/goq/blob/master/xc.go#L140

    opened by glycerine 8
  • A connection error when deploying multiple servers and one client service with docker-compose

    I wrote a demo. Without docker, simply running backend and service (the client and the server, respectively) works fine. But running 1 backend and 2 services with docker-compose hits a problem.

    The architecture:

    • backend combines the rpcx client with gin. gin exposes an HTTP API externally; the rpcx client uses etcd to look up the services' rpc addresses, then makes the rpc calls.
    • service runs an rpcx server and exposes rpc interfaces for backend to use. etcd is the registry; the server's address is passed in via a flag and registered in etcd.

    This is backend's error:

    [GIN-debug] Listening and serving HTTP on :3010
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8973: connect: connection refused
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8973: connect: connection refused
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8972: connect: connection refused
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8972: connect: connection refused
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8972: connect: connection refused
    [GIN] 2020/05/29 - 06:26:33 | 500 |     33.5876ms |     192.168.0.1 | GET      "/api/v1/match?page=1&size=10&key=format.keyword&val=doc&index=testtable1"
    
    

    This is the docker-compose.yml that starts the services:

    version: '2'
    services:
      server1:
        image: yz_classic_server:1.0
        container_name: yz_classic_server1
        entrypoint: /rpcx-service/server -addr='10.0.0.201:8972'
        ports:
          - "8972:8972"
      server2:
        image: yz_classic_server:1.0
        container_name: yz_classic_server2
        # -flag passes in addr 10.0.0.201:8973
        entrypoint: /rpcx-service/server -addr='10.0.0.201:8973'
        ports:
          - "8973:8972"
      backend:
        image: yz_classic_backend:1.0
        container_name: yz_classic_backend
        ports:
          - "3010:3010"
    

    This is backend's main.go:

    
    var (
    	defaultAddr = ":3010"
    	addr  = flag.String("addr", defaultAddr, "http address")
    
    	//etcd
    	basePath = flag.String("base", "/rpcx_yz_classic", "prefix path")
    	etcdAddr = flag.String("etcdAddr", "10.0.0.201:2379", "etcd address")
    )
    
    
    
    func main() {
    	// gin + rpcx client
    	dEtcd := client.NewEtcdV3Discovery(*basePath, "search", []string{*etcdAddr}, nil)
    	v1.Xclient = client.NewXClient("search", client.Failover, client.RandomSelect, dEtcd, client.DefaultOption)
    	defer v1.Xclient.Close()
    
    	r := v1.InitRouter()
    	r.Run(defaultAddr) // listen and serve on 0.0.0.0:3010
    }
    
    

    This is service's main.go:

    var (
    	addr = flag.String("addr", "localhost:8972", "server address")
    
    	//etcd
    	basePath = flag.String("base", "/rpcx_yz_classic", "prefix path")
    	etcdAddr = flag.String("etcdAddr", "10.0.0.201:2379", "etcd address")
    )
    
    func main() {
    	flag.Parse()
    	// init es dao
    	dao.InitEs()
    
    	// rpcx service
    	s := server.NewServer()
    
    	addRegistryPlugin(s) //etcd
    
    	s.RegisterName("search", search.New(), "") // search.New() returns a service object; all of its methods matching the rpc method signature become callable via rpc
    	//err := s.Serve("tcp", *addr)
    	err := s.Serve("tcp", "localhost:8972")
    	if err != nil {
    		panic(err)
    	}
    }
    
    
    func addRegistryPlugin(s *server.Server) {
    	fmt.Println("*addr:", *addr)
    	r := &serverplugin.EtcdV3RegisterPlugin{
    		ServiceAddress: "tcp@" + *addr,
    		EtcdServers:    []string{*etcdAddr},
    		BasePath:       *basePath,
    		UpdateInterval: time.Minute,
    	}
    	err := r.Start()
    	if err != nil {
    		log.Fatal(err)
    	}
    	s.Plugins.Add(r)
    }
    

    Note: 10.0.0.201 is one machine, which hosts etcd; localhost (LAN IP 10.0.0.222) is another machine, which ran docker-compose up -d.

    I've been debugging this for a long time but still can't pin down where the problem is. Any pointers appreciated. Thx.

    opened by theoneLee 8
  • support passing metadata via jsonrpc 2.0

    1. The client attaches extra metadata to requests via ReqMetaDataKey.
    2. jsonrpc has no such metadata.

    I haven't figured out how to change this. Read ReqMetaDataKey out of params?

    1. https://github.com/smallnest/rpcx/issues/391
    2. https://github.com/smallnest/rpcx/blob/master/server/jsonrpc2.go#L77
    feature 6.0 
    opened by Lonenso 8
  • server.Shutdown() reports a gateway Serve error

    server.Shutdown() reports a gateway Serve error; the log is as follows:

    1667358839      ERROR   mydefault.log   server/gateway.go:85    error in gateway Serve: *errors.errorString mux: server closed
    1667358839      INFO    mydefault.log   server/server.go:1042   need handle in-processing msg size:0
    1667358839      INFO    mydefault.log   server/server.go:974    closed gateway
    1667358839      INFO    mydefault.log   server/server.go:982    closed JSONRPC
    1667358839      INFO    mydefault.log   server/server.go:996    shutdown end
    
    opened by vbirds 0
  • getCachedClient lock granularity issue

    c.mu.Lock()
    generatedClient, err, _ := c.slGroup.Do(k, func() (interface{}, error) {
    	return c.generateClient(k, servicePath, serviceMethod)
    })
    c.mu.Unlock()

    generateClient's default timeout is 1s. If the server restarts while there is a large volume of client requests, many goroutines pile up waiting for the lock, and that can take a very long time; in our production environment it reached 10 minutes.

    Additionally, when creating a client fails a given number of times in a row, it would help if the selector could drop that client (i.e. stop selecting it once the condition is met), to improve fault tolerance and speed up recovery.

    enhancement 
    opened by eaglc 1
  • Update xclient.go

    In watch(), many goroutines sort the same slice at the same time, which is a data race. There is no need to sort the pairs, so we should delete the sort code.

    opened by kuaikuai 0