A zero-cost, fast, multi-language, bidirectional microservices framework in Go, similar to Alibaba Dubbo but with more features, and it scales easily. Try it. Test it. If you find it better, use it! (Java has Dubbo; Go has rpcx!)

Overview

Official site: http://rpcx.io


Notice: etcd

The etcd plugin has been moved to rpcx-etcd.

Announce

A tcpdump-like tool has been added: rpcxdump. You can use it to debug communication between rpcx services and clients.

Cross-Languages

You can use programming languages other than Go to access rpcx services.

  • rpcx-gateway: you can write clients in any programming language and call rpcx services via rpcx-gateway.
  • HTTP invoke: you can use plain HTTP requests to access rpcx services through the gateway.
  • Java services/clients: you can use rpcx-java to implement or access rpcx services via the raw protocol.

If you can write Go methods, you can also write rpc services. It is so easy to write rpc applications with rpcx.
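As a sketch of the HTTP-invoke path above: a client builds a plain HTTP request whose headers carry the rpcx routing metadata. The header names and the gateway address below are assumptions modeled on the rpcx-gateway convention; verify them against the gateway documentation before relying on them.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// buildGatewayRequest prepares an HTTP request for an rpcx gateway.
// The X-RPCX-* header names are assumptions based on the rpcx-gateway
// convention; check the gateway docs for the exact set.
func buildGatewayRequest(gatewayURL, servicePath, serviceMethod string, payload []byte) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost, gatewayURL, bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	h := req.Header
	h.Set("X-RPCX-MessageID", "10000")
	h.Set("X-RPCX-MessageType", "0")   // a request message
	h.Set("X-RPCX-SerializeType", "1") // e.g. JSON
	h.Set("X-RPCX-ServicePath", servicePath)
	h.Set("X-RPCX-ServiceMethod", serviceMethod)
	return req, nil
}

func main() {
	req, err := buildGatewayRequest("http://127.0.0.1:9981", "Arith", "Mul", []byte(`{"A":10,"B":20}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.Header.Get("X-RPCX-ServicePath"))
	// Send with http.DefaultClient.Do(req) once a gateway is running.
}
```

The request body is the serialized args; the gateway translates the HTTP request into an rpcx request and back.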

Installation

Install the basic features:

go get -v github.com/smallnest/rpcx/...

If you want to use the quic or kcp transport, pass these build tags to go get, go build or go run. For example, to enable all features:

go get -v -tags "quic kcp" github.com/smallnest/rpcx/...

tags:

  • quic: support quic transport
  • kcp: support kcp transport
  • ping: support network quality load balancing
  • utp: support utp transport

Which companies are using rpcx?

Features

rpcx is an RPC framework like Alibaba Dubbo and Weibo Motan.

rpcx 3.0 was refactored with these goals:

  1. Simple: easy to learn, easy to develop, easy to integrate and easy to deploy
  2. Performance: high performance (>= grpc-go)
  3. Cross-platform: supports raw slices of bytes, JSON, Protobuf and MessagePack. Theoretically it can be used with Java, PHP, Python, C/C++, Node.js, C# and other platforms
  4. Service discovery and service governance: supports ZooKeeper, etcd and Consul.

It contains the following features:

  • Support raw Go functions. There's no need to define proto files.
  • Pluggable. Features such as service discovery and tracing can be extended.
  • Support TCP, HTTP, QUIC and KCP.
  • Support multiple codecs such as JSON, Protobuf, MessagePack and raw bytes.
  • Service discovery: supports peer2peer, configured peers, zookeeper, etcd, consul and mDNS.
  • Fault tolerance: Failover, Failfast and Failtry.
  • Load balancing: supports Random, RoundRobin, Consistent hashing, Weighted, network quality and Geography.
  • Support compression.
  • Support passing metadata.
  • Support authorization.
  • Support heartbeat and one-way requests.
  • Other features: metrics, logging, timeouts, aliases, circuit breakers.
  • Support bidirectional communication.
  • Support access via HTTP so you can write clients in any programming language.
  • Support API gateway.
  • Support backup requests, forking and broadcast.

rpcx uses a binary, platform-independent protocol, which means you can develop services in other languages such as Java, Python and Node.js, and you can use those languages to invoke services developed in Go.

There is a UI manager: rpcx-ui.

Performance

Test results show rpcx has better performance than the other RPC frameworks tested, except the standard library's rpc package.

The benchmark code is at rpcx-benchmark.

Listen to others, but test by yourself.

Test Environment

  • CPU: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 32 cores
  • Memory: 32G
  • Go: 1.9.0
  • OS: CentOS 7 / 3.10.0-229.el7.x86_64

Use

  • protobuf
  • the client and the server run on the same machine
  • 581 bytes payload
  • 500/2000/5000 concurrent clients
  • mock processing time: 0ms, 10ms and 30ms

Test Result

mock 0ms process time: throughput, mean latency and P99 latency charts

mock 10ms process time: throughput, mean latency and P99 latency charts

mock 30ms process time: throughput, mean latency and P99 latency charts

(The charts themselves are images in the original page; see the rpcx-benchmark repository.)

Examples

You can find all examples at rpcxio/rpcx-examples.

Below is a simple example.

Server

    // define example.Arith
    ……

    s := server.NewServer()
    s.RegisterName("Arith", new(example.Arith), "")
    s.Serve("tcp", addr)

Client

    // prepare requests
    ……

    d := client.NewPeer2PeerDiscovery("tcp@"+addr, "")
    xclient := client.NewXClient("Arith", client.Failtry, client.RandomSelect, d, client.DefaultOption)
    defer xclient.Close()
    err := xclient.Call(context.Background(), "Mul", args, reply)

Contribute

See contributors.

Welcome to contribute:

  • submit issues or feature requests
  • send PRs
  • build projects that use rpcx
  • write tutorials or articles introducing rpcx

License

Apache License, Version 2.0

Issues
  • Found a memory-overwrite bug

    Calling the Go interface concurrently exposes a memory-overwrite bug. Scenario: function A issues async calls via Go; function B receives completed calls from the channel.

    tempChan := make(chan *client.Call, 10)

    func A() {
    	for {
    		oneClient.Go(context.Background(), "A", "B", a, r, tempChan)
    	}
    }

    func B() {
    	for {
    		call := <-tempChan
    		_ = call // process the completed call
    	}
    }
    
    type Call struct {
    	ServicePath   string            // The name of the service and method to call.
    	ServiceMethod string            // The name of the service and method to call.
    	Metadata      map[string]string //metadata
    	ResMetadata   map[string]string
    	Args          interface{} // The argument to the function (*struct).
    	Reply         interface{} // The reply from the function (*struct).
    	Error         error       // After completion, the error status.
    	Done          chan *Call  // Strobes when call is complete.
    	Raw           bool        // raw message or not
    }
    

    If function B does not drain the channel promptly while the client's input loop receives several messages in a row, the second message's content overwrites the first.

    Root cause: the strings in Call reference the underlying bytes directly via SliceByteToString. In func (m *Message) Decode(r io.Reader), if the existing data buffer is reused instead of allocating a new slice with make, the next message's content overwrites the previous one.

    if cap(m.data) >= totalL { // reuse data
    	m.data = m.data[:totalL]
    } else {
    	m.data = make([]byte, totalL)
    }
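The report above can be reproduced in miniature with only the standard library. The sketch below is illustrative, not rpcx's actual code: a zero-copy byte-to-string conversion (in the spirit of SliceByteToString) yields a string that silently changes when the decoder reuses its buffer.

```go
package main

import (
	"fmt"
	"unsafe"
)

// zeroCopyString converts a byte slice to a string without copying:
// the resulting string aliases b's backing array. This is the hazard
// described in the issue, not a recommended technique.
func zeroCopyString(b []byte) string {
	return *(*string)(unsafe.Pointer(&b))
}

func main() {
	buf := []byte("first")
	s := zeroCopyString(buf)
	fmt.Println(s) // prints "first"

	// Reusing the buffer for the next decoded message silently
	// rewrites every string that still aliases it.
	copy(buf, "xxxxx")
	fmt.Println(s) // prints "xxxxx"
}
```

The fix discussed in the issue is to either allocate a fresh buffer per message or copy the bytes before building strings from them.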

    4.0 
    opened by ldseraph 15
  • Can XClient.Call take raw []byte args and reply?

    I have this problem in practice with rpcx. The data flow is client -> agent -> rpc service (protobuf). The agent receives the client request and extracts the target service name, method name, and the already-marshaled args ([]byte). Before forwarding to the rpc service, the agent has to unmarshal those bytes back into an args object and then call:

    Call(ctx context.Context, serviceMethod string, args interface{}, reply interface{}) error

    The problem with this approach: the agent must reflectively unmarshal the args object based on the service and method name, and whenever a new rpc service is added the agent must also be updated, which is painful!

    My question: could XClient gain an API similar to Call, but with []byte args and reply?

    BCall(ctx context.Context, serviceMethod string, args []byte, reply []byte, marshalType int32) error

    That way the rpc client passes binary data straight through to the rpc server, and the server-side framework uses marshalType to restore args into the struct the controller needs before running the business logic.

    Any advice is appreciated!

    opened by JungleYe 14
  • The client stalls after about 10000 requests

    https://github.com/smallnest/rpcx/issues/5

    I hit the same symptom but don't know how to solve it. I'm using rpcx the way the example does:

    d := client.NewPeer2PeerDiscovery("tcp@"+serverAddress+":"+cast.ToString(port), "")

    oneClient := client.NewOneClient(client.Failtry, client.RandomSelect, d, client.DefaultOption)

    Only one client is used globally.

    I saw a connection pool mentioned, but found no example of it.

    My server is started like this:

    server := server.NewServer()

    this.server = server

    go server.Serve("tcp", fmt.Sprintf("%s:%d", this.serverAddress, this.port))

    It seems Serve has to be started in a goroutine to run asynchronously, right? Please take a look — how should this be done?

    Wontfix 
    opened by ansjsun 10
  • Is this how to access a service over HTTP?

    func ArithClientHttp() {
    	args := &Args{
    		A: 10,
    		B: 20,
    	}

    	argsBytes, err := json.Marshal(args)
    	if err != nil {
    		fmt.Printf("Marshal args error:%v\n", err)
    		return
    	}

    	req, err := http.NewRequest("POST", "http://127.0.0.1:9528", bytes.NewReader(argsBytes))
    	if err != nil {
    		fmt.Printf("http.NewRequest error:%v\n", err)
    		return
    	}

    	// request headers that must be set
    	h := req.Header
    	h.Set(client.XMessageID, "10000")
    	h.Set(client.XMessageType, "0")
    	h.Set(client.XSerializeType, "2")
    	h.Set(client.XServicePath, "Arith")
    	h.Set(client.XServiceMethod, "Mul")

    	// send the http request
    	// http request ===> rpcx request ===> rpcx service ===> rpcx result ===> http response ===> client
    	res, err := http.DefaultClient.Do(req)
    	if err != nil {
    		fmt.Printf("failed to call server, err:%v\n", err)
    		return
    	}
    	defer res.Body.Close()

    	fmt.Printf("res:%v\n", res)
    	// read the result
    	replyData, err := ioutil.ReadAll(res.Body)
    	if err != nil {
    		fmt.Printf("failed to read response, err:%v\n", err)
    		return
    	}

    	fmt.Printf("replyData:%v\n", replyData)

    	// deserialize
    	reply := &Reply{}
    	err = json.Unmarshal(replyData, reply)
    	if err != nil {
    		fmt.Printf("Unmarshal error:%v\n", err)
    		return
    	}

    	fmt.Printf("ArithClientHttp reply:%+v\n", reply)
    }

    The request fails with: rpcx: failed to handle gateway request: *part1.Args is not a proto.Unmarshaler

    opened by lihuicms-code-rep 9
  • Errors when using rpcx with etcdv3

    https://github.com/rpcxio/rpcx-examples/tree/master/registry/etcdv3

    Running this example produces errors. etcd: Version 3.4.9, Git SHA: Not provided (use ./build instead of go build), Go 1.14.3, darwin/amd64.

    Error 1, from etcd (repeated many times):

    WARNING: 2020/07/14 15:41:25 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"

    Error 2, from the client (repeated many times):

    2020/07/14 15:41:25 etcdv3_discovery.go:244: WARN : chan is closed and will rewatch

    Error 3, from the server, after the server connects, is shut down, and is started again:

    2020/07/14 15:42:56 etcdv3.go:66: ERROR: cannot create etcd path /rpcx_test: rpc error: code = Canceled desc = grpc: the client connection is closing
    2020/07/14 15:42:56 rpc error: code = Canceled desc = grpc: the client connection is closing
    exit status 1

    How do I fix this?

    opened by 582727501 9
  • rpcx SendRaw response error

    env: go 1.12.1, rpcx github.com/smallnest/rpcx v0.0.0-20191008054500-6a4c1b1de0fa. Using client.SendRaw with SerializeType protobuf. The response protobuf format:

    import "google/protobuf/any.proto";

    // response
    message Resp {
    	int32 code = 1;              // return code
    	uint64 req_time = 2;         // request time
    	uint64 time = 3;             // current server time
    	string msg = 4;
    	google.protobuf.Any data = 5;
    }

    Usage:

    go func() { client.SendRaw(....) }()
    go func() { client.SendRaw(....) }()

    When both goroutines request the same servicepath and servicemethod at the same time, the SendRaw response payload is occasionally wrong (memory appears to be overwritten). In client/client.go, where input receives data:

    if call.Raw {
    	call.Metadata, call.Reply, _ = convertRes2Raw(res)
    	log.Info("call payload: ", call.Reply, "args=", call.Args)
    }

    call.Reply is correct here. Then call.done() sends the call to its internal channel, and after the channel receives it:

    select {
    .....
    case call := <-done:
    	err = call.Error
    	m = call.Metadata
    	if call.Reply != nil {
    		// call.Reply here (the protobuf-marshaled bytes) occasionally
    		// differs from what client.input received above
    		log.Info("chan payload: ", call.Reply, "args=", call.Args)
    		payload = call.Reply.([]byte)
    	}
    }

    Any guidance would be appreciated.

    bug 5.0 
    opened by bbqbyte 9
  • server shutdown will wait for all client process


    fix #696

    Closing ln force-closes its connections, so this PR changes when ln and the conns are closed: checkProcessMsg() ensures every in-flight request finishes processing, bounded by a timeout.

    time.Sleep(shutdownPollInterval) gives clients time to finish their normal shutdown flow after ln is closed; without it, an in-progress read gets force-closed.

    opened by Hanson 8
  • With etcd service discovery, if the server is not running the client leaks goroutines, and Close() does not stop them.

    goroutine profile: total 75
    56 @ 0x3c2a5 0xs086af 0x82eb 0xb5d0 0x47da1 0xbb25cf github.com/rpcxio/rpcx-etcd/store/etcdv3.New.func1+0x4f /root/xxx/src/github.com/rpcxio/rpcx-etcd/store/etcdv3/etcdv3.go:71

    opened by Matthew-Zong 8
  • client hangs after sync call to server (one in 5000 times)


    As I'm stress testing my server using rpcx on the server, and rpcx on the client, I notice that, for about 1 in 5000 calls, the client will hang in the spot shown below (stack trace from kill -QUIT). The server has received the synchronous request, and replied. However the client never thinks the call is complete, and just hangs forever.

    I wonder if you could advise on how to go about solving this?

    It does seem to be a bug somewhere (probably a race, since it happens infrequently), in the rpcx client implementation.

    I am on ubuntu 18.04 on this commit:

    commit d969a5f620f8383be39d17007b29dfd93983a819 (HEAD -> master, origin/master, origin/HEAD)
    Merge: a54ce65 54101b2
    Author: smallnest <[email protected]>
    Date:   Sat Mar 13 11:21:52 2021 +0800
    
        Merge pull request #562 from fly512/master
    

    the stack trace is always the same:

    SIGQUIT: quit
    PC=0x4732c1 m=0 sigcode=0
    
    goroutine 0 [idle]:
    runtime.futex(0x102b508, 0x80, 0x0, 0x0, 0x0, 0xc00003a800, 0xc000128008, 0x113920e2022c03, 0x7fffcb1fb128, 0x40db5f, ...)
    	/usr/local/go1.15.7/src/runtime/sys_linux_amd64.s:587 +0x21
    runtime.futexsleep(0x102b508, 0x0, 0xffffffffffffffff)
    	/usr/local/go1.15.7/src/runtime/os_linux.go:45 +0x46
    runtime.notesleep(0x102b508)
    	/usr/local/go1.15.7/src/runtime/lock_futex.go:159 +0x9f
    runtime.stopm()
    	/usr/local/go1.15.7/src/runtime/proc.go:1924 +0xc5
    runtime.findrunnable(0xc00003a800, 0x0)
    	/usr/local/go1.15.7/src/runtime/proc.go:2485 +0xa7f
    runtime.schedule()
    	/usr/local/go1.15.7/src/runtime/proc.go:2683 +0x2d7
    runtime.park_m(0xc000000300)
    	/usr/local/go1.15.7/src/runtime/proc.go:2851 +0x9d
    runtime.mcall(0x94fa00)
    	/usr/local/go1.15.7/src/runtime/asm_amd64.s:318 +0x5b
    
    goroutine 1 [select]:
    github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client.(*Client).call(0xc0002be750, 0xaff840, 0xc0002bc900, 0xa558a0, 0x11, 0xa4cadc, 0x5, 0x981d80, 0xc0002bc8a0, 0x981e80, ...)
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client/client.go:240 +0x23c
    github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client.(*Client).Call(...)
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client/client.go:231
    main.(*ClientRpcx).DoSyncCallWithContext(0xc0002be680, 0xaff7c0, 0xc000026170, 0xc0002dc000, 0x0, 0xc0002e4900, 0x2f1, 0x2f1, 0x0, 0x0)
    	/home/jaten/go/src/github.com/glycerine/goq/xc.go:169 +0x219
    main.(*ClientRpcx).DoSyncCall(...)
    	/home/jaten/go/src/github.com/glycerine/goq/xc.go:137
    main.(*Submitter).SubmitJobGetReply(0xc00012e6c0, 0xc0002dc000, 0x1, 0xc00013d860, 0xc0002dc000, 0x0, 0x400, 0x7fc18e1d1700)
    	/home/jaten/go/src/github.com/glycerine/goq/sub.go:86 +0xbb
    main.main()
    	/home/jaten/go/src/github.com/glycerine/goq/main.go:169 +0x221f
    
    goroutine 21 [IO wait]:
    internal/poll.runtime_pollWait(0x7fc18c3d0d48, 0x72, 0xaf7180)
    	/usr/local/go1.15.7/src/runtime/netpoll.go:222 +0x55
    internal/poll.(*pollDesc).wait(0xc000123f18, 0x72, 0xaf7100, 0xfe5680, 0x0)
    	/usr/local/go1.15.7/src/internal/poll/fd_poll_runtime.go:87 +0x45
    internal/poll.(*pollDesc).waitRead(...)
    	/usr/local/go1.15.7/src/internal/poll/fd_poll_runtime.go:92
    internal/poll.(*FD).Read(0xc000123f00, 0xc0002d8000, 0x4000, 0x4000, 0x0, 0x0, 0x0)
    	/usr/local/go1.15.7/src/internal/poll/fd_unix.go:159 +0x1a5
    net.(*netFD).Read(0xc000123f00, 0xc0002d8000, 0x4000, 0x4000, 0x7fc18c1ea878, 0x7fc18c1ea878, 0x60)
    	/usr/local/go1.15.7/src/net/fd_posix.go:55 +0x4f
    net.(*conn).Read(0xc00011c218, 0xc0002d8000, 0x4000, 0x4000, 0x0, 0x0, 0x0)
    	/usr/local/go1.15.7/src/net/net.go:182 +0x8e
    bufio.(*Reader).Read(0xc00010f1a0, 0xc00030a004, 0x1, 0xc, 0x203000, 0x203000, 0x203000)
    	/usr/local/go1.15.7/src/bufio/bufio.go:227 +0x222
    io.ReadAtLeast(0xaf5c00, 0xc00010f1a0, 0xc00030a004, 0x1, 0xc, 0x1, 0x201, 0x1000000000000, 0x7fc18c1ea878)
    	/usr/local/go1.15.7/src/io/io.go:314 +0x87
    io.ReadFull(...)
    	/usr/local/go1.15.7/src/io/io.go:333
    github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/protocol.(*Message).Decode(0xc000318000, 0xaf5c00, 0xc00010f1a0, 0xc000200060, 0x0)
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/protocol/message.go:401 +0x7a
    github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client.(*Client).input(0xc0002be750)
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client/client.go:502 +0xd0
    created by github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client.(*Client).Connect
    	/home/jaten/go/src/github.com/glycerine/goq/vendor/github.com/smallnest/rpcx/client/connection.go:56 +0x1ff
    
    rax    0xca
    rbx    0x102b3c0
    rcx    0x4732c3
    rdx    0x0
    rdi    0x102b508
    rsi    0x80
    rbp    0x7fffcb1fb0f0
    rsp    0x7fffcb1fb0a8
    r8     0x0
    r9     0x0
    r10    0x0
    r11    0x286
    r12    0x3
    r13    0x102ae80
    r14    0x4
    r15    0x11
    rip    0x4732c1
    rflags 0x286
    cs     0x33
    fs     0x0
    gs     0x0
    

    This is where the client is making the call. The source code is open source:

    https://github.com/glycerine/goq/blob/master/xc.go#L140

    opened by glycerine 8
  • A connection error when deploying multiple servers and one client service with docker-compose

    I wrote a demo. Without Docker, simply running backend and service (i.e. the client and the server) works fine. But running 1 backend and 2 services with docker-compose fails.

    The architecture:

    • backend combines an rpcx client with gin: gin exposes an HTTP API to the outside, and the rpcx client fetches the service's rpc address from etcd and then makes the rpc call.
    • service runs an rpcx server and exposes rpc interfaces to backend. etcd serves as the registry; the server's address is passed in via a flag and registered with etcd.

    This is the backend's error:

    [GIN-debug] Listening and serving HTTP on :3010
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8973: connect: connection refused
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8973: connect: connection refused
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8972: connect: connection refused
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8972: connect: connection refused
    2020/05/29 06:26:33 connection.go:96: WARN : failed to dial server: dial tcp 10.0.0.201:8972: connect: connection refused
    [GIN] 2020/05/29 - 06:26:33 | 500 |     33.5876ms |     192.168.0.1 | GET      "/api/v1/match?page=1&size=10&key=format.keyword&val=doc&index=testtable1"
    
    

    This is the docker-compose.yml that starts the services:

    version: '2'
    services:
      server1:
        image: yz_classic_server:1.0
        container_name: yz_classic_server1
        entrypoint: /rpcx-service/server -addr='10.0.0.201:8972'
        ports:
          - "8972:8972"
      server2:
        image: yz_classic_server:1.0
        container_name: yz_classic_server2
    # addr 10.0.0.201:8973 is passed in via the -addr flag
        entrypoint: /rpcx-service/server -addr='10.0.0.201:8973'
        ports:
          - "8973:8972"
      backend:
        image: yz_classic_backend:1.0
        container_name: yz_classic_backend
        ports:
          - "3010:3010"
    

    This is the backend's main.go:

    
    var (
    	defaultAddr = ":3010"
    	addr  = flag.String("addr", defaultAddr, "http address")
    
    	//etcd
    	basePath = flag.String("base", "/rpcx_yz_classic", "prefix path")
    	etcdAddr = flag.String("etcdAddr", "10.0.0.201:2379", "etcd address")
    )
    
    
    
    func main() {
    	// gin + rpcx client
    	dEtcd := client.NewEtcdV3Discovery(*basePath, "search", []string{*etcdAddr}, nil)
    	v1.Xclient = client.NewXClient("search", client.Failover, client.RandomSelect, dEtcd, client.DefaultOption)
    	defer v1.Xclient.Close()
    
    	r := v1.InitRouter()
    	r.Run(defaultAddr) // listen and serve on 0.0.0.0:3010
    }
    
    

    This is the service's main.go:

    var (
    	addr = flag.String("addr", "localhost:8972", "server address")
    
    	//etcd
    	basePath = flag.String("base", "/rpcx_yz_classic", "prefix path")
    	etcdAddr = flag.String("etcdAddr", "10.0.0.201:2379", "etcd address")
    )
    
    func main() {
    	flag.Parse()
    	// init es dao
    	dao.InitEs()
    
    	// rpcx service
    	s := server.NewServer()
    
    	addRegistryPlugin(s) //etcd
    
    	s.RegisterName("search", search.New(), "") // search.New() returns a service object; every method matching the rpc signature becomes callable
    	//err := s.Serve("tcp", *addr)
    	err := s.Serve("tcp", "localhost:8972")
    	if err != nil {
    		panic(err)
    	}
    }
    
    
    func addRegistryPlugin(s *server.Server) {
    	fmt.Println("*addr:", *addr)
    	r := &serverplugin.EtcdV3RegisterPlugin{
    		ServiceAddress: "tcp@" + *addr,
    		EtcdServers:    []string{*etcdAddr},
    		BasePath:       *basePath,
    		UpdateInterval: time.Minute,
    	}
    	err := r.Start()
    	if err != nil {
    		log.Fatal(err)
    	}
    	s.Plugins.Add(r)
    }
    

    Note: 10.0.0.201 is one machine, which runs etcd; localhost (LAN IP 10.0.0.222) is another machine, where docker-compose up -d was run.

    I've spent a long time debugging but still can't locate the problem. Please advise. Thanks.

    opened by theoneLee 8
  • support pass metadata via jsonrpc 2.0


    1. The client attaches extra metadata to requests via ReqMetaDataKey.
    2. jsonrpc has no such metadata.

    I haven't decided how to implement this. Read ReqMetaDataKey from params?

    1. https://github.com/smallnest/rpcx/issues/391
    2. https://github.com/smallnest/rpcx/blob/master/server/jsonrpc2.go#L77
    feature 6.0 
    opened by Lonenso 8
  • What's the benefit to use QUIC vs TCP, HTTP in RPC area?


    Hi maintainer,

    I notice rpcx has added support for the QUIC protocol. Could I know the benefits of using rpcx over QUIC? Is there a benchmark comparing performance across TCP/HTTP/QUIC/KCP?

    We are trying to see how QUIC does in the RPC area and are evaluating whether we should use RPC over QUIC. Thanks for your information!

    opened by Jeffwan 1
  • Update xclient.go

    In watch(), many goroutines sort the same slice at the same time, which is a data race. There is also no need to sort the pairs, so the sort code should be deleted.

    opened by kuaikuai 0
  • Documentation request: the execution order, meaning, parameters and return values of the predefined plugins are missing

    Server Plugin predefines many plugins:

    type (
    	// RegisterPlugin is .
    	RegisterPlugin interface {
    		Register(name string, rcvr interface{}, metadata string) error
    		Unregister(name string) error
    	}
    
    	// RegisterFunctionPlugin is .
    	RegisterFunctionPlugin interface {
    		RegisterFunction(serviceName, fname string, fn interface{}, metadata string) error
    	}
    
    	// PostConnAcceptPlugin represents connection accept plugin.
    	// if returns false, it means subsequent IPostConnAcceptPlugins should not continue to handle this conn
    	// and this conn has been closed.
    	PostConnAcceptPlugin interface {
    		HandleConnAccept(net.Conn) (net.Conn, bool)
    	}
    
    	// PostConnClosePlugin represents client connection close plugin.
    	PostConnClosePlugin interface {
    		HandleConnClose(net.Conn) bool
    	}
    
    	// PreReadRequestPlugin represents .
    	PreReadRequestPlugin interface {
    		PreReadRequest(ctx context.Context) error
    	}
    
    	// PostReadRequestPlugin represents .
    	PostReadRequestPlugin interface {
    		PostReadRequest(ctx context.Context, r *protocol.Message, e error) error
    	}
    
    	// PreHandleRequestPlugin represents .
    	PreHandleRequestPlugin interface {
    		PreHandleRequest(ctx context.Context, r *protocol.Message) error
    	}
    
    	PreCallPlugin interface {
    		PreCall(ctx context.Context, serviceName, methodName string, args interface{}) (interface{}, error)
    	}
    
    	PostCallPlugin interface {
    		PostCall(ctx context.Context, serviceName, methodName string, args, reply interface{}) (interface{}, error)
    	}
    
    	// PreWriteResponsePlugin represents .
    	PreWriteResponsePlugin interface {
    		PreWriteResponse(context.Context, *protocol.Message, *protocol.Message, error) error
    	}
    
    	// PostWriteResponsePlugin represents .
    	PostWriteResponsePlugin interface {
    		PostWriteResponse(context.Context, *protocol.Message, *protocol.Message, error) error
    	}
    
    	// PreWriteRequestPlugin represents .
    	PreWriteRequestPlugin interface {
    		PreWriteRequest(ctx context.Context) error
    	}
    
    	// PostWriteRequestPlugin represents .
    	PostWriteRequestPlugin interface {
    		PostWriteRequest(ctx context.Context, r *protocol.Message, e error) error
    	}
    
    	// HeartbeatPlugin is .
    	HeartbeatPlugin interface {
    		HeartbeatRequest(ctx context.Context, req *protocol.Message) error
    	}
    
    	CMuxPlugin interface {
    		MuxMatch(m cmux.CMux)
    	}
    )
    

    Could you please document each plugin's execution order and meaning, along with the meaning of its parameters and return values? The original documentation does not cover this part.

    Thanks

    enhancement 8.0 
    opened by piyongcai 1
Owner
smallnest
Author of 《Scala Collections Cookbook》