The Go language implementation of gRPC. HTTP/2 based RPC




The Go implementation of gRPC: A high performance, open source, general RPC framework that puts mobile and HTTP/2 first. For more information see the Go gRPC docs, or jump directly into the quick start.



With Go module support (Go 1.11+), simply add the following import

import "google.golang.org/grpc"

to your code, and then go [build|run|test] will automatically fetch the necessary dependencies.

Otherwise, to install the grpc-go package, run the following command:

$ go get -u google.golang.org/grpc

Note: If you are trying to access grpc-go from China, see the FAQ below.

Learn more


I/O Timeout Errors

The golang.org domain may be blocked in some countries. go get usually produces an error like the following when this happens:

$ go get -u google.golang.org/grpc
package google.golang.org/grpc: unrecognized import path "google.golang.org/grpc" (https fetch: Get https://google.golang.org/grpc?go-get=1: dial tcp: i/o timeout)

To build Go code, there are several options:

  • Set up a VPN and access golang.org through that.

  • Without Go module support: git clone the repo manually:

    git clone https://github.com/grpc/grpc-go.git $GOPATH/src/google.golang.org/grpc

    You will need to do the same for all of grpc's dependencies in golang.org, e.g. golang.org/x/net.

  • With Go module support: it is possible to use the replace feature of go mod to create aliases for packages. In your project's directory:

    go mod edit -replace=google.golang.org/grpc=github.com/grpc/grpc-go@latest
    go mod tidy
    go mod vendor
    go build -mod=vendor

    Again, this will need to be done for all transitive dependencies hosted on golang.org as well. For details, refer to golang/go issue #28652.

Compiling error, undefined: grpc.SupportPackageIsVersion

If you are using Go modules:

Ensure your gRPC-Go version is required at the appropriate version in the same module containing the generated .pb.go files. For example, SupportPackageIsVersion6 needs v1.27.0, so in your go.mod file:

module <your module name>

require (
    google.golang.org/grpc v1.27.0
)

If you are not using Go modules:

Update the proto package, gRPC package, and rebuild the .proto files:

go get -u github.com/golang/protobuf/{proto,protoc-gen-go}
go get -u google.golang.org/grpc
protoc --go_out=plugins=grpc:. *.proto

How to turn on logging

The default logger is controlled by environment variables. Turn everything on like this:

$ export GRPC_GO_LOG_VERBOSITY_LEVEL=99
$ export GRPC_GO_LOG_SEVERITY_LEVEL=info

The RPC failed with error "code = Unavailable desc = transport is closing"

This error means the connection the RPC is using was closed, and there are many possible reasons, including:

  1. mis-configured transport credentials, connection failed on handshaking
  2. bytes disrupted, possibly by a proxy in between
  3. server shutdown
  4. Keepalive parameters caused connection shutdown, for example if you have configured your server to terminate connections regularly to trigger DNS lookups. If this is the case, you may want to increase your MaxConnectionAgeGrace, to allow longer RPC calls to finish.

It can be tricky to debug this because the error happens on the client side but the root cause of the connection being closed is on the server side. Turn on logging on both client and server, and see if there are any transport errors.
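For case 4, the relevant server-side knobs live in gRPC-Go's keepalive package. A configuration sketch (the duration values are illustrative, not recommendations):

```go
import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func newServer() *grpc.Server {
	return grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
		// Recycle connections periodically, e.g. to force DNS re-resolution.
		MaxConnectionAge: 5 * time.Minute,
		// Give in-flight RPCs time to finish before the connection is closed.
		MaxConnectionAgeGrace: 1 * time.Minute,
	}))
}
```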

  • protoc-gen-go-grpc: API for service registration

    There were some changes in #3657 that make it harder to develop gRPC services and harder to find new unimplemented methods - I wanted to start a discussion around the new default and figure out why the change was made. I do understand this is in an unreleased version, so I figured a discussion would be better than a bug report or feature request.

    From my perspective, this is a number of steps backwards for reasons I will outline below.

    When implementing a gRPC service in Go, I often start with a blank slate - the service has been defined in proto files, the go and gRPC protobuf definitions have been generated, all that's left to do is write the code. I often use something like the following so the compiler will help me along, telling me about missing methods, incorrect signatures, things like that.

    package chat

    func init() {
    	// Ensure that Server implements the ChatIngestServer interface
    	var server *Server = nil
    	var _ pb.ChatIngestServer = server
    }

    This can alternatively be done with var _ pb.ChatIngestServer = &Server{} but that theoretically leaves a little bit more memory around at runtime.

    After this, I add all the missing methods myself (returning the unimplemented status) and start adding implementations to them until I have a concrete implementation for all the methods.
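    The compile-time check pattern described above generalizes beyond gRPC; a minimal sketch with a hypothetical Greeter interface standing in for a generated service interface:

```go
package main

import "fmt"

// Greeter stands in for a generated service interface (hypothetical name).
type Greeter interface {
	Greet(name string) string
}

// Server is the concrete implementation under development.
type Server struct{}

func (s *Server) Greet(name string) string { return "hello " + name }

// Compile-time check: if *Server ever stops satisfying Greeter
// (say a method is renamed), this declaration fails to build.
var _ Greeter = (*Server)(nil)

func main() {
	fmt.Println((&Server{}).Greet("world"))
}
```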

    Problems with the new approach

    • As soon as you embed the Unimplemented implementation, the Go compiler gets a lot less useful: it will no longer tell you about missing methods, and because Go interfaces are implicit, if you make a mistake in implementing a method (like misspelling a method name), you will not find out until runtime. Additionally, because these are public methods, if they're attached to a public struct (like the commonly used Server), they may not be detected as unused methods.
    • If protos and generated files are updated, you will not know about any missing methods until runtime. Personally, I would prefer to know if I have not fully implemented a service at compile time rather than waiting until clients are running against it.
    • IDE hinting is worse with the new changes - IDEs will generally not recommend a method stub for an unimplemented method if it's provided by an embedded struct because even though it's technically implemented, it does not have a working implementation.

    I generally prefer compile time guarantees that all methods are implemented over runtime checks.
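    The embedding pitfall can be reproduced without any gRPC types; a sketch with hypothetical stand-ins for the generated interface and Unimplemented struct:

```go
package main

import "fmt"

// Service stands in for a generated server interface (hypothetical).
type Service interface {
	Ping() string
	Echo(s string) string
}

// UnimplementedService plays the role of the generated Unimplemented*Server.
type UnimplementedService struct{}

func (UnimplementedService) Ping() string         { return "unimplemented" }
func (UnimplementedService) Echo(s string) string { return "unimplemented" }

// Server embeds the fallback, so it satisfies Service even though Echo
// was accidentally spelled Ecoh: the compiler stays silent and the
// mistake only surfaces at runtime.
type Server struct {
	UnimplementedService
}

func (Server) Ping() string         { return "pong" }
func (Server) Ecoh(s string) string { return s } // typo, silently never called

func main() {
	var svc Service = Server{}
	fmt.Println(svc.Ping())     // implemented method
	fmt.Println(svc.Echo("hi")) // falls through to the embedded stub
}
```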

    Benefits of the new approach

    • Protos and generated files can be updated without requiring updates to server implementations.


    The option requireUnimplementedServers should default to false. This option is more valuable when dealing with external protobufs which are not versioned (maybe there should be a recommendation to embed the unimplemented struct in this case), and it makes it harder to catch mistakes if you are developing a canonical implementation of a service which should implement all the available methods.

    At least for me, the problems with the new approach vastly outweigh the benefits I've seen so far.

    Type: Question P1 
    opened by belak 86
  • Support serving web content from the same port

    The gRPC wire documentation says the gRPC protobuf mapping uses service names as paths. Would it be possible to serve web content from other URLs?

    I see the TLS side does ALPN, so that's an easy place to hook up.

    Any thoughts about non-TLS? e.g. a service running on localhost. Of course that would mean needing to do a http/1 upgrade negotiation, as then the port could not default to http/2.

    Use case 1: host a web application and its api on the same port.

    Use case 2: serve a health check response.

    Use case 3: serve a "nothing to see here" html page.

    Use case 4: serve a /robots.txt file.

    opened by tv42 70
  • Go 1.7 uses import "context"

    If you're using the new "context" library, then gRPC servers can't match the server to the interface, because it has the wrong type of context.

    Compiling bin/linux.x86_64/wlserver


    ./wlserver.go:767: cannot use wls (type *Server) as type WhitelistProto.GoogleWhitelistServer in argument to WhitelistProto.RegisterGoogleWhitelistServer:
    	*Server does not implement WhitelistProto.GoogleWhitelistServer (wrong type for Delete method)
    		have Delete(context.Context, *WhitelistProto.DeleteRequest) (*WhitelistProto.DeleteReply, error)
    		want Delete(context.Context, *WhitelistProto.DeleteRequest) (*WhitelistProto.DeleteReply, error)

    Type: API Change P3 Status: Blocked 
    opened by puellanivis 67
  • Connection latency significantly affects throughput

    I am working on a system that uses GRPC to send 1MB blobs between clients and servers and have observed some poor throughput when connection latency is high (180ms round trip is typical between Australia and the USA).

    The same throughput issues are not present when I take GRPC out of the equation. I have prepared a self-contained program that reproduces the problem on a local machine by simulating latency at the net.Listener level. It can run either using GRPC or just plain HTTP/2 POST requests. In each case the payload and latency are the same, but—as you can see from the data below—GRPC becomes several orders of magnitude slower as latency increases.

    The program and related files:

    The output of a typical run:

    $ ./ 
    Duration	Latency	Proto
    6.977221ms	0s	GRPC
    4.833989ms	0s	GRPC
    4.714891ms	0s	GRPC
    3.884165ms	0s	GRPC
    5.254322ms	0s	GRPC
    8.507352ms	0s	HTTP/2.0
    936.436µs	0s	HTTP/2.0
    453.471µs	0s	HTTP/2.0
    252.786µs	0s	HTTP/2.0
    265.955µs	0s	HTTP/2.0
    107.32663ms	1ms	GRPC
    102.51629ms	1ms	GRPC
    100.235617ms	1ms	GRPC
    100.444982ms	1ms	GRPC
    100.881221ms	1ms	GRPC
    12.423725ms	1ms	HTTP/2.0
    3.02918ms	1ms	HTTP/2.0
    2.775928ms	1ms	HTTP/2.0
    4.161895ms	1ms	HTTP/2.0
    2.951534ms	1ms	HTTP/2.0
    195.731175ms	2ms	GRPC
    190.571784ms	2ms	GRPC
    188.810298ms	2ms	GRPC
    190.593822ms	2ms	GRPC
    190.015888ms	2ms	GRPC
    19.18046ms	2ms	HTTP/2.0
    4.663983ms	2ms	HTTP/2.0
    5.45113ms	2ms	HTTP/2.0
    5.56255ms	2ms	HTTP/2.0
    5.582744ms	2ms	HTTP/2.0
    378.653747ms	4ms	GRPC
    362.14625ms	4ms	GRPC
    357.95833ms	4ms	GRPC
    364.525646ms	4ms	GRPC
    364.954077ms	4ms	GRPC
    33.666184ms	4ms	HTTP/2.0
    8.68926ms	4ms	HTTP/2.0
    10.658349ms	4ms	HTTP/2.0
    10.741361ms	4ms	HTTP/2.0
    10.188444ms	4ms	HTTP/2.0
    719.696194ms	8ms	GRPC
    699.807568ms	8ms	GRPC
    703.794127ms	8ms	GRPC
    702.610461ms	8ms	GRPC
    710.592955ms	8ms	GRPC
    55.66933ms	8ms	HTTP/2.0
    18.449093ms	8ms	HTTP/2.0
    17.080567ms	8ms	HTTP/2.0
    20.597944ms	8ms	HTTP/2.0
    17.318133ms	8ms	HTTP/2.0
    1.415272339s	16ms	GRPC
    1.350923577s	16ms	GRPC
    1.355653965s	16ms	GRPC
    1.338834603s	16ms	GRPC
    1.358419144s	16ms	GRPC
    102.133898ms	16ms	HTTP/2.0
    39.144638ms	16ms	HTTP/2.0
    40.82348ms	16ms	HTTP/2.0
    35.133498ms	16ms	HTTP/2.0
    39.516466ms	16ms	HTTP/2.0
    2.630821843s	32ms	GRPC
    2.46741086s	32ms	GRPC
    2.507019279s	32ms	GRPC
    2.476177935s	32ms	GRPC
    2.49266693s	32ms	GRPC
    179.271675ms	32ms	HTTP/2.0
    72.575954ms	32ms	HTTP/2.0
    67.23265ms	32ms	HTTP/2.0
    70.651455ms	32ms	HTTP/2.0
    67.579124ms	32ms	HTTP/2.0

    I theorize that there is something wrong with GRPC's flow control mechanism, but that's just a guess.

    P1 Type: Performance 
    opened by adg 66
  • Failed HTTP/2 Parsing StatusCode.Unavailable when calling Streaming RPCs from Golang Server

    Please answer these questions before submitting your issue.

    This is a continuation of an issue which I am opening here for better visibility from the grpc-go devs.

    What version of gRPC are you using?

    We are using python grpcio==1.3.5 and grpc-go==v1.4.x. We've also reproduced this on python grpcio==1.4.0

    What version of Go are you using (go version)?

    We're using go version 1.8.1

    What operating system (Linux, Windows, …) and version?

    Ubuntu 14.04

    What did you do?

    If possible, provide a recipe for reproducing the error. Happens inconsistently, every so often a streaming RPC will fail with the following error: <_Rendezvous of RPC that terminated with (StatusCode.UNAVAILABLE, Failed parsing HTTP/2)>

    Some grpc logs: E0629 13:45:52.222804121 27606 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.222827355 27606 completion_queue.c:226] Operation failed: tag=0x7f10bbd9ca60, error={"created":"@1498769152.222798356","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.222838571 27606 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.222846339 27606 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cad0, error={"created":"@1498769152.222799406","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.223925299 27603 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.223942312 27603 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c9f0, error={"created":"@1498769152.223918465","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.223949262 27603 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.223979616 27603 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c980, error={"created":"@1498769152.223919439","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.224009309 27603 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.224017226 27603 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c830, error={"created":"@1498769152.223920475","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.224391810 27609 channel_connectivity.c:138] watch_completion_error: "Cancelled" 
E0629 13:45:52.224403941 27609 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cc20, error={"created":"@1498769152.224387963","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.556768181 28157 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.556831045 28157 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cb40, error={"created":"@1498769557.556750425","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.557441154 28161 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.557504078 28161 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c830, error={"created":"@1498769557.557416763","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.557563746 28161 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.557608834 28161 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cc20, error={"created":"@1498769557.557420283","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.557649360 28161 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.557694897 28161 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c980, error={"created":"@1498769557.557423433","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.558510258 28166 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.558572634 28166 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cd70, 
error={"created":"@1498769557.558490789","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.558610179 28166 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.558644492 28166 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cec0, error={"created":"@1498769557.558494483","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.559833158 28167 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.559901218 28167 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cad0, error={"created":"@1498769557.559815450","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.635698278 29153 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.635812439 29153 completion_queue.c:226] Operation failed: tag=0x7f108afcb1a0, error={"created":"@1498770706.635668871","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.635887056 29153 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.635944586 29153 completion_queue.c:226] Operation failed: tag=0x7f108afcb210, error={"created":"@1498770706.635675260","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.636461489 29155 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.636525366 29155 completion_queue.c:226] Operation failed: tag=0x7f108afcb130, error={"created":"@1498770706.636440110","description":"Timed out waiting for connection state 
change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.636556141 29155 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.636585820 29155 completion_queue.c:226] Operation failed: tag=0x7f108afcb360, error={"created":"@1498770706.636443702","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.637721291 29163 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.637791752 29163 completion_queue.c:226] Operation failed: tag=0x7f108afcb0c0, error={"created":"@1498770706.637702529","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.637836300 29163 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.637872014 29163 completion_queue.c:226] Operation failed: tag=0x7f108afcb2f0, error={"created":"@1498770706.637706809","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.641194536 29163 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.641241298 29163 completion_queue.c:226] Operation failed: tag=0x7f108afcb050, error={"created":"@1498770706.641178364","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.539497986 29251 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.539555939 29251 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cc90, error={"created":"@1498771717.539483236","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.540536617 
29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.540601626 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c910, error={"created":"@1498771717.540517372","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.540647559 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.540679773 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cd70, error={"created":"@1498771717.540521809","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.541893786 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.541943420 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9ce50, error={"created":"@1498771717.541871189","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.541982533 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.542009741 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c830, error={"created":"@1498771717.541874944","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.542044730 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.542080406 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c980, error={"created":"@1498771717.541878692","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.543488271 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.543534201 29265 
completion_queue.c:226] Operation failed: tag=0x7f10bbd9cad0, error={"created":"@1498771717.543473445","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} <_Rendezvous of RPC that terminated with (StatusCode.UNAVAILABLE, Failed parsing HTTP/2)>

    What did you expect to see?

    The streaming RPC to succeed.

    What did you see instead?

    The streaming RPC failed

    Status: Requires Reporter Clarification Type: Bug 
    opened by kahuang 54
  • ClientConn is inflexible for client-side LB

    Client side LB was being discussed here: https://groups.google.com/forum/#!searchin/grpc-io/loadbalancing/grpc-io/yqB8sNNHeoo/0Mfu4b2cdaUJ

    We've been considering using GRPC for our new MicroService stack. We are using Etcd+SkyDNS for DNS SRV based service discovery and would like to leverage that for RR-like RPC load balancing between backends.

    However, it seems that the current ClientConn is fairly "single-homed". I thought about implementing an LbClientConn that would aggregate multiple ClientConn, but all the auto-generated code takes ClientConn structure and not a swappable interface.

    Are you planning on doing client-side LB anytime soon? Or maybe ideas or hints on how to make an early stab at it?

    opened by mwitkow 51
  • Instrumentation hooks

    We're currently experimenting with GRPC and wondering how we'll monitor the client code/server code dispatch using Prometheus metrics (should look familiar ;)

    I've been looking for a place in grpc-go to hook up gathering of ServiceName, MethodName, bytes, and latency data, and found none.

    Reading upon the thread in #131 about RPC interceptors, it is suggested to add the instrumentation in our Application Code (a.k.a. the code implementing the auto-generated Proto interfaces). I see the point about not cluttering grpc-go implementation and being implementation agnostic.

    However, adding instrumentation into Application Code means that we need to either:
    a) add a lot of repeatable code inside Application Code to handle instrumentation
    b) use the callFoo pattern described in #131 [only applicable to the Client]
    c) add a thin implementation of each Proto-generated interface that wraps the "real" Proto-generated method calls with metrics [only applicable to the Client]

    There are downsides to each solution though:
    a) leads to a lot of clutter and copy-paste errors, and some of the instrumentation will be omitted or badly done
    b) means that we lose the best (IMHO) feature of Proto-generated interfaces: the "natural" syntax that allows for easy mocking in unit tests (through injection of the Proto-generated interface); it is also only applicable on the Client side
    c) is very tedious, because each time we re-generate the Proto (add a method or a service) we need to manually copy-paste some boilerplate. This would be a huge drag on our coding workflow, since we really want to rely on Proto-generated code as much as possible. And it also is only applicable on the Client side.

    I think the cleanest solution would be a pluggable set of callbacks on pre-call/post-call on client/server that would grant access to ServiceName, MethodName and RpcContext (provided the latter carries stats about bytes transferred/start time of the call). This would allow people to plug in an instrumentation mechanism of their choice (statsd, grafana, Prometheus), and shouldn't have any impact on performance that the interceptors described in #131 could have had (the double serialization/deserialization).
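    The callback shape being asked for can be sketched without gRPC types (all names here are hypothetical):

```go
package main

import (
	"fmt"
	"time"
)

// Stats is what a post-call hook would receive (hypothetical shape).
type Stats struct {
	Service, Method string
	Duration        time.Duration
	Err             error
}

// Hooks is the pluggable pre-/post-call callback pair the issue proposes.
type Hooks struct {
	PreCall  func(service, method string)
	PostCall func(Stats)
}

// instrument wraps a single RPC invocation with the hooks.
func instrument(h Hooks, service, method string, call func() error) error {
	if h.PreCall != nil {
		h.PreCall(service, method)
	}
	start := time.Now()
	err := call()
	if h.PostCall != nil {
		h.PostCall(Stats{Service: service, Method: method, Duration: time.Since(start), Err: err})
	}
	return err
}

func main() {
	h := Hooks{
		PreCall:  func(s, m string) { fmt.Println("start", s, m) },
		PostCall: func(st Stats) { fmt.Println("done", st.Service, st.Method, st.Err) },
	}
	instrument(h, "Chat", "Send", func() error { return nil })
}
```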

    Having seen how amazingly useful RPC instrumentation was inside Google, I'm sure you've been thinking about solving this in gRPC, and I'm curious to know what you're planning :)

    opened by mwitkow 45
  • Access to TLS client certificate

    I can't see any way for an RPC method to authenticate a client based on a TLS certificate.

    An example program where an RPC method echoes the client TLS certificate would be great.
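    For reference, the usual way to reach the client certificate from inside a handler (a sketch; it assumes the server was started with TLS transport credentials that request a client certificate):

```go
import (
	"context"

	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/peer"
)

// clientCommonName extracts the client certificate's CN, if any.
func clientCommonName(ctx context.Context) string {
	p, ok := peer.FromContext(ctx)
	if !ok {
		return ""
	}
	tlsInfo, ok := p.AuthInfo.(credentials.TLSInfo)
	if !ok || len(tlsInfo.State.PeerCertificates) == 0 {
		return ""
	}
	return tlsInfo.State.PeerCertificates[0].Subject.CommonName
}
```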

    opened by tv42 44
  • Document how to use ServeHTTP

    Now that #75 is fixed (via #514), let's add examples on how to use ServeHTTP. The examples were removed from earlier versions of #514 to reduce the size of that change.

    First, I'd like to get #545 submitted, to clean up the testdata files; fixing this issue before that would otherwise make things worse.

    /cc @iamqizhao

    Type: Documentation 
    opened by bradfitz 34
  • Unexpected transport closing: too many pings from client

    What version of gRPC are you using?


    What version of Go are you using (go version)?


    What operating system (Linux, Windows, …) and version?

    Linux AMD64, Kernel 4.10

    What did you do?

    When I have the server configured with GZIP compression as so:

    gzcp := grpc.NewGZIPCompressor()
    grpcServer := grpc.NewServer(grpc.RPCCompressor(gzcp))

    Then when serving thousands of concurrent requests a second, clients will occasionally be disconnected with

    rpc error: code = Unavailable desc = transport is closing

    I see no errors from the server, and both the client and server are far from overloaded (<10% CPU usage etc). Not all clients are affected at once; it will just be one connection which gets this error.

    While trying to debug this, I disabled GZIP compression so I could more easily look at packet captures. I am unable to reproduce this error once the GZIP compressor is no longer in use.

    This issue is mostly to ask what the best way to proceed with diagnosing the problem is, or if there are any reasons why having a compressor would change the behavior of the system (aside from CPU usage which I don't think is a problem).

    P1 Type: Bug 
    opened by immesys 33
  • cmd/protoc-gen-go-grpc: add code generator

    Add a code generator for gRPC services.

    No tests included here. A followup PR updates code generation to use this generator, which acts as a test of the generator as well.

    Type: Feature no release notes 
    opened by neild 30
  • Cleanup error formatting and logging

    Mostly add missing colons, but also address a few other minor issues.

    It's confusing to read an error message without a colon for nesting, so that was the main motivation for the change. It happened to me a couple of times: I couldn't find the whole string anywhere in the code and wasted some time on it.


    opened by ash2k 0
  • How to check if server implements bidirectional stream method before sending data

    We (Dapr) have a gRPC API that uses a bi-directional stream:

    service ServiceInvocation {
      rpc CallLocalStream (stream InternalInvokeRequestStream) returns (stream InternalInvokeResponseStream) {}
    }

    This is a new API that we're adding, and we need to maintain backwards-compatibility so new clients (which implement CallLocalStream) can invoke old servers (which do not implement CallLocalStream), falling back to a unary RPC in that case.

    The problem we have is that when we create the stream on the client, with:

    stream, err := clientV1.CallLocalStream(ctx, opts...)
    if err != nil {
    	return nil, err
    }
    Even when the server does not implement CallLocalStream, err is always nil. We get an error (with code "Unimplemented") only after we try to receive a message from the stream.

    This is a problem for us because by the time we (try to) receive a message from the stream, we have already sent data to the server, and that works without errors. That data comes from a readable stream, which we end up consuming along the way, so the data doesn't exist anymore if we then need to fall back to a unary RPC.

    Is there a way to determine earlier if the server implements a bidirectional stream method? Ideally, without having to make a "ping" call which would add latency.

    Type: Question 
    opened by ItalyPaleAle 0
  • transport: send a trailers only-response if no messages or headers are sent

    This PR adds a check whether a stream is closed with an error. If so, and no messages or headers were sent, a trailers-only response is sent.



    opened by erni27 1
  • Improve observability of hitting maxConcurrentStream limit

    Use case(s) - what problem will this feature solve?

    In our setup we are seeing latency spikes caused by hitting the maxConcurrentStream limit enforced by the server. The problem is that we had to change the grpc code and rebuild the binary with custom logging to understand that this was the case. I propose improving observability for this, either by exposing a metric or some log entry.

    Proposed Solution

    The naive solution here would be:

    • Measure the latency of this for loop
    • If the latency exceeds a hardcoded threshold (let's say 50ms), log info at some higher verbosity level.
      • A similar pattern is widely used, e.g. in kubernetes/client-go, for a different waiting time.

    This will make debugging easier in some testing environments.

    We can also expose some metric with a histogram of waiting times to be able to track this in a production environment, where we do not enable higher verbosity by default.

    Alternatives Considered

    Additional Context

    Type: Feature 
    opened by mborsz 0
  • random balancer

    Please see the FAQ in our main README before submitting your issue.

    Use case(s) - what problem will this feature solve?

    I have a lot of clients which connect to a small number of servers to long-poll some data periodically. So it's important that each server should bear as equal a number of connections as possible.

    Proposed Solution

    In this scenario, a random balancer is an easy solution to think about. Althrough grpc-go supports to regist customized balancer, but because random balancer is so common that I created this issue to discuss whether it's possible to add a grpc-go/balancer/random just like roundroubin etc.

    P.S. I also found that the pickfirst balancer is in grpc-go/pickfirst.go but not in grpc-go/balancer. I'd like to know why the balancer implementations are not all in the same directory, and whether it's possible to move pickfirst.go to grpc-go/balancer so that the balancers are organized together.

    Alternatives Considered


    Additional Context


    Status: Requires Reporter Clarification Type: Feature 
    opened by pirDOL 1
  • Fix header limit exceeded

    Fix header limit exceeded

    This PR fixes the issue by returning a small hardcoded error message to the client when the header size limit is exceeded, instead of closing the connection without reporting any error. This implements the proposed solution from the issue description.

    In the issue discussion, @easwars also suggested a different approach: ignore the header size limit on the server and let the client handle it. Please let me know if I should implement this suggestion instead.

    opened by s-matyukevich 0
  • v1.51.0(Nov 18, 2022)

    Behavior Changes

    • xds: NACK EDS resources with duplicate addresses in accordance with a recent spec change (#5715)
      • Special Thanks: @erni27
    • grpc: restrict status codes that can be generated by the control plane (gRFC A54) (#5653)

    New Features

    • client: set grpc-accept-encoding header with all registered compressors (#5541)
      • Special Thanks: @jronak
    • xds/weightedtarget: return a more meaningful error when all child policies are in TRANSIENT_FAILURE (#5711)
    • gcp/observability: add "started rpcs" metric (#5768)
    • xds: de-experimentalize the google-c2p-resolver (#5707)
    • balancer: add experimental Producer types and methods (#5669)
    • orca: provide a way for LB policies to receive OOB load reports (#5669)

    Bug Fixes

    • go.mod: upgrade x/text dependency to address CVE 2022-32149 (#5769)
    • client: fix race that could lead to an incorrect connection state if it was closed immediately after the server's HTTP/2 preface was received (#5714)
      • Special Thanks: @fuweid
    • xds: ensure sum of the weights of all EDS localities at the same priority level does not exceed uint32 max (#5703)
      • Special Thanks: @erni27
    • client: fix binary logging bug which logs a server header on a trailers-only response (#5763)
    • balancer/priority: fix a bug where unreleased references to removed child policies (and associated state) were causing a memory leak (#5682)
    • xds/google-c2p: validate URI schema for no authorities (#5756)
    Source code(tar.gz)
    Source code(zip)
  • v1.50.1(Oct 14, 2022)

  • v1.50.0(Oct 6, 2022)

    Behavior Changes

    • client: use proper "@" semantics for connecting to abstract unix sockets. (#5678)
      • This is technically a bug fix; previously the address included a trailing NULL byte, which it should not have. This may break users creating the socket in Go by prefixing a NULL instead of an "@", though, so we are calling it out as a behavior change.
      • Special Thanks: @jachor

    New Features

    • metadata: add experimental ValueFromIncomingContext to more efficiently retrieve a single value (#5596)
      • Special Thanks: @horpto
    • stats: provide peer information in HandleConn context (#5589)
      • Special Thanks: @feihu-stripe
    • xds: add support for Outlier Detection, enabled by default (#5435, #5673)

    Bug Fixes

    • client: fix deadlock in transport caused by GOAWAY racing with stream creation (#5652)
      • This should only occur with an HTTP/2 server that does not follow best practices of an advisory GOAWAY (not a grpc-go server).
    • xds/xdsclient: fix a bug which was causing routes with cluster_specifier_plugin set to be NACKed when GRPC_EXPERIMENTAL_XDS_RLS_LB was off (#5670)
    • xds/xdsclient: NACK cluster resource if config_source_specifier in lrs_server is not self (#5613)
    • xds/ringhash: fix a bug which sometimes prevents the LB policy from retrying connection attempts (#5601)
    • xds/ringhash: do nothing when asked to exit IDLE instead of falling back on the default channel behavior of connecting to all addresses (#5614)
    • xds/rls: fix a bug which was causing the channel to be stuck in IDLE (#5656)
    • alts: fix a bug which was setting WaitForReady on handshaker service RPCs, thereby delaying fallback when required (#5620)
    • gcp/observability: fix End() to cleanup global state correctly (#5623)
    Source code(tar.gz)
    Source code(zip)
  • v1.49.0(Aug 24, 2022)

    New Features

    • gcp/observability: add support for Environment Variable GRPC_CONFIG_OBSERVABILITY_JSON (#5525)
    • gcp/observability: add support for custom tags (#5565)

    Behavior Changes

    • server: reduce log level from Warning to Info for early connection establishment errors (#5524)
      • Special Thanks: @jpkrohling

    Bug Fixes

    • client: fix race in flow control that could lead to unexpected EOF errors (#5494)
    • client: fix a race that could cause RPCs to time out instead of failing more quickly with UNAVAILABLE (#5503)
    • client & server: fix a panic caused by passing a nil stats handler to grpc.WithStatsHandler or grpc.StatsHandler (#5543)
    • transport/server: fix a race that could cause a stray header to be sent (#5513)
    • balancer: give precedence to IDLE over TRANSIENT_FAILURE when aggregating connectivity state (#5473)
    • xds/xdsclient: request correct resource name when user specifies a new style resource name with empty authority (#5488)
    • xds/xdsclient: NACK endpoint resources with zero weight (#5560)
    • xds/xdsclient: fix bug that would reset resource version information after ADS stream restart (#5422)
    • xds/xdsclient: fix goroutine leaks when load reporting is enabled (#5505)
    • xds/ringhash: fix config update processing to recreate ring and picker when min/max ring size changes (#5557)
    • xds/ringhash: avoid recreating subChannels when update doesn't change address weight information (#5431)
    • xds/priority: fix bug which could cause priority LB to block all traffic after a config update (#5549)
    • xds: fix bug when environment variable GRPC_EXPERIMENTAL_ENABLE_OUTLIER_DETECTION is set to true (#5537)
    Source code(tar.gz)
    Source code(zip)
  • v1.48.0(Jul 12, 2022)

    Bug Fixes

    • xds/priority: fix bug that could prevent higher priorities from receiving config updates (#5417)
    • RLS load balancer: don't propagate the status code returned on control plane RPCs to data plane RPCs (#5400)

    New Features

    • stats: add support for multiple stats handlers in a single client or server (#5347)
    • gcp/observability: add experimental OpenCensus tracing/metrics support (#5372)
    • xds: enable aggregate and logical DNS clusters by default (#5380)
    • credentials/google (for xds): support xdstp C2P cluster names (#5399)
    Source code(tar.gz)
    Source code(zip)
  • v1.47.0(May 31, 2022)

    New Features

    • xds: add support for RBAC metadata invert matchers (#5345)

    Bug Fixes

    • client: fix a context leaked if a connection to an address is lost before it is fully established (#5337)
      • Special Thanks: @carzil
    • client: fix potential panic during RPC retries (#5323)
    • xds/client: fix a potential concurrent map read/write in load reporting (#5331)
    • client/SubConn: do not recreate addrConn if UpdateAddresses is called with the same addresses (#5373)
    • xds/eds: resources containing duplicate localities with the same priority will be rejected (#5303)
    • server: return Canceled or DeadlineExceeded status code when writing headers to a stream that is already closed (#5292)
      • Special Thanks: @idiamond-stripe

    Behavior Changes

    • xds/priority: start the init timer when a child switches to Connecting from non-failure states (#5334)
    • server: respond with HTTP Status 405 and gRPC status INTERNAL if the method sent to server is not POST (#5364)


    • server: clarify documentation around setting and sending headers and ServerStream errors (#5302)
    Source code(tar.gz)
    Source code(zip)
  • v1.46.2(May 13, 2022)

  • v1.46.0(Apr 22, 2022)

    New Features

    • server: Support setting TCP_USER_TIMEOUT on grpc.Server connections using keepalive.ServerParameters.Time (#5219)
      • Special Thanks: @bonnefoa
    • client: perform graceful switching of LB policies in the ClientConn by default (#5285)
    • all: improve logging by including channelz identifier in log messages (#5192)

    API Changes

    • grpc: delete WithBalancerName() API, deprecated over 4 years ago in #1697 (#5232)
    • balancer: change BuildOptions.ChannelzParentID to an opaque identifier instead of int (#5192)
      • Note: the balancer package is labeled as EXPERIMENTAL, and we don't believe users were using this field.

    Behavior Changes

    • client: change connectivity state to TransientFailure in pick_first LB policy when all addresses are removed (#5274)
      • This is a minor change that brings grpc-go's behavior in line with the intended behavior and how C and Java behave.
    • metadata: add client-side validation of HTTP-invalid metadata before attempting to send (#4886)
      • Special Thanks: @Patrick0308

    Bug Fixes

    • metadata: make a copy of the value slices in FromContext() functions so that modifications won't affect the originals (#5267)
    • client: handle invalid service configs by applying the default, if applicable (#5238)
    • xds: the xds client will now apply a 1 second backoff before recreating ADS or LRS streams (#5280)
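The metadata copy fix above follows the standard defensive-copy pattern. A minimal sketch with plain maps (the `copyMD` helper is hypothetical; grpc-go's metadata type is `metadata.MD`, which has the same underlying shape):

```go
package main

import "fmt"

// copyMD deep-copies a metadata-like map so callers can mutate the returned
// value slices without affecting the original, mirroring the defensive copy
// grpc-go's metadata FromContext() functions now make (#5267).
func copyMD(md map[string][]string) map[string][]string {
	out := make(map[string][]string, len(md))
	for k, v := range md {
		out[k] = append([]string(nil), v...) // copy the slice, not just the header
	}
	return out
}

func main() {
	orig := map[string][]string{"authorization": {"bearer token"}}
	cp := copyMD(orig)
	cp["authorization"][0] = "mutated"
	fmt.Println(orig["authorization"][0]) // still "bearer token"
}
```

Copying only the map (but not the slices it holds) would leave both maps sharing the same backing arrays, which is exactly the aliasing bug the release fixed.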


    • Upgrade security/authorization module dependencies to v0.10.1 and others (#5243)
      • Special Thanks: @TristonianJones
    Source code(tar.gz)
    Source code(zip)
  • v1.45.0(Mar 9, 2022)

    Bug Fixes

    • xds/clusterresolver: pass cluster name to DNS child policy to be used in creds handshake (#5119)
    • reflection: support dynamic messages (#5180)
      • Special Thanks: @codebutler

    Performance Improvements

    • wrr: improve randomWRR performance (#5067)
      • Special Thanks: @huangchong94

    Behavior Changes

    • server: convert context errors returned by service handlers to status with the correct status code (Canceled or DeadlineExceeded), instead of Unknown (#5156)

    New Features

    • reflection: add NewServer(ServerOptions) for creating a reflection server with advanced customizations (#5197)
    • xds: support federation (#5128)
    • xds/resource: accept Self as LDS's RDS config source and CDS's EDS config source (#5152)
    • xds/bootstrap: add plugin system for credentials specified in bootstrap file (#5136)
    Source code(tar.gz)
    Source code(zip)
  • v1.44.0(Jan 25, 2022)

    New Features

    • balancer: add RLS load balancing policy (#5046)
    • xds: add RLS Cluster Specifier Plugin (#5004)
    • insecure: remove experimental notice (#5069)

    Bug Fixes

    • internal/balancergroup: eliminate race in exitIdle (#5012)
    • authz: fix regex expression match (#5035)


    • grpc: minor improvement on WithInsecure() document (#5068)
      • Special Thanks: @shitian-ni
    • attributes: document that some value types (e.g. maps) must implement Equal (#5109)
    • dialoptions.go: Fix WithBlock godoc (#5073)
      • Special Thanks: @sgreene570
    • grpclog.DepthLoggerV2: Correct comment: formats like fmt.Println (#5038)
      • Special Thanks: @evanj
    Source code(tar.gz)
    Source code(zip)
  • cmd/protoc-gen-go-grpc/v1.2.0(Dec 21, 2021)

    New Features

    • Add protoc and protoc-gen-go-grpc versions to top comment

    Bug Fixes

    • update the google.golang.org/protobuf dependency to v1.27.1 to fix package name generation on macOS
      • NOTE: this includes a behavior change in the protobuf package.
    Source code(tar.gz)
    Source code(zip)
    protoc-gen-go-grpc.v1.2.0.darwin.amd64.tar.gz(3.83 MB)
    protoc-gen-go-grpc.v1.2.0.linux.386.tar.gz(3.79 MB)
    protoc-gen-go-grpc.v1.2.0.linux.amd64.tar.gz(3.89 MB)
  • v1.43.0(Dec 14, 2021)

    API Changes

    • grpc: stabilize WithConnectParams DialOption (#4915)
      • Special Thanks: @hypnoglow

    Behavior Changes

    • status: support wrapped errors in FromContextError (#4977)
      • Special Thanks: @bestbeforetoday
    • config: remove the environment variable to disable retry support (#4922)

    New Features

    • balancer: new field Authority in BuildOptions for server name to use in the authentication handshake with a remote load balancer (#4969)

    Bug Fixes

    • xds/resolver: fix possible ClientConn leak upon resolver initialization failure (#4900)
    • client: fix nil panic in rare race conditions with the pick first LB policy (#4971)
    • xds: improve RPC error messages when xDS connection errors occur (#5032, #5054)
    • transport: do not create stream object in the face of illegal stream IDs (#4873)
      • Special Thanks: @uds5501


    • client: clarify errors to indicate whether compressed or uncompressed messages exceeded size limits (#4918)
      • Special Thanks: @uds5501
    Source code(tar.gz)
    Source code(zip)
  • v1.41.1(Dec 1, 2021)

    • creds/google: add NewDefaultCredentialsWithOptions() to support custom per-RPC creds (#4767, #4830)
    • pickfirst: check before calling Connect (#4971)
    Source code(tar.gz)
    Source code(zip)
  • v1.40.1(Dec 1, 2021)

  • v1.42.0(Nov 2, 2021)

    Behavior Changes

    • grpc: Dial("unix://relative-path") no longer works (#4817)
      • use "unix://absolute-path" or "unix:relative-path" instead in accordance with our documentation
    • xds/csds: use new field GenericXdsConfig instead of PerXdsConfig (#4898)
    • transport: server transport will reject requests with connection header (#4803)

    New Features

    • grpc: support grpc.WithAuthority when secure credentials are used (#4817)
    • creds/google: add NewDefaultCredentialsWithOptions() to support custom per-RPC creds (#4767, #4830)
    • authz: create file watcher interceptor for gRPC SDK API (#4760)
    • attributes: add Equal method (#4855)
    • resolver: add AddressMap and State.BalancerAttributes (#4855)
    • resolver: Add URL field to Target to store parsed dial target (#4817)
    • grpclb: add a target_name field to lb config to specify target when used as a child policy (#4847)
    • grpclog: support formatting log output as JSON (#4854)

    Bug Fixes

    • server: add missing conn.Close if the connection dies before reading the HTTP/2 preface (#4837)
    • grpclb: recover if addresses are received after an empty server list was received previously (#4879)
    • authz: support empty principals and fix rbac authenticated matcher (#4883)
    • xds/rds: NACK the RDS response if it contains unknown cluster specifier (#4788)
    • xds/priority: do not switch to low priority when high priority is in Idle (e.g. ringhash) (#4889)


    • grpc: stabilize WithDefaultServiceConfig and improve godoc (#4888)
    • status: clarify FromError docstring (#4880)
    • examples: add example illustrating the use of unix abstract sockets (#4848)
    • examples: update load balancing example to use loadBalancingConfig (#4887)
    • doc: promote WithDisableRetry to stable; clarify retry is enabled by default (#4901)

    API Changes

    • credentials: Mark TransportCredentials.OverrideServerName method as deprecated (#4817)
    Source code(tar.gz)
    Source code(zip)
  • v1.41.0(Sep 24, 2021)

    API Changes

    • xds: Promote xds server and creds APIs to stable (#4753)
    • balancer: add ExitIdle interface to instruct the balancer to attempt to leave the IDLE state by connecting SubConns if appropriate. (#4673)
      • NOTICE: This method will be required by the Balancer interface in the future

    Behavior Changes

    • xds: update xdsclient to keep the valid resources from a response even if the response also contains invalid resources and is NACK'ed (see gRFC 260) (#4743)
    • balancer: SubConns no longer automatically reconnect after READY; instead they transition to IDLE on connection loss (#4613)

    New Features

    • xds: add support for RINGHASH lb-policy and affinity (#4741)
    • xds: add support for retry policy in VirtualHosts and Routes (#4738)
    • stats: support stats for all retry attempts; support transparent retry (#4749)
    • authz: create interceptors for gRPC security policy API (#4664)

    Bug Fixes

    • transport: fix race in transport stream accessing s.recvCompress (#4641)
    • client: fix transparent retries when per-RPC credentials are in use (#4785)
    • server: fix bug that net.Conn is leaked if the connection is closed (io.EOF) immediately with no traffic (#4633)
    • oauth: Allow access to Google API regional endpoints via Google Default Credentials (#4713)
    Source code(tar.gz)
    Source code(zip)
  • v1.40.0(Aug 12, 2021)

    Behavior Changes

    • balancer: client channel no longer connects to idle subchannels that are returned by the pickers; LB policy should call SubConn.Connect instead. (#4579)
      • This change is in line with existing documentation stating that the balancer must call Connect on idle SubConns in order for them to connect, and is preparation for an upcoming change that transitions SubConns to the idle state when connections are lost.
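The idle-SubConn behavior described above can be pictured as a small connectivity state machine. This is a simplification for illustration; the real states live in grpc-go's connectivity package, and this toy `subConn` type is not the actual balancer.SubConn:

```go
package main

import "fmt"

// State mirrors a few of grpc-go's connectivity states for illustration.
type State string

const (
	Idle       State = "IDLE"
	Connecting State = "CONNECTING"
	Ready      State = "READY"
)

// subConn is a toy SubConn: after #4579/#4613, the channel no longer connects
// idle SubConns automatically; the LB policy must call Connect explicitly.
type subConn struct{ state State }

// Connect moves an idle SubConn toward a connection; it is a no-op otherwise.
func (s *subConn) Connect() {
	if s.state == Idle {
		s.state = Connecting
	}
}

// onConnectionLost transitions to IDLE instead of reconnecting immediately.
func (s *subConn) onConnectionLost() { s.state = Idle }

func main() {
	sc := &subConn{state: Ready}
	sc.onConnectionLost()
	fmt.Println(sc.state) // IDLE: stays idle until the policy calls Connect
	sc.Connect()
	fmt.Println(sc.state) // CONNECTING
}
```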

    Bug Fixes

    • transport: fail RPCs without HTTP status 200 (OK), according to the gRPC spec (#4474)
      • Special Thanks: @JNProtzman
    • binarylog: fail the Write() method if proto marshaling fails (#4582)
      • Special Thanks: @Jille
    • binarylog: exit the flusher goroutine upon closing the bufferedSink (#4583)
      • Special Thanks: @Jille

    New Features

    • metadata: add Delete method to MD to encapsulate lowercasing (#4549)
      • Special Thanks: @konradreiche
    • xds/cds: support logical DNS cluster and aggregated cluster (#4594)
    • stats: add stats.Begin.IsClientStream and IsServerStream to indicate the type of RPC invoked (#4533)
      • Special Thanks: @leviska

    Performance Improvements

    • server: improve performance when multiple interceptors are used (#4524)
      • Special Thanks: @amenzhinsky
    Source code(tar.gz)
    Source code(zip)
  • v1.39.1(Aug 5, 2021)

    • server: fix bug that net.Conn is leaked if the connection is closed (io.EOF) immediately with no traffic (#4642)
    • transport: fix race in transport stream accessing s.recvCompress (#4627)
    Source code(tar.gz)
    Source code(zip)
  • v1.39.0(Jun 29, 2021)

    Behavior Changes

    • csds: return empty response if xds client is not set (#4505)
    • metadata: convert keys to lowercase in FromContext() (#4416)

    New Features

    • xds: add GetServiceInfo to GRPCServer (#4507)
      • Special Thanks: @amenzhinsky
    • xds: add test-only injection of xds config to client and server (#4476)
    • server: allow PreparedMsgs to work for server streams (#3480)
      • Special Thanks: @eafzali

    Performance Improvements

    • transport: remove decodeState from client & server to reduce allocations (#4423)
      • Special Thanks: @JNProtzman

    Bug Fixes

    • server: return UNIMPLEMENTED on receipt of malformed method name (#4464)
    • xds/rds: use 100 as default weighted cluster totalWeight instead of 0 (#4439)
      • Special Thanks: @alpha-baby
    • transport: unblock read throttling when controlbuf exits (#4447)
    • client: fix status code to return Unavailable for servers shutting down instead of Unknown (#4561)


    • doc: fix broken benchmark dashboard link in (#4503)
      • Special Thanks: @laststem
    • example: improve hello world server with starting msg (#4468)
      • Special Thanks: @dkkb
    • client: Clarify that WaitForReady will block for CONNECTING channels (#4477)
      • Special Thanks: @evanj
    Source code(tar.gz)
    Source code(zip)
  • v1.38.1(Jun 29, 2021)

  • v1.38.0(May 19, 2021)

    API Changes

    • reflection: accept interface instead of grpc.Server struct in Register() (#4340)
    • resolver: add error return value from ClientConn.UpdateState (#4270)

    Behavior Changes

    • client: do not poll name resolver when errors or bad updates are reported (#4270)
    • transport: InTapHandle may return RPC status errors; no longer RST_STREAMs (#4365)

    New Features

    • client: propagate connection error causes to RPC status (#4311, #4316)
    • xds: support inline RDS resource from LDS response (#4299)
    • xds: server side support is now experimentally available
    • server: add ForceServerCodec() to set a custom encoding.Codec on the server (#4205)
      • Special Thanks: @ash2k

    Performance Improvements

    • metadata: reduce memory footprint in FromOutgoingContext (#4360)
      • Special Thanks: @irfansharif

    Bug Fixes

    • xds/balancergroup: fix rare memory leak after closing ClientConn (#4308)


    • examples: update xds examples for PSM security (#4256)
    • grpc: improve docs on StreamDesc (#4397)
    Source code(tar.gz)
    Source code(zip)
  • v1.37.1(May 11, 2021)

    • client: fix rare panic when shutting down client while receiving the first name resolver update (#4398)
    • client: fix leaked addrConn struct when addresses are updated (#4347)
    • xds/resolver: prevent panic when two LDS updates are received without an RDS update in between (#4327)
    Source code(tar.gz)
    Source code(zip)
  • v1.37.0(Apr 7, 2021)

    API Changes

    • balancer: Add UpdateAddresses() to balancer.ClientConn interface (#4215)
      • NOTICE: balancer.SubConn.UpdateAddresses() is now deprecated and will be REMOVED in gRPC-Go 1.39

    Behavior Changes

    • balancer/base: keep address attributes for pickers (#4253)
      • Special Thanks: @longXboy

    New Features

    • xds: add support for csds (#4226, #4217, #4243)
    • admin: create admin package for conveniently registering standard admin services (#4274)
    • xds: add support for HTTP filters (gRFC A39) (#4206, #4221)
    • xds: implement fault injection HTTP filter (A33) (#4236)
    • xds: enable timeout, circuit breaking, and fault injection by default (#4286)
    • xds: implement a priority based load balancer (#4070)
    • xds/creds: support all SAN matchers on client-side (#4246)

    Bug Fixes

    • xds: add env var protection for client-side security (#4247)
    • circuit breaking: update picker inline when there's a counter update (#4212)
    • server: fail RPCs without POST HTTP method (#4241)
    Source code(tar.gz)
    Source code(zip)
  • v1.36.1(Mar 25, 2021)

  • v1.35.1(Mar 12, 2021)

  • v1.34.2(Mar 12, 2021)

  • v1.36.0(Feb 24, 2021)

    New Features

    • xds bootstrap: support config content in env variable (#4153)

    Bug Fixes

    • encoding/proto: do not panic when types do not match (#4218)


    • status: document nil error handling of FromError (#4196)
      • Special Thanks: @gauravgahlot
    Source code(tar.gz)
    Source code(zip)
  • v1.35.0(Jan 14, 2021)

    Behavior Changes

    • roundrobin: strip attributes from addresses (#4024)
    • balancer: set RPC metadata in address attributes, instead of Metadata field (#4041)

    New Features

    • support unix-abstract schema (#4079)
      • Special Thanks: @resec
    • xds: implement experimental RouteAction timeout support (#4116)
    • xds: Implement experimental circuit breaking support. (#4050)

    Bug Fixes

    • xds: server_features should be a child of xds_servers and not a sibling (#4087)
    • xds: NACK more invalid RDS responses (#4120)
    Source code(tar.gz)
    Source code(zip)
  • v1.34.1(Jan 11, 2021)

    • xds client: Updated v3 type for http connection manager (#4137)
    • lrs: use JSON for locality's String representation (#4135)
    • eds/lrs: handle nil when LRS is disabled (#4086)
    • client: fix "unix" scheme handling for some corner cases (#4021)
    Source code(tar.gz)
    Source code(zip)