gRPC-Go

The Go implementation of gRPC: A high performance, open source, general RPC framework that puts mobile and HTTP/2 first. For more information see the Go gRPC docs, or jump directly into the quick start.

Prerequisites

Installation

With Go module support (Go 1.11+), simply add the following import

import "google.golang.org/grpc"

to your code, and then go [build|run|test] will automatically fetch the necessary dependencies.

Otherwise, to install the grpc-go package, run the following command:

$ go get -u google.golang.org/grpc

Note: If you are trying to access grpc-go from China, see the FAQ below.

Learn more

FAQ

I/O Timeout Errors

The golang.org domain may be blocked in some countries. go get usually produces an error like the following when this happens:

$ go get -u google.golang.org/grpc
package google.golang.org/grpc: unrecognized import path "google.golang.org/grpc" (https fetch: Get https://google.golang.org/grpc?go-get=1: dial tcp 216.239.37.1:443: i/o timeout)

To build Go code, there are several options:

  • Set up a VPN and access google.golang.org through that.

  • Without Go module support: git clone the repo manually:

    git clone https://github.com/grpc/grpc-go.git $GOPATH/src/google.golang.org/grpc

    You will need to do the same for all of grpc's dependencies in golang.org, e.g. golang.org/x/net.

  • With Go module support: it is possible to use the replace feature of go mod to create aliases for golang.org packages. In your project's directory:

    go mod edit -replace=google.golang.org/grpc=github.com/grpc/grpc-go@latest
    go mod tidy
    go mod vendor
    go build -mod=vendor

    Again, this will need to be done for all transitive dependencies hosted on golang.org as well. For details, refer to golang/go issue #28652.

Compiling error, undefined: grpc.SupportPackageIsVersion

If you are using Go modules:

Ensure that the module containing your generated .pb.go files requires an appropriate gRPC-Go version. For example, SupportPackageIsVersion6 needs v1.27.0, so in your go.mod file:

module <your module name>

require (
    google.golang.org/grpc v1.27.0
)

If you are not using Go modules:

Update the proto package, gRPC package, and rebuild the .proto files:

go get -u github.com/golang/protobuf/{proto,protoc-gen-go}
go get -u google.golang.org/grpc
protoc --go_out=plugins=grpc:. *.proto

How to turn on logging

The default logger is controlled by environment variables. Turn everything on like this:

$ export GRPC_GO_LOG_VERBOSITY_LEVEL=99
$ export GRPC_GO_LOG_SEVERITY_LEVEL=info

The RPC failed with error "code = Unavailable desc = transport is closing"

This error means the connection the RPC is using was closed, and there are many possible reasons, including:

  1. Misconfigured transport credentials: the connection failed during handshaking.
  2. Bytes disrupted, possibly by a proxy in between.
  3. Server shutdown.
  4. Keepalive parameters caused connection shutdown, for example if you have configured your server to terminate connections regularly to trigger DNS lookups. If this is the case, you may want to increase MaxConnectionAgeGrace to allow longer RPC calls to finish.

It can be tricky to debug this because the error happens on the client side but the root cause of the connection being closed is on the server side. Turn on logging on both client and server, and see if there are any transport errors.
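If keepalive settings are the suspect (reason 4 above), the relevant server-side knobs live in the keepalive package. A configuration sketch; the durations here are illustrative, not recommendations:

```go
import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

// newServer builds a server that recycles connections periodically while
// giving in-flight RPCs time to finish. Values are illustrative only.
func newServer() *grpc.Server {
	return grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
		MaxConnectionAge:      30 * time.Minute, // terminate connections regularly, e.g. to re-trigger DNS lookups
		MaxConnectionAgeGrace: 5 * time.Minute,  // grace period for long RPCs before the hard close
	}))
}
```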

Issues
  • protoc-gen-go-grpc: API for service registration

    protoc-gen-go-grpc: API for service registration

    There were some changes in #3657 that make it harder to develop gRPC services and harder to find new unimplemented methods - I wanted to start a discussion around the new default and figure out why the change was made. I do understand this is in an unreleased version, so I figured a discussion would be better than a bug report or feature request.

    From my perspective, this is a number of steps backwards for reasons I will outline below.

    When implementing a gRPC service in Go, I often start with a blank slate - the service has been defined in proto files, the go and gRPC protobuf definitions have been generated, all that's left to do is write the code. I often use something like the following so the compiler will help me along, telling me about missing methods, incorrect signatures, things like that.

    package chat
    func init() {
    	// Ensure that Server implements the ChatIngestServer interface
    	var server *Server = nil
    	var _ pb.ChatIngestServer = server
    }
    

    This can alternatively be done with var _ pb.ChatIngestServer = &Server{} but that theoretically leaves a little bit more memory around at runtime.

    After this, I add all the missing methods myself (returning the unimplemented status) and start adding implementations to them until I have a concrete implementation for all the methods.

    Problems with the new approach

    • As soon as you embed the Unimplemented implementation, the Go compiler gets a lot less useful - it will no longer tell you about missing methods and because Go interfaces are implicit, if you make a mistake in implementing a method (like misspelling a method name), you will not find out until runtime. Additionally, because these are public methods, if they're attached to a public struct (like the commonly used Server), they may not be detected as an unused method.
    • If protos and generated files are updated, you will not know about any missing methods until runtime. Personally, I would prefer to know if I have not fully implemented a service at compile time rather than waiting until clients are running against it.
    • IDE hinting is worse with the new changes - IDEs will generally not recommend a method stub for an unimplemented method if it's provided by an embedded struct because even though it's technically implemented, it does not have a working implementation.

    I generally prefer compile time guarantees that all methods are implemented over runtime checks.

    Benefits of the new approach

    • Protos and generated files can be updated without requiring updates to server implementations.

    Proposal

    The option requireUnimplementedServers should default to false. Requiring the embedded struct is more valuable when dealing with external protobufs which are not versioned (maybe there should be a recommendation to embed the unimplemented struct in that case), and it makes it harder to catch mistakes if you are developing a canonical implementation of a service which should implement all the available methods.

    At least for me, the problems with the new approach vastly outweigh the benefits I've seen so far.

    Type: Question P1 
    opened by belak 85
  • Support serving web content from the same port

    Support serving web content from the same port

    https://github.com/grpc/grpc-common/blob/master/PROTOCOL-HTTP2.md#appendix-a---grpc-for-protobuf says grpc protobuf mapping uses service names as paths. Would it be possible to serve web content from other urls?

    I see TLS side does alpn, so that's an easy place to hook up.

    Any thoughts about non-TLS? e.g. a service running on localhost. Of course that would mean needing to do a http/1 upgrade negotiation, as then the port could not default to http/2.

    Use case 1: host a web application and its api on the same port.

    Use case 2: serve a health check response.

    Use case 3: serve a "nothing to see here" html page.

    Use case 4: serve a /robots.txt file.

    opened by tv42 70
  • Go 1.7 uses import "context"

    Go 1.7 uses import "context"

    If you're using the new "context" library, then gRPC servers can't match the server to the interface, because it has the wrong type of context.

    Compiling bin/linux.x86_64/wlserver

    asci/cs/tools/whitelist/wlserver

    ./wlserver.go:767: cannot use wls (type *Server) as type WhitelistProto.GoogleWhitelistServer in argument to WhitelistProto.RegisterGoogleWhitelistServer:
        *Server does not implement WhitelistProto.GoogleWhitelistServer (wrong type for Delete method)
            have Delete(context.Context, *WhitelistProto.DeleteRequest) (*WhitelistProto.DeleteReply, error)
            want Delete(context.Context, *WhitelistProto.DeleteRequest) (*WhitelistProto.DeleteReply, error)

    Type: API Change P3 Status: Blocked 
    opened by puellanivis 67
  • Connection latency significantly affects throughput

    Connection latency significantly affects throughput

    I am working on a system that uses GRPC to send 1MB blobs between clients and servers and have observed some poor throughput when connection latency is high (180ms round trip is typical between Australia and the USA).

    The same throughput issues are not present when I take GRPC out of the equation. I have prepared a self-contained program that reproduces the problem on a local machine by simulating latency at the net.Listener level. It can run either using GRPC or just plain HTTP/2 POST requests. In each case the payload and latency are the same, but—as you can see from the data below—GRPC becomes several orders of magnitude slower as latency increases.

    The program and related files: https://gist.github.com/adg/641d04ef335782648502cb32a03b2b07

    The output of a typical run:

    $ ./run.sh 
    Duration	Latency	Proto
    
    6.977221ms	0s	GRPC
    4.833989ms	0s	GRPC
    4.714891ms	0s	GRPC
    3.884165ms	0s	GRPC
    5.254322ms	0s	GRPC
    
    8.507352ms	0s	HTTP/2.0
    936.436µs	0s	HTTP/2.0
    453.471µs	0s	HTTP/2.0
    252.786µs	0s	HTTP/2.0
    265.955µs	0s	HTTP/2.0
    
    107.32663ms	1ms	GRPC
    102.51629ms	1ms	GRPC
    100.235617ms	1ms	GRPC
    100.444982ms	1ms	GRPC
    100.881221ms	1ms	GRPC
    
    12.423725ms	1ms	HTTP/2.0
    3.02918ms	1ms	HTTP/2.0
    2.775928ms	1ms	HTTP/2.0
    4.161895ms	1ms	HTTP/2.0
    2.951534ms	1ms	HTTP/2.0
    
    195.731175ms	2ms	GRPC
    190.571784ms	2ms	GRPC
    188.810298ms	2ms	GRPC
    190.593822ms	2ms	GRPC
    190.015888ms	2ms	GRPC
    
    19.18046ms	2ms	HTTP/2.0
    4.663983ms	2ms	HTTP/2.0
    5.45113ms	2ms	HTTP/2.0
    5.56255ms	2ms	HTTP/2.0
    5.582744ms	2ms	HTTP/2.0
    
    378.653747ms	4ms	GRPC
    362.14625ms	4ms	GRPC
    357.95833ms	4ms	GRPC
    364.525646ms	4ms	GRPC
    364.954077ms	4ms	GRPC
    
    33.666184ms	4ms	HTTP/2.0
    8.68926ms	4ms	HTTP/2.0
    10.658349ms	4ms	HTTP/2.0
    10.741361ms	4ms	HTTP/2.0
    10.188444ms	4ms	HTTP/2.0
    
    719.696194ms	8ms	GRPC
    699.807568ms	8ms	GRPC
    703.794127ms	8ms	GRPC
    702.610461ms	8ms	GRPC
    710.592955ms	8ms	GRPC
    
    55.66933ms	8ms	HTTP/2.0
    18.449093ms	8ms	HTTP/2.0
    17.080567ms	8ms	HTTP/2.0
    20.597944ms	8ms	HTTP/2.0
    17.318133ms	8ms	HTTP/2.0
    
    1.415272339s	16ms	GRPC
    1.350923577s	16ms	GRPC
    1.355653965s	16ms	GRPC
    1.338834603s	16ms	GRPC
    1.358419144s	16ms	GRPC
    
    102.133898ms	16ms	HTTP/2.0
    39.144638ms	16ms	HTTP/2.0
    40.82348ms	16ms	HTTP/2.0
    35.133498ms	16ms	HTTP/2.0
    39.516466ms	16ms	HTTP/2.0
    
    2.630821843s	32ms	GRPC
    2.46741086s	32ms	GRPC
    2.507019279s	32ms	GRPC
    2.476177935s	32ms	GRPC
    2.49266693s	32ms	GRPC
    
    179.271675ms	32ms	HTTP/2.0
    72.575954ms	32ms	HTTP/2.0
    67.23265ms	32ms	HTTP/2.0
    70.651455ms	32ms	HTTP/2.0
    67.579124ms	32ms	HTTP/2.0
    

    I theorize that there is something wrong with GRPC's flow control mechanism, but that's just a guess.

    P1 Type: Performance 
    opened by adg 66
  • Failed HTTP/2 Parsing StatusCode.Unavailable when calling Streaming RPCs from Golang Server

    Failed HTTP/2 Parsing StatusCode.Unavailable when calling Streaming RPCs from Golang Server

    Please answer these questions before submitting your issue.

    This is a continuation of https://github.com/grpc/grpc/issues/11586 which I am opening here for better visibility from the grpc-go devs.

    What version of gRPC are you using?

    We are using python grpcio==1.3.5 and grpc-go==v1.4.x. We've also reproduced this on python grpcio==1.4.0

    What version of Go are you using (go version)?

    We're using go version 1.8.1

    What operating system (Linux, Windows, …) and version?

    Ubuntu 14.04

    What did you do?

    If possible, provide a recipe for reproducing the error.

    Happens inconsistently; every so often a streaming RPC will fail with the following error: <_Rendezvous of RPC that terminated with (StatusCode.UNAVAILABLE, Failed parsing HTTP/2)>

    Some grpc logs: E0629 13:45:52.222804121 27606 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.222827355 27606 completion_queue.c:226] Operation failed: tag=0x7f10bbd9ca60, error={"created":"@1498769152.222798356","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.222838571 27606 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.222846339 27606 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cad0, error={"created":"@1498769152.222799406","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.223925299 27603 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.223942312 27603 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c9f0, error={"created":"@1498769152.223918465","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.223949262 27603 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.223979616 27603 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c980, error={"created":"@1498769152.223919439","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.224009309 27603 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:45:52.224017226 27603 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c830, error={"created":"@1498769152.223920475","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:45:52.224391810 27609 channel_connectivity.c:138] watch_completion_error: "Cancelled" 
E0629 13:45:52.224403941 27609 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cc20, error={"created":"@1498769152.224387963","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.556768181 28157 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.556831045 28157 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cb40, error={"created":"@1498769557.556750425","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.557441154 28161 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.557504078 28161 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c830, error={"created":"@1498769557.557416763","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.557563746 28161 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.557608834 28161 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cc20, error={"created":"@1498769557.557420283","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.557649360 28161 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.557694897 28161 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c980, error={"created":"@1498769557.557423433","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.558510258 28166 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.558572634 28166 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cd70, 
error={"created":"@1498769557.558490789","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.558610179 28166 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.558644492 28166 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cec0, error={"created":"@1498769557.558494483","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 13:52:37.559833158 28167 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 13:52:37.559901218 28167 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cad0, error={"created":"@1498769557.559815450","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.635698278 29153 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.635812439 29153 completion_queue.c:226] Operation failed: tag=0x7f108afcb1a0, error={"created":"@1498770706.635668871","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.635887056 29153 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.635944586 29153 completion_queue.c:226] Operation failed: tag=0x7f108afcb210, error={"created":"@1498770706.635675260","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.636461489 29155 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.636525366 29155 completion_queue.c:226] Operation failed: tag=0x7f108afcb130, error={"created":"@1498770706.636440110","description":"Timed out waiting for connection state 
change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.636556141 29155 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.636585820 29155 completion_queue.c:226] Operation failed: tag=0x7f108afcb360, error={"created":"@1498770706.636443702","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.637721291 29163 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.637791752 29163 completion_queue.c:226] Operation failed: tag=0x7f108afcb0c0, error={"created":"@1498770706.637702529","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.637836300 29163 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.637872014 29163 completion_queue.c:226] Operation failed: tag=0x7f108afcb2f0, error={"created":"@1498770706.637706809","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:11:46.641194536 29163 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:11:46.641241298 29163 completion_queue.c:226] Operation failed: tag=0x7f108afcb050, error={"created":"@1498770706.641178364","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.539497986 29251 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.539555939 29251 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cc90, error={"created":"@1498771717.539483236","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.540536617 
29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.540601626 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c910, error={"created":"@1498771717.540517372","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.540647559 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.540679773 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9cd70, error={"created":"@1498771717.540521809","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.541893786 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.541943420 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9ce50, error={"created":"@1498771717.541871189","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.541982533 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.542009741 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c830, error={"created":"@1498771717.541874944","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.542044730 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.542080406 29265 completion_queue.c:226] Operation failed: tag=0x7f10bbd9c980, error={"created":"@1498771717.541878692","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} E0629 14:28:37.543488271 29265 channel_connectivity.c:138] watch_completion_error: "Cancelled" E0629 14:28:37.543534201 29265 
completion_queue.c:226] Operation failed: tag=0x7f10bbd9cad0, error={"created":"@1498771717.543473445","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.c","file_line":145} <_Rendezvous of RPC that terminated with (StatusCode.UNAVAILABLE, Failed parsing HTTP/2)>

    What did you expect to see?

    The streaming RPC to succeed.

    What did you see instead?

    The streaming RPC failed with the UNAVAILABLE error above.

    Status: Requires Reporter Clarification Type: Bug 
    opened by kahuang 54
  • ClientConn is inflexible for client-side LB

    ClientConn is inflexible for client-side LB

    Client side LB was being discussed here: https://groups.google.com/forum/#!searchin/grpc-io/loadbalancing/grpc-io/yqB8sNNHeoo/0Mfu4b2cdaUJ

    We've been considering using GRPC for our new MicroService stack. We are using Etcd+SkyDNS for DNS SRV based service discovery and would like to leverage that for RR-like RPC load balancing between backends.

    However, it seems that the current ClientConn is fairly "single-homed". I thought about implementing an LbClientConn that would aggregate multiple ClientConns, but all the auto-generated code takes the concrete ClientConn struct and not a swappable interface.

    Are you planning on doing client-side LB anytime soon? Or maybe ideas or hints on how to make an early stab at it?

    opened by mwitkow 51
  • Instrumentation hooks

    Instrumentation hooks

    We're currently experimenting with GRPC and wondering how we'll monitor the client code/server code dispatch using Prometheus metrics (should look familiar ;)

    I've been looking for a place in the grpc-go to be able to hook up gathering of ServiceName, MethodName, bytes, latency data, and found none.

    Reading upon the thread in #131 about RPC interceptors, it is suggested to add the instrumentation in our Application Code (a.k.a. the code implementing the auto-generated Proto interfaces). I see the point about not cluttering grpc-go implementation and being implementation agnostic.

    However, adding instrumentation into Application Code means that we need to either:

    a) add a lot of repeatable code inside Application Code to handle instrumentation
    b) use the callFoo pattern described in #131 [only applicable to Client]
    c) add a thin implementation of each Proto-generated interface that wraps the "real" Proto-generated method calls with metrics [only applicable to Client]

    There are downsides to each solution though:

    a) leads to a lot of clutter and copy-paste errors, some of which will be omitted or badly done
    b) means that we lose the best (IMHO) feature of Proto-generated interfaces: the "natural" syntax that allows for easy mocking in unit tests (through injection of the Proto-generated interface); it is also only applicable on the Client side
    c) is very tedious, because each time we re-generate the Proto (add a method or a service) we need to manually copy-paste some boilerplate. This would be a huge drag on our coding workflow, since we really want to rely on Proto-generated code as much as possible. It is also only applicable on the Client side.

    I think the cleanest solution would be a pluggable set of callbacks on pre-call/post-call on client/server that would grant access to ServiceName, MethodName and RpcContext (provided the latter exposes stats about bytes transferred/start time of the call). This would allow people to plug in an instrumentation mechanism of their choice (statsd, Grafana, Prometheus), and shouldn't have the performance impact that the interceptors described in #131 could have had (the double serialization/deserialization).

    Having seen how amazingly useful RPC instrumentation was inside Google, I'm sure you've been thinking about solving this in gRPC, and I'm curious to know what you're planning :)

    opened by mwitkow 45
  • Access to TLS client certificate

    Access to TLS client certificate

    I can't see any way for an RPC method to authenticate a client based on a TLS certificate.

    An example program where an RPC method echoes the client TLS certificate would be great.

    opened by tv42 44
  • Document how to use ServeHTTP

    Document how to use ServeHTTP

    Now that #75 is fixed (via #514), let's add examples on how to use ServeHTTP. The examples were removed from earlier versions of #514 to reduce the size of that change.

    First, I'd like to get #545 submitted, to clean up the testdata files, before fixing this issue would otherwise make it worse.

    /cc @iamqizhao

    Type: Documentation 
    opened by bradfitz 34
  • Unexpected transport closing: too many pings from client

    Unexpected transport closing: too many pings from client

    What version of gRPC are you using?

    583a6303969ea5075e9bd1dc4b75805dfe66989a

    What version of Go are you using (go version)?

    1.10

    What operating system (Linux, Windows, …) and version?

    Linux AMD64, Kernel 4.10

    What did you do?

    When I have the server configured with GZIP compression as so:

    gzcp := grpc.NewGZIPCompressor()
    grpcServer := grpc.NewServer(grpc.RPCCompressor(gzcp))
    

    Then when serving thousands of concurrent requests a second, clients will occasionally be disconnected with

    rpc error: code = Unavailable desc = transport is closing
    

    I see no errors from the server, and both the client and server are far from overloaded (<10% CPU usage, etc.). Not all clients are affected at once; it will just be one connection which gets this error.

    While trying to debug this, I disabled GZIP compression so I could more easily look at packet captures. I am unable to reproduce this error once the GZIP compressor is no longer in use.

    This issue is mostly to ask what the best way to proceed with diagnosing the problem is, or if there are any reasons why having a compressor would change the behavior of the system (aside from CPU usage which I don't think is a problem).

    P1 Type: Bug 
    opened by immesys 33
  • grpc: cleanup parse target and authority tests

    grpc: cleanup parse target and authority tests

    This PR is preparation for making grpcutil.ParseTarget more RFC 3986 compliant. Testing grpcutil.ParseTarget from the ClientConn instead of as a unit-test makes more sense since target parsing has more context around it (like whether a custom dialer has a been configured, what is the configured default scheme etc).

    These tests will change slightly when we change the parsing logic, but moving them out now will make those changes more apparent.

    Addresses https://github.com/grpc/grpc-go/issues/4717

    RELEASE NOTES: N/A

    Type: Testing 
    opened by easwars 0
  • xds: Added validations for HCM to support xDS RBAC Filter

    xds: Added validations for HCM to support xDS RBAC Filter

    This implements the paragraph in A41: "New validation should occur for HttpConnectionManager to allow equating direct_remote_ip and remote_ip. If the implementation does not distinguish between these fields, then xff_num_trusted_hops must be unset or zero and original_ip_detection_extensions must be empty. If either field has an incorrect value, the Listener must be NACKed. For simplicity, this behavior applies independent of the Listener type (both client-side and server-side)."

    RELEASE NOTES: None

    Type: Feature 
    opened by zasweq 0
  • transport: fix transparent retries when per-RPC credentials are in use

    transport: fix transparent retries when per-RPC credentials are in use

    Fixes a regression caused by #3677

    RELEASE NOTES:

    • transport: fix transparent retries when per-RPC credentials are in use
    Type: Bug 
    opened by dfawley 0
  • advancedtls: add revocation support to client/server options

    advancedtls: add revocation support to client/server options

    This PR adds the revocation configs to the client/server options, and enables the checks if they are specified. With the revocation check, advanced TLS users can detect whether the certs sent from the peer are in a good state or not. The revocation information is very useful when certain certificates are revoked by the CA.

    RELEASE NOTES: add certificate revocation(CRL) check support to advancedtls

    Type: API Change Type: Security 
    opened by ZhenLian 1
  • Errors in `operateHeaders` should count stream ID for the purposes of new stream ID validation

    Errors in `operateHeaders` should count stream ID for the purposes of new stream ID validation

    The following max stream ID check / update:

    https://github.com/grpc/grpc-go/blob/b186ee8975f3c69bc36333a99fc82d1388977012/internal/transport/http2_server.go#L460-L469

    happens after other stream validation, and will be skipped if there are HTTP/gRPC errors. This should be moved up, as the stream ID should not be reused in the face of any of these errors.

    Type: Bug 
    opened by dfawley 0
  • Flaky test: Test/ClientSideAffinitySanityCheck

    Flaky test: Test/ClientSideAffinitySanityCheck

    https://github.com/grpc/grpc-go/pull/4777/checks?check_run_id=3616120012

    --- FAIL: Test (3.38s)
        --- FAIL: Test/ClientSideAffinitySanityCheck (0.12s)
            tlogger.go:101: server.go:89 [xds-e2e] Created new snapshot cache...  (t=+59.701µs)
            tlogger.go:101: server.go:103 [xds-e2e] Registered Aggregated Discovery Service (ADS)...  (t=+564.21µs)
            tlogger.go:101: server.go:107 [xds-e2e] xDS management server serving at: 127.0.0.1:33759...  (t=+653.912µs)
            tlogger.go:101: server.go:157 [xds-e2e] Created new resource snapshot...  (t=+2.451043ms)
            tlogger.go:101: server.go:163 [xds-e2e] Updated snapshot cache with resource snapshot...  (t=+2.503044ms)
            tlogger.go:101: clientconn.go:252 [core] parsed scheme: "xds"  (t=+2.578045ms)
            tlogger.go:101: xds_resolver.go:72 [xds] [xds-resolver 0xc000703300] Creating resolver for target: {Scheme:xds Authority: Endpoint:my-service-client-side-xds}  (t=+2.652847ms)
            tlogger.go:101: bootstrap.go:282 [xds] [xds-bootstrap] Bootstrap config for creating xds-client: {
                  "BalancerName": "127.0.0.1:33759",
                  "Creds": {},
                  "TransportAPI": 1,
                  "NodeProto": {
                    "id": "54bca118-4cdc-4751-a839-b59c21a88ada",
                    "user_agent_name": "gRPC Go",
                    "UserAgentVersionType": {
                      "UserAgentVersion": "1.41.0-dev"
                    },
                    "client_features": [
                      "envoy.lb.does_not_support_overprovisioning"
                    ]
                  },
                  "CertProviderConfigs": {
                    "client-side-certificate-provider-instance": {},
                    "server-side-certificate-provider-instance": {}
                  },
                  "ServerListenerResourceNameTemplate": "grpc/server?xds.resource.listening_address=%s"
                }  (t=+8.198843ms)
            tlogger.go:101: clientconn.go:252 [core] parsed scheme: ""  (t=+8.317345ms)
            tlogger.go:101: clientconn.go:258 [core] scheme "" not registered, fallback to default scheme  (t=+8.389946ms)
            tlogger.go:101: resolver_conn_wrapper.go:100 [core] ccResolverWrapper: sending update to cc: {[{127.0.0.1:33759  <nil> 0 <nil>}] <nil> <nil>}  (t=+8.508248ms)
            tlogger.go:101: clientconn.go:728 [core] ClientConn switching balancer to "pick_first"  (t=+8.558049ms)
            tlogger.go:101: clientconn.go:748 [core] Channel switches to new LB policy "pick_first"  (t=+8.659451ms)
            tlogger.go:101: client.go:480 [xds] [xds-client 0xc00014c600] Created ClientConn to xDS management server: 127.0.0.1:33759  (t=+8.860854ms)
            tlogger.go:101: clientconn.go:1142 [core] Subchannel Connectivity change to CONNECTING  (t=+9.020257ms)
            tlogger.go:101: clientconn.go:1253 [core] Subchannel picks a new address "127.0.0.1:33759" to connect  (t=+9.092158ms)
            tlogger.go:101: picker_wrapper.go:161 [core] blockingPicker: the picked transport is not ready, loop back to repick  (t=+9.866372ms)
            tlogger.go:101: clientconn.go:1142 [core] Subchannel Connectivity change to READY  (t=+10.013774ms)
            tlogger.go:101: clientconn.go:436 [core] Channel Connectivity change to CONNECTING  (t=+10.262779ms)
            tlogger.go:101: clientconn.go:436 [core] Channel Connectivity change to READY  (t=+10.34998ms)
            tlogger.go:101: transport_helper.go:208 [xds] [xds-client 0xc00014c600] ADS stream created  (t=+10.482782ms)
            tlogger.go:101: client.go:492 [xds] [xds-client 0xc00014c600] Created  (t=+11.665803ms)
            tlogger.go:101: xds_resolver.go:105 [xds] [xds-resolver 0xc000703300] Watch started on resource name my-service-client-side-xds with xds-client 0xc000462798  (t=+11.781005ms)
            tlogger.go:101: simple.go:361 [xds-e2e] respond type.googleapis.com/envoy.config.listener.v3.Listener[my-service-client-side-xds] version "" with version "1"  (t=+16.68769ms)
            tlogger.go:101: client.go:146 [xds] [xds-client 0xc00014c600] ADS response received, type: type.googleapis.com/envoy.config.listener.v3.Listener  (t=+17.186799ms)
            tlogger.go:101: xds.go:77 [xds] [xds-client 0xc00014c600] Resource with name: my-service-client-side-xds, type: *envoy_config_listener_v3.Listener, contains: {
                  "name": "my-service-client-side-xds",
                  "filterChains": [
                    {
                      "filters": [
                        {
                          "name": "envoy.filters.network.http_connection_manager",
                          "typedConfig": {
                            "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
                            "rds": {
                              "configSource": {
                                "ads": {
                
                                }
                              },
                              "routeConfigName": "route-my-service-client-side-xds"
                            },
                            "httpFilters": [
                              {
                                "name": "router",
                                "typedConfig": {
                                  "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
                                }
                              }
                            ]
                          }
                        }
                      ],
                      "name": "filter-chain-name"
                    }
                  ],
                  "apiListener": {
                    "apiListener": {
                      "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
                      "rds": {
                        "configSource": {
                          "ads": {
                
                          }
                        },
                        "routeConfigName": "route-my-service-client-side-xds"
                      },
                      "httpFilters": [
                        {
                          "name": "router",
                          "typedConfig": {
                            "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
                          }
                        }
                      ]
                    }
                  }
                }  (t=+19.439738ms)
            tlogger.go:101: transport_helper.go:340 [xds] [xds-client 0xc00014c600] Sending ACK for response type: ListenerResource, version: 1, nonce: 1  (t=+19.978047ms)
            tlogger.go:101: simple.go:313 [xds-e2e] open watch 1 for type.googleapis.com/envoy.config.listener.v3.Listener[my-service-client-side-xds] from nodeID "54bca118-4cdc-4751-a839-b59c21a88ada", version "1"  (t=+20.742661ms)
            tlogger.go:101: watch_service.go:85 [xds] [xds-resolver 0xc000703300] received LDS update: {
                  "RouteConfigName": "route-my-service-client-side-xds",
                  "InlineRouteConfig": null,
                  "MaxStreamDuration": 0,
                  "HTTPFilters": [
                    {
                      "Name": "router",
                      "Filter": {},
                      "Config": {}
                    }
                  ],
                  "InboundListenerCfg": null,
                  "Raw": {
                    "type_url": "type.googleapis.com/envoy.config.listener.v3.Listener",
                    "value": "ChpteS1zZXJ2aWNlLWNsaWVudC1zaWRlLXhkcxqpAhqTAgotZW52b3kuZmlsdGVycy5uZXR3b3JrLmh0dHBfY29ubmVjdGlvbl9tYW5hZ2VyIuEBCmV0eXBlLmdvb2dsZWFwaXMuY29tL2Vudm95LmV4dGVuc2lvbnMuZmlsdGVycy5uZXR3b3JrLmh0dHBfY29ubmVjdGlvbl9tYW5hZ2VyLnYzLkh0dHBDb25uZWN0aW9uTWFuYWdlchJ4Kk4KBnJvdXRlciJECkJ0eXBlLmdvb2dsZWFwaXMuY29tL2Vudm95LmV4dGVuc2lvbnMuZmlsdGVycy5odHRwLnJvdXRlci52My5Sb3V0ZXIaJgoCGgASIHJvdXRlLW15LXNlcnZpY2UtY2xpZW50LXNpZGUteGRzOhFmaWx0ZXItY2hhaW4tbmFtZZoB5AEK4QEKZXR5cGUuZ29vZ2xlYXBpcy5jb20vZW52b3kuZXh0ZW5zaW9ucy5maWx0ZXJzLm5ldHdvcmsuaHR0cF9jb25uZWN0aW9uX21hbmFnZXIudjMuSHR0cENvbm5lY3Rpb25NYW5hZ2VyEngqTgoGcm91dGVyIkQKQnR5cGUuZ29vZ2xlYXBpcy5jb20vZW52b3kuZXh0ZW5zaW9ucy5maWx0ZXJzLmh0dHAucm91dGVyLnYzLlJvdXRlchomCgIaABIgcm91dGUtbXktc2VydmljZS1jbGllbnQtc2lkZS14ZHM="
                  }
                }, err: <nil>  (t=+21.972982ms)
            tlogger.go:101: simple.go:361 [xds-e2e] respond type.googleapis.com/envoy.config.route.v3.RouteConfiguration[route-my-service-client-side-xds] version "" with version "1"  (t=+22.523591ms)
            tlogger.go:101: client.go:146 [xds] [xds-client 0xc00014c600] ADS response received, type: type.googleapis.com/envoy.config.route.v3.RouteConfiguration  (t=+22.9905ms)
            tlogger.go:101: xds.go:313 [xds] [xds-client 0xc00014c600] Resource with name: route-my-service-client-side-xds, type: *envoy_config_route_v3.RouteConfiguration, contains: {
                  "name": "route-my-service-client-side-xds",
                  "virtualHosts": [
                    {
                      "domains": [
                        "my-service-client-side-xds"
                      ],
                      "routes": [
                        {
                          "match": {
                            "prefix": "/"
                          },
                          "route": {
                            "cluster": "cluster-my-service-client-side-xds",
                            "hashPolicy": [
                              {
                                "header": {
                                  "headerName": "session_id"
                                },
                                "terminal": true
                              }
                            ]
                          }
                        }
                      ]
                    }
                  ]
                }.  (t=+25.721247ms)
            tlogger.go:101: transport_helper.go:340 [xds] [xds-client 0xc00014c600] Sending ACK for response type: RouteConfigResource, version: 1, nonce: 2  (t=+26.574462ms)
            tlogger.go:101: watch_service.go:166 [xds] [xds-resolver 0xc000703300] received RDS update: {
                  "VirtualHosts": [
                    {
                      "Domains": [
                        "my-service-client-side-xds"
                      ],
                      "Routes": [
                        {
                          "Path": null,
                          "Prefix": "/",
                          "Regex": null,
                          "CaseInsensitive": false,
                          "Headers": null,
                          "Fraction": null,
                          "HashPolicies": [
                            {
                              "HashPolicyType": 0,
                              "Terminal": true,
                              "HeaderName": "session_id",
                              "Regex": null,
                              "RegexSubstitution": ""
                            }
                          ],
                          "WeightedClusters": {
                            "cluster-my-service-client-side-xds": {
                              "Weight": 1,
                              "HTTPFilterConfigOverride": null
                            }
                          },
                          "MaxStreamDuration": null,
                          "HTTPFilterConfigOverride": null,
                          "RetryConfig": null,
                          "RouteAction": 1
                        }
                      ],
                      "HTTPFilterConfigOverride": null,
                      "RetryConfig": null
                    }
                  ],
                  "Raw": {
                    "type_url": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
                    "value": "CiByb3V0ZS1teS1zZXJ2aWNlLWNsaWVudC1zaWRlLXhkcxJbEhpteS1zZXJ2aWNlLWNsaWVudC1zaWRlLXhkcxo9CgMKAS8SNnoQIAEKDAoKc2Vzc2lvbl9pZAoiY2x1c3Rlci1teS1zZXJ2aWNlLWNsaWVudC1zaWRlLXhkcw=="
                  }
                }, err: <nil>  (t=+29.687616ms)
            tlogger.go:101: xds_resolver.go:182 [xds] [xds-resolver 0xc000703300] Received update on resource my-service-client-side-xds from xds-client 0xc000462798, generated service config: {
                  "loadBalancingConfig": [
                    {
                      "xds_cluster_manager_experimental": {
                        "children": {
                          "cluster-my-service-client-side-xds": {
                            "childPolicy": [
                              {
                                "cds_experimental": {
                                  "cluster": "cluster-my-service-client-side-xds"
                                }
                              }
                            ]
                          }
                        }
                      }
                    }
                  ]
                }  (t=+30.094423ms)
            tlogger.go:101: resolver_conn_wrapper.go:100 [core] ccResolverWrapper: sending update to cc: {[] 0xc00044ec00 0xc0003f40a8}  (t=+30.453829ms)
            tlogger.go:101: clientconn.go:728 [core] ClientConn switching balancer to "xds_cluster_manager_experimental"  (t=+30.51833ms)
            tlogger.go:101: clientconn.go:748 [core] Channel switches to new LB policy "xds_cluster_manager_experimental"  (t=+30.569231ms)
            tlogger.go:101: clustermanager.go:51 [xds] [xds-cluster-manager-lb 0xc00044f520] Created  (t=+30.666933ms)
            tlogger.go:101: clustermanager.go:119 [xds] [xds-cluster-manager-lb 0xc00044f520] update with config {
                  "LoadBalancingConfig": null,
                  "Children": {
                    "cluster-my-service-client-side-xds": {
                      "ChildPolicy": [
                        {
                          "cds_experimental": {
                            "LoadBalancingConfig": null,
                            "Cluster": "cluster-my-service-client-side-xds"
                          }
                        }
                      ]
                    }
                  }
                }, resolver state {Addresses:[] ServiceConfig:0xc00044ec00 Attributes:0xc0003f40a8}  (t=+30.879937ms)
            tlogger.go:101: cdsbalancer.go:84 [xds] [cds-lb 0xc0000fc3c0] Created  (t=+30.971138ms)
            tlogger.go:101: cdsbalancer.go:95 [xds] [cds-lb 0xc0000fc3c0] xDS credentials in use: false  (t=+31.011339ms)
            tlogger.go:101: balancergroup.go:100 [xds] [xds-cluster-manager-lb 0xc00044f520] Created child policy 0xc0000fc3c0 of type cds_experimental  (t=+31.09684ms)
            tlogger.go:101: cdsbalancer.go:469 [xds] [cds-lb 0xc0000fc3c0] Received update from resolver, balancer config: {
                  "LoadBalancingConfig": null,
                  "Cluster": "cluster-my-service-client-side-xds"
                }  (t=+31.181742ms)
            tlogger.go:101: cluster_handler.go:160 [xds] [cds-lb 0xc0000fc3c0] CDS watch started on cluster-my-service-client-side-xds  (t=+31.276943ms)
            tlogger.go:101: simple.go:313 [xds-e2e] open watch 2 for type.googleapis.com/envoy.config.route.v3.RouteConfiguration[route-my-service-client-side-xds] from nodeID "54bca118-4cdc-4751-a839-b59c21a88ada", version "1"  (t=+32.041057ms)
            tlogger.go:101: simple.go:361 [xds-e2e] respond type.googleapis.com/envoy.config.cluster.v3.Cluster[cluster-my-service-client-side-xds] version "" with version "1"  (t=+32.125858ms)
            tlogger.go:101: client.go:146 [xds] [xds-client 0xc00014c600] ADS response received, type: type.googleapis.com/envoy.config.cluster.v3.Cluster  (t=+32.468664ms)
            tlogger.go:101: xds.go:649 [xds] [xds-client 0xc00014c600] Resource with name: cluster-my-service-client-side-xds, type: *envoy_config_cluster_v3.Cluster, contains: {
                  "name": "cluster-my-service-client-side-xds",
                  "type": "EDS",
                  "edsClusterConfig": {
                    "edsConfig": {
                      "ads": {
                
                      }
                    },
                    "serviceName": "endpoints-my-service-client-side-xds"
                  },
                  "lbPolicy": "RING_HASH"
                }  (t=+33.288978ms)
            tlogger.go:101: transport_helper.go:340 [xds] [xds-client 0xc00014c600] Sending ACK for response type: ClusterResource, version: 1, nonce: 3  (t=+33.688085ms)
            tlogger.go:101: cdsbalancer.go:281 [xds] [cds-lb 0xc0000fc3c0] Watch update from xds-client 0xc000462798, content: [
                  {
                    "ClusterType": 0,
                    "ClusterName": "cluster-my-service-client-side-xds",
                    "EDSServiceName": "endpoints-my-service-client-side-xds",
                    "EnableLRS": false,
                    "SecurityCfg": null,
                    "MaxRequests": null,
                    "DNSHostName": "",
                    "PrioritizedClusterNames": null,
                    "LBPolicy": {
                      "MinimumRingSize": 1024,
                      "MaximumRingSize": 8388608
                    },
                    "Raw": {
                      "type_url": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
                      "value": "CiJjbHVzdGVyLW15LXNlcnZpY2UtY2xpZW50LXNpZGUteGRzGioKAhoAEiRlbmRwb2ludHMtbXktc2VydmljZS1jbGllbnQtc2lkZS14ZHMwAhAD"
                    }
                  }
                ], security config: null  (t=+34.374997ms)
            tlogger.go:101: clusterresolver.go:80 [xds] [xds-cluster-resolver-lb 0xc0009cad80] Created  (t=+34.473599ms)
            tlogger.go:101: cdsbalancer.go:307 [xds] [cds-lb 0xc0000fc3c0] Created child policy 0xc0009cad80 of type cluster_resolver_experimental  (t=+34.562ms)
            tlogger.go:101: clusterresolver.go:158 [xds] [xds-cluster-resolver-lb 0xc0009cad80] Receive update from resolver, balancer config: {
                  "discoveryMechanisms": [
                    {
                      "cluster": "cluster-my-service-client-side-xds",
                      "edsServiceName": "endpoints-my-service-client-side-xds"
                    }
                  ],
                  "xdsLbPolicy": [
                    {
                      "ring_hash_experimental": {
                        "minRingSize": 1024,
                        "maxRingSize": 8388608
                      }
                    }
                  ]
                }  (t=+34.796105ms)
            tlogger.go:101: resource_resolver.go:226 [xds] [xds-cluster-resolver-lb 0xc0009cad80] EDS watch started on endpoints-my-service-client-side-xds  (t=+34.858606ms)
            tlogger.go:101: simple.go:313 [xds-e2e] open watch 3 for type.googleapis.com/envoy.config.cluster.v3.Cluster[cluster-my-service-client-side-xds] from nodeID "54bca118-4cdc-4751-a839-b59c21a88ada", version "1"  (t=+35.421615ms)
            tlogger.go:101: simple.go:361 [xds-e2e] respond type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment[endpoints-my-service-client-side-xds] version "" with version "1"  (t=+35.497017ms)
            tlogger.go:101: client.go:146 [xds] [xds-client 0xc00014c600] ADS response received, type: type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment  (t=+35.833423ms)
            tlogger.go:101: xds.go:1059 [xds] [xds-client 0xc00014c600] Resource with name: endpoints-my-service-client-side-xds, type: *envoy_config_endpoint_v3.ClusterLoadAssignment, contains: {
                  "clusterName": "endpoints-my-service-client-side-xds",
                  "endpoints": [
                    {
                      "locality": {
                        "subZone": "subzone"
                      },
                      "lbEndpoints": [
                        {
                          "endpoint": {
                            "address": {
                              "socketAddress": {
                                "address": "localhost",
                                "portValue": 32801
                              }
                            }
                          }
                        }
                      ],
                      "loadBalancingWeight": 1
                    }
                  ]
                }  (t=+45.940598ms)
            tlogger.go:101: transport_helper.go:340 [xds] [xds-client 0xc00014c600] Sending ACK for response type: EndpointsResource, version: 1, nonce: 4  (t=+46.299204ms)
            tlogger.go:101: clusterresolver.go:190 [xds] [xds-cluster-resolver-lb 0xc0009cad80] resource update: [
                  {}
                ]  (t=+46.696511ms)
            tlogger.go:101: balancer.go:65 [xds] [priority-lb 0xc0009ffb00] Created  (t=+46.811213ms)
            tlogger.go:101: configbuilder.go:280 [xds] xds lb policy is "ring_hash_experimental", building config with ring_hash  (t=+46.883814ms)
            tlogger.go:101: simple.go:313 [xds-e2e] open watch 4 for type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment[endpoints-my-service-client-side-xds] from nodeID "54bca118-4cdc-4751-a839-b59c21a88ada", version "1"  (t=+47.296522ms)
            tlogger.go:101: clusterresolver.go:222 [xds] [xds-cluster-resolver-lb 0xc0009cad80] build balancer config: {
                  "children": {
                    "priority-0-0": {
                      "config": [
                        {
                          "xds_cluster_impl_experimental": {
                            "cluster": "cluster-my-service-client-side-xds",
                            "edsServiceName": "endpoints-my-service-client-side-xds",
                            "childPolicy": [
                              {
                                "ring_hash_experimental": {
                                  "minRingSize": 1024,
                                  "maxRingSize": 8388608
                                }
                              }
                            ]
                          }
                        }
                      ],
                      "ignoreReresolutionRequests": true
                    }
                  },
                  "priorities": [
                    "priority-0-0"
                  ]
                }  (t=+49.317757ms)
            tlogger.go:101: balancer.go:112 [xds] [priority-lb 0xc0009ffb00] Received update from resolver, balancer config: {
                  "children": {
                    "priority-0-0": {
                      "config": [
                        {
                          "xds_cluster_impl_experimental": {
                            "cluster": "cluster-my-service-client-side-xds",
                            "edsServiceName": "endpoints-my-service-client-side-xds",
                            "childPolicy": [
                              {
                                "ring_hash_experimental": {
                                  "minRingSize": 1024,
                                  "maxRingSize": 8388608
                                }
                              }
                            ]
                          }
                        }
                      ],
                      "ignoreReresolutionRequests": true
                    }
                  },
                  "priorities": [
                    "priority-0-0"
                  ]
                }  (t=+50.521678ms)
            tlogger.go:101: balancer_priority.go:98 [xds] [priority-lb 0xc0009ffb00] switching to ("priority-0-0", 0) in syncPriority  (t=+50.624579ms)
            tlogger.go:101: clusterimpl.go:72 [xds] [xds-cluster-impl-lb 0xc000933a00] Created  (t=+50.806182ms)
            tlogger.go:101: balancergroup.go:100 [xds] [priority-lb 0xc0009ffb00] Created child policy 0xc000933a00 of type xds_cluster_impl_experimental  (t=+50.852183ms)
            tlogger.go:101: clusterimpl.go:223 [xds] [xds-cluster-impl-lb 0xc000933a00] Received update from resolver, balancer config: {
                  "cluster": "cluster-my-service-client-side-xds",
                  "edsServiceName": "endpoints-my-service-client-side-xds",
                  "childPolicy": [
                    {
                      "ring_hash_experimental": {
                        "minRingSize": 1024,
                        "maxRingSize": 8388608
                      }
                    }
                  ]
                }  (t=+51.024586ms)
            tlogger.go:101: ringhash.go:55 [xds] [ring-hash-lb 0xc0009ffe80] Created  (t=+51.094488ms)
            tlogger.go:101: ringhash.go:253 [xds] [ring-hash-lb 0xc0009ffe80] Received update from resolver, balancer config: {
                  "minRingSize": 1024,
                  "maxRingSize": 8388608
                }  (t=+51.991303ms)
            tlogger.go:101: balancergroup.go:469 [xds] [priority-lb 0xc0009ffb00] Balancer state update from locality priority-0-0, new state: {ConnectivityState:IDLE Picker:0xc0007677c0}  (t=+55.606866ms)
            tlogger.go:101: balancergroup.go:469 [xds] [xds-cluster-manager-lb 0xc00044f520] Balancer state update from locality cluster-my-service-client-side-xds, new state: {ConnectivityState:IDLE Picker:0xc0007677c0}  (t=+56.314778ms)
            tlogger.go:101: balancerstateaggregator.go:210 [xds] [xds-cluster-manager-lb 0xc00044f520] Child pickers: map[cluster-my-service-client-side-xds:picker:0xc0007677c0,state:IDLE,stateToAggregate:IDLE]  (t=+56.41848ms)
            tlogger.go:101: clientconn.go:1142 [core] Subchannel Connectivity change to CONNECTING  (t=+56.591783ms)
            tlogger.go:101: clientconn.go:1253 [core] Subchannel picks a new address "localhost:32801" to connect  (t=+56.658684ms)
            tlogger.go:101: ringhash.go:318 [xds] [ring-hash-lb 0xc0009ffe80] handle SubConn state change: 0xc0006172e0, CONNECTING  (t=+56.954489ms)
            tlogger.go:101: balancergroup.go:469 [xds] [priority-lb 0xc0009ffb00] Balancer state update from locality priority-0-0, new state: {ConnectivityState:CONNECTING Picker:0xc0009fb6d0}  (t=+57.226394ms)
            tlogger.go:101: clientconn.go:1142 [core] Subchannel Connectivity change to READY  (t=+58.788121ms)
            tlogger.go:101: ringhash.go:318 [xds] [ring-hash-lb 0xc0009ffe80] handle SubConn state change: 0xc0006172e0, READY  (t=+58.899823ms)
            tlogger.go:101: balancergroup.go:469 [xds] [priority-lb 0xc0009ffb00] Balancer state update from locality priority-0-0, new state: {ConnectivityState:READY Picker:0xc0009fbb30}  (t=+58.979524ms)
            tlogger.go:101: clientconn.go:436 [core] Channel Connectivity change to TRANSIENT_FAILURE  (t=+59.113627ms)
            tlogger.go:101: balancergroup.go:469 [xds] [xds-cluster-manager-lb 0xc00044f520] Balancer state update from locality cluster-my-service-client-side-xds, new state: {ConnectivityState:CONNECTING Picker:0xc0009fb6d0}  (t=+59.195028ms)
            tlogger.go:101: balancerstateaggregator.go:210 [xds] [xds-cluster-manager-lb 0xc00044f520] Child pickers: map[cluster-my-service-client-side-xds:picker:0xc0009fb6d0,state:CONNECTING,stateToAggregate:CONNECTING]  (t=+59.268429ms)
            tlogger.go:101: clientconn.go:436 [core] Channel Connectivity change to CONNECTING  (t=+59.361331ms)
            tlogger.go:96: picker_wrapper.go:147 [core] subconn returned from pick is not *acBalancerWrapper  (t=+59.456633ms)
            tlogger.go:101: balancergroup.go:469 [xds] [xds-cluster-manager-lb 0xc00044f520] Balancer state update from locality cluster-my-service-client-side-xds, new state: {ConnectivityState:READY Picker:0xc0009fbb30}  (t=+59.564735ms)
            tlogger.go:101: balancerstateaggregator.go:210 [xds] [xds-cluster-manager-lb 0xc00044f520] Child pickers: map[cluster-my-service-client-side-xds:picker:0xc0009fbb30,state:READY,stateToAggregate:READY]  (t=+59.643736ms)
            tlogger.go:101: clientconn.go:436 [core] Channel Connectivity change to READY  (t=+60.190345ms)
            tlogger.go:101: clientconn.go:436 [core] Channel Connectivity change to SHUTDOWN  (t=+61.531869ms)
            tlogger.go:101: cluster_handler.go:163 [xds] [cds-lb 0xc0000fc3c0] CDS watch canceled on cluster-my-service-client-side-xds  (t=+62.474185ms)
            tlogger.go:101: resource_resolver.go:243 [xds] [xds-cluster-resolver-lb 0xc0009cad80] EDS watch canceled on endpoints-my-service-client-side-xds  (t=+62.588787ms)
            tlogger.go:101: simple.go:313 [xds-e2e] open watch 5 for type.googleapis.com/envoy.config.cluster.v3.Cluster[] from nodeID "54bca118-4cdc-4751-a839-b59c21a88ada", version "1"  (t=+63.3542ms)
            tlogger.go:101: clusterimpl.go:335 [xds] [xds-cluster-impl-lb 0xc000933a00] Shutdown  (t=+63.534103ms)
            tlogger.go:101: clusterresolver.go:310 [xds] [xds-cluster-resolver-lb 0xc0009cad80] Shutdown  (t=+63.589104ms)
            tlogger.go:101: cdsbalancer.go:408 [xds] [cds-lb 0xc0000fc3c0] Shutdown  (t=+63.713207ms)
            tlogger.go:101: clustermanager.go:136 [xds] [xds-cluster-manager-lb 0xc00044f520] Shutdown  (t=+63.786608ms)
            tlogger.go:101: xds_resolver.go:108 [xds] [xds-resolver 0xc000703300] Watch cancel on resource name my-service-client-side-xds with xds-client 0xc000462798  (t=+63.952511ms)
            tlogger.go:101: clientconn.go:436 [core] Channel Connectivity change to SHUTDOWN  (t=+64.014512ms)
            tlogger.go:101: clientconn.go:1142 [core] Subchannel Connectivity change to SHUTDOWN  (t=+64.111213ms)
            tlogger.go:101: client.go:534 [xds] [xds-client 0xc00014c600] Shutdown  (t=+65.09003ms)
            tlogger.go:101: xds_resolver.go:274 [xds] [xds-resolver 0xc000703300] Shutdown  (t=+65.140031ms)
            tlogger.go:101: clientconn.go:1142 [core] Subchannel Connectivity change to SHUTDOWN  (t=+65.188932ms)
            tlogger.go:101: transport_helper.go:315 [xds] [xds-client 0xc00014c600] ADS stream is closed with error: xds: stream.Recv() failed: rpc error: code = Canceled desc = grpc: the client connection is closing  (t=+65.373435ms)
            tlogger.go:101: transport_helper.go:275 [xds] [xds-client 0xc00014c600] ADS request for {target: [], type: EndpointsResource, version: "1", nonce: "4"} failed: xds: stream.Send(version_info:"1"  node:{id:"54bca118-4cdc-4751-a839-b59c21a88ada"  user_agent_name:"gRPC Go"  user_agent_version:"1.41.0-dev"  client_features:"envoy.lb.does_not_support_overprovisioning"}  type_url:"type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment"  response_nonce:"4") failed: EOF  (t=+67.420771ms)
    FAIL
    FAIL	google.golang.org/grpc/xds/internal/test	8.998s
    
    P1 Type: Bug fixit 
    opened by easwars 3
  • xds: suppress redundant updates for LDS/RDS resources

    This PR adds equality checks and suppression logic for Listener and RouteConfiguration resources. Equality checks and suppression logic for other resources will be done in a follow-up PR.
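    The suppression pattern is to cache the last update delivered per resource and drop any update that is equal to the cached one. A minimal sketch, with reflect.DeepEqual standing in for the proto-aware equality check the real code would use:

    ```go
    package main

    import (
    	"fmt"
    	"reflect"
    )

    // watcher caches the last update delivered for each resource name
    // and suppresses updates that are deep-equal to the cached value.
    type watcher struct {
    	cache map[string]interface{}
    }

    // update returns true if the resource was delivered, false if it
    // was suppressed as redundant.
    func (w *watcher) update(name string, resource interface{}) bool {
    	if old, ok := w.cache[name]; ok && reflect.DeepEqual(old, resource) {
    		return false // unchanged: suppress
    	}
    	w.cache[name] = resource
    	return true
    }

    func main() {
    	w := &watcher{cache: make(map[string]interface{})}
    	fmt.Println(w.update("lds", "v1")) // true: first delivery
    	fmt.Println(w.update("lds", "v1")) // false: redundant, suppressed
    	fmt.Println(w.update("lds", "v2")) // true: resource changed
    }
    ```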

    RELEASE NOTES: N/A

    Type: Internal Cleanup 
    opened by easwars 0
  • xds: Transport layer logic specified in A41

    This PR adds the transport-layer logic specified in gRFC A41. Each line from the spec is referenced in a code comment. There is no official test for the eventual :authority header, but best-effort test coverage is included.

    RELEASE NOTES: None

    Type: Feature 
    opened by zasweq 0
  • metadata: make use of MD's Set, Append methods

    Make use of MD's public Set and Append methods. If the key spec needs to change in the future (e.g. all keys become upper case), only the Set, Append, and Delete methods need to be modified.
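    The point of routing writes through these methods is that key normalization lives in exactly one place. A simplified stand-in for metadata.MD that shows the pattern (not the real grpc type):

    ```go
    package main

    import (
    	"fmt"
    	"strings"
    )

    // md models grpc's metadata.MD: a multimap whose keys are
    // normalized by the accessor methods, never by callers.
    type md map[string][]string

    // Set replaces the values for key, lowercasing the key so the
    // normalization rule is encapsulated here.
    func (m md) Set(k string, vals ...string) {
    	m[strings.ToLower(k)] = vals
    }

    // Append adds values for key, applying the same normalization.
    func (m md) Append(k string, vals ...string) {
    	k = strings.ToLower(k)
    	m[k] = append(m[k], vals...)
    }

    func main() {
    	m := md{}
    	m.Set("User-Key", "a")      // stored under "user-key"
    	m.Append("USER-KEY", "b")   // appended to the same entry
    	fmt.Println(m["user-key"])  // [a b]
    }
    ```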

    RELEASE NOTES: None

    Type: Internal Cleanup 
    opened by huangchong94 0
Releases
  • v1.40.0 (Aug 11, 2021)

    Behavior Changes

    • balancer: client channel no longer connects to idle subchannels that are returned by the pickers; LB policy should call SubConn.Connect instead. (#4579)
      • This change is in line with existing documentation stating the balancer must call Connect on idle SubConns in order for them to connect, and is preparation for an upcoming change that transitions SubConns to the idle state when connections are lost. See https://pkg.go.dev/google.golang.org/grpc/balancer#SubConn for more details.

    Bug Fixes

    • transport: fail RPCs without HTTP status 200 (OK), according to the gRPC spec (#4474)
      • Special Thanks: @JNProtzman
    • binarylog: fail the Write() method if proto marshaling fails (#4582)
      • Special Thanks: @Jille
    • binarylog: exit the flusher goroutine upon closing the bufferedSink (#4583)
      • Special Thanks: @Jille

    New Features

    • metadata: add Delete method to MD to encapsulate lowercasing (#4549)
      • Special Thanks: @konradreiche
    • xds/cds: support logical DNS cluster and aggregated cluster (#4594)
    • stats: add stats.Begin.IsClientStream and IsServerStream to indicate the type of RPC invoked (#4533)
      • Special Thanks: @leviska

    Performance Improvements

    • server: improve performance when multiple interceptors are used (#4524)
      • Special Thanks: @amenzhinsky
    Source code(tar.gz)
    Source code(zip)
  • v1.39.1(Aug 5, 2021)

    • server: fix bug where a net.Conn is leaked if the connection is closed (io.EOF) immediately with no traffic (#4642)
    • transport: fix race in transport stream accessing s.recvCompress (#4627)
    Source code(tar.gz)
    Source code(zip)
  • v1.38.1(Jun 29, 2021)

  • v1.39.0(Jun 29, 2021)

    Behavior Changes

    • csds: return empty response if xds client is not set (#4505)
    • metadata: convert keys to lowercase in FromContext() (#4416)

    New Features

    • xds: add GetServiceInfo to GRPCServer (#4507)
      • Special Thanks: @amenzhinsky
    • xds: add test-only injection of xds config to client and server (#4476)
    • server: allow PreparedMsgs to work for server streams (#3480)
      • Special Thanks: @eafzali

    Performance Improvements

    • transport: remove decodeState from client & server to reduce allocations (#4423)
      • Special Thanks: @JNProtzman

    Bug Fixes

    • server: return UNIMPLEMENTED on receipt of malformed method name (#4464)
    • xds/rds: use 100 as default weighted cluster totalWeight instead of 0 (#4439)
      • Special Thanks: @alpha-baby
    • transport: unblock read throttling when controlbuf exits (#4447)
    • client: fix status code to return Unavailable for servers shutting down instead of Unknown (#4561)

    Documentation

    • doc: fix broken benchmark dashboard link in README.md (#4503)
      • Special Thanks: @laststem
    • example: improve hello world server with starting msg (#4468)
      • Special Thanks: @dkkb
    • client: Clarify that WaitForReady will block for CONNECTING channels (#4477)
      • Special Thanks: @evanj
    Source code(tar.gz)
    Source code(zip)
  • v1.38.0(May 19, 2021)

    API Changes

    • reflection: accept interface instead of grpc.Server struct in Register() (#4340)
    • resolver: add error return value from ClientConn.UpdateState (#4270)

    Behavior Changes

    • client: do not poll name resolver when errors or bad updates are reported (#4270)
    • transport: InTapHandle may return RPC status errors; no longer RST_STREAMs (#4365)

    New Features

    • client: propagate connection error causes to RPC status (#4311, #4316)
    • xds: support inline RDS resource from LDS response (#4299)
    • xds: server side support is now experimentally available
    • server: add ForceServerCodec() to set a custom encoding.Codec on the server (#4205)
      • Special Thanks: @ash2k

    Performance Improvements

    • metadata: reduce memory footprint in FromOutgoingContext (#4360)
      • Special Thanks: @irfansharif

    Bug Fixes

    • xds/balancergroup: fix rare memory leak after closing ClientConn (#4308)

    Documentation

    • examples: update xds examples for PSM security (#4256)
    • grpc: improve docs on StreamDesc (#4397)
    Source code(tar.gz)
    Source code(zip)
  • v1.37.1(May 11, 2021)

    • client: fix rare panic when shutting down client while receiving the first name resolver update (#4398)
    • client: fix leaked addrConn struct when addresses are updated (#4347)
    • xds/resolver: prevent panic when two LDS updates are received without RDS in between (#4327)
    Source code(tar.gz)
    Source code(zip)
  • v1.37.0(Apr 7, 2021)

    API Changes

    • balancer: Add UpdateAddresses() to balancer.ClientConn interface (#4215)
      • NOTICE: balancer.SubConn.UpdateAddresses() is now deprecated and will be REMOVED in gRPC-Go 1.39

    Behavior Changes

    • balancer/base: keep address attributes for pickers (#4253)
      • Special Thanks: @longXboy

    New Features

    • xds: add support for csds (#4226, #4217, #4243)
    • admin: create admin package for conveniently registering standard admin services (#4274)
    • xds: add support for HTTP filters (gRFC A39) (#4206, #4221)
    • xds: implement fault injection HTTP filter (A33) (#4236)
    • xds: enable timeout, circuit breaking, and fault injection by default (#4286)
    • xds: implement a priority based load balancer (#4070)
    • xds/creds: support all SAN matchers on client-side (#4246)

    Bug Fixes

    • xds: add env var protection for client-side security (#4247)
    • circuit breaking: update picker inline when there's a counter update (#4212)
    • server: fail RPCs without POST HTTP method (#4241)
    Source code(tar.gz)
    Source code(zip)
  • v1.36.1(Mar 25, 2021)

  • v1.35.1(Mar 12, 2021)

  • v1.34.2(Mar 12, 2021)

  • v1.36.0(Feb 24, 2021)

    New Features

    • xds bootstrap: support config content in env variable (#4153)
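With this feature the bootstrap JSON no longer has to live in a file referenced by GRPC_XDS_BOOTSTRAP; the content can be supplied inline through the GRPC_XDS_BOOTSTRAP_CONFIG environment variable. A minimal sketch — the server URI and node ID below are placeholders, not values from this repo:

```shell
# Instead of GRPC_XDS_BOOTSTRAP=/path/to/bootstrap.json, pass the content inline:
export GRPC_XDS_BOOTSTRAP_CONFIG='{
  "xds_servers": [{
    "server_uri": "xds-server.example.com:443",
    "channel_creds": [{"type": "google_default"}]
  }],
  "node": {"id": "example-node-id"}
}'
```

This is convenient in containerized deployments where injecting an environment variable is easier than mounting a config file.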

    Bug Fixes

    • encoding/proto: do not panic when types do not match (#4218)

    Documentation

    • status: document nil error handling of FromError (#4196)
      • Special Thanks: @gauravgahlot
    Source code(tar.gz)
    Source code(zip)
  • v1.33.3(Jan 11, 2021)

    • xds client: Updated v3 type for http connection manager (#4137)
    • lrs: use JSON for locality's String representation (#4135)
    • eds/lrs: handle nil when LRS is disabled (#4086)
    Source code(tar.gz)
    Source code(zip)
  • v1.34.1(Jan 11, 2021)

    • xds client: Updated v3 type for http connection manager (#4137)
    • lrs: use JSON for locality's String representation (#4135)
    • eds/lrs: handle nil when LRS is disabled (#4086)
    • client: fix "unix" scheme handling for some corner cases (#4021)
    Source code(tar.gz)
    Source code(zip)
  • v1.35.0(Jan 13, 2021)

    Behavior Changes

    • roundrobin: strip attributes from addresses (#4024)
    • balancer: set RPC metadata in address attributes, instead of Metadata field (#4041)

    New Features

    • support unix-abstract schema (#4079)
      • Special Thanks: @resec
    • xds: implement experimental RouteAction timeout support (#4116)
    • xds: Implement experimental circuit breaking support. (#4050)

    Bug Fixes

    • xds: server_features should be a child of xds_servers and not a sibling (#4087)
    • xds: NACK more invalid RDS responses (#4120)
    Source code(tar.gz)
    Source code(zip)
  • v1.34.0(Dec 2, 2020)

    New Features

    • client: implement support for "unix" resolver scheme (#3890)
    • rds: allow case_insensitive path matching (#3997)
    • credentials/insecure: implement insecure credentials. (#3964)
    • lrs: handle multiple clusters in LRS stream (#3935)

    Performance Improvements

    • encoding/proto: simplify & optimize proto codec (#3958)

    Bug Fixes

    • internal/transport: fix a bug causing -bin metadata to be incorrectly encoded (#3985)
      • Special Thanks: @dntj
    • grpclb: consider IDLE SubConns as connecting (#4031)
    • grpclb: send custom user-agent (#4011)
    • client: use "localhost:port" as authority if target is ":port" (#4017)
    • credentials: fix PerRPCCredentials w/RequireTransportSecurity and security levels (#3995)

    Documentation

    • Documentation: fix outgoing metadata example code (#3979)
      • Special Thanks: @aaronjheng
    • Remove experimental comment from client interceptors (#3948)
      • Special Thanks: @hypnoglow
    • Documentation: update keepalive.md to add why (#3993)
    Source code(tar.gz)
    Source code(zip)
  • v1.33.2(Nov 6, 2020)

    • protobuf: update all generated code to google.golang.org/protobuf (#3932)
    • xdsclient: populate error details for NACK (#3975)
    • internal/credentials: fix a bug and add one more helper function SPIFFEIDFromCert (#3929)
    Source code(tar.gz)
    Source code(zip)
  • cmd/protoc-gen-go-grpc/v1.0.1(Oct 23, 2020)

    Bug Fixes

    • use grpc.ServiceRegistrar instead of *grpc.Server in Register<Service>Server functions
    • fix method name in interceptor info
      • Special Thanks: @ofpiyush

    Extra thanks to @sagikazarmark for setting up automation to attach binaries for Linux, Mac, and Windows to our releases.

    Source code(tar.gz)
    Source code(zip)
    protoc-gen-go-grpc.v1.0.1.darwin.amd64.tar.gz(4.04 MB)
    protoc-gen-go-grpc.v1.0.1.linux.386.tar.gz(4.02 MB)
    protoc-gen-go-grpc.v1.0.1.linux.amd64.tar.gz(4.09 MB)
    protoc-gen-go-grpc.v1.0.1.windows.386.tar.gz(4.04 MB)
    protoc-gen-go-grpc.v1.0.1.windows.amd64.tar.gz(4.12 MB)
  • v1.33.1(Oct 20, 2020)

    API Changes

    • connectivity: remove unused, experimental Reporter interface (#3875)

    New Features

    • xds bootstrap: support insecure and make Creds required (#3881)
    • xds: add bootstrap support for certificate providers. (#3901)
    • lrs: add a layer for clusters in load store (#3880)
    • credentials/xds: implementation of client-side xDS credentials. (#3888)

    Bug Fixes

    • http2_client: fix reader segfault on PROTOCOL_ERRORs (#3926)
      • Special Thanks: @sorah
    • internal/transport: handle h2 errcode on header decoding (#3872)
      • Special Thanks: @tz70s
    • xds: exit from run() goroutine when resolver is closed. (#3882)
    • credentials/alts: ClientAuthorizationCheck to case-fold compare of peer SA (#3792)
      • Special Thanks: @AntonNep
    • service reflection: include transitive closure for a file (#3851)
    • stats: include message header in stats.InPayload.WireLength (#3886)
      • Special Thanks: @xstephen95x
    • binarylog: export Sink (#3879)
    Source code(tar.gz)
    Source code(zip)
  • cmd/protoc-gen-go-grpc/v1.0.0(Oct 2, 2020)

    Background

    This is the first major release of the protoc plugin for generating grpc stubs from the grpc-go repo. The previous version was part of the protoc-gen-go tool at https://github.com/golang/protobuf. With the migration of that tool to https://google.golang.org/protobuf, grpc support was removed and shifted to this repo.

    Incompatible change

    There is one backward-compatibility breaking change in this version of the tool, which can be disabled with a flag at generation time. The Unimplemented<Service>Server struct is now required to be embedded in implementations of the generated <Service>Server interface. This requirement guarantees that backward-compatibility is maintained by the generated code when new methods are added to a service, which is, itself, a backward-compatible operation.

    To disable this behavior, which is recommended only for producing code compatible with earlier generated code, set the flag require_unimplemented_servers=false. For more details, please see the binary's README.md

    Source code(tar.gz)
    Source code(zip)
  • v1.32.0(Sep 8, 2020)

    Dependencies

    • Remove Go 1.9 support; assume go1.12 build tag (#3767)

    New Features

    • grpc: add ServiceRegistrar interface; bump up support package version. (#3816; #3818)
    • xds: add LRS balancing policy (#3799)

    Bug Fixes

    • server: prevent hang in Go HTTP transport in some error cases (#3833)
    • server: respond correctly to client headers with END_STREAM flag set (#3803)
    • eds: fix priority timeout failure when EDS removes all priorities (#3830)
    Source code(tar.gz)
    Source code(zip)
  • v1.30.1(Aug 25, 2020)

  • v1.31.1(Aug 25, 2020)

  • v1.31.0(Jul 30, 2020)

    API Changes

    • balancer: remove deprecated type aliases (#3742)

    New Features

    • The following new xDS functionalities are added in this release (xDS features supported in a given release are documented here):
      • Requests matching based on path (prefix, full path and safe regex) and headers
      • Requests routing to multiple clusters based on weights
    • service config: add default method config support (#3684)
      • Special Thanks: @amenzhinsky
    • credentials/sts: PerRPCCreds Implementation (#3696)
    • credentials: check and expose SPIFFE ID (#3626)
    • protoc-gen-go-grpc: support for proto3 field presence (#3752)

    Bug Fixes

    • client: set auth header to localhost for unix target (#3730)

    Documentation

    • doc: mark CustomCodec as deprecated (#3698)
    • examples: cleanup README.md (#3738)
    • doc: fix references to status methods (#3702)
      • Special Thanks: @evanlimanto
    Source code(tar.gz)
    Source code(zip)
  • v1.30.0(Jun 22, 2020)

    API Changes

    • This release adds an xDS URI scheme called xds. This is the stable version of the xds-experimental scheme introduced in v1.28.0. The xds-experimental scheme will be removed in subsequent releases, so you must switch to the xds scheme instead. The xds scheme is a client-side implementation of the xDS v2 APIs, allowing a gRPC client written in Go to receive configuration from an xDS v2 API compatible server and use that configuration to load balance RPCs. In this release, only virtual host matching, default path (“” or “/”) matching, and the cluster route action are supported. The features supported in a given release are documented here.
    • balancer: move Balancer and Picker to V2; delete legacy API (#3180, #3431)
      • Replace balancer.Balancer and balancer.Picker with the V2Balancer and V2Picker versions.
      • Remove balancer.ClientConn.UpdateBalancerState.
      • Remove the original balancer plugin API, based on grpc.Balancer, and all related functionality.
      • Remove the deprecated naming package.

    Behavior Changes

    • grpclb, dns: pass balancer addresses via resolver.State (#3614)

    New Features

    • balancer: support hierarchical paths in addresses (#3494)
    • client: option to surface connection errors to callers (#3430)
      • Special Thanks: @sethp-nr
    • credentials: pass address attributes from balancer to creds handshaker. (#3548)
    • credentials: local creds implementation (#3517)
    • advancedtls: add fine-grained verification levels in XXXOptions (#3454)
    • xds: handle weighted cluster as route action (#3613)
    • xds: add weighted_target balancer (#3541)

    Performance Improvements

    • transport: move append of header and data down to http2 write loop to save garbage (#3568)
      • Special Thanks: @bboreham
    • server.go: use worker goroutines for fewer stack allocations (#3204)
      • Special Thanks: @adtac

    Bug Fixes

    • stream: fix calloption.After() race in finish (#3672)
    • retry: prevent per-RPC creds error from being transparently retried (#3677, #3691)
    • cache: callback without cache's mutex (#3603)
    • client: properly check GRPC_GO_IGNORE_TXT_ERRORS environment variable (#3532)
      • Special Thanks: @t33m
    • balancergroup: fix connectivity state (#3585)
    • xds: use google default creds (#3673)
    • xds: accept either "" or "/" as the prefix for the default route (#3535)
    • xds: reject RDS response containing match with case-sensitive false (#3592)

    Documentation

    • examples: add go.mod to make examples a separate module (#3546)
    • doc: update README for supported Go versions and travis for tests (#3516)
    Source code(tar.gz)
    Source code(zip)
  • v1.29.1(Apr 23, 2020)

  • v1.29.0(Apr 21, 2020)

    New Features

    • client: add a WithNoProxy dialoption (#3411)
      • Special Thanks: @pdbogen

    Bug Fixes

    • xds: update nonce even if the ACK/NACK is not sent on wire (#3497)
    • xds: add temporary logging to LRS (#3490)
    • wrr: make random wrr thread safe (#3470)
    • transport: fix handling of header metadata in serverHandler (#3484)
      • Special Thanks: @misberner
    • balancer: change roundrobin to accept empty address list (#3491)
    • stats: set response compression codec on stats.InHeader and stats.OutHeader (#3390)
      • Special Thanks: @MatthewDolan

    Documentation

    • credentials: Update doc strings for NewClientTLSFromCert et. al. (#3508)
    • examples: add example to show how to use the health service (#3381)
      • Special Thanks: @mjpitz
    Source code(tar.gz)
    Source code(zip)
  • v1.28.1(Apr 6, 2020)

    • xds: update nonce even if the ACK/NACK is not sent on wire (#3497)
    • balancer: change roundrobin to accept empty address list (#3491)
    • xds: add logging to LRS (#3490)
    Source code(tar.gz)
    Source code(zip)
  • v1.28.0(Mar 10, 2020)

    New Features

    • This release adds an experimental client-side implementation of the xDS v2 APIs. This allows a gRPC client written in Go to receive configuration from an xDS v2 API compatible server and use that configuration to load balance RPCs. In this release, only virtual host matching and the cluster route action are supported. More features will be added in the future.
    • grpclb: support explicit fallback signal (#3351)
    • interceptor: new APIs for chaining server interceptors. (#3336)
      • Special Thanks: @tukeJonny
    • stats: add client side user agent to outgoing header (#3331)
      • Special Thanks: @piotrkowalczuk

    API Changes

    • credentials: deprecate ProtocolInfo.SecurityVersion (#3372)

    Bug Fixes

    • interop: Build grpclb_fallback/client.go only for linux. (#3375)
    • internal: Update service_config.pb.go (#3365)
    • internal: Move parseTarget function into internal package and export it. (#3368)
    • balancer/base: keep bad SubConns in TransientFailure until Ready (#3366)
    • balancer/base: consider an empty address list an error (#3361)

    Dependencies

    • protobuf: update protoc-gen-go version and generated code (#3345)
    Source code(tar.gz)
    Source code(zip)
  • v1.27.1(Feb 5, 2020)
