Kafka-Pixy (gRPC/REST Proxy for Kafka)

Overview

Kafka-Pixy is a dual API (gRPC and REST) proxy for Kafka with automatic consumer group control. It is designed to hide the complexity of the Kafka client protocol and provide a stupid simple API that is trivial to implement in any language.

Kafka-Pixy is tested against Kafka versions 1.1.1 and 2.3.0, but is likely to work with any version starting from 0.8.2.2. It uses the Kafka Offset Commit/Fetch API to keep track of consumer offsets. However, the Group Membership API is not yet implemented, so Kafka-Pixy needs to talk to ZooKeeper directly to manage consumer group membership.

Warning: Kafka-Pixy does not support wildcard subscriptions and therefore cannot coexist in a consumer group with clients that use them. It should be possible to use other clients in the same consumer group as Kafka-Pixy instances if they subscribe to topics by their full names, but that has never been tested, so do so at your own risk.

If you are anxious to get started then install Kafka-Pixy and proceed with a quick start guide for your weapon of choice: Curl, Python, or Golang. If you want to use some other language, you can still use either of the guides for inspiration, but you will need to generate gRPC client stubs from kafkapixy.proto yourself (please refer to the gRPC documentation for details).

Key Features:

  • Automatic Consumer Group Management: Unlike with the Kafka REST Proxy by Confluent, clients do not need to explicitly create a consumer instance. When Kafka-Pixy gets a consume request for a group-topic pair for the first time, it automatically joins the group and subscribes to the topic. When requests stop coming for longer than the subscription timeout, it cancels the subscription.
  • At-Least-Once Guarantee: The main feature of Kafka-Pixy is that it guarantees at-least-once message delivery. The guarantee is achieved via a combination of synchronous production and explicit acknowledgement of consumed messages.
  • Dual API: Kafka-Pixy provides two types of API:
    • gRPC (Protocol Buffers over HTTP/2), recommended for producing and consuming messages;
    • REST (JSON over HTTP), intended for testing and operations purposes, although you can use it to produce and consume messages too.
  • Multi-Cluster Support: One Kafka-Pixy instance can proxy to several Kafka clusters. You just need to define them in the config file and then address each cluster in your API requests by the name given in the config file.
  • Aggregation: Kafka works best when messages are read and written in batches, but from an application's standpoint it is easier to deal with individual message reads and writes. Kafka-Pixy provides a message-based API to clients, but internally it aggregates requests and sends them to Kafka in batches.
  • Locality: Kafka-Pixy is intended to run on the same host as the applications using it. Remember that it only provides a message-based API with no batching, so using it over the network is suboptimal.

gRPC API

gRPC is an open source framework that uses Protocol Buffers as its interface definition language and HTTP/2 as its transport protocol. The Kafka-Pixy API is defined in kafkapixy.proto. Client stubs for Golang and Python are generated and provided in this repository, but you can easily generate stubs for a number of other languages. Please refer to the gRPC documentation for information on the language of your choice.
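
For example, Go stubs equivalent to the ones shipped in this repository could be regenerated roughly as follows. This is a sketch, assuming protoc with the protoc-gen-go and protoc-gen-go-grpc plugins is installed, that kafkapixy.proto is in the current directory, and that the proto declares a go_package option:

protoc --go_out=. --go-grpc_out=. kafkapixy.proto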

REST API

It is highly recommended to use the gRPC API for production/consumption. The HTTP API is only provided for quick tests and operational purposes.

Each API endpoint has two variants that differ by the /clusters/<cluster> prefix. The variant with the prefix is to be used when multiple clusters are configured; the one without the prefix operates on the default cluster (the one mentioned first in the YAML configuration file).
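
For instance, assuming a cluster named prod is defined in the config file (the name is made up for illustration), the following two requests produce the same message, the first to the default cluster and the second to prod:

curl -X POST localhost:19092/topics/foo/messages -H 'Content-Type: text/plain' -d 'hi'
curl -X POST localhost:19092/clusters/prod/topics/foo/messages -H 'Content-Type: text/plain' -d 'hi'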

Produce

POST /topics/<topic>/messages
POST /clusters/<cluster>/topics/<topic>/messages

Writes a message to a topic on a particular cluster. If the request content type is either text/plain or application/json then a message should be sent as the body of a request. If content type is x-www-form-urlencoded then a message should be passed as the msg form parameter.

If Kafka-Pixy is configured to use a version of the Kafka protocol (via the kafka.version proxy setting) that is 0.11.0.0 or later, it is also possible to add record headers to a message by adding HTTP headers to your request. Any HTTP header with the prefix "X-Kafka-" will have that prefix stripped and will be used as a record header. Since the values of Kafka headers can be arbitrary byte strings, the value of the HTTP header must be Base64-encoded.
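
For example, to attach a record header trace-id with the value abc123 (Base64: YWJjMTIz), a request could look like this; the header name is made up for illustration:

curl -X POST localhost:19092/topics/foo/messages \
  -H 'Content-Type: text/plain' \
  -H 'X-Kafka-trace-id: YWJjMTIz' \
  -d 'Good news everyone!'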

Parameters:
  • cluster (optional): The name of the cluster to operate on. By default the cluster mentioned first in the proxies section of the config file is used.
  • topic (required): The name of the topic to produce to.
  • key (optional): A string whose hash is used to determine the partition to produce to. By default a random partition is selected.
  • msg: Used only if the request content type is x-www-form-urlencoded (in which case it is required); in other cases the request body is the message.
  • sync (optional): A flag (value is ignored) that makes Kafka-Pixy wait for all ISRs to confirm the write before sending a response back. By default a response is sent immediately after the request is received.

By default the message is written to Kafka asynchronously; that is, the HTTP request completes as soon as Kafka-Pixy reads the request from the wire, and production to Kafka is performed in the background. Therefore it is not guaranteed that the message will ever get into Kafka.

If you need a guarantee that a message is written to Kafka, then pass the sync flag with your request. In that case, when Kafka-Pixy returns a response is governed by the producer.required_acks parameter in the YAML config, which can be one of:

  • no_response: the response is returned as soon as a produce request is delivered to a partition leader Kafka broker (no disk writes performed yet).
  • wait_for_local: the response is returned as soon as data is written to the disk by a partition leader Kafka broker.
  • wait_for_all: the response is returned after all in-sync replicas have data committed to disk.

E.g. if a Kafka-Pixy process has been started with the --tcpAddr=0.0.0.0:8080 argument, then you can test it using curl as follows (note that the URL is quoted so the shell does not interpret the &):

curl -X POST 'localhost:8080/topics/foo/messages?key=bar&sync' \
  -H 'Content-Type: text/plain' \
  -d 'Good news everyone!'

If the message is submitted asynchronously then the response will be an empty JSON object {}.

If the message is submitted synchronously then in case of success (HTTP status 200) the response will look like:

{
  "partition": <partition number>,
  "offset": <message offset>
}

In case of failure (HTTP statuses 404 and 500) the response will be:

{
  "error": <human readable explanation>
}

Consume

GET /topics/<topic>/messages
GET /clusters/<cluster>/topics/<topic>/messages

Consumes a message from a topic of a particular cluster as a member of a particular consumer group. A message previously consumed from the same topic can be optionally acknowledged.

Parameters:
  • cluster (optional): The name of the cluster to operate on. By default the cluster mentioned first in the proxies section of the config file is used.
  • topic (required): The name of the topic to consume from.
  • group (required): The name of the consumer group.
  • noAck (optional): A flag (value is ignored) indicating that no message should be acknowledged by this request. For the default behaviour read below.
  • ackPartition (optional): The number of the partition that the acknowledged message was consumed from. For the default behaviour read below.
  • ackOffset (optional): The offset of the acknowledged message. For the default behaviour read below.

If noAck is specified in a request then no message is acknowledged by the request. If a request specifies both the ackPartition and ackOffset parameters then a message previously consumed from the same topic, from the specified partition and with the specified offset, is acknowledged by the request. If none of the ack-related parameters is specified then the request acknowledges the message consumed by this very request, if any. This is called auto-ack mode.
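
For example, the two acknowledgement modes could be exercised with curl as follows (topic foo and group bar are made up; the partition and offset in the last request would come from an earlier response):

# Auto-ack mode: the returned message is acknowledged by this same request.
curl -G 'localhost:19092/topics/foo/messages?group=bar'

# Explicit-ack mode: consume without acknowledging, then acknowledge the
# previously consumed message while fetching the next one.
curl -G 'localhost:19092/topics/foo/messages?group=bar&noAck'
curl -G 'localhost:19092/topics/foo/messages?group=bar&ackPartition=0&ackOffset=13'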

When a message is consumed as a member of a consumer group for the first time, Kafka-Pixy joins the consumer group and subscribes to the topic. All Kafka-Pixy instances that are currently members of that group and subscribed to that topic distribute partitions among themselves, so that each Kafka-Pixy instance gets a subset of partitions for exclusive consumption (read more about Kafka consumer groups here).

If a Kafka-Pixy instance has not received consume requests for a topic for the duration of the subscription timeout, then it unsubscribes from the topic, and the topic partitions are redistributed among Kafka-Pixy instances that are still consuming from it.

If there are no unread messages in the topic, the request blocks for the duration of the long-polling timeout. If no messages are produced during this wait, the request returns a 408 Request Timeout error; otherwise the response is a JSON document of the following structure:

{
  "key": <base64 encoded key>,
  "value": <base64 encoded message body>,
  "partition": <partition number>,
  "offset": <message offset>,
  "headers": [
    {
      "key": <string header key>,
      "value": <base64-encoded header value>
    }
  ]
}

e.g.:

{
  "key": "0JzQsNGA0YPRgdGP",
  "value": "0JzQvtGPINC70Y7QsdC40LzQsNGPINC00L7Rh9C10L3RjNC60LA=",
  "partition": 0,
  "offset": 13,
  "headers": [
    {
      "key": "foo",
      "value": "YmFy"
    }
  ]
}

Note that headers are only supported if the Kafka protocol version (set via the kafka.version configuration setting) is 0.11.0.0 or later.

Acknowledge

POST /topics/<topic>/acks
POST /clusters/<cluster>/topics/<topic>/acks

Acknowledges a previously consumed message.

Parameters:
  • cluster (optional): The name of the cluster to operate on. By default the cluster mentioned first in the proxies section of the config file is used.
  • topic (required): The name of the topic that the acknowledged message was consumed from.
  • group (required): The name of the consumer group.
  • partition (required): The number of the partition that the acknowledged message was consumed from.
  • offset (required): The offset of the acknowledged message.
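
For example, to acknowledge the message at partition 0, offset 13 of the hypothetical topic foo on behalf of group bar:

curl -X POST 'localhost:19092/topics/foo/acks?group=bar&partition=0&offset=13'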

Get Offsets

GET /topics/<topic>/offsets
GET /clusters/<cluster>/topics/<topic>/offsets

Returns offset information for all partitions of the specified topic, including the next offset to be consumed by the specified consumer group.

Parameters:
  • cluster (optional): The name of the cluster to operate on. By default the cluster mentioned first in the proxies section of the config file is used.
  • topic (required): The name of the topic to get offsets for.
  • group (required): The name of the consumer group.

The structure of the returned JSON document is as follows:

[
  {
    "partition": <partition id>,
    "begin": <oldest offset>,
    "end": <newest offset>,
    "count": <the number of messages in the topic, equals to `end` - `begin`>,
    "offset": <next offset to be consumed by this consumer group>,
    "lag": <equals to `end` - `offset`>,
    "metadata": <arbitrary string committed with the offset, not used by Kafka-Pixy. It is omitted if empty>
  },
  ...
]
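
For example, for the hypothetical topic foo and group bar:

curl -G 'localhost:19092/topics/foo/offsets?group=bar'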

Set Offsets

POST /topics/<topic>/offsets
POST /clusters/<cluster>/topics/<topic>/offsets

Sets the offsets to be consumed from the specified topic by a particular consumer group.

Parameters:
  • cluster (optional): The name of the cluster to operate on. By default the cluster mentioned first in the proxies section of the config file is used.
  • topic (required): The name of the topic to set offsets for.
  • group (required): The name of the consumer group.

The request content should be a list of JSON objects, where each object defines an offset to be set for a particular partition:

[
  {
    "partition": <partition id>,
    "offset": <next offset to be consumed by this consumer group>,
    "metadata": <arbitrary string>
  },
  ...
]

Note that consumption by all consumer group members should cease before this call is executed. That is necessary because, while consuming, Kafka-Pixy constantly updates partition offsets and does not expect them to be updated by anybody else. It only reads them on group initialization, which happens when a consumer group request arrives after 20 seconds or more of consumer group inactivity on all Kafka-Pixy instances working with the Kafka cluster.
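
For example, to rewind partition 0 of the hypothetical topic foo to offset 0 for group bar (a sketch; whether the Content-Type header is required is an assumption):

curl -X POST 'localhost:19092/topics/foo/offsets?group=bar' \
  -H 'Content-Type: application/json' \
  -d '[{"partition": 0, "offset": 0}]'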

List Consumers

GET /topics/<topic>/consumers
GET /clusters/<cluster>/topics/<topic>/consumers

Returns a list of consumers that are subscribed to a topic.

Parameters:
  • cluster (optional): The name of the cluster to operate on. By default the cluster mentioned first in the proxies section of the config file is used.
  • topic (required): The name of the topic to list consumers for.
  • group (optional): The name of a consumer group. By default data is returned for all known consumer groups subscribed to the topic.

e.g.:

curl -G localhost:19092/topics/some_queue/consumers

yields:

{
  "integrations": {
    "pixy_jobs1_62065_2015-09-24T22:21:05Z": [0,1,2,3],
    "pixy_jobs2_18075_2015-09-24T22:21:28Z": [4,5,6],
    "pixy_jobs3_336_2015-09-24T22:21:51Z": [7,8,9]
  },
  "logstash-customer": {
    "logstash-customer_logs01-1443116116450-7f54d246-0": [0,1,2],
    "logstash-customer_logs01-1443116116450-7f54d246-1": [3,4,5],
    "logstash-customer_logs01-1443116116450-7f54d246-2": [6,7],
    "logstash-customer_logs01-1443116116450-7f54d246-3": [8,9]
  },
  "logstash-reputation4": {
    "logstash-reputation4_logs16-1443178335419-c08d8ab6-0": [0],
    "logstash-reputation4_logs16-1443178335419-c08d8ab6-1": [1],
    "logstash-reputation4_logs16-1443178335419-c08d8ab6-10": [2],
    "logstash-reputation4_logs16-1443178335419-c08d8ab6-11": [3],
    "logstash-reputation4_logs16-1443178335419-c08d8ab6-12": [4],
    "logstash-reputation4_logs16-1443178335419-c08d8ab6-13": [5],
    "logstash-reputation4_logs16-1443178335419-c08d8ab6-14": [6],
    "logstash-reputation4_logs16-1443178335419-c08d8ab6-15": [7],
    "logstash-reputation4_logs16-1443178335419-c08d8ab6-2": [8],
    "logstash-reputation4_logs16-1443178335419-c08d8ab6-3": [9]
  },
  "test": {
    "pixy_core1_47288_2015-09-24T22:15:36Z": [0,1,2,3,4],
    "pixy_in7_102745_2015-09-24T22:24:14Z": [5,6,7,8,9]
  }
}

List Topics

GET /topics
GET /clusters/<cluster>/topics

Returns a list of topics optionally with detailed configuration and partitions.

Parameters:
  • cluster (optional): The name of the cluster to operate on. By default the cluster mentioned first in the proxies section of the config file is used.
  • withPartitions (optional): Whether a list of partitions should be returned for every topic.
  • withConfig (optional): Whether configuration should be returned for every topic.
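
For example, a hypothetical request listing all topics of the default cluster with their partitions and configuration (the explicit true values are an assumption; check the API if value-less flags are accepted instead):

curl -G 'localhost:19092/topics?withPartitions=true&withConfig=true'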

Get Topic Config

GET /topics/<topic>
GET /clusters/<cluster>/topics/<topic>

Returns topic configuration optionally with a list of partitions.

Parameters:
  • cluster (optional): The name of the cluster to operate on. By default the cluster mentioned first in the proxies section of the config file is used.
  • withPartitions (optional): Whether a list of partitions should be returned.

Configuration

Kafka-Pixy is designed to be very simple to run. It consists of a single executable that can be started just by passing a bunch of command line parameters to it - no configuration file needed.

However, if you do need to fine-tune Kafka-Pixy for your use case, you can provide a YAML configuration file. The default configuration file default.yaml is shipped in the release archive. In your configuration file you only need to specify the parameters that you want to change; all other options take their default values. If an option is both specified in the configuration file and provided as a command line argument, the command line argument wins.

Command line parameters that Kafka-Pixy accepts are listed below:

  • config: Path to a YAML configuration file.
  • kafkaPeers: Comma-separated list of Kafka brokers. Note that these are just seed brokers; the other brokers are discovered automatically. (Default: localhost:9092)
  • zookeeperPeers: Comma-separated list of ZooKeeper nodes, optionally followed by a chroot path. (Default: localhost:2181)
  • grpcAddr: TCP address that the gRPC API should listen on. (Default: 0.0.0.0:19091)
  • tcpAddr: TCP address that the HTTP API should listen on. (Default: 0.0.0.0:19092)
  • unixAddr: Unix domain socket that the HTTP API should listen on. If not specified then the service will not listen on a Unix domain socket.
  • pidFile: Name of a pid file to create. If not specified then a pid file is not created.

You can run kafka-pixy -help to make it list all available command line parameters.
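
For example, a hypothetical invocation pointing Kafka-Pixy at a two-broker cluster (host names are made up; the config file is optional):

kafka-pixy --kafkaPeers 'kafka1:9092,kafka2:9092' \
           --zookeeperPeers 'zk1:2181,zk2:2181' \
           --config /etc/kafka-pixy.yaml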

Security

SSL/TLS can be configured on both the gRPC and HTTP servers by specifying a certificate and key file in the configuration. Both files must be specified in order to run with security enabled.

If configured, both the gRPC and HTTP servers will run with TLS enabled.

Additionally, TLS may be configured for connections to the Kafka cluster by enabling tls in the kafka section of the configuration YAML (along with any required certificates). Details can be found in the default YAML file (default.yaml).
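
A minimal sketch of what such a configuration might look like; the key names below are assumptions for illustration only, and the authoritative names are documented in default.yaml:

# Hypothetical key names; consult default.yaml for the real ones.
tls:
  certPath: /etc/kafka-pixy/server.crt  # server certificate (assumption)
  keyPath: /etc/kafka-pixy/server.key   # matching private key (assumption)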

License

Kafka-Pixy is under the Apache 2.0 license. See the LICENSE file for details.
