Lightweight, fault-tolerant message streams.

Overview

Liftbridge Logo


Liftbridge provides lightweight, fault-tolerant message streams by implementing a durable stream augmentation for the NATS messaging system. It extends NATS with a Kafka-like publish-subscribe log API that is highly available and horizontally scalable. The goal of Liftbridge is to provide a message-streaming solution with a focus on simplicity and usability. Use it as a simpler and lighter alternative to systems like Kafka and Pulsar or to add streaming semantics to an existing NATS deployment.
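
To make the model concrete, here is a minimal, hedged sketch of the publish-subscribe log API using the go-liftbridge Go client. It assumes a server listening on localhost:9292; the stream and subject names are illustrative:

    package main

    import (
    	"context"
    	"fmt"

    	lift "github.com/liftbridge-io/go-liftbridge/v2"
    )

    func main() {
    	// Connect to a Liftbridge server (assumes one on localhost:9292).
    	client, err := lift.Connect([]string{"localhost:9292"})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// Create a stream attached to the NATS subject "foo".
    	if err := client.CreateStream(context.Background(), "foo", "foo-stream"); err != nil && err != lift.ErrStreamExists {
    		panic(err)
    	}

    	// Publish a message to the stream.
    	if _, err := client.Publish(context.Background(), "foo-stream", []byte("hello")); err != nil {
    		panic(err)
    	}

    	// Subscribe from the beginning of the stream; the handler is invoked
    	// asynchronously for each message.
    	ctx := context.Background()
    	if err := client.Subscribe(ctx, "foo-stream", func(msg *lift.Message, err error) {
    		if err != nil {
    			panic(err)
    		}
    		fmt.Println(msg.Offset(), string(msg.Value()))
    	}, lift.StartAtEarliestReceived()); err != nil {
    		panic(err)
    	}
    	<-ctx.Done() // block; Subscribe dispatches messages in the background
    }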

See the introduction post on Liftbridge and this post for more context and some of the inspiration behind it.

Documentation

Community

Issues
  • Is it production ready?

    Hi,

    Thanks for the great project. Is this project actively under development? (I was following your published roadmap and it seems it's not on track.) Can I use it in a production environment?

    opened by hamidrezakks 2
  • Embedded NATS on k8s

    Hi,

    Is it possible to run Liftbridge on k8s in multiple pods with embedded NATS enabled? How are those embedded NATS servers connected to each other? If it works, are there any example configuration files?

    Thanks for the information
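
    One plausible wiring, offered as a hedged, untested sketch (service and file names are illustrative): enable the embedded NATS server in each pod and point it at a NATS config whose cluster routes address the other pods through a headless service.

    nats:
        embedded: true
        embedded.config: /etc/liftbridge/nats.conf

    The referenced nats.conf would then carry standard NATS clustering config, for example:

    cluster {
        name: liftbridge-nats
        listen: 0.0.0.0:6222
        routes: [
            nats://liftbridge-0.liftbridge-headless:6222
            nats://liftbridge-1.liftbridge-headless:6222
        ]
    }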

    opened by nfoerster 1
  • nil pointer dereference when setting partition leader

    Found a weird nil pointer dereference issue. Simple 3-node cluster in docker-compose, upgraded to v1.6.0. Suddenly, the 2nd node starts to crash on startup during the Raft election.

    liftbridge2_1        | time="2021-06-23 08:03:10" level=info msg="Liftbridge Version:        v1.6.0"
    liftbridge2_1        | time="2021-06-23 08:03:10" level=info msg="Server ID:                 liftbridge2"
    liftbridge2_1        | time="2021-06-23 08:03:10" level=info msg="Namespace:                 liftbridge-default"
    liftbridge2_1        | time="2021-06-23 08:03:10" level=info msg="NATS Servers:              [nats://nats:4222/]"
    liftbridge2_1        | time="2021-06-23 08:03:10" level=info msg="Default Retention Policy:  [Age: 1 day, Compact: true]"
    liftbridge2_1        | time="2021-06-23 08:03:10" level=info msg="Default Partition Pausing: disabled"
    liftbridge2_1        | time="2021-06-23 08:03:10" level=info msg="Starting Liftbridge server on 0.0.0.0:9292..."
    liftbridge2_1        | panic: runtime error: invalid memory address or nil pointer dereference
    liftbridge2_1        | [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xdcf642]
    liftbridge2_1        | 
    liftbridge2_1        | goroutine 1 [running]:
    liftbridge2_1        | github.com/liftbridge-io/liftbridge/server.(*replica).updateLatestOffset(0x0, 0xffffffffffffffff, 0xc0002bad90)
    liftbridge2_1        | 	/go/src/github.com/liftbridge-io/liftbridge/server/partition.go:40 +0x22
    liftbridge2_1        | github.com/liftbridge-io/liftbridge/server.(*partition).becomeLeader(0xc000497e00, 0x5f48, 0x32, 0xc00045e4c0)
    liftbridge2_1        | 	/go/src/github.com/liftbridge-io/liftbridge/server/partition.go:494 +0xe1
    liftbridge2_1        | github.com/liftbridge-io/liftbridge/server.(*partition).startLeadingOrFollowing(0xc000497e00, 0xc000128948, 0x10c3018)
    liftbridge2_1        | 	/go/src/github.com/liftbridge-io/liftbridge/server/partition.go:430 +0x3c3
    liftbridge2_1        | github.com/liftbridge-io/liftbridge/server.(*partition).SetLeader(0xc000497e00, 0xc0000b3240, 0xb, 0x5f48, 0x0, 0x0)
    liftbridge2_1        | 	/go/src/github.com/liftbridge-io/liftbridge/server/partition.go:402 +0x117
    liftbridge2_1        | github.com/liftbridge-io/liftbridge/server.(*metadataAPI).addPartition(0xc000174b00, 0xc000448af0, 0xc000412900, 0x0, 0xc0000bcd80, 0x0, 0x0)
    liftbridge2_1        | 	/go/src/github.com/liftbridge-io/liftbridge/server/metadata.go:780 +0xf3
    liftbridge2_1        | github.com/liftbridge-io/liftbridge/server.(*metadataAPI).AddStream(0xc000174b00, 0xc000133810, 0x0, 0x0, 0x0, 0x0)
    liftbridge2_1        | 	/go/src/github.com/liftbridge-io/liftbridge/server/metadata.go:740 +0x3fd
    liftbridge2_1        | github.com/liftbridge-io/liftbridge/server.(*Server).applyCreateStream(0xc0002b6b40, 0xc000133810, 0x0, 0x0, 0x0)
    liftbridge2_1        | 	/go/src/github.com/liftbridge-io/liftbridge/server/fsm.go:367 +0x49
    liftbridge2_1        | github.com/liftbridge-io/liftbridge/server.(*Server).Restore(0xc0002b6b40, 0x121bfc0, 0xc000106c20, 0x0, 0x0)
    liftbridge2_1        | 	/go/src/github.com/liftbridge-io/liftbridge/server/fsm.go:347 +0x344
    liftbridge2_1        | github.com/hashicorp/raft.(*Raft).restoreSnapshot(0xc00041e000, 0x1, 0x1)
    liftbridge2_1        | 	/go/pkg/mod/github.com/hashicorp/[email protected]/api.go:594 +0x28d
    liftbridge2_1        | github.com/hashicorp/raft.NewRaft(0xc000129150, 0x12243c0, 0xc0002b6b40, 0x1236c00, 0xc0000bbc40, 0x122a9c0, 0xc0000a8f40, 0x1223980, 0xc0003861b0, 0x123d280, ...)
    liftbridge2_1        | 	/go/pkg/mod/github.com/hashicorp/[email protected]/api.go:542 +0x801
    liftbridge2_1        | github.com/liftbridge-io/liftbridge/server.(*Server).createRaftNode(0xc0002b6b40, 0xc000129238, 0x47bc1c, 0xc0002a15b0, 0xc0001a8600)
    liftbridge2_1        | 	/go/src/github.com/liftbridge-io/liftbridge/server/raft.go:395 +0x6a2
    liftbridge2_1        | github.com/liftbridge-io/liftbridge/server.(*Server).setupMetadataRaft(0xc0002b6b40, 0x10985e3, 0x23, 0xc000064110)
    liftbridge2_1        | 	/go/src/github.com/liftbridge-io/liftbridge/server/raft.go:203 +0x40
    liftbridge2_1        | github.com/liftbridge-io/liftbridge/server.(*Server).Start(0xc0002b6b40, 0x0, 0x0)
    liftbridge2_1        | 	/go/src/github.com/liftbridge-io/liftbridge/server/server.go:178 +0xa1b
    liftbridge2_1        | main.start(0xc00026ef20, 0x0, 0xc0001cf940)
    liftbridge2_1        | 	/go/src/github.com/liftbridge-io/liftbridge/main.go:39 +0xbe
    liftbridge2_1        | github.com/urfave/cli.HandleAction(0xebfc20, 0x10c26f8, 0xc00026ef20, 0xc00026ef20, 0x0)
    liftbridge2_1        | 	/go/pkg/mod/github.com/urfave/[email protected]/app.go:526 +0xbe
    liftbridge2_1        | github.com/urfave/cli.(*App).Run(0xc0002ac1c0, 0xc000094150, 0x3, 0x3, 0x0, 0x0)
    liftbridge2_1        | 	/go/pkg/mod/github.com/urfave/[email protected]/app.go:288 +0x5f8
    liftbridge2_1        | main.main()
    liftbridge2_1        | 	/go/src/github.com/liftbridge-io/liftbridge/main.go:24 +0x11a
    noc_liftbridge2_1 exited with code 2
    

    I've tried cleaning the 2nd node's data directory without visible result.

    I suppose the cause is the following code in partition.go:

    	// Update this replica's latest offset to ensure it's up to date.
    	rep := p.isr[p.srv.config.Clustering.ServerID]
    	rep.updateLatestOffset(p.log.NewestOffset())
    

    p.isr contains a nil pointer, set somewhere earlier.
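
    That reading matches the trace: updateLatestOffset is invoked with a 0x0 receiver. A minimal standalone sketch of the failure mode, with illustrative names rather than Liftbridge's actual types:

    package main

    type replica struct{ offset int64 }

    func (r *replica) updateLatestOffset(o int64) { r.offset = o }

    func main() {
    	isr := map[string]*replica{} // this server's ID was never inserted
    	rep := isr["liftbridge2"]    // a map miss yields the zero value: a nil *replica
    	rep.updateLatestOffset(-1)   // write through nil receiver -> SIGSEGV, as at partition.go:40
    }

    A `rep, ok := p.isr[...]` lookup with an explicit error return would fail gracefully instead of panicking.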

    opened by dvolodin7 4
  • Message ordering guarantees

    Is there documentation on the type of message ordering guarantees that Liftbridge offers?

    opened by gc-ss 10
  • Add Source & Sink Support to benthos

    Research what's needed to add Liftbridge as a source & sink to benthos, in addition to its existing NATS, NATS JetStream, and NATS Streaming support.

    https://github.com/Jeffail/benthos

    Here's the sister issue in benthos to track the same; I see an opportunity for synergy between the two: https://github.com/Jeffail/benthos/issues/778

    opened by gc-ss 0
  • running on k8s; connection error

    I'm trying to evaluate Liftbridge in Kubernetes by using this Helm chart and this example Go code.

    The liftbridge-external service in my cluster has an external IP address that is reachable from my workstation, as shown by the protocol violation in my invoke-webrequest test:

    invoke-webrequest -Method GET http://liftbridge-io.test.com:9292
    invoke-webrequest : The server committed a protocol violation. Section=ResponseStatusLine
    

    When I run this to connect to the cluster:

    addrs := []string{"liftbridge-io.test.com:9292"}

    client, err := lift.Connect(addrs)
    if err != nil {
    	log.Fatalf("error getting client: %v", err)
    }
    defer client.Close()


    I get this: (the IP shown is the IP for the liftbridge-0 pod)

    error starting consumer: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 10.96.3.25:9292: i/o timeout"
    

    I tried specifying the host environment variable, as well as setting it in the config file, to explicitly match the external hostname, and I get the same result.

    Is there another setting that I need to set?
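
    For context, hedged: Liftbridge clients fetch cluster metadata on connect and then dial the brokers at the addresses advertised in that metadata, which is consistent with the error showing the liftbridge-0 pod IP rather than the external hostname. A sketch of server settings that advertise an externally routable address (values are illustrative):

    listen: 0.0.0.0:9292
    host: liftbridge-io.test.com   # address advertised to clients in metadata
    port: 9292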

    opened by justinrush 0
  • Add latest offset and timestamp to activity stream events

    We would like to add the latest partition message offset and timestamp to pause/resume and readonly activity stream events, and were wondering how to best implement that.

    raftNode.applyOperation returns an ApplyFuture that has a Response() method that could be used to return this information. The partition leader, when applyPauseStream is called, could return the latest message offset and timestamp. The metadata leader publishes the activity stream messages, though, so this information has to get to it somehow.
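
    A hedged sketch of that mechanism (hypothetical names; in hashicorp/raft, whatever the FSM's Apply() returns is surfaced through ApplyFuture.Response()):

    package server

    import (
    	"fmt"
    	"time"

    	"github.com/hashicorp/raft"
    )

    // PauseResult is a hypothetical payload the FSM's Apply() could return.
    type PauseResult struct {
    	LatestOffset int64
    	Timestamp    time.Time
    }

    // applyPauseAndGetOffset proposes the operation and reads the leader's
    // response back through the future.
    func applyPauseAndGetOffset(r *raft.Raft, cmd []byte) (*PauseResult, error) {
    	future := r.Apply(cmd, 5*time.Second)
    	if err := future.Error(); err != nil {
    		return nil, err
    	}
    	res, ok := future.Response().(*PauseResult)
    	if !ok {
    		return nil, fmt.Errorf("unexpected response type %T", future.Response())
    	}
    	return res, nil
    }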

    Does this seem a good way to implement this feature or is there a simpler solution?

    opened by Jmgr 4
  • failed to create commit log: parse high watermark file failed: strconv.ParseInt: parsing "": invalid syntax

    Hello!

    I saw this issue before on 1.3.*, but it also happens on 1.5.1

    /bin/liftbridge --config=/etc/liftbridge/liftbridge.yml
    INFO[2021-03-16 19:36:48] Liftbridge Version:        v1.5.1            
    INFO[2021-03-16 19:36:48] Server ID:                 srvnc-nc          
    INFO[2021-03-16 19:36:48] Namespace:                 liftbridge-default 
    INFO[2021-03-16 19:36:48] NATS Servers:              [nats://172.24.20.9:4222] 
    INFO[2021-03-16 19:36:48] Default Retention Policy:  [Age: 3 hours, Compact: false] 
    INFO[2021-03-16 19:36:48] Default Partition Pausing: disabled          
    INFO[2021-03-16 19:36:48] Starting Liftbridge server on 172.24.20.9:9292... 
    INFO[2021-03-16 19:36:48]  raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:srvnc-nc Address:srvnc-nc}]" 
    DEBU[2021-03-16 19:36:48] Loaded existing state for metadata Raft group 
    DEBU[2021-03-16 19:36:48] Raft node initialized                        
    INFO[2021-03-16 19:36:48]  raft: entering follower state: follower="Node at srvnc-nc [Follower]" leader= 
    WARN[2021-03-16 19:36:50]  raft: heartbeat timeout reached, starting election: last-leader= 
    INFO[2021-03-16 19:36:50]  raft: entering candidate state: node="Node at srvnc-nc [Candidate]" term=74 
    DEBU[2021-03-16 19:36:50] raft: votes: needed=1                        
    DEBU[2021-03-16 19:36:50] raft: vote granted: from=srvnc-nc term=74 tally=1 
    INFO[2021-03-16 19:36:50]  raft: election won: tally=1                 
    INFO[2021-03-16 19:36:50]  raft: entering leader state: leader="Node at srvnc-nc [Leader]" 
    INFO[2021-03-16 19:36:50] Server became metadata leader, performing leader promotion actions 
    DEBU[2021-03-16 19:36:50] fsm: Replaying Raft log...                   
    panic: failed to add stream to metadata store: failed to create commit log: parse high watermark file failed: strconv.ParseInt: parsing "": invalid syntax
    
    goroutine 55 [running]:
    github.com/liftbridge-io/liftbridge/server.(*Server).Apply(0xc000198300, 0xc0004c4230, 0x0, 0x0)
    	/home/circleci/project/server/fsm.go:111 +0x3e3
    github.com/hashicorp/raft.(*Raft).runFSM.func1(0xc000389320)
    	/go/pkg/mod/github.com/hashicorp/[email protected]/fsm.go:90 +0x2c1
    github.com/hashicorp/raft.(*Raft).runFSM.func2(0xc0003ca600, 0x40, 0x40)
    	/go/pkg/mod/github.com/hashicorp/[email protected]/fsm.go:113 +0x75
    github.com/hashicorp/raft.(*Raft).runFSM(0xc0001ea300)
    	/go/pkg/mod/github.com/hashicorp/[email protected]/fsm.go:219 +0x42f
    github.com/hashicorp/raft.(*raftState).goFunc.func1(0xc0001ea300, 0xc000388e80)
    	/go/pkg/mod/github.com/hashicorp/[email protected]/state.go:146 +0x55
    created by github.com/hashicorp/raft.(*raftState).goFunc
    	/go/pkg/mod/github.com/hashicorp/[email protected]/state.go:144 +0x66
    

    config file:

    cat /etc/liftbridge/liftbridge.yml
    clustering:
        raft.bootstrap.seed: true
        server.id: srvnc-nc
    cursors:
        stream.auto.pause.time: 0
        stream.partitions: 1
    data:
        dir: /var/lib/liftbridge
    host: 172.24.20.9
    logging:
        level: debug
        raft: true
    nats.servers:
    - nats://172.24.20.9:4222
    streams:
        compact.enabled: false
        retention.max:
            age: 3h
        segment.max:
            age: 20m
    

    Workaround: drop LB_data folder content

    P.S. I have an archive of data with an example of this case; I can upload it somewhere.
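
    The error text suggests the high watermark file was left empty, e.g. by a crash mid-write, so strconv.ParseInt receives "". A hedged sketch of a more tolerant parse (hypothetical names, not Liftbridge's actual code):

    package commitlog

    import (
    	"os"
    	"strconv"
    	"strings"
    )

    // parseHighWatermark treats an empty or whitespace-only file as "no HW
    // recorded" (-1) instead of failing startup with a parse error.
    func parseHighWatermark(path string) (int64, error) {
    	b, err := os.ReadFile(path)
    	if err != nil {
    		return -1, err
    	}
    	s := strings.TrimSpace(string(b))
    	if s == "" {
    		return -1, nil // empty file: recover the HW from the log instead
    	}
    	return strconv.ParseInt(s, 10, 64)
    }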

    opened by ekbfh 7
  • runtime error: slice bounds out of range

    Hi! Liftbridge 1.4.1.

    3 Liftbridge + 3 NATS servers, in a cluster of 3 with min ISR 2.

    How can I debug what produces this error and why Liftbridge failed?

    After a long retention run I got a panic:

    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="Finished cleaning log /var/lib/liftbridge/streams/events.DGT/0"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="Cleaning log /var/lib/liftbridge/streams/events.DGT/1 based on retention policy {Bytes:0 Messages:0 Age:24h0m0s}"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="Finished cleaning log /var/lib/liftbridge/streams/events.DGT/1"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="Cleaning log /var/lib/liftbridge/streams/events.DGT/2 based on retention policy {Bytes:0 Messages:0 Age:24h0m0s}"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="Finished cleaning log /var/lib/liftbridge/streams/events.DGT/2"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="Cleaning log /var/lib/liftbridge/streams/events.DGT/3 based on retention policy {Bytes:0 Messages:0 Age:24h0m0s}"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="Finished cleaning log /var/lib/liftbridge/streams/events.DGT/3"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="Cleaning log /var/lib/liftbridge/streams/dispose.DGT/0 based on retention policy {Bytes:0 Messages:0 Age:24h0m0s}"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="Finished cleaning log /var/lib/liftbridge/streams/dispose.DGT/0"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="Cleaning log /var/lib/liftbridge/streams/dispose.DGT/1 based on retention policy {Bytes:0 Messages:0 Age:24h0m0s}"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="Finished cleaning log /var/lib/liftbridge/streams/dispose.DGT/1"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="api: Publish [stream=events.IVANOVO, partition=1]"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="api: Publish [stream=events.IVANOVO, partition=1]"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="api: Publish [stream=events.IVANOVO, partition=3]"
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="api: Publish [stream=events.IVANOVO, partition=3]"
    
    янв 25 12:17:04 mevent04 liftbridge[30762]: time="2021-01-25 12:17:04" level=debug msg="Updated log leader epoch. New: {epoch:5, offset:20078}, Previous: {epoch:0, offset:-1} for log [subject=liftbridge-default.cursors, stream=__cursors, partition=0]. Cache now contains 1 entry."
    янв 25 12:17:04 mevent04 liftbridge[30762]: panic: runtime error: slice bounds out of range [:1635020668] with capacity 36
    янв 25 12:17:04 mevent04 liftbridge[30762]: goroutine 421 [running]:
    янв 25 12:17:04 mevent04 liftbridge[30762]: github.com/liftbridge-io/liftbridge/server/commitlog.SerializedMessage.Key(0xc002d2615c, 0x18, 0x24, 0x40, 0xc0045f2b10, 0x0)
    янв 25 12:17:04 mevent04 liftbridge[30762]: /home/circleci/project/server/commitlog/message.go:108 +0xc3
    янв 25 12:17:04 mevent04 liftbridge[30762]: github.com/liftbridge-io/liftbridge/server/commitlog.(*compactCleaner).cleanSegment(0xc000114990, 0xc00019b130, 0xc000e66000, 0xa7521, 0xc003fe89c0, 0xc000d60a50, 0x4, 0x0, 0x0)
    янв 25 12:17:04 mevent04 liftbridge[30762]: /home/circleci/project/server/commitlog/compact_cleaner.go:143 +0x1e3
    янв 25 12:17:04 mevent04 liftbridge[30762]: github.com/liftbridge-io/liftbridge/server/commitlog.(*compactCleaner).compact(0xc000114990, 0xa7521, 0xc0006bf040, 0x6, 0x8, 0x1, 0x1, 0xc003135d70, 0xb3a06f, 0xc0003f6e70, ...)
    янв 25 12:17:04 mevent04 liftbridge[30762]: /home/circleci/project/server/commitlog/compact_cleaner.go:101 +0x22d
    янв 25 12:17:04 mevent04 liftbridge[30762]: github.com/liftbridge-io/liftbridge/server/commitlog.(*compactCleaner).Compact(0xc000114990, 0xa7521, 0xc0006bf040, 0x6, 0x8, 0x6, 0x8, 0x0, 0x0, 0xc000151768, ...)
    янв 25 12:17:04 mevent04 liftbridge[30762]: /home/circleci/project/server/commitlog/compact_cleaner.go:53 +0x159
    янв 25 12:17:04 mevent04 liftbridge[30762]: github.com/liftbridge-io/liftbridge/server/commitlog.(*commitLog).clean(0xc000356a00, 0xc0006bf040, 0x6, 0x8, 0x1c18ad43, 0x3c0094deb277f9, 0xc000151658, 0xb47772, 0xbffbc1041c18ad43, 0x4773d10fca)
    янв 25 12:17:04 mevent04 liftbridge[30762]: /home/circleci/project/server/commitlog/commitlog.go:784 +0x112
    янв 25 12:17:04 mevent04 liftbridge[30762]: github.com/liftbridge-io/liftbridge/server/commitlog.(*commitLog).Clean(0xc000356a00, 0xc000151700, 0x0)
    янв 25 12:17:04 mevent04 liftbridge[30762]: /home/circleci/project/server/commitlog/commitlog.go:735 +0x95
    янв 25 12:17:04 mevent04 liftbridge[30762]: github.com/liftbridge-io/liftbridge/server/commitlog.(*commitLog).cleanerLoop(0xc000356a00)
    янв 25 12:17:04 mevent04 liftbridge[30762]: /home/circleci/project/server/commitlog/commitlog.go:724 +0x151
    янв 25 12:17:04 mevent04 liftbridge[30762]: created by github.com/liftbridge-io/liftbridge/server/commitlog.New
    янв 25 12:17:04 mevent04 liftbridge[30762]: /home/circleci/project/server/commitlog/commitlog.go:146 +0x595
    янв 25 12:17:04 mevent04 systemd[1]: liftbridge.service: main process exited, code=exited, status=2/INVALIDARGUMENT
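
    The panic ([:1635020668] with capacity 36) points at a corrupted key-size field being used to slice a serialized message during compaction. A hedged sketch of an explicit bounds check (hypothetical names, not the actual commitlog code):

    package commitlog

    import "fmt"

    // messageKey validates the decoded key offset and size against the buffer
    // before slicing, turning corruption into an error instead of a panic.
    func messageKey(buf []byte, keyOff, keySize int) ([]byte, error) {
    	if keyOff < 0 || keySize < 0 || keyOff+keySize > len(buf) {
    		return nil, fmt.Errorf("corrupt message: key [off=%d size=%d] exceeds %d-byte buffer",
    			keyOff, keySize, len(buf))
    	}
    	return buf[keyOff : keyOff+keySize], nil
    }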
    
    opened by ekbfh 3
  • Started work on a .Net client

    Hi!

    I recently started working on a .Net library for Liftbridge, written in C#. Work will possibly move quite slowly, but for visibility I thought it would be good to have an issue mentioning it here.

    Liftbridge.Net

    opened by mamaar 1