Efficient cache for gigabytes of data written in Go.

Overview

BigCache

Fast, concurrent, evicting in-memory cache, written to keep a large number of entries without degrading performance. BigCache keeps entries on the heap but omits GC for them. To achieve that, it operates on byte slices, so in most use cases entries will need to be (de)serialized in front of the cache.

Requires Go 1.12 or newer.

Usage

Simple initialization

import "github.com/allegro/bigcache"

cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))

cache.Set("my-unique-key", []byte("value"))

entry, _ := cache.Get("my-unique-key")
fmt.Println(string(entry))

Custom initialization

When the cache load can be predicted in advance, custom initialization is preferable because it avoids additional memory allocations.

import (
	"log"

	"github.com/allegro/bigcache"
)

config := bigcache.Config{
	// number of shards (must be a power of 2)
	Shards: 1024,

	// time after which an entry can be evicted
	LifeWindow: 10 * time.Minute,

	// interval between removing expired entries (clean up).
	// If set to <= 0 then no action is performed.
	// Setting it below 1 second is counterproductive because bigcache has a one-second resolution.
	CleanWindow: 5 * time.Minute,

	// rps * lifeWindow, used only in initial memory allocation
	MaxEntriesInWindow: 1000 * 10 * 60,

	// max entry size in bytes, used only in initial memory allocation
	MaxEntrySize: 500,

	// prints information about additional memory allocation
	Verbose: true,

	// cache will not allocate more memory than this limit, value in MB
	// if the limit is reached then the oldest entries can be overridden for the new ones
	// 0 value means no size limit
	HardMaxCacheSize: 8192,

	// callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called.
	// Default value is nil which means no callback; this also avoids unwrapping the oldest entry.
	OnRemove: nil,

	// OnRemoveWithReason is a callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. A constant representing the reason will be passed through.
	// Default value is nil which means no callback; this also avoids unwrapping the oldest entry.
	// Ignored if OnRemove is specified.
	OnRemoveWithReason: nil,
}

cache, initErr := bigcache.NewBigCache(config)
if initErr != nil {
	log.Fatal(initErr)
}

cache.Set("my-unique-key", []byte("value"))

if entry, err := cache.Get("my-unique-key"); err == nil {
	fmt.Println(string(entry))
}

LifeWindow & CleanWindow

  1. LifeWindow is a duration. Once an entry has lived longer than LifeWindow, it is considered dead, but it is not yet deleted.

  2. CleanWindow is an interval. Every CleanWindow, all dead entries are deleted; entries still within their LifeWindow are kept.

Benchmarks

Three caches were compared: bigcache, freecache and map. Benchmark tests were made using an i7-6700K CPU @ 4.00GHz with 32GB of RAM on Ubuntu 18.04 LTS (5.2.12-050212-generic).

The benchmark source code can be found in the caches_bench package.

Writes and reads

go version
go version go1.13 linux/amd64

go test -bench=. -benchmem -benchtime=4s ./... -timeout 30m
goos: linux
goarch: amd64
pkg: github.com/allegro/bigcache/v2/caches_bench
BenchmarkMapSet-8                     	12999889	       376 ns/op	     199 B/op	       3 allocs/op
BenchmarkConcurrentMapSet-8           	 4355726	      1275 ns/op	     337 B/op	       8 allocs/op
BenchmarkFreeCacheSet-8               	11068976	       703 ns/op	     328 B/op	       2 allocs/op
BenchmarkBigCacheSet-8                	10183717	       478 ns/op	     304 B/op	       2 allocs/op
BenchmarkMapGet-8                     	16536015	       324 ns/op	      23 B/op	       1 allocs/op
BenchmarkConcurrentMapGet-8           	13165708	       401 ns/op	      24 B/op	       2 allocs/op
BenchmarkFreeCacheGet-8               	10137682	       690 ns/op	     136 B/op	       2 allocs/op
BenchmarkBigCacheGet-8                	11423854	       450 ns/op	     152 B/op	       4 allocs/op
BenchmarkBigCacheSetParallel-8        	34233472	       148 ns/op	     317 B/op	       3 allocs/op
BenchmarkFreeCacheSetParallel-8       	34222654	       268 ns/op	     350 B/op	       3 allocs/op
BenchmarkConcurrentMapSetParallel-8   	19635688	       240 ns/op	     200 B/op	       6 allocs/op
BenchmarkBigCacheGetParallel-8        	60547064	        86.1 ns/op	     152 B/op	       4 allocs/op
BenchmarkFreeCacheGetParallel-8       	50701280	       147 ns/op	     136 B/op	       3 allocs/op
BenchmarkConcurrentMapGetParallel-8   	27353288	       175 ns/op	      24 B/op	       2 allocs/op
PASS
ok  	github.com/allegro/bigcache/v2/caches_bench	256.257s

Writes and reads in bigcache are faster than in freecache. Writes to the mutex-protected concurrent map are the slowest.

GC pause time

go version
go version go1.13 linux/amd64

go run caches_gc_overhead_comparison.go

Number of entries:  20000000
GC pause for bigcache:  1.506077ms
GC pause for freecache:  5.594416ms
GC pause for map:  9.347015ms
go version
go version go1.13 linux/arm64

go run caches_gc_overhead_comparison.go
Number of entries:  20000000
GC pause for bigcache:  22.382827ms
GC pause for freecache:  41.264651ms
GC pause for map:  72.236853ms

The test shows how long the GC pauses are for caches filled with 20 million entries. Bigcache and freecache have very similar GC pause times.

Memory usage

You may observe what appears to be an exponential increase in reported system memory; this is expected behaviour. The Go runtime allocates memory in chunks, or 'spans', and informs the OS when they are no longer required by marking them 'idle'. The spans remain part of the process's resource usage until the OS needs to repurpose the address space. Further reading is available here.

How it works

BigCache relies on an optimization introduced in Go 1.5 (issue-9477): if a map contains no pointers in its keys or values, the GC omits scanning its content. BigCache therefore uses a map[uint64]uint32, where keys are hashes and values are offsets of entries.

Entries are kept in byte slices, again to avoid GC scanning. A byte slice can grow to gigabytes without impacting performance because the GC sees only a single pointer to it.
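
The effect is easy to measure with the standard library alone: force a GC over a large pointer-free map and over an equivalent pointer-laden one, and compare pause times. This is an illustration, not a rigorous benchmark; absolute numbers vary by machine:

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
	"time"
)

// gcPause forces a collection and returns its stop-the-world pause.
func gcPause() time.Duration {
	runtime.GC()
	var stats debug.GCStats
	debug.ReadGCStats(&stats)
	return stats.Pause[0]
}

func main() {
	const n = 2000000

	// Pointer-free map: the GC skips scanning its content.
	noPtr := make(map[uint64]uint32, n)
	for i := 0; i < n; i++ {
		noPtr[uint64(i)] = uint32(i)
	}
	fmt.Println("map[uint64]uint32 GC pause:", gcPause())

	// Map with pointers: every key and value must be scanned.
	withPtr := make(map[string][]byte, n)
	for i := 0; i < n; i++ {
		withPtr[fmt.Sprint(i)] = make([]byte, 8)
	}
	fmt.Println("map[string][]byte GC pause:", gcPause())

	runtime.KeepAlive(noPtr)
	runtime.KeepAlive(withPtr)
}
```

On a typical machine the second pause is one to two orders of magnitude longer.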

Collisions

BigCache does not handle collisions. When a new item is inserted and its hash collides with a previously stored item, the new item overwrites the previously stored value.

Bigcache vs Freecache

Both caches provide the same core features, but they reduce GC overhead in different ways: bigcache relies on map[uint64]uint32, while freecache implements its own mapping built on slices to reduce the number of pointers.

Results from the benchmark tests are presented above. One advantage of bigcache over freecache is that you don’t need to know the size of the cache in advance: when bigcache is full, it can allocate additional memory for new entries instead of overwriting existing ones, as freecache currently does. A hard maximum size can still be set in bigcache; see HardMaxCacheSize.

HTTP Server

This package also includes an easily deployable HTTP implementation of BigCache, which can be found in the server package.

More

Bigcache's genesis is described in the allegro.tech blog post: writing a very fast cache service in Go

License

BigCache is released under the Apache 2.0 license (see LICENSE)

Issues
  • panic out of range in bytes queue

    https://github.com/allegro/bigcache/blob/bbf64ae21fc5555f4e9752825ee9ffe13b1e5fa0/queue/bytes_queue.go#L222

    It appears there is a bounds check before the blockSize, but then there is an assumption that adding block size is not out of bounds.

    bug 
    opened by codyohl 31
  • Added basic HTTP server implementation.

    Features: Basic HTTP server. Basic middleware implementation.

    I wanted to provide some thoughts on this implementation.

    API Surface:

    GET /api/v1/cache/{key}
    PUT /api/v1/cache/{key}
    

    The API path denotes it's an API which is versioned, so making changes/implementing new features allows it to be versioned easily. I felt PUT was more relevant than POST because bigcache.BigCache.Set() will overwrite an existing key of the same name. PUT more closely aligns with the semantic aspect of the feature as opposed to the spirit. When looking at the /cache/ hierarchy of the path, there was a PR I saw recently which looked to implement stats about the cache. If it gets approved, you can add /api/v1/stats to the API surface as an extra API. Overall, this API surface seemed easily sustainable to me.

    HTTP Implementation:

    What I love about this package is the focus is on performance and the standard library. My proposal suggested using the standard library, so I felt that was important. When looking at implementation options, I wanted something that could be easily extensible without pulling in dependencies. This post gave me a great idea, and I liked the adapter idea because it's highly extensible using only the standard library. If you want to implement request tracing, it's easy to do by adding a quick service adapter; if you want to implement some form of authentication or geo-limitation, just add another service adapter. The requestMetrics() feature, to me, is both a way to log basic metrics about the HTTP server, as well as an example for a basic service adapter.

    Overall, my goal with this implementation was a focus on the standard library with no external dependencies in a highly extensible way.

    opened by mxplusb 18
  • Use uint64 instead of uint32

    There is a possibility that we run into an int32 overflow. To prevent this, let's use uint64 everywhere.

    https://github.com/allegro/bigcache/blob/21e5ca5c3d539f94e8dc563350acd97c5400154f/shard.go#L138

    Fixes: https://github.com/allegro/bigcache/issues/148

    opened by janisz 15
  • Add remove reason signal to OnRemove callback

    I'm still pretty new to golang, so this PR probably has a C-like odor to it (especially with regard to the bitmasks). The general purpose is to indicate in the OnRemove callback why a key is being removed. There are 3 reasons, represented as bitmasks by leftshifting iota in a const expression. I'm not sure if there's a more go-like approach, but I'd like to see support for this feature so I thought I'd open a PR and get a discussion going.

    As for the API: I've made a small breaking change. Users of the library will have to add a parameter to their OnRemove callbacks, which can be ignored. If it's preferable to add new functions instead of changing the signature of existing ones, I'd be happy to adapt the code.

    opened by jshufro 14
  • fix: panic when update and iterate simultaneously

    1. Fix panic when update and iteration simultaneously. Related to https://github.com/allegro/bigcache/issues/222.
    2. Add panic recover in cleanUp to prevent the program from exiting. Related to https://github.com/allegro/bigcache/issues/226, https://github.com/allegro/bigcache/issues/148; this just protects the main program from exiting.
    3. The bytes queue sets full back to false after allocating additional memory. Also added a test case reproducing this problem.
    opened by WideLee 13
  • Memory usage grows indefinitely

    Hello,

    I've been playing around with bigcache and I've noticed that calling Set() with the same keys causes the memory usage to grow indefinitely. Here's an example:

    cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))
    data := []byte("TESTDATATESTDATATESTDATATESTDATATESTDATATESTDATATESTDATA")
    
    for {
    	for i := 0; i < 10000; i++ {
    		cache.Set(strconv.Itoa(i), data)
    	}
    	time.Sleep(time.Second)
    }
    

    Running that causes the memory usage of the application to grow indefinitely until I run out of memory. Is this expected behaviour? I'm just using bigcache as if it were a concurrent map, and I would have expected the elements to get replaced, so memory usage shouldn't grow beyond what is necessary for 10,000 elements.

    discussion 
    opened by tangtony 12
  • Improve encapsulation of shards

    • Get, Set methods are moved from bigcache.go to shard.go
    • initNewShard has additional param clock
    • all locks logic exists only inside shard
    • get rid of copyCurrentShardMap
    • getIndex and oldest methods added to shard
    • introduced index variable in getShard

    Next PR will introduce Shard interface, newShard method and additional changes.

    opened by cristaloleg 12
  • HardMaxCacheSize exceeded

    Sorry if this is a complete duplicate of https://github.com/allegro/bigcache/issues/18 , but that issue is 3 years old. And I see the same problem.

    I even took the code from that issue. Here is my output:

    $ go run .
    2019/06/17 18:41:17 profile: memory profiling enabled (rate 4096), mem.pprof
    Number of entries: 200000
    Alloc:      0 MB
    Alloc:     46 MB
    Alloc:     55 MB
    Alloc:     78 MB
    Alloc:     61 MB
    Alloc:     87 MB
    Alloc:     70 MB
    Alloc:     50 MB
    Alloc:     74 MB
    Alloc:     57 MB
    Alloc:     83 MB
    2019/06/17 18:41:20 profile: memory profiling disabled, mem.pprof
    
    $ go tool pprof mem.pprof
    Type: inuse_space
    Time: Jun 17, 2019 at 6:41pm (MSK)
    Entering interactive mode (type "help" for commands, "o" for options)
    (pprof) top
    Showing nodes accounting for 34.85MB, 99.33% of 35.09MB total
    Dropped 13 nodes (cum <= 0.18MB)
          flat  flat%   sum%        cum   cum%
       31.56MB 89.96% 89.96%    31.56MB 89.96%  github.com/allegro/bigcache/queue.NewBytesQueue
        3.29MB  9.37% 99.33%    34.86MB 99.36%  github.com/allegro/bigcache.initNewShard
             0     0% 99.33%    34.86MB 99.36%  github.com/allegro/bigcache.NewBigCache
             0     0% 99.33%    34.86MB 99.36%  github.com/allegro/bigcache.newBigCache
             0     0% 99.33%    35.08MB   100%  main.main
             0     0% 99.33%    35.08MB   100%  runtime.main
    

    Quite unexpected for HardMaxCacheSize: 1.

    bug question 
    opened by tetafro 11
  • Data race when running test with 1000 goroutines on Travis CI

    I'm developing a key-value store abstraction and implementation / wrapper package for Go and one of the implementations is for BigCache. I have a test that launches 1000 goroutines to concurrently interact with the underlying store. On my local machine it works fine all the time, but on Travis CI I sometimes get this warning and then a subsequent error:

    WARNING: DATA RACE
    Write at 0x00c43a932018 by goroutine 163:
      runtime.slicecopy()
          /home/travis/.gimme/versions/go1.10.linux.amd64/src/runtime/slice.go:192 +0x0
      github.com/allegro/bigcache/queue.(*BytesQueue).push()
          /home/travis/gopath/src/github.com/allegro/bigcache/queue/bytes_queue.go:129 +0x2ca
      github.com/allegro/bigcache/queue.(*BytesQueue).Push()
          /home/travis/gopath/src/github.com/allegro/bigcache/queue/bytes_queue.go:81 +0xf0
      github.com/allegro/bigcache.(*cacheShard).set()
          /home/travis/gopath/src/github.com/allegro/bigcache/shard.go:75 +0x209
      github.com/allegro/bigcache.(*BigCache).Set()
          /home/travis/gopath/src/github.com/allegro/bigcache/bigcache.go:117 +0x153
      github.com/philippgille/gokv/bigcache.Store.Set()
          /home/travis/gopath/src/github.com/philippgille/gokv/bigcache/bigcache.go:42 +0x1e3
      github.com/philippgille/gokv/bigcache.(*Store).Set()
          <autogenerated>:1 +0xa0
      github.com/philippgille/gokv/test.InteractWithStore()
          /home/travis/gopath/src/github.com/philippgille/gokv/test/test.go:306 +0x1d0
    

    For the full test output see: https://travis-ci.org/philippgille/gokv/builds/468489707#L1206

    The test itself is: https://github.com/philippgille/gokv/blob/e48dea7fdf56ca55fecd32be28d8fd895682ae3a/bigcache/bigcache_test.go#L42 The implementation is: https://github.com/philippgille/gokv/blob/e48dea7fdf56ca55fecd32be28d8fd895682ae3a/bigcache/bigcache.go

    Is this an error in the way I use BigCache? Or is this a bug in BigCache itself?

    help wanted question 
    opened by philippgille 11
  • Remove unsafe code for appengine

    This function should be replaced with a safe versions and protected with a build tag:

    func bytesToString(b []byte) string {
    	return string(b)
    }
    

    See: https://github.com/allegro/bigcache/issues/96

    enhancement help wanted hacktoberfest good-first-issue 
    opened by cristaloleg 11
  • Implemented new Append() method

    Implemented a new Append() method with proper locking. Without this method you would need to wrap the Get()/Set() part with your own locks, hurting performance.

    Fixes #158

    opened by snacker81 10
  • Reset() doesn't work

    What is the issue you are having? Resetting the shards no longer works.

    What is BigCache doing that it shouldn't? It is not emptying the shards.

    Using the Stats() method I can see how many items are in the cache:

    cache, initErr := bigcache.NewBigCache(config)
    cache.Stats()
    
    Output: {"hits":1222,"misses":2,"delete_hits":0,"delete_misses":0,"collisions":0}
    

    but when I run the Reset() method to flush the shards, they are not flushed/emptied; in previous versions this method worked correctly.

    cache, initErr := bigcache.NewBigCache(config)
    cache.Reset()
    

    After I run Reset() the stats are the same. Output: {"hits":1222,"misses":2,"delete_hits":0,"delete_misses":0,"collisions":0}

    Environment:

    • Version (git sha or release): [email protected]
    • OS (e.g. from /etc/os-release or winver.exe): Debian GNU/Linux 10 (buster)
    • go version: 1.18.3
    bug 
    opened by abolinhas 0
  • [Question] Can bigcache add a func to reset stats

    I want to calculate the cache hit rate over 10-second intervals, but Stats() reports the hit rate since the cache was initialized. Could bigcache add a function to reset stats? For example:

    func (c *BigCache) ResetStats() {
    	for _, shard := range c.shards {
    		 shard.resetStats()
    	}
    }
    
    func (s *cacheShard) resetStats() {
    	s.stats = Stats{}
    }
    

    Of course, I can record the last Stats, get new Stats after 10 seconds, and then use the difference to calculate the hit rate, but I think a reset-stats function would make the code clearer.

    opened by blacklensama 0
  • shard onEvict gets an empty oldestEntry, so it panics without unlocking

    What is the issue you are having?

    shard onEvict gets an empty oldestEntry, so it panics without unlocking.

    What is BigCache doing that it shouldn't? fix panic

    Minimal, Complete, and Verifiable Example

    runtime error: index out of range [7] with length 0
    goroutine 60116345 [running]:
    runtime/debug.Stack(0x2153be0, 0x22c8700, 0xc1ec77bf40)
    	/usr/local/go/src/runtime/debug/stack.go:24 +0x9f
    git.in.zhihu.com/go/utils.SafelyRun.func1(0xc1ea5fdf40)
    	/go/pkg/mod/git.in.zhihu.com/go/[email protected]/concurrent.go:20 +0x78
    panic(0x22c8700, 0xc1ec77bf40)
    	/usr/local/go/src/runtime/panic.go:969 +0x175
    encoding/binary.littleEndian.Uint64(...)
    	/usr/local/go/src/encoding/binary/binary.go:77
    github.com/allegro/bigcache/v3.readTimestampFromEntry(...)
    	/go/pkg/mod/github.com/allegro/bigcache/[email protected]/encoding.go:58
    github.com/allegro/bigcache/v3.(*cacheShard).onEvict(0xc0ba3fd7a0, 0xc166605820, 0x0, 0xbc7e0, 0x623d7f7d, 0xc1ea5fd730, 0x0)
    	/go/pkg/mod/github.com/allegro/bigcache/[email protected]/shard.go:271 +0x85
    github.com/allegro/bigcache/v3.(*cacheShard).set(0xc0ba3fd7a0, 0xc1ecdc1ce0, 0x23, 0xd8bc05fe228582f8, 0xc1ecdd9200, 0x29d, 0x2c6, 0x0, 0x0)
    	/go/pkg/mod/github.com/allegro/bigcache/[email protected]/shard.go:134 +0x31a
    github.com/allegro/bigcache/v3.(*BigCache).Set(0xc000214680, 0xc1ecdc1ce0, 0x23, 0xc1ecdd9200, 0x29d, 0x2c6, 0x0, 0x2105a00)
    


    Environment:

    • Version (git sha or release): v3.0.1
    • OS (e.g. from /etc/os-release or winver.exe): linux
    • go version: go 1.15
    bug 
    opened by s1040735149 1
  • Memory usage grows indefinitely when setting same key within eviction interval

    Memory usage grows indefinitely when setting same key within eviction interval

    This is effectively a follow-up on #109 which is closed for some reason.

    We have experienced a bug in production service when hard limits were removed and OOM killed the app. The app has to hold some data in cache which is then repeatedly re-read from DB and re-set in a fixed interval of ~30 min. Keys for saving data in memory are always the same. What we observed is that after few hours memory consumption in our service has grown above any limit.

    I slightly modified the code snippet from #109 to experiment with and reproduce Bigcache's behavior:

    package main
    
    import (
    	"strconv"
    	"time"
    
    	"github.com/allegro/bigcache/v3"
    )
    
    func main() {
    	evictionInterval := time.Minute
    
    	cacheCfg := bigcache.DefaultConfig(evictionInterval)
    	// cacheCfg.CleanWindow = time.Second
    	cacheCfg.Verbose = false
    	// cacheCfg.HardMaxCacheSize = 100
    
    	cache, _ := bigcache.NewBigCache(cacheCfg)
    	data := []byte("TESTDATATESTDATATESTDATATESTDATATESTDATATESTDATATESTDATA")
    
    	for {
    		for i := 0; i < 10000; i++ {
    			if err := cache.Set(strconv.Itoa(i), data); err != nil {
    				panic(err)
    			}
    		}
    		time.Sleep(100 * time.Millisecond)
    	}
    }
    

    The memory usage growth depends only on evictionInterval. So, for instance, on my linux machine, when evictionInterval set to

    • 1 minute, RSS is ~877M
    • 2 minutes, RSS is ~1680M
    • etc

    So if evictionInterval is big enough and we keep setting data with same key, we would end up with OOM killer.

    It doesn't matter whether GODEBUG=madvdontneed=1 is set or not. I run with this param, but it just seems to not affect anything.

    Please note the commented-out // cacheCfg.CleanWindow = time.Second line: I tried setting this param to different values starting from 1 second, and it didn't help.

    As a result the only way to limit memory consumption and prevent OOM is to set HardMaxCacheSize.

    bug 
    opened by inliquid 0