Fast key-value DB in Go.

Overview



BadgerDB is an embeddable, persistent and fast key-value (KV) database written in pure Go. It is the underlying database for Dgraph, a fast, distributed graph database. It's meant to be a performant alternative to non-Go-based key-value stores like RocksDB.

Use Discuss Issues to report issues about this repository.

Project Status [March 24, 2020]

Badger is stable and is being used to serve data sets worth hundreds of terabytes. Badger supports concurrent ACID transactions with serializable snapshot isolation (SSI) guarantees. A Jepsen-style bank test runs nightly for 8 hours with the --race flag, to verify that transactional guarantees are maintained. Badger has also been tested against filesystem-level anomalies to ensure persistence and consistency. Badger is used by a number of projects, including Dgraph, Jaeger Tracing, UsenetExpress, and many more.

The list of projects using Badger can be found here.

Badger v1.0 was released in Nov 2017, and the latest version that is data-compatible with v1.0 is v1.6.0.

Badger v2.0 was released in Nov 2019 with a new storage format that is incompatible with the v1.x format. Badger v2.0 supports compression and encryption, and uses a cache to speed up lookups.

The Changelog is kept fairly up-to-date.

For more details on our version naming schema please read Choosing a version.


Getting Started

Installing

To start using Badger, install Go 1.12 or above. Badger v2 and above requires Go modules. Run the following command to retrieve the library.

$ go get github.com/dgraph-io/badger/v3


Note: Badger does not directly use CGO, but it relies on https://github.com/DataDog/zstd for compression, which requires gcc/cgo. If you wish to use Badger without gcc/cgo, you can run CGO_ENABLED=0 go get github.com/dgraph-io/badger/v3, which will download Badger without support for the ZSTD compression algorithm.

Installing Badger Command Line Tool

Download and extract the latest Badger DB release from https://github.com/dgraph-io/badger/releases and then run the following commands.

$ cd badger-<version>/badger
$ go install

This will install the badger command line utility into your $GOBIN path.

Choosing a version

BadgerDB is a special package in that the most important changes we can make to it are not to its API but to how data is stored on disk.

This is why we follow a version naming schema that differs from Semantic Versioning.

  • New major versions are released when the data format on disk changes in an incompatible way.
  • New minor versions are released whenever the API changes but data compatibility is maintained. Note that, unlike in Semantic Versioning, these API changes may be backward-incompatible.
  • New patch versions are released when there are no changes to either the data format or the API.

Following these rules:

  • v1.5.0 and v1.6.0 can be used on top of the same files without any concerns, as their major version is the same, therefore the data format on disk is compatible.
  • v1.6.0 and v2.0.0 are data incompatible as their major version implies, so files created with v1.6.0 will need to be converted into the new format before they can be used by v2.0.0.
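The rule is mechanical: two releases can share on-disk files exactly when their major versions match. A hypothetical helper (dataCompatible is illustrative, not part of Badger's API) that encodes it:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// dataCompatible reports whether two Badger versions (e.g. "v1.5.0", "v2.0.0")
// can operate on the same on-disk files. Under Badger's versioning schema,
// only the major version determines data-format compatibility.
func dataCompatible(a, b string) bool {
	return major(a) == major(b)
}

// major extracts the major version number, accepting a leading "v".
func major(v string) int {
	v = strings.TrimPrefix(v, "v")
	m, err := strconv.Atoi(strings.SplitN(v, ".", 2)[0])
	if err != nil {
		return -1
	}
	return m
}

func main() {
	fmt.Println(dataCompatible("v1.5.0", "v1.6.0")) // true: same data format
	fmt.Println(dataCompatible("v1.6.0", "v2.0.0")) // false: format changed
}
```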

For a longer explanation of the reasons behind this versioning schema, you can read VERSIONING.md.

Badger Documentation

Badger Documentation is available at https://dgraph.io/docs/badger

Resources

Blog Posts

  1. Introducing Badger: A fast key-value store written natively in Go
  2. Make Badger crash resilient with ALICE
  3. Badger vs LMDB vs BoltDB: Benchmarking key-value databases in Go
  4. Concurrent ACID Transactions in Badger

Design

Badger was written with these design goals in mind:

  • Write a key-value database in pure Go.
  • Use latest research to build the fastest KV database for data sets spanning terabytes.
  • Optimize for SSDs.

Badger’s design is based on a paper titled WiscKey: Separating Keys from Values in SSD-conscious Storage.
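The core WiscKey idea, keeping a compact key index that points into an append-only value log stored separately, can be illustrated with a toy in-memory sketch. This is not Badger's actual implementation (real value-log entries carry headers, checksums, and larger length fields, and the index is an LSM tree, not a map):

```go
package main

import "fmt"

// toyWisc separates keys from values: the index maps a key to an offset in
// the value log, so the index stays small while large values live in an
// append-only log.
type toyWisc struct {
	index map[string]int // key -> offset into vlog
	vlog  []byte         // append-only value log
}

func newToyWisc() *toyWisc {
	return &toyWisc{index: make(map[string]int)}
}

// set appends a length-prefixed entry to the value log and records its
// offset in the index. (Toy limitation: values must be under 256 bytes.)
func (t *toyWisc) set(key string, value []byte) {
	off := len(t.vlog)
	t.vlog = append(t.vlog, byte(len(value)))
	t.vlog = append(t.vlog, value...)
	t.index[key] = off
}

// get looks up the offset in the index and reads the value from the log.
func (t *toyWisc) get(key string) ([]byte, bool) {
	off, ok := t.index[key]
	if !ok {
		return nil, false
	}
	n := int(t.vlog[off])
	return t.vlog[off+1 : off+1+n], true
}

func main() {
	db := newToyWisc()
	db.set("k1", []byte("a large value"))
	db.set("k2", []byte("another value"))
	v, _ := db.get("k1")
	fmt.Printf("%s\n", v) // a large value
}
```

Because only small offsets move through the index, compactions rewrite far fewer bytes, which is where the reduced write amplification comes from.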

Comparisons

Feature                         Badger                             RocksDB              BoltDB
Design                          LSM tree with value log            LSM tree only        B+ tree
High read throughput            Yes                                No                   Yes
High write throughput           Yes                                Yes                  No
Designed for SSDs               Yes (with latest research 1)       Not specifically 2   No
Embeddable                      Yes                                Yes                  Yes
Sorted KV access                Yes                                Yes                  Yes
Pure Go (no Cgo)                Yes                                No                   Yes
Transactions                    Yes, ACID, concurrent with SSI 3   Yes (but non-ACID)   Yes, ACID
Snapshots                       Yes                                Yes                  Yes
TTL support                     Yes                                Yes                  No
3D access (key-value-version)   Yes 4                              No                   No

1 The WiscKey paper (on which Badger is based) saw big wins from separating values from keys, significantly reducing write amplification compared to a typical LSM tree.

2 RocksDB is an SSD-optimized version of LevelDB, which was designed specifically for rotating disks. As such, RocksDB's design isn't aimed at SSDs.

3 SSI: Serializable Snapshot Isolation. For more details, see the blog post Concurrent ACID Transactions in Badger

4 Badger provides direct access to value versions via its Iterator API. Users can also specify how many versions to keep per key via Options.

Benchmarks

We have run comprehensive benchmarks against RocksDB, Bolt and LMDB. The benchmarking code and the detailed logs for the benchmarks can be found in the badger-bench repo. More explanation, including graphs, can be found in the blog posts (linked above).

Projects Using Badger

Below is a list of known projects that use Badger:

  • Dgraph - Distributed graph database.
  • Jaeger - Distributed tracing platform.
  • go-ipfs - Go client for the InterPlanetary File System (IPFS), a new hypermedia distribution protocol.
  • Riot - An open-source, distributed search engine.
  • emitter - Scalable, low latency, distributed pub/sub broker with message storage, uses MQTT, gossip and badger.
  • OctoSQL - Query tool that allows you to join, analyse and transform data from multiple databases using SQL.
  • Dkron - Distributed, fault tolerant job scheduling system.
  • smallstep/certificates - Step-ca is an online certificate authority for secure, automated certificate management.
  • Sandglass - distributed, horizontally scalable, persistent, time sorted message queue.
  • TalariaDB - Grab's Distributed, low latency time-series database.
  • Sloop - Salesforce's Kubernetes History Visualization Project.
  • Immudb - Lightweight, high-speed immutable database for systems and applications.
  • Usenet Express - Serving over 300TB of data with Badger.
  • gorush - A push notification server written in Go.
  • 0-stor - Single device object store.
  • Dispatch Protocol - Blockchain protocol for distributed application data analytics.
  • GarageMQ - AMQP server written in Go.
  • RedixDB - A real-time persistent key-value store with the same redis protocol.
  • BBVA - Raft backend implementation using BadgerDB for Hashicorp raft.
  • Fantom - aBFT Consensus platform for distributed applications.
  • decred - An open, progressive, and self-funding cryptocurrency with a system of community-based governance integrated into its blockchain.
  • OpenNetSys - Create useful dApps in any software language.
  • HoneyTrap - An extensible and opensource system for running, monitoring and managing honeypots.
  • Insolar - Enterprise-ready blockchain platform.
  • IoTeX - The next generation of the decentralized network for IoT powered by scalability- and privacy-centric blockchains.
  • go-sessions - The sessions manager for Go net/http and fasthttp.
  • Babble - BFT Consensus platform for distributed applications.
  • Tormenta - Embedded object-persistence layer / simple JSON database for Go projects.
  • BadgerHold - An embeddable NoSQL store for querying Go types built on Badger.
  • Goblero - Pure Go embedded persistent job queue backed by BadgerDB.
  • Surfline - Serving global wave and weather forecast data with Badger.
  • Cete - Simple and highly available distributed key-value store built on Badger. Makes it easy to bring up a cluster of Badger nodes with the Raft consensus algorithm via hashicorp/raft.
  • Volument - A new take on website analytics backed by Badger.
  • KVdb - Hosted key-value store and serverless platform built on top of Badger.
  • Terminotes - Self hosted notes storage and search server - storage powered by BadgerDB.

If you are using Badger in a project please send a pull request to add it to the list.

Contributing

If you're interested in contributing to Badger see CONTRIBUTING.md.

Contact

Comments
  • RunValueLogGC crashed


    What version of Go are you using (go version)?

    $ go version
    go version go1.13.4 linux/amd64
    

    What version of Badger are you using?

    v2.0.0

    Does this issue reproduce with the latest master?

    Never tried

    What are the hardware specifications of the machine (RAM, OS, Disk)?

    Linux 64 SSD

    What did you do?

    	opts := badger.DefaultOptions(dir)
    	opts.SyncWrites = sync
    	db, err := badger.Open(opts)
    	if err != nil {
    		return nil, err
    	}
    	db.RunValueLogGC(0.1)
    
    	go func() {
    		ticker := time.NewTicker(1 * time.Minute)
    		defer ticker.Stop()
    		for range ticker.C {
    			lsm, vlog := db.Size()
    			if lsm > 1024*1024*8 || vlog > 1024*1024*32 {
    				db.RunValueLogGC(0.5)
    			}
    		}
    	}()
    

    What did you expect to see?

    Run value log gc should work

    What did you see instead?

    mixin[28404]: github.com/dgraph-io/badger/v2/y.AssertTrue
    mixin[28404]:         /home/one/GOPATH/pkg/mod/github.com/dgraph-io/badger/[email protected]/y/error.go:55
    mixin[28404]: github.com/dgraph-io/badger/v2.(*valueLog).doRunGC.func2
    mixin[28404]:         /home/one/GOPATH/pkg/mod/github.com/dgraph-io/badger/[email protected]/value.go:1591
    mixin[28404]: github.com/dgraph-io/badger/v2.(*valueLog).iterate
    mixin[28404]:         /home/one/GOPATH/pkg/mod/github.com/dgraph-io/badger/[email protected]/value.go:480
    mixin[28404]: github.com/dgraph-io/badger/v2.(*valueLog).doRunGC
    mixin[28404]:         /home/one/GOPATH/pkg/mod/github.com/dgraph-io/badger/[email protected]/value.go:1557
    mixin[28404]: github.com/dgraph-io/badger/v2.(*valueLog).runGC
    mixin[28404]:         /home/one/GOPATH/pkg/mod/github.com/dgraph-io/badger/[email protected]/value.go:1685
    mixin[28404]: github.com/dgraph-io/badger/v2.(*DB).RunValueLogGC
    mixin[28404]:         /home/one/GOPATH/pkg/mod/github.com/dgraph-io/badger/[email protected]/db.go:1129
    mixin[28404]: github.com/MixinNetwork/mixin/storage.openDB.func1
    mixin[28404]:         /home/one/github/mixin/storage/badger.go:68
    mixin[28404]: runtime.goexit
    mixin[28404]:         /snap/go/4762/src/runtime/asm_amd64.s:1357
    

    badger.go:68 db.RunValueLogGC(0.5)

    kind/maintenance priority/P1 status/accepted area/crash 
    opened by cedricfung 46
  • ARMv7 segmentation fault in oracle.readTs when calling loadUint64


    I am facing an issue running badger on an ARMv7 architecture. The minimal test case below works quite fine on an amd64 machine but, unfortunately, not on ARMv7 32bit.

    The trace below shows that the issue originates in atomic.loadUint64(), but I also ran basic atomic-operations tests against the golang runtime, and they work fine on this architecture.

    It looks to me that the underlying memory of oracle.curRead somehow vanishes but I am not sure.

    Below you find also a strace trace. There the segmentation fault happens after the madvise, but I am not sure if this is related.

    Badger version: 1.0.1 (89689ef36cae26ae094cb5ea79b7400d839f2d68) golang version: 1.8.5 and 1.9.2

    Test case:

    func TestPersistentCache_DirectBadger(t *testing.T) {
    	dir, err := ioutil.TempDir("", "")
    	if err != nil {
    		t.Fatal(err)
    	}
    	defer os.RemoveAll(dir)
    
    	config := badger.DefaultOptions
    	config.TableLoadingMode = options.MemoryMap
    	config.ValueLogFileSize = 16 << 20
    	config.LevelOneSize = 8 << 20
    	config.MaxTableSize = 2 << 20
    	config.Dir = dir
    	config.ValueDir = dir
    	config.SyncWrites = false
    
    	db, err := badger.Open(config)
    	if err != nil {
    		t.Fatalf("cannot open db at location %s: %v", dir, err)
    	}
    
    	err = db.View(func(txn *badger.Txn) error { return nil })
    
    	if err != nil {
    		t.Fatal(err)
    	}
    
    	if err = db.Close(); err != nil {
    		t.Fatal(err)
    	}
    }
    
    === RUN   TestPersistentCache_DirectBadger
    --- FAIL: TestPersistentCache_DirectBadger (0.01s)
    panic: runtime error: invalid memory address or nil pointer dereference [recovered]
            panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x4 pc=0x1150c]
    
    goroutine 5 [running]:
    testing.tRunner.func1(0x10a793b0)
            /usr/lib/go/src/testing/testing.go:711 +0x2a0
    panic(0x3e4bd8, 0x6bb478)
            /usr/lib/go/src/runtime/panic.go:491 +0x204
    sync/atomic.loadUint64(0x10a483cc, 0x200000, 0x0)
            /usr/lib/go/src/sync/atomic/64bit_arm.go:10 +0x3c
    github.com/grid-x/client/vendor/github.com/dgraph-io/badger.(*oracle).readTs(0x10a483c0, 0x14, 0x5)
            /home/robert/Projects/gridx/client/src/github.com/grid-x/client/vendor/github.com/dgraph-io/badger/transaction.go:87 +0x3c
    github.com/grid-x/client/vendor/github.com/dgraph-io/badger.(*DB).NewTransaction(0x10b06000, 0x0, 0x4cccc)
            /home/robert/Projects/gridx/client/src/github.com/grid-x/client/vendor/github.com/dgraph-io/badger/transaction.go:440 +0x20
    github.com/grid-x/client/vendor/github.com/dgraph-io/badger.(*DB).View(0x10b06000, 0x464e20, 0x0, 0x0)
            /home/robert/Projects/gridx/client/src/github.com/grid-x/client/vendor/github.com/dgraph-io/badger/transaction.go:457 +0x3c
    command-line-arguments.TestPersistentCache_DirectBadger(0x10a793b0)
            /home/robert/Projects/gridx/client/src/github.com/grid-x/client/pkg/cache/persistent_cache_test.go:64 +0x1e8
    testing.tRunner(0x10a793b0, 0x464e24)
            /usr/lib/go/src/testing/testing.go:746 +0xb0
    created by testing.(*T).Run
            /usr/lib/go/src/testing/testing.go:789 +0x258
    

    strace:

    [pid 15075] mmap2(NULL, 33554432, PROT_READ, MAP_SHARED, 6, 0) = 0xb4dff000                                   
    [pid 15075] madvise(0xb4dff000, 33554432, MADV_RANDOM) = 0                     
    [pid 15075] clock_gettime(CLOCK_MONOTONIC, {tv_sec=69709, tv_nsec=217038306}) = 0                            
    [pid 15075] --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x4} ---
    [pid 15075] rt_sigreturn()              = 0                                       
    
    kind/bug 
    opened by gq0 35
  • Performance regression 1.6 to 2.0.2


    What version of Go are you using (go version)?

    go version go1.12.7 darwin/amd64

    What version of Badger are you using?

    2.0.2 (upgrading from 1.6.0)

    Does this issue reproduce with the latest master?

    Haven't tried.

    What are the hardware specifications of the machine (RAM, OS, Disk)?

    GCP 8 CPU (Intel Haswell), 32 GB RAM, 750 GB local ssd

    What did you do?

    Running code which extracts data from Kafka and saves to Badger DB. I'm running on exact same hardware, disk and my code against exact same Kafka topic.

    What did you expect to see?

    Better or equal performance with Badger 2.

    What did you see instead?

    Severe slowdown after writing ~1,461,000 records. See below

    1.6.0 performance:

    Performance in 1.6.0 takes about 300-400ms to extract 1000 messages.

      Up to offset 1453000 Time[330ms] Events[1453001] UrisCreated[1975] PathsCreated[0] Bytes[11.3 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T07:35:07.000] EstTimeToFinish[4h17m58s]
      Up to offset 1454000 Time[360ms] Events[1454001] UrisCreated[1954] PathsCreated[0] Bytes[11.2 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T11:31:43.000] EstTimeToFinish[4h18m1s]
      Up to offset 1455000 Time[340ms] Events[1455001] UrisCreated[1969] PathsCreated[0] Bytes[11.0 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T15:33:31.000] EstTimeToFinish[4h18m4s]
      Up to offset 1456000 Time[360ms] Events[1456001] UrisCreated[1789] PathsCreated[0] Bytes[13.3 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T20:46:14.000] EstTimeToFinish[4h18m7s]
      Up to offset 1457000 Time[320ms] Events[1457001] UrisCreated[1720] PathsCreated[0] Bytes[13.0 MiB] TotalBytes[11.7 GiB] Date[2014-11-11T06:56:07.000] EstTimeToFinish[4h18m9s]
      Up to offset 1458000 Time[300ms] Events[1458001] UrisCreated[1736] PathsCreated[1] Bytes[10.3 MiB] TotalBytes[11.7 GiB] Date[2014-11-11T18:40:17.000] EstTimeToFinish[4h18m9s]
    badger 2020/02/17 15:10:15 DEBUG: Flushing memtable, mt.size=194491818 size of flushChan: 0
    badger 2020/02/17 15:10:15 DEBUG: Storing value log head: {Fid:1 Len:45 Offset:87078740}
      Up to offset 1459000 Time[380ms] Events[1459001] UrisCreated[2140] PathsCreated[0] Bytes[11.4 MiB] TotalBytes[11.7 GiB] Date[2014-11-11T21:04:18.000] EstTimeToFinish[4h18m13s]
      Up to offset 1460000 Time[370ms] Events[1460001] UrisCreated[1776] PathsCreated[0] Bytes[10.4 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T00:02:01.000] EstTimeToFinish[4h18m17s]
    badger 2020/02/17 15:10:16 DEBUG: Flushing memtable, mt.size=119942867 size of flushChan: 0
    badger 2020/02/17 15:10:16 DEBUG: Storing value log head: {Fid:1 Len:45 Offset:87168065}
      Up to offset 1461000 Time[430ms] Events[1461001] UrisCreated[1753] PathsCreated[0] Bytes[10.0 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T06:01:21.000] EstTimeToFinish[4h18m23s]
      Up to offset 1462000 Time[370ms] Events[1462001] UrisCreated[1779] PathsCreated[0] Bytes[10.5 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T16:45:03.000] EstTimeToFinish[4h18m26s]
      Up to offset 1463000 Time[360ms] Events[1463001] UrisCreated[1735] PathsCreated[0] Bytes[11.0 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T20:10:04.000] EstTimeToFinish[4h18m29s]
      Up to offset 1464000 Time[370ms] Events[1464001] UrisCreated[1664] PathsCreated[0] Bytes[10.0 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T23:03:44.000] EstTimeToFinish[4h18m33s]
      Up to offset 1465000 Time[350ms] Events[1465001] UrisCreated[1732] PathsCreated[0] Bytes[10.2 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T02:38:13.000] EstTimeToFinish[4h18m35s]
      Up to offset 1466000 Time[380ms] Events[1466001] UrisCreated[1825] PathsCreated[0] Bytes[10.6 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T06:12:39.000] EstTimeToFinish[4h18m39s]
      Up to offset 1467000 Time[360ms] Events[1467001] UrisCreated[1868] PathsCreated[0] Bytes[11.1 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T10:08:51.000] EstTimeToFinish[4h18m42s]
      Up to offset 1468000 Time[380ms] Events[1468001] UrisCreated[1920] PathsCreated[1] Bytes[11.3 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T13:54:45.000] EstTimeToFinish[4h18m46s]
      Up to offset 1469000 Time[350ms] Events[1469001] UrisCreated[1875] PathsCreated[0] Bytes[11.5 MiB] TotalBytes[11.9 GiB] Date[2014-11-13T17:20:47.000] EstTimeToFinish[4h18m48s]
      Up to offset 1470000 Time[350ms] Events[1470001] UrisCreated[1767] PathsCreated[0] Bytes[11.3 MiB] TotalBytes[11.9 GiB] Date[2014-11-13T20:41:05.000] EstTimeToFinish[4h18m51s]
      Up to offset 1471000 Time[340ms] Events[1471001] UrisCreated[1768] PathsCreated[0] Bytes[10.8 MiB] TotalBytes[11.9 GiB] Date[2014-11-13T23:51:59.000] EstTimeToFinish[4h18m54s]
      Up to offset 1472000 Time[370ms] Events[1472001] UrisCreated[1758] PathsCreated[0] Bytes[10.8 MiB] TotalBytes[11.9 GiB] Date[2014-11-14T03:28:45.000] EstTimeToFinish[4h18m57s]
    
    

    2.0.2 performance:

    Notice that at approximately offset 1462000 (1,462,000 records), things start slowing down from a rate of 300-400ms per 1,000 records to 25-30 seconds per 1,000 records! It happens after the very first Flushing memtable debug message. If you look above, the Flushing happens at the exact same place, but things continue speedily after.

      Up to offset 1453000 Time[360ms] Events[1453001] UrisCreated[1975] PathsCreated[0] Bytes[11.3 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T07:35:07.000] EstTimeToFinish[4h19m33s]
      Up to offset 1454000 Time[330ms] Events[1454001] UrisCreated[1954] PathsCreated[0] Bytes[11.2 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T11:31:43.000] EstTimeToFinish[4h19m35s]
      Up to offset 1455000 Time[380ms] Events[1455001] UrisCreated[1969] PathsCreated[0] Bytes[11.0 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T15:33:31.000] EstTimeToFinish[4h19m39s]
      Up to offset 1456000 Time[320ms] Events[1456001] UrisCreated[1789] PathsCreated[0] Bytes[13.3 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T20:46:14.000] EstTimeToFinish[4h19m41s]
      Up to offset 1457000 Time[340ms] Events[1457001] UrisCreated[1720] PathsCreated[0] Bytes[13.0 MiB] TotalBytes[11.7 GiB] Date[2014-11-11T06:56:07.000] EstTimeToFinish[4h19m43s]
      Up to offset 1458000 Time[310ms] Events[1458001] UrisCreated[1736] PathsCreated[1] Bytes[10.3 MiB] TotalBytes[11.7 GiB] Date[2014-11-11T18:40:17.000] EstTimeToFinish[4h19m44s]
    badger 2020/03/09 17:36:39 DEBUG: Flushing memtable, mt.size=194487650 size of flushChan: 0
    badger 2020/03/09 17:36:39 DEBUG: Storing value log head: {Fid:1 Len:32 Offset:74078864}
      Up to offset 1459000 Time[680ms] Events[1459001] UrisCreated[2140] PathsCreated[0] Bytes[11.4 MiB] TotalBytes[11.7 GiB] Date[2014-11-11T21:04:18.000] EstTimeToFinish[4h20m0s]
      Up to offset 1460000 Time[500ms] Events[1460001] UrisCreated[1776] PathsCreated[0] Bytes[10.4 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T00:02:01.000] EstTimeToFinish[4h20m8s]
    badger 2020/03/09 17:36:40 DEBUG: Flushing memtable, mt.size=119942767 size of flushChan: 0
    badger 2020/03/09 17:36:40 DEBUG: Storing value log head: {Fid:1 Len:32 Offset:74168111}
      Up to offset 1461000 Time[430ms] Events[1461001] UrisCreated[1753] PathsCreated[0] Bytes[10.0 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T06:01:21.000] EstTimeToFinish[4h20m14s]
      Up to offset 1462000 Time[4.74s] Events[1462001] UrisCreated[1779] PathsCreated[0] Bytes[10.5 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T16:45:03.000] EstTimeToFinish[4h23m6s]
      Up to offset 1463000 Time[14.45s] Events[1463001] UrisCreated[1735] PathsCreated[0] Bytes[11.0 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T20:10:04.000] EstTimeToFinish[4h32m12s]
      Up to offset 1464000 Time[19.38s] Events[1464001] UrisCreated[1664] PathsCreated[0] Bytes[10.0 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T23:03:44.000] EstTimeToFinish[4h44m27s]
      Up to offset 1465000 Time[24.52s] Events[1465001] UrisCreated[1732] PathsCreated[0] Bytes[10.2 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T02:38:13.000] EstTimeToFinish[4h59m59s]
      Up to offset 1466000 Time[27.25s] Events[1466001] UrisCreated[1825] PathsCreated[0] Bytes[10.6 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T06:12:39.000] EstTimeToFinish[5h17m15s]
      Up to offset 1467000 Time[31.8s] Events[1467001] UrisCreated[1868] PathsCreated[0] Bytes[11.1 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T10:08:51.000] EstTimeToFinish[5h37m24s]
      Up to offset 1468000 Time[32.87s] Events[1468001] UrisCreated[1920] PathsCreated[1] Bytes[11.3 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T13:54:45.000] EstTimeToFinish[5h58m12s]
      Up to offset 1469000 Time[28.9s] Events[1469001] UrisCreated[1875] PathsCreated[0] Bytes[11.5 MiB] TotalBytes[11.9 GiB] Date[2014-11-13T17:20:47.000] EstTimeToFinish[6h16m27s]
      Up to offset 1470000 Time[27.58s] Events[1470001] UrisCreated[1767] PathsCreated[0] Bytes[11.3 MiB] TotalBytes[11.9 GiB] Date[2014-11-13T20:41:05.000] EstTimeToFinish[6h33m49s]
      Up to offset 1471000 Time[30.04s] Events[1471001] UrisCreated[1768] PathsCreated[0] Bytes[10.8 MiB] TotalBytes[11.9 GiB] Date[2014-11-13T23:51:59.000] EstTimeToFinish[6h52m44s]
      Up to offset 1472000 Time[34.09s] Events[1472001] UrisCreated[1758] PathsCreated[0] Bytes[10.8 MiB] TotalBytes[11.9 GiB] Date[2014-11-14T03:28:45.000] EstTimeToFinish[7h14m13s]
    

    I tried the same with compression disabled and saw similar results. The options I'm using are DefaultOptions with the following tweaks:

    	actualOpts := opts.
    		WithMaxTableSize(256 << 20). // max size 256M
    		WithSyncWrites(false).       // don't sync writes for faster performance
    		WithCompression(options.None)
    

    I literally just started on the 2.0 migration today. I'm running the same code I've been running for 6 months.

    kind/enhancement priority/P0 area/performance status/accepted 
    opened by dougdonohoe 30
  • Discard invalid versions of keys during compaction


    I'm hoping this is a configuration related issue but I've played around with the settings and I keep getting the same behavior. Tested on an i3.4XL in AWS, raid0 on the two SSD drives.

    Expected behavior of the code below:

    • keys/data are stored for 1hr, after a few hours the badger directory should stay fairly constant as you write/expire keys
    • I would expect to see sst files written and multiple size levels each level a larger file size
    • memory should stay fairly consistent

    Actual behavior seen:

    • OOM's after 12 hours
    • all sst files at 67MB (thousands of them)
    • disk fills up on a 4TB drive, no data appears to ttl out
    • file counts steadily increase until oom (there's no leveling off)
    • every hour the process stalls (assuming the stall count is being hit according to profiler)

    Please advise of what is wrong in the code below, thanks!

    3HRs of runtime you can see just linear growth https://imgur.com/a/2UUfIrG

    UPDATE: I've also tried with these settings; memory doesn't grow as fast, but it still climbs linearly until OOM, with the same behavior as above.

    dir := "/raid0/badgertest"
    opts := badger.DefaultOptions
    opts.Dir = dir
    opts.ValueDir = dir
    opts.TableLoadingMode = options.FileIO
    opts.SyncWrites = false
    db, err := badger.Open(opts)
    
    package usecases
    
    import (
    	"github.com/dgraph-io/badger"
    	"github.com/dgraph-io/badger/options"
    	"time"
    	"fmt"
    	"encoding/binary"
    	"github.com/spaolacci/murmur3"
    	"path/filepath"
    	"os"
    	"github.com/Sirupsen/logrus"
    )
    
    type writable struct {
    	key   []byte
    	value []byte
    }
    
    
    type BadgerTest struct {
    	db *badger.DB
    }
    
    func NewBadgerTest() *BadgerTest {
    
    	dir := "/raid0/badgertest"
    	opts := badger.DefaultOptions
    	opts.Dir = dir
    	opts.ValueDir = dir
    	opts.TableLoadingMode = options.MemoryMap
    	opts.NumCompactors = 1
    	opts.NumLevelZeroTables = 20
    	opts.NumLevelZeroTablesStall = 50
    	opts.SyncWrites = false
    	db, err := badger.Open(opts)
    	if err != nil {
    		panic(fmt.Sprintf("unable to open badger db; %s", err))
    	}
    	bt := &BadgerTest{
    		db: db,
    	}
    
    	go bt.filecounts(dir)
    	return bt
    
    }
    
    func (b *BadgerTest) Start() {
    
    	workers := 4
    	for i := 0; i < workers; i++ {
    		go b.write()
    	}
    	go b.badgerGC()
    
    }
    func (b *BadgerTest) Stop() {
    
    	b.db.Close()
    	logrus.Infof("shut down badger test")
    	time.Sleep(1 * time.Second)
    }
    
    func (b *BadgerTest) filecounts(dir string) {
    
    	ticker := time.NewTicker(60 * time.Second)
    	for _ = range ticker.C {
    
    		logFiles := int64(0)
    		sstFiles := int64(0)
    		_ = filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
    
    			if filepath.Ext(path) == ".sst" {
    				sstFiles++
    			}
    			if filepath.Ext(path) == ".vlog" {
    				logFiles++
    			}
    			return nil
    		})
    
    
    		logrus.Infof("updated gauges vlog=%d sst=%d", logFiles, sstFiles)
    
    	}
    
    }
    
    func (b *BadgerTest) write() {
    
    	data := `{"randomstring":"6446D58D6DFAFD58586D3EA85A53F4A6B3CC057F933A22BB58E188A74AC8F663","refID":12495494,"testfield1234":"foo bar baz","date":"2018-01-01 12:00:00"}`
    	batchSize := 20000
    	rows := []writable{}
    	var cnt uint64
    	for {
    		cnt++
    		ts := time.Now().UnixNano()
    		buf := make([]byte, 24)
    		offset := 0
    		binary.BigEndian.PutUint64(buf[offset:], uint64(ts))
    		offset = offset + 8
    		key := fmt.Sprintf("%d%d", ts, cnt)
    		mkey := murmur3.Sum64([]byte(key))
    		binary.BigEndian.PutUint64(buf[offset:], mkey)
    
    		offset = offset + 8
    		binary.BigEndian.PutUint64(buf[offset:], cnt)
    
    		w := writable{key: buf, value: []byte(data)}
    		rows = append(rows, w)
    		if len(rows) > batchSize {
    			b.saveRows(rows)
    			rows = []writable{}
    		}
    	}
    
    }
    
    func (b *BadgerTest) saveRows(rows []writable) {
    	ttl := 1 * time.Hour
    
    	_ = b.db.Update(func(txn *badger.Txn) error {
    		var err error
    		for _, row := range rows {
    			testMsgMeter.Mark(1)
    			if err := txn.SetWithTTL(row.key, row.value, ttl); err == badger.ErrTxnTooBig {
    				logrus.Infof("TX too big, committing...")
    				_ = txn.Commit(nil)
    				txn = b.db.NewTransaction(true)
    				err = txn.SetWithTTL(row.key, row.value, ttl)
    			}
    		}
    		return err
    	})
    }
    
    func (b *BadgerTest) badgerGC() {
    
    	ticker := time.NewTicker(1 * time.Minute)
    	for {
    		select {
    		case <-ticker.C:
    			logrus.Infof("CLEANUP starting to purge keys %s", time.Now())
    			err := b.db.PurgeOlderVersions()
    			if err != nil {
    				logrus.Errorf("badgerOps unable to purge older versions; %s", err)
    			}
    			err = b.db.RunValueLogGC(0.5)
    			if err != nil {
    				logrus.Errorf("badgerOps unable to RunValueLogGC; %s", err)
    			}
    			logrus.Infof("CLEANUP purge complete %s", time.Now())
    		}
    	}
    }
    
    
    
    kind/enhancement 
    opened by jiminoc 26
  • GC doesn't work? (not cleaning up SST files properly)


    What version of Go are you using (go version)?

    $ go version
    1.13.8
    

    What version of Badger are you using?

    v1.6.0

    opts := badger.DefaultOptions(fmt.Sprintf(dir + "/" + name))
    opts.SyncWrites = false
    opts.ValueLogLoadingMode = options.FileIO

    Does this issue reproduce with the latest master?

    With the latest master GC becomes much slower

    What are the hardware specifications of the machine (RAM, OS, Disk)?

    2TB NVME drive, 128 GB RAM

    What did you do?

    I have a Kafka topic with 12 partitions, and I create a database for every partition. Each database grows quite quickly (about 12*30GB per hour), and the TTL for most of the events is 1h, so the size should stay at a constant level. For every partition I create a separate transaction and process read and write operations sequentially; there is no concurrency. When the transaction is getting too big, I commit it, and in a separate go-routine I start RunValueLogGC(0.5). Most of the GC runs end up with ErrNoRewrite. I even tried repeating RunValueLogGC until I got 5 errors in a row, but I was still running out of disk space quite quickly. My current fix is to patch the Badger GC to make it run on every fid that is before the head. This works fine, but eventually becomes slow when I have too many log files.

    What did you expect to see?

    The size of each of the twelve databases I created should stay at a constant level, less than 20 GB.

    What did you see instead?

    After running it for a day, if I look at one of twelve databases, I see 210 sst files, 68 vlog files, db size is 84 GB (and these numbers keep growing).

    If I run badger histogram it shows me this stats:

    Histogram of key sizes (in bytes)
    Total count: 4499955
    Min value: 13
    Max value: 108
    Mean: 22.92
    Range       Count
    [  8,  16)        2
    [ 16,  32)  4499939
    [ 64, 128)       14

    Histogram of value sizes (in bytes)
    Total count: 4499955
    Min value: 82
    Max value: 3603
    Mean: 2428.16
    Range          Count
    [  64,  128)       1
    [ 256,  512)   19301
    [ 512, 1024)     459
    [1024, 2048)     569
    [2048, 4096) 4479625

    2428 × 4479625 ≈ 10 GB

    kind/bug priority/P1 status/accepted area/gc 
    opened by adwinsky 25
  • Use pure Go based ZSTD implementation

    Use pure Go based ZSTD implementation

    Fixes https://github.com/dgraph-io/badger/issues/1162

    This PR proposes to use https://github.com/klauspost/compress/tree/master/zstd instead of CGO based https://github.com/DataDog/zstd .

    This PR also removes the CompressionLevel options since https://github.com/klauspost/compress/tree/master/zstd supports only two levels of ZSTD Compression. The default level is ZSTD Level 3 and the fastest level is ZSTD level 1. ZSTD level 1 will be the default level in badger.

    I've experimented with all the suggestions mentioned in https://github.com/klauspost/compress/issues/196#issuecomment-568905095. Setting WithSingleSegment didn't seem to make much speed difference (~1 MB/s). WithNoEntropyCompression showed roughly a 3% speed improvement (though that could also be noise from the non-deterministic nature of benchmarks).

    name                                       old time/op      new time/op (NoEntropy set)   delta
    Compression/ZSTD_-_Go_-_level1-16           35.7µs ± 1%     36.9µs ± 5%                 +3.41%  (p=0.008 n=5+5)
    Decompression/ZSTD_-_Go-16                  16.0µs ± 0%     15.9µs ± 1%                 -0.77%  (p=0.016 n=5+5)
    
    name                                    old speed      new speed (NoEntropy set)      delta
    Compression/ZSTD_-_Go_-_level1-16      115MB/s ± 1%   111MB/s ± 5%                -3.24%  (p=0.008 n=5+5)
    Decompression/ZSTD_-_Go-16             256MB/s ± 0%   258MB/s ± 1%                 +0.78%  (p=0.016 n=5+5)
    

    Benchmarks

    1. Table Data (contains some randomly generated data).
    Compression Ratio Datadog ZSTD level 1 3.1993720565149135
    Compression Ratio Datadog ZSTD level 3 3.099619771863118
    
    Compression Ratio Go ZSTD 3.2170481452249406
    Compression Ratio Go ZSTD level 3 3.1474903474903475
    
    name                                        time/op
    Compression/ZSTD_-_Datadog-level1-16    17.6µs ± 3%
    Compression/ZSTD_-_Datadog-level3-16    20.7µs ± 3%
    
    Compression/ZSTD_-_Go_-_level1-16       27.8µs ± 2%
    Compression/ZSTD_-_Go_-_Default-16      39.1µs ± 1%
    
    Decompression/ZSTD_-_Datadog-16         7.12µs ± 2%
    Decompression/ZSTD_-_Go-16              13.7µs ± 2%
    
    name                                       speed
    Compression/ZSTD_-_Datadog-level1-16   231MB/s ± 3%
    Compression/ZSTD_-_Datadog-level3-16   197MB/s ± 3%
    
    Compression/ZSTD_-_Go_-_level1-16      147MB/s ± 2%
    Compression/ZSTD_-_Go_-_Default-16     104MB/s ± 1%
    
    Decompression/ZSTD_-_Datadog-16        573MB/s ± 2%
    Decompression/ZSTD_-_Go-16             298MB/s ± 2%
    
    2. 4KB of text taken from https://gist.github.com/StevenClontz/4445774
    Compression Ratio ZSTD level 1 1.9294781382228492
    Compression Ratio ZSTD level 3 1.9322033898305084
    
    Compression Ratio Go ZSTD 1.894736842105263
    Compression Ratio Go ZSTD level 3 1.927665570690465
    
    name                                       time/op
    Compression/ZSTD_-_Datadog-level1-16    22.7µs ± 4%
    Compression/ZSTD_-_Datadog-level3-16    29.6µs ± 4%
    
    Compression/ZSTD_-_Go_-_level1-16       35.7µs ± 1%
    Compression/ZSTD_-_Go_-_Default-16      97.9µs ± 1%
    
    Decompression/ZSTD_-_Datadog-16         8.36µs ± 0%
    Decompression/ZSTD_-_Go-16              16.0µs ± 0%
    
    name                                       speed
    Compression/ZSTD_-_Datadog-level1-16   181MB/s ± 4%
    Compression/ZSTD_-_Datadog-level3-16   139MB/s ± 4%
    
    Compression/ZSTD_-_Go_-_level1-16      115MB/s ± 1%
    Compression/ZSTD_-_Go_-_Default-16    41.9MB/s ± 1%
    
    Decompression/ZSTD_-_Datadog-16        489MB/s ± 2%
    Decompression/ZSTD_-_Go-16             256MB/s ± 0%
    

    Here's the script I've used https://gist.github.com/jarifibrahim/91920e93d1ecac3006b269e0c05d6a24


    This change is Reviewable

    opened by jarifibrahim 25
  • Support encryption at rest

    Support encryption at rest

    Hi, currently there is no support for encryption at rest. It would be a great feature to have. We are using Badger to develop a banking solution, and data privacy is a requirement. Kindly let me know if you can incorporate this security feature.

    Regards, Asim.

    priority/P2 area/security status/accepted kind/feature exp/expert 
    opened by asimpatnaik 25
  • Improve GC strategy to reclaim multiple logs

    Improve GC strategy to reclaim multiple logs

    Hello,

    let's take the following scenario:

    • open a database
    • insert 1M key/values in badgers, with distinct keys
    • delete all the key values
    • run PurgeOlderVersions()
    • run RunValueLogGC(0.5)
    • close the database

    Then the db directory still has a large size; it looks like disk space was not reclaimed. Am I doing something wrong?

    Moreover, when I iterate over the now-empty database, iteration still takes quite long, even though of course no results are returned.

    Thanks, Stephane

    kind/enhancement kind/question 
    opened by stephane-martin 22
  • Mobile support.

    Mobile support.

    I currently use BoltDB on mobile. Bolt's README lists some minor adjustments required for mobile use.

    The code is then compiled into an .aar (Android) or .framework (iOS) file using gomobile.

    It's extremely easy to use :)

    Would the team be open to looking into mobile support ?

    kind/bug area/documentation priority/P2 status/more-info-needed 
    opened by joeblew99 22
  • BadgerDB open() call takes long time (> 2 min) to complete

    BadgerDB open() call takes long time (> 2 min) to complete

    What version of Go are you using (go version)?

    $ go version
    go version go1.13.3 linux/amd64
    

    What version of Badger are you using?

    github.com/dgraph-io/badger v1.6.0

    Does this issue reproduce with the latest master?

    Yes

    What are the hardware specifications of the machine (RAM, OS, Disk)?

    RAM - 16GB OS - Ubuntu 16.04 Disk - SSD

    What did you do?

    We are using BadgerDB for deduplication. We store the message ID as the key and nil as the value. We open BadgerDB during initialization.

    gateway.badgerDB, err = badger.Open(badger.DefaultOptions(path))
    

    Code that writes to badger DB

    		err := badgerDB.Update(func(txn *badger.Txn) error {
    			for _, messageID := range messageIDs {
    				e := badger.NewEntry([]byte(messageID), nil).WithTTL(dedupWindow * time.Second)
    				if err := txn.SetEntry(e); err == badger.ErrTxnTooBig {
    					_ = txn.Commit()
    					txn = badgerDB.NewTransaction(true)
    					_ = txn.SetEntry(e)
    				}
    			}
    			return nil
    		})
    
    $ du -ch -d 1 ./badgerdb
    18G	./badgerdb
    18G	total
    
    $ ls -l ./badgerdb/ | grep sst | wc -l
    270
    

    Over 1 day, we have 270 SST files and 18 GB data.
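A side note on the write snippet above: Badger transactions have a bounded size, and the usual pattern on `ErrTxnTooBig` is to commit the current transaction, open a fresh one, and retry the entry that failed. The pattern can be sketched with a stand-in bounded transaction; `fakeTxn`, `errTxnTooBig`, and `setAll` are illustrative stand-ins, not Badger's API.

```go
package main

import (
	"errors"
	"fmt"
)

// errTxnTooBig stands in for badger.ErrTxnTooBig.
var errTxnTooBig = errors.New("txn too big")

// fakeTxn models a size-bounded transaction: it accepts at most max
// pending writes, like a Badger transaction hitting its batch limit.
type fakeTxn struct {
	pending, max int
}

func (t *fakeTxn) set(key string) error {
	if t.pending >= t.max {
		return errTxnTooBig
	}
	t.pending++
	return nil
}

// setAll writes every key, committing and restarting the transaction
// whenever it fills up, and retrying the entry that didn't fit.
func setAll(keys []string, max int) (commits int) {
	txn := &fakeTxn{max: max}
	for _, k := range keys {
		if err := txn.set(k); errors.Is(err, errTxnTooBig) {
			commits++                // commit the full transaction
			txn = &fakeTxn{max: max} // start a fresh one
			txn.set(k)               // retry the entry that didn't fit
		}
	}
	commits++ // commit the final, partially filled transaction
	return commits
}

func main() {
	keys := make([]string, 10)
	fmt.Println(setAll(keys, 4)) // 10 keys, 4 per txn -> 3 commits
}
```

In real Badger code the equivalent steps are `txn.Commit()`, `db.NewTransaction(true)`, and re-issuing `txn.SetEntry(e)`; note that doing this inside `db.Update` (as in the report above) conflicts with `Update`'s own commit of the outer transaction.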

    What did you expect to see?

    The badger.Open call completing in a few seconds.

    What did you see instead?

    The badger.Open takes around 2.5 minutes to open 270 files.

    kind/enhancement priority/P2 area/performance status/accepted 
    opened by SumanthPuram 21
  • Infinite recursion in Item.yieldItemValue ?

    Infinite recursion in Item.yieldItemValue ?

    Hi,

    I'm facing a difficult-to-debug problem with Badger. It happens in the following situation:

    • ingest a lot of data (say 1M key-values)
    • delete that data
    • stop the program (properly closing the badger database)
    • relaunch the program

    Then it can happen that when the program reopens the badger database, go panics with a "runtime: goroutine stack exceeds 1000000000-byte limit".

    Subsequent attempts to start the program then always panic.

    The problem might of course be in my code, but I can't find anything strange. I disabled everything except opening the database and iterating over key-values, and the panic still happens.

    The traces show:

    goroutine 1 [running]:
    runtime.makeslice(0xef4340, 0x28, 0x28, 0xc425764000, 0x0, 0x7ff73adb46c8)
            /usr/local/go/src/runtime/slice.go:46 +0xf7 fp=0xc44cd70348 sp=0xc44cd70340 pc=0x4470f7
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*blockIterator).parseKV(0xc42d3aa990, 0xf00140000, 0xffffffff)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:114 +0x4bf fp=0xc44cd70430 sp=0xc44cd70348 pc=0xc749cf
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*blockIterator).Next(0xc42d3aa990)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:154 +0x191 fp=0xc44cd70480 sp=0xc44cd70430 pc=0xc74bd1
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*blockIterator).Init(0xc42d3aa990)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:54 +0x3d fp=0xc44cd70498 sp=0xc44cd70480 pc=0xc7414d
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*blockIterator).Seek(0xc42d3aa990, 0xc42d3a4cc0, 0x2b, 0x30, 0x0)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:84 +0x153 fp=0xc44cd704e8 sp=0xc44cd70498 pc=0xc74303
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*Iterator).seekHelper(0xc42d3a2600, 0x0, 0xc42d3a4cc0, 0x2b, 0x30)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:270 +0x11f fp=0xc44cd70550 sp=0xc44cd704e8 pc=0xc7551f
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*Iterator).seekFrom(0xc42d3a2600, 0xc42d3a4cc0, 0x2b, 0x30, 0x0)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:300 +0x12f fp=0xc44cd705b8 sp=0xc44cd70550 pc=0xc756bf
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*Iterator).seek(0xc42d3a2600, 0xc42d3a4cc0, 0x2b, 0x30)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:316 +0x55 fp=0xc44cd705f0 sp=0xc44cd705b8 pc=0xc75815
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*Iterator).Seek(0xc42d3a2600, 0xc42d3a4cc0, 0x2b, 0x30)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:417 +0x82 fp=0xc44cd70620 sp=0xc44cd705f0 pc=0xc75f92
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger.(*levelHandler).get(0xc4203ae8a0, 0xc42d3a4cc0, 0x2b, 0x30, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/level_handler.go:253 +0x265 fp=0xc44cd706f8 sp=0xc44cd70620 pc=0xc8acc5
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger.(*levelsController).get(0xc420393e30, 0xc42d3a4cc0, 0x2b, 0x30, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/levels.go:727 +0xf6 fp=0xc44cd70820 sp=0xc44cd706f8 pc=0xc90e76
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger.(*DB).get(0xc42040c700, 0xc42d3a4cc0, 0x2b, 0x30, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/db.go:507 +0x1fd fp=0xc44cd70940 sp=0xc44cd70820 pc=0xc818fd
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger.(*Item).yieldItemValue(0xc4204202c0, 0xc42d3a4c30, 0x2b, 0x30, 0x2, 0x0, 0xc42d392c23)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/iterator.go:169 +0x414 fp=0xc44cd70aa8 sp=0xc44cd70940 pc=0xc86f94
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger.(*Item).yieldItemValue(0xc4204202c0, 0xc42d3a4ba0, 0x2b, 0x30, 0x2, 0x0, 0xc42d392c03)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/iterator.go:178 +0x4d2 fp=0xc44cd70c10 sp=0xc44cd70aa8 pc=0xc87052
    

    And so on. The calls to yieldItemValue keep stacking until the stack limit is exceeded.

    kind/bug 
    opened by stephane-martin 21
  • Revisit configurable logging

    Revisit configurable logging

    I was reading the comments and was quite disappointed in the solution from a few years back regarding logging. Just because I don't want to see info messages does not mean I don't want to see warnings and errors. The problem is that Go's built-in logging is so weak that it is not really ready for serious commercial work, and setting the logger to nil throws the baby out with the bathwater. Has there been any attempt to revisit this solution, for example using logrus or zap? Logging is so important for a database that it shouldn't be this badly implemented.

    opened by kfries 4
  • [BUG]: Deleting keys in bulk doesn't delete all the keys even though the txn contains the expected pending writes

    [BUG]: Deleting keys in bulk doesn't delete all the keys even though the txn contains the expected pending writes

    What version of Badger are you using?

    github.com/dgraph-io/badger/v3 v3.2103.3

    What version of Go are you using?

    go version go1.18.5 darwin/arm64

    Have you tried reproducing the issue with the latest release?

    Yes

    What is the hardware spec (RAM, CPU, OS)?

    MacBook Pro (13-inch, M1, 2020) Apple M1 16 GB RAM

    What steps will reproduce the bug?

    package main
    
    import (
    	"bytes"
    	"fmt"
    	"github.com/dgraph-io/badger/v3"
    )
    
    func main() {
    	db, err := badger.Open(badger.DefaultOptions("").WithInMemory(true))
    	if err != nil {
    		panic(err)
    	}
    	n := 200
    
    	// Write n keys
    	err = db.Update(func(txn *badger.Txn) error {
    		for i := 0; i < n; i++ {
    			err = txn.Set([]byte(fmt.Sprintf("%v", i)), bytes.Repeat([]byte{0}, 1024))
    			if err != nil {
    				return err
    			}
    		}
    		return nil
    	})
    	if err != nil {
    		panic(err)
    	}
    
    	// Check the number of keys
    	if getKeyCount(db) != n {
    		panic("expected 200 elements")
    	}
    
    	// Delete all the elements
    	var keys [][]byte
    	err = db.Update(func(txn *badger.Txn) error {
    		it := txn.NewIterator(badger.DefaultIteratorOptions)
    		defer it.Close()
    		for it.Seek([]byte{}); it.ValidForPrefix([]byte{}); it.Next() {
    			keys = append(keys, it.Item().Key())
    		}
    		return nil
    	})
    	if err != nil {
    		panic(err)
    	}
    	containsDups(keys)
    
    	err = db.Update(func(txn *badger.Txn) error {
    		for _, k := range keys {
    			err = txn.Delete(k)
    			if err != nil {
    				return err
    			}
    		}
    		return nil
    	})
    	if err != nil {
    		panic(err)
    	}
    
    	// Check again
    	if i := getKeyCount(db); i != 0 {
    		panic(fmt.Sprintf("expected 0 elements, got %v", i))
    	}
    }
    
    func getKeyCount(db *badger.DB) (i int) {
    	err := db.View(func(txn *badger.Txn) error {
    		it := txn.NewIterator(badger.DefaultIteratorOptions)
    		defer it.Close()
    		for it.Seek([]byte{}); it.ValidForPrefix([]byte{}); it.Next() {
    			i++
    		}
    		return nil
    	})
    	if err != nil {
    		panic(err)
    	}
    	return
    }
    
    func containsDups(in [][]byte) {
    	for i, k := range in {
    		for j, v := range in {
    			if bytes.Equal(k, v) && i != j {
    				fmt.Printf("found dup %v=%v\n", i, j)
    			}
    		}
    	}
    }
    

    Prints the following

    found dup 1=0
    found dup 2=101
    found dup 3=103
    found dup 4=104
    found dup 5=105
    found dup 6=106
    found dup 7=107
    found dup 8=108
    found dup 9=109
    found dup 10=110
    found dup 11=111
    found dup 13=113
    found dup 24=124
    found dup 35=135
    found dup 46=146
    found dup 57=157
    found dup 68=168
    found dup 79=179
    found dup 90=190
    

    Expected behavior and actual result.

    I expected this code not to panic, and for all the keys I had added to be deleted. Using the debugger, I verified that all the keys I deleted do show up in the transaction's pending writes, but only 100 out of 1000 (consistently) are actually deleted. Deleting the keys in a separate transaction from the iterator doesn't seem to make a difference. I saw this issue in an internal project, and then reproduced it with this script.

    Additional information

    Another thing that's interesting is that if you set n to 100, the problem does not exist. Anything higher results in duplicates.

    No response

    kind/bug 
    opened by clarkmcc 2
  • [QUESTION]: db.HandoverSkiplist function not available

    [QUESTION]: db.HandoverSkiplist function not available

    Question.

    I am using the latest version of github.com/dgraph-io/badger/v3. It doesn't have the (db *DB) HandoverSkiplist function. I have seen this function under the "main" tag on GitHub. Am I missing something, or why is this function not in v3?

    Thanks

    kind/question 
    opened by evanoberholster 1
  • [BUG]: Iteration stops after 104854 items

    [BUG]: Iteration stops after 104854 items

    What version of Badger are you using?

    v3.2103.3

    What version of Go are you using?

    go1.19.1 linux/amd64

    Have you tried reproducing the issue with the latest release?

    Yes

    What is the hardware spec (RAM, CPU, OS)?

    Intel© Core™ i7-6820HQ CPU Linux Mint 20.3 Linux Kernel 5.2.7 RAM: 16GB

    What steps will reproduce the bug?

    Run the following code. The "keys remained in the DB" count should be 0, but it displays 5146. That means the iteration stopped after (110000 − 5146 =) 104854 items. Increasing the number of items created in the first step increases the number left behind.

    package main
    
    import (
    	"encoding/binary"
    	"log"
    
    	badger "github.com/dgraph-io/badger/v3"
    )
    
    func main() {
    	db, err := badger.Open(badger.DefaultOptions("testdb"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer db.Close()
    
    	//11 times create 10000 entries and then remove them
    	for i := 0; i < 11; i++ {
    		log.Println("Round ", i+1)
    		db.Update(func(txn *badger.Txn) error {
    
    			seq, _ := db.GetSequence([]byte("abc"), 1000)
    			b := make([]byte, 8)
    			for j := 0; j < 10000; j++ {
    				s, _ := seq.Next()
    				binary.LittleEndian.PutUint64(b, s)
    				key := make([]byte, 8)
    				copy(key, b)
    				txn.Set(key, []byte("Hasta la vista, baby!"))
    			}
    
    			return nil
    		})
    
    	}
    	db.Update(func(txn *badger.Txn) error {
    		opts := badger.DefaultIteratorOptions
    		opts.PrefetchSize = 10
    		it := txn.NewIterator(opts)
    		defer it.Close()
    		for it.Rewind(); it.Valid(); it.Next() {
    			txn.Delete(it.Item().KeyCopy(nil))
    		}
    		return nil
    	})
    
    	db.View(func(txn *badger.Txn) error {
    		opts := badger.DefaultIteratorOptions
    		opts.PrefetchSize = 10
    		it := txn.NewIterator(opts)
    		defer it.Close()
    		ctr := 0
    		for it.Rewind(); it.Valid(); it.Next() {
    			ctr++
    			//log.Println(string(it.Item().Key()))
    		}
    		log.Println(ctr, " keys remained in the DB.")
    		return nil
    	})
    
    }
    

    Expected behavior and actual result.

    The iteration loop over the items should remove all items, leave none.

    Additional information

    No response

    kind/bug 
    opened by lacikawiz 1
  • high read bytes/s

    high read bytes/s

    I use RocksDB on my machine with a 100 GB SSD disk, and suddenly the read bytes/s became very high.

    Why does this happen, and how can I avoid it?

    Is there anything that could help me debug the reason, such as a trace or metric? (Screenshots omitted.)

    opened by zdyj3170101136 1
  • badger crash in memory mode on windows x86

    badger crash in memory mode on windows x86

    env: GOARCH=386 GOOS=windows

    It won't crash with GOARCH=amd64.

    The smaller the MemTableSize option, the easier the crash is to reproduce.

    panic log:

    panic({0x1e4e7c0, 0x3e58ac8})
    	/usr/local/go/src/runtime/panic.go:838 +0x1ba
    os.(*File).Name(...)
    	/usr/local/go/src/os/file.go:57
    github.com/dgraph-io/badger/v3/table.(*Table).block(0x1b01a770, 0x6, 0x1)
    	/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/table/table.go:583 +0x85d
    github.com/dgraph-io/badger/v3/table.(*Iterator).seekHelper(0x2106a0e0, 0x6, {0x23272078, 0x13, 0x13})
    	/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/table/iterator.go:254 +0x44
    github.com/dgraph-io/badger/v3/table.(*Iterator).seekFrom(0x2106a0e0, {0x23272078, 0x13, 0x13}, 0x0)
    	/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/table/iterator.go:294 +0x128
    github.com/dgraph-io/badger/v3/table.(*Iterator).seek(...)
    	/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/table/iterator.go:310
    github.com/dgraph-io/badger/v3/table.(*Iterator).Seek(0x2106a0e0, {0x23272078, 0x13, 0x13})
    	/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/table/iterator.go:424 +0x51
    github.com/dgraph-io/badger/v3.(*levelHandler).get(0x1b167810, {0x23272078, 0x13, 0x13})
    	/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/level_handler.go:293 +0x24f
    github.com/dgraph-io/badger/v3.(*levelsController).get(0x1b1677c0, {0x23272078, 0x13, 0x13}, {0x0, 0x0, 0x0, {0x0, 0x0, 0x0}, ...}, ...)
    	/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/levels.go:1601 +0x1e1
    github.com/dgraph-io/badger/v3.(*DB).get(0x1b234000, {0x23272078, 0x13, 0x13})
    	/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/db.go:751 +0x443
    github.com/dgraph-io/badger/v3.(*Txn).Get(0x14c60720, {0x1aea61f0, 0xb, 0xb})
    	/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/txn.go:478 +0x373
    

    open db

            var opt = badger.DefaultOptions("").WithInMemory(true)
    	db, err := badger.Open(opt)
    

    In in-memory mode, MmapFile.Fd is assigned nil:

    // OpenInMemoryTable is similar to OpenTable but it opens a new table from the provided data.
    // OpenInMemoryTable is used for L0 tables.
    func OpenInMemoryTable(data []byte, id uint64, opt *Options) (*Table, error) {
    	mf := &z.MmapFile{
    		Data: data,
    		Fd:   nil, // here
    	}
    	t := &Table{
    		MmapFile:   mf,
    		ref:        1, // Caller is given one reference.
    		opt:        opt,
    		tableSize:  len(data),
    		IsInmemory: true,
    		id:         id, // It is important that each table gets a unique ID.
    	}
    
    	if err := t.initBiggestAndSmallest(); err != nil {
    		return nil, err
    	}
    	return t, nil
    }
    

    t.Fd.Name() is then called while t.Fd is nil, so it panics here:

             if err = t.decompress(blk); err != nil {
    		return nil, y.Wrapf(err,
    			"failed to decode compressed data in file: %s at offset: %d, len: %d",
    			t.Fd.Name(), blk.offset, ko.Len()) // panic
    	}
    

    An unexpected data block is read here:

            var err error
    	if blk.data, err = t.read(blk.offset, int(ko.Len())); err != nil {
    		return nil, y.Wrapf(err,
    			"failed to read from file: %s at offset: %d, len: %d",
    			t.Filename(), blk.offset, ko.Len())
    	}
    

    If the length decoded from the first bytes of blk.data is a very large number, z.Calloc may OOM:

    // decompress decompresses the data stored in a block.
    func (t *Table) decompress(b *block) error {
    	var dst []byte
    	var err error
    
    	// Point to the original b.data
    	src := b.data
    
    	switch t.opt.Compression {
    	case options.None:
    		// Nothing to be done here.
    		return nil
    	case options.Snappy:
    		if sz, err := snappy.DecodedLen(b.data); err == nil {
    			dst = z.Calloc(sz, "Table.Decompress") // may oom here
    		} else {
    			dst = z.Calloc(len(b.data)*4, "Table.Decompress") // Take a guess.
    		}
    		b.data, err = snappy.Decode(dst, b.data)
    		if err != nil {
    			z.Free(dst)
    			return y.Wrap(err, "failed to decompress")
    		}
    	case options.ZSTD:
    		sz := int(float64(t.opt.BlockSize) * 1.2)
    		dst = z.Calloc(sz, "Table.Decompress")
    		b.data, err = y.ZSTDDecompress(dst, b.data)
    		if err != nil {
    			z.Free(dst)
    			return y.Wrap(err, "failed to decompress")
    		}
    	default:
    		return errors.New("Unsupported compression type")
    	}
    
    opened by yixinin 0
Releases (v3.2103.4)
  • v3.2103.4(Nov 4, 2022)

    This patches an issue that could lead to manifest corruption. The fix was merged in #1756. Addresses this issue on Discuss and this issue on Badger. We also bring the release branch to parity with main by updating the CI/CD jobs, Readme, Codeowners, PR and issue templates, etc.

    Fixed

    • fix(manifest): fix manifest corruption due to race condition in concurrent compactions (#1756)

    Chores

    • Add CI/CD jobs to release branch
    • Add PR and Issue templates to release branch
    • Update Codeowners in release branch
    • Update Readme in release branch

    Full Changelog: https://github.com/dgraph-io/badger/compare/v3.2103.3...v3.2103.4

    Source code(tar.gz)
    Source code(zip)
    badger-checksum-linux-amd64.sha256(65 bytes)
    badger-linux-amd64.tar.gz(9.01 MB)
  • v3.2103.3(Oct 14, 2022)

  • v3.2103.2(Oct 7, 2021)

    This patch release contains:

    Fixed

    • fix(compact): close vlog after the compaction at L0 has been completed (#1752)
    • fix(builder): put the upper limit on reallocation (#1748)
    • deps: Bump github.com/google/flatbuffers to v1.12.1 (#1746)
    • fix(levels): Avoid a deadlock when acquiring read locks in levels (#1744)
    • fix(pubsub): avoid deadlock in publisher and subscriber (#1749) (#1751)

    Full Changelog: https://github.com/dgraph-io/badger/compare/v3.2103.1...v3.2103.2

    Source code(tar.gz)
    Source code(zip)
  • v2.2007.4(Aug 25, 2021)

    Fixed

    • Fix build on Plan 9 (#1451) (#1508) (#1738)

    Features

    • feat(zstd): backport replacement of DataDog's zstd with Klauspost's zstd (#1736)
    Source code(tar.gz)
    Source code(zip)
  • v2.2007.3(Jul 21, 2021)

    This patch release contains:

    Fixed

    • fix(maxVersion): Use choosekey instead of KeyToList (#1532) #1533
    • fix(flatten): Add --num_versions flag (#1518) #1520
    • fix(build): Fix integer overflow on 32-bit architectures #1558
    • fix(pb): avoid protobuf warning due to common filename (#1519)

    Features

    • Add command to stream contents of DB into another DB. (#1486)

    New APIs

    • DB.StreamDB
    • DB.MaxVersion
    Source code(tar.gz)
    Source code(zip)
  • v3.2103.1(Jul 8, 2021)

    This release removes Badger's CGO dependency by using Klauspost's ZSTD instead of DataDog's ZSTD. It also includes several fixes.

    Fixed

    • fix(compaction): copy over the file ID when building tables #1713
    • fix: Fix conflict detection for managed DB (#1716)
    • fix(pendingWrites): don't skip the pending entries with version=0 (#1721)

    Features

    • feat(zstd): replace datadog's zstd with Klauspost's zstd (#1709)
    Source code(tar.gz)
    Source code(zip)
    badger-checksum-linux-amd64.sha256(65 bytes)
    badger-linux-amd64.tar.gz(8.06 MB)
  • v3.2103.0(Jun 3, 2021)

    Breaking

    • Subscribe: Add option to subscribe with holes in prefixes. (#1658)

    Fixed

    • fix(compaction): Remove compaction backoff mechanism (#1686)
    • Add a name to mutexes to make them unexported (#1678)
    • fix(merge-operator): don't read the deleted keys (#1675)
    • fix(discard): close the discard stats file on db close (#1672)
    • fix(iterator): fix iterator when data does not exist in read only mode (#1670)
    • fix(badger): Do not reuse variable across badger commands (#1624)
    • fix(dropPrefix): check properly if the key is present in a table (#1623)

    Performance

    • Opt(Stream): Optimize how we deduce key ranges for iteration (#1687)
    • Increase value threshold from 1 KB to 1 MB (#1664)
    • opt(DropPrefix): check if there exist some data to drop before dropping prefixes (#1621)

    Features

    • feat(options): allow special handling and checking when creating options from superflag (#1688)
    • overwrite default Options from SuperFlag string (#1663)
    • Support SinceTs in iterators (#1653)
    • feat(info): Add a flag to parse and print DISCARD file (#1662)
    • feat(vlog): making vlog threshold dynamic 6ce3b7c (#1635)
    • feat(options): add NumGoroutines option for default Stream.numGo (#1656)
    • feat(Trie): Working prefix match with holes (#1654)
    • feat: add functionality to ban a prefix (#1638)
    • feat(compaction): Support Lmax to Lmax compaction (#1615)

    New APIs

    • Badger.DB
      • BanNamespace
      • BannedNamespaces
      • Ranges
    • Badger.Options
      • FromSuperFlag
      • WithNumGoRoutines
      • WithNamespaceOffset
      • WithVLogPercentile
    • Badger.Trie
      • AddMatch
      • DeleteMatch
    • Badger.Table
      • StaleDataSize
    • Badger.Table.Builder
      • AddStaleKey
    • Badger.InitDiscardStats

    Removed APIs

    • Badger.DB
      • KeySplits
    • Badger.Options
      • SkipVlog

    Changed APIs

    • Badger.DB
      • Subscribe
    • Badger.Options
      • WithValueThreshold
    Source code(tar.gz)
    Source code(zip)
  • v3.2011.1(Jan 22, 2021)

    • fix(compaction): Set base level correctly after stream (#1631) (#1651)
    • fix: update ristretto and use filepath (#1649) (#1652)
    • fix(badger): Do not reuse variable across badger commands (#1624) (#1650)
    • fix(build): fix 32-bit build (#1627) (#1646)
    • fix(table): always sync SST to disk (#1625) (#1645)

    Source code(tar.gz)
    Source code(zip)
  • v3.2011.0(Jan 15, 2021)

    This release is not backward compatible with Badger v2.x.x

    Breaking:

    • opt(compactions): Improve compaction performance (#1574)
    • Change how Badger handles WAL (#1555)
    • feat(index): Use flatbuffers instead of protobuf (#1546)

    Fixed:

    • Fix(GC): Set bits correctly for moved keys (#1619)
    • Fix(tableBuilding): reduce scope of valuePointer (#1617)
    • Fix(compaction): fix table size estimation on compaction (#1613)
    • Fix(OOM): Reuse pb.KVs in Stream (#1609)
    • Fix race condition in L0StallMs variable (#1605)
    • Fix(stream): Stop produceKVs on error (#1604)
    • Fix(skiplist): Remove z.Buffer from skiplist (#1600)
    • Fix(readonly): fix the file opening mode (#1592)
    • Fix: Disable CompactL0OnClose by default (#1586)
    • Fix(compaction): Don't drop data when split overlaps with top tables (#1587)
    • Fix(subcompaction): Close builder before throttle.Done (#1582)
    • Fix(table): Add onDisk size (#1569)
    • Fix(Stream): Only send done markers if told to do so
    • Fix(value log GC): Fix a bug which caused value log files to not be GCed.
    • Fix segmentation fault when cache sizes are small. (#1552)
    • Fix(builder): Too many small tables when compression is enabled (#1549)
    • Fix integer overflow error when building for 386 (#1541)
    • Fix(writeBatch): Avoid deadlock in commit callback (#1529)
    • Fix(db): Handle nil logger (#1534)
    • Fix(maxVersion): Use choosekey instead of KeyToList (#1532)
    • Fix(Backup/Restore): Keep all versions (#1462)
    • Fix(build): Fix nocgo builds. (#1493)
    • Fix(cleanup): Avoid truncating in value.Open on error (#1465)
    • Fix(compaction): Don't use cache for table compaction (#1467)
    • Fix(compaction): Use separate compactors for L0, L1 (#1466)
    • Fix(options): Do not implicitly enable cache (#1458)
    • Fix(cleanup): Do not close cache before compaction (#1464)
    • Fix(replay): Update head for LSM entries also (#1456)
    • fix(levels): Cleanup builder resources on building an empty table (#1414)

    Performance

    • perf(GC): Remove move keys (#1539)
    • Keep the cheaper parts of the index within table struct. (#1608)
    • Opt(stream): Use z.Buffer to stream data (#1606)
    • opt(builder): Use z.Allocator for building tables (#1576)
    • opt(memory): Use z.Calloc for allocating KVList (#1563)
    • opt: Small memory usage optimizations (#1562)
    • KeySplits checks tables and memtables when number of splits is small. (#1544)
    • perf: Reduce memory usage by better struct packing (#1528)
    • perf(tableIterator): Don't do next on NewIterator (#1512)
    • Improvements: Manual Memory allocation via Calloc (#1459)
    • Various bug fixes: Break up list and run DropAll func (#1439)
    • Add a limit to the size of the batches sent over a stream. (#1412)
    • Commit does not panic after Finish, instead returns an error (#1396)
    • levels: Compaction incorrectly drops some delete markers (#1422)
    • Remove vlog file if bootstrap, syncDir or mmap fails (#1434)

    Features:

    • Use opencensus for tracing (#1566)
    • Export functions from Key Registry (#1561)
    • Allow sizes of block and index caches to be updated. (#1551)
    • Add metric for number of tables being compacted (#1554)
    • feat(info): Show index and bloom filter size (#1543)
    • feat(db): Add db.MaxVersion API (#1526)
    • Expose DB options in Badger. (#1521)
    • Feature: Add a Calloc based Buffer (#1471)
    • Add command to stream contents of DB into another DB. (#1463)
    • Expose NumAlloc metrics via expvar (#1470)
    • Support fully disabling the bloom filter (#1319)
    • Add --enc-key flag in badger info tool (#1441)

    New APIs

    • Badger.DB
      • CacheMaxCost (#1551)
      • Levels (#1574)
      • LevelsToString (#1574)
      • Opts (#1521)
    • Badger.Options
      • WithBaseLevelSize (#1574)
      • WithBaseTableSize (#1574)
      • WithMemTableSize (#1574)
    • Badger.KeyRegistry
      • DataKey (#1561)
      • LatestDataKey (#1561)

    Removed APIs

    • Badger.Options
      • WithKeepL0InMemory (#1555)
      • WithLevelOneSize (#1574)
      • WithLoadBloomsOnOpen (#1555)
      • WithLogRotatesToFlush (#1574)
      • WithMaxTableSize (#1574)
      • WithTableLoadingMode (#1555)
      • WithTruncate (#1555)
      • WithValueLogLoadingMode (#1555)
    Source code(tar.gz)
    Source code(zip)
  • v1.6.2(Sep 11, 2020)

    Fixed

    • Fix Sequence generates duplicate values (#1281)
    • Ensure bitValuePointer flag is cleared for LSM entry values written to LSM (#1313)
    • Confirm badgerMove entry required before rewrite (#1302)
    • Drop move keys when their key prefix is dropped (#1331)
    • Compaction: Expired keys and delete markers are never purged (#1354)
    • Restore: Account for value size as well (#1358)
    • GC: Consider size of value while rewriting (#1357)
    • Rework DB.DropPrefix (#1381)
    • Update head while replaying value log (#1372)
    • Remove vlog file if bootstrap, syncDir or mmap fails (#1434)
    • Levels: Compaction incorrectly drops some delete markers (#1422)
    • Fix(replay) - Update head for LSM entries also (#1456)
    • Fix(Backup/Restore): Keep all versions (#1462)
    • Fix build on Plan 9 (#1451)
    Source code(tar.gz)
    Source code(zip)
  • v2.2007.2(Sep 1, 2020)

    Fixed

    • Compaction: Use separate compactors for L0, L1 (#1466)
    • Rework Block and Index cache (#1473)
    • Add IsClosed method (#1478)
    • Cleanup: Avoid truncating in vlog.Open on error (#1465)
    • Cleanup: Do not close cache before compactions (#1464)

    New APIs

    • Badger.DB
      • BlockCacheMetrics (#1473)
      • IndexCacheMetrics (#1473)
    • Badger.Option
      • WithBlockCacheSize (#1473)
      • WithIndexCacheSize (#1473)

    Removed APIs [Breaking Changes]

    • Badger.DB
      • DataCacheMetrics (#1473)
      • BfCacheMetrics (#1473)
    • Badger.Option
      • WithMaxCacheSize (#1473)
      • WithMaxBfCacheSize (#1473)
      • WithKeepBlockIndicesInCache (#1473)
      • WithKeepBlocksInCache (#1473)
    Source code(tar.gz)
    Source code(zip)
  • v2.2007.1(Aug 18, 2020)

    Fixed

    • Remove vlog file if bootstrap, syncDir or mmap fails (#1434)
    • levels: Compaction incorrectly drops some delete markers (#1422)
    • Replay: Update head for LSM entries also (#1456)
    Source code(tar.gz)
    Source code(zip)
  • v2.2007.0(Aug 18, 2020)

    Fixed

    • Add a limit to the size of the batches sent over a stream. (#1412)
    • Fix Sequence generates duplicate values (#1281)
    • Fix race condition in DoesNotHave (#1287)
    • Fail fast if cgo is disabled and compression is ZSTD (#1284)
    • Proto: make badger/v2 compatible with v1 (#1293)
    • Proto: Rename dgraph.badger.v2.pb to badgerpb2 (#1314)
    • Handle duplicates in ManagedWriteBatch (#1315)
    • Ensure bitValuePointer flag is cleared for LSM entry values written to LSM (#1313)
    • DropPrefix: Return error on blocked writes (#1329)
    • Confirm badgerMove entry required before rewrite (#1302)
    • Drop move keys when their key prefix is dropped (#1331)
    • Iterator: Always add key to txn.reads (#1328)
    • Restore: Account for value size as well (#1358)
    • Compaction: Expired keys and delete markers are never purged (#1354)
    • GC: Consider size of value while rewriting (#1357)
    • Force KeepL0InMemory to be true when InMemory is true (#1375)
    • Rework DB.DropPrefix (#1381)
    • Update head while replaying value log (#1372)
    • Avoid panic on multiple closer.Signal calls (#1401)
    • Return error if vlog writes exceed 4GB (#1400)

    Performance

    • Clean up transaction oracle as we go (#1275)
    • Use cache for storing block offsets (#1336)

    Features

    • Support disabling conflict detection (#1344)
    • Add leveled logging (#1249)
    • Support entry version in Write batch (#1310)
    • Add Write method to batch write (#1321)
    • Support multiple iterators in read-write transactions (#1286)

    New APIs

    • Badger.DB
      • NewManagedWriteBatch (#1310)
      • DropPrefix (#1381)
    • Badger.Option
      • WithDetectConflicts (#1344)
      • WithKeepBlockIndicesInCache (#1336)
      • WithKeepBlocksInCache (#1336)
    • Badger.WriteBatch
      • DeleteAt (#1310)
      • SetEntryAt (#1310)
      • Write (#1321)

    Changes to Default Options

    • DefaultOptions: Set KeepL0InMemory to false (#1345)
    • Increase default valueThreshold from 32B to 1KB (#1346)

    Deprecated

    • Badger.Option
      • WithEventLogging (#1203)

    Reverts

    This section lists the changes which were reverted because of non-reproducible crashes.

    • Compress/Encrypt Blocks in the background (#1227)
    Source code(tar.gz)
    Source code(zip)
  • v20.07.0(Aug 11, 2020)

    Fixed

    • Add a limit to the size of the batches sent over a stream. (#1412)
    • Fix Sequence generates duplicate values (#1281)
    • Fix race condition in DoesNotHave (#1287)
    • Fail fast if cgo is disabled and compression is ZSTD (#1284)
    • Proto: make badger/v2 compatible with v1 (#1293)
    • Proto: Rename dgraph.badger.v2.pb to badgerpb2 (#1314)
    • Handle duplicates in ManagedWriteBatch (#1315)
    • Ensure bitValuePointer flag is cleared for LSM entry values written to LSM (#1313)
    • DropPrefix: Return error on blocked writes (#1329)
    • Confirm badgerMove entry required before rewrite (#1302)
    • Drop move keys when their key prefix is dropped (#1331)
    • Iterator: Always add key to txn.reads (#1328)
    • Restore: Account for value size as well (#1358)
    • Compaction: Expired keys and delete markers are never purged (#1354)
    • GC: Consider size of value while rewriting (#1357)
    • Force KeepL0InMemory to be true when InMemory is true (#1375)
    • Rework DB.DropPrefix (#1381)
    • Update head while replaying value log (#1372)
    • Avoid panic on multiple closer.Signal calls (#1401)
    • Return error if vlog writes exceed 4GB (#1400)

    Performance

    • Clean up transaction oracle as we go (#1275)
    • Use cache for storing block offsets (#1336)

    Features

    • Support disabling conflict detection (#1344)
    • Add leveled logging (#1249)
    • Support entry version in Write batch (#1310)
    • Add Write method to batch write (#1321)
    • Support multiple iterators in read-write transactions (#1286)

    New APIs

    • Badger.DB
      • NewManagedWriteBatch (#1310)
      • DropPrefix (#1381)
    • Badger.Option
      • WithDetectConflicts (#1344)
      • WithKeepBlockIndicesInCache (#1336)
      • WithKeepBlocksInCache (#1336)
    • Badger.WriteBatch
      • DeleteAt (#1310)
      • SetEntryAt (#1310)
      • Write (#1321)

    Changes to Default Options

    • DefaultOptions: Set KeepL0InMemory to false (#1345)
    • Increase default valueThreshold from 32B to 1KB (#1346)

    Deprecated

    • Badger.Option
      • WithEventLogging (#1203)

    Reverts

    This section lists the changes which were reverted because of non-reproducible crashes.

    • Compress/Encrypt Blocks in the background (#1227)
    Source code(tar.gz)
    Source code(zip)
  • v20.07.0-rc3(Jul 21, 2020)

  • v20.07.0-rc2(Jul 15, 2020)

  • v20.07.0-rc1(Jul 11, 2020)

    Fixed

    • Fix Sequence generates duplicate values (#1281)
    • Fix race condition in DoesNotHave (#1287)
    • Fail fast if cgo is disabled and compression is ZSTD (#1284)
    • Proto: make badger/v2 compatible with v1 (#1293)
    • Proto: Rename dgraph.badger.v2.pb to badgerpb2 (#1314)
    • Handle duplicates in ManagedWriteBatch (#1315)
    • Ensure bitValuePointer flag is cleared for LSM entry values written to LSM (#1313)
    • DropPrefix: Return error on blocked writes (#1329)
    • Confirm badgerMove entry required before rewrite (#1302)
    • Drop move keys when their key prefix is dropped (#1331)
    • Iterator: Always add key to txn.reads (#1328)
    • Restore: Account for value size as well (#1358)
    • Compaction: Expired keys and delete markers are never purged (#1354)
    • GC: Consider size of value while rewriting (#1357)
    • Force KeepL0InMemory to be true when InMemory is true (#1375)
    • Rework DB.DropPrefix (#1381)
    • Update head while replaying value log (#1372)
    • Avoid panic on multiple closer.Signal calls (#1401)
    • Return error if vlog writes exceed 4GB (#1400)

    Performance

    • Clean up transaction oracle as we go (#1275)
    • Use cache for storing block offsets (#1336)

    Features

    • Support disabling conflict detection (#1344)
    • Add leveled logging (#1249)
    • Support entry version in Write batch (#1310)
    • Add Write method to batch write (#1321)
    • Support multiple iterators in read-write transactions (#1286)

    New APIs

    • Badger.DB
      • NewManagedWriteBatch (#1310)
      • DropPrefix (#1381)
    • Badger.Option
      • WithDetectConflicts (#1344)
      • WithKeepBlockIndicesInCache (#1336)
      • WithKeepBlocksInCache (#1336)
    • Badger.WriteBatch
      • DeleteAt (#1310)
      • SetEntryAt (#1310)
      • Write (#1321)

    Changes to Default Options

    • DefaultOptions: Set KeepL0InMemory to false (#1345)
    • Increase default valueThreshold from 32B to 1KB (#1346)

    Deprecated

    • Badger.Option
      • WithEventLogging (#1203)

    Reverts

    This section lists the changes which were reverted because of non-reproducible crashes.

    • Compress/Encrypt Blocks in the background (#1227)
    Source code(tar.gz)
    Source code(zip)
  • v2.0.3(Mar 27, 2020)

    Fixed

    • Add support for watching nil prefix in subscribe API (#1246)

    Performance

    • Compress/Encrypt Blocks in the background (#1227)
    • Disable cache by default (#1257)

    Features

    • Add BypassDirLock option (#1243)
    • Add separate cache for bloomfilters (#1260)

    New APIs

    • badger.DB
      • BfCacheMetrics (#1260)
      • DataCacheMetrics (#1260)
    • badger.Options
      • WithBypassLockGuard (#1243)
      • WithLoadBloomsOnOpen (#1260)
      • WithMaxBfCacheSize (#1260)
    Source code(tar.gz)
    Source code(zip)
  • v2.0.3-rc1(Mar 26, 2020)

    Fixed

    • Add support for watching nil prefix in subscribe API (#1246)

    Performance

    • Compress/Encrypt Blocks in the background (#1227)
    • Disable cache by default (#1257)

    Features

    • Add BypassDirLock option (#1243)
    • Add separate cache for bloomfilters (#1260)

    New APIs

    • badger.DB
      • BfCacheMetrics (#1260)
      • DataCacheMetrics (#1260)
    • badger.Options
      • WithBypassLockGuard (#1243)
      • WithLoadBloomsOnOpen (#1260)
      • WithMaxBfCacheSize (#1260)
    Source code(tar.gz)
    Source code(zip)
  • v1.6.1(Mar 26, 2020)

    New APIs

    • Badger.DB
      • NewWriteBatchAt (#948)
    • Badger.Options
      • WithEventLogging (#1035)
      • WithVerifyValueChecksum (#1052)
      • WithBypassLockGuard (#1243)

    Features

    • Support checksum verification for values read from vlog (#1052)
    • Add EventLogging option (#1035)
    • Support WriteBatch API in managed mode (#948)
    • Add support for watching nil prefix in Subscribe API (#1246)

    Fixed

    • Initialize vlog before starting compactions in db.Open (#1226)
    • Fix int overflow for 32bit (#1216)
    • Remove the 'this entry should've caught' log from value.go (#1170)
    • Fix merge iterator duplicates issue (#1157)
    • Fix segmentation fault in vlog.Read (header.Decode) (#1150)
    • Fix VerifyValueChecksum checks (#1138)
    • Fix windows dataloss issue (#1134)
    • Fix request increment ref bug (#1121)
    • Limit manifest's change set size (#1119)
    • Fix deadlock in discard stats (#1070)
    • Acquire lock before unmapping vlog files (#1050)
    • Set move key's expiresAt for keys with TTL (#1006)
    • Fix deadlock when flushing discard stats. (#976)
    • Fix table.Smallest/Biggest and iterator Prefix bug (#997)
    • Fix boundaries on GC batch size (#987)
    • Lock log file before munmap (#949)
    • VlogSize to store correct directory name to expvar.Map (#956)
    • Fix transaction too big issue in restore (#957)
    • Fix race condition in updateDiscardStats (#973)
    • Cast results of len to uint32 to fix compilation in i386 arch. (#961)
    • Drop discard stats if we can't unmarshal it (#936)
    • Open all vlog files in RDWR mode (#923)
    • Fix race condition in flushDiscardStats function (#921)
    • Ensure rewrite in vlog is within transactional limits (#911)
    • Fix prefix bug in key iterator and allow all versions (#950)
    • Fix discard stats moved by GC bug (#929)

    Performance

    • Use fastRand instead of locked-rand in skiplist (#1173)
    • Fix checkOverlap in compaction (#1166)
    • Optimize createTable in stream_writer.go (#1132)
    • Add capacity to slice creation when capacity is known (#1103)
    • Introduce fast merge iterator (#1080)
    • Introduce StreamDone in Stream Writer (#1061)
    • Flush vlog buffer if it grows beyond threshold (#1067)
    • Binary search based table picker (#983)
    • Making the stream writer APIs goroutine-safe (#959)
    • Replace FarmHash with AESHash for Oracle conflicts (#952)
    • Change file picking strategy in compaction (#894)
    • Use trie for prefix matching (#851)
    • Fix busy-wait loop in Watermark (#920)
    Source code(tar.gz)
    Source code(zip)
  • v1.6.1-rc1(Mar 24, 2020)

    New APIs

    • Badger.DB
      • NewWriteBatchAt (#948)
    • Badger.Options
      • WithEventLogging (#1035)
      • WithVerifyValueChecksum (#1052)
      • WithBypassLockGuard (#1243)

    Features

    • Support checksum verification for values read from vlog (#1052)
    • Add EventLogging option (#1035)
    • Support WriteBatch API in managed mode (#948)
    • Add support for watching nil prefix in Subscribe API (#1246)

    Fixed

    • Initialize vlog before starting compactions in db.Open (#1226)
    • Fix int overflow for 32bit (#1216)
    • Remove the 'this entry should've caught' log from value.go (#1170)
    • Fix merge iterator duplicates issue (#1157)
    • Fix segmentation fault in vlog.Read (header.Decode) (#1150)
    • Fix VerifyValueChecksum checks (#1138)
    • Fix windows dataloss issue (#1134)
    • Fix request increment ref bug (#1121)
    • Limit manifest's change set size (#1119)
    • Fix deadlock in discard stats (#1070)
    • Acquire lock before unmapping vlog files (#1050)
    • Set move key's expiresAt for keys with TTL (#1006)
    • Fix deadlock when flushing discard stats. (#976)
    • Fix table.Smallest/Biggest and iterator Prefix bug (#997)
    • Fix boundaries on GC batch size (#987)
    • Lock log file before munmap (#949)
    • VlogSize to store correct directory name to expvar.Map (#956)
    • Fix transaction too big issue in restore (#957)
    • Fix race condition in updateDiscardStats (#973)
    • Cast results of len to uint32 to fix compilation in i386 arch. (#961)
    • Drop discard stats if we can't unmarshal it (#936)
    • Open all vlog files in RDWR mode (#923)
    • Fix race condition in flushDiscardStats function (#921)
    • Ensure rewrite in vlog is within transactional limits (#911)
    • Fix prefix bug in key iterator and allow all versions (#950)
    • Fix discard stats moved by GC bug (#929)

    Performance

    • Use fastRand instead of locked-rand in skiplist (#1173)
    • Fix checkOverlap in compaction (#1166)
    • Optimize createTable in stream_writer.go (#1132)
    • Add capacity to slice creation when capacity is known (#1103)
    • Introduce fast merge iterator
    • Introduce StreamDone in Stream Writer (#1061)
    • Flush vlog buffer if it grows beyond threshold (#1067)
    • Binary search based table picker (#983)
    • Making the stream writer APIs goroutine-safe (#959)
    • Replace FarmHash with AESHash for Oracle conflicts (#952)
    • Change file picking strategy in compaction (#894)
    • Use trie for prefix matching (#851)
    • Fix busy-wait loop in Watermark (#920)
    Source code(tar.gz)
    Source code(zip)
  • v2.0.2(Mar 2, 2020)

    Fixed

    • Cast sz to uint32 to fix compilation on 32 bit. (#1175)
    • Fix checkOverlap in compaction. (#1166)
    • Avoid sync in inmemory mode. (#1190)
    • Support disabling the cache completely. (#1185)
    • Add support for caching bloomfilters. (#1204)
    • Fix int overflow for 32bit. (#1216)
    • Remove the 'this entry should've caught' log from value.go. (#1170)
    • Rework concurrency semantics of valueLog.maxFid. (#1187)

    Performance

    • Use fastRand instead of locked-rand in skiplist. (#1173)
    • Improve write stalling on level 0 and 1. (#1186)
    • Disable compression and set ZSTD Compression Level to 1. (#1191)
    Source code(tar.gz)
    Source code(zip)
  • v2.0.2-rc1(Feb 26, 2020)

    Fixed

    • Cast sz to uint32 to fix compilation on 32 bit. (#1175)
    • Fix checkOverlap in compaction. (#1166)
    • Avoid sync in inmemory mode. (#1190)
    • Support disabling the cache completely. (#1185)
    • Add support for caching bloomfilters. (#1204)
    • Fix int overflow for 32bit. (#1216)
    • Remove the 'this entry should've caught' log from value.go. (#1170)
    • Rework concurrency semantics of valueLog.maxFid. (#1187)

    Performance

    • Use fastRand instead of locked-rand in skiplist. (#1173)
    • Improve write stalling on level 0 and 1. (#1186)
    • Disable compression and set ZSTD Compression Level to 1. (#1191)
    Source code(tar.gz)
    Source code(zip)
  • v2.0.1(Jan 2, 2020)

    New APIs

    • badger.Options

      • WithInMemory (f5b6321)
      • WithZSTDCompressionLevel (3eb4e72)
    • Badger.TableInfo

      • EstimatedSz (f46f8ea)

    Features

    • Introduce in-memory mode in badger. (#1113)

    Fixed

    • Limit manifest's change set size. (#1119)
    • Cast idx to uint32 to fix compilation on i386. (#1118)
    • Fix request increment ref bug. (#1121)
    • Fix windows dataloss issue. (#1134)
    • Fix VerifyValueChecksum checks. (#1138)
    • Fix encryption in stream writer. (#1146)
    • Fix segmentation fault in vlog.Read. (header.Decode) (#1150)
    • Fix merge iterator duplicates issue. (#1157)

    Performance

    • Set level 15 as default compression level in Zstd. (#1111)
    • Optimize createTable in stream_writer.go. (#1132)
    Source code(tar.gz)
    Source code(zip)
  • v2.0.1-rc1(Dec 23, 2019)

    New APIs

    • badger.Options

      • WithInMemory (f5b6321)
      • WithZSTDCompressionLevel (3eb4e72)
    • Badger.TableInfo

      • EstimatedSz (f46f8ea)

    Features

    • Introduce in-memory mode in badger. (#1113)

    Fixed

    • Limit manifest's change set size. (#1119)
    • Cast idx to uint32 to fix compilation on i386. (#1118)
    • Fix request increment ref bug. (#1121)
    • Fix windows dataloss issue. (#1134)
    • Fix VerifyValueChecksum checks. (#1138)
    • Fix encryption in stream writer. (#1146)
    • Fix segmentation fault in vlog.Read. (header.Decode) (#1150)
    • Fix merge iterator duplicates issue. (#1157)

    Performance

    • Set level 15 as default compression level in Zstd. (#1111)
    • Optimize createTable in stream_writer.go. (#1132)
    Source code(tar.gz)
    Source code(zip)
  • v2.0.0(Nov 13, 2019)

    New features

    The main new features are:

    Others

    There are various bug fixes, optimizations, and new options. See the CHANGELOG for details.

    Source code(tar.gz)
    Source code(zip)
  • v1.6.0(Jul 3, 2019)

    BadgerDB has changed a lot over the last year, so we released a new version with a brand new API.

    Read our CHANGELOG for more details on the exact changes, or the announcement post on our blog.

    New features

    The main new features are:

    • The Stream framework has been migrated from Dgraph into BadgerDB.
    • A new StreamWriter was added for concurrent writes for sorted streams.
    • You can now subscribe to changes in a DB with the DB.Subscribe method.
    • A new builder API has been added to reduce the boilerplate related to badger.Options.

    Breaking API changes

    The following changes might impact your code:

    • badger.ManagedDB has been deprecated and merged into badger.DB. You can still use badger.OpenManaged.
    • The badger.Options.DoNotCompact option has been removed.
    • badger.DefaultOptions and badger.LSMOnlyOptions are now functions that receive a directory path as a parameter.
    • All the methods on badger.Txn with name starting in SetWith have been deprecated and replaced with a builder API for type badger.Entry.
    • badger.Item.Value now receives a function that returns an error.
    • badger.Txn.Commit doesn't receive any params anymore.
    • badger.DB.Tables now accepts a boolean to decide whether keys should be counted.

    Others

    Many new commands and flags have been added to the badger CLI tool, read the CHANGELOG for more details.

    Source code(tar.gz)
    Source code(zip)
  • v2.0.0-rc1(Jun 20, 2019)

    BadgerDB has changed a lot over the last year, so we released a new version with a brand new API.

    BadgerDB v2.0.0 corresponds to the state of master as of June 20th, so if you're using latest you should not have any issues upgrading.

    Read our CHANGELOG for more details on the exact changes.

    New features

    The main new features are:

    • The Stream framework has been migrated from Dgraph into BadgerDB.
    • A new StreamWriter was added for concurrent writes for sorted streams.
    • You can now subscribe to changes in a DB with the DB.Subscribe method.
    • A new builder API has been added to reduce the boilerplate related to badger.Options.

    Breaking API changes

    The following changes might impact your code:

    • badger.ManagedDB has been deprecated and merged into badger.DB. You can still use badger.OpenManaged.
    • The badger.Options.DoNotCompact option has been removed.
    • badger.DefaultOptions and badger.LSMOnlyOptions are now functions that receive a directory path as a parameter.
    • All the methods on badger.Txn with name starting in SetWith have been deprecated and replaced with a builder API for type badger.Entry.
    • badger.Item.Value now receives a function that returns an error.
    • badger.Txn.Commit doesn't receive any params anymore.
    • badger.DB.Tables now accepts a boolean to decide whether keys should be counted.

    Others

    Many new commands and flags have been added to the badger CLI tool, read the CHANGELOG for more details.

    Source code(tar.gz)
    Source code(zip)
  • v1.5.5(Jun 20, 2019)

  • v1.5.3(Jul 11, 2018)
