A lightweight document-oriented NoSQL database written in pure Golang.



🇬🇧 English | 🇨🇳 简体中文 | 🇪🇸 Spanish

CloverDB is a lightweight NoSQL database designed to be simple and easily maintainable, thanks to its small code base. It was inspired by tinyDB.


Features

  • Document oriented
  • Written in pure Golang
  • Simple and intuitive API
  • Easily maintainable

Why CloverDB?

CloverDB has been written to be easily maintainable. As such, it trades performance for simplicity and is not intended to be an alternative to more performant databases such as MongoDB or MySQL. However, there are projects where running a separate database server may be overkill and where, for simple queries, network latency is the main performance bottleneck. For such scenarios, CloverDB may be a more suitable alternative.

Database Layout

CloverDB abstracts the way collections are stored on disk through the StorageEngine interface. The default implementation is based on the Badger key-value store. However, you can easily write your own storage engine implementation.
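As a rough illustration of the idea, a custom engine boils down to implementing a small key-value contract on top of which collections are stored. The interface below is a hypothetical sketch, not clover's actual StorageEngine signature:

```go
package main

import (
	"errors"
	"fmt"
)

// KVEngine is a hypothetical sketch of a storage-engine contract;
// the real clover StorageEngine interface has a different signature.
type KVEngine interface {
	Get(key string) ([]byte, error)
	Put(key string, value []byte) error
	Delete(key string) error
}

var errNotFound = errors.New("key not found")

// memEngine is a toy in-memory implementation, analogous in spirit
// to clover's in-memory mode.
type memEngine struct {
	data map[string][]byte
}

func (m *memEngine) Get(key string) ([]byte, error) {
	v, ok := m.data[key]
	if !ok {
		return nil, errNotFound
	}
	return v, nil
}

func (m *memEngine) Put(key string, value []byte) error {
	m.data[key] = value
	return nil
}

func (m *memEngine) Delete(key string) error {
	delete(m.data, key)
	return nil
}

func main() {
	var e KVEngine = &memEngine{data: map[string][]byte{}}
	e.Put("myCollection/doc1", []byte(`{"hello":"clover!"}`))
	v, _ := e.Get("myCollection/doc1")
	fmt.Println(string(v))
}
```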


Installation

Make sure you have a working Go environment (Go 1.13 or higher is required).

  GO111MODULE=on go get github.com/ostafen/clover

Databases and Collections

CloverDB stores data records as JSON documents, which are grouped together in collections. A database is made up of one or more collections.


To store documents inside collections, you have to open a Clover database using the Open() function.

import (
	c "github.com/ostafen/clover"
)

db, _ := c.Open("clover-db")

// or, if you don't need persistency
db, _ := c.Open("", c.InMemoryMode(true))

defer db.Close() // remember to close the db when you're done


CloverDB stores documents inside collections. Collections are the schemaless equivalent of tables in relational databases. A collection is created by calling the CreateCollection() function on a database instance. New documents can be inserted using the Insert() or InsertOne() methods. Each document is uniquely identified by a Version 4 UUID stored in the _id special field and generated during insertion.

db, _ := c.Open("clover-db")
db.CreateCollection("myCollection") // create a new collection named "myCollection"

// insert a new document inside the collection
doc := c.NewDocument()
doc.Set("hello", "clover!")

// InsertOne returns the id of the inserted document
docId, _ := db.InsertOne("myCollection", doc)
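As an aside, the version-4 UUID stored in the _id field is just 16 random bytes with the version and variant bits set. CloverDB relies on a UUID library internally; the format itself can be sketched in plain Go as follows (illustrative only):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newUUIDv4 sketches how a version-4 UUID (the format of clover's
// _id field) is built: 16 random bytes with version and variant
// bits set, formatted as 8-4-4-4-12 hex groups.
func newUUIDv4() (string, error) {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		return "", err
	}
	b[6] = (b[6] & 0x0f) | 0x40 // version 4
	b[8] = (b[8] & 0x3f) | 0x80 // RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16]), nil
}

func main() {
	id, _ := newUUIDv4()
	fmt.Println(id) // e.g. 1dbce353-d3c6-43b3-b5a8-80d8d876389b
}
```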

Importing and Exporting Collections

CloverDB can easily import and export collections to and from JSON, regardless of the storage engine used.

// dump the content of the "todos" collection in a "todos.json" file
db.ExportCollection("todos", "todos.json")


// recover the todos collection from the exported json file
db.ImportCollection("todos", "todos.json")

docs, _ := db.Query("todos").FindAll()
for _, doc := range docs {
	log.Println(doc)
}

Queries

CloverDB is equipped with a fluent and elegant API to query your data. A query is represented by the Query object, which allows you to retrieve documents matching a given criterion. A query is created by passing a valid collection name to the Query() method.

Select All Documents in a Collection

The FindAll() method is used to retrieve all documents satisfying a given query.

docs, _ := db.Query("myCollection").FindAll()

todo := &struct {
    Completed bool   `clover:"completed"`
    Title     string `clover:"title"`
    UserId    int    `clover:"userId"`
}{}

for _, doc := range docs {
    doc.Unmarshal(todo)
    log.Println(todo)
}
Filter Documents with Criteria

In order to filter the documents returned by FindAll(), you have to specify a query Criteria using the Where() method. A Criteria object simply represents a predicate on a document, evaluating to true only if the document satisfies all the query conditions.

The following example shows how to build a simple Criteria, matching all the documents having the completed field equal to true.

db.Query("todos").Where(c.Field("completed").Eq(true)).FindAll()

// or equivalently
db.Query("todos").Where(c.Field("completed").IsTrue()).FindAll()
To build more complex queries, you can chain multiple Criteria objects using the And() and Or() methods, each returning a new Criteria obtained by applying the corresponding logical operator.

// find all completed todos belonging to users with id 5 or 8
db.Query("todos").Where(c.Field("completed").Eq(true).And(c.Field("userId").In(5, 8))).FindAll()
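Conceptually, a Criteria is just a composable predicate on a document. The toy combinators below mirror the And/Or semantics described above; this is an illustration of the concept, not clover's implementation:

```go
package main

import "fmt"

// Doc stands in for a clover document.
type Doc map[string]interface{}

// Criteria mirrors the idea of a predicate on a document:
// it evaluates to true only if the document matches.
type Criteria func(Doc) bool

// FieldEq builds a predicate checking a field for equality.
func FieldEq(name string, value interface{}) Criteria {
	return func(d Doc) bool { return d[name] == value }
}

// And returns a new Criteria that is true when both operands are true.
func (c Criteria) And(other Criteria) Criteria {
	return func(d Doc) bool { return c(d) && other(d) }
}

// Or returns a new Criteria that is true when either operand is true.
func (c Criteria) Or(other Criteria) Criteria {
	return func(d Doc) bool { return c(d) || other(d) }
}

func main() {
	q := FieldEq("completed", true).And(FieldEq("userId", 5).Or(FieldEq("userId", 8)))
	fmt.Println(q(Doc{"completed": true, "userId": 5})) // true
	fmt.Println(q(Doc{"completed": true, "userId": 3})) // false
}
```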

Sorting Documents

To sort documents in CloverDB, use Sort(). It is a variadic function accepting a sequence of SortOption, each of which specifies a field and a sorting direction. A sorting direction can be 1 or -1, corresponding to ascending and descending order, respectively. If no SortOption is provided, Sort() uses the _id field by default.

// Find any todo belonging to the most recently inserted user
db.Query("todos").Sort(c.SortOption{Field: "userId", Direction: -1}).FindFirst()
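In plain-Go terms, direction 1 and -1 map to ascending and descending comparisons. A minimal sketch of what sorting with direction -1 means, using the standard sort package rather than clover internals:

```go
package main

import (
	"fmt"
	"sort"
)

// Doc stands in for a clover document.
type Doc map[string]interface{}

// sortByField sorts docs on an int64 field; direction 1 is ascending,
// -1 descending, mirroring clover's SortOption semantics (illustration only).
func sortByField(docs []Doc, field string, direction int) {
	sort.SliceStable(docs, func(i, j int) bool {
		a, b := docs[i][field].(int64), docs[j][field].(int64)
		if direction < 0 {
			return a > b
		}
		return a < b
	})
}

func main() {
	docs := []Doc{{"userId": int64(1)}, {"userId": int64(3)}, {"userId": int64(2)}}
	sortByField(docs, "userId", -1)
	fmt.Println(docs[0]["userId"]) // 3
}
```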

Skip/Limit Documents

Sometimes, it can be useful to discard some documents from the output, or simply to cap the number of results returned by a query. For this purpose, CloverDB provides the Skip() and Limit() functions, both accepting an integer n as a parameter.

// discard the first 10 documents from the output,
// also limiting the maximum number of query results to 100
db.Query("todos").Skip(10).Limit(100).FindAll()
Update/Delete Documents

The Update() method is used to modify specific fields of documents in a collection. The Delete() method is used to delete documents. Both methods belong to the Query object, making it easy to update or delete documents matching a particular query.

// mark all todos belonging to user with id 1 as completed
updates := make(map[string]interface{})
updates["completed"] = true

db.Query("todos").Where(c.Field("userId").Eq(1)).Update(updates)

// delete all todos belonging to users with id 5 or 8
db.Query("todos").Where(c.Field("userId").In(5, 8)).Delete()
To update or delete a single document using a specific document id, use UpdateById() or DeleteById(), respectively:

docId := "1dbce353-d3c6-43b3-b5a8-80d8d876389b"
// update the document with the specified id
db.Query("todos").UpdateById(docId, map[string]interface{}{"completed": true})
// or delete it
db.Query("todos").DeleteById(docId)

Data Types

Internally, CloverDB supports the following primitive data types: int64, uint64, float64, string, bool and time.Time. When possible, values having different types are silently converted to one of the internal types: signed integer values are converted to int64, unsigned ones to uint64, and float32 values are widened to float64.

For example, consider the following snippet, which sets a uint8 value on a given document field:

doc := c.NewDocument()
doc.Set("myField", uint8(10)) // "myField" is automatically promoted to uint64
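The promotion rules above can be sketched as a plain-Go normalization step. This is illustrative only, not clover's actual code:

```go
package main

import "fmt"

// normalize sketches the conversion rules described above:
// signed integers become int64, unsigned integers become uint64,
// and float32 widens to float64; other values pass through.
func normalize(v interface{}) interface{} {
	switch x := v.(type) {
	case int:
		return int64(x)
	case int8:
		return int64(x)
	case int16:
		return int64(x)
	case int32:
		return int64(x)
	case uint:
		return uint64(x)
	case uint8:
		return uint64(x)
	case uint16:
		return uint64(x)
	case uint32:
		return uint64(x)
	case float32:
		return float64(x)
	default:
		return v
	}
}

func main() {
	fmt.Printf("%T\n", normalize(int32(10)))  // int64
	fmt.Printf("%T\n", normalize(uint8(10)))  // uint64
	fmt.Printf("%T\n", normalize(float32(1))) // float64
}
```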


Pointer values are dereferenced until either nil or a non-pointer value is found:

var x int = 10
var ptr *int = &x
var ptr1 **int = &ptr

doc.Set("ptr", ptr)
doc.Set("ptr1", ptr1)

fmt.Println(doc.Get("ptr").(int64) == 10)
fmt.Println(doc.Get("ptr1").(int64) == 10)

ptr = nil

doc.Set("ptr1", ptr1)
// ptr1 is not nil, but it points to the nil "ptr" pointer, so the field is set to nil
fmt.Println(doc.Get("ptr1") == nil)
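The dereferencing behavior can be reproduced with reflection. A minimal sketch of the "deref until nil or non-pointer" rule (not clover's actual implementation):

```go
package main

import (
	"fmt"
	"reflect"
)

// deref follows pointers until it reaches either nil or a
// non-pointer value, mirroring the rule described above.
func deref(v interface{}) interface{} {
	rv := reflect.ValueOf(v)
	for rv.Kind() == reflect.Ptr {
		if rv.IsNil() {
			return nil
		}
		rv = rv.Elem()
	}
	if !rv.IsValid() {
		return nil
	}
	return rv.Interface()
}

func main() {
	x := 10
	ptr := &x
	ptr1 := &ptr
	fmt.Println(deref(ptr1)) // 10

	ptr = nil
	// ptr1 now points to a nil pointer, so deref yields nil
	fmt.Println(deref(ptr1) == nil) // true
}
```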

Setting a field to an unsupported type leaves the document untouched:

doc := c.NewDocument()
doc.Set("myField", make(chan struct{}))

log.Println(doc.Has("myField")) // will output false


Contributing

CloverDB is actively developed. Any contribution, in the form of a suggestion, bug report or pull request, is welcome 😊
