Document-oriented, embedded SQL database

Overview

Genji is a schemaless database that allows running SQL queries on documents.

Check out the SQL documentation, the Go doc, and the usage example in the README to get started quickly.

⚠️ Genji's API is still unstable: Database compatibility is not guaranteed before reaching v1.0.0

Features

  • Optional schemas: Genji tables are schemaless, but it is possible to add constraints on any field to ensure the coherence of data within a table.
  • Multiple Storage Engines: It is possible to store data on disk or in RAM, and to choose between B-trees and LSM trees. Genji relies on BoltDB and Badger to manage data.
  • Transaction support: Read-only and read/write transactions are supported by default.
  • SQL and Documents: Genji mixes the best of both worlds by combining powerful SQL commands with JSON documents.
  • Easy to use, easy to learn: Genji was designed with simplicity in mind. It is really easy to insert and read documents of any shape.
  • Compatible with the database/sql package

Installation

Install the Genji database

go get github.com/genjidb/genji

Usage

There are two ways of using Genji, either by using Genji's API or by using the database/sql package.

Using Genji's API

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/genjidb/genji"
    "github.com/genjidb/genji/document"
)

func main() {
    // Create a database instance, here we'll store everything on-disk using the BoltDB engine
    db, err := genji.Open("my.db")
    if err != nil {
        log.Fatal(err)
    }
    // Don't forget to close the database when you're done
    defer db.Close()

    // Attach context, e.g. (*http.Request).Context().
    db = db.WithContext(context.Background())

    // Create a table. Schemas are optional, you don't need to specify one if not needed
    err = db.Exec("CREATE TABLE user")

    // Create an index
    err = db.Exec("CREATE INDEX idx_user_name ON user (name)")

    // Insert some data
    err = db.Exec("INSERT INTO user (id, name, age) VALUES (?, ?, ?)", 10, "Foo1", 15)

    // Supported values can go from simple integers to richer data types like lists or documents
    err = db.Exec(`
    INSERT INTO user (id, name, age, address, friends)
    VALUES (
        11,
        'Foo2',
        20,
        {"city": "Lyon", "zipcode": "69001"},
        ["foo", "bar", "baz"]
    )`)

    // Go structures can be passed directly
    type User struct {
        ID              uint
        Name            string
        TheAgeOfTheUser float64 `genji:"age"`
        Address         struct {
            City    string
            ZipCode string
        }
    }

    // Let's create a user
    u := User{
        ID:              20,
        Name:            "foo",
        TheAgeOfTheUser: 40,
    }
    u.Address.City = "Lyon"
    u.Address.ZipCode = "69001"

    err = db.Exec(`INSERT INTO user VALUES ?`, &u)

    // Query some documents
    res, err := db.Query("SELECT id, name, age, address FROM user WHERE age >= ?", 18)
    if err != nil {
        log.Fatal(err)
    }
    // Always close the result when you're done with it
    defer res.Close()

    // Iterate over the results
    err = res.Iterate(func(d document.Document) error {
        // When querying an explicit list of fields, you can use the Scan function to scan them
        // in order. Note that the types don't have to match exactly the types stored in the table
        // as long as they are compatible.
        var id int
        var name string
        var age int32
        var address struct {
            City    string
            ZipCode string
        }

        err = document.Scan(d, &id, &name, &age, &address)
        if err != nil {
            return err
        }

        fmt.Println(id, name, age, address)

        // It is also possible to scan the results into a structure
        var u User
        err = document.StructScan(d, &u)
        if err != nil {
            return err
        }

        fmt.Println(u)

        // Or scan into a map
        var m map[string]interface{}
        err = document.MapScan(d, &m)
        if err != nil {
            return err
        }

        fmt.Println(m)
        return nil
    })
}

Using database/sql

// import Genji as a blank import
import _ "github.com/genjidb/genji/sql/driver"

// Create a database/sql DB instance
db, err := sql.Open("genji", "my.db")
if err != nil {
    log.Fatal(err)
}
defer db.Close()

// Then use db as usual
res, err := db.ExecContext(...)
rows, err := db.Query(...)
row := db.QueryRow(...)

Engines

Genji currently supports storing data in BoltDB, Badger and in-memory.

Using the BoltDB engine

import (
    "log"

    "github.com/genjidb/genji"
)

func main() {
    db, err := genji.Open("my.db")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
}

Using the memory engine

import (
    "log"

    "github.com/genjidb/genji"
)

func main() {
    db, err := genji.Open(":memory:")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
}

Using the Badger engine

First install the module

go get github.com/genjidb/genji/engine/badgerengine

import (
    "context"
    "log"

    "github.com/genjidb/genji"
    "github.com/genjidb/genji/engine/badgerengine"
    "github.com/dgraph-io/badger/v2"
)

func main() {
    // Create a badger engine
    ng, err := badgerengine.NewEngine(badger.DefaultOptions("mydb"))
    if err != nil {
        log.Fatal(err)
    }

    // Pass it to genji
    db, err := genji.New(context.Background(), ng)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
}

Genji shell

The genji command line provides an SQL shell that can be used to create, modify and query Genji databases.

Make sure the Genji command line is installed:

go get github.com/genjidb/genji/cmd/genji

Example:

# Opening an in-memory database:
genji

# Opening a BoltDB database:
genji my.db

# Opening a Badger database:
genji --badger pathToData

Contributing

Contributions are welcome!

See ARCHITECTURE.md and CONTRIBUTING.md.

Thank you, contributors!

If you have any doubt, join the Gophers Slack channel or open an issue.

Issues
  • Database is deadlocking

    Database is deadlocking

    Hi, I've been experimenting with Genji v0.8.0 in one of my apps:

    https://github.com/simpleiot/simpleiot/blob/feature-genji2/db/genji/genji.go

    After I click around in the frontend a bit, API calls start timing out.

    I instrumented the db calls, and learned that if one of the calls starts before the previous one finishes, the API calls start timing out soon after that.

    Am I doing anything obviously wrong? I'm using the BoltDB backend and thought BoltDB was thread-safe. Do I need to wrap all db operations in transactions?

    bug 
    opened by cbrake 13
  • Added go-fuzz for parser

    Added go-fuzz for parser

    It’d be nice to integrate continuous fuzzing too, but I’m not aware of any easy to set up (and free) services.

    See also #252

    opened by tie 10
  • Automate releases

    Automate releases

    This PR automates publishing new releases.

    To publish a new release, just git push release. The new version is automatically inferred based on API changes. Alternatively, it’s possible to override this behavior (e.g. if we have breaking changes in cmd/genji but not in Go API) by pushing to release-v0.13.0 branch, or manually dispatching a workflow with version inputs.

    The workflow then creates version bump commits for Genji’s submodules and tags them. In the end it creates a draft release with changelog (additionally using CHANGELOG.md if it exists). When GitHub release is published, another CI workflow kicks in that builds and uploads binaries as release assets.

    Additionally, version bump commits are only reachable from tags so it should be safe to dispatch the workflow on main branch.

    That said, this should also eliminate the need to manually manage unreleased versions in go.mod.

    workflow

    Build matrix
    Build matrix

    Closes #269

    opened by tie 9
  • tableInfoStore should be scoped to transaction

    tableInfoStore should be scoped to transaction

    Currently database.Database holds a reference to tableInfoStore that is shared between all transactions which, among other things, may create/rename/alter/drop tables. This violates the transaction isolation.

    Make sure INSERT is indeed isolated.
    package main
    
    import (
    	"github.com/dgraph-io/badger/v2"
    	"github.com/genjidb/genji"
    	"github.com/genjidb/genji/engine/badgerengine"
    )
    
    func main() {
    	ng, err := badgerengine.NewEngine(badger.DefaultOptions("").WithInMemory(true))
    	if err != nil { panic(err) }
    	db, err := genji.New(ng)
    	if err != nil { panic(err) }
    
    	err = db.Exec("CREATE TABLE tb (id INTEGER PRIMARY KEY)")
    	if err != nil { panic(err) }
    
    	// Does panic with "duplicate document" error unless committed.
    	for i := 0; i < 2; i++ {
    		tx, err := db.Begin(true)
    		if err != nil { panic(err) }
    		defer tx.Rollback()
    		err = tx.Exec("INSERT INTO tb (id) VALUES (?)", 42)
    		if err != nil { panic(err) }
    	}
    }
    
    Make sure that CREATE INDEX is indeed isolated.
    package main
    
    import (
    	"github.com/dgraph-io/badger/v2"
    	"github.com/genjidb/genji"
    	"github.com/genjidb/genji/engine/badgerengine"
    )
    
    func main() {
    	ng, err := badgerengine.NewEngine(badger.DefaultOptions("").WithInMemory(true))
    	if err != nil { panic(err) }
    	db, err := genji.New(ng)
    	if err != nil { panic(err) }
    
    	err = db.Exec("CREATE TABLE tb")
    	if err != nil { panic(err) }
    
    	// Does panic with "index already exists" error unless committed.
    	for i := 0; i < 2; i++ {
    		tx, err := db.Begin(true)
    		if err != nil { panic(err) }
    		err = tx.Exec("CREATE UNIQUE INDEX idx ON tb(id)")
    		if err != nil { panic(err) }
    	}
    }
    
    Reproduce the bug. Panics with "table already exists" error.
    package main
    
    import (
    	"github.com/dgraph-io/badger/v2"
    	"github.com/genjidb/genji"
    	"github.com/genjidb/genji/engine/badgerengine"
    )
    
    func main() {
    	ng, err := badgerengine.NewEngine(badger.DefaultOptions("").WithInMemory(true))
    	if err != nil { panic(err) }
    	db, err := genji.New(ng)
    	if err != nil { panic(err) }
    
    	// We never commit the transaction, so these changes should be isolated
    	// from other concurrent (but not necessarily parallel) transactions.
    	for i := 0; i < 2; i++ {
    		tx, err := db.Begin(true)
    		if err != nil { panic(err) }
    		err = tx.Exec("CREATE TABLE tb")
    		if err != nil { panic(err) }
    	}
    }
    

    Tangentially related to #210 since it needs concurrent transactions.

    opened by tie 9
  • Support AUTO_INCREMENT.

    Support AUTO_INCREMENT.

    This a proposal draft for #43.

    SQL Server seems like a good model because it offers a customizable AUTO_INCREMENT with a "natural language" feel.

    ### Default value
    CREATE TABLE foo (id INTEGER AUTO_INCREMENT);
    INSERT INTO foo VALUES {"a": "foo"};
    SELECT * FROM foo;
    { "a": "foo", "id": 1}
    
    ### Set a start index and an increment value:
    ### `AUTO_INCREMENT(startIndex, incBy)`
    ### The first value of the sequence starts at 10 and the next is incremented by 5
    CREATE TABLE bar (id INTEGER AUTO_INCREMENT(10, 5));
    INSERT INTO bar VALUES {"a": "bar"};
    INSERT INTO bar VALUES {"a": "baz"};
    SELECT * FROM bar;
    { "a": "bar", "id": 10 }
    {  "a": "baz", "id": 15 }
    

    AUTO_INCREMENT has to be applied only to number value types: INTEGER and DOUBLE.

    genji> CREATE TABLE foo(bar TEXT AUTO_INCREMENT);
    genji> found text, expected integer, double at line 1, char 27
    

    Regarding ALTER TABLE table_name AUTO_INCREMENT=100: if we keep it like that, we should be able to write both of the following syntaxes:

    ### For the default value
    ALTER TABLE foo AUTO_INCREMENT=100;

    ### And this even if the table was created with the default value
    ALTER TABLE foo AUTO_INCREMENT(100, 10);
    

    Thank you for your feedback.

    opened by tzzed 9
  • Add support for context.Context

    Add support for context.Context

    This PR adds context.Context support in Genji, starting with the engine package and then fixing compile errors everywhere. It doesn't introduce any cancellation behavior though; that should be a separate, smaller PR.

    • engine.Iterator usage now requires a it.Err() check after the loop.

      it := st.Iterator(engine.IteratorOptions{})
      defer it.Close()
      
      for it.Seek(ctx, nil); it.Valid(); it.Next(ctx) {
      	…
      }
      if err := it.Err(); err != nil {
      	…
      }
      

      Notice that Seek and Next now accept a context.Context parameter. If an error occurs, Valid returns false.

    • database.Table no longer directly implements document.Iterator since iteration may be I/O bound.

      // Before
      func (*Table) Iterate(func(d document.Document) error) error
      // After
      func (*Table) Iterator(context.Context) document.IteratorFunc
      func (*Table) Iterate(context.Context, func(d document.Document) error) error
      

    Closes #224 and #206

    opened by tie 8
  • SQL driver doesn't support timestamp (time.Time)

    SQL driver doesn't support timestamp (time.Time)

    I have seen an issue about supporting time.Time in sql driver: https://github.com/genjidb/genji/issues/154.

    But it seems that time.Time still doesn't work. The parser doesn't recognize the keyword TIMESTAMP (not surprisingly, no changes were made to the driver in the corresponding MR for that issue).

    package main
    
    import (
    	"database/sql"
    	"fmt"
    	"time"
    
    	_ "github.com/genjidb/genji/sql/driver"
    )
    
    func main() {
    	db, err := sql.Open("genji", ":memory:")
    	if err != nil {
    		panic(err)
    	}
    	defer db.Close()
    
    	_, err = db.Exec(`CREATE TABLE foo (created_at TIMESTAMP NOT NULL)`)
    	if err != nil {
    		panic(err)
    	}
           //...
    

    Prints:

    panic: found TIMESTAMP, expected ) at line 1, char 30
    

    Scanning time.Time stored as TEXT doesn't work either:

    package main
    
    import (
    	"database/sql"
    	"fmt"
    	"time"
    
    	_ "github.com/genjidb/genji/sql/driver"
    )
    
    func main() {
    	db, err := sql.Open("genji", ":memory:")
    	if err != nil {
    		panic(err)
    	}
    	defer db.Close()
    
    	_, err = db.Exec(`CREATE TABLE foo (created_at TEXT NOT NULL)`)
    	if err != nil {
    		panic(err)
    	}
    
    	_, err = db.Exec(`INSERT INTO foo (created_at) VALUES (?)`, time.Now().UTC())
    	if err != nil {
    		panic(err)
    	}
    
    	rows, err := db.Query(`SELECT created_at FROM foo`)
    	if err != nil {
    		panic(err)
    	}
    	defer rows.Close()
    	for rows.Next() {
    		var createdAt time.Time
    		if err := rows.Scan(&createdAt); err != nil {
    			panic(err)
    		}
    		fmt.Printf("foo found: (%v)", createdAt)
    	}
    	if err := rows.Err(); err != nil {
    		panic(err)
    	}
    }
    

    Prints:

    panic: sql: Scan error on column index 0, name "created_at": unsupported Scan, storing driver.Value type string into type *time.Time
    

    Is this a bug, or am I misusing the sql driver?

    bug 
    opened by Darkclainer 8
  • Add Bitcask backend

    Add Bitcask backend

    KV Store: https://github.com/prologic/bitcask

    Currently the master branch is (really) unstable as I'm basically breaking everything with a sledgehammer. The engine package though (the one that contains the interfaces that need to be implemented) wasn't touched on the master branch, so it should be good.

    Also, I'd really like to avoid having too many dependencies in Genji (especially because of the store implementations), so I think it would be better if the Bitcask package had its own go.mod. In the next release, every backend will have its own go.mod, so users will be able to explicitly choose the store they want to use. This also means that you'll have to base your work on the v0.1.0 tag, which is fine I suppose; as I said, there aren't that many changes to the engine on the master branch.

    Regarding tests, there is an engine/enginetest package that contains importable tests that make sure your backend is compatible; you can take a look at the other backends if you need examples.

    engine 
    opened by asdine 7
  • DB migration

    DB migration

    See Doc

    opened by joe-getcouragenow 7
  • err = db.Exec(

    err = db.Exec("CREATE TABLE blob") fails with "panic: found BLOB, expected identifier at line 1, char "

    but this is fine

    err = db.Exec("CREATE TABLE files")
    

    Is BLOB a reserved word, maybe?

    opened by joe-getcouragenow 7
  • CLI: restore from dump.sql to new db fails depending on the data in the dump.sql it seems.

    CLI: restore from dump.sql to new db fails depending on the data in the dump.sql it seems.

    What version of Genji are you using?

    tip
    

    Does this issue reproduce with the latest release?

    yes

    What did you do?

    repro:

    print:
    
    dep:
    	git clone https://github.com/genjidb/genji
    
    build:
    	# puts CLI to gopath
    	cd genji/cmd/genji && go build -o $(GOPATH)/bin/genji .
    run:
    	genji -h
    
    DB_PATH=$(PWD)/data/my.db
    DUMB_PATH=$(PWD)/data/my.sql
    RESTORE_DB_PATH=$(PWD)/data/my_restore.db
    DB_ENGINE=badger
    #DB_ENGINE=bolt
    
    data-delete:
    	rm -rf ./data
    	rm -rf *.db
    	rm -rf data_*
    
    run-json-insert: data-delete
    	# insert
    	genji insert -a -e $(DB_ENGINE) --db $(DB_PATH) '{"a": 1}' '{"a": 2}'
    
    	# dump
    	# to std out
    	genji dump -e $(DB_ENGINE) $(DB_PATH)
    	# to file
    	genji dump -e $(DB_ENGINE) -f $(DUMB_PATH) $(DB_PATH)
    
    	# restore
    	genji restore -e $(DB_ENGINE) $(DUMB_PATH) $(RESTORE_DB_PATH)
    	
    run-curl-insert: data-delete
    
    	# insert
    	curl https://api.github.com/repos/genjidb/genji/issues | genji insert -a -e $(DB_ENGINE) --db $(DB_PATH)
    
    	# dump
    	# to std out
    	genji dump -e $(DB_ENGINE) $(DB_PATH)
    	# to file
    	genji dump -e $(DB_ENGINE) -f $(DUMB_PATH) $(DB_PATH)
    
    	# restore
    	genji restore -e $(DB_ENGINE) $(DUMB_PATH) $(RESTORE_DB_PATH)
    
    run-std-insert: data-delete
    
    	# insert
    	echo '[{"a": 1},{"a": 2}]' | genji insert -a -e $(DB_ENGINE) --db $(DB_PATH)
    
    	# to std out
    	genji dump -e $(DB_ENGINE) $(DB_PATH)
    	# to file
    	genji dump -e $(DB_ENGINE) -f $(DUMB_PATH) $(DB_PATH) 
    
    	genji restore -e $(DB_ENGINE) $(DUMB_PATH) $(RESTORE_DB_PATH) 
    
    
    

    What did you expect to see?

    restore from dump.sql to new db should work.

    What did you see instead?

    make run-std-insert works fine, but make run-curl-insert fails with:

    error: found \r, expected } at line 1, char 2441
    

    What Go version and environment are you using?

    go version go1.16.6 darwin/amd64

    go env Output
    GO111MODULE="on"
    GOARCH="amd64"
    GOBIN=""
    GOCACHE="/Users/apple/Library/Caches/go-build"
    GOENV="/Users/apple/Library/Application Support/go/env"
    GOEXE=""
    GOFLAGS=""
    GOHOSTARCH="amd64"
    GOHOSTOS="darwin"
    GOINSECURE=""
    GOMODCACHE="/Users/apple/workspace/go/pkg/mod"
    GONOPROXY=""
    GONOSUMDB=""
    GOOS="darwin"
    GOPATH="/Users/apple/workspace/go"
    GOPRIVATE=""
    GOPROXY="https://proxy.golang.org,direct"
    GOROOT="/usr/local/opt/go/libexec"
    GOSUMDB="sum.golang.org"
    GOTMPDIR=""
    GOTOOLDIR="/usr/local/opt/go/libexec/pkg/tool/darwin_amd64"
    GOVCS=""
    GOVERSION="go1.16.6"
    GCCGO="gccgo"
    AR="ar"
    CC="clang"
    CXX="clang++"
    CGO_ENABLED="1"
    GOMOD="/Users/apple/workspace/go/src/github.com/gedw99/notes/db/genji/genjidb__genji/genji/go.mod"
    CGO_CFLAGS="-g -O2"
    CGO_CPPFLAGS=""
    CGO_CXXFLAGS="-g -O2"
    CGO_FFLAGS="-g -O2"
    CGO_LDFLAGS="-g -O2"
    PKG_CONFIG="pkg-config"
    GOGCCFLAGS="-fPIC -arch x86_64 -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/wp/ff6sz9qs6g71jnm12nj2kbyw0000gp/T/go-build1815681598=/tmp/go-build -gno-record-gcc-switches -fno-common"
    

    bug 
    opened by gedw99 0
  • Update the integer overflow detection code (broken on arm64)

    Update the integer overflow detection code (broken on arm64)

    The code that detects it at the moment is implementation-dependent (https://github.com/genjidb/genji/blob/main/document/cast.go#L88)

    It works on x64 but not on arm64. (https://github.com/golang/go/issues/47387)

    bug 
    opened by jhchabran 0
  • Index selection improvement

    Index selection improvement

    Dump:

    BEGIN TRANSACTION;
    CREATE TABLE foo;
    CREATE INDEX __genji_autoindex_foo_1 ON foo (a, b);
    INSERT INTO foo VALUES {"a": 1, "b": 2};
    COMMIT;
    

    Observed behavior:

    genji> EXPLAIN SELECT * FROM foo WHERE a > 1 AND b = 3;
    {
      "plan": "seqScan(foo) | filter(a > 1) | filter(b = 3)"
    }
    

    Expected behavior:

    genji> EXPLAIN SELECT * FROM foo WHERE a > 1 AND b = 3;
    {
      "plan": "indexScan(\"__genji_autoindex_foo_1\", [1, -1, true]) | filter(b = 3)"
    }
    

    The planner should select the composite index even if a is using a greater than operator.

    enhancement 
    opened by asdine 0
  • [WIP] go-orm proof of concept example

    [WIP] go-orm proof of concept example

    Currently fails due to issue #383

    This is a proof-of-concept of constructing a Go-orm instance backed by GenjiDB.

    opened by paralin 3
  • sql driver: conversion of lazy loaded document when Scanning

    sql driver: conversion of lazy loaded document when Scanning

    Now that #382 was merged, my go-orm test is proceeding past inserting objects into the db!

    The next hurdle is getting Scan() to work:

    panic: sql: Scan error on column index 0, name "*": unsupported Scan, storing driver.Value type *database.lazilyDecodedDocument into type *sql.RawBytes
    

    In database/sql/convert.go:219 (convertAssignRows) it tries to convert the *lazilyDecodedDocument into *sql.RawBytes.

    bug 
    opened by paralin 4
  • genji restore very slow on large dumps

    genji restore very slow on large dumps

    Because genji dump wraps everything between BEGIN and ROLLBACK, it takes a very long time to restore large databases (several hundreds of MB). Investigate how other databases deal with this issue, for example by generating a big SQLite database.

    bug cli 
    opened by asdine 1
  • bleve to get full text search and facets

    bleve to get full text search and facets

    Proposal

    Add bleve support ( https://github.com/blevesearch )

    Motivation

    Provides FTS, like SQLite and other DBs have, to allow searching over documents. https://sqlite.org/fts3.html

    Provides facet based data analysis. A good demo of that concept is here in the video. https://datasette.io/ In the demo of datasette, every column can be faceted: https://global-power-plants.datasettes.com/global-power-plants/global-power-plants

    • this is a very powerful construct for developers and users

    Design

    For example, with SQLite it is a special table that provides FTS. Note that facets are a different concept and would need a different DSL.

    For example, if each of the 517430 documents in the "Enron E-Mail Dataset" is inserted into both an FTS table and an ordinary SQLite table created using the following SQL script:

    CREATE VIRTUAL TABLE enrondata1 USING fts3(content TEXT);     /* FTS3 table */
    CREATE TABLE enrondata2(content TEXT);                        /* Ordinary table */
    

    Then either of the two queries below may be executed to find the number of documents in the database that contain the word "linux" (351). Using one desktop PC hardware configuration, the query on the FTS3 table returns in approximately 0.03 seconds, versus 22.5 seconds for querying the ordinary table.

    SELECT count(*) FROM enrondata1 WHERE content MATCH 'linux';  /* 0.03 seconds */
    SELECT count(*) FROM enrondata2 WHERE content LIKE '%linux%'; /* 22.5 seconds */
    

    Prior work:

    https://github.com/mosuka/blast https://github.com/mosuka/blast#search-documents

    $ ./bin/blast search '
    {
      "search_request": {
        "query": {
          "query": "+_all:search"
        },
        "size": 10,
        "from": 0,
        "fields": [
          "*"
        ],
        "sort": [
          "-_score"
        ]
      }
    }
    ' | jq .
    
    enhancement 
    opened by gedw99 3
  • Use genji.dev as package import path prefix

    Use genji.dev as package import path prefix

    Proposal

    There is a growing number of general-purpose packages in Genji (e.g. sql/query/glob). I propose using genji.dev/ import path prefix instead of github.com/genjidb/genji/ so that we can (re)map logical package names to their physical location using go-import HTML meta tags.

    Currently, importing Genji APIs is done as follows

    import (
    	"github.com/genjidb/genji"
    	"github.com/genjidb/genji/document"
    	"github.com/genjidb/genji/sql/driver"
    	"github.com/genjidb/genji/sql/query/glob"
    	// …
    )
    

    And after the proposed changes

    import (
    	"genji.dev/document"
    	"genji.dev/genji"
    	"genji.dev/glob"
    	"genji.dev/sql/driver"
    	// …
    )
    
    

    Motivation

    We’d like to make some general-purpose packages reusable outside of Genji. That is, they should be in a separate Go module that does not depend on Genji internal or external API. Instead of extracting them to their own repos, I think it’s better to keep the development and issue tracking centralized.

    See also: https://github.com/genjidb/genji/pull/306#issuecomment-719529182

    Changes

    Assuming we want to keep a monorepo with nested modules,

    • Start on a separate branch.

    • Move current root github.com/genjidb/genji package (not go.mod though) to genji directory. That would become genji.dev/genji import path.

    • Move sql/query/glob to glob directory. Add go.mod for this package.

    • Update all import paths to use genji.dev.

    • Add a single go-import meta tag to HTML pages on genji.dev domain. See https://golang.org/cmd/go/#hdr-Remote_import_paths

      <meta name="go-import" content="genji.dev git https://github.com/genjidb/genji">
      

      It looks like we don’t need go-source meta tag, see golang/pkgsite:internal/source/source.go.

      Note that multiple meta tags may not match the same import paths (special mod vsc is an exception), see golang/go:src/cmd/go/internal/vcs/vcs.go.

    • At this point we should be able to test these changes with local replace directive in go.mod pointing at the latest commit on test branch. Note: disable GOPROXY. If everything is working fine, push changes to the main branch.

    enhancement 
    opened by tie 0
  • SQL driver doesn't support msgpack.EncodedDocument

    SQL driver doesn't support msgpack.EncodedDocument

    Run simple program:

    package main
    
    import (
    	"database/sql"
    	"fmt"
    
    	_ "github.com/genjidb/genji/sql/driver"
    )
    
    type Doc struct {
    	Field1 string `genji:"field1"`
    	Field2 int    `genji:"field2"`
    }
    
    func main() {
    	db, err := sql.Open("genji", ":memory:")
    	if err != nil {
    		panic(err)
    	}
    	defer db.Close()
    
    	_, err = db.Exec(`CREATE TABLE foo (doc DOCUMENT NOT NULL)`)
    	if err != nil {
    		panic(err)
    	}
    
    	_, err = db.Exec(`INSERT INTO foo (doc) VALUES (?)`, Doc{
    		Field1: "123",
    		Field2: 123,
    	})
    	if err != nil {
    		panic(err)
    	}
    
    	rows, err := db.Query(`SELECT doc FROM foo`)
    	if err != nil {
    		panic(err)
    	}
    	defer rows.Close()
    	for rows.Next() {
    		var doc Doc
    		if err := rows.Scan(&doc); err != nil {
    			panic(err)
    		}
    		fmt.Printf("foo found: (%v)", doc)
    	}
    	if err := rows.Err(); err != nil {
    		panic(err)
    	}
    }
    

    Get this:

    panic: sql: Scan error on column index 0, name "doc": unsupported Scan, storing driver.Value type msgpack.EncodedDocument into type *main.Doc
    

    Is this a bug, or am I misusing the sql driver?

    opened by KudinovKV 2
  • Insert operation is very slow

    Insert operation is very slow

    I have tested insert and query performance using Genji. When using BoltDB as the database engine, the insert speed is about 83/s. When using Badger, it's a little better: the insert speed is about 1700/s.

    Select query (160k records):
    Badger: 220ms/op (table no index), 0.5ms/op (table with index)
    Bolt: 110ms/op (table no index), 11ms/op (table with index)

    I also tested the raw database performance of BoltDB and Badger. The result is:
    KeySize: 22 bytes, ValueSize: 512 bytes, Num: 2M
    Badger write: 280K per second
    BoltDB write: 20K per second

    RandomRead (2M records):
    Badger: 1166 ns/op, 1061 B/op, 19 allocs/op
    BoltDB: 644 ns/op, 266 B/op, 8 allocs/op

    Iterate on 2M keys and values, ValueSize 512 bytes:
    Badger: 1130ms
    BoltDB: 160ms

    A comparison with MySQL: InnoDB write 3200/s, query 60ms (no index); MyISAM write 9400/s, query 30ms (no index).

    So I think Genji should improve performance, so that we can use it in more places.

    opened by SeniorPlayer 2
Releases(v0.13.0)
  • v0.13.0(Jul 21, 2021)

    SQL

    • Add concat operator
    • Add NOT operator
    • Add BETWEEN operator
    • Add INSERT ... RETURNING
    • Add ON CONFLICT
    • Add UNION ALL #408 (@jhchabran)
    • Add CREATE SEQUENCE
    • Add DROP SEQUENCE
    • Add NEXT VALUE FOR

    Core

    • Add document.NewFromCSV
    • Add prepared statements
    • Add new __genji_catalog table and drop __genji_table and __genji_indexes
    • Remove engine.NextSequence

    CLI

    • Add .import command

    This release also contains various bug fixes.

  • v0.12.0(May 1, 2021)

    SQL

    • Added support for composite indexes #376 (⚠️Breaking change) @jhchabran
    • Added support for UNIQUE table constraint #387
    • Added support for PRIMARY KEY table constraint #333
    • Added support for INSERT ... SELECT #385
    • Indexes can be created with a generated name now #386

    Core

    • Fix indexed array comparisons with numbers #378
    • Fix a bug preventing the use of time.Time with driver.Scanner #258

    CLI

    • Added a new way of writing integration tests: Examplar #380 @jhchabran
    • Added .schema command #339 @tzzed
  • v0.11.0(Mar 24, 2021)

    SQL

    • Add OFFSET, LIMIT and ORDER BY to DELETE statement #318 #319 #320 @jhchabran
    • fix: query returning no result when using ORDER BY with sql/driver implementation #352 @tzzed

    Core

    • New Stream API #343
    • Update Badger to v3 #350
    • Integers are now compacted when encoded #351 (⚠️Breaking change) @jhchabran
    • Infer constraints based on user-defined ones #358
    • Queries using primary keys are much faster #310
    • fix: Encoding of consecutive integers in arrays #362 @jhchabran
    • fix: Incorrect index state with UPDATE queries on indexed fields #355 #368
    • fix: Build index upon creation #371
    • fix: BLOB to byte array decoding #305

    CLI

    • New genji dump command #315 @tzzed
    • New genji restore command #321 @tzzed
    • fix: .dump not outputting DEFAULT clause #329 @jhchabran
    • fix: Deadlock when trying to open a locked database #365
    • fix: Panic on genji insert when specified engine doesn't exist #331
  • v0.10.0(Jan 26, 2021)

    SQL

    • fix: COUNT(*) with empty set #284
    • fix: Selection of field in GROUP BY expression #317
    • fix: Index selection of IN operator #281

    Core

    • Update msgpack #340 @kenshaw
    • fix: Panic when optimizer returns empty tree #283 @goku321

    CLI

    • Add .save command #311 @jhchabran
  • v0.9.0(Nov 12, 2020)

    SQL

    • Drop Duration type #212 @asdine
    • Expose the __genji_indexes table #214 @tzzed
    • Add ALTER TABLE ... ADD FIELD ... #96 @tdakkota
    • Prevent creating a table if constraints are incoherent #222 @tdakkota
    • Add LIKE operator #241 @tdakkota
    • Add SELECT DISTINCT #264 @tdakkota
    • fix: IN operator behavior with parentheses #207 @tdakkota
    • fix: Panic when parsing large integers #256 @goku321
    • fix: Ignore currently unsupported regex operators #252 @tie
    • fix: Panic on SELECT with LIMIT min(0)/max(0) #257 @goku321
    • fix: Panic when running GROUP BY on array values #208 @asdine
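    A sketch of the new SQL features above (table and field names are illustrative):

    ```sql
    -- Add a field constraint to an existing table
    ALTER TABLE user ADD FIELD email TEXT;

    -- Pattern matching with LIKE
    SELECT * FROM user WHERE name LIKE 'A%';

    -- Deduplicated results
    SELECT DISTINCT city FROM user;
    ```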

    Core

    • Implement driver.DriverContext and driver.Connector interfaces in sql/driver #213 @tie
    • Add support for context.Context #206 #224 @tie @asdine
    • Slightly more efficient conversion from JSON to document #270 @asdine
    • Add support for embedded structs to document.NewFromStruct #225 @tzzed
    • Numbers are stored as doubles by default, unless a field constraint is specified during table creation #312 @asdine (⚠️ Breaking change)
    • fix: CI tests with nested modules #232 @tie
    • fix: badgerengine benchmarks #209 @tie
    • fix: Be more forgiving when scanning null values in StructScan #211 @tie
    • fix: Badger tests on Windows #288 @tie
    • ci: Add support for fuzzing #253 @tie

    CLI

    • Respect NO_COLOR and NO_HISTORY environment variables #215 @tie
    • Add .dump command #181 @tzzed
    • Add version command #184 @cvhariharan
    • Add support for exit and help commands without leading dot #180 @Amirhossein2000
    • fix: Autocompletion panic when there is no suggestion #178 @tdakkota
    • fix: Panic on genji version when compiled in GOPATH mode #261 @tdemin
  • v0.8.1(Oct 30, 2020)

  • v0.8.0(Sep 26, 2020)

    SQL

    • Add BEGIN, ROLLBACK, and COMMIT statements #78
    • Add support for paths in UPDATE statement #84
    • Add GROUP BY clause #6
    • Add COUNT aggregator #5
    • Add MIN aggregator #165
    • Add MAX aggregator #166
    • Add SUM aggregator #4
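    The transaction statements and aggregators above can be sketched as follows (table and field names are illustrative):

    ```sql
    -- Explicit transaction
    BEGIN;
    UPDATE product SET price = price * 1.1 WHERE category = 'book';
    COMMIT;

    -- Aggregations per group
    SELECT category, COUNT(*), MIN(price), MAX(price), SUM(price)
    FROM product
    GROUP BY category;
    ```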

    Core

    • Add codec system #177
    • Remove introduction text when reading from STDIN #179
    • Improved how values are indexed #194
    • Indexes created on a field that has a type constraint are now typed #195
    • Add support for array and document indexes #199
    • Removed encoding/json dependency and use jsonparser instead #203

    CLI

    • Add table autocompletion #143
    • Add index autocompletion #157
    • Add command suggestions #163

    Bug fixes

    • Fix order of SELECT AST nodes #188
    • MapScan now decodes nested documents #191
    • Fix saving of history when .exit was called #196
  • v0.7.1(Sep 1, 2020)

  • v0.7.0(Aug 25, 2020)

    SQL

    • Add REINDEX statement #72
    • Add ALTER TABLE ... RENAME TO ... statement #95
    • Add new __genji_tables table #152
    • Allow referencing current document in expression #147

    Core

    • Removed fixed-size integers #130
    • Renamed float64 type to double #130
    • Integers are now converted to double prior to comparison and indexing #146
    • Encode documents using MessagePack #117
    • Move badgerengine into its own module #140
    • Replaced memoryengine with custom implementation #139
    • Store table information in memory #142
    • Add support for time.Time in document.StructScan

    CLI

    • Add .help command #160
    • Add .indexes command #100
    • Add table suggestions after FROM keyword #159
    • Ignore input with whitespace only #106

    Bug fixes

    • Prevent primary key overlap with concurrent inserts #74
    • Fix behavior of ValueBuffer#Copy #111
    • Fix UPDATE ... SET clause setting the wrong array indexes #91
    • Fix display of field names with backquotes #64
    • Arithmetic operators return null for incompatible types #105
    • Fix parentheses behavior #131
    • Fix CAST behavior #138
    • Remove transaction promotion #150
  • v0.6.0(Jun 28, 2020)

    SQL

    • Added support for IS and IS NOT #75
    • Added support for IN and NOT IN #76 #81
    • Added support for UPDATE ... UNSET #68
    • Added support for field constraints for missing types #85
    • Added support for EXPLAIN #102
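    A sketch of the operators and statements above (table and field names are illustrative):

    ```sql
    SELECT * FROM user WHERE middle_name IS NOT NULL;
    SELECT * FROM user WHERE city IN ('Lyon', 'Paris');

    -- Remove a field from matching documents
    UPDATE user UNSET temporary_token WHERE id = 1;

    -- Inspect the query plan
    EXPLAIN SELECT * FROM user WHERE id = 1;
    ```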

    Core

    • Added support for Cursors to Engines #40
    • Added query planner #88

    Bug fixes

    • Fix null to text conversion #66
    • Make UPDATE ... SET set a field on every matching document #82
    • Fix panic when ORDER BY is used with an indexed field #71
    • Fix DROP TABLE behavior that used to remove all database indexes #99
    • Normalize behavior of boolean field constraint with incompatible types #89
  • v0.5.0(Mar 5, 2020)

    • Support NOT NULL field constraints
    • Support Bitwise operators
    • Support Duration values
    • Support SELECT without table reference
    • Support arithmetic operators
    • Support AS
    • Support CAST
    • Use Badger as the main in-memory engine
  • v0.4.0(Jan 5, 2020)

    Core

    • Support for Documents
    • Functions for translating structs and maps into documents
    • Renamed references to Record to Document
    • Moved database logic to database package

    SQL

    • Insert documents using document notation
    • Select sub fields
    • Support for ORDER BY
    • Support for field constraints

    Misc

    • Removed code generation temporarily
    • Improved shell auto completion
  • v0.3.0(Nov 23, 2019)

    Changelog

    Core

    • New (db/tx).QueryRecord method to query a single record
    • New db.SQLDB method that wraps the db into a database/sql db
    • New tx.ListTables method
    • New tx.ReIndex method
    • New tx.ReIndexAll method
    • New index implementation
    • Moved package recordutil to record
    • New record.Scan method

    SQL

    • Support key() function to return the primary key
    • Support specifying primary key of any type during a CREATE TABLE statement
    • Generate autoincrementing default key instead of uuid
    • Support multiple wildcards, fields and functions in SELECT statement
    • Support != operator
    • Support == operator as an alias for =
    • Better support for NULL
    • Parsed integers are converted to the smallest int size that fits
    • Double quoted strings are now treated exclusively as identifiers
    • WHERE expressions containing only the primary key now benefit from a primary key optimization
    • Float32 numbers no longer supported

    Engines

    • Open a BoltDB database without referencing the engine
    • Badger now has its own module
    • Upgrade Badger to v2
    • Renamed engine/memory to engine/memoryengine
    • Renamed engine/bolt to engine/boltengine
    • Renamed engine/badger to engine/badgerengine

    Command-line tool

    • Code generation moved under genji generate command
    • Struct fields are lowercased by default
    • Support for struct tag genji:"pk" removed
    • Struct tags are now used to rename a field
    • New SQL shell
    • Store history under $HOME/.genji_history

    database/sql

    • Registers as a driver at startup
    • Support record.Scanner as a scan target
  • v0.2.2(Oct 31, 2019)

    • Fix index not found error
    • Fix support for double quoted strings
    • Fix comparison of floats
    • Fix support for sql.NamedArg
    • Skip nil values during encoding of maps
    • Fix Update behaviour
  • v0.2.1(Oct 21, 2019)

    • BoltDB engine: Fix sparse DELETE by batching the deletion of records
    • Badger engine: Fix panic when using multiple iterators during UPDATE statement
    • DB.Query / DB.Exec: Run each statement in its own transaction
    • Prefetch indexes when getting the table
    • Expect iterator instead of stream in recordutil helpers
    • Fix comparison between list with one element and single value
  • v0.2.0(Oct 19, 2019)

  • v0.1.1(Sep 20, 2019)

  • v0.1.0(Aug 16, 2019)
