RethinkDB-go - RethinkDB Driver for Go



Go driver for RethinkDB


Current version: v6.2.1 (RethinkDB v2.4)

Please note that this version of the driver only supports versions of RethinkDB using the v0.4 protocol (any version of RethinkDB older than 2.0 will not work).

If you need any help you can find me on the RethinkDB slack in the #gorethink channel.


go get

Replace v6 with v5 or v4 to use previous versions.


package rethinkdb_test

import (
	"fmt"
	"log"

	r "gopkg.in/rethinkdb/rethinkdb-go.v6"
)

func Example() {
	session, err := r.Connect(r.ConnectOpts{
		Address: url, // endpoint without http
	})
	if err != nil {
		log.Fatalln(err)
	}

	res, err := r.Expr("Hello World").Run(session)
	if err != nil {
		log.Fatalln(err)
	}

	var response string
	err = res.One(&response)
	if err != nil {
		log.Fatalln(err)
	}

	fmt.Println(response)

	// Output:
	// Hello World
}


Basic Connection

Setting up a basic connection with RethinkDB is simple:

func ExampleConnect() {
	var err error

	session, err = r.Connect(r.ConnectOpts{
		Address: url,
	})
	if err != nil {
		log.Fatalln(err.Error())
	}
}

See the documentation for a list of supported arguments to Connect().

Connection Pool

The driver uses a connection pool at all times; by default it creates and frees connections automatically. It's safe for concurrent use by multiple goroutines.

To configure the connection pool, InitialCap, MaxOpen and Timeout can be specified during connection. If you wish to change the value of InitialCap or MaxOpen at runtime, the functions SetInitialPoolCap and SetMaxOpenConns can be used.

func ExampleConnect_connectionPool() {
	var err error

	session, err = r.Connect(r.ConnectOpts{
		Address:    url,
		InitialCap: 10,
		MaxOpen:    10,
	})
	if err != nil {
		log.Fatalln(err.Error())
	}
}

Connect to a cluster

To connect to a RethinkDB cluster which has multiple nodes you can use the following syntax. When connecting to a cluster with multiple nodes queries will be distributed between these nodes.

func ExampleConnect_cluster() {
	var err error

	session, err = r.Connect(r.ConnectOpts{
		Addresses: []string{url},
		//  Addresses: []string{url1, url2, url3, ...},
	})
	if err != nil {
		log.Fatalln(err.Error())
	}
}

When DiscoverHosts is true, any nodes that are added to the cluster after the initial connection will be added to the pool of available nodes used by RethinkDB-go. Unfortunately, the canonical address of each server in the cluster MUST be set, as otherwise clients will try to connect to the database nodes locally. For more information about how to set a RethinkDB server's canonical address, see this page.

User Authentication

To log in with a username and password you should first create a user; this can be done by writing to the users system table. Then grant that user access to any tables or databases they need access to. These queries can also be executed in the RethinkDB admin console.

err := r.DB("rethinkdb").Table("users").Insert(map[string]string{
    "id": "john",
    "password": "p455w0rd",
}).Exec(session)

err = r.DB("blog").Table("posts").Grant("john", map[string]bool{
    "read": true,
    "write": true,
}).Exec(session)

Finally the username and password should be passed to Connect when creating your session, for example:

session, err := r.Connect(r.ConnectOpts{
    Address: "localhost:28015",
    Database: "blog",
    Username: "john",
    Password: "p455w0rd",
})

Please note that DiscoverHosts will not work with user authentication at this time due to the fact that RethinkDB restricts access to the required system tables.

Query Functions

This library is based on the official drivers so the code on the API page should require very few changes to work.

To view full documentation for the query functions check the API reference or GoDoc

Slice Expr Example

r.Expr([]interface{}{1, 2, 3, 4, 5}).Run(session)

Map Expr Example

r.Expr(map[string]interface{}{"a": 1, "b": 2, "c": 3}).Run(session)

Get Example

r.DB("database").Table("table").Get("some_id").Run(session)

Map Example (Func)

r.Expr([]interface{}{1, 2, 3, 4, 5}).Map(func(row r.Term) interface{} {
    return row.Add(1)
}).Run(session)

Map Example (Implicit)

r.Expr([]interface{}{1, 2, 3, 4, 5}).Map(r.Row.Add(1)).Run(session)

Between (Optional Args) Example

r.DB("database").Table("table").Between(1, 10, r.BetweenOpts{
    Index: "num",
    RightBound: "closed",
}).Run(session)

For any queries which use callbacks the function signature is important, as your function needs to be a valid RethinkDB-go callback; you can see an example of this in the map example above. The simplified explanation is that all arguments must be of type r.Term; this is because your callback is not actually executed in your Go application but is instead encoded as JSON and executed by RethinkDB. The return value can be anything you want (as long as it is a valid return value for the current query), so it usually makes sense to return interface{}. Here is an example of a callback for the conflict option of an insert operation:

r.Table("test").Insert(doc, r.InsertOpts{
    Conflict: func(id, oldDoc, newDoc r.Term) interface{} {
        return newDoc.Merge(map[string]interface{}{
            "count": oldDoc.Add(newDoc.Field("count")),
        })
    },
}).Run(session)

Optional Arguments

As shown above in the Between example, optional arguments are passed to the function as a struct. Each function that has optional arguments has a related struct. These structs are named in the format FunctionNameOpts; for example, BetweenOpts is the related struct for Between.

Cancelling queries

For query cancellation, use the Context argument in RunOpts. If Context is nil and ReadTimeout or WriteTimeout in ConnectOpts is not 0, a Context will be formed from the sum of these timeouts.

For unlimited timeouts for Changes() pass context.Background().


Results

Different result types are returned depending on which function is used to execute the query.

  • Run returns a cursor which can be used to view all rows returned.
  • RunWrite returns a WriteResponse and should be used for queries such as Insert, Update, etc...
  • Exec sends a query to the server and closes the connection immediately after reading the response from the database. If you do not wish to wait for the response then you can set the NoReply flag.


res, err := r.DB("database").Table("tablename").Get(key).Run(session)
if err != nil {
    // error
}
defer res.Close() // Always ensure you close the cursor to ensure connections are not leaked

Cursors have a number of methods available for accessing the query results

  • Next retrieves the next document from the result set, blocking if necessary.
  • All retrieves all documents from the result set into the provided slice.
  • One retrieves the first document from the result set.


var row interface{}
for res.Next(&row) {
    // Do something with row
}
if res.Err() != nil {
    // error
}

var rows []interface{}
err := res.All(&rows)
if err != nil {
    // error
}

var row interface{}
err := res.One(&row)
if err == r.ErrEmptyResult {
    // row not found
}
if err != nil {
    // error
}


Encoding structs

When passing structs to Expr (and functions that use Expr, such as Insert and Update), the structs are encoded into a map before being sent to the server. Each exported field is added to the map unless

  • the field's tag is "-", or
  • the field is empty and its tag specifies the "omitempty" option.

Each field's default name in the map is the field name, but this can be changed in the struct field's tag value. The "rethinkdb" key in the struct field's tag value is the key name, followed by an optional comma and options. Examples:

// Field is ignored by this package.
Field int `rethinkdb:"-"`
// Field appears as key "myName".
Field int `rethinkdb:"myName"`
// Field appears as key "myName" and
// the field is omitted from the object if its value is empty,
// as defined above.
Field int `rethinkdb:"myName,omitempty"`
// Field appears as key "Field" (the default), but
// the field is skipped if empty.
// Note the leading comma.
Field int `rethinkdb:",omitempty"`
// When the tag name includes an index expression
// a compound field is created
Field1 int `rethinkdb:"myName[0]"`
Field2 int `rethinkdb:"myName[1]"`

NOTE: It is strongly recommended that struct tags are used to explicitly define the mapping between your Go type and how the data is stored by RethinkDB. This is especially important when using an Id field as by default RethinkDB will create a field named id as the primary key (note that the RethinkDB field is lowercase but the Go version starts with a capital letter).

When encoding maps with non-string keys, the key values are automatically converted to strings where possible; however, it is recommended that you use string keys where possible (for example, map[string]T).

If you wish to use json tags with RethinkDB-go then you can call SetTags("rethinkdb", "json") when starting your program; this will cause RethinkDB-go to check for json tags after checking for rethinkdb tags. By default this feature is disabled. This function also lets you support any other tags; the driver will check for tags in the same order as the parameters.

NOTE: Old-style gorethink struct tags are supported but deprecated.


Pseudo-types

RethinkDB contains some special types which can be used to store special value types; currently supported are binary values, times and geometry data types. RethinkDB-go supports these data types natively, however there are some gotchas:

  • Time types: To store times in RethinkDB with RethinkDB-go you must pass a time.Time value to your query; due to the way Go works, type aliasing or embedding is not supported here.
  • Binary types: To store binary data, pass a byte slice ([]byte) to your query.
  • Geometry types: As Go does not include any built-in data structures for storing geometry data, RethinkDB-go includes its own in the types package. Any of these types (Geometry, Point, Line and Lines) can be passed to a query to create a RethinkDB geometry type.

Compound Keys

RethinkDB unfortunately does not support compound primary keys using multiple fields; however, it does support compound keys using an array of values. For example, if you wanted to create a compound key for a book where the key contained the author ID and book name, the ID might look like this: ["author_id", "book name"]. Luckily RethinkDB-go allows you to easily manage these keys while keeping the fields separate in your structs. For example:

type Book struct {
  AuthorID string `rethinkdb:"id[0]"`
  Name     string `rethinkdb:"id[1]"`
}

// Creates the following document in RethinkDB
{"id": [AUTHORID, NAME]}


References (relations)

Sometimes you may want to use a Go struct that references a document in another table. Instead of creating a new struct which is only used when writing to RethinkDB, you can annotate your struct with the reference tag option. This tells RethinkDB-go that when encoding your data it should "pluck" the ID field from the nested document and use that instead.

This is all quite complicated, so hopefully this example helps. First let's assume you have two types, Author and Book, and you want to insert a new book into your database; however, you don't want to include the entire author struct in the books table. As you can see, the Author field in the Book struct has some extra tags. Firstly, we have added the reference tag option, which tells RethinkDB-go to pluck a field from the Author struct instead of inserting the whole author document. We also have the rethinkdb_ref tag, which tells RethinkDB-go to look for the id field in the Author document; without this tag RethinkDB-go would instead look for the author_id field.

type Author struct {
    ID      string  `rethinkdb:"id,omitempty"`
    Name    string  `rethinkdb:"name"`
}

type Book struct {
    ID      string  `rethinkdb:"id,omitempty"`
    Title   string  `rethinkdb:"title"`
    Author  Author  `rethinkdb:"author_id,reference" rethinkdb_ref:"id"`
}

The resulting data in RethinkDB should look something like this:

{
    "author_id": "author_1",
    "id":  "book_1",
    "title":  "The Hobbit"
}

If you wanted to read back the book with the author included then you could run the following RethinkDB-go query:

r.Table("books").Get("1").Merge(func(p r.Term) interface{} {
    return map[string]interface{}{
        "author_id": r.Table("authors").Get(p.Field("author_id")),
    }
}).Run(session)

You are also able to reference an array of documents, for example if each book stored multiple authors you could do the following:

type Book struct {
    ID       string   `rethinkdb:"id,omitempty"`
    Title    string   `rethinkdb:"title"`
    Authors  []Author `rethinkdb:"author_ids,reference" rethinkdb_ref:"id"`
}

// Creates the following document in RethinkDB
{
    "author_ids": ["author_1", "author_2"],
    "id":  "book_1",
    "title":  "The Hobbit"
}

The query for reading the data back is slightly more complicated but is very similar:

r.Table("books").Get("book_1").Merge(func(p r.Term) interface{} {
    return map[string]interface{}{
        "author_ids": r.Table("authors").GetAll(r.Args(p.Field("author_ids"))).CoerceTo("array"),
    }
}).Run(session)

Custom Marshalers/Unmarshalers

Sometimes the default behaviour for converting Go types to and from ReQL is not desired; for these situations the driver allows you to implement both the Marshaler and Unmarshaler interfaces. These interfaces might look familiar if you are used to using the encoding/json package; however, instead of dealing with []byte, these interfaces deal with interface{} values (which are later encoded by the encoding/json package when communicating with the database).

A good example of how to use these interfaces is in the types package; in this package the Point type is encoded as the GEOMETRY pseudo-type instead of a normal JSON object.

Alternatively, you can register external encode/decode functions with the SetTypeEncoding function.


Logging

By default the driver logs are disabled; when enabled, the driver will log errors when it fails to connect to the database. If you would like more verbose error logging you can call r.SetVerbose(true).

Alternatively, if you wish to modify the logging behaviour you can modify the logger provided by the driver. For example, the following code completely disables the logger:

// Enabled
r.Log.Out = os.Stderr
// Disabled
r.Log.Out = ioutil.Discard


Tracing

The driver supports opentracing-go. You can enable this feature by setting UseOpentracing to true in the ConnectOpts. The driver will then expect an opentracing.Span in the RunOpts.Context and will start new child spans for queries. You also need to configure a tracer in your program yourself.

The driver starts a span for the whole query, from the moment the first byte is sent until the cursor is closed, and a second-level span for each fetch of data.

This lets you trace how much time your program spends on RethinkDB queries.


Mocking

The driver includes the ability to mock queries, meaning that you can test your code without needing to talk to a real RethinkDB cluster; this is perfect for ensuring that your application has high unit test coverage.

To write tests with mocking you should create an instance of Mock and then set up expectations using On and Return. Expectations allow you to define what results should be returned when a known query is executed; they are configured by passing the query term you want to mock to On, and then the response and error to Return. If a non-nil error is passed to Return, then any time that query is executed the error will be returned; if no error is passed, a cursor will be built using the value passed to Return. Once all your expectations have been created you should then execute your queries using the Mock instead of a Session.

Here is an example that shows how to mock a query that returns multiple rows; the resulting cursor can be used as normal.

func TestSomething(t *testing.T) {
	mock := r.NewMock()
	mock.On(r.Table("people")).Return([]interface{}{
		map[string]interface{}{"id": 1, "name": "John Smith"},
		map[string]interface{}{"id": 2, "name": "Jane Smith"},
	}, nil)

	cursor, err := r.Table("people").Run(mock)
	if err != nil {
		t.Errorf("err is: %v", err)
	}

	var rows []interface{}
	err = cursor.All(&rows)
	if err != nil {
		t.Errorf("err is: %v", err)
	}

	// Test result of rows
}

If you want the cursor to block on some of the response values, you can pass in a value of type chan interface{} and the cursor will block until a value is available to read on the channel. Or you can pass in a function with signature func() interface{}: the cursor will call the function (which may block). Here is the example above adapted to use a channel.

func TestSomething(t *testing.T) {
	mock := r.NewMock()
	ch := make(chan []interface{})
	mock.On(r.Table("people")).Return(ch, nil)
	go func() {
		ch <- []interface{}{
			map[string]interface{}{"id": 1, "name": "John Smith"},
			map[string]interface{}{"id": 2, "name": "Jane Smith"},
		}
		ch <- []interface{}{map[string]interface{}{"id": 3, "name": "Jack Smith"}}
		close(ch)
	}()

	cursor, err := r.Table("people").Run(mock)
	if err != nil {
		t.Errorf("err is: %v", err)
	}

	var rows []interface{}
	err = cursor.All(&rows)
	if err != nil {
		t.Errorf("err is: %v", err)
	}

	// Test result of rows
}


The mocking implementation is based on the amazing testify library; thanks to @stretchr for their awesome work!


Benchmarks

Everyone wants their project's benchmarks to be speedy. And while we know that RethinkDB and the RethinkDB-go driver are quite fast, our primary goal is for our benchmarks to be correct. They are designed to give you, the user, an accurate picture of writes per second (w/s). If you come up with an accurate test that meets this aim, please submit a pull request.

Thanks to @jaredfolkins for the contribution.

Type Value
Model Name MacBook Pro
Model Identifier MacBookPro11,3
Processor Name Intel Core i7
Processor Speed 2.3 GHz
Number of Processors 1
Total Number of Cores 4
L2 Cache (per Core) 256 KB
L3 Cache 6 MB
Memory 16 GB
BenchmarkBatch200RandomWrites                20                              557227775                     ns/op
BenchmarkBatch200RandomWritesParallel10      30                              354465417                     ns/op
BenchmarkBatch200SoftRandomWritesParallel10  100                             761639276                     ns/op
BenchmarkRandomWrites                        100                             10456580                      ns/op
BenchmarkRandomWritesParallel10              1000                            1614175                       ns/op
BenchmarkRandomSoftWrites                    3000                            589660                        ns/op
BenchmarkRandomSoftWritesParallel10          10000                           247588                        ns/op
BenchmarkSequentialWrites                    50                              24408285                      ns/op
BenchmarkSequentialWritesParallel10          1000                            1755373                       ns/op
BenchmarkSequentialSoftWrites                3000                            631211                        ns/op
BenchmarkSequentialSoftWritesParallel10      10000                           263481                        ns/op


Examples

Many functions have examples and are viewable in the godoc; alternatively, view some more fully featured examples on the wiki.

Another good place to find examples is the tests; almost every term has a couple of tests that demonstrate how it can be used.

Further reading


License

Copyright 2013 Daniel Cannon

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

  • Terrible performance; extreme cpu usage; degrades further over time


    We are experiencing extremely bad performance with this driver, and it's easily reproducible.

    This is using the most recently committed gorethink driver and RethinkDB 1.14.1. The platform is a DigitalOcean host running Ubuntu 14.04 with 4 cores and 8 GB RAM, but I've reproduced the same thing on other hosts with varying configurations.

    A simple program to insert 30000 tiny test documents using 3 concurrent functions inserting 10000 documents each completes in:

    real 7m0.511s user 10m56.033s sys 0m47.415s

    Which works out to be 71 documents per second.

    And during this time CPU usage for go, not so much rethink, goes nuts:


    The really odd thing is that the performance gets worse as time goes on. It starts initially at around 300 ops/s and near the end of the test, the performance drops down to about 25 ops/s:


    I am posting this issue in gorethink instead of rethinkdb primarily because of the CPU - the driver is burning cpu like the devil inserting just a few thousand tiny documents.

    Here is the small app used in this test:

    package main

    import "log"
    import "strconv"
    import "runtime"
    import rdb ""

    type test_document struct {
            Id       string `gorethink:"name"`
            ParentId string `gorethink:"parent_id"`
            Username string `gorethink:"username"`
            Status   int    `gorethink:"status"`
    }

    func he(cur *rdb.Cursor, err error) {
            if err != nil {
                    log.Fatal(err)
            }
    }

    func main() {
            log.Printf("Creating DB...")
            opts := rdb.ConnectOpts{
                    Address: "localhost:28015",
                    MaxIdle: 10,
            }
            ses, err := rdb.Connect(opts)
            if err != nil {
                    log.Fatal(err)
            }
            _, _ = rdb.DbDrop("perf_test").Run(ses)
            _, err = rdb.DbCreate("perf_test").Run(ses)
            if err != nil {
                    log.Fatal(err)
            }
            tcopts := rdb.TableCreateOpts{PrimaryKey: "id", Durability: "soft"}
            log.Printf("Creating Table...\n")
            he(rdb.Db("perf_test").TableCreate("users", tcopts).Run(ses))
            log.Printf("Populating table...\n")
            go what(0, 10000, ses)
            go what(10000, 20000, ses)
            what(20000, 30000, ses)
    }

    func what(begin int, end int, ses *rdb.Session) {
            for i := begin; i < end; i++ {
                    user := test_document{
                            Id:       strconv.Itoa(i),
                            Username: strconv.Itoa(i),
                            ParentId: strconv.Itoa(i - 1),
                            Status:   0,
                    }
                    // ...
            }
    }
    t:bug p:high s:confirmed 
    opened by wildattire 35
  • Bug when updating time.Time in a nested map


    The driver loses type information from a nested map:

    when := time.Now()
          "LastSeen": map[string]time.Time{"some_tag": when}}

    The time ends up being updated as a string type instead of the expected {'$reql_type$': 'TIME'} type.

    t:bug p:medium 
    opened by or-else 28
  • Improve connection pool performance


    During my investigation of #125 I noticed that the performance of the connection pool is pretty bad, and the method I use for closing connections does not work well with RethinkDB due to the reuse of connections for continue + end queries.

    I have looked into removing the connection pool completely as most of the official drivers do not use connection pools however proper concurrency support is pretty important for a Go driver IMO.

    t:enhancement p:medium 
    opened by dancannon 28
  • "Token ## not in stream cache" error

    When making multiple async queries to the driver, we sometimes get the above error. It happens about 20-60% of the calls we're making, so we've had to disable async calls to our API. The Python driver appears to have had this issue as well and they resolved it:

    Is this something that can be address in this project as well?

    t:bug p:medium s:confirmed 
    opened by codisms 25
  • Unable to check if Get() returns nothing


    As with rethinkgo, Get() always returns a single row and does not provide any way of checking if a record exists.

    row := r.Table(table).Get("missing key").RunRow(rs)
    err := row.Scan(obj)
    // err == nil, obj has default values

    rows, err := r.Table(table).Get("missing key").Run(rs)
    // err == nil
    for rows.Next() { // returns true
        err := rows.Scan(obj)
        // err == nil, obj has default values
    }

    GetAll works :

    rows, err := r.Table(table).GetAll("missing key").Run(rs)
    // err == nil
    for rows.Next() { // returns false
        err := rows.Scan(obj)
    }

    RethinkDB returns a single NULL datum on Get queries returning no result, and this null value is scanned to the object.

    Proposition :

    • [BC break] ResultRow.Scan returns ErrNotFound on NULL datum (as works mgo for MongoDB in similar cases)
    • add func (*ResultRow) IsNull() (or IsEmpty()/IsNil())

    The code would work as follows :

    row := r.Table(table).Get("missing key").RunRow(rs)
    err := row.Scan(obj)
    // err == ErrNotFound

    rows, err := r.Table(table).Get("missing key").Run(rs)
    // err == nil
    for rows.Next() && !rows.IsNull() { // true + false
        err := rows.Scan(obj)
    }

    If this is ok, I can prepare a PR.

    [EDIT: replaced RunRow by Run on last piece of code]

    opened by jfbus 23
  • This project is no longer maintained


    Unfortunately I have decided to stop maintaining GoRethink, this is due to the following reasons:

    • Over the last few years while I have spent a lot of time maintaining this driver I have not used it very much for my own personal projects.
    • My job has been keeping me very busy lately and I don't have as much time to work on this project as I used to.
    • The company behind RethinkDB has shut down and while I am sure the community will keep the database going it seems like a good time for me to step away from the project.
    • The driver itself is in a relatively good condition and many companies are using the existing version in production.

    I hope you understand my decision to step back from the project; if you have any questions or would be interested in taking over some of the maintenance of the project, please let me know. To make this process easier I have also decided to move the repository to the GoRethink organisation. All existing imports should still work.

    Thanks to everybody who got involved with this project over the last ~4 years and helped out, I have truly enjoyed the time I have spent building this library and I hope both RethinkDB and this driver manage to keep going.

    p:high s:on-hold 
    opened by dancannon 21
  • Connection pool is exhausting connections, eventually hangs


    After running my server for a while I start to see file descriptors being used up and when I dump goroutines I see tons of these:

    goroutine 123981 [chan receive, 455 minutes]:*Pool).conn(0xc20805a1b0, 0x7bbf80, 0x0, 0x0)
            /home/web/apps/fbrss/src/ +0x2a2*Pool).query(0xc20805a1b0, 0xc200000001, 0x0, 0xc20834b720, 0xc208226420, 0x0, 0x0, 0x0)
            /home/web/apps/fbrss/src/ +0x40*Pool).Query(0xc20805a1b0, 0x1, 0x0, 0xc20834b720, 0xc208226420, 0x0, 0x0, 0x0)
            /home/web/apps/fbrss/src/ +0x94, 0x5, 0x4700000000, 0x0, 0x0, 0xc2081dfdd0, 0x2, 0x2, 0xc208226390, 0xc208010310, ...)
            /home/web/apps/fbrss/src/ +0x10c
    main.LoadUser(0xc20834b5ea, 0x28, 0x945b50, 0x6, 0xc208446d90, 0x0, 0x0)
            /home/web/apps/fbrss/data.go:235 +0xa75
            /home/web/apps/fbrss/feed.go:182 +0x1b4

    (lines are off by one, 252 for me is

    Depending on how big I make the pool this starts happening after 30-90 minutes of running at 2-5 reqs/sec. My code isn't doing anything extraordinary, looks roughly like this:

        rows, err = r.Table("users").GetAllByIndex("cookie", value).Limit(1).Run(rethinkSession)
        if err != nil {
            return nil, nil
        }
        var user User
        err = rows.One(&user)
        // ...wrapping up...

    Weird thing is, after I start the server it keeps opening more connections until the limit is reached and it runs fine at the limit for a while. Eventually, something triggers the build up of opening new connections and everything stalls. Let me know if I can add anything else.

    t:bug p:high 
    opened by oliver006 21
  • Inserts get truncated data at high concurrency


    I'm playing around with the driver at runtime.GOMAXPROCS(8) and trying to insert 29k documents using this Zip JSON. After a while, the driver gives this error at random location and stop

    gorethink: String `CALCASIEU` (truncated) contains NULL byte at offset 9. in:
    r.Insert(r.Table("zips"), {City="CALCASIEU\x00", Loc=[-91.875149, 31.906412], Pop=124, State="LA"})

    Although everything runs fine at runtime.GOMAXPROCS(4) and rethinkdb get to around 3k of inserts/sec. The sample code is here (

    opened by pengux 21
  • Driver panics on user data


    The driver panics on user data. The panic is not documented; the doc just states "If the value cannot be converted, an error is returned at query .Run(session) time", which is only partially true.

    Panicking on user data is not a good pattern. The driver should be able to handle any input without crashing.

    If I do an Insert on user-provided data, the only way to avoid the panic is to pre-parse it myself to ensure the maximum nesting depth is not exceeding the driver-imposed limit. Which is a lot of unnecessary code duplication and extra CPU cycles. Wrapping every call to gorethink with Recover does not seem like a clean solution either.

    Why not just

    return Term{
        termType: p.Term_DATUM,
        data:     nil,
    }

    instead of panicking?

    t:enhancement p:low s:ready-for-release 
    opened by or-else 17
  • Changefeed crash


    Having recurring crashes with the ChangeFeed-cursors (please see below).

    Thanks in advance for looking into it, Dan. :-)

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal 0xb code=0x1 addr=0x20 pc=0x62df73]
    goroutine 7932 [running]:*Cursor).bufferNextResponse(0xc820517b80, 0x0, 0x0)
        /home/jenkins/.gvm/pkgsets/go1.5.1/global/src/ +0x263*Cursor).seekCursor(0xc820517b80, 0x100000001, 0x0, 0x0)
        /home/jenkins/.gvm/pkgsets/go1.5.1/global/src/ +0xe7*Cursor).nextLocked(0xc820517b80, 0xd9bfe0, 0xc8200769a0, 0xfeb601, 0xc8200769a0, 0x0, 0x0)
        /home/jenkins/.gvm/pkgsets/go1.5.1/global/src/ +0x3c*Cursor).Next(0xc820517b80, 0xd9bfe0, 0xc8200769a0, 0xd9bfe0)
        /home/jenkins/.gvm/pkgsets/go1.5.1/global/src/ +0xb0*Cursor).Listen.func1(0xdd3720, 0xc8206341e0, 0xc820517b80)
        /home/jenkins/.gvm/pkgsets/go1.5.1/global/src/ +0x19d
    created by*Cursor).Listen
        /home/jenkins/.gvm/pkgsets/go1.5.1/global/src/ +0x49
    t:bug p:high s:complete 
    opened by gthmac 15
  • fix: refactored tests dependant on float assertion


    Hi Dan,

    As promised, here are the fixes for the tests.

    I created some helper methods which essentially ripoff Go's stdlib.

    From there I refactored the coordinates into float64 vertices.

    I then used a helper method I created to compare the coordinates instantiated to the Lines and Points returned, throwing an error if the deviation is too great.

    I think this is readable but am open to feedback.


    opened by jaredfolkins 15
  • Utilize MarshalJSON and UnmarshalJSON interface implementations


    Is your feature request related to a problem? Please describe. Custom types can sometimes produce empty values in a RethinkDB document. I have implemented my own decimal type:

    type Decimal struct {
    	flags uint32
    	high  uint32
    	low   uint32
    	mid   uint32
    }

    // MarshalJSON returns the decimal as a text string without quotes
    func (d Decimal) MarshalJSON() ([]byte, error) { return d.MarshalText() }

    // MarshalText encodes the receiver into UTF-8-encoded text and returns the result.
    func (d Decimal) MarshalText() (text []byte, err error) {
    	text = []byte(d.String())
    	return text, nil
    }

    // UnmarshalJSON unmarshals the JSON value, ignoring quotes
    func (d *Decimal) UnmarshalJSON(text []byte) error {
    	return d.UnmarshalText(text)
    }

    // UnmarshalText unmarshals the decimal from the provided text.
    func (d *Decimal) UnmarshalText(text []byte) (err error) {
    	*d, err = Parse(string(text))
    	return err
    }

    It implements both json.Marshaler and json.Unmarshaler. This type encodes and decodes without issue using the standard encoding/json package. So I was surprised to see the following document stored in RethinkDB

    "candle": {
    "bar_seqno": 12024547 ,
    "close_price": { } ,
    "high_price": { } ,
    "low_price": { } ,
    "open_price": { }
    }

    when using

    type Candle struct {
      BarSeqno   int     `json:"bar_seqno"`
      OpenPrice  Decimal `json:"open_price"`
      HighPrice  Decimal `json:"high_price"`
      LowPrice   Decimal `json:"low_price"`
      ClosePrice Decimal `json:"close_price"`
    }

    candle := Candle{
      BarSeqno:   12024547,
      OpenPrice:  decimal.NewFromString("1.33028"),
      HighPrice:  decimal.NewFromString("1.33028"),
      LowPrice:   decimal.NewFromString("1.33028"),
      ClosePrice: decimal.NewFromString("1.33028"),
    }

    err := r.Table("candles").Insert(candle).Exec(session)

    What I would expect to see is

    "candle": {
    "bar_seqno": 12024547 ,
    "close_price": 1.33028 ,
    "high_price": 1.33028 ,
    "low_price": 1.33028 ,
    "open_price": 1.33028
    }

    Describe the solution you'd like If this library could use the json.Marshaler and json.Unmarshaler implementations, I would get the expected value by just using

    err := r.Table("candles").Insert(candle).Exec(session)

    Describe alternatives you've considered My workaround comes from this issue and is basically:

    candles := []Candle{candle1, candle2, candle3}
    b := new(bytes.Buffer)
    for _, candle := range candles {
    	if err = json.NewEncoder(b).Encode(candle); err != nil {
    		return err
    	}
    	if err = r.Table(name).Insert(r.JSON(b.String())).Exec(session); err != nil {
    		return err
    	}
    }
    This is not only more verbose but also issues separate calls to Insert instead of sending a batch, which hurts performance and loses the transaction-like quality of passing a slice of objects to Insert.
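One way to keep the batching, under the assumption that Insert accepts a JSON array via r.JSON, is to marshal the whole slice once. A sketch (the helper name and the commented driver usage are assumptions, not tested driver behaviour):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Candle struct {
	BarSeqno int     `json:"bar_seqno"`
	Price    float64 `json:"price"`
}

// encodeBatch marshals the whole slice once, yielding a JSON array that
// could then be handed to a single Insert call, e.g. (hypothetical usage):
//
//	r.Table(name).Insert(r.JSON(encoded)).Exec(session)
func encodeBatch(candles []Candle) (string, error) {
	b, err := json.Marshal(candles)
	return string(b), err
}

func main() {
	s, err := encodeBatch([]Candle{{1, 1.33}, {2, 1.34}})
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // [{"bar_seqno":1,"price":1.33},{"bar_seqno":2,"price":1.34}]
}
```

This sends one request instead of one per document, though it still bypasses the driver's own struct encoder.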

    Additional context Is there something about RethinkDB or this library that would prevent adding this functionality? I would be happy to give it a try but not if someone has already proven it is a bad idea.

    opened by klokare 2
  • WriteResponse does not return GeneratedKeys

    WriteResponse does not return GeneratedKeys

    Describe the bug I am running below query

    alert := dbEntities.Alert{
    		Acknowledged:          false,
    		AcknowledgedTimestamp: nil,
    		AutoAcknowledged:      false,
    		Class:                 alertClass,
    		Count:                 1,
    		Level:                 alertLevel,
    		Message:               message,
    		Ref:                   ref,
    		Timestamp:             nil,
    		Type:                  alertType,
    		RuleId:                ruleId,
    }
    res, err := r.Table(hConstant.TableAlert).Insert(alert).RunWrite(rethinkHelper.RethinkSession)

    There is no error. In res, Inserted is 1 but the GeneratedKeys slice is empty.

    To Reproduce Steps to reproduce the behavior:

    1. Create Database.
    2. Create Table
    3. run above query
    4. Check the response object

    Expected behavior GeneratedKeys should contain at least one value, specifically the generated id for the inserted record.

    Screenshots: (screenshot omitted)

    System info

    • OS: [Ubuntu 18.04.6 LTS]
    • RethinkDB Version: [2.4.2~0bionic (GCC 7.5.0)]
    opened by GaikwadPratik 0
  • Contexts not working properly in certain scenarios

    Contexts not working properly in certain scenarios

    Describe the bug

    To describe the bug, I'd like to look at the following "database outage" scenario:

    • A microservice with pretty high workload, which connects to RethinkDB
    • The RethinkDB server goes down (maybe due to rolling update of a worker node in K8s or whatever...)

    What can then happen is:

    • Response times for users of the microservice get slower and slower, even though all DB queries are run with contexts properly set (max 30s, but response times can quickly stack up to >600s)
    • Goroutines start to build up
    • Eventually the microservice gets OOM-killed

    If I conclude correctly from the code, the connection pool uses a mutex while distributing queries to a connection (to prevent concurrent creation of a new connection?). I guess in my scenario creating a connection takes longer (because the connection has gone bad and needs to be recreated) than requests are coming in, so goroutines queue up waiting for the mutex (until the database connection is re-established, which stops this behavior). In the logs of the application I eventually see the connection refused error from this driver.

    This shows that mutexes have a disadvantage in these kinds of scenarios: waiting on them cannot be abandoned even when a context is provided. From my perspective, the implementation should instead use something like go-lock, or a channel-based construct from which goroutines can both learn when the connection is ready and receive a cancellation signal from a context.

    Maybe one or the other will stumble upon the same problem and this helps to better understand the observed behavior.

    To Reproduce

    • Produce high workload
    • Shutdown RethinkDB server

    Expected behavior The queries to the database are cancelled by the context and do not queue up.


    Screenshot from 2022-04-07 15-07-05

    --> As soon as the DB server is shut down, goroutines start queueing up (how quickly depends on the workload)


    --> This is a bit complex as it's created with pprof for a real microservice, but the important information is at the bottom: goroutines are queuing up in the conn function of the connection pool

    System info

    • RethinkDB Version: 2.4.1
    opened by Gerrit91 0
  • Connection and Cursor can be used concurrently

    Connection and Cursor can be used concurrently

    The "not thread safe" comments are seven years old and no longer apply.

    Reason for the change Prevent other developers from being misled if they read the library's documentation but not its code.


    Code examples



    opened by CodyDWJones 0
  • Added helper func to check if err is PK too long error

    Added helper func to check if err is PK too long error

    Signed-off-by: Wahab Ali [email protected]

    Reason for the change Ease of use for users consuming go-rethink API.

    Description This PR adds a helper function that checks whether the error returned by RethinkDB is a "primary key too long" error. RethinkDB has an unusual limitation on the length of primary keys, so I think this helper is useful for consumers of the go-rethink API, especially if they are new to RethinkDB.

    Code examples N/A


    References N/A

    opened by wahabmk 0
  • v6.2.2(Jun 2, 2022)

  • v6.2.1(Mar 19, 2020)

  • v6.2.0(Mar 18, 2020)

  • v6.1.1(Mar 12, 2020)

  • v6.1.0(Mar 9, 2020)

    • Reworked and tested new connection pools with multiple queries per connection
    • Socket Read- and WriteTimeout replaced with context timeout
    • Mock assert fix
    • Connection pool fixed initial size
    • Changes added offsets
    Source code(tar.gz)
    Source code(zip)
  • v6.0.0(Dec 22, 2019)

    • Added JSON tags to ConnectOpts to make it serializable
    • Blocking mocks for responses
    • Fix Connect documentation
    • Added Type to ChangeResponse
    • Added bitwise operations support
    • Added write hooks support
    Source code(tar.gz)
    Source code(zip)
  • v5.0.1(Oct 18, 2018)

  • v5.0.0(Sep 12, 2018)

    • Moved to rethinkdb organization
    • Renamed to rethinkdb-go repo
    • Renamed to rethinkdb package
    • Fixed instability integration tests due to same tables names
    • Fixed wrong asserts in integration tests
    Source code(tar.gz)
    Source code(zip)
  • v4.1.0(Aug 29, 2018)


    • Rare Connection leaks if socket errors occurred
    • Updated ql2.proto file from rethinkdb repo


    • Support for independent custom type marshalers
    Source code(tar.gz)
    Source code(zip)
  • v4.0.0(Dec 14, 2017)


    • Connection work with sockets, now only a single goroutine reads from socket.
    • Optimized threadsafe operations in Connection with channels and atomics instead of mutex.
    • All tests with real db moved to integration folder


    • Added support for tracing with opentracing-go
    • Added a brand-new unit tests for Connection
    Source code(tar.gz)
    Source code(zip)
  • v3.0.5(Sep 28, 2017)

  • v3.0.4(Sep 4, 2017)

  • v3.0.3(Sep 3, 2017)

    • Added support to cancellation queries and timeouts with context.Context passed through RunOpts
    • Fixed import path for sirupsen/logrus due to repo was renamed
    Source code(tar.gz)
    Source code(zip)
  • v3.0.1(Jan 30, 2017)

  • v3.0.0(Dec 6, 2016)


    Unfortunately this will likely be the last release I plan to work on. This is due to the following reasons:

    • Over the last few years while I have spent a lot of time maintaining this driver I have not used it very much for my own personal projects.
    • My job has been keeping me very busy lately and I don't have as much time to work on this project as I used to.
    • The company behind RethinkDB has shut down and while I am sure the community will keep the database going it seems like a good time for me to step away from the project.
    • The driver itself is in a relatively good condition and many companies are using the existing version in production.

    I hope you understand my decision to step back from the project; if you have any questions or would be interested in taking over some of the maintenance of the project, please let me know. To make this process easier I have also decided to move the repository to the GoRethink organisation. All existing imports should still work.

    Thanks to everybody who got involved with this project over the last ~4 years and helped out, I have truly enjoyed the time I have spent building this library and I hope both RethinkDB and this driver manage to keep going.


    • Moved project to gorethink organisation
    • Fixed behaviour when unmarshaling nil slices


    • Fix possible deadlock when calling Session.Reconnect
    • Fixed another bug with panic/infinite loop when closing cursor during reads
    • Fixed goroutine leak when calling Session.Close
    Source code(tar.gz)
    Source code(zip)
  • v2.2.2(Oct 2, 2016)


    • The gorethink struct tag is now always checked even after calling SetTags


    • Fixed infinite loop in cursor when closed during read
    Source code(tar.gz)
    Source code(zip)
  • v2.2.1(Sep 20, 2016)


    • Added State and Error to ChangeResponse


    • Fixed panic caused by cursor trying to read outstanding responses while closed
    • Fixed panic when using mock session
    Source code(tar.gz)
    Source code(zip)
  • v2.2.0(Aug 17, 2016)


    • Added support for optional arguments to r.JS()
    • Added NonVotingReplicaTags optional argument to TableCreateOpts
    • Added root term TypeOf, previously only the method term was supported
    • Added root version of Group terms (Group, GroupByIndex, MultiGroup, MultiGroupByIndex)
    • Added root version of Distinct
    • Added root version of Contains
    • Added root version of Count
    • Added root version of Sum
    • Added root version of Avg
    • Added root version of Min
    • Added root version of MinIndex
    • Added root version of Max
    • Added root version of MaxIndex
    • Added ReadMode to RunOpts
    • Added the Interface function to the Cursor which returns a queries result set as an interface{}
    • Added GroupOpts type
    • Added GetAllOpts type
    • Added MinOpts/MaxOpts types
    • Added OptArgs method to Term which allows optional arguments to be specified in an alternative way, for example:
        r.DB("examples").Table("heroes").GetAll("man_of_steel").OptArgs(r.GetAllOpts{
            Index: "code_name",
        })
    • Added ability to create compound keys from structs, for example:
    type User struct {
      Company string `gorethink:"id[0]"`
      Name    string `gorethink:"id[1]"`
      Age     int    `gorethink:"age"`
    }
    // Creates
    {"id": [COMPANY, NAME], "age": AGE}
    • Added Merge function to encoding package that decodes data into a value without zeroing it first.


    • Renamed PrimaryTag to PrimaryReplicaTag in ReconfigureOpts
    • Renamed NotAtomic to NonAtomic in ReplaceOpts and UpdateOpts
    • Changed behaviour of function callbacks to allow arguments to be either of type r.Term or interface{} instead of only r.Term
    • Changed logging to be disabled by default, to enable logs change the output writer of the logger. For example: r.Log.Out = os.Stderr


    • Fixed All not working correctly when the cursor is created by Mock
    • Fixed byte arrays not being correctly converted to the BINARY pseudo-type
    Source code(tar.gz)
    Source code(zip)
  • v2.1.2(Jul 22, 2016)


    • Added the InitialCap field to ConnectOpts to replace MaxIdle as the name no longer made sense.


    • Improved documentation of ConnectOpts
    • Default value for KeepAlivePeriod changed from 0 to 30s


    • Deprecated the field MaxIdle in ConnectOpts, it has now been replaced by InitialCap which has the same behaviour as before. Setting both fields will still work until the field is removed in a future version.


    • Fixed issue causing changefeeds to hang if no data was received
    Source code(tar.gz)
    Source code(zip)
  • v2.1.1(Jul 12, 2016)


    • Added session.Database() which returns the current default database


    • Added more documentation


    • Fixed Random() not being implemented correctly and added tests (Thanks to @bakape for the PR)
    Source code(tar.gz)
    Source code(zip)
  • v2.1.0(Jun 26, 2016)


    • Added ability to mock queries based on the library
      • Added the QueryExecutor interface and changed query runner methods (Run/Exec) to accept this type instead of *Session, Session will still be accepted as it implements the QueryExecutor interface.
      • Added the NewMock function to create a mock query executor
      • Queries can be mocked using On and Return, Mock also contains functions for asserting that the required mocked queries were executed.
      • For more information about how to mock queries see the readme and tests in mock_test.go.


    • Exported the Build() function on Query and Term.
    • Updated import of to
    Source code(tar.gz)
    Source code(zip)
  • v2.0.4(May 24, 2016)


    • Changed Connect to return the reason for connections failing (instead of just "no connections were made when creating the session")


    • Fixed queries not being retried when using Query(), queries are now retried if the request failed due to a bad connection.
    • Fixed Cursor methods panicking if using a nil cursor, please note that you should still always check if your queries return an error.
    Source code(tar.gz)
    Source code(zip)
  • v2.0.3(May 24, 2016)

  • v2.0.2(Apr 18, 2016)


    • Fixed issue which prevented anonymous time.Time values from being encoded when used in a struct.
    • Fixed panic when attempting to run a query with a nil session
    Source code(tar.gz)
    Source code(zip)
  • v2.0.1(Apr 14, 2016)


    • Added UnionWithOpts term which allows Union to be called with optional arguments (such as Interleave)
    • Added IncludeOffsets and IncludeTypes optional arguments to ChangesOpts
    • Added Conflict optional argument to InsertOpts


    • Fixed error when connecting to database as non-admin user, please note that DiscoverHosts will not work with user authentication at this time due to the fact that RethinkDB restricts access to the required system tables.
    Source code(tar.gz)
    Source code(zip)
  • v2.0.0(Apr 13, 2016)


    • GoRethink now uses the v1.0 RethinkDB protocol which supports RethinkDB v2.3 and above. If you are using RethinkDB 2.2 or older please set HandshakeVersion when creating a session. For example:
        session, err := r.Connect(r.ConnectOpts{
            HandshakeVersion: r.HandshakeV0_4,
        })


    • Added support for username/password authentication. To login pass your username and password when creating a session using the Username and Password fields in the ConnectOpts.
    • Added the Grant term
    • Added the Ordered optional argument to EqJoin
    • Added the Fold term and examples
    • Added the ReadOne and ReadAll helper functions for quickly executing a query and scanning the result into a variable. For examples see the godocs.
    • Added the Peek and Skip functions to the Cursor.
    • Added support for referential arrays in structs
    • Added the Durability argument to RunOpts/ExecOpts


    • Deprecated the root Wait term, r.Table(...).Wait() should now be used instead.
    • Deprecated session authentication using AuthKey


    • Fixed issue with ReconfigureOpts field PrimaryTag

    Thanks to all contributors who helped out with this release, especially @rschmukler and @russmatney for spending the time to work with me on fixing some of the more difficult issues in this release.

    Source code(tar.gz)
    Source code(zip)
  • v1.4.1(Apr 16, 2016)


    • Fixed panic when closing a connection at the same time as using a changefeed.
    • Update imports to correctly use
    • Fixed race condition when using anonymous functions
    • Fixed IsConflictErr and IsTypeErr panicking when passed nil errors
    • RunWrite no longer misformats errors with formatting directives in them
    Source code(tar.gz)
    Source code(zip)
  • v1.4.0(Mar 15, 2016)


    • Added the ability to reference subdocuments when inserting new documents, for more information see the documentation in the readme.
    • Added the SetTags function which allows GoRethink to override which tags are used when working with structs. For example, to also support the json tag, add the following call: SetTags("gorethink", "json").
    • Added helper functions for checking the error type of a write query, this is useful when calling RunWrite.
      • Added IsConflictErr which returns true when RethinkDB returns a duplicate key error.
      • Added IsTypeErr which returns true when RethinkDB returns an unexpected type error.
    • Added the RawQuery term which can be used to execute a raw JSON query, for more information about this query see the godoc.
    • Added the NextResponse function to Cursor which will return the next raw JSON response in the result set.
    • Added ability to set the keep alive period by setting the KeepAlivePeriod field in ConnectOpts.


    • Fixed an issue that could prevent bad connections from being removed from the connection pool.
    • Fixed certain connection errors not being returned as RqlConnectionError when calling Run, Exec or RunWrite.
    • Fixed potential deadlock in connection code caused when building the query.
    Source code(tar.gz)
    Source code(zip)
  • v1.3.2(Feb 1, 2016)

  • v1.3.1(Jan 22, 2016)


    • Added more documentation and examples for GetAll.


    • Fixed RunWrite not deferring its call to Cursor.Close(). This could cause issues if an error occurred when decoding the result.
    • Fixed panic when calling Error() on a GoRethink rqlError.
    Source code(tar.gz)
    Source code(zip)
Go MySQL Driver is a MySQL driver for Go's (golang) database/sql package

Go-MySQL-Driver A MySQL-Driver for Go's database/sql package Features Requirements Installation Usage DSN (Data Source Name) Password Protocol Address

Go SQL Drivers 12.9k Jan 4, 2023
Qmgo - The Go driver for MongoDB. It's based on official mongo-go-driver but easier to use like Mgo.

Qmgo English | 简体中文 Qmgo is a Go driver for MongoDB . It is based on MongoDB official driver, but easier to use like mgo (such as the chain call). Qmg

Qiniu Cloud 1k Dec 28, 2022
Go driver for PostgreSQL over SSH. This driver can connect to postgres on a server via SSH using the local ssh-agent, password, or private-key.

pqssh Go driver for PostgreSQL over SSH. This driver can connect to postgres on a server via SSH using the local ssh-agent, password, or private-key.

mattn 52 Nov 6, 2022
Mirror of Apache Calcite - Avatica Go SQL Driver

Apache Avatica/Phoenix SQL Driver Apache Calcite's Avatica Go is a Go database/sql driver for the Avatica server. Avatica is a sub-project of Apache C

The Apache Software Foundation 103 Nov 3, 2022
Firebird RDBMS sql driver for Go (golang)

firebirdsql (Go firebird sql driver) Firebird RDBMS SQL driver for Go Requirements Firebird 2.5 or higher Golang 1.13 or higher

Hajime Nakagami 186 Dec 20, 2022
Microsoft ActiveX Object DataBase driver for go that using exp/sql

go-adodb Microsoft ADODB driver conforming to the built-in database/sql interface Installation This package can be installed with the go get command:

mattn 132 Dec 30, 2022
Microsoft SQL server driver written in go language

A pure Go MSSQL driver for Go's database/sql package Install Requires Go 1.8 or above. Install with go get . Connecti

null 1.7k Dec 26, 2022
Oracle driver for Go using database/sql

go-oci8 Description Golang Oracle database driver conforming to the Go database/sql interface Installation Install Oracle full client or Instant Clien

mattn 598 Dec 30, 2022
sqlite3 driver for go using database/sql

go-sqlite3 Latest stable version is v1.14 or later not v2. NOTE: The increase to v2 was an accident. There were no major changes or features. Descript

mattn 6.3k Jan 8, 2023
GO DRiver for ORacle DB

Go DRiver for ORacle godror is a package which is a database/sql/driver.Driver for connecting to Oracle DB, using Anthony Tuininga's excellent OCI wra

null 409 Jan 5, 2023
Go Sql Server database driver.

gofreetds Go FreeTDS wrapper. Native Sql Server database driver. Features: can be used as database/sql driver handles calling stored procedures handle

minus5 108 Dec 16, 2022
PostgreSQL driver and toolkit for Go

pgx - PostgreSQL Driver and Toolkit pgx is a pure Go driver and toolkit for PostgreSQL. pgx aims to be low-level, fast, and performant, while also ena

Jack Christensen 6.5k Jan 4, 2023
Pure Go Postgres driver for database/sql

pq - A pure Go postgres driver for Go's database/sql package Install go get Features SSL Handles bad connections for database/sql S

null 7.8k Jan 2, 2023
Lightweight Golang driver for ArangoDB

Arangolite Arangolite is a lightweight ArangoDB driver for Go. It focuses on pure AQL querying. See AranGO for a more ORM-like experience. IMPORTANT:

Fabien Herfray 73 Sep 26, 2022
goriak - Go language driver for Riak KV

goriak Current version: v3.2.1. Riak KV version: 2.0 or higher, the latest version of Riak KV is always recommended. What is goriak? goriak is a wrapp

Gustav Westling 29 Nov 22, 2022
Mongo Go Models (mgm) is a fast and simple MongoDB ODM for Go (based on official Mongo Go Driver)

Mongo Go Models Important Note: We changed package name from Kamva) to kamva) in v

kamva 607 Jan 2, 2023
The MongoDB driver for Go

The MongoDB driver for Go This fork has had a few improvements by ourselves as well as several PR's merged from the original mgo repo that are current

GlobalSign 2k Jan 8, 2023
The Go driver for MongoDB

MongoDB Go Driver The MongoDB supported driver for Go. Requirements Installation Usage Bugs / Feature Reporting Testing / Development Continuous Integ

mongodb 7.1k Dec 31, 2022
SAP (formerly sybase) ASE/RS/IQ driver written in pure go

tds import "" Package tds is a pure Go Sybase ASE/IQ/RS driver for the database/sql package. Status This is a beta release. This dr

Thomas 52 Dec 7, 2022