pgx - PostgreSQL Driver and Toolkit

pgx is a pure Go driver and toolkit for PostgreSQL.

pgx aims to be low-level, fast, and performant, while also enabling PostgreSQL-specific features that the standard database/sql package does not allow for.

The driver component of pgx can be used alongside the standard database/sql package.

The toolkit component is a related set of packages that implement PostgreSQL functionality such as parsing the wire protocol and type mapping between PostgreSQL and Go. These underlying packages can be used to implement alternative drivers, proxies, load balancers, logical replication clients, etc.
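
To make "parsing the wire protocol" concrete: the PostgreSQL v3 protocol frames every regular message as a one-byte type tag followed by a big-endian int32 length that counts itself plus the payload. A minimal stdlib-only sketch of that framing (an illustration, not pgproto3's actual API):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// parseMessage reads one PostgreSQL v3 message: a 1-byte type tag
// followed by a big-endian int32 length that includes the 4 length
// bytes themselves plus the payload.
func parseMessage(buf []byte) (tag byte, payload []byte, rest []byte, err error) {
	if len(buf) < 5 {
		return 0, nil, buf, fmt.Errorf("need at least 5 bytes, have %d", len(buf))
	}
	tag = buf[0]
	length := int(binary.BigEndian.Uint32(buf[1:5])) // includes the 4 length bytes
	if len(buf) < 1+length {
		return 0, nil, buf, fmt.Errorf("incomplete message")
	}
	payload = buf[5 : 1+length]
	rest = buf[1+length:]
	return tag, payload, rest, nil
}

func main() {
	// A ReadyForQuery ('Z') message carrying the status byte 'I' (idle).
	msg := []byte{'Z', 0, 0, 0, 5, 'I'}
	tag, payload, _, err := parseMessage(msg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("tag=%c payload=%q\n", tag, payload)
}
```

pgproto3 builds typed message structs on top of exactly this framing.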

The current release of pgx, v4, requires Go modules. To use the previous version, check out and vendor the v3 branch.

Example Usage

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v4"
)

func main() {
	conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close(context.Background())

	var name string
	var weight int64
	err = conn.QueryRow(context.Background(), "select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
	if err != nil {
		fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err)
		os.Exit(1)
	}

	fmt.Println(name, weight)
}

See the getting started guide for more information.

Choosing Between the pgx and database/sql Interfaces

It is recommended to use the pgx interface if:

  1. The application only targets PostgreSQL.
  2. No other libraries that require database/sql are in use.

The pgx interface is faster and exposes more features.

The database/sql interface only allows the underlying driver to return or receive the following types: int64, float64, bool, []byte, string, time.Time, or nil. Handling other types requires implementing the database/sql.Scanner and database/sql/driver.Valuer interfaces, which transmit values in text format. The binary format, which the pgx interface uses, can be substantially faster.
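
As an illustration of that text-format requirement, here is a sketch of a custom type moving through database/sql as text. The Point type and its "(x,y)" encoding are hypothetical:

```go
package main

import (
	"database/sql/driver"
	"fmt"
	"strconv"
	"strings"
)

// Point is a hypothetical custom type. database/sql can only move it
// through a driver as text, so Scan and Value marshal "(x,y)" strings.
type Point struct {
	X, Y float64
}

// Scan implements database/sql.Scanner, decoding the text format.
func (p *Point) Scan(src interface{}) error {
	s, ok := src.(string)
	if !ok {
		b, okb := src.([]byte)
		if !okb {
			return fmt.Errorf("cannot scan %T into Point", src)
		}
		s = string(b)
	}
	s = strings.Trim(s, "()")
	parts := strings.Split(s, ",")
	if len(parts) != 2 {
		return fmt.Errorf("malformed point %q", s)
	}
	var err error
	if p.X, err = strconv.ParseFloat(parts[0], 64); err != nil {
		return err
	}
	p.Y, err = strconv.ParseFloat(parts[1], 64)
	return err
}

// Value implements database/sql/driver.Valuer, encoding the text format.
func (p Point) Value() (driver.Value, error) {
	return fmt.Sprintf("(%v,%v)", p.X, p.Y), nil
}

func main() {
	var p Point
	if err := p.Scan("(1.5,2)"); err != nil {
		panic(err)
	}
	v, _ := p.Value()
	fmt.Println(p.X, p.Y, v)
}
```

With the pgx interface, a type like this could instead register a binary codec and skip the string round trip entirely.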


pgx supports many features beyond what is available through database/sql:

  • Support for approximately 70 different PostgreSQL types
  • Automatic statement preparation and caching
  • Batch queries
  • Single-round trip query mode
  • Full TLS connection control
  • Binary format support for custom types (allows for much quicker encoding/decoding)
  • Copy protocol support for faster bulk data loads
  • Extendable logging support including built-in support for log15adapter, logrus, zap, and zerolog
  • Connection pool with after-connect hook for arbitrary connection setup
  • Listen / notify
  • Conversion of PostgreSQL arrays to Go slice mappings for integers, floats, and strings
  • Hstore support
  • JSON and JSONB support
  • Maps inet and cidr PostgreSQL types to net.IPNet and net.IP
  • Large object support
  • NULL mapping to Null* struct or pointer to pointer
  • Supports database/sql.Scanner and database/sql/driver.Valuer interfaces for custom types
  • Notice response handling
  • Simulated nested transactions with savepoints


There are three areas in particular where pgx can provide a significant performance advantage over the standard database/sql interface and other drivers:

  1. PostgreSQL specific types - Types such as arrays can be parsed much quicker because pgx uses the binary format.
  2. Automatic statement preparation and caching - pgx will prepare and cache statements by default. This can provide a significant free improvement to code that does not explicitly use prepared statements. Under certain workloads, it can perform nearly 3x the number of queries per second.
  3. Batched queries - Multiple queries can be batched together to minimize network round trips.
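
The effect of batching on round trips can be estimated with simple arithmetic: total network time is roughly the round-trip time multiplied by the number of trips. A toy model with hypothetical numbers:

```go
package main

import "fmt"

// totalLatencyMs is a rough latency model: time spent on the network
// is the round-trip time multiplied by the number of network trips.
func totalLatencyMs(rttMs float64, queries, queriesPerTrip int) float64 {
	trips := (queries + queriesPerTrip - 1) / queriesPerTrip // ceiling division
	return rttMs * float64(trips)
}

func main() {
	// Hypothetical numbers: 5 ms RTT, 20 queries.
	fmt.Println(totalLatencyMs(5, 20, 1))  // one query per trip: 100 ms
	fmt.Println(totalLatencyMs(5, 20, 20)) // all 20 in one batch: 5 ms
}
```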

Comparison with Alternatives

For prepared queries with small sets of simple data types, all drivers will have similar performance. However, if prepared statements aren't being explicitly used, pgx can have a significant performance advantage due to automatic statement preparation. pgx also can perform better when using PostgreSQL-specific data types or query batching. See go_db_bench for some database driver benchmarks.

Compatibility with database/sql

pq is exclusively used with database/sql. go-pg does not use database/sql at all. pgx supports database/sql as well as its own interface.

Level of access, ORM

go-pg is a PostgreSQL client and ORM. It includes many features that traditionally sit above the database driver, such as ORM, struct mapping, soft deletes, schema migrations, and sharding support.

pgx is "closer to the metal"; such abstractions are beyond the scope of the pgx project, which first and foremost aims to be a performant driver and toolkit.


Testing

pgx tests naturally require a PostgreSQL database. The tests will connect to the database specified in the PGX_TEST_DATABASE environment variable, which can be either a URL or a DSN. In addition, the standard PG* environment variables are respected. Consider using direnv to simplify environment variable handling.

Example Test Environment

Connect to your PostgreSQL server and run:

create database pgx_test;

Connect to the newly-created database and run:

create domain uint64 as numeric(20,0);

Now, you can run the tests:

PGX_TEST_DATABASE="host=/var/run/postgresql database=pgx_test" go test ./...

In addition, there are tests specific for PgBouncer that will be executed if PGX_TEST_PGBOUNCER_CONN_STRING is set.

Supported Go and PostgreSQL Versions

pgx supports the same versions of Go and PostgreSQL that are supported by their respective teams. For Go that is the two most recent major releases and for PostgreSQL the major releases in the last 5 years. This means pgx supports Go 1.13 and higher and PostgreSQL 9.5 and higher.

Version Policy

pgx follows semantic versioning for the documented public API on stable releases. v4 is the latest stable major version.

PGX Family Libraries

pgx is the head of a family of PostgreSQL libraries. Many of these can be used independently. Many can also be accessed from pgx for lower-level control.

pgconn is a lower-level PostgreSQL database driver that operates at nearly the same level as the C library libpq.

pgxpool is a connection pool for pgx. pgx is entirely decoupled from its default pool implementation. This means that pgx can be used with a different pool or without any pool at all.
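
To illustrate how a pool can live entirely outside the driver, here is a toy fixed-size pool built on a buffered channel. It is a sketch of the general pattern, not pgxpool's implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// conn stands in for a database connection in this sketch.
type conn struct{ id int }

// pool is a minimal fixed-size pool built on a buffered channel. The
// driver never sees the pool; the pool only hands out conn values.
type pool struct{ free chan *conn }

func newPool(size int) *pool {
	p := &pool{free: make(chan *conn, size)}
	for i := 0; i < size; i++ {
		p.free <- &conn{id: i}
	}
	return p
}

// Acquire fails fast when the pool is empty; a real pool would block
// or dial a new connection instead.
func (p *pool) Acquire() (*conn, error) {
	select {
	case c := <-p.free:
		return c, nil
	default:
		return nil, errors.New("pool exhausted")
	}
}

func (p *pool) Release(c *conn) { p.free <- c }

func main() {
	p := newPool(2)
	c1, _ := p.Acquire()
	c2, _ := p.Acquire()
	if _, err := p.Acquire(); err != nil {
		fmt.Println("third acquire:", err)
	}
	p.Release(c1)
	p.Release(c2)
}
```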

stdlib is a database/sql compatibility layer for pgx. pgx can be used as a normal database/sql driver, but at any time the native interface can be acquired for better performance or PostgreSQL-specific functionality.

pgtype supports over 70 PostgreSQL types, including uuid, hstore, json, bytea, numeric, interval, inet, and arrays. These types support the database/sql interfaces and are usable outside of pgx. They are fully tested in pgx and pq. They also support a higher-performance interface when used with the pgx driver.
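
As a taste of what this type mapping involves, the following sketch decodes the text representation of a one-dimensional integer array such as {1,2,3}. The real implementation also handles the binary format, quoting, NULL elements, and multiple dimensions:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseIntArray decodes the PostgreSQL text representation of a
// one-dimensional int array, e.g. "{1,2,3}". A sketch only: no
// quoting, NULL elements, or multi-dimensional arrays.
func parseIntArray(s string) ([]int64, error) {
	s = strings.TrimSpace(s)
	if !strings.HasPrefix(s, "{") || !strings.HasSuffix(s, "}") {
		return nil, fmt.Errorf("malformed array %q", s)
	}
	inner := s[1 : len(s)-1]
	if inner == "" {
		return []int64{}, nil
	}
	parts := strings.Split(inner, ",")
	out := make([]int64, 0, len(parts))
	for _, p := range parts {
		n, err := strconv.ParseInt(strings.TrimSpace(p), 10, 64)
		if err != nil {
			return nil, err
		}
		out = append(out, n)
	}
	return out, nil
}

func main() {
	a, err := parseIntArray("{1,2,3}")
	if err != nil {
		panic(err)
	}
	fmt.Println(a)
}
```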

pgproto3 provides standalone encoding and decoding of the PostgreSQL v3 wire protocol. This is useful for implementing very low level PostgreSQL tooling.

pglogrepl provides functionality to act as a client for PostgreSQL logical replication.

pgmock offers the ability to create a server that mocks the PostgreSQL wire protocol. This is used internally to test pgx by purposely inducing unusual errors. pgproto3 and pgmock together provide most of the foundational tooling required to implement a PostgreSQL proxy or MitM (such as for a custom connection pooler).

tern is a stand-alone SQL migration system.

pgerrcode contains constants for the PostgreSQL error codes.

3rd Party Libraries with PGX Support

scany - a library for scanning data from a database into Go structs and more.

  • Error codes helper function

    How about adding a helper function like this:

    func ErrorCode(err error) (string, bool) {
    	pgerr, ok := err.(pgx.PgError)
    	if !ok {
    		return "", false
    	}
    	return pgerr.Code, true
    }

    The reason for this proposal is that I end up creating such a helper function in most of my codebases, and it seems reasonable to include it in the pgx package itself.

    opened by MOZGIII 27
  • Possible rxMsg bug


    I have been trying to track down a mysterious hang and I think it may be a pgx bug. Here is a stack trace of the goroutine that is hung:

    goroutine 97 [IO wait, 18 minutes]:
    net.(*pollDesc).Wait(0xc20813a990, 0x72, 0x0, 0x0)
            /home/ubuntu/go/src/ +0x47
    net.(*pollDesc).WaitRead(0xc20813a990, 0x0, 0x0)
            /home/ubuntu/go/src/ +0x43
    net.(*netFD).Read(0xc20813a930, 0xc20819a000, 0x2000, 0x2000, 0x0, 0x7f028754edb0, 0xc20800b8b0)
            /home/ubuntu/go/src/ +0x40f
    net.(*conn).Read(0xc208036e08, 0xc20819a000, 0x2000, 0x2000, 0x0, 0x0, 0x0)
            /home/ubuntu/go/src/ +0xdc
    bufio.(*Reader).Read(0xc20813e900, 0xc20819a000, 0x2000, 0x2000, 0x5e, 0x0, 0x0)
            /home/ubuntu/go/src/ +0x13a
    io.(*LimitedReader).Read(0xc2081b8060, 0xc20819a000, 0x2000, 0x2000, 0x5e, 0x0, 0x0)
            /home/ubuntu/go/src/ +0xce
    io/ioutil.devNull.ReadFrom(0x0, 0x7f0287550bd8, 0xc2081b8060, 0x1ea, 0x0, 0x0)
            /home/ubuntu/go/src/ +0xb1
    io/ioutil.(*devNull).ReadFrom(0xc20800a2f0, 0x7f0287550bd8, 0xc2081b8060, 0x7f0287550c28, 0x0, 0x0)
            <autogenerated>:9 +0xbb
    io.Copy(0x7f028754ecb0, 0xc20800a2f0, 0x7f0287550bd8, 0xc2081b8060, 0x0, 0x0, 0x0)
            /home/ubuntu/go/src/ +0x13d
    io.CopyN(0x7f028754ecb0, 0xc20800a2f0, 0x7f0287550430, 0xc20813e900, 0x5062, 0x16, 0x0, 0x0)
            /home/ubuntu/go/src/ +0x113
    github.com/jackc/pgx.(*msgReader).rxMsg(0xc2080b3710, 0x0, 0x0, 0x0)
            /home/ubuntu/go/src/ +0xa7
    github.com/jackc/pgx.(*Conn).rxMsg(0xc2080b3200, 0x0, 0xc2080b3710, 0x0, 0x0)
            /home/ubuntu/go/src/ +0x84
    github.com/jackc/pgx.(*Conn).Prepare(0xc2080b3200, 0x0, 0x0, 0xc2081abd60, 0x91, 0xc208135680, 0x0, 0x0)
            /home/ubuntu/go/src/ +0xcf6
    github.com/jackc/pgx.(*Conn).Query(0xc2080b3200, 0xc2081abd60, 0x91, 0xc208111b20, 0x2, 0x2, 0x2, 0x0, 0x0)
            /home/ubuntu/go/src/ +0x2c2
    github.com/jackc/pgx.(*ConnPool).Query(0xc2080b00e0, 0xc2081abd60, 0x91, 0xc208111b20, 0x2, 0x2, 0x91, 0x0, 0x0)
            /home/ubuntu/go/src/ +0x149
    github.com/jackc/pgx.(*ConnPool).QueryRow(0xc2080b00e0, 0xc2081abd60, 0x91, 0xc208111b20, 0x2, 0x2, 0x3)
            /home/ubuntu/go/src/ +0x5a
    (*DB).QueryRow(0xc20808f4e0, 0xc2081abd60, 0x91, 0xc208111b20, 0x2, 0x2, 0x0, 0x0)
            /home/ubuntu/go/src/ +0x7e
    main.selectApp(0x7f0287553a10, 0xc20808f4e0, 0xc20813b1fa, 0x24, 0x0, 0x0, 0x0, 0x0)
            /home/ubuntu/go/src/ +0x275
    main.(*AppRepo).Get(0xc20812a2a0, 0xc20813b1fa, 0x24, 0x0, 0x0, 0x0, 0x0)
            /home/ubuntu/go/src/ +0x8c
    main.func·009(0x7f02875538a8, 0xc20812bd40, 0x7f02875538e8, 0xc20812bb60, 0xc2081a12b0)
            /home/ubuntu/go/src/ +0xdd
    (0x7f02875538a8, 0xc20812bd40, 0x7f02875538e8, 0xc20812bb60, 0xc2081a12b0)
            /home/ubuntu/go/src/ +0x55
    func·002(0x7f02875538e8, 0xc20812bb60, 0xc2081a12b0, 0xc208135640, 0x2, 0x2)
            /home/ubuntu/go/src/ +0xcb
    (*Router).ServeHTTP(0xc20801e640, 0x7f02875538e8, 0xc20812bb60, 0xc2081a12b0)
            /home/ubuntu/go/src/ +0x188
    main.func·008(0x7f02875538e8, 0xc20812bb60, 0xc2081a12b0)
            /home/ubuntu/go/src/ +0x44b
    net/http.HandlerFunc.ServeHTTP(0xc20801ef40, 0x7f02875538e8, 0xc20812bb60, 0xc2081a12b0)
            /home/ubuntu/go/src/ +0x41
    func·004(0x7f02875538e8, 0xc20812bb60, 0xc2081a12b0)
            /home/ubuntu/go/src/ +0x8a0
    net/http.HandlerFunc.ServeHTTP(0xc20812ec30, 0x7f02875538e8, 0xc20812bb60, 0xc2081a12b0)
            /home/ubuntu/go/src/ +0x41
    func·003(0x7f0287553830, 0xc2081abb80, 0xc2081a12b0)
            /home/ubuntu/go/src/ +0x25c
    net/http.HandlerFunc.ServeHTTP(0xc20801ef60, 0x7f0287553830, 0xc2081abb80, 0xc2081a12b0)
            /home/ubuntu/go/src/ +0x41
    net/http.serverHandler.ServeHTTP(0xc208034ae0, 0x7f0287553830, 0xc2081abb80, 0xc2081a12b0)
            /home/ubuntu/go/src/ +0x19a
            /home/ubuntu/go/src/ +0xb57
    created by net/http.(*Server).Serve
            /home/ubuntu/go/src/ +0x35e

    By the looks of it we tried to read when there were no bytes to read, causing it to block on IO or something similar. Have you encountered this before, and is there any debugging code I could insert to work out whether this is a pgx issue or an application code issue? I can reliably reproduce it.

    That said, it could still be a race condition, because it has managed to get through my test without errors before, but I can reproduce it 9 times out of 10.

    opened by josephglanville 24
  • Multi-Tenant Databases Using pgx

    Hello all,

    I hope this is not a question with an obvious answer, but I am a Go beginner interested in using Go and pgx to design a multi-tenant application with a Go REST API.

    This obviously means I need a way of segregating tenants' data - all my tables will have a tenant_id. Now let's say that I include a tenant_id in each user's JWT passed to the REST API. This tenant_id gets passed down the stack until it reaches the methods accessing the Postgres database.

    However, I don't want to append WHERE tenant_id = ? for every query, as this is error prone and makes for longer queries. To ensure that data are seamlessly dealt out to the correct tenant, I want to use Postgres's Row Level Security with a session context variable as outlined here. This means I need a way to set the user on every database connection. I want to use the stdlib to get back a database/sql connection to use with my ORM (the excellent SQLBoiler).

    To do this, all I could think of was using pgx's AfterConnect hook on a connection pool, passing in the tenant_id contained in the JWT to set the session variable on the connection for that user. However, the parameters of AfterConnect won't allow that, and I want to avoid having AfterConnect read some global variable (with methods further up the chain setting it), as then the logic is not all in one place. The idea would be that the tenant_id has to be passed to the AfterConnect method with every call to OpenFromConnPool, so that programmers can't forget to set the Postgres session variable. Is it worth adding an empty interface parameter to the AfterConnect method for these sorts of use cases?

    I am sorry if this is a stupid question and I am more than open to other ideas that people have about solving this problem, as I am quite stuck.

    opened by fluffybonkers 23
  • EOF Reading rows in TLS mode

    I noticed that the pgx driver, when using TLS and reading a lot of rows, would randomly return EOF. I am using pgx against Redshift, but I noticed a similar problem with Postgres too.

    opened by vincepri 21
  • handling dead connections and broken pipes

    Based on our experience with this package, we have come up with what I consider a workaround for handling certain returned errors. In our apps we use the ConnPool and therefore have to handle dead connections. More recently, we found that the underlying connection can report a broken pipe or EOF. We think these errors are recoverable by the app by re-acquiring a new connection.

    The function below is what I currently use in all DAO functions; an example of its use follows.

    func doWithConnection(pool PgxConnPool, maxRetries int, work func(conn *pgx.Conn) error) error {
        conn, err := pool.Acquire()
        if err == pgx.ErrDeadConn && maxRetries > 0 {
            glog.Debugf("%v, try acquiring a non-dead connection", err)
            return doWithConnection(pool, maxRetries-1, work)
        }
        // abort if still err
        if err != nil {
            return err
        }
        defer pool.Release(conn)
        // use the connection
        err = work(conn)
        // all ok?
        if err == nil {
            return nil
        }
        // if not, can we retry?
        canRetry := strings.HasSuffix(err.Error(), "broken pipe") ||
            strings.HasSuffix(err.Error(), "EOF")
        if canRetry && maxRetries > 0 {
            glog.Debugf("failed to do work with connection because:[%v], retry:%d", err, maxRetries)
            return doWithConnection(pool, maxRetries-1, work)
        }
        // give up
        return err
    }

    func (d PgPropertyDao) DeleteKey(del PropertyAccess) error {
        return doWithConnection(d.pool, 10, func(conn *pgx.Conn) error {
            _, err := conn.Exec("select * from delete_key_api($1,$2,$3,$4,$5);",
                // arguments elided in the original report
            )
            if err != nil {
                glog.Errorf("DeleteKey -> unable to delete_key_api(%#v):%v", del, err)
            }
            return err
        })
    }

    The question is whether the pgx package should handle this internally and, if not, whether this is the correct way to do so.


    opened by emicklei 19
  • pgxpool: health check should create new connections before destroying

    Instead of closing connections before creating new ones, this changes the health checker to create connections before closing them. It will also not close idle connections if doing so would put the pool below the minimum number of connections.

    I also added more stats to provide better visibility into the churn of the pool.

    Fixes #1164

    opened by jameshartig 18
  • Connection pool not discarding bad connections

    I have run into a situation in which my postgres database is being restarted (not sure if it's a graceful restart or not), thus getting rid of all of the connections on the server side.

    The connection pool does not seem to be responding well to this. When I try to query using one of the bad connections, I receive a TCP write timeout error. The connection pool does not recognize this (even though it should be checking whether the connection is alive before every query) and puts the connection back into the pool.

    I've dug through the code a bit and am not seeing any obvious problems that could be causing this, but I'm going to keep looking.

    Can someone please tell me if I should expect the connection pool to discard the connection upon receiving a TCP timeout error? I have resorted to detecting this error in my application and resetting the entire pool, but this is too heavy of a solution to be used permanently in my opinion.


    opened by mhartenbower 17
  • possible bug with enum handling

    I read #287; could it be that it broke something with enum handling?

    I get a panic when trying to insert a custom type (with string being the underlying type):

    *pgtype.GenericText is not pgtype.BinaryEncoder: missing method EncodeBinary

    The panic occurs on line values.go:166 (function encodePreparedStatementArgument):

    argBuf, err := value.(pgtype.BinaryEncoder).EncodeBinary(ci, buf)

    Any idea?

    opened by dvic 17
  • Feature request: connection testing on checkout


    I was looking at ways to make sure that connections handed from the pool are not dead.

    I'm aiming to support situations where the DB server might have been restarted, or the connection has been lost for whatever reason.

    I have read the discussion in #74. I see that transaction Begin() does retry the connection - and so far, wrapping all server-side DB interaction into a transaction works for us as an interim workaround.

    Other DB pools I have worked with (in the Java world) support setting up a test query to be run each time a connection is checked out from the pool (or alternatively returned to the pool). The test query might be just SELECT 1;, or it might be specific to the database driver.

    For Apache Commons DBCP, the options configuring this behavior are called validationQuery / testOnBorrow / testOnReturn:

    For the C3P0 project, these are called preferredTestQuery / testConnectionOnCheckout / testConnectionOnCheckin:

    Would adding similar options to pgx be a suitable addition?

    Cheers, Vlad

    opened by vladimir-mencl-eresearch 16
  • add ability to disable implicitly prepared queries.

    It looks like pgx implicitly prepares all queries, particularly within the stdlib interface, which is both surprising and sometimes undesirable. I understand that pgx gets its performance boost from using prepared statements implicitly.

    Would we be open to making this behaviour configurable?

    Details: I ran into this while debugging pgbouncer returning ERROR: unsupported pkt type: 80 (SQLSTATE 08P01) when querying pgbouncer's internal state. I'm assuming the type is the ASCII code of the message's first byte, which would make it a Parse message.

    func (c *Conn) Query(query string, argsV []driver.Value) (driver.Rows, error) {
    	if !c.conn.IsAlive() {
    		return nil, driver.ErrBadConn
    	}
    	ps, err := c.conn.Prepare("", query)
    	if err != nil {
    		return nil, err
    	}
    	_ = ps
    	return c.queryPrepared("", argsV)
    }
    opened by james-lawrence 16
  • Implement optional map[string]string hstore handler.

    It would be nice to be able to avoid a lot of the boilerplate Scan code and simply do something like this:

    type SomeType map[string]string

    func (s SomeType) Scan(vr *pgx.ValueReader) (err error) {
        // can't do this until the hstore Oid is standardized
        // if vr.Type().DataType != pointOid {
        s, err = pgx.hstoreMapHandler(vr.ReadString(vr.Len()))
        return
    }

    And just end up with a populated map.

    I understand the utility of writing your own (t Type) Scan function, and I still do that in a lot of cases, but I think the hstore -> map[string]string is a common enough use case that it merits a useful handler.

    Also, currently I can

    SELECT hstore_to_array(hstorecol) FROM TABLE;

    And just use a []string, but I still think a generic handler would be a nice addition for those that understand the cost.

    opened by flowchartsman 16
  • Copyfrom can't work

    I used the code from the linked example to test the CopyFrom function, but it doesn't work. The error is:

    ERROR: insufficient data left in message (SQLSTATE 08P01)

    The PostgreSQL version is: PostgreSQL 10.21 (Ubuntu 10.21-0ubuntu0.18.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit

    opened by zhanghaiyang9999 1
  • Why don't use `?` instead of `$1` `$2`

    The database/sql documentation encourages queries like

    err := db.QueryRow("SELECT name FROM foo WHERE id=?", id).Scan(&s)

    It would be a good idea to support this format.

    opened by mozhata 1
  • pgxpool: kill connections after max configured life time.

    We ran into issues with a web request erring out after 15 seconds because the underlying DB request took more than 15 seconds. Investigating the issue led me to believe the destructor should also respect the configured timeout.

    Looking at the commit that introduced this behavior, 39b096d0 ("pgxpool waits for connection cleanup to finish before making room in pool", 2020-08-20), I am unsure how to add a proper test.

    opened by stefanbeller 1
  • can't scan into dest[0]: cannot assign postgres date into custom time.Time type

    A similar issue to #1151:

    create table test ( date date );

    type Date time.Time
    var d Date
    err := row.Scan(&d) // err: can't scan into dest[0]: unable to assign to *dt.Date

    opened by liulinhuai 3
  • Make generic function to call Select Statement

    Hi guys, how can I turn this into a generic function that accepts any argument?

    func SelectFromDb[T model.ToDoItem](db DbImpl, ctx context.Context, sqlStatement string, args ...interface{}) ([]model.ToDoItem, error) {
    	dbPool := db.GetConnection()
    	defer dbPool.Close()
    	rows, err := dbPool.Query(ctx, sqlStatement, args...)
    	if rows != nil {
    		defer rows.Close()
    	}
    	if err != nil {
    		return nil, err
    	}
    	var results []model.ToDoItem
    	for rows.Next() {
    		var item model.ToDoItem
    		err = rows.Scan(
    			&item.ID, &item.Text, &item.IsDone, &item.CreatedOn, &item.UpdatedOn)
    		if err != nil {
    			return nil, err
    		}
    		results = append(results, item)
    	}
    	if rows.Err() != nil {
    		return nil, rows.Err()
    	}
    	return results, nil
    }

    This code works, but I'm not sure how to make it generic. This line is the tricky part: err = rows.Scan(&item.ID, &item.Text, &item.IsDone, &item.CreatedOn, &item.UpdatedOn). Thanks.

    opened by mingsterism 1
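
Several issues above concern `?` versus `$1` placeholders. A hedged sketch of a rewriter (not part of pgx) that converts `?` placeholders to PostgreSQL's numbered form, skipping single-quoted literals; it ignores dollar quoting, comments, and escaped quotes:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// rewritePlaceholders converts MySQL-style `?` placeholders to
// PostgreSQL's `$1`, `$2`, ... form, leaving text inside single-quoted
// string literals alone. A sketch only.
func rewritePlaceholders(sql string) string {
	var b strings.Builder
	n := 0
	inString := false
	for _, r := range sql {
		switch {
		case r == '\'':
			inString = !inString
			b.WriteRune(r)
		case r == '?' && !inString:
			n++
			b.WriteString("$" + strconv.Itoa(n))
		default:
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(rewritePlaceholders("SELECT name FROM foo WHERE id=? AND note='a?b' AND x=?"))
}
```
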
Jack Christensen
  • Go-MySQL-Driver - a MySQL driver for Go's (golang) database/sql package (Go SQL Drivers, 12.5k, Aug 1, 2022)
  • Qmgo - a Go driver for MongoDB, based on the official mongo-go-driver but easier to use, like mgo (Qiniu Cloud, 938, Aug 3, 2022)
  • pbpgx - a toolkit for easier Protocol Buffers interaction with PostgreSQL databases (Tim Möhlmann, 11, Jun 27, 2022)
  • Mongo Go Models (mgm) - a fast and simple MongoDB ODM for Go, based on the official Mongo Go Driver (kamva, 536, Jul 29, 2022)
  • gokv - simple key-value store abstraction and implementations for Go (Redis, Consul, etcd, bbolt, BadgerDB, LevelDB, Memcached, DynamoDB, S3, PostgreSQL, MongoDB, CockroachDB, and many more) (Philipp Gillé, 452, Aug 2, 2022)
  • your connection deserves a name - examples and code to assign a name to your MongoDB, MySQL, PostgreSQL, RabbitMQ, and redis connections (Andy Grunwald, 25, Feb 15, 2022)
  • lungo - a MongoDB-compatible embeddable database and toolkit for Go (Joël Gähwiler, 393, Aug 5, 2022)
  • Avatica Go - mirror of Apache Calcite's Avatica Go, a database/sql driver for the Avatica server (The Apache Software Foundation, 97, Jun 28, 2022)
  • firebirdsql - Firebird RDBMS SQL driver for Go; requires Firebird 2.5 or higher and Go 1.13 or higher (Hajime Nakagami, 173, Aug 1, 2022)
  • go-adodb - Microsoft ActiveX Object DataBase (ADODB) driver conforming to the built-in database/sql interface (mattn, 127, Jun 28, 2022)
  • Microsoft SQL Server driver - a pure Go MSSQL driver for Go's database/sql package; requires Go 1.8 or above (1.6k, Aug 3, 2022)
  • go-oci8 - Oracle database driver conforming to the Go database/sql interface (mattn, 591, Jul 27, 2022)
  • go-sqlite3 - sqlite3 driver for Go using database/sql; the latest stable version is v1.14 or later, not v2 (mattn, 5.9k, Aug 4, 2022)
  • godror - Go DRiver for ORacle, a database/sql/driver.Driver for connecting to Oracle DB (375, Jul 28, 2022)
  • gofreetds - Go FreeTDS wrapper; a native SQL Server database driver that can be used as a database/sql driver and handles calling stored procedures (minus5, 106, Jan 23, 2022)
  • pq - a pure Go Postgres driver for Go's database/sql package (7.5k, Jul 31, 2022)
  • Arangolite - a lightweight ArangoDB driver for Go that focuses on pure AQL querying (Fabien Herfray, 72, Jun 17, 2022)
  • RethinkDB-go - RethinkDB driver for Go, current version v6.2.1 (RethinkDB v2.4) (RethinkDB, 1.6k, Jul 17, 2022)
  • goriak - Go language driver for Riak KV, current version v3.2.1; Riak KV 2.0 or higher (Gustav Westling, 27, Jan 23, 2022)