PostgreSQL driver and toolkit for Go

pgx - PostgreSQL Driver and Toolkit

pgx is a pure Go driver and toolkit for PostgreSQL.

pgx aims to be low-level and fast, while also enabling PostgreSQL-specific features that the standard database/sql package does not allow for.

The driver component of pgx can be used alongside the standard database/sql package.

The toolkit component is a related set of packages that implement PostgreSQL functionality such as parsing the wire protocol and type mapping between PostgreSQL and Go. These underlying packages can be used to implement alternative drivers, proxies, load balancers, logical replication clients, etc.

The current release of pgx, v4, requires Go modules. To use the previous version, check out and vendor the v3 branch.

Example Usage

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v4"
)

func main() {
	conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close(context.Background())

	var name string
	var weight int64
	err = conn.QueryRow(context.Background(), "select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
	if err != nil {
		fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err)
		os.Exit(1)
	}

	fmt.Println(name, weight)
}

See the getting started guide for more information.

Choosing Between the pgx and database/sql Interfaces

It is recommended to use the pgx interface if:

  1. The application only targets PostgreSQL.
  2. No other libraries that require database/sql are in use.

The pgx interface is faster and exposes more features.

The database/sql interface only allows the underlying driver to return or receive the following types: int64, float64, bool, []byte, string, time.Time, or nil. Handling other types requires implementing the database/sql.Scanner and database/sql/driver.Valuer interfaces, which require transmission of values in text format. The binary format can be substantially faster, and it is what the pgx interface uses.


pgx supports many features beyond what is available through database/sql:

  • Support for approximately 70 different PostgreSQL types
  • Automatic statement preparation and caching
  • Batch queries
  • Single round-trip query mode
  • Full TLS connection control
  • Binary format support for custom types (allows for much quicker encoding/decoding)
  • Copy protocol support for faster bulk data loads
  • Extendable logging support including built-in support for log15adapter, logrus, zap, and zerolog
  • Connection pool with after-connect hook for arbitrary connection setup
  • Listen / notify
  • Conversion of PostgreSQL arrays to Go slice mappings for integers, floats, and strings
  • Hstore support
  • JSON and JSONB support
  • Maps inet and cidr PostgreSQL types to net.IPNet and net.IP
  • Large object support
  • NULL mapping to Null* struct or pointer to pointer
  • Supports database/sql.Scanner and database/sql/driver.Valuer interfaces for custom types
  • Notice response handling
  • Simulated nested transactions with savepoints


There are three areas in particular where pgx can provide a significant performance advantage over the standard database/sql interface and other drivers:

  1. PostgreSQL specific types - Types such as arrays can be parsed much quicker because pgx uses the binary format.
  2. Automatic statement preparation and caching - pgx will prepare and cache statements by default. This can provide a significant free improvement to code that does not explicitly use prepared statements. Under certain workloads, it can perform nearly 3x the number of queries per second.
  3. Batched queries - Multiple queries can be batched together to minimize network round trips.

Comparison with Alternatives

For prepared queries with small sets of simple data types, all drivers will have similar performance. However, if prepared statements aren't being explicitly used, pgx can have a significant performance advantage due to automatic statement preparation. pgx can also perform better when using PostgreSQL-specific data types or query batching. See go_db_bench for some database driver benchmarks.

Compatibility with database/sql

pq is exclusively used with database/sql. go-pg does not use database/sql at all. pgx supports database/sql as well as its own interface.

Level of access, ORM

go-pg is a PostgreSQL client and ORM. It includes many features that traditionally sit above the database driver, such as ORM, struct mapping, soft deletes, schema migrations, and sharding support.

pgx is "closer to the metal" and such abstractions are beyond the scope of the pgx project, which, first and foremost, aims to be a performant driver and toolkit.


Testing

pgx tests naturally require a PostgreSQL database. The tests will connect to the database specified in the PGX_TEST_DATABASE environment variable, which can be either a URL or a DSN. In addition, the standard PG* environment variables will be respected. Consider using direnv to simplify environment variable handling.

Example Test Environment

Connect to your PostgreSQL server and run:

create database pgx_test;

Connect to the newly-created database and run:

create domain uint64 as numeric(20,0);

Now, you can run the tests:

PGX_TEST_DATABASE="host=/var/run/postgresql database=pgx_test" go test ./...

In addition, there are tests specific to PgBouncer that will be executed if PGX_TEST_PGBOUNCER_CONN_STRING is set.

Supported Go and PostgreSQL Versions

pgx supports the same versions of Go and PostgreSQL that are supported by their respective teams. For Go that is the two most recent major releases and for PostgreSQL the major releases in the last 5 years. This means pgx supports Go 1.13 and higher and PostgreSQL 9.5 and higher.

Version Policy

pgx follows semantic versioning for the documented public API on stable releases. v4 is the latest stable major version.

PGX Family Libraries

pgx is the head of a family of PostgreSQL libraries. Many of these can be used independently. Many can also be accessed from pgx for lower-level control.

pgconn is a lower-level PostgreSQL database driver that operates at nearly the same level as the C library libpq.

pgxpool is a connection pool for pgx. pgx is entirely decoupled from its default pool implementation. This means that pgx can be used with a different pool or without any pool at all.

pgx/stdlib is a database/sql compatibility layer for pgx. pgx can be used as a normal database/sql driver, but at any time, the native interface can be acquired for more performance or PostgreSQL-specific functionality.

pgtype supports over 70 PostgreSQL types including uuid, hstore, json, bytea, numeric, interval, inet, and arrays. These types support database/sql interfaces and are usable outside of pgx. They are fully tested in pgx and pq. They also support a higher performance interface when used with the pgx driver.

pgproto3 provides standalone encoding and decoding of the PostgreSQL v3 wire protocol. This is useful for implementing very low level PostgreSQL tooling.

pglogrepl provides functionality to act as a client for PostgreSQL logical replication.

pgmock offers the ability to create a server that mocks the PostgreSQL wire protocol. This is used internally to test pgx by purposely inducing unusual errors. pgproto3 and pgmock together provide most of the foundational tooling required to implement a PostgreSQL proxy or MitM (such as for a custom connection pooler).

tern is a stand-alone SQL migration system.

pgerrcode contains constants for the PostgreSQL error codes.

3rd Party Libraries with PGX Support

scany is a library for scanning data from a database into Go structs and more.

  • pgx v5 beta 5 available for testing

    pgx v5 beta 5 available for testing

    I'm pleased to announce the first beta release of pgx v5 is available.

    The code is in branch v5-dev. There are a significant number of changes. Check out the changelog to see what's new.

    Try it with:

    go get github.com/jackc/pgx/v5@v5.0.0-beta.5

    If no major issues are reported I plan to release v5.0.0 in September.

    There have only been minor changes since v5.0.0-alpha.5.

    • Renamed pgxpool.NewConfig() to NewWithConfig()
    • Cancelling a context that caused the pgxpool to start establishing a connection will not interrupt it.
    • Added pgxpool.Pool.Reset() (and prerequisite puddle update)
    • Port sslpassword support in pgconn from pgx v4
    • Documentation updates
    • Miscellaneous bug fixes
    opened by jackc 37
  • Error codes helper function

    Error codes helper function

    How about adding a helper function like this:

    func ErrorCode(err error) (string, bool) {
    	pgerr, ok := err.(pgx.PgError)
    	if !ok {
    		return "", false
    	}
    	return pgerr.Code, true
    }

    The reason for this proposal is that I end up creating such a helper function in most of my codebases, and it seems reasonable to just include it in the pgx package itself.

    opened by MOZGIII 29
  • How to improve logging and tracing support in future release?

    How to improve logging and tracing support in future release?

    The original design goal for logging was simply to log what queries are being run and how long they take.

    There have been a few requests to support additional logging / tracing points or new ways of configuring what is currently logged (#853, #998). The current logging system works fine for me, but as I'm considering the possibility of a new major release I'd like to open the door to breaking changes to the logging / tracing interface.

    As I do not have a first-hand need for this I do not have any opinion on what these changes might be, and I would probably not implement them myself, but I wanted to bring up the possible opportunity for those who may have the interest and ability while there is a window open for possible breaking changes.

    opened by jackc 27
  • Database restarts are not handled gracefully

    Database restarts are not handled gracefully

    When the database is rebooted gracefully, it sends an 'E' (error) message meaning FATAL: terminating connection due to administrator command (SQLSTATE 57P01)

    lib/pq discards such connection upfront and the next execution attempts to create a new connection.

    pgx returns an error at the next execution, which is a wasted error, because typically the database has already rebooted, or a hot replica is already available.

    How can we make pgx realise such problems with the connection upfront, so that we don't waste a usage attempt on an error when we didn't need to?

    opened by ellulpatrick 27
  • Multi-Tenant Databases Using pgx

    Multi-Tenant Databases Using pgx

    Hello all,

    I hope this is not a question with an obvious answer, but I am a Go beginner interested in using Go and pgx to design a multi-tenant application with a Go REST API.

    This obviously means I need a way of segregating tenants' data - all my tables will have a tenant_id. Now let's say that I include a tenant_id in each user's JWT being passed to the REST API. This tenant_id gets passed down the stack until I get to the methods accessing the Postgres database.

    However, I don't want to append WHERE tenant_id = ? for every query, as this is error prone and makes for longer queries. To ensure that data are seamlessly dealt out to the correct tenant, I want to use Postgres's Row Level Security with a session context variable as outlined here. This means I need a way to set the user on every database connection. I want to use the stdlib to get back a database/sql connection to use with my ORM (the excellent SQLBoiler).

    To do this, all I could think of was using pgx's AfterConnect hook on a connection pool and passing the tenant_id contained in the JWT into it to set the session variable on the connection for that user. However, the parameters of AfterConnect won't allow that and I want to try and avoid using AfterConnect accessing some global variable to set it (with methods further up the chain setting the variable) as then all the logic is not in the method. The idea would be that the tenant_id would have to be passed to the AfterConnect method with every call to OpenFromConnPool to ensure that programmers can't miss setting the Postgres session variable. Is it worth adding an empty interface to the AfterConnect method for these sorts of use cases?

    I am sorry if this is a stupid question and I am more than open to other ideas that people have about solving this problem, as I am quite stuck.

    opened by fluffybonkers 24
  • Possible rxMsg bug

    Possible rxMsg bug


    I have been trying to track down a mysterious hang and I think it may be a pgx bug. Here is a stack trace of the goroutine that is hung:

    goroutine 97 [IO wait, 18 minutes]:
    net.(*pollDesc).Wait(...)
    net.(*pollDesc).WaitRead(...)
    net.(*netFD).Read(...)
    net.(*conn).Read(...)
    bufio.(*Reader).Read(...)
    io.(*LimitedReader).Read(...)
    io/ioutil.devNull.ReadFrom(...)
    io/ioutil.(*devNull).ReadFrom(...)
    io.Copy(...)
    io.CopyN(...)
    github.com/jackc/pgx.(*msgReader).rxMsg(...)
    github.com/jackc/pgx.(*Conn).rxMsg(...)
    github.com/jackc/pgx.(*Conn).Prepare(...)
    github.com/jackc/pgx.(*Conn).Query(...)
    github.com/jackc/pgx.(*ConnPool).Query(...)
    github.com/jackc/pgx.(*ConnPool).QueryRow(...)
    (*DB).QueryRow(...)
    main.selectApp(...)
    main.(*AppRepo).Get(...)
    ... (HTTP router and middleware frames elided) ...
    created by net/http.(*Server).Serve

    By the looks of it we tried to read when there were no bytes to read causing it to block on IO or something similar. Have you encountered this before and is there any debugging code I could insert to work out if this is pgx issue or application code issue? I can reliably reproduce.

    That said, it could still be a race condition because it has managed to get through my test without errors before but I can reproduce 9/10 times.

    opened by josephglanville 24
  • EOF Reading rows in TLS mode

    EOF Reading rows in TLS mode

    I noticed that pgx driver, when using TLS and reading a lot of rows would randomly go EOF. I am using PGX against Redshift, but I noticed a similar problem with Postgres too.

    opened by vincepri 21
  • Stack overflows on upgrading from 5.0.1 to 5.0.2

    Stack overflows on upgrading from 5.0.1 to 5.0.2

    After upgrading from 5.0.1 to 5.0.2, our application panics due to a stack overflow with the following stack trace:

    fatal error: stack overflow

    runtime stack:
    runtime.throw(...)
    	runtime/panic.go:1047
    	(frames elided)

    goroutine 320 [running]:
    fmt.(*fmt).padString(...)
    	fmt/format.go:108
    fmt.(*fmt).fmtS(...)
    	fmt/format.go:359
    fmt.(*pp).fmtString(...)
    	fmt/print.go:477
    fmt.(*pp).handleMethods(...)
    	fmt/print.go:651
    fmt.(*pp).printArg(...)
    	fmt/print.go:740
    fmt.(*pp).doPrintf(...)
    	fmt/print.go:1057
    fmt.Errorf(...)
    	fmt/errors.go:20
    pgtype.(*Map).Encode(...)
    	pgtype/pgtype.go:1958
    pgtype.(*Map).Encode(...)
    	pgtype/pgtype.go:1971
    pgtype.(*encodePlanDriverValuer).Encode(...)
    	pgtype/pgtype.go:1376
    pgtype.(*Map).Encode(...)
    	pgtype/pgtype.go:1974
    pgtype.(*encodePlanDriverValuer).Encode(...)
    	pgtype/pgtype.go:1393
    ... (*Map).Encode and (*encodePlanDriverValuer).Encode alternate until the stack overflows ...

    This happens in a large application and I have not yet debugged it to a point where I can create a small reproducer.

    opened by oschwald 19
  • handling dead connections and broken pipes

    handling dead connections and broken pipes

    Based on our experiences with this package, we have come up with what I consider a workaround for handling certain errors returned. In our apps, we use the ConnPool and therefore have to handle dead connections. More recently, we found that the underlying connection can report a broken pipe and EOF. We think these errors are recoverable by the app by re-acquiring a new connection.

    The function below is what I currently use in all Dao functions. Find below an example of this.

    func doWithConnection(pool PgxConnPool, maxRetries int, work func(conn *pgx.Conn) error) error {
        conn, err := pool.Acquire()
        if err == pgx.ErrDeadConn {
            if maxRetries > 0 {
                glog.Debugf("%v, try acquiring a non-dead connection", err)
                return doWithConnection(pool, maxRetries-1, work)
            }
        }
        // abort if still err
        if err != nil {
            return err
        }
        defer pool.Release(conn)
        // use the connection
        err = work(conn)
        // all ok?
        if err == nil {
            return nil
        }
        // if not, can we retry?
        canRetry := strings.HasSuffix(err.Error(), "broken pipe") ||
            strings.HasSuffix(err.Error(), "EOF")
        if canRetry && maxRetries > 0 {
            glog.Debugf("failed to do work with connection because:[%v], retry:%d", err, maxRetries)
            return doWithConnection(pool, maxRetries-1, work)
        }
        // give up
        return err
    }

    func (d PgPropertyDao) DeleteKey(del PropertyAccess) error {
        return doWithConnection(d.pool, 10, func(conn *pgx.Conn) error {
            _, err := conn.Exec("select * from delete_key_api($1,$2,$3,$4,$5);",
                /* arguments elided in the original */)
            if err != nil {
                glog.Errorf("DeleteKey -> unable to delete_key_api(%#v):%v", del, err)
            }
            return err
        })
    }
    The question is whether the pgx package should handle this internally and, if not, whether this is the correct way to do so.


    opened by emicklei 19
  • pgxpool: health check should create new connections before destroying

    pgxpool: health check should create new connections before destroying

    Instead of closing connections before creating new ones, this changes the health checker to create connections before closing them. It also will not close idle connections if that would put the pool below the minimum number of connections.

    I also added more stats to provide better visibility into the churn of the pool.

    Fixes #1164

    opened by jameshartig 18
  • Connection pool not discarding bad connections

    Connection pool not discarding bad connections

    I have run into a situation in which my postgres database is being restarted (not sure if it's a graceful restart or not), thus getting rid of all of the connections on the server side.

    The connection pool does not seem to be responding well to this. When I try to query using one of the bad connections, I receive a write TCP timeout error. The connection pool does not recognize this (even though it should be checking if the connection is alive before every query), and puts the connection back into the pool.

    I've dug through the code a bit and am not seeing any obvious problems that could be causing this, but I'm going to keep looking.

    Can someone please tell me if I should expect the connection pool to discard the connection upon receiving a TCP timeout error? I have resorted to detecting this error in my application and resetting the entire pool, but this is too heavy of a solution to be used permanently in my opinion.


    opened by mhartenbower 17
  • Custom dial func causes pgx to hang indefinitely

    Custom dial func causes pgx to hang indefinitely

    The Cloud SQL Go Connector works with pgx by providing an implementation of pgconn.DialFunc that creates a TLS 1.3 connection in a seamless way for callers.

    The connector wraps the resulting TLS connection in part to support a MySQL health check by adding an implementation of syscall.SyscallConn. The full details are here, but in short, the MySQL driver uses syscall.SyscallConn to do a zero byte read to ensure the connection is alive, and otherwise knows to recycle it. Even though the Go standard library advises that reading and writing to the underlying connection will corrupt the TLS session, this works because no reads or writes are actually performed.

    Meanwhile, pgx v5 has a new NetConn type that uses the syscall.SyscallConn interface to perform non-blocking reads and writes.

    When I try to upgrade the Go Connector to pgx v5, though, the reads and writes hang indefinitely, presumably because pgx is trying to use the syscall.SyscallConn interface which otherwise does not work given the connection is actually encrypted.

    The options for fixing this in the Go Connector seem to be:

    1. Drop support for the MySQL health check
    2. Don't use pgx v5
    3. Have pgx add a configuration value for opting out of the non-blocking reads and writes

    None of the above are ideal. Are there any other options here?

    opened by enocom 1
  • Add QueryExecMode to CopyFrom

    Add QueryExecMode to CopyFrom

    CopyFrom had to create a prepared statement to get the OIDs of the data types that were going to be copied into the table. Every COPY operation required an extra round trip to retrieve the type information. There was no way to customize this behavior.

    By leveraging the QueryExecMode feature, like in Conn.Query, users can specify if they want to cache the prepared statements, execute them on every request (like the old behavior), or bypass the prepared statement relying on the pgtype.Map to get the type information.

    The QueryExecMode behaves exactly as in Conn.Query with respect to how the data type OIDs are fetched, meaning that:

    • QueryExecModeCacheStatement: caches the statement.
    • QueryExecModeCacheDescribe: caches the statement description and assumes it does not change.
    • QueryExecModeDescribeExec: gets the statement description on every execution. This is like the old behavior of CopyFrom.
    • QueryExecModeExec: assumes the PostgreSQL query parameter types based on the Go types of the arguments, without preparing the statement.
    • QueryExecModeSimpleProtocol: uses the simple protocol, likewise assuming the parameter types from the Go types of the arguments.

    The QueryExecMode can be set via ConnConfig.DefaultQueryExecMode or as an optional argument to the CopyFrom function:

      []string{"a", "b", "c", "d", "e", "f", "g"},
    opened by alejandrodnm 3
  • Proposal: CollectOneRow and QueryRow should return an error if multiple rows are returned

    Proposal: CollectOneRow and QueryRow should return an error if multiple rows are returned

    Is your feature request related to a problem? Please describe.

    CollectOneRow() and QueryRow() return an error if no rows are returned by the query. However, if multiple rows are returned the first row is used and all other rows are ignored. These functions imply that the caller expects exactly one row in response. Multiple rows being returned is (almost?) always a bug.

    Describe the solution you'd like

    A new error (e.g. pgx.ErrTooManyRows) should be returned if more than one row is found. The first row would still be scanned or returned.

    Describe alternatives you've considered

    • New functions could be introduced with this behavior. e.g. CollectExactlyOneRow() or CollectOneRowStrict(). QueryRow would be left alone in this case.
    • A strict mode flag could be added to connection config that enabled / disabled this behavior. (I think this is probably a bad idea.)

    Additional context

    This is a behavior change so it's arguable whether this change should be allowed in a minor version change. But I can't think of any legitimate use for the existing behavior. I think it is always a bug and any breakage would actually be revealing that bug.

    Also, it's not clear to me whether, how, or if this behavior should cascade to database/sql (stdlib).

    opened by jackc 0
  • Add pgtype.Uint64 to support pg_lsn type

    Add pgtype.Uint64 to support pg_lsn type

    I'd like to add pgtype.Uint64 to support pg_lsn type.

    I read and noticed type LSN is defined at

    However I think it's more consistent to add pgtype.Uint64 since pgtype.Uint32 was added to replace CID, OID, OIDValue, and XID at

    My attempt to add pgtype.Uint64 to support pg_lsn type is at

    With this change, I can define a PgLSN type like below, just as described in the pgx v5 pgtype documentation under "Extending Existing Type Support":

    type PgLSN struct {
    	LSN   uint64
    	Valid bool
    }

    func (n *PgLSN) ScanUint64(v pgtype.Uint64) error {
    	*n = PgLSN{
    		LSN:   v.Uint64,
    		Valid: v.Valid,
    	}
    	return nil
    }

    func (n PgLSN) Uint64Value() (pgtype.Uint64, error) {
    	return pgtype.Uint64{
    		Uint64: n.LSN,
    		Valid:  true,
    	}, nil
    }

    func (src *PgLSN) String() string {
    	if !src.Valid {
    		return ""
    	}
    	return fmt.Sprintf("%X/%X", src.LSN>>32, uint32(src.LSN))
    }

    type PgLSNCodec struct{}

    func (PgLSNCodec) DecodeValue(tm *pgtype.Map, oid uint32, format int16, src []byte) (interface{}, error) {
    	if src == nil {
    		return nil, nil
    	}
    	var target PgLSN
    	scanPlan := tm.PlanScan(oid, format, &target)
    	if scanPlan == nil {
    		return nil, fmt.Errorf("PlanScan did not find a plan")
    	}
    	err := scanPlan.Scan(src, &target)
    	if err != nil {
    		return nil, err
    	}
    	return target, nil
    }
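    The String method relies on pg_lsn's textual form: the high and low 32-bit halves of the 64-bit WAL position printed in hex and joined by a slash. A standalone sketch of that formatting (formatLSN is just an illustrative helper, not part of pgtype):

```go
package main

import "fmt"

// formatLSN renders a 64-bit WAL position in pg_lsn's textual form:
// the high and low 32-bit halves printed in hex, separated by a slash.
func formatLSN(lsn uint64) string {
	return fmt.Sprintf("%X/%X", lsn>>32, uint32(lsn))
}

func main() {
	fmt.Println(formatLSN(0x16B374D848))
}
```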
    opened by hnakamur 6
  • cannot convert -infinity to Timestamptz

    cannot convert -infinity to Timestamptz

    I'm using CopyFrom to copy data from pgx.Rows retrieved from a query into a target table.

    When I tried copying into a table with a timestamp with time zone default '-infinity'::timestamp with time zone column, it fails with the message cannot convert -infinity to Timestamptz.

    query := fmt.Sprintf(CopyOutTableData, strings.Join(columns, ","), schema, table, strings.Join(keyColumns, ","), offset, limit)
    dataRows, err := sourceConn.Query(query)
    if err != nil {
    	resChan <- Response{CopyErr: err} // send before Fatalf, which exits the process
    	log.Fatalf("error - %s", err.Error())
    }
    log.Println("Start Copying chunk", offset)
    _, err = targetConn.CopyFrom(pgx.Identifier{schema, table}, columns, dataRows)
    if err != nil {
    	resChan <- Response{CopyErr: err}
    	log.Fatalf("error - %s", err.Error())
    }

    Is there any way around this?

    Thank you
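    One possible workaround, pending encoder support for infinite timestamps, is to rewrite the source query so '-infinity' values never reach CopyFrom. A minimal sketch; wrapInfinity, the column name, and the 'epoch' sentinel are all illustrative choices, not pgx API:

```go
package main

import "fmt"

// wrapInfinity rewrites a timestamptz column expression so '-infinity'
// is replaced by a finite sentinel before the rows reach CopyFrom.
// The sentinel 'epoch' (1970-01-01 UTC) is only an example; pick a value
// that cannot collide with real data.
func wrapInfinity(col string) string {
	return fmt.Sprintf(
		"CASE WHEN %s = '-infinity' THEN 'epoch'::timestamptz ELSE %s END AS %s",
		col, col, col)
}

func main() {
	fmt.Println(wrapInfinity("created_at"))
}
```

    The rewritten expression is then substituted for the raw column name when building the SELECT for the source side of the copy.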

    needs response 
    opened by tuxaanand 1
Jack Christensen
Go MySQL Driver is a MySQL driver for Go's (golang) database/sql package

Go-MySQL-Driver A MySQL-Driver for Go's database/sql package Features Requirements Installation Usage DSN (Data Source Name) Password Protocol Address

Go SQL Drivers 12.9k Dec 4, 2022
Qmgo - The Go driver for MongoDB. It's based on the official mongo-go-driver but easier to use, like Mgo.

Qmgo English | 简体中文 Qmgo is a Go driver for MongoDB . It is based on MongoDB official driver, but easier to use like mgo (such as the chain call). Qmg

Qiniu Cloud 1k Dec 5, 2022
Package pbpgx provides a toolkit for easier Protocol Buffers interaction with PostgreSQL databases.

PBPGX Package pbpgx provides a toolkit for easier Protocol Buffers interaction with PostgreSQL databases. Pbpgx supports the Protocol Buffer types gen

Tim Möhlmann 11 Jun 27, 2022
Mongo Go Models (mgm) is a fast and simple MongoDB ODM for Go (based on official Mongo Go Driver)

Mongo Go Models Important Note: We changed package name from Kamva to kamva in v

kamva 588 Nov 20, 2022
Simple key-value store abstraction and implementations for Go (Redis, Consul, etcd, bbolt, BadgerDB, LevelDB, Memcached, DynamoDB, S3, PostgreSQL, MongoDB, CockroachDB and many more)

gokv Simple key-value store abstraction and implementations for Go Contents Features Simple interface Implementations Value types Marshal formats Road

Philipp Gillé 493 Nov 28, 2022
Examples and code to assign a name to your MongoDB, MySQL, PostgreSQL, RabbitMQ, and redis connection.

your connection deserves a name ?? When your app interacts with an external system, assign a name to the connection. An external system in this contex

Andy Grunwald 28 Dec 6, 2022
A MongoDB compatible embeddable database and toolkit for Go.

lungo A MongoDB compatible embeddable database and toolkit for Go. Installation Example Motivation Architecture Features License Installation To get s

Joël Gähwiler 397 Nov 19, 2022
Mirror of Apache Calcite - Avatica Go SQL Driver

Apache Avatica/Phoenix SQL Driver Apache Calcite's Avatica Go is a Go database/sql driver for the Avatica server. Avatica is a sub-project of Apache C

The Apache Software Foundation 103 Nov 3, 2022
Firebird RDBMS sql driver for Go (golang)

firebirdsql (Go firebird sql driver) Firebird RDBMS SQL driver for Go Requirements Firebird 2.5 or higher Golang 1.13 or higher

Hajime Nakagami 184 Nov 25, 2022
Microsoft ActiveX Object DataBase driver for go that using exp/sql

go-adodb Microsoft ADODB driver conforming to the built-in database/sql interface Installation This package can be installed with the go get command:

mattn 132 Dec 5, 2022
Microsoft SQL server driver written in go language

A pure Go MSSQL driver for Go's database/sql package Install Requires Go 1.8 or above. Install with go get . Connecti

null 1.7k Dec 6, 2022
Oracle driver for Go using database/sql

go-oci8 Description Golang Oracle database driver conforming to the Go database/sql interface Installation Install Oracle full client or Instant Clien

mattn 595 Dec 4, 2022
sqlite3 driver for go using database/sql

go-sqlite3 Latest stable version is v1.14 or later not v2. NOTE: The increase to v2 was an accident. There were no major changes or features. Descript

mattn 6.2k Dec 6, 2022
GO DRiver for ORacle DB

Go DRiver for ORacle godror is a package which is a database/sql/driver.Driver for connecting to Oracle DB, using Anthony Tuininga's excellent OCI wra

null 406 Dec 3, 2022
Go Sql Server database driver.

gofreetds Go FreeTDS wrapper. Native Sql Server database driver. Features: can be used as database/sql driver handles calling stored procedures handle

minus5 106 Oct 19, 2022
Pure Go Postgres driver for database/sql

pq - A pure Go postgres driver for Go's database/sql package Install go get Features SSL Handles bad connections for database/sql S

null 7.7k Nov 29, 2022
Lightweight Golang driver for ArangoDB

Arangolite Arangolite is a lightweight ArangoDB driver for Go. It focuses on pure AQL querying. See AranGO for a more ORM-like experience. IMPORTANT:

Fabien Herfray 73 Sep 26, 2022
Go language driver for RethinkDB

RethinkDB-go - RethinkDB Driver for Go Go driver for RethinkDB Current version: v6.2.1 (RethinkDB v2.4) Please note that this version of the driver on

RethinkDB 1.6k Nov 22, 2022
goriak - Go language driver for Riak KV

goriak Current version: v3.2.1. Riak KV version: 2.0 or higher, the latest version of Riak KV is always recommended. What is goriak? goriak is a wrapp

Gustav Westling 29 Nov 22, 2022