Pure Go Postgres driver for database/sql

Overview

pq - A pure Go postgres driver for Go's database/sql package

Install

go get github.com/lib/pq

Features

  • SSL
  • Handles bad connections for database/sql
  • Scan time.Time correctly (i.e. timestamp[tz], time[tz], date)
  • Scan binary blobs correctly (i.e. bytea)
  • Package for hstore support
  • COPY FROM support
  • pq.ParseURL for converting URLs to connection strings for sql.Open (see the sketch after this list)
  • Many libpq compatible environment variables
  • Unix socket support
  • Notifications: LISTEN/NOTIFY
  • pgpass support
  • GSS (Kerberos) auth
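
A minimal sketch of opening a connection, assuming a locally running PostgreSQL server; the URL, credentials, database, and query below are placeholders, not defaults of this package:

    package main

    import (
        "database/sql"
        "log"

        "github.com/lib/pq" // registers the "postgres" driver; also provides ParseURL
    )

    func main() {
        // pq.ParseURL turns a postgres:// URL into a key/value connection string.
        dsn, err := pq.ParseURL("postgres://user:secret@localhost:5432/mydb?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }

        db, err := sql.Open("postgres", dsn)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        var now string
        if err := db.QueryRow("SELECT now()").Scan(&now); err != nil {
            log.Fatal(err)
        }
        log.Println("server time:", now)
    }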

Tests

go test is used for testing. See TESTS.md for more details.

Status

This package is effectively in maintenance mode and is not actively developed. Small patches and features are only rarely reviewed and merged. We recommend using pgx which is actively maintained.

Issues
  • Multiple "pq: unexpected describe rows response" errors

    We've been experiencing errors with lib/pq that eventually result in a state where no database connections in the pool are free. The maximum number of connections, as set by SetMaxOpenConns, is not reached.

    The errors normally start with db.Prepare:

    Could not prepare statement: pq: unexpected describe rows response: '3'
    

    We also see the unexpected response 'C'. These develop into multiple occurrences of

    sql: statement expects 0 inputs; got 4
    

    and similar errors on statements that have previously been prepared with no errors.

    To reiterate, we don't query any prepared statements that returned errors from Prepare. These errors are returned from queries on successfully prepared statements.

    Then we see repeated failures of Begin:

    Could not start a transaction: pq: unknown response for simple query: '1'
    Could not start a transaction: unexpected command tag 
    Could not start a transaction: pq: unexpected transaction status idle in transaction
    

    The only error in postgres around this time is:

    FATAL:  invalid frontend message type 90
    

    This happened around 20 seconds after the initial errors were seen in our application log.

    pprof indicates that all goroutines querying the database were netpolling:

    #   0x423886    netpollblock+0xa6                       /usr/local/src/go/src/pkg/runtime/netpoll.goc:280
    #   0x4231ea    net.runtime_pollWait+0x6a                   /usr/local/src/go/src/pkg/runtime/netpoll.goc:116
    #   0x695534    net.(*pollDesc).Wait+0x34                   /usr/local/src/go/src/pkg/net/fd_poll_runtime.go:81
    #   0x695590    net.(*pollDesc).WaitRead+0x30                   /usr/local/src/go/src/pkg/net/fd_poll_runtime.go:86
    #   0x696910    net.(*netFD).Read+0x2a0                     /usr/local/src/go/src/pkg/net/fd_unix.go:204
    #   0x6a5825    net.(*conn).Read+0xc5                       /usr/local/src/go/src/pkg/net/net.go:122
    #   0x48a1a0    bufio.(*Reader).fill+0x110                  /usr/local/src/go/src/pkg/bufio/bufio.go:91
    #   0x48a5a4    bufio.(*Reader).Read+0x1a4                  /usr/local/src/go/src/pkg/bufio/bufio.go:159
    #   0x46f566    io.ReadAtLeast+0xf6                     /usr/local/src/go/src/pkg/io/io.go:288
    #   0x46f6d1    io.ReadFull+0x71                        /usr/local/src/go/src/pkg/io/io.go:306
    #   0x4f5f9b    github.com/lib/pq.(*conn).recvMessage+0x10b         /home/matt/dev/go/src/github.com/lib/pq/conn.go:637
    #   0x4f6357    github.com/lib/pq.(*conn).recv1+0x27                /home/matt/dev/go/src/github.com/lib/pq/conn.go:690
    #   0x4f5242    github.com/lib/pq.(*conn).prepareToSimpleStmt+0x822     /home/matt/dev/go/src/github.com/lib/pq/conn.go:508
    #   0x4f4997    github.com/lib/pq.(*conn).prepareTo+0x87            /home/matt/dev/go/src/github.com/lib/pq/conn.go:486
    #   0x4f5670    github.com/lib/pq.(*conn).Prepare+0x120             /home/matt/dev/go/src/github.com/lib/pq/conn.go:539
    #   0x4e5199    database/sql.(*driverConn).prepareLocked+0x49           /usr/local/src/go/src/pkg/database/sql/sql.go:250
    #   0x4e7ce1    database/sql.(*DB).prepare+0xb1                 /usr/local/src/go/src/pkg/database/sql/sql.go:828
    #   0x4e7b9c    database/sql.(*DB).Prepare+0x5c                 /usr/local/src/go/src/pkg/database/sql/sql.go:808
    

    In a separate instance of unresponsiveness, all the goroutines were waiting on mutexes, e.g.:

    #   0x4241d0    sync.runtime_Semacquire+0x30                            /usr/local/src/go/src/pkg/runtime/sema.goc:199
    #   0x47f136    sync.(*Mutex).Lock+0xd6                             /usr/local/src/go/src/pkg/sync/mutex.go:66
    #   0x4e6dd2    database/sql.(*DB).conn+0x42                            /usr/local/src/go/src/pkg/database/sql/sql.go:616
    #   0x4e7c82    database/sql.(*DB).prepare+0x32                         /usr/local/src/go/src/pkg/database/sql/sql.go:823
    #   0x4e7bbc    database/sql.(*DB).Prepare+0x5c                         /usr/local/src/go/src/pkg/database/sql/sql.go:808
    

    The logged errors weren't present in that case, so these could be two totally different problems.

    We're running Ubuntu 12.04 and pg 9.2 with go 1.2.1, and we're not assuming a trouble-free network by any means.

    I appreciate this is somewhat vague and we don't have a reproducible test case (yet), but any pointers for further investigation would be appreciated. Let me know if further background info would be helpful.

    opened by mattrco 51
  • Add initial support for LISTEN/NOTIFY.

    This uses db.Query("LISTEN relname") and returns an infinite Rows object. That is not optimal, as a Rows.Next() call cannot be interrupted, but it should fit the API well (see the sketch after the list below).

    • It's somewhat ugly and shoe-horned, but seems to work.
    • I strived for minimum impact on the normal code path.
    • Only db.Query() or db.Prepare() will cause the new path to be taken. db.Exec() and Tx-specific code is unaffected. db.Exec() doesn't make sense as we need a return value, and we don't care about transactions.
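
    For reference, a minimal sketch of consuming notifications with the pq.Listener API the package provides today; the connection string and channel name are placeholders:

    package main

    import (
        "log"
        "time"

        "github.com/lib/pq"
    )

    func main() {
        dsn := "dbname=mydb sslmode=disable" // placeholder connection string

        listener := pq.NewListener(dsn, 10*time.Second, time.Minute,
            func(ev pq.ListenerEventType, err error) {
                if err != nil {
                    log.Println("listener event error:", err)
                }
            })
        if err := listener.Listen("relname"); err != nil {
            log.Fatal(err)
        }

        for n := range listener.Notify {
            if n == nil {
                continue // pq delivers nil after a dropped connection is re-established
            }
            log.Printf("notification on %q: %s", n.Channel, n.Extra)
        }
    }
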
    enhancement 
    opened by tommie 51
  • Add support for binary_mode

    Here's my work so far towards making #209 happen. It passes all tests with both binary_mode off and on, which suggests that we don't have enough tests.

    This functionality is split into three separate commits:

    1. Refactor writeBuf into something which can be used to send multiple messages in a single write() system call. This should reduce the overhead a bit when there are a lot of threads writing at the same time.
    2. Decoupling rows and stmt so that a rows object can live without having an associated stmt object. Needed for the single round trip mode.
    3. The implementation of binary_mode, where all []byte values are sent over and marked to be "binary", and everything else is sent over in text. Something like this is necessary since if we want to do only a single round-trip to the server per a call to *sql.Query(), we don't know the SQL types of the input parameters.

    There's still likely a lot of work to do:

    1. There's no documentation
    2. There are no additional tests
    3. This might break some driver.Valuers which return a byte slice that isn't actually valid as input for the type in binary mode. If we can adopt a general guideline of "string means text, []byte means binary" then things might work out great, but e.g. the allocation penalties of such a guideline are not clear to me. This also has some impact on the ongoing work for supporting arrays. Another approach might be to give up on binary mode altogether and instead always send everything over as text, but provide a special Valuer for bytea values (similar to how we're likely going to need separate values for bytea arrays). Both approaches would

    I'd really like to stress point 3 directly above; this is not the only way to achieve single roundtrip mode, and at the moment I'm not convinced it's the best one, either. The performance gains from using binary mode have been in my tests almost negligible (though I've mostly tested with bytea values, since this patch doesn't use binary mode for e.g. ints even though it theoretically could), so don't get too hung up on that.

    Any thoughts?

    opened by johto 48
  • Unchecked range error on slice.

    I've refactored my code to work around this, but there's an unchecked range error that sometimes raises a panic() when, for some reason, the list is empty:

    https://github.com/lib/pq/blob/master/conn.go#L736

    I've added a logging line, and typically I see something like this:

    === RUN Test_CreatingANewUserSuccessfully
    About to read index 0 of [25 25 1043 1043]
    About to read index 1 of [25 25 1043 1043]
    About to read index 2 of [25 25 1043 1043]
    About to read index 3 of [25 25 1043 1043]
    
    About to read index 0 of [2950]
    

    I have seen (but, after refactoring and tidying up on my side, can't now replicate) an issue where a panic is raised after output like:

    === RUN Test_CreatingANewUserSuccessfully
    About to read index 0 of []
    

    I don't quite understand how I could get an empty slice here; I'm passing something like:

    query := "INSERT INTO users (uuid, name, email, password_hash) VALUES (CASE WHEN $1::text = '' THEN uuid_generate_v4() ELSE $1::uuid END, $2, $3, $4) RETURNING uuid"
    err = store.tx.QueryRow(query, u.Uuid, u.Name, u.Email, password_hash).Scan(&uuid)
    if err != nil {
        return "", err
    }
    
    opened by leehambley 45
  • Incorrect "idle in transaction" on Tx.Commit()

    Hi there,

    I don't know if this has been reported before or isn't considered a bug. I searched the issues for something like it but didn't find anything.

    I encountered the following situation:

    • open a transaction
    • execute a query that returns rows
    • forget to close the rows
    • call Commit()
    • repeat

    The call to Commit() returns an error but the transaction actually gets committed (as can be checked via psql). The next call to Begin() then returns an error saying "pq: unexpected transaction status idle in transaction", even though the previous tx got committed and PostgreSQL thinks there's no idle-in-transaction connection either (ps output says "idle", not "idle in transaction").

    Maybe a little code:

    func Test(user string, age int, conn *sql.DB) (int, error) {
        tx, err := conn.Begin()
        if err != nil {
            return 0, err
        }
        rollback := false
    
        defer func() {
            if rollback {
                tx.Rollback()
            } else {
                err := tx.Commit()
                fmt.Println(err)
            }
        }()
    
        rows, err := tx.Query("INSERT INTO users(id, name, age) VALUES (DEFAULT, $1, $2) RETURNING (id)", user, age)
        if err != nil {
            rollback = true
            return 0, err
        }
    
        var id int
        rows.Next() // because I know there will only be one row
        err = rows.Scan(&id)
        if err != nil {
            rollback = true
            return 0, err
        }
    
        return id, nil
    }
    

    Call this function twice and the second call will fail even though the data from the first call reached the database correctly. If the connection really were idle in transaction the data could not be seen in the corresponding database table.

    Note: I know I should have used QueryRow (and I currently do), but that's how I stumbled over this. Inserting a rows.Close() call before returning works, too.
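
    To illustrate the workaround mentioned above, a sketch of the same query with the Rows closed before the deferred Commit runs (my sketch, not the reporter's code):

    rows, err := tx.Query("INSERT INTO users(id, name, age) VALUES (DEFAULT, $1, $2) RETURNING (id)", user, age)
    if err != nil {
        rollback = true
        return 0, err
    }
    defer rows.Close() // registered after the Commit defer, so it runs first and drains the result

    var id int
    if rows.Next() {
        if err := rows.Scan(&id); err != nil {
            rollback = true
            return 0, err
        }
    }
    return id, nil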

    But it looks like pq and PostgreSQL have differing ideas about what constitutes a successful commit. Btw, the call to Commit() returns "unexpected command tag INSERT" in this case.

    I believe this is a bug in pq or should at least be clarified in the documentation.

    opened by cuboci 39
  • Array Support

    I've been browsing the issues and PRs to see what the status of array support is. Along with a bunch of others, support is something I'd like to see.

    I'm curious as to what the consensus is as of now. Would it be nice for support to be baked into this package, or should it come from another package? Should it affect the encode/decode process, or simply be a set of scanners?

    To speak more generally, what should be supported, what cases should be handled, and what should be avoided?

    This is something I'm willing to work on; I just want to know what will be mergable.
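
    For context on where this eventually landed: the package now exposes pq.Array, which wraps slices for use both as query parameters and as Scan destinations. A minimal sketch, assuming db is an open *sql.DB and that the table and columns (made up here) exist:

    // Writing and reading a text[] column via pq.Array.
    _, err := db.Exec(`INSERT INTO posts (tags) VALUES ($1)`, pq.Array([]string{"go", "postgres"}))
    if err != nil {
        log.Fatal(err)
    }

    var tags []string
    if err := db.QueryRow(`SELECT tags FROM posts WHERE id = $1`, 1).Scan(pq.Array(&tags)); err != nil {
        log.Fatal(err)
    }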

    opened by erykwalder 30
  • Very slow on Windows

    I've run pq on both Linux (Ubuntu 12.10) and Windows 7. I've downloaded the latest version as of 15-Jan-2012.

    On Linux, no problems encountered, but I haven't done a lot of testing.

    On Windows, it is very slow: e.g. a simple SELECT on a small table takes (say) 2 seconds, and a minor change involving an update on one table and an insert on another, with COMMIT, takes 4.5 seconds.

    psql runs fine on Windows; no slowness detected.

    Running "go test" takes 0.321 seconds on Linux and 39.287 seconds on Windows. Subsequent runs take just under 0.100 seconds on Linux and just under 39 seconds on Windows.

    I had to alter conn_test.go (on Windows) because running the test gave repeated errors of 'pq: Role "ta10\Brian" does not exist'. I attempted to create role 'ta10\Brian', but psql gave an error. I created role 'Brian', but still got the error. Therefore, I altered conn_test.go as follows:

        ////conn, err := sql.Open("postgres", "")
        conn, err := sql.Open("postgres",
            "user=postgres dbname=postgres password=super")
    
    opened by brianoh 29
  • Add Scanner/Value implementations for Postgres network address types.

    An implementation of Scanner / Value types for Postgres network address types, intended to make it easier to work with these types. Includes the three current network address types: cidr, inet, and macaddr. Addresses issue #121.

    Some quick comments:

    1. The package name 'netaddr' was chosen to conform with Go recommended best practices. Happy to change this package name if there's a strong preference on the maintainer's part for another package name.
    2. I decided to give all of the types a consistent behavior with regards to NULL handling. Like the native value types (Bool, Int64, etc.) the corresponding structs have a 'Valid' member. When false, this indicates that the corresponding database value is NULL. Technically the native types for inet and macaddr support a nil value, and so for these cases a simpler struct could have been used. But I opted for a more consistent interface.
    3. It was unclear to me from the existing Hstore implementation whether an error in a Scan should result in a panic or the explicit return of an error. I chose to explicitly return an error rather than panic if Scan received a value that cannot be coerced to a byte array.

    Happy to discuss or revisit any of the above.
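
    As a rough illustration of the Scanner/Valuer shape under discussion (the type and field names below are made up for the sketch, not the PR's API, and real inet values can also carry a netmask that net.ParseIP does not handle):

    package netaddr // sketch only

    import (
        "database/sql/driver"
        "fmt"
        "net"
    )

    // Inet is a nullable wrapper for a Postgres inet value.
    type Inet struct {
        IP    net.IP
        Valid bool // Valid is false when the database value is NULL
    }

    // Scan implements sql.Scanner.
    func (i *Inet) Scan(src interface{}) error {
        if src == nil {
            *i = Inet{}
            return nil
        }
        b, ok := src.([]byte)
        if !ok {
            return fmt.Errorf("netaddr: cannot scan %T into Inet", src)
        }
        ip := net.ParseIP(string(b))
        if ip == nil {
            return fmt.Errorf("netaddr: invalid inet value %q", b)
        }
        *i = Inet{IP: ip, Valid: true}
        return nil
    }

    // Value implements driver.Valuer.
    func (i Inet) Value() (driver.Value, error) {
        if !i.Valid {
            return nil, nil
        }
        return i.IP.String(), nil
    }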

    opened by petergoldstein 27
  • Simplify error handling, add error codes

    This work simplifies and improves the custom error type(s). It partially reverts an earlier contribution.

    Primary change is moving to an Error type with fields corresponding to the underlying pq error fields.

    opened by tmc 27
  • No way to see an Insert error from QueryRow()

    I want to make an insert into a table with constraints. I also want to get the inserted id value if successful. The problem is that if QueryRow() with an INSERT ... RETURNING fails, the error is a rather unhelpful "no rows in result set".

    The error I want would be something like duplicate key violates unique constraint "user_email_unique", which is the error returned by Exec(); but with Exec() I don't get the last inserted id if everything goes well.
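
    The pattern being discussed, sketched as a small function (table and columns are placeholders; the point is that, per this issue, a constraint violation surfaced from Scan only as the generic no-rows error):

    func insertUser(db *sql.DB, email string) (int, error) {
        var id int
        err := db.QueryRow(
            `INSERT INTO users (email) VALUES ($1) RETURNING id`,
            email,
        ).Scan(&id)
        if err != nil {
            // A unique-constraint violation shows up here as
            // "sql: no rows in result set" rather than the underlying pq error.
            return 0, err
        }
        return id, nil
    }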

    opened by chris-baynes 27
  • implement ConnPrepareContext/StmtQueryContext/StmtExecContext interfaces

    See #1046 for more context. (pun intended)

    This is almost a line for line copy of #921 with test cases (thanks @kylejbrock). Please let me know what else needs to be done to get this merged.

    Also dropped ci testing of unsupported versions (9.5 and 9.4) per pg docs here and added ci testing of versions 11, 12, and 13.

    closes #921 closes #1046
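
    From the caller's side, a minimal sketch of what these interfaces back: with them implemented, the context passed to PrepareContext/QueryRowContext can be observed by the driver while the statement is prepared and executed, rather than only checked by database/sql around the call. The query and argument are placeholders, assuming db is an open *sql.DB:

    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()

    stmt, err := db.PrepareContext(ctx, `SELECT name FROM users WHERE id = $1`)
    if err != nil {
        log.Fatal(err)
    }
    defer stmt.Close()

    var name string
    if err := stmt.QueryRowContext(ctx, 1).Scan(&name); err != nil {
        log.Fatal(err)
    }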

    opened by michaelshobbs 26
  • input more err wsarecv: An existing connection was forcibly closed by the remote host.

    When I run an insert statement it usually doesn't report an error, but some SQL statements keep reporting an error and disconnecting. After I take the SQL out of the program, it executes normally on the client side, and I don't know what caused it. Here are some of my SQL cases: a normal multi-row insert such as insert into test(a,b,c,.....) values (1, 2, 3, 4, 'xxx',....), (1, 2, 3, 4, 'xxx',....) errors on commit, and what is even more surprising is that when the statement is changed to insert into test(a,b,c,.....) values (1, 2, 3, 4, 'ttt',....), (1, 2, 3, 4, 'ttt',....) it executes normally on the client side but still reports an error in the program.

    opened by 943885179 0
  • Use pointer receiver on pq.Error.Error()

    The library returns *pq.Error and not pq.Error.

    By using a value receiver, the library was documenting that consumers should expect returned error values to contain pq.Error.

    While *pq.Error implements all methods on pq.Error, *pq.Error is not assignable to pq.Error and so you can't type assert an error value into pq.Error if it actually contains *pq.Error.

    In particular, this is a problem with errors.As. The following if condition will always return false.

    var pqe pq.Error
    if errors.As(err, &pqe) {
      // Never reached as *pq.Error is not assignable to pqe.
      ...
    }
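
    The working counterpart, for contrast: target a *pq.Error instead (the Code field is just one example of what then becomes accessible):

    var pqe *pq.Error
    if errors.As(err, &pqe) {
        // Reached when err wraps a *pq.Error.
        fmt.Println("postgres error code:", pqe.Code)
    }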
    
    opened by nhooyr 0
  • conn: Implement driver.Validator, SessionResetter for cancelation

    Commit 8446d16b89 released in 1.10.4 changed how some cancelled query errors were returned. This has caused a lib/pq application I work on to start returning "driver: bad connection". This is because we were cancelling a query, after looking at some of the rows. This causes a "bad" connection to be returned to the connection pool.

    To prevent this, implement the driver.Validator and driver.SessionResetter interfaces. The database/sql/driver package recommends implementing them:

    "All Conn implementations should implement the following interfaces: Pinger, SessionResetter, and Validator"

    Add two tests for this behaviour. One of these tests passed with 1.10.3 but fails with newer versions. The other never passed, but does after this change.
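
    For reference, the shape of the two interfaces involved, shown on a hypothetical conn (a sketch of the idea, not the PR's actual code; the "bad" field stands in for whatever state tracking the driver uses):

    // IsValid reports whether the connection may be handed back out by the pool
    // (driver.Validator).
    func (cn *conn) IsValid() bool {
        return !cn.bad
    }

    // ResetSession is called before the connection is reused for a new request
    // (driver.SessionResetter).
    func (cn *conn) ResetSession(ctx context.Context) error {
        if cn.bad {
            return driver.ErrBadConn
        }
        return nil
    }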

    opened by evanj 0
  • Issue downloading auth/kerberos package

    I'm trying to import "github.com/lib/pq/auth/kerberos" but get the following error:

    no required module provides package github.com/lib/pq/auth/kerberos; to add it: go get github.com/lib/pq/auth/kerberos

    When I try and download, I get this error:

    go get github.com/lib/pq/auth/kerberos
    go: downloading github.com/lib/pq v1.10.4
    go: downloading github.com/lib/pq/auth/kerberos v0.0.0-20211108200635-8446d16b8935
    go get: module github.com/lib/pq@v1.10.4 found (v1.10.4), but does not contain package github.com/lib/pq/auth/kerberos

    go env is:

    GO111MODULE="on"
    GOARCH="amd64"
    GOBIN=""
    GOCACHE="/var/tmp/build-daloia/.gocache"
    GOENV="/v/global/user/d/da/daloia/.config/go/env"
    GOEXE=""
    GOFLAGS="-modcacherw"
    GOHOSTARCH="amd64"
    GOHOSTOS="linux"
    GOINSECURE=""
    GOMODCACHE="/var/tmp/build-daloia/go/pkg/mod"
    GONOPROXY=""
    GONOSUMDB=""
    GOOS="linux"
    GOPATH="/var/tmp/build-daloia/go"
    GOPRIVATE=""
    GOPROXY="https://daloia:AKCp5fUPDT[email protected]msde-docker-prod.ms.com/api/go/go-all"
    GOROOT="/ms/dist/go/PROJ/go/1.16.3/.exec/@sys"

    opened by mdaloia23 0
  • all: switch internal API's to use driver.NamedValue instead of driver.Value

    database/sql defaults to using the QueryContext and ExecContext APIs. Previously, we would need to allocate in order to convert the driver.NamedValue parameter that each of those accepts into a driver.Value. By using driver.NamedValue consistently internally, we can save allocations and improve performance.

    See #1067 for a full set of benchmarks.

    opened by kevinburke 0
  • Swap driver.Value for driver.NamedValue in internal APIs

    The new QueryContext and ExecContext APIs both take a driver.NamedValue instead of a driver.Value. Because pq internally uses driver.Value, the first thing that happens with both APIs is a copy:

    // Implement the "StmtExecContext" interface
    func (st *stmt) ExecContext(ctx context.Context, args []driver.NamedValue) (driver.Result, error) {
    	list := make([]driver.Value, len(args))
    	for i, nv := range args {
    		list[i] = nv.Value
    	}
    
    

    This means that every call to this function with arguments allocates. Note also that database/sql will use QueryContext if it exists, so every call from database/sql is going through that call path now:

    // queryDC executes a query on the given connection.
    // The connection gets released by the releaseConn function.
    // The ctx context is from a query method and the txctx context is from an
    // optional transaction context.
    func (db *DB) queryDC(ctx, txctx context.Context, dc *driverConn, releaseConn func(error), query string, args []interface{}) (*Rows, error) {
    	queryerCtx, ok := dc.ci.(driver.QueryerContext)
    	var queryer driver.Queryer
    	if !ok {
    		queryer, ok = dc.ci.(driver.Queryer)
    	}
    	if ok {
    		var nvdargs []driver.NamedValue
    		var rowsi driver.Rows
    		var err error
    		withLock(dc, func() {
    			nvdargs, err = driverArgsConnLocked(dc.ci, nil, args)
    			if err != nil {
    				return
    			}
    			rowsi, err = ctxDriverQuery(ctx, queryerCtx, queryer, query, nvdargs)
    		})
    

    If all of the pq internal APIs use driver.NamedValue instead of driver.Value, this saves an allocation in the most common case.

    The patch implemented here: https://github.com/kevinburke/pq/compare/named-value?expand=1 improves the PreparedSelect benchmarks by about 4% on my Mac (the rest of the results appear to be noise):

    name                                  old time/op    new time/op    delta
    BoolArrayScanBytes-10                    530ns ± 1%     530ns ± 1%    ~     (p=0.548 n=5+5)
    BoolArrayValue-10                       66.3ns ± 1%    66.7ns ± 0%    ~     (p=0.095 n=5+5)
    ByteaArrayScanBytes-10                   980ns ± 2%     976ns ± 1%    ~     (p=0.690 n=5+5)
    ByteaArrayValue-10                       279ns ± 2%     281ns ± 2%    ~     (p=0.310 n=5+5)
    Float64ArrayScanBytes-10                 960ns ± 1%     956ns ± 4%    ~     (p=0.310 n=5+5)
    Float64ArrayValue-10                     969ns ± 2%     966ns ± 1%    ~     (p=0.421 n=5+5)
    Int64ArrayScanBytes-10                   626ns ± 1%     624ns ± 1%    ~     (p=0.421 n=5+5)
    Int64ArrayValue-10                       483ns ± 2%     485ns ± 2%    ~     (p=0.841 n=5+5)
    Float32ArrayScanBytes-10                 941ns ± 1%     950ns ± 2%    ~     (p=0.246 n=5+5)
    Float32ArrayValue-10                     654ns ± 0%     663ns ± 2%  +1.36%  (p=0.016 n=5+5)
    Int32ArrayScanBytes-10                   623ns ± 1%     619ns ± 1%    ~     (p=0.246 n=5+5)
    Int32ArrayValue-10                       328ns ± 1%     333ns ± 1%  +1.53%  (p=0.008 n=5+5)
    StringArrayScanBytes-10                 1.36µs ±10%    1.33µs ± 1%    ~     (p=0.579 n=5+5)
    StringArrayValue-10                     2.51µs ± 2%    2.58µs ± 9%    ~     (p=0.421 n=5+5)
    GenericArrayScanScannerSliceBytes-10    2.62µs ± 1%    2.66µs ± 3%    ~     (p=0.095 n=5+5)
    GenericArrayValueBools-10                642ns ± 1%     647ns ± 1%    ~     (p=0.151 n=5+5)
    GenericArrayValueFloat64s-10            1.91µs ± 1%    1.89µs ± 1%    ~     (p=0.151 n=5+5)
    GenericArrayValueInt64s-10              1.09µs ± 0%    1.11µs ± 1%  +1.26%  (p=0.024 n=5+5)
    GenericArrayValueByteSlices-10          2.69µs ± 1%    2.71µs ± 2%    ~     (p=0.690 n=5+5)
    GenericArrayValueStrings-10             2.88µs ± 1%    2.89µs ± 0%    ~     (p=0.206 n=5+5)
    SelectString-10                         28.8µs ± 3%    28.6µs ± 1%    ~     (p=0.690 n=5+5)
    SelectSeries-10                         52.1µs ± 2%    52.0µs ± 1%    ~     (p=1.000 n=5+5)
    MockSelectString-10                      663ns ± 1%     665ns ± 2%    ~     (p=1.000 n=5+5)
    MockSelectSeries-10                     7.12µs ± 1%    7.12µs ± 0%    ~     (p=0.690 n=5+5)
    PreparedSelectString-10                 28.2µs ± 5%    27.0µs ± 1%  -4.47%  (p=0.008 n=5+5)
    PreparedSelectSeries-10                 45.1µs ± 1%    44.7µs ± 1%  -0.86%  (p=0.032 n=5+5)
    MockPreparedSelectString-10              336ns ± 1%     342ns ± 3%  +1.68%  (p=0.016 n=5+5)
    MockPreparedSelectSeries-10             6.77µs ± 1%    6.79µs ± 0%    ~     (p=0.310 n=5+5)
    EncodeInt64-10                          22.6ns ± 0%    22.7ns ± 1%    ~     (p=0.579 n=5+5)
    EncodeFloat64-10                        65.0ns ± 2%    64.6ns ± 1%    ~     (p=0.548 n=5+5)
    EncodeByteaHex-10                       78.5ns ± 2%    80.9ns ± 4%  +3.07%  (p=0.016 n=5+5)
    EncodeByteaEscape-10                     125ns ± 1%     125ns ± 1%    ~     (p=0.508 n=5+5)
    EncodeBool-10                           14.8ns ± 0%    14.9ns ± 1%    ~     (p=0.143 n=4+5)
    EncodeTimestamptz-10                     264ns ± 1%     264ns ± 1%    ~     (p=0.690 n=5+5)
    DecodeInt64-10                          32.5ns ± 1%    32.6ns ± 1%    ~     (p=0.548 n=5+5)
    DecodeFloat64-10                        43.7ns ± 1%    44.0ns ± 2%    ~     (p=0.595 n=5+5)
    DecodeBool-10                           2.56ns ± 0%    2.55ns ± 1%  -0.62%  (p=0.032 n=5+5)
    DecodeTimestamptz-10                     146ns ± 0%     147ns ± 1%    ~     (p=0.063 n=5+5)
    DecodeTimestamptzMultiThread-10          176ns ± 2%     179ns ± 3%    ~     (p=0.222 n=5+5)
    LocationCache-10                        37.1ns ± 1%    36.7ns ± 1%  -1.18%  (p=0.008 n=5+5)
    LocationCacheMultiThread-10              162ns ± 1%     160ns ± 1%  -1.11%  (p=0.008 n=5+5)
    ResultParsing-10                        3.81ms ± 0%    3.81ms ± 0%    ~     (p=0.730 n=4+5)
    _writeBuf_string-10                     1.58ns ± 1%    1.56ns ± 0%  -1.70%  (p=0.008 n=5+5)
    CopyIn-10                                309ns ± 4%     308ns ± 2%    ~     (p=0.889 n=5+5)
    AppendEscapedText-10                    2.29µs ± 1%    2.30µs ± 1%    ~     (p=0.548 n=5+5)
    AppendEscapedTextNoEscape-10            1.00µs ± 0%    1.01µs ± 0%  +0.54%  (p=0.024 n=5+5)
    DecodeUUIDBinary-10                     37.8ns ± 1%    38.0ns ± 1%    ~     (p=0.095 n=5+5)
    

    This patch improves performance on my rickover dequeue benchmark (github.com/kevinburke/rickover), which measures how fast I can get rows out of the database. I can try to get statistically significant results, but you can see it reduces the number of allocations and it's reasonable to assume that performance is also improved.

    $ benchstat /tmp/old /tmp/new
    name                 old time/op    new time/op    delta
    Dequeue/Dequeue1-10    8.00ms ±10%    7.74ms ± 4%   ~     (p=0.421 n=5+5)
    
    name                 old speed      new speed      delta
    Dequeue/Dequeue1-10   0.00B/s        0.00B/s        ~     (all equal)
    
    name                 old alloc/op   new alloc/op   delta
    Dequeue/Dequeue1-10    12.3kB ±13%    12.1kB ± 2%   ~     (p=0.690 n=5+5)
    
    name                 old allocs/op  new allocs/op  delta
    Dequeue/Dequeue1-10       160 ±13%       155 ± 1%   ~     (p=1.000 n=5+5)
    
    opened by kevinburke 0
Releases (v1.10.4)

Related projects
Go driver for PostgreSQL over SSH. This driver can connect to postgres on a server via SSH using the local ssh-agent, password, or private-key.

pqssh Go driver for PostgreSQL over SSH. This driver can connect to postgres on a server via SSH using the local ssh-agent, password, or private-key.

mattn 47 Mar 3, 2022
Microsoft ActiveX Object DataBase driver for go that using exp/sql

go-adodb Microsoft ADODB driver conforming to the built-in database/sql interface Installation This package can be installed with the go get command:

mattn 128 Jun 8, 2022
Oracle driver for Go using database/sql

go-oci8 Description Golang Oracle database driver conforming to the Go database/sql interface Installation Install Oracle full client or Instant Clien

mattn 593 Jun 29, 2022
sqlite3 driver for go using database/sql

go-sqlite3 Latest stable version is v1.14 or later not v2. NOTE: The increase to v2 was an accident. There were no major changes or features. Descript

mattn 5.8k Jun 22, 2022
Go Sql Server database driver.

gofreetds Go FreeTDS wrapper. Native Sql Server database driver. Features: can be used as database/sql driver handles calling stored procedures handle

minus5 106 Jan 23, 2022
Attach hooks to any database/sql driver

sqlhooks Attach hooks to any database/sql driver. The purpose of sqlhooks is to provide a way to instrument your sql statements, making really easy to

Gustavo Chaín 551 Jun 23, 2022
Qmgo - The Go driver for MongoDB. It‘s based on official mongo-go-driver but easier to use like Mgo.

Qmgo English | 简体中文 Qmgo is a Go driver for MongoDB . It is based on MongoDB official driver, but easier to use like mgo (such as the chain call). Qmg

Qiniu Cloud 913 Jun 21, 2022
SAP (formerly sybase) ASE/RS/IQ driver written in pure go

tds import "github.com/thda/tds" Package tds is a pure Go Sybase ASE/IQ/RS driver for the database/sql package. Status This is a beta release. This dr

Thomas 50 Apr 24, 2022
Mirror of Apache Calcite - Avatica Go SQL Driver

Apache Avatica/Phoenix SQL Driver Apache Calcite's Avatica Go is a Go database/sql driver for the Avatica server. Avatica is a sub-project of Apache C

The Apache Software Foundation 96 Jun 21, 2022
Firebird RDBMS sql driver for Go (golang)

firebirdsql (Go firebird sql driver) Firebird RDBMS http://firebirdsql.org SQL driver for Go Requirements Firebird 2.5 or higher Golang 1.13 or higher

Hajime Nakagami 171 Jun 1, 2022
Microsoft SQL server driver written in go language

A pure Go MSSQL driver for Go's database/sql package Install Requires Go 1.8 or above. Install with go get github.com/denisenkom/go-mssqldb . Connecti

null 1.6k Jun 23, 2022
GO DRiver for ORacle DB

Go DRiver for ORacle godror is a package which is a database/sql/driver.Driver for connecting to Oracle DB, using Anthony Tuininga's excellent OCI wra

null 368 Jun 29, 2022
PostgreSQL driver and toolkit for Go

pgx - PostgreSQL Driver and Toolkit pgx is a pure Go driver and toolkit for PostgreSQL. pgx aims to be low-level, fast, and performant, while also ena

Jack Christensen 5.6k Jun 22, 2022
Lightweight Golang driver for ArangoDB

Arangolite Arangolite is a lightweight ArangoDB driver for Go. It focuses on pure AQL querying. See AranGO for a more ORM-like experience. IMPORTANT:

Fabien Herfray 72 Jun 17, 2022
Go language driver for RethinkDB

RethinkDB-go - RethinkDB Driver for Go Go driver for RethinkDB Current version: v6.2.1 (RethinkDB v2.4) Please note that this version of the driver on

RethinkDB 1.6k Jun 20, 2022
goriak - Go language driver for Riak KV

goriak Current version: v3.2.1. Riak KV version: 2.0 or higher, the latest version of Riak KV is always recommended. What is goriak? goriak is a wrapp

Gustav Westling 27 Jan 23, 2022
Mongo Go Models (mgm) is a fast and simple MongoDB ODM for Go (based on official Mongo Go Driver)

Mongo Go Models Important Note: We changed package name from github.com/Kamva/mgm/v3(uppercase Kamva) to github.com/kamva/mgm/v3(lowercase kamva) in v

kamva 512 Jun 20, 2022
The MongoDB driver for Go

The MongoDB driver for Go This fork has had a few improvements by ourselves as well as several PR's merged from the original mgo repo that are current

GlobalSign 1.9k Jun 24, 2022
The Go driver for MongoDB

MongoDB Go Driver The MongoDB supported driver for Go. Requirements Installation Usage Bugs / Feature Reporting Testing / Development Continuous Integ

mongodb 6.7k Jun 20, 2022