Zero boilerplate database operations for Go



(Now compatible with MySQL and PostgreSQL!)

Everyone knows that performing simple database queries in Go takes numerous lines of repetitive code. If you want to avoid the cruft, you have two options: a heavy-duty ORM that is not up to the standard of Laravel or Django, or DBQ!

⚠️ WARNING: You will seriously reduce your database code to a few lines

Star the project to show your appreciation.

What is included

  • Supports ANY type of query
  • MySQL and PostgreSQL compatible
  • Convenient and Developer Friendly
  • Accepts any type of slice for query args
  • Flattens query arg slices to individual values
  • Bulk Insert seamlessly
  • Automatically unmarshal query results directly to a struct using the mapstructure package
  • Lightweight
  • Compatible with mysql-go for proper MySQL query cancelation
  • Automatically retry query with exponential backoff if operation fails
  • Transaction management (automatic rollback)


NOTE: For the mysql driver, the parseTime=true DSN setting can interfere with unmarshaling to civil.* types.


go get -u


Let's assume a table called users:

id | name  | age | created_at
---+-------+-----+-----------
 1 | Sally |  12 | 2019-03-01
 2 | Peter |  15 | 2019-02-01
 3 | Tom   |  18 | 2019-01-01


Q ordinarily returns []map[string]interface{} results, but you can automatically unmarshal to a struct. You will need to type assert the results.

type user struct {
  ID        int       `dbq:"id"`
  Name      string    `dbq:"name"`
  Age       int       `dbq:"age"`
  CreatedAt time.Time `dbq:"created_at"`
}

opts := &dbq.Options{ConcreteStruct: user{}, DecoderConfig: x} // x: your mapstructure DecoderConfig

results, err := dbq.Q(ctx, db, "SELECT * FROM users", opts)

// Alternatively, Qs unmarshals to a struct without an Options value:
results, err = dbq.Qs(ctx, db, "SELECT * FROM users", user{}, nil)


([]*main.user) (len=3) {
 (*main.user)({
  ID: (int) 1,
  Name: (string) (len=5) "Sally",
  Age: (int) 12,
  CreatedAt: (time.Time) 2019-03-01 00:00:00 +0000 UTC
 }),
 (*main.user)({
  ID: (int) 2,
  Name: (string) (len=5) "Peter",
  Age: (int) 15,
  CreatedAt: (time.Time) 2019-02-01 00:00:00 +0000 UTC
 }),
 (*main.user)({
  ID: (int) 3,
  Name: (string) (len=3) "Tom",
  Age: (int) 18,
  CreatedAt: (time.Time) 2019-01-01 00:00:00 +0000 UTC
 })
}

Query Single Row

If you know that the query will return at maximum 1 row:

result := dbq.MustQ(ctx, db, "SELECT * FROM users LIMIT 1", dbq.SingleResult)
if result == nil {
  // no result
} else {
  // result is a single row, not a slice
}

Bulk Insert

You can insert multiple rows at once.

db, _ := sql.Open("mysql", "user:password@tcp(localhost:3306)/db")

type Row struct {
  Name      string
  Age       int
  CreatedAt time.Time
}

users := []interface{}{
  dbq.Struct(Row{"Brad", 45, time.Now()}),
  dbq.Struct(Row{"Ange", 36, time.Now()}),
  dbq.Struct(Row{"Emily", 22, time.Now()}),
}
stmt := dbq.INSERTStmt("users", []string{"name", "age", "created_at"}, len(users))

dbq.E(ctx, db, stmt, nil, users)
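For intuition, dbq.INSERTStmt generates a multi-row statement with one placeholder group per row. The sketch below shows roughly what that SQL looks like; buildInsert is not part of dbq (the real function also knows about PostgreSQL-style $N placeholders):

```go
package main

import (
	"fmt"
	"strings"
)

// buildInsert sketches the kind of statement dbq.INSERTStmt produces:
// one "(?, ?, ...)" group per row, MySQL-style placeholders.
// Illustrative only — not dbq's actual implementation.
func buildInsert(table string, cols []string, rows int) string {
	group := "(" + strings.TrimSuffix(strings.Repeat("?,", len(cols)), ",") + ")"
	groups := make([]string, rows)
	for i := range groups {
		groups[i] = group
	}
	return fmt.Sprintf("INSERT INTO %s (%s) VALUES %s",
		table, strings.Join(cols, ","), strings.Join(groups, ","))
}

func main() {
	fmt.Println(buildInsert("users", []string{"name", "age", "created_at"}, 3))
	// INSERT INTO users (name,age,created_at) VALUES (?,?,?),(?,?,?),(?,?,?)
}
```

The flattened users slice from the example above supplies the 9 placeholder values in row order.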

Flatten Query Args

All slices are flattened automatically.

args1 := []string{"A", "B", "C"}
args2 := []interface{}{2, "D"}
args3 := dbq.Struct(Row{"Brad Pitt", 45, time.Now()})

results := dbq.MustQ(ctx, db, stmt, args1, args2, args3)

// Placeholder arguments will get flattened to:
results := dbq.MustQ(ctx, db, stmt, "A", "B", "C", 2, "D", "Brad Pitt", 45, time.Now())

NOTE: The FlattenArgs function can also be called directly for more general use.
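The flattening behavior can be sketched in plain Go (illustrative only — not dbq's actual implementation, which also understands dbq.Struct values):

```go
package main

import (
	"fmt"
	"reflect"
)

// flatten expands any slice argument into its individual elements,
// recursing so nested slices are fully flattened. Note this sketch
// would also flatten a []byte, which a real implementation may exempt.
func flatten(args ...interface{}) []interface{} {
	out := []interface{}{}
	for _, a := range args {
		v := reflect.ValueOf(a)
		if v.Kind() == reflect.Slice {
			for i := 0; i < v.Len(); i++ {
				out = append(out, flatten(v.Index(i).Interface())...)
			}
			continue
		}
		out = append(out, a)
	}
	return out
}

func main() {
	args1 := []string{"A", "B", "C"}
	args2 := []interface{}{2, "D"}
	fmt.Println(flatten(args1, args2)) // [A B C 2 D]
}
```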

MySQL cancelation

To properly cancel a MySQL query, you need to use the mysql-go package. dbq plays nicely with it.

import sql ""

pool, _ := sql.Open("user:password@tcp(localhost:3306)/db")

conn, err := pool.Conn(ctx)

opts := &dbq.Options{
  SingleResult: true,
  PostFetch: func(ctx context.Context) error {
    return conn.Close()
  },
}

result := dbq.MustQ(ctx, conn, "SELECT * FROM users LIMIT 1", opts)
if result == nil {
  // no result
} else {
  // result is a single row
}


Post-Unmarshaler

After fetching the results, you can further modify them by implementing the PostUnmarshaler interface. The PostUnmarshal method must be attached to the pointer of the struct.

type user struct {
  ID        int       `dbq:"id"`
  Name      string    `dbq:"name"`
  Age       int       `dbq:"age"`
  CreatedAt time.Time `dbq:"created_at"`
  HashedID  string    `dbq:"-"`          // Obfuscate ID
}

func (u *user) PostUnmarshal(ctx context.Context, row, total int) error {
  u.HashedID = obfuscate(u.ID)
  return nil
}

ScanFaster

The ScanFaster interface bypasses the reflect package when unmarshaling. If you don't need to perform fancy time conversions or interpret weakly typed data, it is more performant.

type user struct {
  ID       int    `dbq:"id"`
  Name     string `dbq:"name"`
}

func (u *user) ScanFast() []interface{} {
  return []interface{}{&u.ID, &u.Name}
}

Retry with Exponential Backoff

If the database operation fails, you can automatically retry with exponentially increasing intervals between each retry attempt. You can also set the maximum number of retries.

opts := &dbq.Options{
  RetryPolicy: dbq.ExponentialRetryPolicy(60 * time.Second, 3),
}

Transaction Management

You can conveniently perform numerous complex database operations within a transaction without having to worry about rolling back. Unless you explicitly commit, it will automatically rollback.

You have access to the Q and E functions, as well as the underlying tx, for performance purposes.

ctx := context.Background()
pool, _ := sql.Open("mysql", "user:password@tcp(localhost:3306)/db")

dbq.Tx(ctx, pool, func(tx interface{}, Q dbq.QFn, E dbq.EFn, txCommit dbq.TxCommit) {
  stmt := dbq.INSERTStmt("table", []string{"name", "age", "created_at"}, 1)
  _, err := E(ctx, stmt, nil, "test name", 34, time.Now())
  if err != nil {
    return // Automatic rollback
  }
  txCommit() // Commit
})

Custom Queries

The v2/x subpackage houses functions that perform custom SQL queries. Functions common to both MySQL and PostgreSQL live in the x subpackage; functions specific to MySQL or PostgreSQL (but not both) live in the x/mysql or x/pg subpackage respectively.

This is your package too!

If you want your own custom functions included, just submit a PR and place them in your own directory inside v2/x. As long as it compiles and is well documented, it is welcome.

Bulk Update

As a warmup, I have included a Bulk Update function that works with MySQL and PostgreSQL. It allows you to update thousands of rows in one query, without a transaction!

Difference between v1 and v2

When a ConcreteStruct is provided, in v1, the Q and MustQ functions return []interface{} while in v2 they return []*struct.
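In practice the difference is in how you consume the results. A standalone sketch (no database involved; the user struct is illustrative):

```go
package main

import "fmt"

type user struct {
	ID   int
	Name string
}

func main() {
	// v1 style: []interface{} of *user — every element access
	// needs its own type assertion.
	v1 := []interface{}{&user{1, "Sally"}, &user{2, "Peter"}}
	first := v1[0].(*user)
	fmt.Println(first.Name) // Sally

	// v2 style: one assertion on the whole result, then every
	// element is already typed.
	var results interface{} = []*user{{1, "Sally"}, {2, "Peter"}}
	users := results.([]*user)
	fmt.Println(users[1].Name) // Peter
}
```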

NOTE: v1 is obsolete and will no longer receive updates.

Other useful packages

  • dataframe-go - Statistics and data manipulation
  • electron-alert - SweetAlert2 for Electron Applications
  • igo - A Go transpiler with cool new syntax such as fordefer (defer for for-loops)
  • mysql-go - Properly cancel slow MySQL queries
  • react - Build front end applications using Go
  • remember-go - Cache slow database queries

Legal Information

The license is a modified MIT license. Refer to the LICENSE file for more details.

© 2019-20 PJ Engineering and Business Solutions Pty. Ltd.

Final Notes

Feel free to enhance features by issuing pull-requests. Note that the project is written in igo and transpiled into Go.

  • Congratulations and benchmarks project


    My happiest congratulations on this project! Very good!

    Can I ask you for an opinion on this benchmark project?

    What do you think about adding dbq too?

    It's very easy, just copy one of these files and edit it with dbq code.

    I found dbq today and I still need to understand it.

    Again, thanks for your amazing project!

    opened by frederikhors 29
  • Add support for PostgreSQL arrays



    PostgreSQL has support for arrays. For example, we have this table:

    CREATE TABLE example (
        bytes BYTEA -- byte array
    );

    In order to insert data into it, we need this query:

    INSERT INTO example (bytes) VALUES ({0, 1, 2, 3, 4});

    or this one:

    INSERT INTO example (bytes) VALUES (ARRAY[0, 1, 2, 3, 4]);

    Using dbq that code should've looked like:

    stmt := dbq.INSERT("example", []string{"bytes"}, 1, dbq.PostgreSQL)
    _, _ = dbq.E(context.Background(), db, stmt, nil, []byte{0, 1, 2, 3, 4})

    Expected behavior of this code

    It will create statement

    INSERT INTO example (bytes) VALUES ({0, 1, 2, 3, 4});

    which will correctly insert array into the table.

    Actual behavior

    It creates statement

    INSERT INTO example (bytes) VALUES (0, 1, 2, 3, 4);

    which is invalid and doesn't work because each array element is treated like separate field.

    I've tried wrapping byte array into []interface{} and even [][]interface{} - it doesn't resolve the issue, dbq still treats arrays as lists of fields.

    opened by Rirush 8
  • Postgres/pq bulk insert syntax issue



    	_ ""

    Code of issue:

    	psqlInfo := fmt.Sprintf("%s://%s:%s@%s:%s/%s?sslmode=%s",
    		"postgres", "postgres", "postgres", "localhost", "5432", "", "disable")
    	db, _ := sql.Open("postgres", psqlInfo)
    	type Row struct {
    		Username string
    		Password string
    	}
    	users := []interface{}{
    		dbq.Struct(Row{"John", "abc123"}),
    		dbq.Struct(Row{"Mark", "abc123"}),
    		dbq.Struct(Row{"David", "abc123"}),
    	}
    	stmt := dbq.INSERTStmt("accounts", []string{"username", "password"}, len(users))
    	_, err := dbq.E(ctx, db, stmt, nil, users) // <- always results in syntax error
    	if err != nil {
    		panic(err) // <- pq: syntax error at or near ","
    	}

    Output: pq: syntax error at or near ","

    Consistently results in a syntax error. If the cause is the above code, let me know 👍

    opened by KeithAlt 4
  • todo


    • ~add example of how Ph function works~
    • ~add example to readme of how postunmarshaler works. It is currently a "hidden" feature.~
    • ~update godocs links to v2~
    • ~Note that v1 is deprecated and no updates will be done to it~
    opened by pjebs 2
  • check performance difference


    Can you benchmark and check the performance difference between the speed-up branch (which will definitely be merged to master) and return-slice-of-struct branch (which will probably go to v2).

    The difference is that when you provide a ConcreteStruct, the speed-up branch returns []interface{} containing *struct whilst return-slice-of-struct returns []*struct

    return-slice-of-struct makes heavier use of the reflect pkg and is not backward compatible.

    opened by pjebs 0
  • Return slice of map string interface{}


    • this fixes an existing bug (as per documentation)
    • unfortunately it may break existing applications (but that is only because they did a workaround instead of complaining about the bug with respect to documentation)
    opened by rocketlaunchr-cto 0
  • TODO:


    For JSON columns, give option to unmarshal to map[string]interface{} or custom object.

    v3 Relations

    1. If struct contains field which is struct (pointer or concrete), then fetch.
    2. If struct contains field which is slice of struct, then fetch from join table
    3. Ability to escape sql injection protection: "Dangerous"
    opened by pjebs 10