Data access layer for PostgreSQL, CockroachDB, MySQL, SQLite and MongoDB with ORM-like features.

Comments
  • Make column name quoting optional or configurable

    While I understand why this happens by default, it makes a number of PostgreSQL queries impossible.

    Examples:

    • GroupBy function: GROUP BY count(x) will not work because count(x) gets wrapped in quotes. If I stick db.Raw in GroupBy I get the whole string in curly brackets instead of quotes, which still fails.
    • Array search: WHERE search_value = ANY(array_column) is likewise impossible, because either the search value or the function call gets double-quoted. (See the sketch below.)
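
    A hedged workaround sketch: db.Raw passes a fragment through without identifier quoting, so expressions survive intact. Table and column names here are hypothetical, and the exact behavior depends on the adapter and version in use:

    // db.Raw fragments are emitted verbatim, so count(x) is not quoted.
    q := sess.Select(db.Raw("count(x)")).
    	From("items").
    	GroupBy(db.Raw("count(x)"))
    
    // Array search: bind the value as an argument, keep ANY() unquoted.
    res := sess.SelectFrom("items").
    	Where(db.Raw("? = ANY(tags)", searchValue))
    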
    opened by c2h5oh 17
  • Prepared statements cache may grow up too much under certain situations

    There seems to be something odd with prepared statements + MySQL, which could mean we have another prepared statement leak.

    This is a followup of: https://github.com/upper/db/issues/270#issuecomment-261646680

    It would be ideal to find a query that reports the actual number of prepared statements on different databases, and add that query to our testing.
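
    For reference, both engines expose counters such a test could assert on; a sketch, assuming the check runs via the raw driver (note the scoping difference: MySQL's Prepared_stmt_count is server-wide, while pg_prepared_statements only covers the current session):

    // MySQL:      SHOW GLOBAL STATUS LIKE 'Prepared_stmt_count';
    // PostgreSQL: SELECT count(*) FROM pg_prepared_statements;
    drv := sess.Driver().(*sql.DB)
    
    var n int
    err := drv.QueryRow(`SELECT count(*) FROM pg_prepared_statements`).Scan(&n)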

    bug 
    opened by xiam 15
  • Support ON CONFLICT clause

    @xiam can you please finish this PR?

    I started simple -- INSERTs only, passing a string instead of db.Raw or something. (We should support UPDATE too.)
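
    Until the builder grows such a clause, raw SQL is the usual stopgap; a minimal sketch (table and columns hypothetical):

    // Upsert via a raw statement; works today without builder support.
    _, err := sess.Exec(
    	`INSERT INTO items (id, name) VALUES (?, ?)
    	 ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name`,
    	id, name,
    )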

    opened by VojtechVitek 13
  • Open() to connect to multiple db servers

    I really like the design of upper/db; it's super clean and minimal. However, I believe db.Open() should accept an array of settings objects. For example, it's common with MongoDB to pass a bunch of hosts that are in a cluster and let the client determine the master, etc. I presume MySQL / PostgreSQL would have something similar in a master/slave arrangement. Thoughts?

    opened by pkieltyka 13
  • Use id.ID() in save creation to work with uuids

    Since raw UUIDs are byte arrays, the query for the inserted item will fail. This makes it impossible to save a record with a UUID (string) primary key.

    The argument passed will be []interface {}{(*db.InsertResult)(0xc00041c070)} and the error will be "incorrect binary data format".

    bug v4 
    opened by kipply 12
  • Go get certificate expired?

    I'm currently getting this message when running dep ensure:

    ✗ unable to deduce repository and source type for "upper.io/db.v3": unable to read metadata: unable to fetch raw metadata: failed HTTP request to URL "http://upper.io/db.v3?go-get=1": Get https://upper.io/db.v3?go-get=1: x509: certificate has expired or is not yet valid

    Below is my Gopkg.toml:

    [[constraint]]
      version = "0.3"
      name = "github.com/thoas/go-funk"
    
    [[constraint]]
      name = "upper.io/db.v3"
      version = "v3.5.1"
    
    [[constraint]]
      version = "v0.7.2"
      name = "github.com/hypnoglow/oas2"
    
    [[constraint]]
      version = "v1.0.6"
      name = "github.com/sirupsen/logrus"
    
    [[constraint]]
      version = "v3.2.0"
      name = "github.com/dgrijalva/jwt-go"
    
    [[constraint]]
      version = "0.2"
      name = "github.com/google/uuid"
    
    [[constraint]]
      version = "v2.3.0"
      name = "github.com/pressly/goose"
    
    [[constraint]]
      name = "golang.org/x/crypto"
      revision = "614d502a4dac94afa3a6ce146bd1736da82514c6"
    
    [[constraint]]
      name = "github.com/tebeka/selenium"
      revision = "a49cf4b98a36c2b21b1ccb012852bd142d5fc04a"
    
    [[constraint]]
      name = "github.com/getsentry/raven-go"
      revision = "a9457d81ec91fa6d538567f14c6138e9ce5a37fb"
    
    [[constraint]]
      name = "github.com/dnaeon/go-vcr"
      revision = "aafff18a5cc28fa0b2f26baf6a14472cda9b54c6"
    
    [prune]
      go-tests = true
      unused-packages = true
    
    opened by erichulburd 12
  • Sqlgen to provide a nice builder interface for any query

    It would be cool if sqlgen were more like https://github.com/lann/squirrel -- we could use it to build queries internally, but also expose it for anyone who wants to write their own crazy raw query.
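
    For context, the SQL builder that eventually shipped covers much of this request; a sketch (table, columns, and variables here are illustrative):

    q := sess.Select("id", "name").
    	From("accounts").
    	Where("last_sign_in > ?", lastWeek).
    	OrderBy("-id").
    	Limit(10)
    
    err := q.All(&accounts)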

    opened by pkieltyka 11
  • cockroachdb support

    We should add support for https://github.com/cockroachdb/cockroach via our existing PostgreSQL work. CockroachDB uses the Postgres wire protocol and works with lib/pq, but also adds some SQL extensions of its own.
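
    Since CockroachDB speaks the Postgres wire protocol, a first experiment is simply pointing the postgresql adapter at a CockroachDB node; a hedged sketch (host, database, and user are hypothetical defaults):

    settings := postgresql.ConnectionURL{
    	Host:     "localhost:26257", // CockroachDB's default SQL port
    	Database: "defaultdb",
    	User:     "root",
    }
    
    sess, err := postgresql.Open(settings)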

    opened by pkieltyka 10
  • db: support for simple pagination and cursor-based pagination

    Pagination lets you split the results of a query into chunks containing a fixed number of items.

    This PR adds the same pagination tools to both db.Result and sqlbuilder.Selector.

    Simple pagination

    res = sess.Collection("posts").Paginate(20) // 20 results per page
    
    err = res.All(&posts) // First 20 results of the query
    
    err = res.Page(2).All(&posts) // Results from page 2 (limit 20, offset 40)
    

    Or with the SQL builder:

    q = sess.SelectFrom("posts").Paginate(20) 
    
    // same as above
    

    Cursor based pagination

    res = sess.Collection("posts").
      Paginate(20). // 20 results per page
      Cursor("id") // using id as cursor
    
    err = res.All(&posts) // First 20 results of the query
    
    // Get the next 20 results starting from the last item of the previous query.
    res = res.NextPage(posts[len(posts)-1].ID)
    err = res.All(&posts) // Results from page 1, limit 20, offset 20
    

    Other tools

    res = res.Paginate(23)
    
    totalNumberOfEntries, err = res.TotalEntries()
    
    totalNumberOfPages, err = res.TotalPages()
    

    The first page from a paginator is always 0.

    pending-release 
    opened by xiam 10
  • Multiple Where() calls overwrite each other in query builder

    I'm using v3.

    Example: db.Select("foo", "bar").From("table").Where("foo = 1").Where("bar = 2") will produce SELECT foo, bar FROM table WHERE bar = 2.
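
    On affected versions, a workaround is to combine the conditions into a single Where() call; a sketch (sess being the session):

    // Both conditions survive because they're grouped explicitly.
    q := sess.Select("foo", "bar").
    	From("table").
    	Where(db.And(
    		db.Cond{"foo": 1},
    		db.Cond{"bar": 2},
    	))
    // SELECT foo, bar FROM table WHERE (foo = 1 AND bar = 2)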

    opened by kyteague 10
  • Database defined default values don't update go values on save/update

    To get the primary key set on save, you just leave it empty and tag it with pk,omitempty - this works just fine.

    What you can't do is get the same thing to happen for fields that are not the primary key:

    Let's say I've got a struct:

    type data struct {
        Id        int64      `db:"id,pk,omitempty"`
        CreatedAt *time.Time `db:"created_at,omitempty"`
    }
    

    that gets saved to a table

    CREATE SEQUENCE data_id_seq
        START WITH 1
        INCREMENT BY 1
        NO MINVALUE
        NO MAXVALUE
        CACHE 1;
    
    CREATE TABLE data (
        id bigint DEFAULT nextval('data_id_seq'::regclass) NOT NULL,
        created_at timestamp DEFAULT now() NOT NULL
    );
    

    After saving, the data struct will have Id set to the next value of the DB sequence, but CreatedAt will remain nil even though it, too, was set to a default value defined in the DB.

    I think we need one more tag - dbprovided, dbdefault, or something along those lines - that adds those fields to the RETURNING part of the SQL query and makes the update happen.

    The example is for creation, but it can easily apply to updates as well; a workaround sketch follows.
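
    In the meantime, InsertReturning-style helpers can refresh the struct with database-computed values; a hedged sketch against the struct above (availability depends on the version in use):

    d := data{}
    
    // Asks the database to return the stored row and refreshes the
    // struct in place, picking up the id and created_at defaults.
    err := sess.Collection("data").InsertReturning(&d)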

    opened by c2h5oh 10
  • Roll back transaction on failed ExecContext

    Adds a call to sqlTx.Rollback() when an error is returned from compat.ExecContext in the SQLite adapter. This addresses #669 SQLite Adapter doesn't roll back transaction on error, which I ran into while writing a SQLite-backed web app.

    This seems to have also addressed https://github.com/upper/db/issues/636 - I was able to remove the case that skipped the test for SQLite.
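
    The general shape of the fix, sketched with plain database/sql rather than the adapter's actual code:

    tx, err := sqlDB.BeginTx(ctx, nil)
    if err != nil {
    	return err
    }
    
    if _, err := tx.ExecContext(ctx, query, args...); err != nil {
    	// Without this rollback the failed transaction keeps its lock,
    	// and SQLite reports "database is locked" on later writes.
    	_ = tx.Rollback()
    	return err
    }
    
    return tx.Commit()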

    opened by emm035 0
  • Sharing connections between goroutines

    Hi,

    We're building an API using net/http (server) and github.com/gorilla/mux (router) and have some difficulties understanding how to best manage open connections in an efficient way using upper/db for mysql.

    I believe the underlying database/sql package manages a pool of connections that each goroutine can borrow when needed and then release back to the pool for others to use. However, when running the following code in a stress test, we see some weird behavior.

    https://go.dev/doc/database/manage-connections

    According to your documentation, we should start a session (open an actual connection) and then specify pool configuration details like max open connections, max idle connections, and connection max idle time - which seems a bit odd to configure on an already-open session (connection).

    After the session (connection) has been established, it's assigned to the global DB variable and used throughout the application, including goroutines.

    The weird behavior is that under stress, upper sometimes logs slow queries - which is fine. However, each log statement includes information about the session in use, and all of our logs state that a given query was slow for the session with id 00001. Given that our max open connections is set to 10, I would expect to see different session IDs between 00001-00010. It seems like the same session is always being used rather than balanced across a pool of 10-15 connections.

    I've tried opening the connection (session) in each router handler (goroutine), but we don't want to open too many connections to the database.

    I know that something is wrong with my code and my understanding, but I haven't been able to find any good answers, so... I hope someone can help!

    import (
    	"fmt"
    	"log"
    	"net/http"
    	"time"
    
    	"github.com/gorilla/mux"
    	"github.com/upper/db/v4"
    	"github.com/upper/db/v4/adapter/mysql"
    )
    
    var DB db.Session
    
    func init() {
    	databaseSettings := mysql.ConnectionURL{
    		Host:     "host",
    		Database: "database",
    		User:     "user",
    		Password: "password",
    	}
    
    	session, err := mysql.Open(databaseSettings) // this actually opens a connection!
    
    	if err != nil {
    		log.Fatalln("Can't connect to database")
    	}
    
    	session.SetMaxOpenConns(10)
    	session.SetMaxIdleConns(5)                   
    	session.SetConnMaxIdleTime(time.Minute * 10)
    
    	DB = session
    }
    
    func main() {
    	router := mux.NewRouter()
    	router.HandleFunc("/path", Index).Methods("GET")
    
    	server := &http.Server{
    		Handler:      router,
    		Addr:         ":8080",
    		ReadTimeout:  time.Second * 5,
    		WriteTimeout: time.Second * 60 * 2,
    	}
    
    	if err := server.ListenAndServe(); err != nil {
    		log.Fatalln(err)
    	}
    }
    
    // This function is called within its own goroutine
    func Index(w http.ResponseWriter, r *http.Request) {
    	w.WriteHeader(http.StatusOK)
    
    	// Don't mind the actual query, it's just to show the connection in use
    	iterator := DB.SQL().Select("table.column").From("table").Iterator()
    
    	defer iterator.Close()
    
    	// Scan and more etc.
    
    	fmt.Fprintf(w, "OK")
    }
    
    opened by ChristianGerdes 0
  • argument not replaced when inside subquery

    I have a query in this form

    SELECT ...
    FROM  ...
    JOIN (SELECT ...
              FROM ...
              WHERE ... IN ? )
    ....
    

    I've tried running it with Query(argSlice) and even with a prepared statement, but the ? is never replaced with the contents of argSlice. I also tried building the subquery with chained methods, with no success.

    Is there a better way of doing this?

    Thanks
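
    One hedged alternative: build the inner query with the builder so its placeholders are bound per call, assuming the version in use accepts a Selector as a condition argument (all names hypothetical):

    // The slice expands into the IN (...) list.
    sub := sess.Select("account_id").
    	From("orders").
    	Where("status IN ?", []string{"open", "pending"})
    
    // The inner Selector is embedded as a subquery.
    q := sess.SelectFrom("accounts").
    	Where("id IN ?", sub)
    
    err := q.All(&accounts)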

    opened by rafaelvanoni 0
  • SQLite Adapter doesn't roll back transaction on error

    In the SQLite adapter's StatementExec method, the opened transaction doesn't get rolled back if compat.ExecContext returns an error. This came up in a personal project after hitting a constraint violation - all further write calls to the DB would return database is locked errors until I restarted the application.

    I was able to reproduce the behavior using this test:

    db_test.go
    package db_test
    
    import (
    	"path/filepath"
    	"testing"
    
    	"github.com/mattn/go-sqlite3"
    	"github.com/upper/db/v4"
    	"github.com/upper/db/v4/adapter/sqlite"
    )
    
    type A struct {
    	A int64 `db:"a"`
    }
    
    type B struct {
    	B int64 `db:"b"`
    }
    
    func checkErr(t *testing.T, err error) {
    	if err != nil {
    		t.Fatal(err)
    	}
    }
    
    func TestDatabaseIsLocked(t *testing.T) {
    	dbFile := filepath.Join(t.TempDir(), "test.db")
    
    	// Open DB
    	sess, err := sqlite.Open(sqlite.ConnectionURL{
    		Database: dbFile,
    		Options: map[string]string{
    			"foreign_keys": "on",
    		},
    	})
    	checkErr(t, err)
    	defer sess.Close()
    
    	// Set up scenario
    	_, err = sess.SQL().Exec("CREATE TABLE IF NOT EXISTS table1 (a INTEGER NOT NULL PRIMARY KEY)")
    	checkErr(t, err)
    	_, err = sess.Collection("table1").Insert(&A{1})
    	checkErr(t, err)
    	_, err = sess.SQL().Exec("CREATE TABLE IF NOT EXISTS table2 (b INTEGER NOT NULL PRIMARY KEY)")
    	checkErr(t, err)
    	_, err = sess.Collection("table2").Insert(&B{2})
    	checkErr(t, err)
    
    	// Trigger constraint violation on table 1
    	_, err = sess.Collection("table1").Insert(&A{1})
    	if err.(sqlite3.Error).Code != sqlite3.ErrConstraint {
    		panic(err)
    	}
    
    	// Run tests
    	t.Run("Read from table 1", testReadTable1(sess))
    	t.Run("Read from table 2", testReadTable2(sess))
    	t.Run("Insert into table 1", testInsertTable1(sess))
    	t.Run("Insert into table 2", testInsertTable2(sess))
    }
    
    func testReadTable1(sess db.Session) func(t *testing.T) {
    	return func(t *testing.T) {
    		err := sess.Collection("table1").Find().One(new(A))
    		if err != nil {
    			t.Error(err)
    		}
    	}
    }
    
    func testReadTable2(sess db.Session) func(t *testing.T) {
    	return func(t *testing.T) {
    		err := sess.Collection("table2").Find().One(new(B))
    		if err != nil {
    			t.Error(err)
    		}
    	}
    }
    
    func testInsertTable1(sess db.Session) func(t *testing.T) {
    	return func(t *testing.T) {
    		_, err := sess.Collection("table1").Insert(&A{3})
    		if err != nil {
    			t.Error(err)
    		}
    	}
    }
    
    func testInsertTable2(sess db.Session) func(t *testing.T) {
    	return func(t *testing.T) {
    		_, err := sess.Collection("table2").Insert(&B{3})
    		if err != nil {
    			t.Error(err)
    		}
    	}
    }
    
    

    with the following output:

    ~/src/emm035/playground $ go test -v -count 1 .
    --- FAIL: TestActions (20.81s)
        --- FAIL: TestActions/Insert_into_table_1 (10.39s)
            db_test.go:85: database is locked
        --- FAIL: TestActions/Insert_into_table_2 (10.41s)
            db_test.go:94: database is locked
    FAIL
    FAIL    github.com/emm035/playground  23.884s
    FAIL
    

    Adding a rollback statement to the error handling branch here seemed to stop the locking in my application, and the test passes:

    ~/src/emm035/playground $ go test -v -count 1 .
    ok      github.com/emm035/playground  0.114s
    

    I'd be more than happy to submit a PR with the change, but wanted to make sure this is fixing the actual problem and not just obscuring another issue 🙂

    opened by emm035 0
Releases
  • v4.6.0-rc1(Aug 30, 2022)

  • v4.6.0(Sep 4, 2022)

  • v4.5.0(Jan 3, 2022)

    Release notes

    The github.com/lib/pq package that we used as the driver for PostgreSQL and CockroachDB entered maintenance mode, and its maintainers recommend switching to github.com/jackc/pgx. We followed that recommendation: this release uses github.com/jackc/pgx as the driver for both PostgreSQL and CockroachDB.

    For some codebases the update will be completely transparent, but other cases might require slight modifications, especially code that uses custom field types. If you're upgrading to v4.5.0, make sure you're either using the special types with driver.Value and sql.Scanner methods provided by the github.com/upper/db/v4/adapter/postgresql package, or providing your own.

    If the change is too complicated, you can keep using github.com/lib/pq for a while by including the pgx build constraint along with your other build tags. See:

    go build -tags=pgx ...
    

    However, this is a temporary solution; support for the github.com/lib/pq driver will be definitively removed on February 28th, 2022.

    Upgrading your codebase

    If you upgraded to v4.5.0 and your code no longer compiles, or it throws runtime errors around conversions between Go types and database types, you'll have to tweak it a bit. Basically, you'll have to make sure you're providing scanners and valuers for all your types. Basic Go types like int, string, bool, or even slices of basic types won't require any special handling, but some other types like *[]int or []uint will.

    For instance, if your struct looks like this:

    type Foo struct {
    	ID     uint64    `db:"id"`
    	Names  *[]string `db:"names"`
    }
    

    you'll have to convert it into:

    import (
    	"github.com/upper/db/v4/adapter/postgresql"
    )
    
    type Foo struct {
    	ID      uint64                  `db:"id"` // no need to be updated
    	Names   *postgresql.StringArray `db:"names"` // changed
    }
    

    The table below has some of the most common equivalences between old types and new types:

    | Instead of | Use |
    |:-----------|-----|
    | *[]string | *postgresql.StringArray |
    | *[]int64 | *postgresql.Int64Array |
    | *[]float64 | *postgresql.Float64Array |
    | *[]float32 | *postgresql.Float32Array |
    | *[]bool | *postgresql.BoolArray |
    | *map[string]interface{} | *postgresql.JSONBMap |
    | *[]interface{} | *postgresql.JSONBArray |

    If you're using custom types that are not provided in the table above, like map[string]bool, you'll have to provide your own driver.Value and sql.Scanner methods.

    If the destination type is JSONB, you can use postgresql.JSONBValue and postgresql.ScanJSONB to make those conversions easier; see the example below:

    package foo
    
    import (
    	"database/sql/driver"
    
    	"github.com/upper/db/v4/adapter/postgresql"
    )
    
    type CustomJSONB map[string]bool
    
    func (c CustomJSONB) Value() (driver.Value, error) {
    	return postgresql.JSONBValue(c)
    }
    
    func (c *CustomJSONB) Scan(src interface{}) error {
    	return postgresql.ScanJSONB(c, src)
    }
    

    You can also use the *postgresql.JSONBConverter type to add automatic conversions to some types, like this:

    package foo
    
    import (
    	"database/sql"
    	"database/sql/driver"
    
    	"github.com/upper/db/v4/adapter/postgresql"
    )
    
    type CustomJSONB struct {
    	N string  `json:"name"`
    	V float64 `json:"value"`
    
    	*postgresql.JSONBConverter
    }
    
    type CustomJSONBObjectArray []CustomJSONB
    
    func (CustomJSONBObjectArray) ConvertValue(in interface{}) interface {
    	sql.Scanner
    	driver.Valuer
    } {
    	return &postgresql.JSONB{in}
    }
    
    type CustomJSONBObjectMap map[string]CustomJSONB
    
    func (c CustomJSONBObjectMap) Value() (driver.Value, error) {
    	return postgresql.JSONBValue(c)
    }
    
    func (c *CustomJSONBObjectMap) Scan(src interface{}) error {
    	return postgresql.ScanJSONB(c, src)
    }
    

    Thanks for using db/v4!

    What's Changed

    • Replace lib/pq with pgx by @xiam in https://github.com/upper/db/pull/627
    • Fix sqlite.New panic by @sdvcrx in https://github.com/upper/db/pull/635
    • Fix condition that forced the statement cache to be skipped by @xiam in https://github.com/upper/db/pull/638

    New Contributors

    • @sdvcrx made their first contribution in https://github.com/upper/db/pull/635

    Acknowledgements

    Special thanks to @pkieltyka and Horizon Blockchain Games for their continuous support over the years! Thank you very much!

    Full Changelog: https://github.com/upper/db/compare/v4.2.1...v4.5.0

  • v4.5.0-rc3(Jan 3, 2022)

  • v4.5.0-rc2(Jan 3, 2022)

  • v4.2.1(Jul 13, 2021)

  • v4.2.0(Jun 29, 2021)

    The most relevant change in this release is the behaviour of Save(): it attempts to update a record if and only if the record already exists; otherwise it attempts to create it. A sketch follows.
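
    A hedged sketch of that behaviour through v4's db.Record interface (struct, table, and field names here are hypothetical):

    type Book struct {
    	ID    uint   `db:"id,omitempty"`
    	Title string `db:"title"`
    }
    
    // Store makes *Book satisfy db.Record so it can be passed to sess.Save.
    func (b *Book) Store(sess db.Session) db.Store {
    	return sess.Collection("books")
    }
    
    book := Book{Title: "Walden"}
    err := sess.Save(&book) // record does not exist yet: INSERT
    
    book.Title = "Walden; or, Life in the Woods"
    err = sess.Save(&book) // record now exists: UPDATE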

  • v3.6.4(Feb 26, 2020)

  • v3.6(Aug 14, 2019)

  • v3.5.7(Dec 27, 2018)

  • v3.5.6(Dec 10, 2018)

  • v3.5.5(Sep 25, 2018)

  • v3.5.4(Aug 29, 2018)

  • v3.5.0(Oct 2, 2017)

    Changelog for v3.5.0

    Support for comparison operations

    Starting with 3.5.0 you can use comparison operators that are compatible across different database engines; some examples:

    // foo = "bar"
    db.Cond{
      "foo": db.Eq("bar"),
    }
    
    // foo <= 15
    db.Cond{
      "foo": db.Lte(15),
    }
    
    // foo >= 17
    db.Cond{
      "foo": db.Gte(17),
    }
    
    // foo BETWEEN dateA AND dateB
    db.Cond{
      "foo": db.Between(dateA, dateB),
    }
    
    // foo IS NOT NULL
    db.Cond{
      "foo": db.IsNotNull(),
    }
    
    // foo -> 45
    db.Cond{
      "foo": db.Op("->", 45),
    }
    

    This is the full list of comparison functions:

    db.Eq(interface{})
    db.NotEq(interface{})
    
    db.Gte(interface{})
    db.Gt(interface{})
    
    db.Lte(interface{})
    db.Lt(interface{})
    
    db.Between(interface{}, interface{})
    db.NotBetween(interface{}, interface{})
    
    db.In(interface{})
    db.NotIn(interface{})
    
    db.After(time.Time)
    db.Before(time.Time)
    db.OnOrAfter(time.Time)
    db.OnOrBefore(time.Time)
    
    db.Is(interface{})
    db.IsNot(interface{})
    
    db.IsNull()
    db.IsNotNull()
    
    db.Like(string)
    db.NotLike(string)
    
    db.RegExp(string)
    db.NotRegExp(string)
    
    db.Op(string, interface{})
    

    The old syntax (db.Cond{"column operator": value}) will continue to be supported alongside the new one; no code changes are required to upgrade from 3.4.x. A side-by-side example follows.
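
    Side by side, the two spellings of the same condition (a sketch; col and startDate are illustrative):

    // Old syntax, still supported:
    resOld := col.Find(db.Cond{"created_at >=": startDate})
    
    // New, cross-engine syntax:
    resNew := col.Find(db.Cond{"created_at": db.Gte(startDate)})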

    Thanks

    Thank you for using upper-db!

  • v3.4.0(Aug 19, 2017)

    Changelog for 3.4.0

    Allow escaping ? marks inside queries

    You can now escape ? marks inside queries by using ??:

    // ...WHERE someColumn ? "Some Value"
    col.Find().Where(`someColumn ?? ?`, "Some Value")
    

    PostgreSQL: jsonb tag and JSONB values

    Fields tagged with stringarray and int64array will no longer emit a deprecation message, as []string and []int64 are converted automatically to the corresponding PostgreSQL array types, like before. We recommend removing those tags anyway.

    Fields tagged with jsonb will still emit an error when used: no matter their type, they all need to satisfy sql.Scanner and driver.Valuer (or sqlbuilder.ScannerValuer).

    We know that providing all those types with Scan(interface{}) error and Value() (driver.Value, error) by hand can be annoying; that's why we're providing this guide to common cases, hoping it helps you upgrade your codebase:

    Structs

    If your struct looks like:

    type MyModel struct {
      ...
      MyField CustomStruct `db:"my_field,jsonb"`
    }
    
    type CustomStruct struct {
      Foo string `db:"foo"`
      ...
    }
    

    Change it into:

    // 1. Import "upper.io/db.v3/postgresql"
    import "upper.io/db.v3/postgresql"
    
    type MyModel struct {
      ...
      MyField CustomStruct `db:"my_field,jsonb"`
    }
    
    type CustomStruct struct {
      Foo string `db:"foo"`
      ...
      // 2. Embed this type.
      *postgresql.JSONBConverter
    }
    
    // 3. (Optional) Import "upper.io/db.v3/lib/sqlbuilder" and add a compile-time type check
    var _ sqlbuilder.ValueWrapper = &CustomStruct{}
    

    Maps

    If your struct looks like:

    type MyModel struct {
      ...
      MyField map[string]interface{} `db:"my_field,jsonb"`
    }
    

    Change it into:

    // 1. Import "upper.io/db.v3/postgresql"
    import "upper.io/db.v3/postgresql"
    
    type MyModel struct {
      ...
      // 2. Just remove "jsonb"
      MyField map[string]interface{} `db:"my_field"`
    }
    

    Interfaces

    If your struct looks like:

    type MyModel struct {
      ...
      MyField interface{} `db:"my_field,jsonb"`
    }
    

    Change it into:

    // 1. Import "upper.io/db.v3/postgresql"
    import "upper.io/db.v3/postgresql"
    
    type MyModel struct {
      ...
      // 2. Change interface{} into postgresql.JSONB
      MyField postgresql.JSONB `db:"my_field,jsonb"`
    }
    

    You'll probably have to update more code to use myModel.MyField.V (the actual interface{} value) instead of myModel.MyField.

    Arrays

    If your struct looks like:

    type MyModel struct {
      ...
      MyField CustomStructArray `db:"my_field,jsonb"`
    }
    
    type CustomStructArray []CustomStruct
    

    Change it into:

    // 1. Import "upper.io/db.v3/postgresql"
    import "upper.io/db.v3/postgresql"
    
    
    type MyModel struct {
      ...
      MyField CustomStructArray `db:"my_field,jsonb"`
    }
    
    type CustomStructArray []CustomStruct
    
    // 2. Add a WrapValue method to satisfy sqlbuilder.ValueWrapper
    func (cs CustomStructArray) WrapValue(v interface{}) interface{} {
      return postgresql.Array(v)
    }
    
    // 3. Add Scan method to CustomStruct, if needed
    func (c *CustomStruct) Scan(in interface{}) error {
      ...
    }
    

    Other types

    Some types are going to be converted automatically to a proper type:

    | Go type | PostgreSQL type |
    | --- | --- |
    | []string | TEXT[] |
    | []int64 | INTEGER[] |
    | []bool | BOOLEAN[] |
    | []float64 | DOUBLE PRECISION[] |
    | map[string]interface{} | JSONB |

    Chances are you're using a different type; in that case you'll have to provide a proper driver.Valuer and sql.Scanner for it. If you're using a struct and embedding *postgresql.JSONBConverter does not solve your case, you can try postgresql.JSONBValue and postgresql.ScanJSONB:

    import (
      "database/sql/driver"
      "upper.io/db.v3/postgresql"
    )
    
    type Settings struct {
      Name string `json:"name"`
      Num  int64  `json:"num"`
    }
    
    func (s *Settings) Scan(src interface{}) error {
      return postgresql.ScanJSONB(s, src)
    }
    func (s Settings) Value() (driver.Value, error) {
      return postgresql.JSONBValue(s)
    }
    
    // var _ sqlbuilder.ScannerValuer = &Settings{} // Optional type check
    

    Thanks

    Thanks for using upper-db. Feel free to open a ticket if you need help.

  • v3.3.0(Jul 22, 2017)

    Changelog

    Pagination

    The new pagination API lets you split the results of a query into chunks containing a fixed number of items.

    Simple pagination for db.Result

    res = sess.Collection("posts").Paginate(20) // 20 results per page
    
    err = res.All(&posts) // First 20 results of the query
    
    err = res.Page(2).All(&posts) // Results from page 2 (limit 20, offset 40)
    

    Simple pagination for SQL builder

    q = sess.SelectFrom("posts").Paginate(20)
    
    err = q.All(&posts) // First 20 results of the query
    
    err = q.Page(2).All(&posts) // Results from page 2 (limit 20, offset 40)
    

    Cursor based pagination (both for db.Result and SQL Builder)

    res = sess.Collection("posts").
      Paginate(20). // 20 results per page
      Cursor("id") // using id as cursor
    
    err = res.All(&posts) // First 20 results of the query
    
    // Get the next 20 results starting from the last item of the previous query.
    res = res.NextPage(posts[len(posts)-1].ID)
    err = res.All(&posts) // Results from page 1, limit 20, offset 20
    

    Other commonly used pagination tools

    res = res.Paginate(23)
    
    totalNumberOfEntries, err = res.TotalEntries()
    
    totalNumberOfPages, err = res.TotalPages()
    

    Support for binary and text mode (PostgreSQL)

    PgBouncer requires binary mode (binary_parameters="yes") to be enabled, but all of the automatically converted fields that worked in text mode started failing in binary mode.

    Starting with v3.3.0 upper-db supports binary mode. This is how you can enable binary mode in your PostgreSQL session:

    settings = postgresql.ConnectionURL{
      ...
      Options: map[string]string{
        "binary_parameters": "yes",
      },
    }
    

    Unfortunately, for this to work we had to push a breaking change: any fields that used the stringarray, int64array and jsonb options need to be changed to a special type.

    // Before v3.3.0
    type Demo struct {
       MyIntegers []int64                `db:"my_integers,int64array"`
       MyStrings  []string               `db:"my_strings,stringarray"`
       MyWhatevs  map[string]interface{} `db:"field,jsonb"`
    }
    
    // v3.3.0+
    type Demo struct {
       MyIntegers postgresql.Int64Array  `db:"my_integers"`
       MyStrings  postgresql.StringArray `db:"my_strings"`
       MyWhatevs  postgresql.JSONBMap    `db:"field"`
    }
    

    To make sure users are aware of this, the mapping methods now return an error when the stringarray, int64array or jsonb field tags are detected.

    You can use any other custom struct as long as it satisfies driver.Valuer and sql.Scanner. The postgresql package provides some handy functions for when you want to encode something into JSONB format:

    import (
      ...
      "database/sql/driver"
      "database/sql"
    
      "upper.io/db.v3/postgresql"
    )
    
    type Demo struct {
       MyWhatevs customJSONB `db:"field"`
    }
    
    type customJSONB struct {
    	N string  `json:"name"`
    	V float64 `json:"value"`
    }
    
    func (c customJSONB) Value() (driver.Value, error) {
    	return postgresql.EncodeJSONB(c)
    }
    
    func (c *customJSONB) Scan(src interface{}) error {
    	return postgresql.DecodeJSONB(c, src)
    }
    
    var (
    	_ driver.Valuer = &customJSONB{}
    	_ sql.Scanner = &customJSONB{}
    )
    

    Here are some other types provided by the postgresql package:

    postgresql.Float64Array // []float64
    postgresql.GenericArray // []interface{}
    postgresql.BoolArray // []bool
    postgresql.JSONB // interface{}
    postgresql.JSONBMap // map[string]interface{}
    postgresql.JSONBArray // []interface{}
    

    Feel free to open a ticket if you find any bug or need help!

  • v3.2.1(Jul 2, 2017)

    Changelog

    This is a maintenance release that takes care of these issues:

    • https://github.com/upper/db/issues/383
    • https://github.com/upper/db/issues/381
    • https://github.com/upper/db/issues/380
  • v3.2.0(Jun 17, 2017)

    Changelog

    • Multiple Where() calls append conditions instead of overwriting previous ones. See: https://github.com/upper/db/issues/357
    • Support for UUID in InsertReturning and UpdateReturning. See: https://github.com/upper/db/issues/370
    • Added postgresql.JSONB, postgresql.StringArray and postgresql.Int64Array custom types for PostgreSQL, which will deprecate optional tags int64array, stringarray and jsonb while providing full support for their Scanners and Valuers.
  • v3.1.0(May 7, 2017)

  • v3.0.0(Feb 17, 2017)

    Changelog

    This release includes new features and overall design improvements.

    • db.v3 uses a new import path: upper.io/db.v3
    go get -u upper.io/db.v3
    
    • Built-in support for context.Context:
    // A regular query
    res, err = sess.QueryContext(ctx, "SELECT * FROM authors")
    
    // A transaction; all statements within it run under the same context
    err = sess.Tx(ctx, func(tx sqlbuilder.Tx) error {
      res := tx.Collection("authors").Find()
      // ...
      return nil
    })
    

    With context.Context you'll be able to cancel queries or set a timeout. db.v3 provides you with different tools to make it easy to use context.Context:

    sess := sess.WithContext(ctx) // A copy of `sess` on the given context.
    
    res, err = sess.Query(...) // Will use the above `ctx` by default.
    
    res, err = sess.QueryContext(anotherCtx, ...) // Uses a different context.
    

    db.v3 can be compiled with go1.7 too (in this case, database/sql will ignore context features).

    • Provides per-database settings and logging (optional).
    • The query builder in db.v2 modified the internal query state every time you used a method on it:
    q := sess.SelectFrom("users")
    
    q.Where(...) // This method modifies `q`'s internal state and returns the same `q`.
    

    In v3, we decided to make the syntax more explicit to avoid potential side-effects.

    q := sess.SelectFrom("users")
    q = q.Where(...) // The `q` above does not get affected by Where(), unless we reassign `q`.
    
    row, err := q.Query() // or q.QueryContext(ctx)
    

    The new immutable behavior applies to all query builder methods:

    q := sess.DeleteFrom("users").Where(...)
    q = q.And(...)
    q = q.Limit(5)
    
    res, err := q.Exec() // Or q.ExecContext(ctx)
    
    q := sess.Update("users").Set("foo", 3)
    q = q.Where(...)
    
    res, err := q.Exec() // Or q.ExecContext(ctx)
    
    q := sess.InsertInto("users").Values(item1)
    q = q.Values(item2)
    
    res, err := q.Exec() // Or res.ExecContext(ctx)
    

    And it also applies to db.And, db.Or and db.Result:

    params := db.And(db.Cond{"foo": 1})
    params = params.And(db.Cond{"bar": 1})
    
    params := db.Or(db.Cond{"foo": 1})
    params = params.Or(db.Cond{"bar": 1})
    
    res := col.Find()
    res = res.Limit(5).Offset(1)
    
    err = res.One(&item)
    

    This is an important difference from db.v2 that makes queries safer and more consistent.

    Migration notes

    For users who want to migrate from db.v2 to db.v3:

    • Remember to change all import paths to upper.io/db.v3
    • Use the dbcheck tool to find statements that you need to port to the new immutable syntax.
    # Install dbcheck
    go get -u github.com/upper/cmd/dbcheck
    
    # Use "..." at the end to check all github.com/my/package's subpackages.
    dbcheck github.com/my/package/...
    

    Thanks

    Thanks to our awesome sponsor Pressly! We could not have done this without you.

    Pressly makes it easy for enterprises to curate and share amazing content.

Examples and code to assign a name to your MongoDB, MySQL, PostgreSQL, RabbitMQ, and redis connection.

your connection deserves a name. When your app interacts with an external system, assign a name to the connection. An external system in this context

Andy Grunwald 28 Dec 6, 2022
Go-mongodb - Practice Go with MongoDB because why not

Practice Mongo DB with Go Because why not. Dependencies gin-gonic go mongodb dri

Rizky Syawal 0 Jan 5, 2022
Qmgo - The Go driver for MongoDB. It‘s based on official mongo-go-driver but easier to use like Mgo.

Qmgo English | 简体中文 Qmgo is a Go driver for MongoDB . It is based on MongoDB official driver, but easier to use like mgo (such as the chain call). Qmg

Qiniu Cloud 1k Dec 5, 2022
Go MySQL Driver is a MySQL driver for Go's (golang) database/sql package

Go-MySQL-Driver A MySQL-Driver for Go's database/sql package Features Requirements Installation Usage DSN (Data Source Name) Password Protocol Address

Go SQL Drivers 12.9k Dec 4, 2022
SQLite with pure Go

Sqinn-Go is a Go (Golang) library for accessing SQLite databases in pure Go. It uses Sqinn https://github.com/cvilsmeier/sqinn under the hood. It star

Christoph Vilsmeier 130 Nov 28, 2022
Mongo Go Models (mgm) is a fast and simple MongoDB ODM for Go (based on official Mongo Go Driver)

Mongo Go Models Important Note: We changed package name from github.com/Kamva/mgm/v3(uppercase Kamva) to github.com/kamva/mgm/v3(lowercase kamva) in v

kamva 588 Nov 20, 2022
A MongoDB compatible embeddable database and toolkit for Go.

lungo A MongoDB compatible embeddable database and toolkit for Go. Installation Example Motivation Architecture Features License Installation To get s

Joël Gähwiler 397 Nov 19, 2022
The MongoDB driver for Go

The MongoDB driver for Go This fork has had a few improvements by ourselves as well as several PR's merged from the original mgo repo that are current

GlobalSign 2k Nov 30, 2022
The Go driver for MongoDB

MongoDB Go Driver The MongoDB supported driver for Go. Requirements Installation Usage Bugs / Feature Reporting Testing / Development Continuous Integ

mongodb 7.1k Nov 30, 2022
💲 Golang, Go Fiber, RabbitMQ, MongoDB, Docker, Kubernetes, GitHub Actions

Bank Projeto para simular empréstimos financeiros em um banco para clientes Tecnologias Utilizadas Golang MongoDB RabbitMQ Github Actions Docker Hub D

Jailton Junior 6 Oct 13, 2022
Go-odm, a Golang Object Document Mapping for MongoDB.

A project of SENROK Open Source Go ODM Go-odm, a Golang Object Document Mapping for MongoDB. Table of contents Features Installation Get started Docum

SENROK 4 Nov 4, 2022
Golang MongoDB Integration Examples

Get Program Get a copy of the program: git clone https://github.com/hmdhszd/Go

Hamid Hosseinzadeh 1 Feb 1, 2022
PostgreSQL driver and toolkit for Go

pgx - PostgreSQL Driver and Toolkit pgx is a pure Go driver and toolkit for PostgreSQL. pgx aims to be low-level, fast, and performant, while also ena

Jack Christensen 6.4k Dec 5, 2022
pogo is a lightweight Go PostgreSQL internal state query engine.

pogo is a lightweight Go PostgreSQL internal state query engine. It focuses on the data that are highly dynamic in nature, and provides some conv

Sang-gon Lee 6 Sep 19, 2021
logical is tool for synchronizing from PostgreSQL to custom handler through replication slot

logical logical is tool for synchronizing from PostgreSQL to custom handler through replication slot Required Postgresql 10.0+ Howto Download Choose t

梦飞 6 Sep 2, 2022
Go driver for PostgreSQL over SSH. This driver can connect to postgres on a server via SSH using the local ssh-agent, password, or private-key.

pqssh Go driver for PostgreSQL over SSH. This driver can connect to postgres on a server via SSH using the local ssh-agent, password, or private-key.

mattn 52 Nov 6, 2022
PostgreSQL API Client

PostgreSQL API language search PostgreSQL API functions response: We don't use PostgreSQL in the usual way. We do everything through API functions, wh

Derek Sivers 14 May 9, 2022
Package pbpgx provides a toolkit for easier Protocol Buffers interaction with PostgreSQL databases.

PBPGX Package pbpgx provides a toolkit for easier Protocol Buffers interaction with PostgreSQL databases. Pbpgx supports the Protocol Buffer types gen

Tim Möhlmann 11 Jun 27, 2022
Books-rest api - Simple CRUD Rest API architecture using postgresql db with standard Library

books-rest_api Simple CRUD Rest API architecture using postgresql db with standa

Edho Guntur Adhitama 2 Feb 8, 2022