Golang driver for ClickHouse

Overview

Golang SQL database driver for Yandex ClickHouse

Key features

  • Uses the native ClickHouse TCP client-server protocol
  • Compatible with database/sql
  • Round-robin load balancing
  • Bulk write support: begin -> prepare -> (exec in a loop) -> commit
  • LZ4 compression support (pure Go LZ4 by default; enable the cgo LZ4 implementation with the clz4 build tag)
  • External Tables support

DSN

  • username/password - auth credentials
  • database - select the current default database
  • read_timeout/write_timeout - timeouts in seconds
  • no_delay - disable/enable Nagle's algorithm for the TCP socket (default is 'true', i.e. the algorithm is disabled)
  • alt_hosts - comma-separated list of additional host:port addresses for load balancing
  • connection_open_strategy - random/in_order (default is random)
    • random - choose a random server from the set
    • in_order - the first live server is chosen, in the specified order
    • time_random - choose a random server based on the current time. This differs from random in that the randomness is derived from the current time rather than from the number of previous connections.
  • block_size - maximum number of rows per block (default is 1000000). If an insert contains more rows, the data is split into several blocks before being sent to the server. Once a block has been sent, the data is persisted on the server's disk and cannot be rolled back, so keep each batch no larger than block_size if you need atomic batch inserts.
  • pool_size - maximum number of preallocated byte chunks used in queries (default is 100). Decrease this if you experience memory problems, at the cost of more GC pressure, and vice versa.
  • debug - enable debug output (boolean value)
  • compress - enable LZ4 compression (integer value, default is '0')
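
Because a block that has already reached the server cannot be rolled back, an insert is only atomic if it fits into a single block. The helper below is a hypothetical, stdlib-only sketch (not part of the driver) that splits a slice of rows into batches of at most block_size rows, so that each batch can be committed in its own transaction:

```go
package main

import "fmt"

// chunkRows splits rows into batches of at most blockSize rows each, so
// that every batch fits into a single block and can be committed in its
// own transaction. chunkRows is a hypothetical helper, not a driver API.
func chunkRows(rows [][]interface{}, blockSize int) [][][]interface{} {
	var batches [][][]interface{}
	for len(rows) > blockSize {
		batches = append(batches, rows[:blockSize])
		rows = rows[blockSize:]
	}
	if len(rows) > 0 {
		batches = append(batches, rows)
	}
	return batches
}

func main() {
	rows := make([][]interface{}, 25)
	fmt.Println(len(chunkRows(rows, 10))) // → 3 (batches of 10, 10 and 5)
}
```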

SSL/TLS parameters:

  • secure - establish secure connection (default is false)
  • skip_verify - skip certificate verification (default is false)
  • tls_config - name of a TLS config with client certificates, registered using clickhouse.RegisterTLSConfig(); implies secure=true unless secure is explicitly specified

Example:

tcp://host1:9000?username=user&password=qwerty&database=clicks&read_timeout=10&write_timeout=20&alt_hosts=host2:9000,host3:9000

Supported data types

  • UInt8, UInt16, UInt32, UInt64, Int8, Int16, Int32, Int64
  • Float32, Float64
  • String
  • FixedString(N)
  • Date
  • DateTime
  • IPv4
  • IPv6
  • Enum
  • UUID
  • Nullable(T)
  • Array(T) (one-dimensional; see the godoc)

TODO

  • Support other compression methods (zstd, ...)

Install

go get -u github.com/ClickHouse/clickhouse-go

Example

package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	"github.com/ClickHouse/clickhouse-go"
)

func main() {
	connect, err := sql.Open("clickhouse", "tcp://127.0.0.1:9000?debug=true")
	if err != nil {
		log.Fatal(err)
	}
	if err := connect.Ping(); err != nil {
		if exception, ok := err.(*clickhouse.Exception); ok {
			fmt.Printf("[%d] %s \n%s\n", exception.Code, exception.Message, exception.StackTrace)
		} else {
			fmt.Println(err)
		}
		return
	}

	_, err = connect.Exec(`
		CREATE TABLE IF NOT EXISTS example (
			country_code FixedString(2),
			os_id        UInt8,
			browser_id   UInt8,
			categories   Array(Int16),
			action_day   Date,
			action_time  DateTime
		) engine=Memory
	`)

	if err != nil {
		log.Fatal(err)
	}
	tx, err := connect.Begin()
	if err != nil {
		log.Fatal(err)
	}
	stmt, err := tx.Prepare("INSERT INTO example (country_code, os_id, browser_id, categories, action_day, action_time) VALUES (?, ?, ?, ?, ?, ?)")
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	for i := 0; i < 100; i++ {
		if _, err := stmt.Exec(
			"RU",
			10+i,
			100+i,
			clickhouse.Array([]int16{1, 2, 3}),
			time.Now(),
			time.Now(),
		); err != nil {
			log.Fatal(err)
		}
	}

	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}

	rows, err := connect.Query("SELECT country_code, os_id, browser_id, categories, action_day, action_time FROM example")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var (
			country               string
			os, browser           uint8
			categories            []int16
			actionDay, actionTime time.Time
		)
		if err := rows.Scan(&country, &os, &browser, &categories, &actionDay, &actionTime); err != nil {
			log.Fatal(err)
		}
		log.Printf("country: %s, os: %d, browser: %d, categories: %v, action_day: %s, action_time: %s", country, os, browser, categories, actionDay, actionTime)
	}

	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}

	if _, err := connect.Exec("DROP TABLE example"); err != nil {
		log.Fatal(err)
	}
}

Use sqlx

package main

import (
	"log"
	"time"

	"github.com/jmoiron/sqlx"
	_ "github.com/ClickHouse/clickhouse-go"
)

func main() {
	connect, err := sqlx.Open("clickhouse", "tcp://127.0.0.1:9000?debug=true")
	if err != nil {
		log.Fatal(err)
	}
	var items []struct {
		CountryCode string    `db:"country_code"`
		OsID        uint8     `db:"os_id"`
		BrowserID   uint8     `db:"browser_id"`
		Categories  []int16   `db:"categories"`
		ActionTime  time.Time `db:"action_time"`
	}

	if err := connect.Select(&items, "SELECT country_code, os_id, browser_id, categories, action_time FROM example"); err != nil {
		log.Fatal(err)
	}

	for _, item := range items {
		log.Printf("country: %s, os: %d, browser: %d, categories: %v, action_time: %s", item.CountryCode, item.OsID, item.BrowserID, item.Categories, item.ActionTime)
	}
}

External tables support

package main

import (
	"database/sql"
	"database/sql/driver"
	"fmt"
	"log"
	"time"

	"github.com/ClickHouse/clickhouse-go"
	"github.com/ClickHouse/clickhouse-go/lib/column"
)

func main() {
	connect, err := sql.Open("clickhouse", "tcp://127.0.0.1:9000?debug=true")
	if err != nil {
		log.Fatal(err)
	}
	if err := connect.Ping(); err != nil {
		if exception, ok := err.(*clickhouse.Exception); ok {
			fmt.Printf("[%d] %s \n%s\n", exception.Code, exception.Message, exception.StackTrace)
		} else {
			fmt.Println(err)
		}
		return
	}

	_, err = connect.Exec(`
		CREATE TABLE IF NOT EXISTS example (
			country_code FixedString(2),
			os_id        UInt8,
			browser_id   UInt8,
			categories   Array(Int16),
			action_day   Date,
			action_time  DateTime
		) engine=Memory
	`)

	if err != nil {
		log.Fatal(err)
	}
	tx, err := connect.Begin()
	if err != nil {
		log.Fatal(err)
	}
	stmt, err := tx.Prepare("INSERT INTO example (country_code, os_id, browser_id, categories, action_day, action_time) VALUES (?, ?, ?, ?, ?, ?)")
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	for i := 0; i < 100; i++ {
		if _, err := stmt.Exec(
			"RU",
			10+i,
			100+i,
			clickhouse.Array([]int16{1, 2, 3}),
			time.Now(),
			time.Now(),
		); err != nil {
			log.Fatal(err)
		}
	}

	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}

	col, err := column.Factory("country_code", "String", nil)
	if err != nil {
		log.Fatal(err)
	}
	countriesExternalTable := clickhouse.ExternalTable{
		Name: "countries",
		Values: [][]driver.Value{
			{"RU"},
		},
		Columns: []column.Column{col},
	}
	
	rows, err := connect.Query("SELECT country_code, os_id, browser_id, categories, action_day, action_time "+
		"FROM example WHERE country_code IN ?", countriesExternalTable)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var (
			country               string
			os, browser           uint8
			categories            []int16
			actionDay, actionTime time.Time
		)
		if err := rows.Scan(&country, &os, &browser, &categories, &actionDay, &actionTime); err != nil {
			log.Fatal(err)
		}
		log.Printf("country: %s, os: %d, browser: %d, categories: %v, action_day: %s, action_time: %s", country, os, browser, categories, actionDay, actionTime)
	}

	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}

	if _, err := connect.Exec("DROP TABLE example"); err != nil {
		log.Fatal(err)
	}
}

Issues
  • Error driver: bad connection

    1. When the table I query doesn't exist, I'm told: Table xxx doesn't exist.
    2. When I query again after the table exists, I get a driver: bad connection error. This happens in a Go web environment.
    investigate 
    opened by y761350477 23
  • numInput returns wrong number if query contains '@'.

    I run the following statement:

    INSERT INTO `dbr_people` (`id`,`name`,`email`) VALUES (258,'jonathan','[email protected]')
    

    Expected result: it completes successfully. Actual result: it fails with the error: sql: expected 1 arguments, got 0

    This occurs because numInput counts @uservoice as a variable and returns 1 as the number of input parameters. :(

    investigate 
    opened by bgaifullin 17
  • New type Object('JSON') is not supported

    Hi,

    The new type Object('JSON') appeared recently in ClickHouse 22.3.2.1. It would be super cool to add support for it in clickhouse-go. The new feature allows you to significantly speed up queries (searches) over the JSON data type.

    https://clickhouse.com/docs/en/whats-new/changelog/#223 Experimental Feature

    New data type Object(<schema_format>), which supports storing of semi-structured data (for now JSON only). Data is written to such types as string. Then all paths are extracted according to format of semi-structured data and written as separate columns in most optimal types, that can store all their values. Those columns can be queried by names that match paths in source data. E.g data.key1.key2 or with cast operator data.key1.key2::Int64.

    Support of dynamic subcolumns (JSON data type) #23932 https://github.com/ClickHouse/ClickHouse/pull/23932

    opened by Astlol 16
  • SIGSEGV on multiple prepared statements

    Hello, I have an application which can make many concurrent INSERT statements. My code works fine, but when the number of INSERT statements increases, reaching a few hundred concurrent statements, I get this panic:

    panic: runtime error: invalid memory address or nil pointer dereference
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x4c848e]
    goroutine 340 [running]:
    database/sql.(*Stmt).Close(0x0, 0x0, 0x0)
    /usr/lib/go-1.11/src/database/sql/sql.go:2545 +0x2e
    panic(0x6918c0, 0x8d8730)
    /usr/lib/go-1.11/src/runtime/panic.go:513 +0x1b9
    database/sql.(*Stmt).ExecContext(0x0, 0x72aa20, 0xc0000be010, 0xc00a6a5f20, 0x6, 0x6, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/go-1.11/src/database/sql/sql.go:2301 +0x4a
    database/sql.(*Stmt).Exec(0x0, 0xc00a6a5f20, 0x6, 0x6, 0x10, 0x0, 0x0, 0xc001f44000)
    /usr/lib/go-1.11/src/database/sql/sql.go:2330 +0x65
    main.insertIP(0xc0002b2cc0, 0xc00b614090, 0xb, 0xc0001bd770, 0x34, 0xc0001bd7b4, 0xd, 0xc0001bd7a5, 0x2, 0xc00b6140a0)
    /root/project/MessageHandler.go:158 +0x546
    created by main.handleMessage
    /root/project/MessageHandler.go:378 +0x269 
    

    Each statement is prepared with about 150 values, and the entire cycle (Begin, Prepare, Exec and Commit) is done inside a goroutine. There are multiple goroutines running at the same time, and they share the same connection.

    bug 
    opened by AlessandroSechi 16
  • ClickHouse can produce blocks bigger than 1 mil rows.

    ClickHouse can produce blocks bigger than 1 mil rows, in case you have GROUP BY in your query.

    https://github.com/ClickHouse/clickhouse-go/blob/e9d187591f80acb3da4d24adf8ff61c2231d39ee/lib/proto/block.go#L144

    bug 
    opened by UnamedRus 15
  • Decimal: read value not valid

    Hi, I am trying to read a decimal value from ClickHouse but it returns the wrong value. clickhouse-server: 18.16.0. Code to reproduce:

    package main
    
    import (
    	"database/sql"
    	"fmt"
    	"log"
    
    	"github.com/kshvakov/clickhouse"
    )
    
    func main() {
    	connect, err := sql.Open("clickhouse", "tcp://127.0.0.1:9000?debug=true")
    	checkErr(err)
    	if err := connect.Ping(); err != nil {
    		if exception, ok := err.(*clickhouse.Exception); ok {
    			fmt.Printf("[%d] %s \n%s\n", exception.Code, exception.Message, exception.StackTrace)
    		} else {
    			fmt.Println(err)
    		}
    		return
    	}
    
    	_, err = connect.Exec(`
    		CREATE TABLE IF NOT EXISTS example (
    			v Decimal(18,10)
    		) engine=Memory
    	`)
    
    	checkErr(err)
    	tx, err := connect.Begin()
    	checkErr(err)
    	stmt, err := tx.Prepare("INSERT INTO example (v) VALUES (?)")
    	checkErr(err)
    
    	if _, err := stmt.Exec(
    		0.08,
    	); err != nil {
    		log.Fatal(err)
    	}
    
    	checkErr(tx.Commit())
    	rows, err := connect.Query("SELECT v FROM example")
    	checkErr(err)
    	for rows.Next() {
    		var (
    			v               float64
    		)
    		checkErr(rows.Scan(&v))
    		log.Printf("v: %f", v)
    	}
    
    	if _, err := connect.Exec("DROP TABLE example"); err != nil {
    		log.Fatal(err)
    	}
    }
    
    func checkErr(err error) {
    	if err != nil {
    		log.Fatal(err)
    	}
    }
    

    It prints 800000000.000000 instead of the inserted 0.08.

    bug 
    opened by minaevmike 15
  • Queries on new / idle Connections timeout and fail

    I'm having connection issues on new / idle connections. I open the database and ping it on instance startup to make sure that my instance is healthy. The problem is that the first time I execute QueryContext after starting the instance, the query always fails with a 1 min timeout. Subsequent calls succeed and work perfectly.

    If I leave the connection idle for a while (maybe an hour), the first query I run times out after 1 min, the same as when the instance first loads.

    Any suggestions on how to debug this?

    bug 
    opened by derekperkins 14
  • Weird errors while selecting data

    Table definition:

    CREATE TABLE IF NOT EXISTS flamegraph (
                            timestamp Int64,
                            graph_type String,
                            cluster String,
                            id UInt64,
                            name String,
                            total UInt64,
                            value UInt64,
                            children_ids Array(UInt64),
                            date Date
                    ) engine=MergeTree(date, (timestamp, graph_type, cluster, value, date), 8192)
    

    Then I fill it out with data (several million queries).

    Then I try to select data from it in a Go program:

    	rows, err := connect.Query("SELECT total FROM flamegraph WHERE timestamp=" + ts + " AND id = " + idQuery + " AND cluster='" + cluster + "'")
    	total := uint64(0)
    	for rows.Next() {
    		err = rows.Scan(&total)
    		if err != nil {
    			log.Fatal(err)
    		}
    	}
    
    	minValue := uint64(float64(total) * removeLowestPct)
    	minValueQuery := strconv.FormatUint(minValue, 10)
    
    	rows, err = connect.Query("SELECT timestamp, graph_type, cluster, id, name, total, value, children_ids FROM flamegraph WHERE timestamp=" + ts + " AND cluster='" + cluster + "' AND value > " + minValueQuery)
    	if err != nil {
    		log.Fatal(err)
    	}
    

    This results in the following queries:

    SELECT timestamp, graph_type, cluster, id, name, total, value, children_ids FROM flamegraph WHERE timestamp=1490791477 AND cluster='example' AND value > 4778
    

    But each time I do that I got the following output:

    [clickhouse][receive packet] <- data: columns=8, rows=0
    [clickhouse][receive packet] err: unhandled type abnormal
    [clickhouse][stmt] close
    
    [clickhouse][receive packet] <- data: columns=8, rows=0
    [clickhouse][receive packet] err: unhandled type all
    [clickhouse][stmt] close
    

    It seems that 'all', 'abnormal', etc. are some of the names from the select.

    At this moment I was unable to create a minimal test case for that.

    bug investigate 
    opened by Civil 14
  • Memory leakage

    Hi,

    So I have a program that reads data from Kafka and pushes it to ClickHouse. Here is the function that establishes a connection:

    func NewConnection(ch models.CHStatBase) (sdp *sqlx.DB, err error) {
    	sdpStr := fmt.Sprintf("tcp://%s:%d?username=%s&password=%s&database=%s&debug=false&compress=true&pool_size=20", ch.Server, ch.Port, os.Getenv("CLICKHOUSE_USER"), os.Getenv("CLICKHOUSE_PASSWORD"), ch.Database)
    	sdp, err = sqlx.Connect("clickhouse", sdpStr)
    	return sdp, err
    }
    

    Here is a function to insert the data.

    	tx, err := chDB.Begin()
    	if err != nil {
    		return err
    	}
    
    	stmt, err := tx.Prepare(fmt.Sprintf(`INSERT INTO %s
    		(EventDate, EventDateTime)
    		VALUES (?, ?)`, table))
    	if err != nil {
    		return err
    	}
    	defer stmt.Close()
    
    	for _, evd := range evds {
    		if _, err = stmt.Exec(evd.LastVisit, evd.LastVisit); err != nil {
    			return err
    		}
    	}
    
    	err = tx.Commit()
    
    	return err
    

    So there is a memory leak. Here is the in-use heap memory from pprof:

    File: kafka-consumer
    Build ID: 1760c52b6ea4f03c44eddf2dd30c1a553e513f14
    Type: inuse_space
    Time: Oct 7, 2020 at 6:54am (MSK)
    Entering interactive mode (type "help" for commands, "o" for options)
    (pprof) top
    Showing nodes accounting for 5347.94MB, 99.15% of 5393.71MB total
    Dropped 121 nodes (cum <= 26.97MB)
    Showing top 10 nodes out of 53
          flat  flat%   sum%        cum   cum%
     2668.40MB 49.47% 49.47%  2668.40MB 49.47%  github.com/ClickHouse/clickhouse-go/lib/binary.NewCompressWriter (inline)
     2242.72MB 41.58% 91.05%  2242.72MB 41.58%  github.com/ClickHouse/clickhouse-go/lib/binary.NewCompressReader (inline)
      186.01MB  3.45% 94.50%   186.01MB  3.45%  encoding/json.(*decodeState).literalStore
      102.04MB  1.89% 96.39%  1471.23MB 27.28%  main.runOldEVHandler.func1
       77.52MB  1.44% 97.83%   317.95MB  5.89%  main.runOldPVHandler.func1
       61.39MB  1.14% 98.97%    61.39MB  1.14%  github.com/ClickHouse/clickhouse-go/lib/leakypool.GetBytes
        5.52MB   0.1% 99.07%   240.43MB  4.46%  main.runOldPVHandler.func1.1
        1.84MB 0.034% 99.11%  1369.19MB 25.39%  main.runOldEVHandler.func1.1
        1.50MB 0.028% 99.13%   474.59MB  8.80%  github.com/ClickHouse/clickhouse-go.(*stmt).ExecContext
           1MB 0.019% 99.15%  1367.35MB 25.35%  git.wildberries.ru/statistics/kafka-consumer/service.InsertEvents
    

    And it keeps growing. Any ideas on how to fix it?

    opened by Akado2009 11
  • IN (?) with []string

    I'm making a SELECT query using IN with a []string in the WHERE clause:

    import (
        "github.com/jmoiron/sqlx"
        "github.com/kshvakov/clickhouse"
    )
    func main() {
        db, _ := sqlx.Open("clickhouse", "tcp://localhost:9000?debug=true")
        db.Select(&out, "SELECT * FROM Metrics WHERE MetricsName IN (?)", []string{"a", "b", "c"})
    }
    

    After executing, I get the following in the console:

    [clickhouse][connect=1][prepare] SELECT * FROM Metrics WHERE MetricsName IN (?)
    [clickhouse][connect=1][send query] SELECT * FROM Metrics WHERE MetricsName IN (&{<nil> [a b c] 0xc000215700})
    [clickhouse][connect=1][read meta] <- exception
    [clickhouse][connect=1][stmt] close
    

    Expected:

    SELECT * FROM Metrics WHERE MetricsName IN ('a', 'b', 'c')
    

    Version 1.3.4

    opened by monstarnn 10
  • `INSERT` prohibits `SELECT`

    I'd like to use INSERT INTO ... SELECT as per the CH docs (https://clickhouse.yandex/docs/en/query_language/insert_into/#inserting-the-results-of-select) in code I'm writing, but it looks like this driver flags these sorts of insert statements as "not an insert":

    var selectRe = regexp.MustCompile(`\s+SELECT\s+`)
    
    func isInsert(query string) bool {
    	if f := strings.Fields(query); len(f) > 2 {
    		return strings.EqualFold("INSERT", f[0]) && strings.EqualFold("INTO", f[1]) && !selectRe.MatchString(strings.ToUpper(query))
    	}
    	return false
    }
    

    This is valid according to the docs and is needed, since my statement inserts an aggregation function, which isn't supported when using VALUES:

    INSERT INTO source_tombstones (org_id, source_id, tombstone_event, update_date)
    SELECT (1, 1441, maxState(970969982093492350), now())
    

    Is there a reason to prohibit these insert statements? Can we get this functionality back?

    opened by joemadeus 10
  • Support JSONEachRow as response format

    Useful for debugging and some use cases e.g. https://github.com/grafana/clickhouse-datasource/issues/148 All responses would be marshalled into a map or struct.

    enhancement low priority 
    opened by gingerwizard 0
  • `SELECT toString(number) FROM numbers(500000000)` is slow

    Issue description

    Using the benchmark here, I found that SELECT toString(number) FROM numbers(500000000) is somehow very slow on my VM. Changing the type to Decimal may have a similar issue.

    Query Result
    SELECT number FROM numbers(500000000)
    
    5.6s 500000000
    	Command being timed: "go run main.go"
    	User time (seconds): 5.19
    	System time (seconds): 3.99
    	Percent of CPU this job got: 154%
    	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:05.92
    	Average shared text size (kbytes): 0
    	Average unshared data size (kbytes): 0
    	Average stack size (kbytes): 0
    	Average total size (kbytes): 0
    	Maximum resident set size (kbytes): 90576
    	Average resident set size (kbytes): 0
    	Major (requiring I/O) page faults: 1
    	Minor (reclaiming a frame) page faults: 64764
    	Voluntary context switches: 36844
    	Involuntary context switches: 15003
    	Swaps: 0
    	File system inputs: 8
    	File system outputs: 96
    	Socket messages sent: 0
    	Socket messages received: 0
    	Signals delivered: 0
    	Page size (bytes): 4096
    	Exit status: 0
    
    SELECT toString(number) FROM numbers(500000000)
    
    1m30.674s 500000000
    	Command being timed: "go run main.go"
    	User time (seconds): 89.83
    	System time (seconds): 55.28
    	Percent of CPU this job got: 159%
    	Elapsed (wall clock) time (h:mm:ss or m:ss): 1:31.05
    	Average shared text size (kbytes): 0
    	Average unshared data size (kbytes): 0
    	Average stack size (kbytes): 0
    	Average total size (kbytes): 0
    	Maximum resident set size (kbytes): 89944
    	Average resident set size (kbytes): 0
    	Major (requiring I/O) page faults: 1
    	Minor (reclaiming a frame) page faults: 613890
    	Voluntary context switches: 165692
    	Involuntary context switches: 51110
    	Swaps: 0
    	File system inputs: 0
    	File system outputs: 16
    	Socket messages sent: 0
    	Socket messages received: 0
    	Signals delivered: 0
    	Page size (bytes): 4096
    	Exit status: 0
    

    Example code

    https://github.com/go-faster/ch-bench/blob/c2627b1d0fa1a8abc7d3560816d3b919b983600b/ch-bench-official/main.go#L26

    Error log

    N/A

    Configuration

    OS: Linux myserver 5.18.5-100.fc35.x86_64 # 1 SMP PREEMPT_DYNAMIC Thu Jun 16 14:44:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

    Interface: E.g. native, database/sql

    Driver version: v2.1.0

    Go version: 1.18.3

    ClickHouse Server version: 22.3

    bug 
    opened by zhicwu 2
  • Rewrite parameter substitution

    The current parameter substitution in bind.go isn't robust and is a source of pain. Ideally, we would move this to ClickHouse and pass the parameters with the query.

    Depends on https://github.com/ClickHouse/ClickHouse/issues/38235

    enhancement 
    opened by gingerwizard 0
  • Utilise ch-go for lower level implementation

    @ernado has kindly contributed ch-go to ClickHouse. This library offers superior performance at the expense of a more complex and stricter API. It offers excellent fundamentals, however, on which we would like to build this client.

    Our objective is for ch-go to represent the low-level client, while this client provides the high-level constructs. ch-go will be used for all column encoding and block formulation. Users will then have an option:

    1. ch-go where performance is critical, e.g. insert-heavy use cases
    2. clickhouse-go where convenience is required and some performance can be sacrificed, e.g. query use cases

    This ticket will track this work. We aim to minimize the impact on users and make no changes to the API of this client - users of this library should only experience performance improvements and lower resource overhead.

    enhancement 
    opened by gingerwizard 0
  • HTTP Part 1

    Issue #597

    HTTP Connection Update

    • Add Query and Execution over the Native format
    • Query reuses the rows structure
    • Execution returns the text of the error

    HTTP Batch

    • Add HTTP Batch
    • Implement all methods
    • It works with the Native format

    Tests

    • Expand the type tests to cover the HTTP connection and batch

    Drawbacks

    The HTTP Native format has revision 0, so the solution has two drawbacks:

    1. Batch should recover the table schema. Link
    2. A plain DateTime doesn't carry a time zone. https://github.com/ClickHouse/ClickHouse/issues/38209

    If you have any ideas on how to solve them, I will be grateful.

    cc @gingerwizard

    enhancement 
    opened by ortyomka 8
  • AppendRow for Array.go not working with interface{}|[]interface{} in v2.

    Issue description

    We are using clickhouse-go v1 and have recently been trying to move to v2, but we are seeing a ColumnConvertError for a case that works fine in v1. Can someone please help?

    AppendRow]: converting []interface {} to Array(String) is unsupported. try using []string
    

    Example code

    
    	values := []interface{}{}
    	cols := []string{"one", "two", "three"}
    
    	mymap := make(map[string]interface{})
    	mymap["one"] = "two"
    	mymap["two"] = []string{"one", "two", "three"}
    	mymap["three"] = []interface{}{"one", "two", "theree"}
    	mymap["four"] = []interface{}{1, 2, 3}
    
    	for _, col := range cols {
    		values = append(values, mymap[col])
    	}
    
    	stmt.ExecContext(ctx, values...)
    
    

    Error log

    [AppendRow]: converting []interface {} to Array(String) is unsupported. try using []string
    

    Configuration

    macOS BigSur

    Interface: database/sql

    Driver version: v2

    Go version: 1.17

    ClickHouse Server version: 21.8.3.44

    cc: @nikhresna

    discuss 
    opened by abhishek-buragadda 3