go-mysql

A pure go library to handle MySQL network protocol and replication.

How to migrate to this repo

To switch your repo over to this package, it's enough to add this replace directive to your go.mod:

replace github.com/siddontang/go-mysql => github.com/go-mysql-org/go-mysql v1.2.1

v1.2.1 is the latest tag in the repo; feel free to choose whichever tag you want.

Changelog

This repo maintains a changelog.


Content

Replication

The replication package handles the MySQL replication protocol, like python-mysql-replication.

You can use it as a MySQL slave to sync binlogs from the master and then do something with the changes, like updating a cache, etc.

Example

import (
	"context"
	"os"
	"time"

	"github.com/go-mysql-org/go-mysql/mysql"
	"github.com/go-mysql-org/go-mysql/replication"
)
// Create a binlog syncer with a unique server id; it must be different from
// the server ids of the master and all other replicas.
// Flavor is "mysql" or "mariadb".
cfg := replication.BinlogSyncerConfig {
	ServerID: 100,
	Flavor:   "mysql",
	Host:     "127.0.0.1",
	Port:     3306,
	User:     "root",
	Password: "",
}
syncer := replication.NewBinlogSyncer(cfg)

// Start sync with specified binlog file and position
streamer, _ := syncer.StartSync(mysql.Position{Name: binlogFile, Pos: binlogPos})

// or you can start GTID-based replication like this:
// streamer, _ := syncer.StartSyncGTID(gtidSet)
// a MySQL GTID set looks like "de278ad0-2106-11e4-9f8e-6edd0ca20947:1-2"
// a MariaDB GTID set looks like "0-1-100"

for {
	ev, _ := streamer.GetEvent(context.Background())
	// Dump event
	ev.Dump(os.Stdout)
}

// or we can use a timeout context
for {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	ev, err := streamer.GetEvent(ctx)
	cancel()

	if err == context.DeadlineExceeded {
		// meet timeout
		continue
	}

	ev.Dump(os.Stdout)
}

The output looks like this:

=== RotateEvent ===
Date: 1970-01-01 08:00:00
Log position: 0
Event size: 43
Position: 4
Next log name: mysql.000002

=== FormatDescriptionEvent ===
Date: 2014-12-18 16:36:09
Log position: 120
Event size: 116
Version: 4
Server version: 5.6.19-log
Create date: 2014-12-18 16:36:09

=== QueryEvent ===
Date: 2014-12-18 16:38:24
Log position: 259
Event size: 139
Slave proxy ID: 1
Execution time: 0
Error code: 0
Schema: test
Query: DROP TABLE IF EXISTS `test_replication` /* generated by server */

Canal

Canal is a package that can sync data from MySQL to other systems, such as Redis or Elasticsearch.

First, canal dumps your existing MySQL data, then it incrementally syncs changed data using the binlog.

The binlog must use ROW format, and the full binlog row image is preferred, because the minimal and noblob row images can cause errors when an update changes a primary key.
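For reference, the corresponding server-side settings would look roughly like this in my.cnf (values are illustrative):

```ini
[mysqld]
server_id        = 1
log_bin          = mysql-bin
binlog_format    = ROW
binlog_row_image = FULL
```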

A simple example:

package main

import (
	"github.com/siddontang/go-log/log"
	"github.com/go-mysql-org/go-mysql/canal"
)

type MyEventHandler struct {
	canal.DummyEventHandler
}

func (h *MyEventHandler) OnRow(e *canal.RowsEvent) error {
	log.Infof("%s %v\n", e.Action, e.Rows)
	return nil
}

func (h *MyEventHandler) String() string {
	return "MyEventHandler"
}

func main() {
	cfg := canal.NewDefaultConfig()
	cfg.Addr = "127.0.0.1:3306"
	cfg.User = "root"
	// We only care about the table canal_test in the test db
	cfg.Dump.TableDB = "test"
	cfg.Dump.Tables = []string{"canal_test"}

	c, err := canal.NewCanal(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Register a handler to handle RowsEvent
	c.SetEventHandler(&MyEventHandler{})

	// Start canal
	c.Run()
}
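Inside OnRow, e.Action tells the handler which kind of change arrived; canal defines the string constants canal.InsertAction, canal.UpdateAction, and canal.DeleteAction for this. A stdlib-only sketch of the usual dispatch (describeAction and the local constants are illustrative stand-ins, assuming the documented behaviour that update events carry before/after row pairs):

```go
package main

import "fmt"

// Action values mirroring the canal package's constants.
const (
	InsertAction = "insert"
	UpdateAction = "update"
	DeleteAction = "delete"
)

// describeAction shows how an OnRow handler typically branches. For updates,
// canal delivers rows in pairs: [before, after, before, after, ...].
func describeAction(action string, rowCount int) string {
	switch action {
	case InsertAction:
		return fmt.Sprintf("%d row(s) inserted", rowCount)
	case UpdateAction:
		return fmt.Sprintf("%d row(s) updated", rowCount/2)
	case DeleteAction:
		return fmt.Sprintf("%d row(s) deleted", rowCount)
	default:
		return "unknown action"
	}
}

func main() {
	fmt.Println(describeAction(UpdateAction, 4)) // prints "2 row(s) updated"
}
```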

You can see go-mysql-elasticsearch for how to sync MySQL data into Elasticsearch.

Client

The client package provides a simple MySQL connection driver that you can use to communicate with a MySQL server.

Example

import (
	"github.com/go-mysql-org/go-mysql/client"
	"github.com/go-mysql-org/go-mysql/mysql"
)

// Connect to MySQL at 127.0.0.1:3306 as user root, with an empty password and database test
conn, _ := client.Connect("127.0.0.1:3306", "root", "", "test")

// Or use an SSL/TLS connection if the MySQL server supports TLS
//conn, _ := client.Connect("127.0.0.1:3306", "root", "", "test", func(c *client.Conn) {c.UseSSL(true)})

// Or set your own client-side certificates for identity verification
//tlsConfig := client.NewClientTLSConfig(caPem, certPem, keyPem, false, "your-server-name")
//conn, _ := client.Connect("127.0.0.1:3306", "root", "", "test", func(c *client.Conn) {c.SetTLSConfig(tlsConfig)})

conn.Ping()

// Insert
r, _ := conn.Execute(`insert into mytable (id, name) values (1, "abc")`)

// Get last insert id
println(r.InsertId)
// Or affected rows count
println(r.AffectedRows)

// Select
r, err := conn.Execute(`select id, name from mytable where id = 1`)

// Close the result so its memory can be reused (not required, but very useful)
defer r.Close()

// Handle resultset
v, _ := r.GetInt(0, 0)
v, _ = r.GetIntByName(0, "id")

// Direct access to fields
for _, row := range r.Values {
	for _, val := range row {
		_ = val.Value() // interface{}
		// or
		if val.Type == mysql.FieldValueTypeFloat {
			_ = val.AsFloat64() // float64
		}
	}   
}

Tested MySQL versions for the client include:

  • 5.5.x
  • 5.6.x
  • 5.7.x
  • 8.0.x

Example for SELECT streaming (since v1.1.1)

You can also use streaming for large SELECT responses. The callback function is called for every result row, without the whole resultset being stored in memory. result.Fields will be filled in before the first callback call.

// ...
var result mysql.Result
err := conn.ExecuteSelectStreaming(`select id, name from mytable LIMIT 100500`, &result, func(row []mysql.FieldValue) error {
	for idx, val := range row {
		field := result.Fields[idx]
		// You must not keep the value of FieldValue.AsString() after this
		// callback returns; copy it if you need it later.
		_, _ = field, val
		// ...
	}
	return nil
})

// ...

Server

Server package supplies a framework to implement a simple MySQL server which can handle the packets from the MySQL client. You can use it to build your own MySQL proxy. The server connection is compatible with MySQL 5.5, 5.6, 5.7, and 8.0 versions, so that most MySQL clients should be able to connect to the Server without modifications.

Example

import (
	"github.com/go-mysql-org/go-mysql/server"
	"net"
)

l, _ := net.Listen("tcp", "127.0.0.1:4000")

c, _ := l.Accept()

// Create a connection with user root and an empty password.
// You can use your own handler to handle commands here.
conn, _ := server.NewConn(c, "root", "", server.EmptyHandler{})

for {
	conn.HandleCommand()
}

In another shell:

mysql -h127.0.0.1 -P4000 -uroot -p
# Because EmptyHandler does nothing, the MySQL client can only connect to the proxy server; no commands will work. :-)

NewConn() will use default server configurations:

  1. automatically generate default server certificates and enable TLS/SSL support.
  2. support three mainstream authentication methods 'mysql_native_password', 'caching_sha2_password', and 'sha256_password' and use 'mysql_native_password' as default.
  3. use an in-memory user credential provider to store user and password.

To customize the server configuration, use NewServer() and create the connection via NewCustomizedConn().

Failover

Failover supports promoting a new master and automatically making the other slaves replicate from it when the old master goes down.

Failover supports MySQL >= 5.6.9 with GTID mode; if you use a lower version (e.g. MySQL 5.0 - 5.5), please use MHA or orchestrator instead.

Failover also supports MariaDB >= 10.0.9 with GTID mode.

Why only GTID? Supporting failover without GTID mode is very hard, because a slave cannot find the proper binlog filename and position on the new master. Although many companies still use MySQL 5.0 - 5.5, upgrading to 5.6 or higher is easy.

Driver

The driver package lets you use go-mysql with Go's database/sql like other drivers. A simple example:

package main

import (
	"database/sql"

	_ "github.com/go-mysql-org/go-mysql/driver"
)

func main() {
	// dsn format: "user:password@addr?dbname"
	dsn := "root@127.0.0.1:3306?test"
	db, _ := sql.Open("mysql", dsn)
	db.Close()
}

We pass all tests in https://github.com/bradfitz/go-sql-test using go-mysql driver. :-)
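The DSN format above ("user:password@addr?dbname") is easy to assemble with a small helper; buildDSN is an illustrative stdlib-only sketch, not part of the driver package (it keeps the colon even for an empty password, following the format string literally):

```go
package main

import "fmt"

// buildDSN assembles a DSN in the form "user:password@addr?dbname",
// matching the format documented for this driver.
func buildDSN(user, password, addr, dbname string) string {
	return fmt.Sprintf("%s:%s@%s?%s", user, password, addr, dbname)
}

func main() {
	fmt.Println(buildDSN("root", "", "127.0.0.1:3306", "test"))
	// prints "root:@127.0.0.1:3306?test"
}
```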

Donate

If you like the project and want to buy me a cola, you can do so via PayPal or WeChat Pay.

Feedback

go-mysql is still under development; your feedback is very welcome.

Gmail: [email protected]

Issues
  • ERRO[0006] close sync with err: data len 0 < expected 1

    When using canal with MySQL 5.7 binlogs, an error is reported around GTID 85649a98-8b3e-11e5-b9c9-5510536e2f9f:240

    2018/05/08 00:07:11 [Info] HjEventHandler.go:27 mysql gtid set : 85649a98-8b3e-11e5-b9c9-5510536e2f9f:242 ERRO[0006] close sync with err: data len 0 < expected 1 INFO[0006] table structure changed, clear table cache: huajuan.hj_goods_preferential

    2018/05/08 00:07:11 [Info] HjEventHandler.go:27 mysql gtid set : 85649a98-8b3e-11e5-b9c9-5510536e2f9f:243 ERRO[0006] canal start sync binlog err: data len 0 < expected 1

    I've uploaded the sample binlog file; I hope someone can take a look: mysql-bin.000003.zip

    opened by domyway 40
  • Fix replication of TIMESTAMP on non-UTC machines

    To get the timestamp in any timezone, in the codebase consuming this library, add (and substitute with the timezone of your choice):

    func init() {
    	replication.TimeStringLocation = time.UTC
    }
    

    Fixes: #63

    Also see: github/gh-ost#182 and Shopify/ghostferry#23

    review: @siddontang

    opened by shuhaowu 27
  • support parse time config

    @shlomi-noach

    I added a ParseTime config option to decode timestamp and datetime values into a Time structure.

    opened by siddontang 15
  • Add support for MySQL 8.0 and support for TLS/SSL for both Server and Client

    This PR:

    • Added support for MySQL 8.0 for the Client: starting from MySQL 8.0.4, MySQL uses 'caching_sha2_password' as default authentication method, which caused the connection to fail. This PR fixed the problem by supporting three mainstream auth methods 'mysql_native_password', 'caching_sha2_password', and 'sha256_password', which covers a wide range of MySQL versions from MySQL 5.5 to MySQL 8.0.
    • Added support of new auth methods for the Server: the server now accepts 'mysql_native_password', 'caching_sha2_password', and 'sha256_password' as auth methods. Other old and deprecated auth methods are not supported.
    • Supports TLS/SSL for the Client and the Server: the new design maintains compatibility with the previous releases. Customizations are optional.

    Since the upgrade of auth methods affects the Client and Server design, I made lots of changes to the code while trying to reuse the old code. Some minor existing bugs are fixed too. For instance, the buffer reader for the net.Conn is now removed because it causes the SSL handshake to fail.

    Changing and refactoring can be a bad thing though as they can introduce new bugs. However, I tried my best to make the old tests pass and added new feature tests for different MySQL versions using docker compose to increase testing coverage. For now all test cases passed.

    opened by michael2008 15
  • Seeking advice

    Looking at the code in the mysql package, Result has a Status property (uint16). What is the correct status code returned by a successful operation?

    I ran some tests and found that when a correct response is returned, the value of Status is 2.

    I'd like to confirm whether that is correct.

    opened by always-waiting 14
  • Add more MySQL-8.0 meta data to GTIDEvent and TableMapEvent

    This PR adds more metadata introduced in MySQL 8.0:

    GTIDEvent:

    • immediate/original commit timestamp for this trx
    • transaction length (in bytes) of all binlog events of this trx, including this GTIDEvent. This is useful to detect transaction boundaries
    • immediate/original server version

    TableMapEvent:

    • column names
    • primary key info
    • signedness info for numeric columns
    • collation info for character and enum/set columns
    • geometry type info

    Hopefully these can help solve #427 when using MySQL 8.0

    Example GTIDEvent dump (MySQL-8.0):

    === GTIDEvent === 
    Date: 2020-02-01 19:15:26
    Log position: 3440812
    Event size: 88
    Commit flag: 1                                                                                                                                           
    GTID_NEXT: 5aa72a7f-44a8-11ea-947f-0242ac190002:55
    LAST_COMMITTED: 54
    SEQUENCE_NUMBER: 55
    Immediate commmit timestamp: 1580555726309342 (2020-02-01T19:15:26.309342+08:00)
    Orignal commmit timestamp: 0 (<n/a>)
    Transaction length: 197
    Immediate server version: 80019
    Orignal server version: 0
    

    Example GTIDEvent dump for the same event (MySQL-5.7):

    === GTIDEvent ===
    Date: 2020-02-01 19:15:26
    Log position: 15156
    Event size: 65
    Commit flag: 1
    GTID_NEXT: 5aa72a7f-44a8-11ea-947f-0242ac190002:55
    LAST_COMMITTED: 49                                                                                                                                       
    SEQUENCE_NUMBER: 50
    Immediate commmit timestamp: 0 (<n/a>)
    Orignal commmit timestamp: 0 (<n/a>)
    Transaction length: 0
    Immediate server version: 0
    Orignal server version: 0
    
    

    Example TableMapEvent dump (MySQL-8.0):

    === TableMapEvent ===
    Date: 2020-03-10 15:24:58
    Log position: 78747
    Event size: 580
    TableID: 118
    TableID size: 6
    Flags: 1
    Schema: test
    Table: _types
    Column count: 42
    Column type: 
    00000000  10 01 01 02 09 03 08 f6  04 05 01 02 09 03 08 f6  |................|
    00000010  04 05 0d 0a 13 13 12 12  11 11 fe 0f fe 0f fc fc  |................|
    00000020  fc fc fc fc fc fc fe fe  ff f5                    |..........|
    NULL bitmap: 
    00000000  00 00 fc c0 ff 03                                 |......|
    Signedness bitmap: 
    00000000  00 7f 80                                          |...|
    Default charset: []
    Column charset: [224 224 63 63 63 63 63 63 224 224 224 224]
    Set str value: [[1 2]]
    Enum str value: [[a b]]
    Column name: [b_bit n_boolean n_tinyint n_smallint n_mediumint n_int n_bigint n_decimal n_float n_double nu_tinyint nu_smallint nu_mediumint nu_int nu_bigint nu_decimal nu_float nu_double t_year t_date t_time t_ftime t_datetime t_fdatetime t_timestamp t_ftimestamp c_char c_varchar c_binary c_varbinary c_tinyblob c_blob c_mediumblob c_longblob c_tinytext c_text c_mediumtext c_longtext e_enum s_set g_geometry j_json]
    Geometry type: [0]
    Primary key: []
    Primary key prefix: []
    Enum/set default charset: [224]
    Enum/set column charset: []
    
    

    Example TableMapEvent dump for the same event (MySQL-5.7):

    === TableMapEvent ===
    Date: 2020-03-10 15:24:58
    Log position: 31058
    Event size: 133
    TableID: 117
    TableID size: 6
    Flags: 1
    Schema: test
    Table: _types
    Column count: 42
    Column type: 
    00000000  10 01 01 02 09 03 08 f6  04 05 01 02 09 03 08 f6  |................|
    00000010  04 05 0d 0a 13 13 12 12  11 11 fe 0f fe 0f fc fc  |................|
    00000020  fc fc fc fc fc fc fe fe  ff f5                    |..........|
    NULL bitmap: 
    00000000  00 00 fc c0 ff 03                                 |......|
    Signedness bitmap: 
    Default charset: []
    Column charset: []
    Set str value: []
    Enum str value: []
    Column name: []
    Geometry type: []
    Primary key: []
    Primary key prefix: []
    Enum/set default charset: []
    Enum/set column charset: []
    
    

    ref:

    • https://mysqlhighavailability.com/more-metadata-is-written-into-binary-log/
    • https://mysqlhighavailability.com/taking-advantage-of-new-transaction-length-metadata/
    • https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_binlog_row_metadata
    • https://dev.mysql.com/doc/dev/mysql-server/latest/classbinary__log_1_1Gtid__event.html
    • https://dev.mysql.com/doc/dev/mysql-server/latest/classbinary__log_1_1Table__map__event.html
    opened by huangjunwen 13
  • should be possible to exit from processing file when conditions met

    • adds a property 'exit bool';
    • adds a function SetExit to control it.
    opened by svart-ravn 13
  • TWEAK: use gtid from dump position, and incrementally update it with binlog syncer

    This is an improvement/bugfix to the previous PR.

    1. In the prev PR, we continue with file-based binlog replication from gtid dump, and that introduces a special case. Instead, we can use global.gtid_purged.

    2. Reimplement SyncedGTIDSet because one single gtid is generally useless to restore sync from.

    3. Rename some gtid variables to gset (a set of gtids), for clarity

    #260

    opened by taylorchu 12
  • Allow to synchronise GTIDs 'OnPosSynced'

    We want to keep track of the GTID when a transaction has ended or a table has been updated

    opened by bejelith 11
  • parse gtid for parseHandler if MySQL works in GTID_MODE, and begin to startWithGTID after mysqldump is done

    Purpose of This PR:

    1. if MySQL has GTID_MODE=ON, canal will start to replicate with GTID after mysqldump is done.
    2. Add one more func GtidSet() for parseHandler, to save gtid set after parsing gtid from mysqldump.
    opened by jianhaiqing 10
  • Test pr

    opened by 3762285 1
  • extended ExecuteSelectStreaming

    The other day I saw the memory usage of my mysql server implementation (built with go-mysql) surge to 150GB, and on further inspection I learned this was caused by a SELECT query with a rather large resultset. This piqued my interest in the recently implemented ExecuteSelectStreaming function, but I had a hard time implementing anything useful with it. My server implementation acts as a proxy between clients and multiple mysql servers. Using ExecuteSelectStreaming instead of Execute in my handler's HandleQuery function lacked any way of properly streaming the resultset from the backend mysql server back to my client. This PR solves that issue for me without being too obtrusive.

    The biggest change is the additional ExecuteSelectStreaming callback of type SelectPerResultCallback. Once the SELECT query has succeeded, the preliminary Result, without the actual data, is passed to this callback. This allows me to write this result directly back to the client, which at that point means only the number of fields and the field information is fed back to the client, followed by an EOF. Then, when row data comes in and therefore the SelectPerRowCallback callback is called, I directly write these rows to the client with the new writeFieldValues function. For this to work I had to export formatTextValue and writeValue.

    This is a relevant snippet of my HandleQuery where ExecuteSelectStreaming is used:

    func (q MyHandler) HandleQuery(query string) (*mysql.Result, error) {
    	// ...
    
    	// I use vitess' sqlparsers for the exact same reason as #579
    	if stmt == sqlparser.StmtSelect {
    		// for SELECT queries, stream the result and its rows directly back to the
    		// client rather than reading the whole resultset in memory and then send
    		// that
    		var stream mysql.Result
    		err = backendConn.ExecuteSelectStreaming(query, &stream,
    			// called per row within result
    			func(row []mysql.FieldValue) error {
    				return clientConn.WriteValue(row)
    			},
    			// called per result
    			func(r *mysql.Result) error {
    				return clientConn.WriteValue(r)
    			},
    		)
    
    		return &stream, err
    	} else {
    		// for any query other than SELECT
    		return backendConn.Execute(query)
    	}
    }
    

    Using ExecuteSelectStreaming this way shows no spike or memory usage increase at all when I run the exact same query that consumed 150G of memory before.

    I couldn't find any relevant examples for the use of ExecuteSelectStreaming, and I know this breaks the API for ExecuteSelectStreaming, but if this PR were merged and released in a version like 1.4.0, that doesn't necessarily have to matter. I can imagine this added flexibility could make ExecuteSelectStreaming much more suitable for solving more people's problems.

    Besides that, I added this functionality for prepared SELECT statements as well.

    opened by skoef 0
  • Parse Transaction Sql Should Continue It

    parse query(SAVEPOINT trans2) err line 1 column 9 near "SAVEPOINT trans2" , will skip this event

    opened by HZMarico 1
  • program will panic when calling the RowNumber() function of a Result returned by a DDL or insert/update/delete statement

    how to repeat:

    sql := `create table if not exists t01(id int);`
    result, err := conn.Execute(sql)
    if err != nil {
        fmt.Println(err.Error())
    }
    fmt.Println(result.RowNumber())
    

    result.RowNumber() will cause a panic. I know that getting the row number of those statements is meaningless, but I think it's better to return zero rather than panic the whole program.

    how to fix: I've created a PR #578

    opened by romberli 0
  • why does mysql.Position have no time?

    Why does mysql.Position have no time? func (h *handler) OnXID(nextPos mysql.Position) error {} could include a time property, and every event should carry a time property, right?

    opened by j262965682 0
  • To implement a connection pool for client/Conn

    enhancement 
    opened by atercattus 0
  • Run test also on mysql 8.x

    Current GitHub action .yml uses ubuntu-18.04 with mysql 5.7.

    It would be good to test the package also on mysql 8.x.

    enhancement 
    opened by atercattus 0
  • Streaming MySql ResultSet

    Hi,

    I wanted to use a MySQL proxy for big data analytics, where a query sometimes takes a couple of hours to finish executing and produce a full result. Is it possible to stream the ResultSet while the query is executing?

    enhancement 
    opened by SananGuliyev 4
  • go-mysql backs up binlogs far more slowly than mysqlbinlog

    Measured several times in practice, the speed difference is about 2x. Has the author experimented with this?

    opened by paulmarkyes 2