A MySQL-compatible relational database with a storage-agnostic query engine, implemented in pure Go.

Overview

go-mysql-server

go-mysql-server is a SQL engine which parses standard SQL (based on MySQL syntax) and executes queries on data sources of your choice. A simple in-memory database and table implementation are provided, and you can query any data source you want by implementing a few interfaces.

go-mysql-server also provides a server implementation compatible with the MySQL wire protocol. That means it is compatible with MySQL ODBC, JDBC, or the default MySQL client shell interface.

Dolt, a SQL database with Git-style versioning, is the main database implementation of this package. Check out that project for a reference implementation.

Scope of this project

These are the goals of go-mysql-server:

  • Be a generic extensible SQL engine that performs queries on your data sources.
  • Provide a simple database implementation suitable for use in tests.
  • Define interfaces you can implement to query your own data sources.
  • Provide a runnable server speaking the MySQL wire protocol, connected to data sources of your choice.
  • Optimize query plans.
  • Allow implementors to add their own analysis steps and optimizations.
  • Support indexed lookups and joins on data tables that support them.
  • Support external index driver implementations such as pilosa.
  • With few caveats and using a full database implementation, be a drop-in MySQL database replacement.

Non-goals of go-mysql-server:

  • Be an application/server you can use directly.
  • Provide any kind of backend implementation (other than the memory one used for testing) such as json, csv, yaml. That's for clients to implement and use.

What's the use case of go-mysql-server?

go-mysql-server has two primary use cases:

  1. Stand-in for MySQL in a golang test environment, using the built-in memory database implementation.

  2. Providing access to arbitrary data sources with SQL queries by implementing a handful of interfaces. The most complete real-world implementation is Dolt.

Installation

The import path for the package is github.com/dolthub/go-mysql-server.

To install it, run:

go get github.com/dolthub/go-mysql-server

Go Documentation

SQL syntax

The goal of go-mysql-server is to support 100% of the statements that MySQL does. We are continuously adding more functionality to the engine, but not everything is supported yet. To see what is currently included, check the SUPPORTED file.

Third-party clients

We support and actively test against certain third-party clients to ensure compatibility between them and go-mysql-server. You can check out the list of supported third party clients in the SUPPORTED_CLIENTS file along with some examples on how to connect to go-mysql-server using them.

Available functions

Name Description
ABS(expr) returns the absolute value of an expression
ACOS(expr) returns the arccos of an expression
ARRAY_LENGTH(json) if the json representation is an array, this function returns its size.
ASIN(expr) returns the arcsin of an expression
ATAN(expr) returns the arctan of an expression
AVG(expr) returns the average value of expr in all rows.
CEIL(number) returns the smallest integer value that is greater than or equal to number.
CEILING(number) returns the smallest integer value that is greater than or equal to number.
CHARACTER_LENGTH(str) returns the length of the string in characters.
CHAR_LENGTH(str) returns the length of the string in characters.
COALESCE(...) returns the first non-null value in a list.
CONCAT(...) concatenates any group of fields into a single string.
CONCAT_WS(sep, ...) concatenates any group of fields into a single string. The first argument is the separator for the rest of the arguments. The separator is added between the strings to be concatenated. The separator can be a string, as can the rest of the arguments. If the separator is NULL, the result is NULL.
CONNECTION_ID() returns the current connection ID.
COS(expr) returns the cosine of an expression.
COT(expr) returns the cotangent of an expression.
COUNT(expr) returns a count of the number of non-NULL values of expr in the rows retrieved by a SELECT statement.
CURRENT_USER() returns the current user
DATE(date) returns the date part of the given date.
DATETIME(expr) returns a DATETIME value for the expression given (e.g. the string '2020-01-02').
DATE_ADD(date, interval) adds the interval to the given date.
DATE_SUB(date, interval) subtracts the interval from the given date.
DAY(date) is a synonym for DAYOFMONTH().
DAYOFMONTH(date) returns the day of the month (0-31).
DAYOFWEEK(date) returns the day of the week of the given date.
DAYOFYEAR(date) returns the day of the year of the given date.
DEGREES(expr) returns the number of degrees in the radian expression given.
EXPLODE(...) generates a new row in the result set for each element in the expressions provided.
FIRST(expr) returns the first value in a sequence of elements of an aggregation.
FLOOR(number) returns the largest integer value that is less than or equal to number.
FROM_BASE64(str) decodes the base64-encoded string str.
GREATEST(...) returns the greatest numeric or string value.
HOUR(date) returns the hours of the given date.
IFNULL(expr1, expr2) if expr1 is not NULL, it returns expr1; otherwise it returns expr2.
IF(expr1, expr2, expr3) if expr1 evaluates to true, returns expr2. Otherwise returns expr3.
INSTR(str1, str2) returns the 1-based index of the first occurrence of str2 in str1, or 0 if it does not occur.
IS_BINARY(blob) returns whether a blob is a binary file or not.
JSON_EXTRACT(json_doc, path, ...) extracts data from a json document using json paths. Extracting a string will result in that string being quoted. To avoid this, use JSON_UNQUOTE(JSON_EXTRACT(json_doc, path, ...)).
JSON_UNQUOTE(json) unquotes JSON value and returns the result as a utf8mb4 string.
LAST(expr) returns the last value in a sequence of elements of an aggregation.
LEAST(...) returns the smallest numeric or string value.
LEFT(str, int) returns the first N characters in the string given.
LENGTH(str) returns the length of the string in bytes.
LN(X) returns the natural logarithm of X.
LOG(X), LOG(B, X) if called with one parameter, this function returns the natural logarithm of X. If called with two parameters, this function returns the logarithm of X to the base B. If X is less than or equal to 0, or if B is less than or equal to 1, then NULL is returned.
LOG10(X) returns the base-10 logarithm of X.
LOG2(X) returns the base-2 logarithm of X.
LOWER(str) returns the string str with all characters in lower case.
LPAD(str, len, padstr) returns the string str, left-padded with the string padstr to a length of len characters.
LTRIM(str) returns the string str with leading space characters removed.
MAX(expr) returns the maximum value of expr in all rows.
MID(str, pos, [len]) returns a substring from the provided string starting at pos with a length of len characters. If no len is provided, all characters from pos until the end will be taken.
MIN(expr) returns the minimum value of expr in all rows.
MINUTE(date) returns the minutes of the given date.
MONTH(date) returns the month of the given date.
NOW() returns the current timestamp.
NULLIF(expr1, expr2) returns NULL if expr1 = expr2 is true, otherwise returns expr1.
POW(X, Y) returns the value of X raised to the power of Y.
POWER(X, Y) synonym for POW
RADIANS(expr) returns the radian value of the degrees argument given
RAND(expr?) returns a random number in the range 0 <= x < 1. If an argument is given, it is used to seed the random number generator.
REGEXP_MATCHES(text, pattern, [flags]) returns an array with the matches of the pattern in the given text. Flags can be given to control certain behaviours of the regular expression. Currently, only the i flag is supported, to make the comparison case insensitive.
REPEAT(str, count) returns a string consisting of the string str repeated count times.
REPLACE(str,from_str,to_str) returns the string str with all occurrences of the string from_str replaced by the string to_str.
REVERSE(str) returns the string str with the order of the characters reversed.
ROUND(number, decimals) rounds the number to decimals decimal places.
RPAD(str, len, padstr) returns the string str, right-padded with the string padstr to a length of len characters.
RTRIM(str) returns the string str with trailing space characters removed.
SECOND(date) returns the seconds of the given date.
SIN(expr) returns the sine of the expression given.
SLEEP(seconds) waits for the specified number of seconds (can be fractional).
SOUNDEX(str) returns the soundex of a string.
SPLIT(str,sep) returns the parts of the string str split by the separator sep as a JSON array of strings.
SQRT(X) returns the square root of a nonnegative number X.
SUBSTR(str, pos, [len]) returns a substring from the string str starting at pos with a length of len characters. If no len is provided, all characters from pos until the end will be taken.
SUBSTRING(str, pos, [len]) returns a substring from the string str starting at pos with a length of len characters. If no len is provided, all characters from pos until the end will be taken.
SUBSTRING_INDEX(str, delim, count) returns a substring after count appearances of delim. If count is negative, counts from the right side of the string.
SUM(expr) returns the sum of expr in all rows.
TAN(expr) returns the tangent of the expression given.
TIMEDIFF(expr1, expr2) returns expr1 − expr2 expressed as a time value. expr1 and expr2 are time or date-and-time expressions, but both must be of the same type.
TIMESTAMP(expr) returns a timestamp value for the expression given (e.g. the string '2020-01-02').
TO_BASE64(str) encodes the string str in base64 format.
TRIM(str) returns the string str with all spaces removed.
UNIX_TIMESTAMP(expr?) converts the datetime argument to the number of seconds since the Unix epoch. With no argument, returns the number of seconds since the Unix epoch for the current time.
UPPER(str) returns the string str with all characters in upper case.
USER() returns the current user name.
UTC_TIMESTAMP() returns the current UTC timestamp.
WEEKDAY(date) returns the weekday of the given date.
YEAR(date) returns the year of the given date.
YEARWEEK(date, mode) returns year and week for a date. The year in the result may be different from the year in the date argument for the first and the last week of the year.

Configuration

The behaviour of certain parts of go-mysql-server can be configured using either environment variables or session variables.

Session variables are set using the following SQL queries:

SET <variable name> = <value>
Name Type Description
INMEMORY_JOINS environment If set, all joins are performed in memory. Default is off.
inmemory_joins session If set, all joins are performed in memory. Default is off. This has precedence over INMEMORY_JOINS.
MAX_MEMORY environment The maximum amount of memory, in megabytes, that can be consumed by go-mysql-server. Any in-memory caches or computations will stop using memory once the limit is reached. Note that this may cause certain queries to fail if there is not enough memory available, such as queries using DISTINCT, ORDER BY or GROUP BY with groupings.
DEBUG_ANALYZER environment If set, the analyzer will print debug messages. Default is off.
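For example, in-memory joins can be enabled either process-wide via the environment or per connection via the session variable (the server binary name below is a placeholder, and the variable values are illustrative):

```shell
# Process-wide, read at server start-up:
INMEMORY_JOINS=1 MAX_MEMORY=512 ./your-server-binary

# Per connection, via any MySQL client:
mysql -h 127.0.0.1 -u user -ppass -e "SET inmemory_joins = 1"
```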

Example

go-mysql-server contains both a SQL engine and a server implementation. To start a server, first instantiate the engine and pass it your sql.Database implementation, which will be in charge of all the logic to retrieve the data from your source. Here is an example using the in-memory database implementation:

package main

import (
    "time"

    "github.com/dolthub/go-mysql-server/auth"
    "github.com/dolthub/go-mysql-server/memory"
    "github.com/dolthub/go-mysql-server/server"
    "github.com/dolthub/go-mysql-server/sql"
    sqle "github.com/dolthub/go-mysql-server"
)

func main() {
    driver := sqle.NewDefault()
    driver.AddDatabase(createTestDatabase())

    config := server.Config{
        Protocol: "tcp",
        Address:  "localhost:3306",
        Auth:     auth.NewNativeSingle("user", "pass", auth.AllPermissions),
    }

    s, err := server.NewDefaultServer(config, driver)
    if err != nil {
        panic(err)
    }

    if err = s.Start(); err != nil {
        panic(err)
    }
}

func createTestDatabase() *memory.Database {
    const (
        dbName    = "test"
        tableName = "mytable"
    )

    db := memory.NewDatabase(dbName)
    table := memory.NewTable(tableName, sql.Schema{
        {Name: "name", Type: sql.Text, Nullable: false, Source: tableName},
        {Name: "email", Type: sql.Text, Nullable: false, Source: tableName},
        {Name: "phone_numbers", Type: sql.JSON, Nullable: false, Source: tableName},
        {Name: "created_at", Type: sql.Timestamp, Nullable: false, Source: tableName},
    })

    db.AddTable(tableName, table)
    ctx := sql.NewEmptyContext()

    rows := []sql.Row{
        sql.NewRow("John Doe", "[email protected]", []string{"555-555-555"}, time.Now()),
        sql.NewRow("John Doe", "[email protected]", []string{}, time.Now()),
        sql.NewRow("Jane Doe", "[email protected]", []string{}, time.Now()),
        sql.NewRow("Evil Bob", "[email protected]", []string{"555-666-555", "666-666-666"}, time.Now()),
    }

    for _, row := range rows {
        table.Insert(ctx, row)
    }

    return db
}

Then, you can connect to the server with any MySQL client:

> mysql --host=127.0.0.1 --port=3306 -u user -ppass test -e "SELECT * FROM mytable"
+----------+-------------------+-------------------------------+---------------------+
| name     | email             | phone_numbers                 | created_at          |
+----------+-------------------+-------------------------------+---------------------+
| John Doe | [email protected]      | ["555-555-555"]               | 2018-04-18 10:42:58 |
| John Doe | [email protected]   | []                            | 2018-04-18 10:42:58 |
| Jane Doe | [email protected]      | []                            | 2018-04-18 10:42:58 |
| Evil Bob | [email protected] | ["555-666-555","666-666-666"] | 2018-04-18 10:42:58 |
+----------+-------------------+-------------------------------+---------------------+

See the complete example here.

Queries examples

SELECT count(name) FROM mytable
+---------------------+
| COUNT(mytable.name) |
+---------------------+
|                   4 |
+---------------------+

SELECT name,year(created_at) FROM mytable
+----------+--------------------------+
| name     | YEAR(mytable.created_at) |
+----------+--------------------------+
| John Doe |                     2018 |
| John Doe |                     2018 |
| Jane Doe |                     2018 |
| Evil Bob |                     2018 |
+----------+--------------------------+

SELECT email FROM mytable WHERE name = 'Evil Bob'
+-------------------+
| email             |
+-------------------+
| [email protected] |
+-------------------+

Custom data source implementation

To create your own data source implementation you need to implement the following interfaces:

  • sql.Database interface. This interface will provide tables from your data source. You can also implement other interfaces on your database to unlock additional functionality:

    • sql.TableCreator to support creating new tables
    • sql.TableDropper to support dropping tables
    • sql.TableRenamer to support renaming tables
    • sql.ViewCreator to support creating persisted views on your tables
    • sql.ViewDropper to support dropping persisted views
  • sql.Table interface. This interface will provide rows of values from your data source. You can also implement other interfaces on your table to unlock additional functionality:

    • sql.InsertableTable to allow your data source to be updated with INSERT statements.
    • sql.UpdateableTable to allow your data source to be updated with UPDATE statements.
    • sql.DeletableTable to allow your data source to be updated with DELETE statements.
    • sql.ReplaceableTable to allow your data source to be updated with REPLACE statements.
    • sql.AlterableTable to allow your data source to have its schema modified by adding, dropping, and altering columns.
    • sql.IndexedTable to declare your table's native indexes to speed up query execution.
    • sql.IndexAlterableTable to accept the creation of new native indexes.
    • sql.ForeignKeyAlterableTable to signal your support of foreign key constraints in your table's schema and data.
    • sql.ProjectedTable to return rows that only contain a subset of the columns in the table. This can make query execution faster.
    • sql.FilteredTable to filter the rows returned by your table to those matching a given expression. This can make query execution faster (if your table implementation can filter rows more efficiently than checking an expression on every row in a table).

You can see a really simple data source implementation in the memory package.

Testing your data source implementation

go-mysql-server provides a suite of engine tests that you can use to validate that your implementation works as expected. See the enginetest package for details and examples.

Indexes

go-mysql-server exposes a series of interfaces to allow you to implement your own indexes so you can speed up your queries.

Native indexes

Tables can declare that they support native indexes, which means that they support efficiently returning a subset of their rows that match an expression. The memory package contains an example of this behavior, but please note that it is only for example purposes and doesn't actually make queries faster (although we could change this in the future).

Integrators should implement the sql.IndexedTable interface to declare which indexes their tables support and provide a means of returning a subset of the rows based on an sql.IndexLookup provided by their sql.Index implementation. There are a variety of extensions to sql.Index that can be implemented, each of which unlocks additional capabilities:

  • sql.Index. Base-level interface, supporting equality lookups for an index.
  • sql.AscendIndex. Adds support for > and >= indexed lookups.
  • sql.DescendIndex. Adds support for < and <= indexed lookups.
  • sql.NegateIndex. Adds support for negating other index lookups.
  • sql.MergeableIndexLookup. Adds support for merging two sql.IndexLookups together to create a new one, representing AND and OR expressions on indexed columns.
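The merging behaviour can be pictured independently of the engine's types. Below is a self-contained sketch (the rowIDLookup, intersect, and union names are invented for illustration and are not part of the go-mysql-server API) in which a lookup is just a sorted set of row IDs: an AND over indexed columns merges by intersection, an OR merges by union.

```go
package main

import "fmt"

// rowIDLookup is a toy stand-in for an IndexLookup: a sorted set of matching row IDs.
type rowIDLookup []int

// intersect merges two lookups the way an AND over indexed columns would.
func intersect(a, b rowIDLookup) rowIDLookup {
	var out rowIDLookup
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		switch {
		case a[i] < b[j]:
			i++
		case a[i] > b[j]:
			j++
		default:
			out = append(out, a[i])
			i++
			j++
		}
	}
	return out
}

// union merges two lookups the way an OR over indexed columns would.
func union(a, b rowIDLookup) rowIDLookup {
	var out rowIDLookup
	i, j := 0, 0
	for i < len(a) || j < len(b) {
		switch {
		case j == len(b) || (i < len(a) && a[i] < b[j]):
			out = append(out, a[i])
			i++
		case i == len(a) || a[i] > b[j]:
			out = append(out, b[j])
			j++
		default:
			out = append(out, a[i])
			i++
			j++
		}
	}
	return out
}

func main() {
	a := rowIDLookup{1, 3, 5, 7} // e.g. rows where col1 = 'x'
	b := rowIDLookup{3, 4, 5}    // e.g. rows where col2 > 10
	fmt.Println(intersect(a, b)) // AND -> [3 5]
	fmt.Println(union(a, b))     // OR  -> [1 3 4 5 7]
}
```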

Custom index driver implementation

Index drivers provide different backends for storing and querying indexes, without the need for a table to store and query its own native indexes. To implement a custom index driver you need to implement a few things:

  • sql.IndexDriver interface, which will be the driver itself. Note that your driver must return a unique ID in the ID method. This ID is unique for your driver and should not clash with any other registered driver. It's the driver's responsibility to be fault tolerant and be able to automatically detect and recover from corruption in indexes.
  • sql.Index interface, returned by your driver when an index is loaded or created.
  • sql.IndexValueIter interface, which will be returned by your sql.IndexLookup and should return the values of the index.
  • Don't forget to register the index driver in your sql.Context using context.RegisterIndexDriver(mydriver) to be able to use it.

To create indexes using your custom index driver you need to use extension syntax USING driverid on the index creation statement. For example:

CREATE INDEX foo ON table USING driverid (col1, col2)

go-mysql-server does not provide a production index driver implementation. We previously provided a pilosa implementation, but removed it due to the difficulty of supporting it on all platforms (pilosa doesn't work on Windows).

You can see an example of a driver implementation in the memory package.

Metrics

go-mysql-server uses the github.com/go-kit/kit/metrics module to expose metrics (counters, gauges, histograms) for certain packages (so far: engine, analyzer, regex). If you already have a metrics server (prometheus, statsd/statsite, influxdb, etc.) and you want to gather metrics from go-mysql-server components as well, you will need to initialize some global variables with implementations that satisfy the following interfaces:

// Counter describes a metric that accumulates values monotonically.
type Counter interface {
	With(labelValues ...string) Counter
	Add(delta float64)
}

// Gauge describes a metric that takes specific values over time.
type Gauge interface {
	With(labelValues ...string) Gauge
	Set(value float64)
	Add(delta float64)
}

// Histogram describes a metric that takes repeated observations of the same
// kind of thing, and produces a statistical summary of those observations,
// typically expressed as quantiles or buckets.
type Histogram interface {
	With(labelValues ...string) Histogram
	Observe(value float64)
}
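As a quick illustration, here is a minimal in-memory implementation of the Counter interface above, using only the standard library (mapCounter and its value method are inventions for this sketch; the real go-kit implementations handle labels and export differently):

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// Counter matches the go-kit interface quoted above.
type Counter interface {
	With(labelValues ...string) Counter
	Add(delta float64)
}

// mapCounter is a toy Counter that accumulates totals per label set.
type mapCounter struct {
	mu     *sync.Mutex
	totals map[string]float64
	labels []string
}

func newMapCounter() *mapCounter {
	return &mapCounter{mu: &sync.Mutex{}, totals: map[string]float64{}}
}

// With returns a view of the counter bound to additional label values;
// all views share the same underlying totals map.
func (c *mapCounter) With(labelValues ...string) Counter {
	labels := append(append([]string{}, c.labels...), labelValues...)
	return &mapCounter{mu: c.mu, totals: c.totals, labels: labels}
}

// Add accumulates delta under the counter's current label set.
func (c *mapCounter) Add(delta float64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.totals[strings.Join(c.labels, ",")] += delta
}

// value reads back the accumulated total for a label set.
func (c *mapCounter) value(labelValues ...string) float64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.totals[strings.Join(labelValues, ",")]
}

func main() {
	qc := newMapCounter()
	qc.With("query", "SELECT").Add(1)
	qc.With("query", "SELECT").Add(1)
	qc.With("query", "INSERT").Add(1)
	fmt.Println(qc.value("query", "SELECT")) // 2
}
```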

You can use one of the go-kit implementations or write your own. For instance, to expose metrics to a Prometheus server, set up the following variables before starting the MySQL engine:

import (
    "github.com/go-kit/kit/metrics/prometheus"
    promopts "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

//....

// engine metrics
sqle.QueryCounter = prometheus.NewCounterFrom(promopts.CounterOpts{
    Namespace: "go_mysql_server",
    Subsystem: "engine",
    Name:      "query_counter",
}, []string{
    "query",
})
sqle.QueryErrorCounter = prometheus.NewCounterFrom(promopts.CounterOpts{
    Namespace: "go_mysql_server",
    Subsystem: "engine",
    Name:      "query_error_counter",
}, []string{
    "query",
    "error",
})
sqle.QueryHistogram = prometheus.NewHistogramFrom(promopts.HistogramOpts{
    Namespace: "go_mysql_server",
    Subsystem: "engine",
    Name:      "query_histogram",
}, []string{
    "query",
    "duration",
})

// analyzer metrics
analyzer.ParallelQueryCounter = prometheus.NewCounterFrom(promopts.CounterOpts{
    Namespace: "go_mysql_server",
    Subsystem: "analyzer",
    Name:      "parallel_query_counter",
}, []string{
    "parallelism",
})

// regex metrics
regex.CompileHistogram = prometheus.NewHistogramFrom(promopts.HistogramOpts{
    Namespace: "go_mysql_server",
    Subsystem: "regex",
    Name:      "compile_histogram",
}, []string{
    "regex",
    "duration",
})
regex.MatchHistogram = prometheus.NewHistogramFrom(promopts.HistogramOpts{
    Namespace: "go_mysql_server",
    Subsystem: "regex",
    Name:      "match_histogram",
}, []string{
    "string",
    "duration",
})

One important note: internally we set some labels for metrics, which is why you have to pass those keys ("duration", "query", "driver", ...) when registering the metrics in Prometheus. Other systems may have different requirements.

Powered by go-mysql-server

Acknowledgements

go-mysql-server was originally developed by the {source-d} organization, and this repository was originally forked from src-d. We want to thank the entire {source-d} development team for their work on this project, especially Miguel Molina (@erizocosmico) and Juanjo Álvarez Martinez (@juanjux).

License

Apache License 2.0, see LICENSE

Comments
  • LastInsertId always returns 0

    When running several insert commands like

    result, err := db.Exec("INSERT INTO mytable SET number = 18;")
    id, err := result.LastInsertId()
    

    Against a simple table like

    CREATE TABLE IF NOT EXISTS mytable (
    id int unsigned NOT NULL AUTO_INCREMENT,
    number int unsigned DEFAULT NULL,
    PRIMARY KEY (id),
    ) DEFAULT CHARSET=utf8;
    

    the returned id is always 0. While the go-mysql-driver returns the correct pkey.

    Used libraries: github.com/dolthub/go-mysql-server v0.6.1-0.20201228192939-415fc40f3a71 github.com/go-sql-driver/mysql v1.5.0

    opened by eqinox76 12
  • Support for prepared statements

    This is more important than we have been treating it, because there are many drivers that do this under the hood without clients explicitly asking for it. From https://github.com/liquidata-inc/go-mysql-server/issues/169

    opened by zachmu 11
  • Question/Feature Request: How can I increase the parallelism of expression evaluation?

    I have a situation where I have a custom SQL function that is a bit slow (like a single network request slow). Because rows are demanded one at a time from a RowIter, these expressions are evaluated one at a time, meaning we run these network requests one at a time. I would like some way to evaluate these rows in parallel as this would greatly improve the speed of my queries. I can't prefetch everything because I do not have all the data needed for all the network requests until query execution. A while back I tried making a custom SQL plan node which wraps another node and prefetches rows from its child node in parallel, but I ran into some issues where the RowIter implementation I was calling misbehaved as it was not threadsafe. Do you have any suggestions for me? Was the parallel prefetch node a good/bad idea? I really appreciate your help with this, thanks.

    opened by andremarianiello 9
  • Error with passing time.Time as a field for saving

    I am trying this for writing DB unit tests. Pretty much seeing if I can get the relevant tables so we can write some tests against the DB code that tries to interact with them. So mostly trying to work with code and SQL that we know works against a MySQL instance and see if it can do the same things with this instance (so can't really change around the syntax or calling too much).

    Having an issue where it's returning: Error 1105 (HY000): incompatible conversion to SQL type: TIMESTAMP

    From the time.Now() instance passed as part of a Prepared Statement.

    The field itself is being used with: CREATE TABLE IF NOT EXISTS ( ... TIMESTAMP NOT NULL ...) (which I thought should be safe enough)

    A bit of digging around in your codebase shows it's coming from /sql/datetime.go->ConvertWithoutRangeCheck

    It looks like it thinks the time instance being passed is a []uint8 instead of the actual time.Time or any of the other int types. Dumping it as-is (with fmt.Println) shows the slice contains: [50 48 50 50 45 48 55 45 50 57 32 48 52 58 53 48 58 52 55 46 49 57 49 51 54 54] (which probably corresponds to something in my local GMT+10 instance)

    This looks very similar to this issue: https://4rum.dev/t/unsupported-scan-storing-driver-value-type-uint8-into-type-time-time/103/2 That others have reported against MySQL itself.

    Having a quick scan through the repo, nothing is standing out for me as a similar flag / config value that could be set to replicate this.

    Unsure what the actual fix happened for this, but will add more if I get closer to figuring out ways around this.

    opened by AndrewTheJavaGuy 8
  • Cast equivalent float64s to int64 rather than failing with ErrInvalidValue

    When working with Pandas (both with and without doltpy) I ran into this issue multiple times - integer columns are converted to floats (due to the way Python handles nan values), which then causes dolt table import to complain with the following error despite the floats being "integral":

    Rows Processed: 0, Additions: 0, Modifications: 0, Had No Effect: 0
    
    A bad row was encountered while moving data.
    Bad Row: 
    error: '10.0' is not a valid value for 'INT'
    These can be ignored using the '--continue'
    

    This PR converts "integral"/equivalent floats to int64s to prevent this from happening. This still prevents non-integral floats from being imported, e.g.:

    Rows Processed: 0, Additions: 0, Modifications: 0, Had No Effect: 0
    
    A bad row was encountered while moving data.
    Bad Row: 
    error: '10.1' is not a valid value for 'INT'
    These can be ignored using the '--continue'
    

    I'm not sure how/where this should be tested, but if it is an acceptable PR I'll be happy to write the tests for it too.

    P.S. The isIntegral function can be removed and used in the if-statement as a condition if that's preferable, though I think it should be documented (perhaps in a comment) since its purpose may not be immediately obvious.

    opened by abmyii 8
  • Index error lost in parent call

    Hi,

    First, thank you for the great package!

    I'm not sure if this is intentional, but an error reported by a custom index implementation is not handled. The code is here: https://github.com/dolthub/go-mysql-server/blob/main/sql/analyzer/indexes.go#L71

    Should errInAnalysis = err be added here (same as the previous if line 61) so the error is reported to the caller?

    If so, I can send a PR for the fix. If not, how should a custom index implementation handle errors that should stop the process?

    Thanks.

    opened by jfrabaute 7
  • proposal: Refactor Indexed Table Access

    This is a design proposal to refactor the way indexed table access works and what abstractions are exposed to Integrators:

    The core change comes to sql.IndexLookup which will now be a concrete type constructed by the engine, not integrators:

    -type IndexLookup interface {
    -	fmt.Stringer
    +type IndexLookup struct {
    +	Str string
     	// Index returns the index that created this IndexLookup.
    -	Index() Index
    +	Index Index
     	// Ranges returns each Range that created this IndexLookup.
    -	Ranges() RangeCollection
    +	Ranges RangeCollection
     }
    

    This implies removing sql.Index.NewLookup, which will be replaced by an integrator method to indicate whether a given index can support a given lookup:

    type Index interface {
    	// ID returns the identifier of the index.
    	ID() string
    	// Database returns the database name this index belongs to.
    	Database() string
    	// Table returns the table name this index belongs to.
    	Table() string
    	// Expressions returns the indexed expressions. If the result is more than
    	// one expression, it means the index has multiple columns indexed. If it's
    	// just one, it means it may be an expression or a column.
    	Expressions() []string
    	// IsUnique returns whether this index is unique
    	IsUnique() bool
    	// Comment returns the comment for this index
    	Comment() string
    	// IndexType returns the type of this index, e.g. BTREE
    	IndexType() string
    	// IsGenerated returns whether this index was generated. Generated indexes
    	// are used for index access, but are not displayed (such as with SHOW INDEXES).
    	IsGenerated() bool
    -	// NewLookup returns a new IndexLookup for the ranges given. Ranges represent filters over columns. Each Range
    -	// is ordered by the column expressions (as returned by Expressions) with the RangeColumnExpr representing the
    -	// searchable area for each column expression. Each Range given will not overlap with any other ranges. Additionally,
    -	// all ranges will have the same length, and may represent a partial index (matching a prefix rather than the entire
    -	// index). If an integrator is unable to process the given ranges, then a nil may be returned. An error should be
    -	// returned only in the event that an error occurred.
    -	NewLookup(ctx *Context, ranges ...Range) (IndexLookup, error)
    +	// SupportsLookup returns true if the Index supports |lookup|.
    +	SupportsLookup(ctx *Context, lookup IndexLookup) (bool, error)
    	// ColumnExpressionTypes returns each expression and its associated Type. Each expression string should exactly
    	// match the string returned from Index.Expressions().
    	ColumnExpressionTypes(ctx *Context) []ColumnExpressionType
    }
    

    Finally, IndexedTable and IndexAddressableTable collapse into a single interface that exposes a set of Index definitions and an AccessIndex() method that returns a RowIter for a given IndexLookup:

    diff --git a/sql/core.go b/sql/core.go
    index d0b3ba54..8ed56b1c 100644
    --- a/sql/core.go
    +++ b/sql/core.go
    @@ -430,28 +430,14 @@ type IndexColumn struct {
     // speed up execution of queries that reference those columns. Unlike DriverIndexableTable, IndexedTable doesn't need a
     // separate index driver to function.
     type IndexedTable interface {
    -	IndexAddressableTable
     	// GetIndexes returns all indexes on this table.
     	GetIndexes(ctx *Context) ([]Index, error)
    -}
    -
    -// IndexAddressable provides a Table that has its row iteration restricted to only the rows that match the given index
    -// lookup.
    -type IndexAddressable interface {
    -	// WithIndexLookup returns a version of the table that will return only the rows specified by the given IndexLookup,
    -	// which was in turn created by a call to Index.Get() for a set of keys for this table.
    -	WithIndexLookup(IndexLookup) Table
    -}
    -
    -// IndexAddressableTable is a table that can restrict its row iteration to only the rows that match the given index
    -// lookup.
    -type IndexAddressableTable interface {
    -	Table
    -	IndexAddressable
    +	// AccessIndex constructs a RowIter from an IndexLookup.
    +	AccessIndex(ctx *Context, lookup IndexLookup) (RowIter, error)
     }
    
    opened by andrew-wm-arthur 6
  • go 1.17 support

    go 1.17 support

    Any possibility of supporting go 1.17? We still use go 1.17 for now, and we don't have plans to move to 1.18 due to the additional compilation costs. Also, we are not using 1.18 features.

    opened by joel-rieke 6
  • Aggregate Partition Window Rows beyond 127

    Aggregate Partition Window Rows beyond 127

    The number of rows referenced in a partition "ROWS BETWEEN" clause appears to be stored in a signed 8-bit integer, because offsets up to and including 127 work, but 128 and above do not.

    stocks> select date, act_symbol, avg(close) OVER (PARTITION BY act_symbol ORDER BY date ROWS BETWEEN 127 PRECEDING AND CURRENT ROW) AS ma200 FROM ohlcv WHERE act_symbol='AAPL' having date = '2022-02-11';
    +-------------------------------+------------+--------------------+
    | date                          | act_symbol | ma200              |
    +-------------------------------+------------+--------------------+
    | 2022-02-11 00:00:00 +0000 UTC | AAPL       | 158.39554687499958 |
    +-------------------------------+------------+--------------------+

    stocks> select date, act_symbol, avg(close) OVER (PARTITION BY act_symbol ORDER BY date ROWS BETWEEN 128 PRECEDING AND CURRENT ROW) AS ma200 FROM ohlcv WHERE act_symbol='AAPL' having date = '2022-02-11';
    offset must be a non-negative integer; found: 128

    bug 
    opened by inversewd2 6
  • Proxy support?

    Proxy support?

    Hello,

    Has there been any investigation or proof of concepts around implementing a MySQL proxy based on go-mysql-server?

    I have prototyped a few different storage backend options, such as S3 and CSV, and go-mysql-server has worked out very well. I am thinking it could be very beneficial as a generic caching solution for MySQL.

    If there has been any work in this area, are there any documentation or branches you can share? Any information you can provide would be helpful.

    Thanks, Michael

    opened by mgale 6
  • Add support for read-only transactions

    Add support for read-only transactions

    Running "START TRANSACTION READ ONLY" on latest master gives me:

     Error 1105: syntax error at position 23 near 'READ'
    

    So I am assuming that read-only transactions are not supported.

    Would it make sense to at least swallow the error and treat the transaction as a read-write one?

    opened by bojanz 6
  • Some queries using `HAVING` can't be executed without `GROUPBY`

    Some queries using `HAVING` can't be executed without `GROUPBY`

    Some (but not all) queries that use HAVING without an explicit GROUP BY fail with the error `found HAVING clause with no GROUP BY`. MySQL is able to execute these without problems, so GMS/Dolt should, too.

    Example: select t1.val as a from numbers as t1 having a = t1.val;

    opened by fulghum 0
  • Alias selection and conflated results when the same alias name is projected multiple times before a subquery

    Alias selection and conflated results when the same alias name is projected multiple times before a subquery

    When projection expressions define multiple aliases with the same name and project a subquery using that alias name, we don't match MySQL's behavior.

    Example Query: select 0 as a, 1 as a, (SELECT x from xy where x = a);
    MySQL Results: {0, 1, 0}
    GMS Results: {1, 1, 1}

    GMS currently chooses the second alias when it is referenced in the subquery, and the first column gets conflated with the second alias and returns incorrect results, too. This bug doesn't happen without the subquery projection expression, so it seems likely that we mangle the projection expressions as part of manipulating the plan.

    bug 
    opened by fulghum 0
  • Log warnings on ambiguous name qualification and resolution

    Log warnings on ambiguous name qualification and resolution

    In certain cases, MySQL allows ambiguity in referenced names and will register a warning for the query instead of throwing an error. GMS should follow this same behavior for consistency.

    An example from MySQL's SELECT Reference Documentation:

    If the HAVING clause refers to a column that is ambiguous, a warning occurs. In the following statement, col2 is ambiguous because it is used as both an alias and a column name: SELECT COUNT(col1) AS col2 FROM t GROUP BY col2 HAVING col2 = 2;

    opened by fulghum 0
  • Bad 'table not found' error on nested subqueries

    Bad 'table not found' error on nested subqueries

    error: 'table not found: dcim_rackgroup, maybe you mean dcim_rackgroup?' for query:

    SELECT COUNT(*) FROM (
    	SELECT (
    		SELECT count(*) 
    		FROM (
    			SELECT U0.`id` 
    			FROM `dcim_rack` U0 
    			INNER JOIN `dcim_rackgroup` U1 
    			ON (U0.`group_id` = U1.`id`) 
    			WHERE (
    				U1.`lft` >= `dcim_rackgroup`.`lft` AND 
    				U1.`lft` <= `dcim_rackgroup`.`rght` AND 
    				U1.`tree_id` = `dcim_rackgroup`.`tree_id`
    			)
    		) _count
    	) AS `rack_count` 
    	FROM `dcim_rackgroup` 
    	WHERE `dcim_rackgroup`.`id` 
    	IN ('418dd0dd47504bb190f354cf23ded6a6', 'd6d30bef4def4b66bcd180d4252eca7d', '34a74e488171481b96b222bf56a55bb9', '289e27c03cee4c299a3fa10517b54c52')
    ) subquery
    

    schema:

    CREATE TABLE `dcim_rackgroup` (
      `id` char(32) NOT NULL,
      `lft` int unsigned NOT NULL,
      `rght` int unsigned NOT NULL,
      `tree_id` int unsigned NOT NULL,
      `level` int unsigned NOT NULL,
      `parent_id` char(32),
      PRIMARY KEY (`id`),
      KEY `dcim_rackgroup_tree_id_9c2ad6f4` (`tree_id`),
      CONSTRAINT `dcim_rackgroup_parent_id_cc315105_fk_dcim_rackgroup_id` FOREIGN KEY (`parent_id`) REFERENCES `dcim_rackgroup` (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_bin;
    
    CREATE TABLE `dcim_rack` (
      `id` char(32) NOT NULL,
      `group_id` char(32),
      PRIMARY KEY (`id`),
      KEY `dcim_rack_group_id_44e90ea9` (`group_id`),
      CONSTRAINT `dcim_rack_group_id_44e90ea9_fk_dcim_rackgroup_id` FOREIGN KEY (`group_id`) REFERENCES `dcim_rackgroup` (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_bin;
    
    opened by andrew-wm-arthur 2
Releases(v0.12.0)
Owner
DoltHub