Type-safe Redis client for Golang

Overview

Redis client for Golang

❤️ Uptrace.dev - distributed traces, logs, and errors in one place

Ecosystem

Features

Installation

go-redis supports the two most recent Go versions and requires a Go version with module support. So make sure to initialize a Go module first:

go mod init github.com/my/repo

Then install go-redis/v8 (note the v8 in the import path; omitting it is a common mistake):

go get github.com/go-redis/redis/v8

Quickstart

import (
    "context"
    "fmt"

    "github.com/go-redis/redis/v8"
)

var ctx = context.Background()

func ExampleClient() {
    rdb := redis.NewClient(&redis.Options{
        Addr:     "localhost:6379",
        Password: "", // no password set
        DB:       0,  // use default DB
    })

    err := rdb.Set(ctx, "key", "value", 0).Err()
    if err != nil {
        panic(err)
    }

    val, err := rdb.Get(ctx, "key").Result()
    if err != nil {
        panic(err)
    }
    fmt.Println("key", val)

    val2, err := rdb.Get(ctx, "key2").Result()
    if err == redis.Nil {
        fmt.Println("key2 does not exist")
    } else if err != nil {
        panic(err)
    } else {
        fmt.Println("key2", val2)
    }
    // Output: key value
    // key2 does not exist
}

Look and feel

Some corner cases:

// SET key value EX 10 NX
set, err := rdb.SetNX(ctx, "key", "value", 10*time.Second).Result()

// SET key value keepttl NX
set, err := rdb.SetNX(ctx, "key", "value", redis.KeepTTL).Result()

// SORT list LIMIT 0 2 ASC
vals, err := rdb.Sort(ctx, "list", &redis.Sort{Offset: 0, Count: 2, Order: "ASC"}).Result()

// ZRANGEBYSCORE zset -inf +inf WITHSCORES LIMIT 0 2
vals, err := rdb.ZRangeByScoreWithScores(ctx, "zset", &redis.ZRangeBy{
    Min: "-inf",
    Max: "+inf",
    Offset: 0,
    Count: 2,
}).Result()

// ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3 AGGREGATE SUM
vals, err := rdb.ZInterStore(ctx, "out", &redis.ZStore{
    Keys: []string{"zset1", "zset2"},
    Weights: []int64{2, 3},
}).Result()

// EVAL "return {KEYS[1],ARGV[1]}" 1 "key" "hello"
vals, err := rdb.Eval(ctx, "return {KEYS[1],ARGV[1]}", []string{"key"}, "hello").Result()

// custom command
res, err := rdb.Do(ctx, "set", "key", "value").Result()

Run the tests

go-redis will start a redis-server and run the test cases.

The paths of the redis-server binary and the Redis config file are defined in main_test.go:

var (
	redisServerBin, _  = filepath.Abs(filepath.Join("testdata", "redis", "src", "redis-server"))
	redisServerConf, _ = filepath.Abs(filepath.Join("testdata", "redis", "redis.conf"))
)

For local testing, you can change these variables to point to your local files, or create a symlink to your redis-server binary and copy the config file into testdata/redis/:

ln -s /usr/bin/redis-server ./go-redis/testdata/redis/src
cp ./go-redis/testdata/redis.conf ./go-redis/testdata/redis/

Lastly, run:

go test

See also

Issues
  • undefined: otel.Meter or cannot find package

    To fix cannot find package "go.opentelemetry.io/otel/api/trace" or undefined: otel.Meter:

    1. Make sure to initialize a Go module: go mod init github.com/my/repo

    2. Make sure to use the correct import path, ending in v8: go get github.com/go-redis/redis/v8

    For example:

    mkdir /tmp/redis-test
    cd /tmp/redis-test
    go mod init redis-test
    go get github.com/go-redis/redis/v8
    

    The root cause

    The error is not caused by OpenTelemetry; it is just the first module Go tries to install. The error will not go away until you start using Go modules properly.

    The presence of $GOROOT or $GOPATH in error messages indicates that you are NOT using Go modules.

    opened by vmihailenco 32
  • V8 performance degradation ~20%

    @monkey92t

    Hi, thank you for your tests. I ran your tests in our environment and saw similar comparative results. However, when I slightly modified the tests to reflect our use case more accurately (and how Go's HTTP server spawns a goroutine for each request), all of a sudden the performance degraded for V8. This is especially evident at 100+ concurrency.

    2 changes that were made:

    1. Instead of pre-spawning goroutines that each run a fixed number of Get/Set calls in a for loop (this is retained as get2/set2), the test runs through a fixed number of requests and spawns a goroutine per request (only up to the concurrency limit) to process them.
    2. Each request generates a random key so the load is spread across the Redis cluster.

    Both V7 and V8 saw a decrease in throughput with a goroutine per request compared to pre-spawned goroutines. However, the decrease for V7 is very small, as expected, while for V8 it is quite dramatic.
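
    In outline, the goroutine-per-request pattern looks like this (a hypothetical sketch, not the actual test program; runPerRequest and the key scheme are made up for illustration):

    import (
    	"context"
    	"fmt"
    	"math/rand"
    	"sync"

    	"github.com/go-redis/redis/v8"
    )

    // runPerRequest spawns one goroutine per request, capped at `concurrency`
    // in-flight goroutines by a semaphore channel.
    func runPerRequest(ctx context.Context, rdb redis.Cmdable, totalRequests, concurrency int) {
    	sem := make(chan struct{}, concurrency)
    	var wg sync.WaitGroup
    	for i := 0; i < totalRequests; i++ {
    		sem <- struct{}{} // block until a slot frees up
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			defer func() { <-sem }()
    			// A random key per request spreads load across cluster slots.
    			key := fmt.Sprintf("key-%d", rand.Intn(1000000))
    			_ = rdb.Get(ctx, key).Err()
    		}()
    	}
    	wg.Wait()
    }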

    go-redis version: v7.4.0 and v8.6.0

    redis-cluster (version 5.0.7): master: 84 instances slave: 84 instances

    This is the RedisCluster test result: https://github.com/go-redis/redis/files/6158805/Results.pdf

    This is the test program: https://github.com/go-redis/redis/files/6158824/perftest.go.gz

    opened by jfjm2018 28
  • high memory usage + solution

    Hi,

    I noticed that the memory usage was very high in my project. I did memory profiling with inuse_space, and 90% of my memory is used by go-redis in WriteBuffer. If I understand correctly, each connection in the pool has its own WriteBuffer.

    My project runs 80 goroutines (on 8 CPUs), and each goroutine SETs Redis keys. My Redis keys are large: several MB each (less than 100 MB). So it's easy to see why the memory usage is so high.

    I think I have a solution, but it requires changes in go-redis internals. We could use a global sync.Pool of WriteBuffer instead.

    WDYT ?
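
    A minimal sketch of that idea, assuming a plain bytes.Buffer in place of go-redis's internal buffer type (names here are illustrative, not go-redis internals):

    import (
    	"bytes"
    	"sync"
    )

    // One process-wide pool of write buffers shared by all connections,
    // instead of every pooled connection owning its own multi-MB buffer.
    var writeBufPool = sync.Pool{
    	New: func() interface{} { return new(bytes.Buffer) },
    }

    // withWriteBuffer borrows a buffer for a single write, then resets it
    // and returns it, so idle connections hold no buffer memory.
    func withWriteBuffer(fn func(*bytes.Buffer) error) error {
    	buf := writeBufPool.Get().(*bytes.Buffer)
    	defer func() {
    		buf.Reset()
    		writeBufPool.Put(buf)
    	}()
    	return fn(buf)
    }

    One caveat: Reset keeps the underlying capacity, so a real implementation would probably drop buffers above a size threshold instead of pooling them indefinitely.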

    opened by pierrre 24
  • hscan adds support for i386 platform

    set: GOARCH=386

    redis 127.0.0.1:6379> set a 100
    redis 127.0.0.1:6379> set b 123456789123456789
    
    type Demo struct {
        A int8 `redis:"a"`
        B int64 `redis:"b"`
    }
    
    client := redis.NewClient(&redis.Options{
        Network: "tcp",
        Addr:    "127.0.0.1:6379",
    })
    ctx := context.Background()
    d := &Demo{}
    err := client.MGet(ctx, "a", "b").Scan(d)
    t.Log(d, err)
    

    It should run normally on the i386 platform and not produce an error like: strconv.ParseInt: parsing "123456789123456789": value out of range

    opened by monkey92t 18
  • Add redis.Scan() to scan results from redis maps into structs.

    The package uses reflection to decode default types (int, string etc.) from Redis map results (key-value pair sequences) into struct fields where the fields are matched to Redis keys by tags.

    Similar to how encoding/json allows custom decoders using UnmarshalJSON(), the package supports decoding of arbitrary types into struct fields by defining a Decode(string) error function on those types.

    The field/type spec of every struct that's passed to Scan() is cached in the package so that subsequent scans avoid iteration and reflection of the struct's fields.
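
    Usage ends up looking roughly like this (v8-style, with rdb and ctx as in the Quickstart):

    type Model struct {
    	Str  string `redis:"str"`
    	Int  int    `redis:"int"`
    	Bool bool   `redis:"bool"`
    }

    var m Model

    // Scan the key-value pairs returned by HGETALL into the struct,
    // matching Redis hash fields to struct fields via the redis tags.
    if err := rdb.HGetAll(ctx, "key").Scan(&m); err != nil {
    	panic(err)
    }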

    Issue: https://github.com/go-redis/redis/issues/1603

    opened by knadh 17
  • What are the Context / WithContext methods of clients used for?

    Hi, just wondering if there is a way to use context when using this library?

    opened by seriousben 17
  • Add Limiter interface

    This is an alternative to https://github.com/go-redis/redis/pull/874. Basically it defines a rate limiter interface which allows implementing different limiting strategies in separate packages.

    @xianglinghui what do you think? Is the provided API enough to cover your needs? I am aware that code like https://github.com/go-redis/redis/blob/master/ring.go#L618-L621 requires some work in go-redis, but other than that it seems to be enough.
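
    For reference, the Limiter interface as it landed in go-redis is deliberately small; quoted from memory, so treat as approximate:

    // Limiter is the interface of a rate limiter or a circuit breaker.
    type Limiter interface {
    	// Allow returns nil if the operation is allowed, an error otherwise.
    	Allow() error
    	// ReportResult reports the result of a previously allowed operation:
    	// nil means success, a non-nil error usually means failure.
    	ReportResult(result error)
    }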

    opened by vmihailenco 17
  • Attempt to clean up cluster logic.

    @dim I tried to refactor code a bit to learn more about Redis cluster. Changes:

    • NewClusterClient does not return an error any more, because NewClient does not either. I personally think an app can't do anything useful except exit when NewClusterClient returns an error, so a panic should be a good alternative.
    • Now ClusterClient.process tries the next available replica before falling back to randomClient. I am not sure this change is correct, but I hope so :)
    • randomClient is completely rewritten so it does not require allocating a seen map[string]struct{}{} on every request. It also checks that a node is online before returning it.
    opened by vmihailenco 15
  • redis: can't parse

    redis: can't parse "ype\":\"PerfdataValue\",\"unit\":\"\",\"value\":0.0,\"warn\":null}],\"status\":{\"checkercomponent\":{\"checker\":{\"i"

    We at @Icinga are developing two applications, one writes to Redis (and publishes events) and the other reads (and subscribes for the events).

    The writer periodically PUBLISHes data like...

    {"ApiListener":{"perfdata":[{"counter":false,"crit":null,"label":"api_num_conn_endpoints","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_endpoints","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_http_clients","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_clients","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_relay_queue_item_rate","max":null,"min":null,"type":"PerfdataValue","unit":"","value":46.399999999999998579,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_relay_queue_items","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_sync_queue_item_rate","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_sync_queue_items","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_work_queue_count","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_work_queue_item_rate","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_work_queue_items","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_not_conn_endpoints","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null}],"status":{"api":{"conn_endpoints":[],"http":{"clients":0.0},"identity":"CENSOREDCENSOREDCENSOREDCENSO","json_rpc":{"clients":0.0,"relay_queue_item_rate":46.399999999999998579,"relay_queue_items":0.0,"sync_queue_item_rate":0.0,"sync_queue_items":0.0,"work_queue_count":0.0,"work_queue_item_rate":0.0,"work_queue_items":0.0},"not_conn_endpoints":[],"num_conn_endpoints":0.0,"num_endpoints":0.0,"num_not_conn_endpoints":0.0,"zones":{"alexanders-mbp.int.netways.de":{"client_log_lag":0.0,"connected":true,"endpoints":["alexanders-mbp.int.netways.de"],"parent_zone":""}}}}},"CIB":{"perfdata":[],"status":{"active_host_checks":1.8500000000000000888,"active_host_checks_15min":1649.0,"active_host_checks_1min":111.0,"active_host_checks_5min":562.0,"active_service_checks":21.350000000000001421,"active_service_checks_15min":19280.0,"active_service_checks_1min":1281.0,"active_service_checks_5min":6399.0,"avg_execution_time":0.021172960599263507958,"avg_latency":0.011358479658762613354,"max_execution_time":0.077728986740112304688,"max_latency":0.045314073562622070312,"min_execution_time":0.001573085784912109375,"min_latency":0.0,"num_hosts_acknowledged":0.0,"num_hosts_down":1.0,"num_hosts_flapping":0.0,"num_hosts_in_downtime":0.0,"num_hosts_pending":0.0,"num_hosts_unreachable":0.0,"num_hosts_up":0.0,"num_services_acknowledged":0.0,"num_services_critical":3.0,"num_services_flapping":0.0,"num_services_in_downtime":0.0,"num_services_ok":4.0,"num_services_pending":0.0,"num_services_unknown":3.0,"num_services_unreachable":12.0,"num_services_warning":2.0,"passive_host_checks":0.0,"passive_host_checks_15min":0.0,"passive_host_checks_1min":0.0,"passive_host_checks_5min":0.0,"passive_service_checks":0.0,"passiv
e_service_checks_15min":0.0,"passive_service_checks_1min":0.0,"passive_service_checks_5min":0.0,"remote_check_queue":0.0,"uptime":18855.292195796966553}},"CheckResultReader":{"perfdata":[],"status":{"checkresultreader":{}}},"CheckerComponent":{"perfdata":[{"counter":false,"crit":null,"label":"checkercomponent_checker_idle","max":null,"min":null,"type":"PerfdataValue","unit":"","value":13.0,"warn":null},{"counter":false,"crit":null,"label":"checkercomponent_checker_pending","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null}],"status":{"checkercomponent":{"checker":{"idle":13.0,"pending":0.0}}}},"CompatLogger":{"perfdata":[],"status":{"compatlogger":{}}},"ElasticsearchWriter":{"perfdata":[],"status":{"elasticsearchwriter":{}}},"ExternalCommandListener":{"perfdata":[],"status":{"externalcommandlistener":{}}},"FileLogger":{"perfdata":[],"status":{"filelogger":{"main-log":1.0}}},"GelfWriter":{"perfdata":[],"status":{"gelfwriter":{}}},"GraphiteWriter":{"perfdata":[],"status":{"graphitewriter":{}}},"IcingaApplication":{"perfdata":[],"status":{"icingaapplication":{"app":{"enable_event_handlers":true,"enable_flapping":true,"enable_host_checks":true,"enable_notifications":true,"enable_perfdata":true,"enable_service_checks":true,"environment":"production","node_name":"alexanders-mbp.int.netways.de","pid":7700.0,"program_start":1531475256.183437109,"version":"v2.8.4-779-g45b3429fa"}}}},"InfluxdbWriter":{"perfdata":[],"status":{"influxdbwriter":{}}},"LivestatusListener":{"perfdata":[],"status":{"livestatuslistener":{}}},"NotificationComponent":{"perfdata":[],"status":{"notificationcomponent":{"notification":1.0}}},"OpenTsdbWriter":{"perfdata":[],"status":{"opentsdbwriter":{}}},"PerfdataWriter":{"perfdata":[],"status":{"perfdatawriter":{}}},"StatusDataWriter":{"perfdata":[],"status":{"statusdatawriter":{}}},"SyslogLogger":{"perfdata":[],"status":{"sysloglogger":{}}}}
    

    ... and the reader consumes that using this library.

    Wireshark shows nothing special, just these messages and some PINGs, but after a while the reader hits internal/proto/reader.go:106 with line being ...

    ype":"PerfdataValue","unit":"","value":0.0,"warn":null}],"status":{"checkercomponent":{"checker":{"idle":13.0,"pending":0.0}}}},"CompatLogger":{"perfdata":[],"status":{"compatlogger":{}}},"ElasticsearchWriter":{"perfdata":[],"status":{"elasticsearchwriter":{}}},"ExternalCommandListener":{"perfdata":[],"status":{"externalcommandlistener":{}}},"FileLogger":{"perfdata":[],"status":{"filelogger":{"main-log":1.0}}},"GelfWriter":{"perfdata":[],"status":{"gelfwriter":{}}},"GraphiteWriter":{"perfdata":[],"status":{"graphitewriter":{}}},"IcingaApplication":{"perfdata":[],"status":{"icingaapplication":{"app":{"enable_event_handlers":true,"enable_flapping":true,"enable_host_checks":true,"enable_notifications":true,"enable_perfdata":true,"enable_service_checks":true,"environment":"production","node_name":"CENSOREDCENSOREDCENSOREDCENSO","pid":7700.0,"program_start":1531475256.183437109,"version":"v2.8.4-779-g45b3429fa"}}}},"InfluxdbWriter":{"perfdata":[],"status":{"influxdbwriter":{}}},"LivestatusListener":{"perfdata":[],"status":{"livestatuslistener":{}}},"NotificationComponent":{"perfdata":[],"status":{"notificationcomponent":{"notification":1.0}}},"OpenTsdbWriter":{"perfdata":[],"status":{"opentsdbwriter":{}}},"PerfdataWriter":{"perfdata":[],"status":{"perfdatawriter":{}}},"StatusDataWriter":{"perfdata":[],"status":{"statusdatawriter":{}}},"SyslogLogger":{"perfdata":[],"status":{"sysloglogger":{}}}}
    
    opened by Al2Klimov 14
  • connection pool timeout

    I am using Redis as a caching layer for long-running web services. I initialize the connection like so:

    var (
        Queues  *redis.Client
        Tracker *redis.Client
    )
    
    func Connect(url string) {
        // cut away redis://
        url = url[8:]
    
        // connect to db #0
        Queues = redis.NewClient(&redis.Options{
            Addr:     url,
            Password: "",
            DB:       0,
        })
    
        _, err := Queues.Ping().Result()
        if err != nil {
            panic(err)
        }
    
        // connect to db #1
        Tracker = redis.NewClient(&redis.Options{
            Addr:     url,
            Password: "",
            DB:       1,
        })
    
        _, err = Tracker.Ping().Result()
        if err != nil {
            panic(err)
        }
    }
    

    Though in an upcoming patch (our sysadmin is deploying a Redis cluster) it will look like this:

    var (
        Cluster *redis.ClusterClient
    )
    
    func ConnectCluster(cluster, password string) {
        addresses := strings.Split(cluster, ",")
        Cluster = redis.NewClusterClient(&redis.ClusterOptions{
            Addrs: addresses,
            // Password: password,
        })
    
        _, err := Cluster.Ping().Result()
        if err != nil {
            panic(err)
        }
    }
    

    The above code runs once when the service boots up in main.go, and the *redis.ClusterClient is used for the lifetime of the process.

    I realize there is an inherent problem with this approach, which manifests itself in connections timing out after a few days and crashing the application: redis: connection pool timeout. See logs here.

    Please advise: what would be a proper approach to using go-redis in this situation?

    opened by Netherdrake 14
  • Bump github.com/onsi/gomega from 1.10.5 to 1.14.0

    Bumps github.com/onsi/gomega from 1.10.5 to 1.14.0.

    Release notes

    Sourced from github.com/onsi/gomega's releases.

    v1.14.0

    1.14.0

    Features

    • gmeasure.SamplingConfig now supports a MinSamplingInterval [e94dbca]
    • Eventually and Consistently support functions that make assertions [2f04e6e]
      • Eventually and Consistently now allow their passed-in functions to make assertions. These assertions must pass or the function is considered to have failed and is retried.
      • Eventually and Consistently can now take functions with no return values. These implicitly return nil if they contain no failed assertion. Otherwise they return an error wrapping the first assertion failure. This allows these functions to be used with the Succeed() matcher.
      • Introduce InterceptGomegaFailure - an analogue to InterceptGomegaFailures - that captures the first assertion failure and halts execution in its passed-in callback.

    Fixes

    • Call Verify GHTTPWithGomega receiver funcs (#454) [496e6fd]
    • Build a binary with an expected name (#446) [7356360]

    v1.13.0

    • Set consistently and eventually defaults on init (#443)

      Using environment variables

      Closes #434

      Signed-off-by: toby lorne [email protected]

    • gmeasure provides BETA support for benchmarking (#447)

      gmeasure is a new gomega subpackage intended to provide measurement and benchmarking support for durations and values. gmeasure replaces Ginkgo V1's deprecated Measure nodes and provides a migration path for users migrating to Ginkgo V2.

      gmeasure is organized around an Experiment metaphor. Experiments can record several different Measurements, with each Measurement comprised of multiple data points. Measurements can hold time.Durations and float64 values, and gmeasure includes support for measuring the duration of callback functions and for sampling functions repeatedly to build an ensemble of data points. In addition, gmeasure introduces a Stopwatch abstraction for easily measuring and recording durations of code segments.

      Once measured, users can readily generate Stats for Measurements to capture their key statistics and these stats can be ranked using a Ranking and associated RankingCriteria.

      Experiments can be Cached to disk to speed up subsequent runs. Experiments are cached by name and version number which makes it easy to manage and bust the cache.

      Finally, gmeasure integrates with Ginkgo V2 via the new ReportEntry abstraction. Experiments, Measurements, and Rankings can all be registered via AddReportEntry. Doing so generates colorful reports as part of Ginkgo's test output.

      gmeasure is currently in beta and will go GA around when Ginkgo V2 goes GA.

    v1.12.0

    Features

    • Add Satisfy() matcher (#437) [c548f31]
    • tweak truncation message [3360b8c]
    • Add format.GomegaStringer (#427) [cc80b6f]
    • Add Clear() method to gbytes.Buffer [c3c0920]

    Fixes

    • Fix error message in BeNumericallyMatcher (#432) [09c074a]

    ... (truncated)

    Changelog

    Sourced from github.com/onsi/gomega's changelog.

    1.14.0

    Features

    • gmeasure.SamplingConfig now supports a MinSamplingInterval [e94dbca]
    • Eventually and Consistently support functions that make assertions [2f04e6e]
      • Eventually and Consistently now allow their passed-in functions to make assertions. These assertions must pass or the function is considered to have failed and is retried.
      • Eventually and Consistently can now take functions with no return values. These implicitly return nil if they contain no failed assertion. Otherwise they return an error wrapping the first assertion failure. This allows these functions to be used with the Succeed() matcher.
      • Introduce InterceptGomegaFailure - an analogue to InterceptGomegaFailures - that captures the first assertion failure and halts execution in its passed-in callback.

    Fixes

    • Call Verify GHTTPWithGomega receiver funcs (#454) [496e6fd]
    • Build a binary with an expected name (#446) [7356360]

    1.13.0

    Features

    • gmeasure provides BETA support for benchmarking (#447) [8f2dfbf]
    • Set consistently and eventually defaults on init (#443) [12eb778]

    1.12.0

    Features

    • Add Satisfy() matcher (#437) [c548f31]
    • tweak truncation message [3360b8c]
    • Add format.GomegaStringer (#427) [cc80b6f]
    • Add Clear() method to gbytes.Buffer [c3c0920]

    Fixes

    • Fix error message in BeNumericallyMatcher (#432) [09c074a]
    • Bump github.com/onsi/ginkgo from 1.12.1 to 1.16.2 (#442) [e5f6ea0]
    • Bump github.com/golang/protobuf from 1.4.3 to 1.5.2 (#431) [adae3bf]
    • Bump golang.org/x/net (#441) [3275b35]

    1.11.0

    Features

    • feature: add index to gstruct element func (#419) [334e00d]
    • feat(gexec) Add CompileTest functions. Close #410 (#411) [47c613f]

    Fixes

    • Check more carefully for nils in WithTransform (#423) [3c60a15]
    • fix: typo in Makefile [b82522a]
    • Allow WithTransform function to accept a nil value (#422) [b75d2f2]
    • fix: print value type for interface{} containers (#409) [f08e2dc]
    • fix(BeElementOf): consistently flatten expected values [1fa9468]
    Commits
    • 812e642 v1.14.0
    • 26cf82b silence (at least temporarily) the github test workflow's vetting of go.mod a...
    • 496e6fd Call Verify GHTTPWithGomega receiver funcs (#454)
    • 9da0d13 go mod tidy
    • 55e9553 bump ginkgo
    • e94dbca gmeasure.SamplingConfig now supports a MinSamplingInterval
    • 2f04e6e Eventually and Consistently support functions that make assertions
    • febd7a2 remove travis ci
    • 7356360 Build a binary with an expected name (#446)
    • dbc6ecd v1.13.0
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    Labels: dependencies, go, wait
    opened by dependabot[bot] 1
  • Connection reuse returning no error, but empty results

    I'm not very familiar with Go, but rewrote my api to use go-redis, so please excuse me if this is a simple mistake.

    On my first attempt, I put the NewClient call in my handler function outside of main, which works well aside from being super inefficient and using thousands of file descriptors. Realizing my mistake, I moved it to the main function with a global var. Now it reuses connections, but randomly it will start returning empty results with no errors!? Which of course is horrible, because that registers as a valid result. I can force it to throw errors, and have gotten EOF errors in some cases with timeouts, but this case has no error. It happens whether I use Sentinel or my keepalived IP. The only reliable method is to connect every time the handler function is run.

    Here is a basic outline of the program.

    import (
    	"context"
    	"log"
    	"net/http"
    	"time"

    	redis "github.com/go-redis/redis/v8"
    )
    var rdb *redis.Client
    var ctx = context.Background()
    func main() {
    	router := http.NewServeMux()
    	router.HandleFunc("/api", ApiHandler)
    	srv := &http.Server{
    		Addr:         ":8080",
    		Handler:      router,
    		ReadTimeout:  5 * time.Second,
    		WriteTimeout: 10 * time.Second,
    	}
    	rdb = redis.NewClient(&redis.Options{
    			Addr: "192.168.1.2:6379",
    			Password: "password",
    			DB:       0,  // use default DB
    			//DialTimeout:        10 * time.Second,
    			//ReadTimeout:        30 * time.Second,
    			//WriteTimeout:       30 * time.Second,
    			PoolSize: 1000,
    			//PoolTimeout:        30 * time.Second,
    			//IdleTimeout:        time.Second * 25,
    			IdleCheckFrequency: time.Second * 5,
    			MaxRetries:         3,
    	})
    	log.Fatal(srv.ListenAndServe())
    }
    func ApiHandler(w http.ResponseWriter, r *http.Request) {
    	key := r.URL.Query()["key"]
    	api_key := key[0]
    	//works if I move rdb newclient here
    	key_info, err := rdb.HGetAll(ctx, "key:" + api_key).Result()
    	if err != nil {
    		log.Println("key info error: ", err)
    	} else if len(key_info) == 0 {
    		log.Println("key info does not exist: ", err)
    		log.Println("Missing apiKey is: " + api_key)
    	} else {
    		log.Println("Good apiKey is: " + api_key)
    	}
    	defer r.Body.Close()
    	return
    }
    

    After many connections, the same key that registers as Good will randomly come up as Missing. Sometimes once, sometimes many times, before stating Good again. If I move the NewClient to my comment in ApiHandler, it is always Good. Thanks!

    opened by d3mon187 5
  • Set type cannot compare JSON strings

    Redis version: 3.2.12 (redis-cli); go-redis version: v6.15.9

    I have stored a JSON string like this: {"filename": "r7iPc2eE4KNzfXHaKJUkbfPGoXaruE.json", "user": "77xxx"} len(68)

    Using SISMEMBER with the same value to check whether it exists returns false.

    I have tried fmt.Sprintf() and json.Marshal().
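
    For context, SISMEMBER compares raw bytes, so the probe must serialize to exactly the same bytes as the stored member. A v8-style sketch (the v6 API is the same minus ctx; the key name is made up, and encoding/json is assumed imported):

    payload := map[string]string{
    	"filename": "r7iPc2eE4KNzfXHaKJUkbfPGoXaruE.json",
    	"user":     "77xxx",
    }

    // Store the member once...
    data, _ := json.Marshal(payload)
    rdb.SAdd(ctx, "jsonset", data)

    // ...and probe with bytes produced the same way. Any difference in
    // key order, spacing, or escaping makes SISMEMBER return false.
    probe, _ := json.Marshal(payload)
    ok, err := rdb.SIsMember(ctx, "jsonset", probe).Result()
    fmt.Println(ok, err) // true <nil> only when the bytes match exactly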

    opened by hyahm 0
  • Redis cluster: admin commands should try requesting different nodes during retry.

    Expected Behavior

    Slot-independent commands, including admin commands, should be retried against different nodes.

    Current Behavior

    Retries are sent to the same node, chosen earlier from a randomly selected slot. The cluster has masters only. https://github.com/go-redis/redis/blob/v8.9.0/cluster.go#L775

    Possible Solution

    For such commands, retry against a randomly chosen node, independent of the slot.

    Steps to Reproduce

    1. Setup cluster with 3 master nodes.
    2. Fail 1 node.
    3. Request multiple times ClusterInfo command.
    4. Expect all requests to return a value, but actually see errors whenever the failed node is queried.

    Possible Implementation

    cmdInfo := c.cmdInfo(cmd.Name())
    slot := c.cmdSlot(cmd)
    
    var node *clusterNode
    var ask bool
    var lastErr error
    for attempt := 0; attempt <= c.opt.MaxRedirects; attempt++ {
    	if attempt > 0 {
    		if err := internal.Sleep(ctx, c.retryBackoff(attempt)); err != nil {
    			return err
    		}
    	}
    
    	if node == nil {
    		var err error
    		if canRandomNode := cmdFirstKeyPos(cmd, cmdInfo) == 0; canRandomNode {
    			node, err = c.nodes.Random()
    		} else {
    			node, err = c.cmdNode(ctx, cmdInfo, slot)
    		}
    
    		if err != nil {
    			return err
    		}
    	}
    
    opened by taras-zak 0
  • Allow for custom latency and error handling

    The standard method of simply calling the PING command does not take into account failures of the ping method. The added client option LatencyHealthFunc(c *Client) bool allows for failures to be taken into account, along with LOADING errors when using persistent Redis. The default behavior has been kept exactly the same as before.

    Also adds a client option OnErrFunc(n NodeExt, err error) that exposes a very restricted Node interface to a custom function that can mark the node as failed (or choose not to) based on the error received. E.g. a connection-refused error could cause the node to be marked as failed, unlike the current client behavior.

    opened by keitwb 2
  • When doing a transaction with redis.Watch I do not have access to the Do command

    Issue tracker is used for reporting bugs and discussing new features. Please use stackoverflow for supporting issues.

    Expected Behavior

    Redis.Watch(context.Background(), func(tx *redis.Tx) error { })

    A user should be able to access a tx.Do command when doing a transaction with Watch. I need to work with the RedisJSON module in a transaction, but currently redis.Tx lacks a Do command that I can use to run custom Redis commands.

    I thought I could use TxPipelined, but the issue is that when one command fails the other commands still get executed, while what I want is for all commands to fail if one fails.
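
    For reference, the usual Watch pattern in v8 looks like the sketch below; it works for built-in commands, and the request here is essentially for a Do equivalent on redis.Tx so module commands like RedisJSON's fit the same pattern (the key name is illustrative):

    const key = "counter"

    err := rdb.Watch(ctx, func(tx *redis.Tx) error {
    	// Read the watched key inside the transaction callback.
    	n, err := tx.Get(ctx, key).Int()
    	if err != nil && err != redis.Nil {
    		return err
    	}

    	// MULTI/EXEC: the queued commands are discarded if the watched
    	// key changed between WATCH and EXEC.
    	_, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
    		pipe.Set(ctx, key, n+1, 0)
    		return nil
    	})
    	return err
    }, key)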

    opened by Keithwachira 0
  • Retries may be done using broken connections from the pool

    Expected Behavior

    When restarting the Redis server, due to the default of MaxRetries: 3, I'd expect go-redis to attempt to reconnect to the server and retry the query.

    Current Behavior

    If there are enough old connections in the connection pool, all retry attempts are done on broken connections and an error is returned without even attempting to reconnect.

    Possible Solution

    Some options:

    1. Perform a ping on the connection before reusing it (adds latency if the connection is fine)
    2. Don't count retries on connections from the pool towards the retry limit (might lead to a huge number of retries for a single query)
    3. Clear the pool on errors (might be overly pessimistic)
    4. Always use a fresh connection for retries

    Steps to Reproduce

    The following code reproduces the issue. Instead of restarting the server, it sets a dialer that simply closes each connection after 5 seconds.

    package main
    
    import (
    	"context"
    	"github.com/go-redis/redis/v8"
    	"log"
    	"net"
    	"time"
    )
    
    func main() {
    	ctx := context.Background()
    
    	// Redis client with a dialer that kills connections after 5 seconds (to simulate a server restart).
    	client := redis.NewClient(&redis.Options{
    		Dialer: func(ctx context.Context, network, addr string) (net.Conn, error) {
    			log.Print("Dialer called")
    			conn, err := net.Dial(network, addr)
    			if err == nil {
    				go func() {
    					time.Sleep(5 * time.Second)
    					conn.Close()
    				}()
    			}
    			return conn, err
    		},
    	})
    
    	// Function to ping the server and log errors if they occur.
    	ping := func() {
    		_, err := client.Ping(ctx).Result()
    		if err != nil {
    			log.Print(err)
    		}
    	}
    
    	// Perform some pings to fill the connection pool.
    	for i := 0; i < 10; i++ {
    		go ping()
    	}
    
    	// Wait for connections to die.
    	time.Sleep(10 * time.Second)
    
    	// Perform another ping and log pool stats before and after.
    	log.Printf("%#v", client.PoolStats())
    	ping()
    	log.Printf("%#v", client.PoolStats())
    }
    

    Example output (note that the dialer is not called for the last ping and the hit count in the pool stats increases by 4, i.e. the initial attempt and all 3 retries were done using stale connections from the pool):

    2021/04/23 13:40:50 Dialer called
    2021/04/23 13:40:50 Dialer called
    2021/04/23 13:40:50 Dialer called
    2021/04/23 13:40:50 Dialer called
    2021/04/23 13:40:50 Dialer called
    2021/04/23 13:40:50 Dialer called
    2021/04/23 13:40:50 Dialer called
    2021/04/23 13:40:50 Dialer called
    2021/04/23 13:40:50 Dialer called
    2021/04/23 13:40:50 Dialer called
    2021/04/23 13:41:00 &redis.PoolStats{Hits:0x0, Misses:0xa, Timeouts:0x0, TotalConns:0xa, IdleConns:0xa, StaleConns:0x0}
    2021/04/23 13:41:00 set tcp 127.0.0.1:45526: use of closed network connection
    2021/04/23 13:41:00 &redis.PoolStats{Hits:0x4, Misses:0xa, Timeouts:0x0, TotalConns:0x6, IdleConns:0x6, StaleConns:0x0}
    

    You get similar behavior when you restart Redis at the right time instead of using this dialer. In this case, the error will be EOF instead.

    When adding MaxRetries: 20 to the redis.Options, the last 3 lines of the output look like this instead (note that there were 10 hits and then 1 miss that called the dialer):

    2021/04/23 13:48:46 &redis.PoolStats{Hits:0x0, Misses:0xa, Timeouts:0x0, TotalConns:0xa, IdleConns:0xa, StaleConns:0x0}
    2021/04/23 13:48:49 Dialer called
    2021/04/23 13:48:49 &redis.PoolStats{Hits:0xa, Misses:0xb, Timeouts:0x0, TotalConns:0x1, IdleConns:0x1, StaleConns:0x0}
    

    Context (Environment)

    I want my application to gracefully handle Redis restarts and had hoped that this was handled automatically by go-redis due to the default of MaxRetries: 3.

    Labels: enhancement, v9
    opened by julianbrost 18
  • Feature Request: OpenCensus support `AllowRoot` setting

    When registering tracing with OpenCensus, it will record all calls. Typically we're only interested in certain traces. Specifically, if there is no trace already on the context, we don't want to create a new trace with Redis as the only span; that might be useful, but often isn't.

    OCSQL has the setting AllowRoot to disable this exact behaviour.

    It'd be good to implement something similar here.

    opened by noseglid 0
  • Cmdable does not support XInfoConsumers?

    opened by liuping001 2
  • ReceiveTimeout may break a subscription message in the middle on timeout

    Expected Behavior

    ReceiveTimeout() should hide the details of the message protocol. It should pick up the previously timed-out message header and continue producing a full message instead of simply dropping the header.

    Current Behavior

    • Subscribe to a channel
    • Call ReceiveTimeout() with a positive timeout
    • Each published message that arrives consists of several sub-messages. Each sub-message contains a header and a data part, separated by "\r\n". For example:
        readline *4
        redis array reply 
    
        readline $8
        redis string reply pmessage
    
        readline $21
        redis string reply dispatcherTest_6091:*
    
        readline $21
        redis string reply dispatcherTest_6091:3
    
        readline $3
        redis string reply 777
    
      • a.1) If a timeout occurs while receiving a header, ReceiveTimeout() returns with an error. a.2) The next time ReceiveTimeout() is called, it reads the data part of the previously failed message and processes it as if it were the header of a new message, so it reports a protocol error!
      • b.1) If a timeout occurs while receiving a sub-message other than the first, ReceiveTimeout() returns with an error. b.2) The next time ReceiveTimeout() is called, it reads the next sub-message of the previously failed full message and processes it as if it were the first sub-message of a new full message. This may lead to unexpected behavior.
    • The timed-out message is lost.

    • The subscription is broken.

    Possible Solution

    Maybe cache the partial message when a timeout occurs? The issue mainly hurts pub/sub where a timeout is needed.

    opened by szmcdull 1
Releases (v7.4.1)
  • v7.4.1 (Jul 16, 2021)

  • v8.11.0 (Jun 30, 2021)

    Changes

    Removed OpenTelemetry metrics (see #1534, #1805)

    New Commands

    1. XAutoClaim
    2. ZRangeStore
    3. ZUnion

    More Command Options

    1. XAdd: NoMkStream+MinID+Limit
    2. XTrim: MinID+Limit
    3. XGroup: CreateConsumer
    4. ZAdd: GT+LT
    5. ZRange: ByScore+ByLex+Rev+Limit

    New API

    1. XAutoClaim(ctx context.Context, a *XAutoClaimArgs) *XAutoClaimCmd
    2. XAutoClaimJustID(ctx context.Context, a *XAutoClaimArgs) *XAutoClaimJustIDCmd
    3. ZRangeStore(ctx context.Context, dst string, z ZRangeArgs) *IntCmd
    4. ZAddArgs(ctx context.Context, key string, args ZAddArgs) *IntCmd
    5. ZAddArgsIncr(ctx context.Context, key string, args ZAddArgs) *FloatCmd
    6. ZRangeArgs(ctx context.Context, z ZRangeArgs) *StringSliceCmd
    7. ZRangeArgsWithScores(ctx context.Context, z ZRangeArgs) *ZSliceCmd
    8. ZUnion(ctx context.Context, store ZStore) *StringSliceCmd
    9. ZUnionWithScores(ctx context.Context, store ZStore) *ZSliceCmd
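
    A hedged sketch of the new ZRangeArgs API (field names as in v8.11; this is illustrative, check the godoc for exact usage):

    // ZRANGE zset (1 +inf BYSCORE LIMIT 0 2
    vals, err := rdb.ZRangeArgs(ctx, redis.ZRangeArgs{
    	Key:     "zset",
    	Start:   "(1",   // exclusive minimum score
    	Stop:    "+inf",
    	ByScore: true,
    	Offset:  0,
    	Count:   2,
    }).Result()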

    Marked deprecated (to be removed in v9)

    1. ZAddCh
    2. ZIncr
    3. ZAddNXCh
    4. ZAddXXCh
    5. ZIncrNX
    6. ZIncrXX
    7. XTrim
    8. XTrimApprox
    9. XAddArgs.MaxLenApprox

    Remarks

    There is a bug in the xtrim/xadd limit option (https://github.com/redis/redis/issues/9046)

  • v8.10.0 (Jun 3, 2021)
