Type-safe Redis client for Golang

Overview

All-in-one tool to optimize performance and monitor errors & logs

Redis client for Golang


Ecosystem

Features

Installation

go-redis supports the two most recent Go versions and requires a Go version with module support, so make sure to initialize a Go module:

go mod init github.com/my/repo

Then install go-redis/v8 (note the v8 in the import path; omitting it is a common mistake):

go get github.com/go-redis/redis/v8

Quickstart

import (
    "context"
    "fmt"

    "github.com/go-redis/redis/v8"
)

var ctx = context.Background()

func ExampleClient() {
    rdb := redis.NewClient(&redis.Options{
        Addr:     "localhost:6379",
        Password: "", // no password set
        DB:       0,  // use default DB
    })

    err := rdb.Set(ctx, "key", "value", 0).Err()
    if err != nil {
        panic(err)
    }

    val, err := rdb.Get(ctx, "key").Result()
    if err != nil {
        panic(err)
    }
    fmt.Println("key", val)

    val2, err := rdb.Get(ctx, "key2").Result()
    if err == redis.Nil {
        fmt.Println("key2 does not exist")
    } else if err != nil {
        panic(err)
    } else {
        fmt.Println("key2", val2)
    }
    // Output: key value
    // key2 does not exist
}
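
Every command takes a context as its first argument, so per-call deadlines can be layered on top of the client. A minimal sketch (the 100ms timeout is arbitrary; this assumes the time package is imported alongside the packages above):

func getWithTimeout(rdb *redis.Client, key string) (string, error) {
    // Give this single call at most 100ms before it is cancelled.
    ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
    defer cancel()
    return rdb.Get(ctx, key).Result()
}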

Look and feel

Some corner cases:

// SET key value EX 10 NX
set, err := rdb.SetNX(ctx, "key", "value", 10*time.Second).Result()

// SET key value keepttl NX
set, err := rdb.SetNX(ctx, "key", "value", redis.KeepTTL).Result()

// SORT list LIMIT 0 2 ASC
vals, err := rdb.Sort(ctx, "list", &redis.Sort{Offset: 0, Count: 2, Order: "ASC"}).Result()

// ZRANGEBYSCORE zset -inf +inf WITHSCORES LIMIT 0 2
vals, err := rdb.ZRangeByScoreWithScores(ctx, "zset", &redis.ZRangeBy{
    Min: "-inf",
    Max: "+inf",
    Offset: 0,
    Count: 2,
}).Result()

// ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3 AGGREGATE SUM
vals, err := rdb.ZInterStore(ctx, "out", &redis.ZStore{
    Keys: []string{"zset1", "zset2"},
    Weights: []float64{2, 3},
}).Result()

// EVAL "return {KEYS[1],ARGV[1]}" 1 "key" "hello"
vals, err := rdb.Eval(ctx, "return {KEYS[1],ARGV[1]}", []string{"key"}, "hello").Result()

// custom command
res, err := rdb.Do(ctx, "set", "key", "value").Result()
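
Do returns an untyped reply, so a missing key surfaces as redis.Nil just like with Get. A small sketch, reusing rdb and ctx from the Quickstart (the key name is illustrative):

v, err := rdb.Do(ctx, "get", "key_does_not_exist").Result()
if err == redis.Nil {
    fmt.Println("key does not exist")
} else if err != nil {
    panic(err)
} else {
    fmt.Println(v) // interface{} reply; type-assert as needed
}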

Run the test

go-redis will start a redis-server and run the test cases.

The paths to the redis-server binary and the redis config file are defined in main_test.go:

var (
	redisServerBin, _  = filepath.Abs(filepath.Join("testdata", "redis", "src", "redis-server"))
	redisServerConf, _ = filepath.Abs(filepath.Join("testdata", "redis", "redis.conf"))
)

For local testing, you can change these variables to point to your local files, or create a symlink to the redis-server binary in the corresponding folder and copy the config file into testdata/redis/:

ln -s /usr/bin/redis-server ./go-redis/testdata/redis/src
cp ./go-redis/testdata/redis.conf ./go-redis/testdata/redis/

Lastly, run:

go test

See also

Comments
  • undefined: otel.Meter or cannot find package

    undefined: otel.Meter or cannot find package "go.opentelemetry.io/otel/api/trace"

    To fix cannot find package "go.opentelemetry.io/otel/api/trace" or undefined: otel.Meter:

    1. Make sure to initialize a Go module: go mod init github.com/my/repo

    2. Make sure to use correct import path with v8 in the end: go get github.com/go-redis/redis/v8

    For example:

    mkdir /tmp/redis-test
    cd /tmp/redis-test
    go mod init redis-test
    go get github.com/go-redis/redis/v8
    

    The root cause

    The error is not caused by OpenTelemetry. OpenTelemetry is just the first module Go tries to install. And the error will not go away until you start using Go modules properly.

    The presence of $GOROOT or $GOPATH in error messages indicates that you are NOT using Go modules.
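
    As a sanity check, a file like the one below should build and run inside the module created above (the address is illustrative; note the /v8 suffix on the import):

    package main

    import (
        "context"
        "fmt"

        "github.com/go-redis/redis/v8" // note the /v8 suffix
    )

    func main() {
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
        fmt.Println(rdb.Ping(context.Background()).Err())
    }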

    opened by vmihailenco 33
  • V8 performance degradation ~20%

    V8 performance degradation ~20%

    @monkey92t

    Hi, thank you for your tests. I ran your tests in our environment and saw similar comparative results. However, when I slightly modified the tests to more accurately reflect our use case (and how Go's HTTP server spawns a goroutine for each request), all of a sudden the performance degraded for V8. This is especially evident with 100+ concurrency.

    Two changes were made:

    1. Instead of pre-spawning goroutines that each run a fixed number of Get/Set operations in a for loop (this is retained as get2/set2), the test runs through a fixed number of requests and spawns a goroutine (only up to the concurrency limit) to process each one.
    2. Each request generates a random key so the load is spread across the Redis cluster.

    Both V7 and V8 saw a decrease in throughput when comparing pre-spawned goroutines with a goroutine per request. However, the decrease for V7 is very small, as expected, while for V8 it is quite dramatic.
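
    For reference, a rough sketch of the goroutine-per-request pattern described above (request count, concurrency limit, and key format are illustrative, not taken from the attached test program; assumes context, fmt, math/rand, and sync are imported):

    func runPerRequest(ctx context.Context, rdb *redis.ClusterClient, requests, concurrency int) {
        sem := make(chan struct{}, concurrency) // caps the number of in-flight goroutines
        var wg sync.WaitGroup
        for i := 0; i < requests; i++ {
            sem <- struct{}{}
            wg.Add(1)
            go func() {
                defer wg.Done()
                defer func() { <-sem }()
                // A random key per request spreads the load across the Redis cluster.
                key := fmt.Sprintf("bench:%d", rand.Intn(1000000))
                _ = rdb.Get(ctx, key).Err()
            }()
        }
        wg.Wait()
    }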

    go-redis version: v7.4.0 and v8.6.0

    redis-cluster (version 5.0.7): master: 84 instances slave: 84 instances

    This is the RedisCluster test result: https://github.com/go-redis/redis/files/6158805/Results.pdf

    This is the test program: https://github.com/go-redis/redis/files/6158824/perftest.go.gz

    opened by jfjm2018 29
  • high memory usage + solution

    high memory usage + solution

    Hi,

    I noticed that the memory usage was very high in my project. I did a memory profiling with inuse_space, and 90% of my memory is used by go-redis in WriteBuffer. If I understand correctly, each connection in the pool has its own WriteBuffer.

    My project runs 80 goroutines (on 8 CPUs), and each goroutine SETs Redis keys. My Redis keys are large: several MB (less than 100 MB). So it's very easy to understand why the memory usage is so high.

    I think I have a solution, but it requires changes in go-redis internals. We could use a global sync.Pool of WriteBuffer instead.
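
    A minimal sketch of the shared-pool idea (this is not go-redis internals, just an illustration of borrowing buffers from a global sync.Pool instead of keeping one per connection; assumes bytes and sync are imported):

    var writeBufPool = sync.Pool{
        New: func() interface{} { return new(bytes.Buffer) },
    }

    func withWriteBuffer(write func(*bytes.Buffer)) {
        buf := writeBufPool.Get().(*bytes.Buffer)
        buf.Reset() // drop contents left over from the previous user
        defer writeBufPool.Put(buf)
        write(buf) // the buffer is only borrowed for the duration of this write
    }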

    WDYT ?

    opened by pierrre 24
  • Constantly Reestablishing Connections in Cluster Mode

    Constantly Reestablishing Connections in Cluster Mode

    Expected Behavior

    Creating a cluster client using pretty much default settings should not overwhelm Redis with a constant barrage of new connections.

    redis.NewClusterClient(&redis.ClusterOptions{
        Addrs: []string{redisAddr},
        TLSConfig: &tls.Config{},
    })
    

    Current Behavior

    Occasionally, at times completely unrelated to system load/traffic, we are seeing connections being constantly re-established to one of the cluster nodes in our Redis cluster. We are using ElastiCache Redis in cluster mode with TLS enabled, and there seems to be no trigger we can find for this behavior. We also do not see any relevant logs in our service's systemd output in journalctl, other than

    redis_writer:85 {}        Error with write attempt: context deadline exceeded
    

    which seems like a symptom of an overloaded Redis cluster node rather than a cause.

    When this issue happens, running CLIENT LIST on the affected Redis node shows age=0 or age=1 for all connections every time, which reinforces that connections are being dropped constantly for some reason. New connections plummet on other shards in the Redis cluster, and are all concentrated on one.

    New Connections (Cloudwatch)

    NewConnections

    Current Connections (Cloudwatch)

    CurrConnections

    In the example Cloudwatch graphs above we can also see that the issue can move between Redis cluster shards. As you can see, we're currently running with a 4-shard cluster, where each shard has 1 replica.

    Restarting our service does not address this problem, and to address it we basically need to do a hard reset (completely stop the clients for a while, then start them up again).

    We've reached out to AWS support and they have found no issues with our ElastiCache Redis cluster on their end. Additionally, there are no ElastiCache events happening at the time this issue is triggered.

    Possible Solution

    In this issue I'm mainly hoping to get insight into how I could better troubleshoot this issue and/or whether there are additional client options we can use to try to mitigate this worst-case scenario (e.g. rate-limiting the creation of new connections in the cluster client) in the absence of a root-cause fix.

    My main questions are:

    1. Is there a way for me to gather more data that would be helpful for the Redis/go-redis experts here?
    2. Is there a way for us to rate-limit the creation of new connections in the ClusterClient to keep things from getting too out of control if this does continue to occur?
    3. Has anyone else encountered a similar issue with Cluster mode, whether or not it was with ElastiCache Redis?

    Steps to Reproduce

    The description of our environment/service implementation below, as well as the snippet of our NewClusterClient call at the beginning of this issue, provide a fairly complete summary of how we're using both go-redis and ElastiCache Redis. We've not been able to consistently trigger this issue since it often happens when we're not load testing, and are mainly looking for answers for some of our questions above.

    Context (Environment)

    We're running a service that has a simple algorithm for claiming work from a Redis set, doing something with it, and then cleaning it up from Redis. In a nutshell, the algorithm is as follows:

    • SRANDMEMBER pending 10 - grab up to 10 random items from the pool of available work
    • ZADD in_progress <current_timestamp> <grabbed_item> for each of our items we got in the previous step
    • Any work items we weren't able to ZADD have been claimed by some other instance of the service, skip them
    • Once we're done with a work item, SREM pending <grabbed_item>
    • Periodically ZREMRANGEBYSCORE in_progress -inf <5_seconds_ago> so that claimed items aren't claimed forever
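
    A rough go-redis sketch of one pass over that loop (ZAddNX stands in for the "weren't able to ZADD" check; error handling and the actual work are omitted; assumes context, strconv, and time are imported along with go-redis):

    func claimOnce(ctx context.Context, rdb *redis.ClusterClient) {
        items, _ := rdb.SRandMemberN(ctx, "pending", 10).Result()
        for _, item := range items {
            // 0 added means another instance already claimed this item.
            added, _ := rdb.ZAddNX(ctx, "in_progress", &redis.Z{
                Score:  float64(time.Now().Unix()),
                Member: item,
            }).Result()
            if added == 0 {
                continue
            }
            // ... process the work item ...
            rdb.SRem(ctx, "pending", item)
        }
        // Run periodically so claimed items aren't claimed forever.
        cutoff := strconv.FormatInt(time.Now().Add(-5*time.Second).Unix(), 10)
        rdb.ZRemRangeByScore(ctx, "in_progress", "-inf", cutoff)
    }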

    Currently we run this algorithm on 6 EC2 instances, each running one service. Since each instance has 4 CPU cores, go-redis is calculating a max connection pool size of 20 for our ClusterClient. Each service has 20 goroutines performing this algorithm, and each goroutine sleeps 10ms between each invocation of the algorithm.

    At a steady state with no load on the system (just a handful of heartbeat jobs being added to pending every minute) we see a maximum of ~8% EngineCPUUtilization on each Redis shard, and 1-5 new connections/minute. Overall, pretty relaxed. When this issue has triggered recently, it's happened from this steady state, not during load tests.

    Our service is running on EC2 instances running Ubuntu 18.04 (Bionic), and we have tried using github.com/go-redis/redis/v8 v8.0.0 and github.com/go-redis/redis/v8 v8.11.2 - both have run into this issue.

    As mentioned earlier, we're currently running with a 4-shard ElastiCache Redis cluster with TLS enabled, where each shard has 1 replica.

    Detailed Description

    N/A

    Possible Implementation

    N/A

    opened by ianjhoffman 22
  • Add redis.Scan() to scan results from redis maps into structs.

    Add redis.Scan() to scan results from redis maps into structs.

    The package uses reflection to decode default types (int, string, etc.) from Redis map results (key-value pair sequences) into struct fields, where fields are matched to Redis keys by tags.

    Similar to how encoding/json allows custom decoders using UnmarshalJSON(), the package supports decoding arbitrary types into struct fields by defining a Decode(string) error function on the type.

    The field/type spec of every struct that's passed to Scan() is cached in the package so that subsequent scans avoid re-iterating and reflecting over the struct's fields.
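
    Usage ends up looking roughly like this (assuming an existing client rdb and context ctx; the struct and key names are illustrative):

    type Item struct {
        Name  string `redis:"name"`
        Count int    `redis:"count"`
    }

    var item Item
    if err := rdb.HGetAll(ctx, "item:1").Scan(&item); err != nil {
        panic(err)
    }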

    Issue: https://github.com/go-redis/redis/issues/1603

    opened by knadh 20
  • hscan adds support for i386 platform

    hscan adds support for i386 platform

    set: GOARCH=386

    redis 127.0.0.1:6379> set a 100
    redis 127.0.0.1:6379> set b 123456789123456789
    
    type Demo struct {
        A int8 `redis:"a"`
        B int64 `redis:"b"`
    }
    
    client := redis.NewClient(&redis.Options{
        Network: "tcp",
        Addr:    "127.0.0.1:6379",
    })
    ctx := context.Background()
    d := &Demo{}
    err := client.MGet(ctx, "a", "b").Scan(d)
    t.Log(d, err)
    

    It should run normally on the i386 platform, without an error such as: strconv.ParseInt: parsing "123456789123456789": value out of range

    opened by monkey92t 18
  • Add Limiter interface

    Add Limiter interface

    This is an alternative to https://github.com/go-redis/redis/pull/874. Basically it defines a rate limiter interface which allows implementing different limiting strategies in separate packages.

    @xianglinghui what do you think? Is the provided API enough to cover your needs? I am aware that code like https://github.com/go-redis/redis/blob/master/ring.go#L618-L621 requires some work in go-redis, but other than that it seems to be enough.

    opened by vmihailenco 17
  • connection pool timeout

    connection pool timeout

    I am using Redis as a caching layer for long running web services. I initialize the connection like so:

    var (
        Queues  *redis.Client
        Tracker *redis.Client
    )
    
    func Connect(url string) {
        // cut away redis://
        url = url[8:]
    
        // connect to db #0
        Queues = redis.NewClient(&redis.Options{
            Addr:     url,
            Password: "",
            DB:       0,
        })
    
        _, err := Queues.Ping().Result()
        if err != nil {
            panic(err)
        }
    
        // connect to db #1
        Tracker = redis.NewClient(&redis.Options{
            Addr:     url,
            Password: "",
            DB:       1,
        })
    
        _, err = Tracker.Ping().Result()
        if err != nil {
            panic(err)
        }
    }
    

    Albeit in an upcoming patch (our sysadmin is deploying a Redis cluster) it will be like so:

    var (
        Cluster *redis.ClusterClient
    )
    
    func ConnectCluster(cluster, password string) {
        addresses := strings.Split(cluster, ",")
        Cluster = redis.NewClusterClient(&redis.ClusterOptions{
            Addrs: addresses,
            // Password: password,
        })
    
        _, err := Cluster.Ping().Result()
        if err != nil {
            panic(err)
        }
    }
    

    The above code gets run once when service boots up in main.go and the *redis.ClusterClient is being used for the lifetime of the process.

    I realize there is an inherent problem with this approach, which manifests itself in connections timing out after a few days and crashing the application: redis: connection pool timeout. See logs here

    Please advise, what would be a proper approach to use go-redis in this situation?

    opened by Netherdrake 16
  • dial tcp: i/o timeout

    dial tcp: i/o timeout

    I am using go-redis version v6.14.2. My application runs in an AWS cluster behind a load balancer. All Redis requests failed on one of the nodes in the cluster; the rest of the nodes were working as expected. The application started working properly after a restart. We are using ElastiCache. Can you please help me identify the issue? If it is a previously known issue that has been solved in a later version, can you point me to the relevant link?

    The error was "dial tcp: i/o timeout".

    Below is my cluster configuration excluding redis host address and password:

    • ReadOnly : true
    • RouteByLatency : true
    • RouteRandomly : true
    • DialTimeout : 300ms
    • ReadTimeout : 30s
    • Write Timeout : 30s
    • PoolSize : 12000
    • PoolTimeout : 32
    • IdleTimeout : 120s
    • IdleCheckFrequency : 1s

    import (
        goRedisClient "github.com/go-redis/redis"
    )

    func GetRedisClient() *goRedisClient.ClusterClient {
        clusterClientOnce.Do(func() {
            redisClusterClient = goRedisClient.NewClusterClient(
                &goRedisClient.ClusterOptions{
                    Addrs:          viper.GetStringSlice("redis.hosts"),
                    ReadOnly:       true,
                    RouteByLatency: true,
                    RouteRandomly:  true,
                    Password:       viper.GetString("redis.password"),

                    DialTimeout:  viper.GetDuration("redis.dial_timeout"),
                    ReadTimeout:  viper.GetDuration("redis.read_timeout"),
                    WriteTimeout: viper.GetDuration("redis.write_timeout"),

                    PoolSize:           viper.GetInt("redis.max_active_connections"),
                    PoolTimeout:        viper.GetDuration("redis.pool_timeout"),
                    IdleTimeout:        viper.GetDuration("redis.idle_connection_timeout"),
                    IdleCheckFrequency: viper.GetDuration("redis.idle_check_frequency"),
                },
            )

            if err := redisClusterClient.Ping().Err(); err != nil {
                log.WithError(err).Error(errorCreatingRedisClusterClient)
            }
        })
        return redisClusterClient
    }

    As suggested in the comments on https://github.com/go-redis/redis/issues/1194, I wrote the following snippet to dial and test node health for each slot. There were no errors. As mentioned, it happens randomly in one of the clients, not always. It happened again after 3-4 months, and it is always fixed by a restart.

    func CheckRedisSlotConnection(testCase string) {
    	fmt.Println(viper.GetStringSlice("redis.hosts"))
    	fmt.Println("Checking testcase " + testCase)
    	client := redis.GetRedisClient()
    	slots := client.ClusterSlots().Val()
    	addresses := []string{}
    	for _, slot := range slots {
    		for _, node := range slot.Nodes {
    			addresses = append(addresses, node.Addr)
    		}
    	}
    	fmt.Println("Received " + strconv.Itoa(len(addresses)) + " Slots")
    	for _, address := range addresses {
    		fmt.Println("Testing address " + address)
    		conn, err := net.DialTimeout("tcp", address, 500*time.Millisecond)
    		if err != nil {
    			fmt.Println("Error dialing to address " + address + " Error " + err.Error())
    			continue
    		}
    		fmt.Println("Successfully dialled to address " + address)
    		err = conn.Close()
    		if err != nil {
    			fmt.Println("Error closing connection " + err.Error())
    			continue
    		}
    	}
    }
    
    opened by srinidhis94 15
  • Attempt to cleanup cluster logic.

    Attempt to cleanup cluster logic.

    @dim I tried to refactor the code a bit to learn more about Redis Cluster. Changes:

    • NewClusterClient no longer returns an error, because NewClient does not either. I personally think an app can't do anything useful except exit when NewClusterClient returns an error, so a panic should be a good alternative.
    • Now ClusterClient.process tries the next available replica before falling back to randomClient. I am not sure that this change is correct, but I hope so :)
    • randomClient is completely rewritten so it does not require allocating a seen map[string]struct{} on every request. It also checks that the node is online before returning.
    opened by vmihailenco 15
  • How to implement periodic refresh topology

    How to implement periodic refresh topology

    My Redis cluster runs on top of Kubernetes, so sometimes I may move the entire cluster to another set of nodes and they all change IP address. So my go-redis client needs to refresh the topology from time to time. I am wondering whether there is a config option to do that, or do I need to send some cluster-nodes command from time to time?

    opened by smartnews-weitao 14
  • fix: fixes ring.SetAddrs and rebalance race

    fix: fixes ring.SetAddrs and rebalance race

    While working on reducing ring.SetAddrs lock contention (see https://github.com/go-redis/redis/pull/2190#discussion_r953040289) I have discovered a race condition between SetAddrs and rebalance which I would like to fix first and separately.

    The change consists of two commits:

    • a test to reproduce the race
    • the fix

    The fix ensures an atomic update of c.hash and c.shards; otherwise c.hash may return a shard name that is not in c.shards and cause a ring operation panic.

    BenchmarkRingRebalanceLocked shows rebalance latency if that is a concern:

    go test . -run=NONE -bench=BenchmarkRingRebalanceLocked -v -count=10 | benchstat /dev/stdin
    name                   time/op
    RingRebalanceLocked-8  8.50µs ±14%
    

    (Note: it essentially reverts https://github.com/go-redis/redis/commit/a46b053aa626a005a30dfb1ac4e096abcce1ef76)

    Updates https://github.com/go-redis/redis/issues/2077 FYI @szuecs

    opened by AlexanderYastrebov 0
  • chore(deps): bump github.com/onsi/gomega from 1.21.1 to 1.24.1

    chore(deps): bump github.com/onsi/gomega from 1.21.1 to 1.24.1

    Bumps github.com/onsi/gomega from 1.21.1 to 1.24.1.

    Release notes

    Sourced from github.com/onsi/gomega's releases.

    v1.24.1

    No release notes provided.

    v1.24.0

    1.24.0

    Features

    Introducting gcustom - a convenient mechanism for building custom matchers.

    This is an RC release for gcustom. The external API may be tweaked in response to feedback however it is expected to remain mostly stable.

    Maintenance

    • Update BeComparableTo documentation [756eaa0]

    v1.23.0

    1.23.0

    Features

    • Custom formatting on a per-type basis can be provided using format.RegisterCustomFormatter() -- see the docs here

    • Substantial improvement have been made to StopTrying():

      • Users can now use StopTrying().Wrap(err) to wrap errors and StopTrying().Attach(description, object) to attach arbitrary objects to the StopTrying() error
      • StopTrying() is now always interpreted as a failure. If you are an early adopter of StopTrying() you may need to change your code as the prior version would match against the returned value even if StopTrying() was returned. Going forward the StopTrying() api should remain stable.
      • StopTrying() and StopTrying().Now() can both be used in matchers - not just polled functions.
    • TryAgainAfter(duration) is used like StopTrying() but instructs Eventually and Consistently that the poll should be tried again after the specified duration. This allows you to dynamically adjust the polling duration.

    • ctx can now be passed-in as the first argument to Eventually and Consistently.

    Maintenance

    • Bump github.com/onsi/ginkgo/v2 from 2.3.0 to 2.3.1 (#597) [afed901]
    • Bump nokogiri from 1.13.8 to 1.13.9 in /docs (#599) [7c691b3]
    • Bump github.com/google/go-cmp from 0.5.8 to 0.5.9 (#587) [ff22665]

    v1.22.1

    1.22.1

    Fixes

    • When passed a context and no explicit timeout, Eventually will only timeout when the context is cancelled [e5105cf]
    • Allow StopTrying() to be wrapped [bf3cba9]

    Maintenance

    • bump to ginkgo v2.3.0 [c5d5c39]

    v1.22.0

    1.22.0

    ... (truncated)

    Changelog

    Sourced from github.com/onsi/gomega's changelog.

    1.24.1

    Fixes

    • maintain backward compatibility for Eventually and Consisntetly's signatures [4c7df5e]
    • fix small typo (#601) [ea0ebe6]

    Maintenance

    • Bump golang.org/x/net from 0.1.0 to 0.2.0 (#603) [1ba8372]
    • Bump github.com/onsi/ginkgo/v2 from 2.4.0 to 2.5.0 (#602) [f9426cb]
    • fix label-filter in test.yml [d795db6]
    • stop running flakey tests and rely on external network dependencies in CI [7133290]

    1.24.0

    Features

    Introducting gcustom - a convenient mechanism for building custom matchers.

    This is an RC release for gcustom. The external API may be tweaked in response to feedback however it is expected to remain mostly stable.

    Maintenance

    • Update BeComparableTo documentation [756eaa0]

    1.23.0

    Features

    • Custom formatting on a per-type basis can be provided using format.RegisterCustomFormatter() -- see the docs here

    • Substantial improvement have been made to StopTrying():

      • Users can now use StopTrying().Wrap(err) to wrap errors and StopTrying().Attach(description, object) to attach arbitrary objects to the StopTrying() error
      • StopTrying() is now always interpreted as a failure. If you are an early adopter of StopTrying() you may need to change your code as the prior version would match against the returned value even if StopTrying() was returned. Going forward the StopTrying() api should remain stable.
      • StopTrying() and StopTrying().Now() can both be used in matchers - not just polled functions.
    • TryAgainAfter(duration) is used like StopTrying() but instructs Eventually and Consistently that the poll should be tried again after the specified duration. This allows you to dynamically adjust the polling duration.

    • ctx can now be passed-in as the first argument to Eventually and Consistently.

    Maintenance

    • Bump github.com/onsi/ginkgo/v2 from 2.3.0 to 2.3.1 (#597) [afed901]
    • Bump nokogiri from 1.13.8 to 1.13.9 in /docs (#599) [7c691b3]
    • Bump github.com/google/go-cmp from 0.5.8 to 0.5.9 (#587) [ff22665]

    1.22.1

    Fixes

    • When passed a context and no explicit timeout, Eventually will only timeout when the context is cancelled [e5105cf]
    • Allow StopTrying() to be wrapped [bf3cba9]

    ... (truncated)

    Commits
    • 3eef0d7 v1.24.1
    • 4c7df5e maintain backward compatibility for Eventually and Consisntetly's signatures
    • 1ba8372 Bump golang.org/x/net from 0.1.0 to 0.2.0 (#603)
    • f9426cb Bump github.com/onsi/ginkgo/v2 from 2.4.0 to 2.5.0 (#602)
    • ea0ebe6 fix small typo (#601)
    • d795db6 fix label-filter in test.yml
    • 7133290 stop running flakey tests and rely on external network dependencies in CI
    • ed1156b v1.24.0
    • 756eaa0 Update BeComparableTo documentation
    • 6015576 finish documenting gcustom
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies go 
    opened by dependabot[bot] 0
  • Blocking XGroupCreateMkStream does not interrupt on context cancellation

    Blocking XGroupCreateMkStream does not interrupt on context cancellation

    When XGroupCreateMkStream is called in blocking mode (Block = 0), the call does not get interrupted by cancelling the context.

    Expected Behavior

    Blocking function interrupts when context is cancelled

    Current Behavior

    Function continues to block after context cancellation

    Possible Solution

    Unsure yet

    Steps to Reproduce

    package main
    
    import (
    	"context"
    	"fmt"
    	"sync"
    	"time"
    
    	"github.com/go-redis/redis/v9"
    	"github.com/google/uuid"
    )
    
    func main() {
    	rdb := redis.NewUniversalClient(&redis.UniversalOptions{
    		Addrs:    []string{"localhost:6379"},
    		Password: "", // no password set
    		DB:       0,  // use default DB
    	})
    
    	defer rdb.Close()
    
    	ctx, cancelFn := context.WithCancel(context.Background())
    
    	go func() {
    		for idx := 0; idx < 5; idx++ {
    			fmt.Printf("Waiting %v...\n", idx)
    			time.Sleep(time.Second)
    		}
    		cancelFn()
    		fmt.Printf("Cancelled context and now expect blocking XGroupCreateMkStream to be interrupted...\n")
    	}()
    
    	name := "blag"
    	streamName := name
    	groupName := name + "-blah"
    
    	_, err := rdb.XGroupCreateMkStream(ctx, streamName, groupName, "0").Result()
    	fmt.Printf("%v\n", err)
    
    	var wg sync.WaitGroup
    
    	wg.Add(1)
    	go func() {
    		defer wg.Done()
    		objs, err := rdb.XReadGroup(ctx, &redis.XReadGroupArgs{
    			Group:    groupName,
    			Consumer: uuid.NewString(),
    			Streams:  []string{streamName, ">"},
    			Count:    100,
    			Block:    0,
    		}).Result()
    		fmt.Printf("%v, %v\n", err, objs)
    	}()
    
    	wg.Wait()
    	fmt.Printf("Done.\n")
    }
    
    

    Context (Environment)

    I have two goroutines concurrently performing XREADGROUP and XADD in blocking mode. XADD is triggered by external events and is not guaranteed to add items to the stream at any particular cadence or pattern. Shutting down the reading goroutine is not possible due to the blocking call, which does not get interrupted by context cancellation.

    Detailed Description

    Blocking calls should be interrupted when the context is cancelled and the connection closed.

    Possible Implementation

    N/A

    opened by jgirtakovskis 0
  • On MOVED response and on cluster refresh, the lib will not revalidate if the hostname still points to the same IP address

    On MOVED response and on cluster refresh, the lib will not revalidate if the hostname still points to the same IP address

    Expected Behavior

    On a MOVED response from Redis, or when refreshing the cluster nodes, the library should validate that the hostname associated with the MOVED response, or the one coming from CLUSTER SLOTS, still points to the same IP.

    Current Behavior

    With maxRedirects above 0, the library will reissue the command on the node named in the MOVED reply. This works in most cases but does not account for the fact that the hostname returned by the response might not point to the same IP anymore, rendering the redirection handling useless.

    The same behavior is present on cluster refresh.

    Possible Solution

    When handling a MOVED response, the library should check whether the hostname still resolves to the same IP and, if not, mark the node as failing or force a synchronous refresh.
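
    For illustration only, the kind of revalidation being suggested could look roughly like this (a sketch using the standard-library resolver, not existing go-redis code; knownIP stands for whatever IP the client currently has cached for the node):

    func hostStillPointsTo(ctx context.Context, host, knownIP string) (bool, error) {
        ips, err := net.DefaultResolver.LookupHost(ctx, host)
        if err != nil {
            return false, err
        }
        for _, ip := range ips {
            if ip == knownIP {
                return true, nil
            }
        }
        return false, nil // DNS has moved on: mark the node as failing or force a refresh
    }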

    Steps to Reproduce

    Hard to reproduce manually, but I've included a lot of information in the next sections.

    Context (Environment)

    This is happening with an AWS ElastiCache cluster during an engine update from 6.0 -> 6.2. It's a cluster with 1 shard and 2 replicas. While technically MOVED responses should not happen in a single-shard cluster, AWS uses them during the cluster update. We have opened a support case to get more information on the update process; I'll report back if there is anything interesting on that side.

    The library version used is v8.11.5.

    Detailed Description

    I've spent a lot of time trying to get to the bottom of this issue, since we're trying to have an error-free engine update in our production environment. I've ended up using readOnly: false because it's the mode that was causing the fewest errors. I'm not exactly sure of the cause of the difference (it might be related to this issue or not).

    Update behavior from the DNS side

    Every second during an update, I traced the results of the various DNS queries associated with the ElastiCache cluster (the configuration endpoint and the node-dedicated endpoints). Here are the results. In this case, initially, the master node was test-jbeaudet-0001-001.test-jbeaudet.9chuso.usw2.cache.amazonaws.com. During the update it's hard to say, because AWS seems to be resharding inside the cluster and there are multiple instances reporting to be the master at the same time. After the update process, test-jbeaudet-0001-001.test-jbeaudet.9chuso.usw2.cache.amazonaws.com is the new final master.

    As you can see below, the old nodes being decommissioned are still valid during the update.

    Initial 
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.6.107
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.0.17
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.9.206
    test-jbeaudet-0001-001.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.9.206
    test-jbeaudet-0001-002.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.6.107
    test-jbeaudet-0001-003.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.0.17
    
    2022-10-31T15:11:38.722955
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.7.90
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.1.66
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.9.206
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.10.86
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.6.107
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.0.17
    test-jbeaudet-0001-001.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.10.86
    test-jbeaudet-0001-002.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.6.107
    test-jbeaudet-0001-003.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.0.17
    
    2022-10-31T15:15:46.533745
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.10.86
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.9.206
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.6.107
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.1.66
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.7.90
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.0.17
    test-jbeaudet-0001-001.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.10.86
    test-jbeaudet-0001-002.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.7.90
    test-jbeaudet-0001-003.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.0.17
    
    2022-10-31T15:16:47.835160
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.10.86
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.7.90
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.9.206
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.6.107
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.1.66
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.0.17
    test-jbeaudet-0001-001.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.10.86
    test-jbeaudet-0001-002.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.7.90
    test-jbeaudet-0001-003.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.1.66
    
    2022-10-31T15:26:12.539041
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.10.86
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.1.66
    clustercfg.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.7.90
    test-jbeaudet-0001-001.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.10.86
    test-jbeaudet-0001-002.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.7.90
    test-jbeaudet-0001-003.test-jbeaudet.9chuso.usw2.cache.amazonaws.com/172.30.1.66
    

    Behavior in the library

    During the update, our application, which issues GET commands on the master only, started receiving MOVED 4559 test-jbeaudet-0001-001.test-jbeaudet.9chuso.usw2.cache.amazonaws.com:6379. This response is handled here and uses the hostname to fetch the node here. However, as we can see in the previous section, that hostname points to a new IP address! But since the code does not handle that case, it reissues the command to the same node, gets a new MOVED response, and repeats this until the maxRedirects count has been reached.

    The same behavior happens when reloading the cluster state here: nothing validates that the DNS endpoint has changed, so you're stuck with connections pointing to the old node that is being decommissioned.

    I wish I could propose a solution but I'm not super familiar with the intricacies of the library.

    Thanks for looking into it!

    opened by jbeaudetupgrade 0
  • v9: `rediscensus.TracingHook` doesn't implement `redis.Hook`

    v9: `rediscensus.TracingHook` doesn't implement `redis.Hook`

    package main
    
    import (
    	"github.com/go-redis/redis/extra/rediscensus/v9"
    	"github.com/go-redis/redis/v9"
    )
    
    func main() {
    	client := redis.Client{}
    	client.AddHook(rediscensus.TracingHook{})
    }
    

    Expected Behavior

    rediscensus.TracingHook should implement redis.Hook.

    Current Behavior

    It does not.

    # github.com/go-redis/redis/extra/rediscensus/v9
    ../gopath3627430044/pkg/mod/github.com/go-redis/redis/extra/rediscensus/[email protected]/rediscensus.go:14:20: cannot use (*TracingHook)(nil) (value of type *TracingHook) as type redis.Hook in variable declaration:
    	*TracingHook does not implement redis.Hook (missing DialHook method)
    
    Go build failed.
    

    Steps to Reproduce

    https://go.dev/play/p/pbe3qBaLVqj

    opened by dcormier 0
Releases(v9.0.0-rc.1)
  • v9.0.0-rc.1(Oct 14, 2022)

  • v9.0.0-beta.3(Oct 6, 2022)

  • v9.0.0-beta.2(Jul 28, 2022)

  • v9.0.0-beta.1(Jun 4, 2022)

  • v8.11.5(Mar 17, 2022)

  • v8.11.4(Oct 4, 2021)

  • v8.11.2(Aug 6, 2021)

    Important changes:

    Reverted #1824, because it has a significant impact on the connection pool (#1849). We will re-add this feature in v9.

    Users who are already on v8.11.1 need to upgrade immediately.

    Source code(tar.gz)
    Source code(zip)
  • v8.11.1(Jul 29, 2021)

    Enhancement:

    • DBSize, ScriptLoad, ScriptFlush and ScriptExists now go through hooks. (#1811)
    • Added a FIFO option to the connection pool; set Options.PoolFIFO to true to enable it. (#1820)
    • The connection is checked before use, which increases CPU time by 5-10%. (#1824)
    • Check Failing() before serving a random node. (#1825)
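
    For example, the FIFO pool mode from #1820 is enabled with a single option (the other field shown is just the usual connection setting):

    rdb := redis.NewClient(&redis.Options{
        Addr:     "localhost:6379",
        PoolFIFO: true, // use a FIFO connection pool instead of the default LIFO
    })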

    Command:

    • The RPOP command supports a Count option (redis-server >= 6.2)
    • New commands: GeoSearch, GeoSearchStore (redis-server >= 6.2)

    Thanks: @ktaekwon000 @hidu @AnatolyRugalev

    Source code(tar.gz)
    Source code(zip)
  • v7.4.1(Jul 16, 2021)

  • v8.11.0(Jun 30, 2021)

    Change

    Removed OpenTelemetry metrics (see #1534, #1805)

    New Command

    1. XAutoClaim
    2. ZRangeStore
    3. ZUnion

    Command More Options

    1. XAdd: NoMkStream+MinID+Limit
    2. XTrim: MinID+Limit
    3. XGroup: CreateConsumer
    4. ZAdd: GT+LT
    5. ZRange: ByScore+ByLex+Rev+Limit

    New API

    1. XAutoClaim(ctx context.Context, a *XAutoClaimArgs) *XAutoClaimCmd
    2. XAutoClaimJustID(ctx context.Context, a *XAutoClaimArgs) *XAutoClaimJustIDCmd
    3. ZRangeStore(ctx context.Context, dst string, z ZRangeArgs) *IntCmd
    4. ZAddArgs(ctx context.Context, key string, args ZAddArgs) *IntCmd
    5. ZAddArgsIncr(ctx context.Context, key string, args ZAddArgs) *FloatCmd
    6. ZRangeArgs(ctx context.Context, z ZRangeArgs) *StringSliceCmd
    7. ZRangeArgsWithScores(ctx context.Context, z ZRangeArgs) *ZSliceCmd
    8. ZUnion(ctx context.Context, store ZStore) *StringSliceCmd
    9. ZUnionWithScores(ctx context.Context, store ZStore) *ZSliceCmd

    Marked deprecated (to be removed in v9)

    1. ZAddCh
    2. ZIncr
    3. ZAddNXCh
    4. ZAddXXCh
    5. ZIncrNX
    6. ZIncrXX
    7. XTrim
    8. XTrimApprox
    9. XAddArgs.MaxLenApprox

    Remarks

    There is a bug in the xtrim/xadd limit option (https://github.com/redis/redis/issues/9046)

    Source code(tar.gz)
    Source code(zip)
  • v8.10.0(Jun 3, 2021)

pggen - generate type safe Go methods from Postgres SQL queries

pggen - generate type safe Go methods from Postgres SQL queries pggen is a tool that generates Go code to provide a typesafe wrapper around Postgres q

Joe Schafer 202 Nov 21, 2022
Devcloud-go provides a sql-driver for mysql which named devspore driver and a redis client which named devspore client,

Devcloud-go Devcloud-go provides a sql-driver for mysql which named devspore driver and a redis client which named devspore client, you can use them w

HUAWEI CLOUD 11 Jun 9, 2022
Go client for Redis

Redigo Redigo is a Go client for the Redis database. Features A Print-like API with support for all Redis commands. Pipelining, including pipelined tr

null 9.3k Nov 22, 2022
REST based Redis client built on top of Upstash REST API

An HTTP/REST based Redis client built on top of Upstash REST API.

Andreas Thomas 5 Jul 31, 2022
Typescript type declaration to PostgreSQL CREATE TABLE converter

ts2psql NOTE: This is WIP. Details in this readme are ideal state. Current usage: go build && ./ts2psql (or go build && ts2psql if on Windows OS). A s

null 1 Jan 13, 2022
Golang Redis Postgres to-do Project

Golang Backend Project Problem Statement Build a to-do application with Golang a

null 6 Oct 17, 2022
WAL-G is an archival restoration tool for PostgreSQL, MySQL/MariaDB, and MS SQL Server (beta for MongoDB and Redis).

WAL-G is an archival restoration tool for PostgreSQL, MySQL/MariaDB, and MS SQL Server (beta for MongoDB and Redis).

null 2.4k Nov 21, 2022
Query redis with SQL

reqlite reqlite makes it possible to query data in Redis with SQL. Queries are executed client-side with SQLite (not on the redis server). This projec

Augmentable 45 Aug 31, 2022
Go library that stores data in Redis with SQL-like schema

Go library that stores data in Redis with SQL-like schema. The goal of this library is we can store data in Redis with table form.

kaharman 2 Mar 14, 2022
A demo project that automatically restarts with a trio of docker, redis and go and transmits page visits.

A demo project that automatically restarts with a trio of docker, redis and go and transmits page visits.

Sami Salih İbrahimbaş 0 Feb 6, 2022
Cross-platform client for PostgreSQL databases

pgweb Web-based PostgreSQL database browser written in Go. Overview Pgweb is a web-based database browser for PostgreSQL, written in Go and works on O

Dan Sosedoff 7.6k Nov 16, 2022
Go client for AMQP 0.9.1

Go RabbitMQ Client Library This is an AMQP 0.9.1 client with RabbitMQ extensions in Go. Project Maturity This project has been used in production syst

Sean Treadway 4.5k Nov 22, 2022
Interactive client for PostgreSQL and MySQL

dblab Interactive client for PostgreSQL and MySQL. Overview dblab is a fast and lightweight interactive terminal based UI application for PostgreSQL a

Daniel Omar Vergara Pérez 602 Nov 20, 2022
Cross-platform client for PostgreSQL databases

pgweb Web-based PostgreSQL database browser written in Go. Overview Pgweb is a web-based database browser for PostgreSQL, written in Go and works on O

Dan Sosedoff 7.6k Nov 16, 2022
[mirror] the database client and tools for the Go vulnerability database

The Go Vulnerability Database golang.org/x/vulndb This repository is a prototype of the Go Vulnerability Database. Read the Draft Design. Neither the

Go 199 Nov 14, 2022
Migration tool for ksqlDB, which uses the ksqldb-go client.

ksqldb-migrate Migration tool for ksqlDB, which uses the ksqldb-go client.

Thomas Meitz 2 Nov 15, 2022
A client for TiKV

client-tikv ./tikv-client --pd 127.0.0.1:2379,127.0.0.2:2379,127.0.0.3:2379 usage You can query the value directly according to the key. tikv> select

#7 2 Apr 16, 2022
Client to import measurements to timestream databases.

Timestream DB Client Client to import measurements to timestream databases. Supported Databases/Services AWS Timestream AWS Timestream Run NewTimestre

Tommzn 0 Jan 11, 2022
Go-clickhouse - ClickHouse client for Go

ClickHouse client for Go 1.18+ This client uses native protocol to communicate w

Uptrace 150 Nov 20, 2022