Type-safe Redis client for Golang

Overview

Redis client for Golang

Ecosystem

Features

Installation

go-redis supports the two most recent Go versions and requires a Go version with modules support. So make sure to initialize a Go module:

go mod init github.com/my/repo

And then install go-redis/v8 (note the v8 in the import path; omitting it is a common mistake):

go get github.com/go-redis/redis/v8

Quickstart

import (
    "context"
    "fmt"

    "github.com/go-redis/redis/v8"
)

var ctx = context.Background()

func ExampleClient() {
    rdb := redis.NewClient(&redis.Options{
        Addr:     "localhost:6379",
        Password: "", // no password set
        DB:       0,  // use default DB
    })

    err := rdb.Set(ctx, "key", "value", 0).Err()
    if err != nil {
        panic(err)
    }

    val, err := rdb.Get(ctx, "key").Result()
    if err != nil {
        panic(err)
    }
    fmt.Println("key", val)

    val2, err := rdb.Get(ctx, "key2").Result()
    if err == redis.Nil {
        fmt.Println("key2 does not exist")
    } else if err != nil {
        panic(err)
    } else {
        fmt.Println("key2", val2)
    }
    // Output: key value
    // key2 does not exist
}

Look and feel

Some corner cases:

// SET key value EX 10 NX
set, err := rdb.SetNX(ctx, "key", "value", 10*time.Second).Result()

// SET key value KEEPTTL NX
set, err := rdb.SetNX(ctx, "key", "value", redis.KeepTTL).Result()

// SORT list LIMIT 0 2 ASC
vals, err := rdb.Sort(ctx, "list", &redis.Sort{Offset: 0, Count: 2, Order: "ASC"}).Result()

// ZRANGEBYSCORE zset -inf +inf WITHSCORES LIMIT 0 2
vals, err := rdb.ZRangeByScoreWithScores(ctx, "zset", &redis.ZRangeBy{
    Min: "-inf",
    Max: "+inf",
    Offset: 0,
    Count: 2,
}).Result()

// ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3 AGGREGATE SUM
vals, err := rdb.ZInterStore(ctx, "out", &redis.ZStore{
    Keys:    []string{"zset1", "zset2"},
    Weights: []int64{2, 3},
}).Result()

// EVAL "return {KEYS[1],ARGV[1]}" 1 "key" "hello"
vals, err := rdb.Eval(ctx, "return {KEYS[1],ARGV[1]}", []string{"key"}, "hello").Result()

// custom command
res, err := rdb.Do(ctx, "set", "key", "value").Result()

Run the test

go-redis will start a redis-server and run the test cases.

The paths of the redis-server binary and the redis config file are defined in main_test.go:

var (
	redisServerBin, _  = filepath.Abs(filepath.Join("testdata", "redis", "src", "redis-server"))
	redisServerConf, _ = filepath.Abs(filepath.Join("testdata", "redis", "redis.conf"))
)

For local testing, you can change the variables to point to your local files, or create a soft link to the corresponding folder for redis-server and copy the config file into testdata/redis/:

ln -s /usr/bin/redis-server ./go-redis/testdata/redis/src
cp ./go-redis/testdata/redis.conf ./go-redis/testdata/redis/

Lastly, run:

go test

See also

Issues
  • undefined: otel.Meter or cannot find package

    undefined: otel.Meter or cannot find package "go.opentelemetry.io/otel/api/trace"

    To fix cannot find package "go.opentelemetry.io/otel/api/trace" or undefined: otel.Meter:

    1. Make sure to initialize a Go module: go mod init github.com/my/repo

    2. Make sure to use the correct import path with v8 at the end: go get github.com/go-redis/redis/v8

    For example:

    mkdir /tmp/redis-test
    cd /tmp/redis-test
    go mod init redis-test
    go get github.com/go-redis/redis/v8
    

    The root cause

    The error is not caused by OpenTelemetry. OpenTelemetry is just the first module Go tries to install. And the error will not go away until you start using Go modules properly.

    The presence of $GOROOT or $GOPATH in error messages indicates that you are NOT using Go modules.
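
    With modules set up this way, the generated go.mod ends up requiring the /v8 module path. A minimal sketch (the module name matches the example above; version numbers are illustrative):

    module redis-test

    go 1.17

    require github.com/go-redis/redis/v8 v8.11.5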

    opened by vmihailenco 33
  • V8 performance degradation ~20%

    V8 performance degradation ~20%

    @monkey92t

    Hi, thank you for your tests. I ran your tests in our environment and saw similar comparative results. However, when I slightly modified the tests to reflect our use case more accurately (and how Go's HTTP server spawns a goroutine for each request), all of a sudden the performance degraded for V8. This is especially evident with 100+ concurrency.

    2 changes that were made:

    1. Instead of pre-spawning goroutines that each run a fixed number of Get/Set calls in a for loop (this is retained as get2/set2), the test runs through a fixed number of requests and spawns a goroutine per request (only up to the concurrency limit) to process them.
    2. Each request generates a random key so the load is spread across the Redis cluster.

    Both V7 and V8 saw a decrease in throughput when comparing pre-spawned goroutines with a goroutine per request. However, the decrease for V7 is very small, as expected, while for V8 it is quite dramatic.

    go-redis version: v7.4.0 and v8.6.0

    redis-cluster (version 5.0.7): master: 84 instances slave: 84 instances

    This is the RedisCluster test result: https://github.com/go-redis/redis/files/6158805/Results.pdf

    This is the test program: https://github.com/go-redis/redis/files/6158824/perftest.go.gz
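
    For readers who cannot open the attachment, a rough sketch of the goroutine-per-request pattern described above (heavily simplified; the function, key format and counts are hypothetical, not the actual test program):

    import (
        "context"
        "fmt"
        "log"
        "math/rand"
        "sync"

        "github.com/go-redis/redis/v8"
    )

    func runBench(rdb *redis.ClusterClient, concurrency, totalRequests int) {
        ctx := context.Background()
        sem := make(chan struct{}, concurrency) // cap in-flight goroutines at the test concurrency
        var wg sync.WaitGroup
        for i := 0; i < totalRequests; i++ {
            sem <- struct{}{}
            wg.Add(1)
            go func() {
                defer func() { <-sem; wg.Done() }()
                // a random key per request so the load spreads across cluster slots
                key := fmt.Sprintf("bench:%d", rand.Intn(1000000))
                if err := rdb.Set(ctx, key, "value", 0).Err(); err != nil {
                    log.Println("set:", err)
                }
                if err := rdb.Get(ctx, key).Err(); err != nil {
                    log.Println("get:", err)
                }
            }()
        }
        wg.Wait()
    }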

    opened by jfjm2018 28
  • high memory usage + solution

    high memory usage + solution

    Hi,

    I noticed that memory usage was very high in my project. I did memory profiling with inuse_space, and 90% of my memory is used by go-redis in WriteBuffer. If I understand correctly, each connection in the pool has its own WriteBuffer.

    My project runs 80 goroutines (on 8 CPUs), and each goroutine SETs Redis keys. My keys are large: several MB (less than 100 MB) each. So it's easy to see why memory usage is very high.

    I think I have a solution, but it requires changes in go-redis internals. We could use a global sync.Pool of WriteBuffer instead.

    WDYT ?
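
    A minimal sketch of the shared-pool idea (this is not go-redis internals; writeCommand is a hypothetical helper that just illustrates the proposed pattern):

    import (
        "bytes"
        "sync"
    )

    // one pool of write buffers shared by all connections,
    // instead of one long-lived buffer per pooled connection
    var writeBufPool = sync.Pool{
        New: func() interface{} { return new(bytes.Buffer) },
    }

    func writeCommand(write func(*bytes.Buffer) error) error {
        buf := writeBufPool.Get().(*bytes.Buffer)
        defer func() {
            buf.Reset() // return an empty buffer to the pool for reuse
            writeBufPool.Put(buf)
        }()
        return write(buf)
    }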

    opened by pierrre 24
  • Constantly Reestablishing Connections in Cluster Mode

    Constantly Reestablishing Connections in Cluster Mode

    Expected Behavior

    Creating a cluster client using pretty much default settings should not overwhelm Redis with a constant barrage of new connections.

    redis.NewClusterClient(&redis.ClusterOptions{
        Addrs: []string{redisAddr},
        TLSConfig: &tls.Config{},
    })
    

    Current Behavior

    Occasionally, at times completely unrelated to system load/traffic, we are seeing connections being constantly re-established to one of the cluster nodes in our Redis cluster. We are using ElastiCache Redis in cluster mode with TLS enabled, and there seems to be no trigger we can find for this behavior. We also do not see any relevant logs in our service's systemd output in journalctl, other than

    redis_writer:85 {}        Error with write attempt: context deadline exceeded
    

    which seems more like a symptom of an overloaded Redis cluster node rather than a cause.

    When this issue happens, running CLIENT LIST on the affected Redis node shows age=0 or age=1 for all connections every time, which reinforces that connections are being dropped constantly for some reason. New connections plummet on other shards in the Redis cluster, and are all concentrated on one.

    New Connections (Cloudwatch)

    NewConnections

    Current Connections (Cloudwatch)

    CurrConnections

    In the example Cloudwatch graphs above we can also see that the issue can move between Redis cluster shards. As you can see, we're currently running with a 4-shard cluster, where each shard has 1 replica.

    Restarting our service does not fix this problem; to address it we basically need to do a hard reset (completely stop the clients for a while, then start them up again).

    We've reached out to AWS support and they have found no issues with our ElastiCache Redis cluster on their end. Additionally, there are no ElastiCache events happening at the time this issue is triggered.

    Possible Solution

    In this issue I'm mainly hoping to get insight into how I could better troubleshoot this issue and/or if there are additional client options we can use to try and mitigate this worst case scenario (i.e. rate limiting the creation of new connections in the cluster client) in absence of a root-cause fix.

    My main questions are:

    1. Is there a way for me to gather more data that would be helpful for the Redis/go-redis experts here?
    2. Is there a way for us to rate-limit the creation of new connections in the ClusterClient to keep things from getting too out of control if this does continue to occur?
    3. Has anyone else encountered a similar issue with Cluster mode, whether or not it was with ElastiCache Redis?
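
    Regarding question 2 above: go-redis does not appear to expose a built-in limit on how fast new connections are created, but ClusterOptions accepts a custom Dialer, which can throttle dialing. A rough sketch of that mitigation, assuming golang.org/x/time/rate (numbers are illustrative; this is not a root-cause fix):

    import (
        "context"
        "net"
        "time"

        "github.com/go-redis/redis/v8"
        "golang.org/x/time/rate"
    )

    func newThrottledClusterClient(redisAddr string) *redis.ClusterClient {
        // allow at most 10 new connections per second, with bursts of 5
        limiter := rate.NewLimiter(rate.Limit(10), 5)

        return redis.NewClusterClient(&redis.ClusterOptions{
            Addrs: []string{redisAddr},
            Dialer: func(ctx context.Context, network, addr string) (net.Conn, error) {
                // block (or fail if ctx is done) before dialing another connection
                if err := limiter.Wait(ctx); err != nil {
                    return nil, err
                }
                d := net.Dialer{Timeout: 5 * time.Second}
                return d.DialContext(ctx, network, addr)
            },
        })
    }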

    Steps to Reproduce

    The description of our environment/service implementation below, as well as the snippet of our NewClusterClient call at the beginning of this issue, provide a fairly complete summary of how we're using both go-redis and ElastiCache Redis. We've not been able to consistently trigger this issue since it often happens when we're not load testing, and are mainly looking for answers for some of our questions above.

    Context (Environment)

    We're running a service that has a simple algorithm for claiming work from a Redis set, doing something with it, and then cleaning it up from Redis. In a nutshell, the algorithm is as follows:

    • SRANDMEMBER pending 10 - grab up to 10 random items from the pool of available work
    • ZADD in_progress <current_timestamp> <grabbed_item> for each of our items we got in the previous step
    • Any work items we weren't able to ZADD have been claimed by some other instance of the service, skip them
    • Once we're done with a work item, SREM pending <grabbed_item>
    • Periodically ZREMRANGEBYSCORE in_progress -inf <5_seconds_ago> so that claimed items aren't claimed forever
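
    A rough go-redis sketch of one iteration of the claim/process loop above (key names come from the description; the process function is hypothetical and error handling is trimmed):

    import (
        "context"
        "strconv"
        "time"

        "github.com/go-redis/redis/v8"
    )

    func claimAndProcess(ctx context.Context, rdb *redis.ClusterClient, process func(string)) {
        // SRANDMEMBER pending 10 - grab up to 10 random items
        items, _ := rdb.SRandMemberN(ctx, "pending", 10).Result()

        for _, item := range items {
            // ZADD NX in_progress <now> <item>; 0 added means another instance claimed it
            added, _ := rdb.ZAddNX(ctx, "in_progress", &redis.Z{
                Score:  float64(time.Now().Unix()),
                Member: item,
            }).Result()
            if added == 0 {
                continue
            }

            process(item)

            // done with the work item, remove it from the pending set
            rdb.SRem(ctx, "pending", item)
        }

        // run periodically elsewhere: drop claims older than 5 seconds
        cutoff := strconv.FormatInt(time.Now().Add(-5*time.Second).Unix(), 10)
        rdb.ZRemRangeByScore(ctx, "in_progress", "-inf", cutoff)
    }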

    Currently we run this algorithm on 6 EC2 instances, each running one service. Since each instance has 4 CPU cores, go-redis is calculating a max connection pool size of 20 for our ClusterClient. Each service has 20 goroutines performing this algorithm, and each goroutine sleeps 10ms between each invocation of the algorithm.

    At a steady state with no load on the system (just a handful of heartbeat jobs being added to pending every minute) we see a maximum of ~8% EngineCPUUtilization on each Redis shard, and 1-5 new connections/minute. Overall, pretty relaxed. When this issue has triggered recently, it's happened from this steady state, not during load tests.

    Our service is running on EC2 instances running Ubuntu 18.04 (Bionic), and we have tried using github.com/go-redis/redis/v8 v8.0.0 and github.com/go-redis/redis/v8 v8.11.2 - both have run into this issue.

    As mentioned earlier, we're currently running with a 4-shard ElastiCache Redis cluster with TLS enabled, where each shard has 1 replica.

    Detailed Description

    N/A

    Possible Implementation

    N/A

    opened by enjmusic 22
  • Add redis.Scan() to scan results from redis maps into structs.

    Add redis.Scan() to scan results from redis maps into structs.

    The package uses reflection to decode default types (int, string etc.) from Redis map results (key-value pair sequences) into struct fields where the fields are matched to Redis keys by tags.

    Similar to how encoding/json allows custom decoders using UnmarshalJSON(), the package supports decoding of arbitrary types into struct fields by defining a Decode(string) error function on the types.

    The field/type spec of every struct that's passed to Scan() is cached in the package so that subsequent scans avoid iteration and reflection of the struct's fields.
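
    A small usage sketch of the tag-based scanning this adds, roughly as it ended up in v8 (the struct, key and field names are just examples):

    type Model struct {
        Str string `redis:"str"`
        Int int    `redis:"int"`
    }

    // write a hash, then scan it back into the struct by tag
    if err := rdb.HSet(ctx, "key", "str", "hello", "int", 123).Err(); err != nil {
        panic(err)
    }

    var model Model
    if err := rdb.HGetAll(ctx, "key").Scan(&model); err != nil {
        panic(err)
    }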

    Issue: https://github.com/go-redis/redis/issues/1603

    opened by knadh 20
  • hscan adds support for i386 platform

    hscan adds support for i386 platform

    set: GOARCH=386

    redis 127.0.0.1:6379> set a 100
    redis 127.0.0.1:6379> set b 123456789123456789
    
    type Demo struct {
        A int8 `redis:"a"`
        B int64 `redis:"b"`
    }
    
    client := redis.NewClient(&Options{
            Network:      "tcp",
            Addr:         "127.0.0.1:6379",
    })  
    ctx := context.Background()
    d := &Demo{}
    err := client.MGet(ctx, "a", "b").Scan(d)
    t.Log(d, err)
    

    It should run normally on the i386 platform, and there should not be an error like: strconv.ParseInt: parsing "123456789123456789": value out of range

    opened by monkey92t 18
  • Add Limiter interface

    Add Limiter interface

    This is an alternative to https://github.com/go-redis/redis/pull/874. Basically it defines a rate limiter interface which allows different limiting strategies to be implemented in separate packages.

    @xianglinghui what do you think? Is the provided API enough to cover your needs? I am aware that code like https://github.com/go-redis/redis/blob/master/ring.go#L618-L621 requires some work in go-redis, but other than that it seems to be enough.
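
    For reference, the limiter interface that eventually shipped looks roughly like the one below; the implementation is a toy one that caps in-flight commands, not part of go-redis:

    import "errors"

    // Limiter, roughly as exposed via Options.Limiter in recent versions.
    type Limiter interface {
        // Allow returns nil if the operation is allowed, an error otherwise.
        Allow() error
        // ReportResult reports the result of a previously allowed operation.
        ReportResult(result error)
    }

    // maxInflightLimiter rejects commands once too many are in flight.
    type maxInflightLimiter struct {
        sem chan struct{}
    }

    func (l *maxInflightLimiter) Allow() error {
        select {
        case l.sem <- struct{}{}:
            return nil
        default:
            return errors.New("too many in-flight commands")
        }
    }

    func (l *maxInflightLimiter) ReportResult(result error) {
        <-l.sem // the allowed operation finished, free a slot
    }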

    opened by vmihailenco 17
  • dial tcp: i/o timeout

    dial tcp: i/o timeout

    I am using go-redis version v6.14.2. My application runs in an AWS cluster behind a load balancer. All Redis requests failed on one of the nodes in the cluster; the rest of the nodes were working as expected. The application started working properly after a restart. We are using ElastiCache. Can you please help me identify the issue? If it is a previously known issue that is fixed in the latest version, can you point me to that link?

    The error was "dial tcp: i/o timeout".

    Below is my cluster configuration excluding redis host address and password:

    • ReadOnly : true
    • RouteByLatency : true
    • RouteRandomly : true
    • DialTimeout : 300ms
    • ReadTimeout : 30s
    • Write Timeout : 30s
    • PoolSize : 12000
    • PoolTimeout : 32
    • IdleTimeout : 120s
    • IdleCheckFrequency : 1s

    import (
        goRedisClient "github.com/go-redis/redis"
    )

    func GetRedisClient() *goRedisClient.ClusterClient {
        clusterClientOnce.Do(func() {
            redisClusterClient = goRedisClient.NewClusterClient(
                &goRedisClient.ClusterOptions{
                    Addrs:          viper.GetStringSlice("redis.hosts"),
                    ReadOnly:       true,
                    RouteByLatency: true,
                    RouteRandomly:  true,
                    Password:       viper.GetString("redis.password"),

                    DialTimeout:  viper.GetDuration("redis.dial_timeout"),
                    ReadTimeout:  viper.GetDuration("redis.read_timeout"),
                    WriteTimeout: viper.GetDuration("redis.write_timeout"),

                    PoolSize:           viper.GetInt("redis.max_active_connections"),
                    PoolTimeout:        viper.GetDuration("redis.pool_timeout"),
                    IdleTimeout:        viper.GetDuration("redis.idle_connection_timeout"),
                    IdleCheckFrequency: viper.GetDuration("redis.idle_check_frequency"),
                },
            )

            if err := redisClusterClient.Ping().Err(); err != nil {
                log.WithError(err).Error(errorCreatingRedisClusterClient)
            }
        })
        return redisClusterClient
    }
    

    As suggested in the comments on https://github.com/go-redis/redis/issues/1194, I wrote the following snippet to dial and test node health for each slot. There were no errors. As mentioned, it happens randomly in one of the clients, not always. It happened again after 3-4 months, and it is always fixed by a restart.

    func CheckRedisSlotConnection(testCase string) {
    	fmt.Println(viper.GetStringSlice("redis.hosts"))
    	fmt.Println("Checking testcase " + testCase)
    	client := redis.GetRedisClient()
    	slots := client.ClusterSlots().Val()
    	addresses := []string{}
    	for _, slot := range slots {
    		for _, node := range slot.Nodes {
    			addresses = append(addresses, node.Addr)
    		}
    	}
    	fmt.Println("Received " + strconv.Itoa(len(addresses)) + " Slots")
    	for _, address := range addresses {
    		fmt.Println("Testing address " + address)
    		conn, err := net.DialTimeout("tcp", address, 500*time.Millisecond)
    		if err != nil {
    			fmt.Println("Error dialing to address " + address + " Error " + err.Error())
    			continue
    		}
    		fmt.Println("Successfully dialled to address " + address)
    		err = conn.Close()
    		if err != nil {
    			fmt.Println("Error closing connection " + err.Error())
    			continue
    		}
    	}
    }
    
    opened by srinidhis94 15
  • Attempt to cleanup cluster logic.

    Attempt to cleanup cluster logic.

    @dim I tried to refactor code a bit to learn more about Redis cluster. Changes:

    • NewClusterClient does not return an error any more, because NewClient does not either. I personally think an app can't do anything useful except exit when NewClusterClient returns an error, so a panic should be a good alternative.
    • Now ClusterClient.process tries the next available replica before falling back to randomClient. I am not sure that this change is correct, but I hope so :)
    • randomClient is completely rewritten so it does not require allocating a seen map[string]struct{}{} on every request. It also checks that the node is online before returning.
    opened by vmihailenco 15
  • How to implement periodic refresh topology

    How to implement periodic refresh topology

    My Redis cluster runs on top of Kubernetes, so sometimes I may move the entire cluster to another set of nodes and they all change IP addresses. So my go-redis client needs to refresh the topology from time to time. I am wondering: is there a config option to do that, or do I need to send some cluster-nodes command from time to time?
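
    As far as I know there is no built-in periodic-refresh option in v8, but recent versions of ClusterClient expose a ReloadState method, which can be driven from a ticker as a workaround. A sketch (the interval is arbitrary):

    func refreshTopology(ctx context.Context, rdb *redis.ClusterClient, every time.Duration) {
        ticker := time.NewTicker(every)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return
            case <-ticker.C:
                // ask go-redis to re-read the cluster slot/node layout
                rdb.ReloadState(ctx)
            }
        }
    }

    // e.g. go refreshTopology(ctx, rdb, time.Minute)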

    opened by smartnews-weitao 14
  • redis: can't parse

    redis: can't parse "ype\":\"PerfdataValue\",\"unit\":\"\",\"value\":0.0,\"warn\":null}],\"status\":{\"checkercomponent\":{\"checker\":{\"i"

    We at @Icinga are developing two applications: one writes to Redis (and publishes events) and the other reads (and subscribes to the events).

    The writer periodically PUBLISHes data like...

    {"ApiListener":{"perfdata":[{"counter":false,"crit":null,"label":"api_num_conn_endpoints","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_endpoints","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_http_clients","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_clients","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_relay_queue_item_rate","max":null,"min":null,"type":"PerfdataValue","unit":"","value":46.399999999999998579,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_relay_queue_items","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_sync_queue_item_rate","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_sync_queue_items","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_work_queue_count","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_work_queue_item_rate","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_work_queue_items","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_not_conn_endpoints","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null}],"status":{"api":{"conn_endpoints":[],"http":{"clients":0.0},"identity":"CENSOREDCENSOREDCENSOREDCENSO","json_rpc":{"clients":0.0,"relay_queue_item_rate":46.399999999999998579,"relay_queue_items":0.0,"sync_queue_item_rate":0.0,"sync_queue_items":0.0,"work_queue_count":0.0,"work_queue_item_rate":0.0,"work_queue_items":0.0},"not_conn_endpoints":[],"num_conn_endpoints":0.0,"num_endpoints":0.0,"num_not_conn_endpoints":0.0,"zones":{"alexanders-mbp.int.netways.de":{"client_log_lag":0.0,"connected":true,"endpoints":["alexanders-mbp.int.netways.de"],"parent_zone":""}}}}},"CIB":{"perfdata":[],"status":{"active_host_checks":1.8500000000000000888,"active_host_checks_15min":1649.0,"active_host_checks_1min":111.0,"active_host_checks_5min":562.0,"active_service_checks":21.350000000000001421,"active_service_checks_15min":19280.0,"active_service_checks_1min":1281.0,"active_service_checks_5min":6399.0,"avg_execution_time":0.021172960599263507958,"avg_latency":0.011358479658762613354,"max_execution_time":0.077728986740112304688,"max_latency":0.045314073562622070312,"min_execution_time":0.001573085784912109375,"min_latency":0.0,"num_hosts_acknowledged":0.0,"num_hosts_down":1.0,"num_hosts_flapping":0.0,"num_hosts_in_downtime":0.0,"num_hosts_pending":0.0,"num_hosts_unreachable":0.0,"num_hosts_up":0.0,"num_services_acknowledged":0.0,"num_services_critical":3.0,"num_services_flapping":0.0,"num_services_in_downtime":0.0,"num_services_ok":4.0,"num_services_pending":0.0,"num_services_unknown":3.0,"num_services_unreachable":12.0,"num_services_warning":2.0,"passive_host_checks":0.0,"passive_host_checks_15min":0.0,"passive_host_checks_1min":0.0,"passive_host_checks_5min":0.0,"passive_service_checks":0.0,"passiv
e_service_checks_15min":0.0,"passive_service_checks_1min":0.0,"passive_service_checks_5min":0.0,"remote_check_queue":0.0,"uptime":18855.292195796966553}},"CheckResultReader":{"perfdata":[],"status":{"checkresultreader":{}}},"CheckerComponent":{"perfdata":[{"counter":false,"crit":null,"label":"checkercomponent_checker_idle","max":null,"min":null,"type":"PerfdataValue","unit":"","value":13.0,"warn":null},{"counter":false,"crit":null,"label":"checkercomponent_checker_pending","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null}],"status":{"checkercomponent":{"checker":{"idle":13.0,"pending":0.0}}}},"CompatLogger":{"perfdata":[],"status":{"compatlogger":{}}},"ElasticsearchWriter":{"perfdata":[],"status":{"elasticsearchwriter":{}}},"ExternalCommandListener":{"perfdata":[],"status":{"externalcommandlistener":{}}},"FileLogger":{"perfdata":[],"status":{"filelogger":{"main-log":1.0}}},"GelfWriter":{"perfdata":[],"status":{"gelfwriter":{}}},"GraphiteWriter":{"perfdata":[],"status":{"graphitewriter":{}}},"IcingaApplication":{"perfdata":[],"status":{"icingaapplication":{"app":{"enable_event_handlers":true,"enable_flapping":true,"enable_host_checks":true,"enable_notifications":true,"enable_perfdata":true,"enable_service_checks":true,"environment":"production","node_name":"alexanders-mbp.int.netways.de","pid":7700.0,"program_start":1531475256.183437109,"version":"v2.8.4-779-g45b3429fa"}}}},"InfluxdbWriter":{"perfdata":[],"status":{"influxdbwriter":{}}},"LivestatusListener":{"perfdata":[],"status":{"livestatuslistener":{}}},"NotificationComponent":{"perfdata":[],"status":{"notificationcomponent":{"notification":1.0}}},"OpenTsdbWriter":{"perfdata":[],"status":{"opentsdbwriter":{}}},"PerfdataWriter":{"perfdata":[],"status":{"perfdatawriter":{}}},"StatusDataWriter":{"perfdata":[],"status":{"statusdatawriter":{}}},"SyslogLogger":{"perfdata":[],"status":{"sysloglogger":{}}}}
    

    ... and the reader consumes that using this library.

    Wireshark shows nothing special, just these messages and some PINGs, but after a while the reader hits internal/proto/reader.go:106 with line being ...

    ype":"PerfdataValue","unit":"","value":0.0,"warn":null}],"status":{"checkercomponent":{"checker":{"idle":13.0,"pending":0.0}}}},"CompatLogger":{"perfdata":[],"status":{"compatlogger":{}}},"ElasticsearchWriter":{"perfdata":[],"status":{"elasticsearchwriter":{}}},"ExternalCommandListener":{"perfdata":[],"status":{"externalcommandlistener":{}}},"FileLogger":{"perfdata":[],"status":{"filelogger":{"main-log":1.0}}},"GelfWriter":{"perfdata":[],"status":{"gelfwriter":{}}},"GraphiteWriter":{"perfdata":[],"status":{"graphitewriter":{}}},"IcingaApplication":{"perfdata":[],"status":{"icingaapplication":{"app":{"enable_event_handlers":true,"enable_flapping":true,"enable_host_checks":true,"enable_notifications":true,"enable_perfdata":true,"enable_service_checks":true,"environment":"production","node_name":"CENSOREDCENSOREDCENSOREDCENSO","pid":7700.0,"program_start":1531475256.183437109,"version":"v2.8.4-779-g45b3429fa"}}}},"InfluxdbWriter":{"perfdata":[],"status":{"influxdbwriter":{}}},"LivestatusListener":{"perfdata":[],"status":{"livestatuslistener":{}}},"NotificationComponent":{"perfdata":[],"status":{"notificationcomponent":{"notification":1.0}}},"OpenTsdbWriter":{"perfdata":[],"status":{"opentsdbwriter":{}}},"PerfdataWriter":{"perfdata":[],"status":{"perfdatawriter":{}}},"StatusDataWriter":{"perfdata":[],"status":{"statusdatawriter":{}}},"SyslogLogger":{"perfdata":[],"status":{"sysloglogger":{}}}}
    
    opened by Al2Klimov 14
  • Return early when context signals done

    Return early when context signals done

    Hi! My app's logs are full of errors that look like this:

    redis: 2022/06/18 01:21:35 sentinel.go:587: sentinel: GetMasterAddrByName name="gprd-redis" failed: context canceled
    redis: 2022/06/18 01:21:35 sentinel.go:587: sentinel: GetMasterAddrByName name="gprd-redis" failed: context canceled
    redis: 2022/06/18 01:21:35 sentinel.go:587: sentinel: GetMasterAddrByName name="gprd-redis" failed: context canceled
    redis: 2022/06/18 01:21:35 sentinel.go:587: sentinel: GetMasterAddrByName name="gprd-redis" failed: context canceled
    <my app's message>: redis: all sentinels specified in configuration are unreachable
    

    Almost always there are several identical log lines with the same timestamp - one per configured sentinel. I looked at the code and noticed that when the context is done, the code keeps iterating and trying other sentinels. When it runs out of sentinels to try, it returns the final error and my app logs it.

    I'd like to improve the behavior and just return the error from the context in this case so that the app can recognize the situation and react accordingly. In my case "accordingly" means not sending the error to Sentry and not logging it, but instead just returning a proper response code to the client.
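
    The change essentially amounts to checking the context before trying the next sentinel, so the loop bails out once instead of logging a failure per remaining sentinel. A simplified sketch of the shape of the fix (getMasterAddrByName is a stand-in, not the real internal function):

    for _, addr := range sentinelAddrs {
        // return the context error as soon as the caller has given up
        if err := ctx.Err(); err != nil {
            return "", fmt.Errorf("redis: sentinel lookup aborted: %w", err)
        }
        masterAddr, err := getMasterAddrByName(ctx, addr, masterName)
        if err == nil {
            return masterAddr, nil
        }
        log.Printf("sentinel: GetMasterAddrByName addr=%s failed: %s", addr, err)
    }
    return "", errors.New("redis: all sentinels specified in configuration are unreachable")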

    WDYT?

    Thanks for an awesome library!

    P.S. While working on this PR it occurred to me that it's a bit weird to use the passed context to connect to the sentinel, etc. Wouldn't it be better to do it asynchronously and have a working sentinel/server/whatever connected and ready to go when a library method is called? I think this is how gRPC works - it manages a pool of healthy underlying TCP connections asynchronously and picks one when necessary. If none is available, it blocks until one becomes available OR the context is done. If the context is done, it returns early but still tries to establish a connection in the background.

    opened by ash2k 0
  • go-redis/v8 nil shard in the shards map

    go-redis/v8 nil shard in the shards map

    Expected Behavior

    no panic due to nil pointer dereference after retrieving a nil shard from the shards map

    Current Behavior

    There is no check that a shard is not nil before returning it.

    It might cause a panic due to a nil pointer dereference here.

    Possible Solution

    Check if shard is not nil before returning it.

    Steps to Reproduce

    We have proprietary code which wraps go-redis and sometimes recreates the Ring structure (and hence repopulates the shards map), all while holding the mutex. Despite this, sometimes after the ring is recreated we get a panic in the place linked above because the shard retrieved by name is nil.

    Possible Implementation

    func (c *ringShards) GetByName(shardName string) (*ringShard, error) {
    	if shardName == "" {
    		return c.Random()
    	}
    
    	c.mu.RLock()
    	shard := c.shards[shardName]
    	c.mu.RUnlock()
    
    	if shard == nil {
    		return nil, fmt.Errorf("a shard named %q is nil", shardName)
    	}
    
    	return shard, nil
    }
    
    opened by yexelm 2
  • SETNX a unique key the first time, but it returns false

    SETNX a unique key the first time, but it returns false

    The issue tracker is used for reporting bugs and discussing new features. Please use Stack Overflow for support questions.

    When I SETNX a unique key, it returns true in most cases but occasionally false, and I am sure it is the first time the command is executed for that key.

    Expected Behavior

    Current Behavior

    Possible Solution

    Steps to Reproduce

    Context (Environment)

    Detailed Description

    Possible Implementation

    opened by simanstar 0
  • PubSub goroutine blocked?

    PubSub goroutine blocked?

    Consider this code:

    func TestPubSub(t *testing.T) {
    	rdb := redisRaw.NewClient(&redisRaw.Options{
    		Addr:     "127.0.0.1:6379",
    		DB:       0,
    	})
    
    	wg := sync.WaitGroup{}
    	for i := 0; i < 10; i++ {
    		i := i
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			startTime := time.Now()
    			topic := fmt.Sprintf("test-topic:%d", i)
    
    			// subscribe
    			pubsub := rdb.Subscribe(context.Background(), topic)
    			defer pubsub.Close()
    
    			fmt.Printf("%s sub ======= \n", topic)
    
    			// publish
    			if err := rdb.Publish(context.Background(), topic, "ok").Err(); err != nil {
    				panic(err)
    			}
    
    			// receive payload
    			<-pubsub.Channel()
    			fmt.Printf("%s done ======= %s\n", topic, time.Since(startTime))
    		}()
    	}
    	wg.Wait()
    	fmt.Println("done") // never reach here.
    }
    

    Running the test, I get stuck:

    === RUN   TestPubSub
    test-topic:9 sub ======= 
    test-topic:0 sub ======= 
    test-topic:4 sub ======= 
    test-topic:5 sub ======= 
    test-topic:8 sub ======= 
    test-topic:7 sub ======= 
    test-topic:1 sub ======= 
    test-topic:3 sub ======= 
    test-topic:6 sub ======= 
    test-topic:2 sub ======= 
    test-topic:8 done ======= 10.484539ms
    test-topic:9 done ======= 10.735532ms
    

    It seems like <-pubsub.Channel() blocks even though a message has already been published.

    The test never finishes since wg.Wait() never completes.

    I'm using github.com/go-redis/redis/v8 v8.11.5; the Go version is go1.18.3 darwin/amd64.
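
    One thing worth checking (this is the pattern from the go-redis PubSub examples, not a confirmed diagnosis of this report): Subscribe returns before the server has necessarily acknowledged the subscription, so a Publish racing with it can be missed. Waiting for the confirmation first avoids that race:

    pubsub := rdb.Subscribe(context.Background(), topic)
    defer pubsub.Close()

    // wait for confirmation that the subscription is created before publishing
    if _, err := pubsub.Receive(context.Background()); err != nil {
        panic(err)
    }

    if err := rdb.Publish(context.Background(), topic, "ok").Err(); err != nil {
        panic(err)
    }

    msg := <-pubsub.Channel()
    fmt.Println(msg.Channel, msg.Payload)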

    opened by DarthPestilane 1
  • No results but also no error!

    No results but also no error!

    go-redis v8, Go version go1.17.1 darwin/amd64

    Expected Behavior

    If a command does not find a key/member, then it should return redis.Nil in the error part of the result.

    Current Behavior

    The command does not find a match and returns a nil value and a nil error.

    Possible Solution

    Steps to Reproduce

    When I print pos and err I get [<nil>] <nil>, which does not make sense. Since there are no results, the error should be redis.Nil.

    func (r *RedisClient) GetOne(ctx context.Context, item *Identifier) (*SearchResponse, error) {
    	pos, err := r.GeoPos(ctx, item.key(), item.Id).Result()
    
    	if err == redis.Nil {
    		return &SearchResponse{0, nil}, nil
    	} else if err != nil {
    		return nil, errors.New("failed to get point")
    	}
    	fmt.Println(pos, err) // [<nil>] <nil>
    	point := pos[0]
    	var objects []Object
    	objects = append(objects, Object{
    		Identifier: item,
    		Location: &Location{
    			Lat: point.Latitude,
    			Lng: point.Longitude,
    		},
    	})
    
    	return &SearchResponse{1, &objects}, nil
    
    }
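
    As a workaround for the behaviour described above (GEOPOS answers with a placeholder for a missing member instead of an error), the element itself can be checked for nil before use. A sketch against the snippet above:

    pos, err := r.GeoPos(ctx, item.key(), item.Id).Result()
    if err != nil {
        return nil, errors.New("failed to get point")
    }
    // a missing member comes back as a nil entry in the slice, not as redis.Nil
    if len(pos) == 0 || pos[0] == nil {
        return &SearchResponse{0, nil}, nil
    }
    point := pos[0]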
    
    bug v9 
    opened by SamiAlsubhi 5
  • Listen for context.Cancelled, try to cancel read if detected, still racy

    Listen for context.Cancelled, try to cancel read if detected, still racy

    This is somewhat of an 80% attempt at solving #2117 - as net.Conn doesn't have a cancellation context:

    • I've started a background goroutine in Conn.WithReader that tries to listen for that cancellation,
    • If cancellation has been caught, the net.Conn read is cancelled with a SetReadDeadline,
    • Maintain a context.Canceled value with error wrapping and an explicit early exit cancellation check,

    I'm still not entirely happy about this, as some error details get lost on cancellation (reader error returned). The function produces 3 error objects, which could be merged with go-uber/multierr to enable value checks with errors.Is.

    opened by titpetric 0
Releases (v9.0.0-beta.1)
  • v9.0.0-beta.1(Jun 4, 2022)

  • v8.11.5(Mar 17, 2022)

  • v8.11.4(Oct 4, 2021)

  • v8.11.2(Aug 6, 2021)

    Important changes:

    Revert #1824, because it has a significant impact on the connection pool (#1849). We will re-add this feature in v9.

    Users who are already using v8.11.1 need to upgrade immediately.

  • v8.11.1(Jul 29, 2021)

    Enhancement:

    • DBSize, ScriptLoad, ScriptFlush and ScriptExists now use hooks. (#1811)
    • Added a FIFO option to the connection pool; set Options.PoolFIFO to true to enable it. (#1820)
    • Connections are now checked before use; this increases CPU time by 5-10%. (#1824)
    • Check Failing() before serving random node. (#1825)

    Command:

    • RPOP command supports Count option (redis-server >= 6.2)
    • New cmd: GeoSearch, GeoSearchStore (redis-server >= 6.2)

    Thanks: @ktaekwon000 @hidu @AnatolyRugalev

  • v7.4.1(Jul 16, 2021)

  • v8.11.0(Jun 30, 2021)

    Change

    Remove OpenTelemetry metrics (linked: #1534, #1805).

    New Command

    1. XAutoClaim
    2. ZRangeStore
    3. ZUnion

    Commands with More Options

    1. XAdd: NoMkStream+MinID+Limit
    2. XTrim: MinID+Limit
    3. XGroup: CreateConsumer
    4. ZAdd: GT+LT
    5. ZRange: ByScore+ByLex+Rev+Limit

    New API

    1. XAutoClaim(ctx context.Context, a *XAutoClaimArgs) *XAutoClaimCmd
    2. XAutoClaimJustID(ctx context.Context, a *XAutoClaimArgs) *XAutoClaimJustIDCmd
    3. ZRangeStore(ctx context.Context, dst string, z ZRangeArgs) *IntCmd
    4. ZAddArgs(ctx context.Context, key string, args ZAddArgs) *IntCmd
    5. ZAddArgsIncr(ctx context.Context, key string, args ZAddArgs) *FloatCmd
    6. ZRangeArgs(ctx context.Context, z ZRangeArgs) *StringSliceCmd
    7. ZRangeArgsWithScores(ctx context.Context, z ZRangeArgs) *ZSliceCmd
    8. ZUnion(ctx context.Context, store ZStore) *StringSliceCmd
    9. ZUnionWithScores(ctx context.Context, store ZStore) *ZSliceCmd
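
    A quick sketch of the new ZRangeArgs-style call (field names as of v8.11; double-check against the docs for your version):

    // ZRANGE zset (1 (5 BYSCORE LIMIT 0 10
    vals, err := rdb.ZRangeArgs(ctx, redis.ZRangeArgs{
        Key:     "zset",
        Start:   "(1",
        Stop:    "(5",
        ByScore: true,
        Offset:  0,
        Count:   10,
    }).Result()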

    Marked deprecated (will be removed in v9)

    1. ZAddCh
    2. ZIncr
    3. ZAddNXCh
    4. ZAddXXCh
    5. ZIncrNX
    6. ZIncrXX
    7. XTrim
    8. XTrimApprox
    9. XAddArgs.MaxLenApprox

    Remarks

    There is a bug in the xtrim/xadd limit option (https://github.com/redis/redis/issues/9046)

  • v8.10.0(Jun 3, 2021)
