Redis-shake is a tool for synchronizing data between two Redis databases, designed to cover highly flexible synchronization and migration needs.

Overview

RedisShake is mainly used to synchronize data from one Redis instance to another.
Thanks to Douyu's WSD team for their support.



Redis-shake is developed and maintained by the NoSQL team of the Alibaba Cloud Database department.
Redis-shake builds on redis-port, with bug fixes, performance improvements, and feature enhancements.

Main Functions


The type can be one of the following:

  • decode: decode a dumped payload to a human-readable format (hex encoding).
  • restore: restore an RDB file to the target redis.
  • dump: dump an RDB file from the source redis.
  • sync: sync data from the source redis to the target redis using the sync or psync command, covering both full and incremental synchronization.
  • rump: sync data from the source redis to the target redis using the scan command; only full synchronization is supported. RedisShake can also fetch data for keys given in an input file when the scan command is not supported on the source side. This mode is typically used when the sync and psync commands are unavailable.
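For orientation, the mode is selected with the -type flag. The invocations below are a sketch following the defaults in this repository (binary and config paths may differ in your setup), not verified commands:

```shell
# Each mode is chosen with -type; all read the same configuration file.
./bin/redis-shake -type=decode  -conf=conf/redis-shake.conf   # dumped payload -> readable hex
./bin/redis-shake -type=dump    -conf=conf/redis-shake.conf   # source redis  -> RDB file
./bin/redis-shake -type=restore -conf=conf/redis-shake.conf   # RDB file      -> target redis
./bin/redis-shake -type=sync    -conf=conf/redis-shake.conf   # full + incremental via sync/psync
./bin/redis-shake -type=rump    -conf=conf/redis-shake.conf   # full only, via scan
```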

Please check out conf/redis-shake.conf for a detailed description of the parameters.

Support


Redis versions from 2.x to 5.0 are supported, covering Standalone, Cluster, and some proxy types such as Codis, Twemproxy, Aliyun Cluster Proxy, Tencent Cloud Proxy, and so on.
For Codis and Twemproxy there may be some constraints; please check out this question.

Configuration

Redis-shake has several parameters in the configuration file (conf/redis-shake.conf) that may be confusing. If this is your first time using it, please visit this tutorial.

Verification


Users can use RedisFullCheck to verify correctness.

Metric


Redis-shake offers metrics through a RESTful API and the log file.

  • restful api: curl 127.0.0.1:9320/metric.
  • log: metric info is printed in the log periodically if enabled.
  • inner routine heap: curl http://127.0.0.1:9310/debug/pprof/goroutine?debug=2

Redis Type


Both the source and target types can be standalone, open-source cluster, or proxy. Although proxy architectures differ between vendors, we still support different cloud vendors such as Alibaba Cloud, Tencent Cloud, and so on.
If the target is an open-source redis cluster, redis-shake uses the redis-go-cluster driver to write data. When the target type is proxy, redis-shake writes data in a round-robin way.
If the source is a redis cluster, redis-shake launches multiple goroutines to pull data in parallel. Users can use rdb.parallel to control the RDB syncing concurrency.
The "move slot" operations must be disabled on the source side.
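To illustrate the round-robin writing described above, here is a minimal sketch (not RedisShake's actual Go implementation; the proxy addresses are hypothetical) of how successive writes could be spread across a fixed endpoint list:

```shell
# Simplified round-robin selection over a space-separated endpoint list.
# Addresses are hypothetical placeholders.
ENDPOINTS="proxy1:6379 proxy2:6379 proxy3:6379"
COUNT=$(echo $ENDPOINTS | wc -w)

# next_endpoint N: return the endpoint for the N-th write (zero-based).
next_endpoint() {
    idx=$(( $1 % COUNT + 1 ))
    echo $ENDPOINTS | cut -d' ' -f$idx
}

next_endpoint 0   # first write goes to proxy1:6379
next_endpoint 3   # wraps around back to proxy1:6379
```

Writes cycle through the endpoints in order, so load spreads evenly across the proxies without any coordination.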

Code branch rules


Version rules: a.b.c.

  • a: major version
  • b: minor version; an even number means a stable version.
  • c: bugfix version
Branch name rules:

  • master: the main stable branch; pushing code directly is not allowed. It stores the latest stable version, and the develop branch is merged into it once a new version is created.
  • develop (main branch): the development branch; all the branches below fork from it.
  • feature-*: new feature branches, forked from develop and merged back after development, testing, and code review are finished.
  • bugfix-*: bugfix branches, forked from develop and merged back after development, testing, and code review are finished.
  • improve-*: improvement branches, forked from develop and merged back after development, testing, and code review are finished.

Tag rules:
A tag is added when releasing: "release-v{version}-{date}", for example "release-v1.0.2-20180628".
Users can use -version to print the version.
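The tag and version conventions above can be expressed as a tiny sketch (the version and date values here are purely illustrative):

```shell
# Build a release tag following "release-v{version}-{date}".
version="1.0.2"
date_str="20180628"
tag="release-v${version}-${date_str}"
echo "$tag"        # release-v1.0.2-20180628

# Per the version rules, an even minor version (b in a.b.c) marks a stable release.
minor=$(echo "$version" | cut -d. -f2)
if [ $(( minor % 2 )) -eq 0 ]; then
    stability="stable"
else
    stability="unstable"
fi
echo "$stability"  # stable
```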

Usage


You can directly download the binary from the release package and use the start.sh script to start it: ./start.sh redis-shake.conf sync.
You can also build redis-shake yourself according to the following steps; go and govendor must be installed before compiling:

  • git clone https://github.com/alibaba/RedisShake.git
  • cd RedisShake
  • export GOPATH=`pwd`
  • cd src/vendor
  • govendor sync # please note: govendor must be installed first to pull all dependencies: go get -u github.com/kardianos/govendor
  • cd ../../ && ./build.sh
  • ./bin/redis-shake -type=$(type_must_be_sync_dump_restore_decode_or_rump) -conf=conf/redis-shake.conf # please note: modify conf/redis-shake.conf first to match your needs.

Shake series tool


We also provide some other tools for synchronization in the Shake series.

In addition, we have a DingDing(钉钉) group where users can join and discuss; please scan the QR code.

Thanks


Username    Mail
ceshihao    [email protected]
wangyiyang  [email protected]
muicoder    [email protected]
zhklcf      [email protected]
shuff1e     [email protected]
xuhualin    [email protected]