DTM is a cross-language distributed transaction manager that provides high-performance, easy-to-use distributed transactions for all kinds of microservice architectures

Overview

A lightweight distributed transaction management service

DTM is a cross-language distributed transaction manager that provides high-performance, easy-to-use distributed transactions for all kinds of microservice architectures.

Features

Cross-language

Language-agnostic: any service that exposes an HTTP interface, in any language, can connect to DTM to manage distributed transactions. Go, Python, PHP, Node.js, and Ruby are supported.
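
To illustrate the language-agnostic HTTP interface, here is a rough Go sketch that submits a SAGA to the dtm server over plain HTTP; any language able to send this request can do the same. The endpoint path and JSON field names follow dtm's documented HTTP API, and the business URLs are placeholders, so verify both against the dtm version you run:

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// the branch URLs below are placeholders for your own business service
	body := []byte(`{
	  "gid": "my-unique-global-id-001",
	  "trans_type": "saga",
	  "steps": [
	    {"action": "http://localhost:8081/api/busi/TransOut", "compensate": "http://localhost:8081/api/busi/TransOutCompensate"},
	    {"action": "http://localhost:8081/api/busi/TransIn", "compensate": "http://localhost:8081/api/busi/TransInCompensate"}
	  ],
	  "payloads": ["{\"amount\": 30}", "{\"amount\": 30}"]
	}`)
	resp, err := http.Post("http://localhost:36789/api/dtmsvr/submit", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("dtm replied with status:", resp.Status)
}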

Support for multiple distributed transaction patterns

  • TCC: Try-Confirm-Cancel (see the sketch below)
  • SAGA: a sequence of sub-transactions, each paired with a compensating action
  • Reliable messages (transactional messages)
  • XA: requires the underlying database to support XA
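
A sketch of what the TCC pattern looks like from the client side, using the standalone dtmcli module; the import path, URLs, and the Try/Confirm/Cancel endpoints are illustrative and may differ from the API version shown in the quick start below:

package main

import (
	"github.com/dtm-labs/dtmcli"
	"github.com/go-resty/resty/v2"
)

func main() {
	dtmServer := "http://localhost:36789/api/dtmsvr"
	busi := "http://localhost:8081/api/busi" // placeholder business service

	gid := dtmcli.MustGenGid(dtmServer)
	// TccGlobalTransaction opens the global transaction; each CallBranch inside
	// the callback registers one Try/Confirm/Cancel triple and invokes Try.
	err := dtmcli.TccGlobalTransaction(dtmServer, gid, func(tcc *dtmcli.Tcc) (*resty.Response, error) {
		req := map[string]interface{}{"amount": 30}
		resp, err := tcc.CallBranch(req, busi+"/TransOutTry", busi+"/TransOutConfirm", busi+"/TransOutCancel")
		if err != nil {
			return resp, err
		}
		return tcc.CallBranch(req, busi+"/TransInTry", busi+"/TransInConfirm", busi+"/TransInCancel")
	})
	if err != nil {
		panic(err) // if any Try fails, dtm calls the registered Cancel branches
	}
}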

High availability

Backed by a database, so it is easy to cluster and to scale horizontally.

Quick start

Install

go get github.com/yedf/dtm

dtm depends on MySQL.

Use an existing MySQL instance:

cp conf.sample.yml conf.yml # then edit conf.yml

Or start MySQL via Docker:

docker-compose -f compose.mysql.yml up

Start the server and run the SAGA example

go run app/main.go

Getting started

Usage

gid := common.GenGid() // generate a global transaction id
req := &gin.H{"amount": 30} // payload for the microservices
// create a dtm saga object
saga := dtm.SagaNew(DtmServer, gid).
  // add two sub-transactions
  Add(startBusi+"/TransOut", startBusi+"/TransOutCompensate", req).
  Add(startBusi+"/TransIn", startBusi+"/TransInCompensate", req)
// commit the saga transaction
err := saga.Commit()

Complete example

See examples/quick_start.go
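
For reference, a self-contained version of the quick-start flow written against the standalone dtmcli module; note that method names differ across versions (SagaNew/Commit in the snippet above versus NewSaga/Submit here), and the business URL is a placeholder:

package main

import (
	"fmt"

	"github.com/dtm-labs/dtmcli"
)

func main() {
	dtmServer := "http://localhost:36789/api/dtmsvr"
	busi := "http://localhost:8081/api/busi" // placeholder business service

	gid := dtmcli.MustGenGid(dtmServer)         // ask the dtm server for a new global transaction id
	req := map[string]interface{}{"amount": 30} // payload sent to each branch

	saga := dtmcli.NewSaga(dtmServer, gid).
		// each Add registers a forward action together with its compensation
		Add(busi+"/TransOut", busi+"/TransOutCompensate", req).
		Add(busi+"/TransIn", busi+"/TransInCompensate", req)

	if err := saga.Submit(); err != nil {
		panic(err)
	}
	fmt.Println("saga submitted, gid:", gid)
}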

Community group

Add yedf2008 as a friend (or scan the QR code), reply with dtm for verification, and follow the instructions to join the group.

yedf2008

Comments
  • Clarification on workflow.Interceptor

    Is it safe to use a grpc client for which I have installed a workflow.Interceptor also outside a workflow definition? Or will all calls to the client be intercepted and cached by the interceptor?

    opened by ostafen 16
  • SAGA transaction enters endless compensation retries (one compensation retried about 80 times per second); what is the cause?

    Problem: a SAGA transaction enters endless compensation retries; one compensation is retried roughly 80 times per second. What is the cause? The transaction:

    • /goods.service.v1.Goods/StockDeduct deducts goods stock, /goods.service.v1.Goods/StockDeductRevert compensates the stock deduction
    • /order.service.v1.Order/Create creates the order, /order.service.v1.Order/CreateRevert compensates the order creation

    JWT auth was added on the server side of these branches, but dtm does not send the corresponding auth header token. In this situation neither the stock deduction nor its compensation can succeed, and dtm ends up retrying the stock-deduction compensation indefinitely.

    The documentation says retries should enter an exponential-backoff state, but in practice they do not. What could be the reason? (documentation reference)
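
    As background, a minimal sketch of the behaviour being asked about: with dtm's gRPC protocol, an ordinary error returned by a branch is treated as temporary and retried, whereas a status such as codes.Aborted is the conventional way to report a permanent failure so that the SAGA compensates instead of retrying (the exact status mapping may vary by dtm version). The request/response types and the auth helper below are placeholders:

    package main

    import (
    	"context"
    	"errors"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // hypothetical request/response types standing in for the real protobuf messages
    type stockDeductReq struct{ GoodsID, Num int64 }
    type stockDeductResp struct{}

    // authenticate is a placeholder for the JWT check mentioned above
    func authenticate(ctx context.Context) error { return errors.New("missing auth token") }

    // StockDeduct sketches a branch handler: returning a plain error is retried,
    // while returning codes.Aborted tells dtm the branch has failed permanently
    func StockDeduct(ctx context.Context, in *stockDeductReq) (*stockDeductResp, error) {
    	if err := authenticate(ctx); err != nil {
    		return nil, status.Error(codes.Aborted, "auth failed: abort this branch")
    	}
    	// ... deduct stock here ...
    	return &stockDeductResp{}, nil
    }

    func main() {}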

    opened by mowocc 15
  • Seeking advice for usage of dtm

    I built a service that connects to multiple MySQL databases and has logic that starts an XA transaction across them. From a distributed-transaction point of view, should I set up a gRPC server for this logic and use DTM to control the XA transaction, or should I just let my service act as the TM and control the XA transaction itself?

    Looking forward to suggestions!

    opened by SgtDaJim 15
  • Customize ErrFailure message

    Currently, if our application needs to stop the ongoing workflow, it must return either an http/grpc failure error or an ErrFailure. However, if we are not calling a remote endpoint, the only available alternative is the ErrFailure error. Is there a way to create custom errors (with a custom message as well) which are treated by dtm as ErrFailure?
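
    A generic Go sketch of one way to attach a custom message while still matching a sentinel error, using fmt.Errorf with %w; whether dtm recognizes a wrapped error depends on how it compares errors (errors.Is versus direct equality), so this needs to be verified against the dtm version in use. ErrFailure below is a stand-in for dtm's sentinel:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // stand-in for dtm's sentinel failure error (e.g. dtmcli.ErrFailure)
    var ErrFailure = errors.New("FAILURE")

    // reserveStock returns a failure that carries a custom message and still
    // matches the sentinel via errors.Is
    func reserveStock(requested int) error {
    	if requested > 10 {
    		return fmt.Errorf("only 10 items left, requested %d: %w", requested, ErrFailure)
    	}
    	return nil
    }

    func main() {
    	err := reserveStock(30)
    	fmt.Println(errors.Is(err, ErrFailure)) // true
    	fmt.Println(err)                        // only 10 items left, requested 30: FAILURE
    }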

    opened by ostafen 11
  • Windows build: http://localhost:36789 cannot be opened

    Configured Redis and the port, but it still shows the original defaults (screenshots omitted). The config file:

    #####################################################################
    # dtm can be run without any config.
    # all config in this file is optional. the default value is as specified in each line
    # all configs can be specified from env. for example:
    #   MicroService.EndPoint => MICRO_SERVICE_END_POINT
    #####################################################################
    Store: # specify which engine to store trans status
      Driver: 'mysql'
      Host: 'localhost'
      User: 'root'
      Password: ''
      Port: 3306
      Db: 'dtm'
      # Driver: 'boltdb' # default store engine
      # Driver: 'redis'
      # Host: 'localhost'
      # User: ''
      # Password: ''
      # Port: 6379
      # Driver: 'postgres'
      # Host: 'localhost'
      # User: 'postgres'
      # Password: 'mysecretpassword'
      # Port: '5432'
      # following config is only for Driver postgres/mysql
      MaxOpenConns: 500
      MaxIdleConns: 500
      ConnMaxLifeTime: 5 # default value is 5 (minutes)
      # following config is only for some drivers
      DataExpire: 604800 # Trans data will expire in 7 days. only for redis/boltdb.
      FinishedDataExpire: 86400 # finished Trans data will expire in 1 day. only for redis.
      RedisPrefix: '{a}' # default value is '{a}'. Redis storage prefix. store data to only one slot in cluster
    MicroService: # grpc based microservice config
      Driver: 'dtm-driver-gozero' # name of the driver to handle register/discover
      Target: 'etcd://localhost:2379/dtmservice' # register dtm server to this url
      EndPoint: 'localhost:36790'
    HttpMicroService: # http based microservice config
      Driver: 'dtm-driver-http' # name of the driver to handle register/discover
      RegistryType: 'nacos'
      RegistryAddress: '127.0.0.1:8848,127.0.0.1:8848'
      RegistryOptions: '{"UserName":"nacos","Password":"nacos","NotLoadCacheAtStart":true}'
      Target: '{"ServiceName":"dtmService","Enable":true,"Healthy":true,"Weight":10}' # target and options
      EndPoint: '127.0.0.1:36789'
    # the unit of the following configurations is second
    TransCronInterval: 3 # the interval to poll unfinished global transaction for every dtm process
    TimeoutToFail: 35 # timeout for XA, TCC to fail. saga's timeout defaults to infinite, which can be overwritten in saga options
    RetryInterval: 10 # the subtrans branch will be retried after this interval
    RequestTimeout: 3 # the timeout of HTTP/gRPC request in dtm
    LogLevel: 'info' # default: info. can be debug|info|warn|error
    Log:
      Outputs: 'stderr' # default: stderr, split by ",", you can append files to Outputs if needed. example: 'stderr,/tmp/test.log'
      RotationEnable: 0 # default: 0
      RotationConfigJSON: '{}' # example: '{"maxsize": 100, "maxage": 0, "maxbackups": 0, "localtime": false, "compress": false}'
    HttpPort: 36798
    GrpcPort: 36799
    JsonRpcPort: 36791
    # advanced options
    UpdateBranchAsyncGoroutineNum: 1 # num of async goroutine to update branch status

    opened by vrockn 9
  • Calling the dtm server via gRPC from Postman crashes the dtm server

    dtm version: 1.14.4, deployed with docker. dtm env:

    STORE_DRIVER: mysql
    STORE_HOST: xxxxx.com
    STORE_USER: root
    STORE_PASSWORD: 'pwd'
    STORE_PORT: 3306
    
    MICRO_SERVICE_DRIVER: dtm-driver-gozero
    MICRO_SERVICE_TARGET: "consul://host.docker.internal:8500/dtm-server"
    MICRO_SERVICE_END_POINT: "host.docker.internal:36790"
    

    postman version: v9.24.2. gRPC request:
    method: Prepare, message: auto-generated by Postman

        "BinPayloads": [
            "u1NVblZ8IRIrJN==",
            "jaUYjyap9Mhf2h1A3Zbs1WRv",
            "iENGtAdeW4VHxlMcrjqIFHiDaUR"
        ],
        "CustomedData": "consectetur non esse sit",
        "Gid": "incididunt",
        "QueryPrepared": "aliquip eu",
        "ReqExtra": {},
        "RollbackReason": "est esse ut",
        "Steps": "amet proident nisi",
        "TransOptions": {
            "PassthroughHeaders": [
                "magna proident fugiat exercitation",
                "adipisicing voluptate ullamco",
                "amet ut dolor culpa qui"
            ],
            "RequestTimeout": "18686",
            "RetryInterval": "1313",
            "TimeoutToFail": "862415057",
            "WaitResult": false
        },
        "TransType": "in adipisicing ex fugiat aliquip"
    }
    

    dtm error output:

    panic: invalid character 'a' looking for beginning of value
    
    goroutine 67 [running]:
    github.com/dtm-labs/dtm/dtmcli/dtmimp.E2P(...)
            /app/dtm/dtmcli/dtmimp/utils.go:62
    github.com/dtm-labs/dtm/dtmcli/dtmimp.MustUnmarshal(...)
            /app/dtm/dtmcli/dtmimp/utils.go:122
    github.com/dtm-labs/dtm/dtmcli/dtmimp.MustUnmarshalString(...)
            /app/dtm/dtmcli/dtmimp/utils.go:127
    github.com/dtm-labs/dtm/dtmsvr.TransFromDtmRequest(0x2781578, 0xc00017bf80, 0xc0003f3d60, 0x20bc3c0)
            /app/dtm/dtmsvr/trans_class.go:113 +0x469
    github.com/dtm-labs/dtm/dtmsvr.(*dtmServer).Prepare(0x369ad80, 0x2781578, 0xc00017bf80, 0xc0003f3d60, 0x369ad80, 0x0, 0x0)
            /app/dtm/dtmsvr/api_grpc.go:33 +0x45
    github.com/dtm-labs/dtm/dtmgrpc/dtmgpb._Dtm_Prepare_Handler.func1(0x2781578, 0xc00017bf80, 0x218b660, 0xc0003f3d60, 0x2, 0x2, 0x1f1fa00, 0xc0009e7988)
            /app/dtm/dtmgrpc/dtmgpb/dtmgimp_grpc.pb.go:179 +0x89
    github.com/dtm-labs/dtm/dtmgrpc/dtmgimp.GrpcServerLog(0x2781578, 0xc00017bf80, 0x218b660, 0xc0003f3d60, 0xc00041d600, 0xc000a7eb28, 0x49ba06, 0x62cf8585, 0x1c06170d, 0x1c9266e5f0a)
            /app/dtm/dtmgrpc/dtmgimp/types.go:27 +0x1d6
    google.golang.org/grpc.chainUnaryInterceptors.func1.1(0x2781578, 0xc00017bf80, 0x218b660, 0xc0003f3d60, 0x203000, 0x0, 0x0, 0x7f2be3976698)
            /go/pkg/mod/google.golang.org/[email protected]/server.go:1117 +0x8c
    github.com/dtm-labs/dtm/dtmsvr.grpcMetrics(0x2781578, 0xc00017bf80, 0x218b660, 0xc0003f3d60, 0xc00041d600, 0xc00053b300, 0x0, 0x0, 0x0, 0x0)
            /app/dtm/dtmsvr/metrics.go:86 +0x182
    google.golang.org/grpc.chainUnaryInterceptors.func1.1(0x2781578, 0xc00017bf80, 0x218b660, 0xc0003f3d60, 0xc0009e7b48, 0x40e398, 0x18, 0x2054a20)
            /go/pkg/mod/google.golang.org/[email protected]/server.go:1120 +0x11c
    google.golang.org/grpc.chainUnaryInterceptors.func1(0x2781578, 0xc00017bf80, 0x218b660, 0xc0003f3d60, 0xc00041d600, 0xc000a7eb28, 0xc000a6abb8, 0x51d766, 0x214e1a0, 0xc00017bf80)
            /go/pkg/mod/google.golang.org/[email protected]/server.go:1122 +0xeb
    github.com/dtm-labs/dtm/dtmgrpc/dtmgpb._Dtm_Prepare_Handler(0x20bc3c0, 0x369ad80, 0x2781578, 0xc00017bf80, 0xc00012d080, 0xc000468400, 0x2781578, 0xc00017bf80, 0xc0001fe5a0, 0x118)
            /app/dtm/dtmgrpc/dtmgpb/dtmgimp_grpc.pb.go:181 +0x150
    google.golang.org/grpc.(*Server).processUnaryRPC(0xc0009cc000, 0x279b5b8, 0xc000601520, 0xc000160480, 0xc000766a20, 0x364bf30, 0x0, 0x0, 0x0)
            /go/pkg/mod/google.golang.org/[email protected]/server.go:1283 +0x544
    google.golang.org/grpc.(*Server).handleStream(0xc0009cc000, 0x279b5b8, 0xc000601520, 0xc000160480, 0x0)
            /go/pkg/mod/google.golang.org/[email protected]/server.go:1620 +0xd0c
    google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000334f40, 0xc0009cc000, 0x279b5b8, 0xc000601520, 0xc000160480)
            /go/pkg/mod/google.golang.org/[email protected]/server.go:922 +0xab
    created by google.golang.org/grpc.(*Server).serveStreams.func1
            /go/pkg/mod/google.golang.org/[email protected]/server.go:920 +0x1fd
    
    opened by imythu 8
  • Can multiple communication protocols be mixed?

    I skimmed some of the examples: when configuring a group of transactions you only specify one DTM server, and the communication protocol of the sub-transactions (HTTP, gRPC, or JSON-RPC) is distinguished by its URL (and, in addition, a different client library is used for each). In practice, though, I do not really care how I communicate with the DTM server; what I care about is the communication protocol of each sub-transaction. My ideal usage would be to specify the protocol for each sub-transaction when configuring the transaction, so that multiple protocols can be mixed.

    opened by ostatsu 8
  • Problem when setting config.Store values inside helm chart

    I was trying to deploy dtm in my kubernetes cluster using the helm chart provided in the repo. However, when I set the following configuration key for pointing to a postgres instance:

    # Default values for dtm.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
    
    # DTM configuration. Specify content for config.yaml
    # ref: https://github.com/dtm-labs/dtm/blob/main/conf.sample.yml
    configuration: |-
     Store: # specify which engine to store trans status
      Driver: 'postgres' # default store engine
      Host: 'someurl'
      User: 'someuser'
      Password: 'somepassword'
      Port: 5432
      Db: 'dtm'
      Schema: 'public'
    

    I get the following error:

    2022/11/28 15:43:14 fatal error: yaml: unmarshal errors:
    line 7: field Db not found in type config.Store
    line 8: field Schema not found in type config.Store
    

    Values seem exactly the ones provided in the sample config.yml file. Any idea on how to fix this?

    opened by ostafen 7
  • Query saga state through dtmcli

    Hi, everyone. As the title suggests, my question is: is there a way to query a saga state through the dtm client? The documentation section for WaitResult says: "There are many possible cases, and it is better for the client to query the status of the global transaction through the query interface of dtm." Can you provide a link to the documentation for the mentioned query API?
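
    As background, recent dtm versions expose an HTTP query endpoint that returns a global transaction and its branches by gid; the path and parameter below should be double-checked against the deployed dtm version. A rough Go sketch:

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	gid := "your-global-transaction-id" // placeholder gid
    	resp, err := http.Get("http://localhost:36789/api/dtmsvr/query?gid=" + gid)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(string(body)) // JSON describing the global transaction and its branch statuses
    }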

    opened by ostafen 7
  • Saga state

    Hi, all! First of all, thank you for the excellent library! I was wondering whether this framework supports persisting the saga state in a database to recover from failures.

    Thank you

    opened by ostafen 6
  • Question about the consul service

    Docker runs inside a VM (IP 192.168.38.128) where the consul and dtm services are set up; both services are up and reachable. The gRPC server runs locally at 192.168.0.112:8001 and is definitely reachable.

    Config-center configuration:

    MicroService:
      Driver: 'dtm-driver-gozero'
      Target: 'consul://192.168.38.128:2379/dtmservice'
      EndPoint: '192.168.38.128:36790'

    The code:

    var dtmServer = "consul://192.168.38.128:2379/dtmservice"

    orderRpcBusiServer, err := l.svcCtx.Config.UserRpcConf.BuildTarget()
    if err != nil {
    	// no error is reported here
    	return nil, fmt.Errorf("order creation timed out")
    }
    // orderRpcBusiServer = consul://192.168.38.128:8500/user.rpc, this service is definitely available
    fmt.Println(orderRpcBusiServer)

    gid := dtmgrpc.MustGenGid(dtmServer)
    saga := dtmgrpc.NewSagaGrpc(dtmServer, gid).
    	// also tried:
    	// consul://192.168.38.128:8500/user.rpc
    	// 192.168.38.128:8500
    	Add(orderRpcBusiServer+"/usercenter/QueryUser", orderRpcBusiServer+"/usercenter/QueryUser", &grpc_.UniversalUserRequest{
    		Pattern: 1,
    		Data:    nil,
    	})
    err = saga.Submit()
    dtmimp.FatalIfError(err)

    The final error:

    {"@timestamp":"2022-06-18T22:00:36.758+08:00","caller":"handler/loghandler.go:174","content":"[HTTP] 503 - POST - /order/quickCreate 127.0.0.1:2510 - PostmanRuntime/7.26.2 - slowcall(slowcall(3000.7ms))","duration":"3000.7ms","level":"slow","span":"8c7601ab80ff29ff","trace":"1de87c93118a14f1ec8cc806c7506795"} {"@timestamp":"2022-06-18T22:00:36.758+08:00","caller":"handler/loghandler.go:199","content":"[HTTP] 503 - POST /order/quickCreate - 127.0.0.1:2510 - PostmanRuntime/7.26.2\nPOST /order/quickCreate HTTP/1.1\r\nHost: 127.0.0.1:8889\r\nAccept: /\r\nAccept-Encoding: gzip, deflate, br\r\nConnection: keep-alive\r\nContent-Length: 55\r\nContent-Type: application/json\r\nPostman-Token: 79c6e169-cc47-4ccf-b244-e6f6b041d165\r\nUser-Agent: PostmanRuntime/7.26.2\r\n\r\n{\r\n "userId": 1,\r\n "goodsId": 1,\r\n "num": 1\r\n}","duration":"3000.7ms","level":"error","span":"8c7601ab80ff29ff","trace":"1de87c93118a14f1ec8cc806c7506795"} {"@timestamp":"2022-06-18T22:00:54.764+08:00","caller":"[email protected]/resolver.go:68","content":"[Consul resolver] Couldn't fetch endpoints. target={service='dtmservice' healthy='false' tag=''}; error={Get "http://192.168.38.128:2379/v1/health/service/dtmservice?near=_agent": dial tcp 192.168.38.128:2379: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.}","level":"error"}

    //这个时候我以为是端口错误, consul://192.168.38.128:8500/dtmservice 更换为这个端口,都更换了 报错 {"level":"error","ts":"2022-06-18T22:03:23.036+0800","caller":"dtmgimp/types.go:46","msg":"grpc client called: consul://192.168.38.128:8500/dtmservice/dtmgimp.Dtm/NewGid {} result: {} err: rpc error: code = Unavailable desc = name resolver error: produced zero addresses","stacktrace":"github.com/dtm-labs/dtmgrpc/dtmgimp.GrpcClientLog\n\tH:/z包/Golang/go1.18.1/pkg/mod/github.com/dtm-labs/[email protected]/dtmgimp/types.go:46\ngoogle.golang.org/grpc.(*ClientConn).Invoke\n\tH:/z包/Golang/go1.18.1/pkg/mod/google.golang.org/[email protected]/call.go:35\ngithub.com/dtm-labs/dtmgrpc/dtmgpb.(*dtmClient).NewGid\n\tH:/z包/Golang/go1.18.1/pkg/mod/github.com/dtm-labs/[email protected]/dtmgpb/dtmgimp_grpc.pb.go:43\ngithub.com/dtm-labs/dtmgrpc.MustGenGid\n\tH:/z包/Golang/go1.18.1/pkg/mod/github.com/dtm-labs/[email protected]/type.go:51\nawesomeProject9/internal/logic.(*CreateLogic).Create\n\tC:/Users/z/go/src/awesomeProject9/internal/logic/createLogic.go:44\nawesomeProject9/internal/handler.createHandler.func1\n\tC:/Users/z/go/src/awesomeProject9/internal/handler/createHandler.go:22\nnet/http.HandlerFunc.ServeHTTP\n\tH:/z包/Golang/go1.18.1/go/src/net/http/server.go:2084\ngithub.com/zeromicro/go-zero/rest/handler.GunzipHandler.func1\n\tH:/z包/Golang/go1.18.1/pkg/mod/github.com/zeromicro/[email protected]/rest/handler/gunziphandler.go:26\nnet/http.HandlerFunc.ServeHTTP\n\tH:/z包/Golang/go1.18.1/go/src/net/http/server.go:2084\ngithub.com/zeromicro/go-zero/rest/handler.MaxBytesHandler.func2.1\n\tH:/z包/Golang/go1.18.1/pkg/mod/github.com/zeromicro/[email protected]/rest/handler/maxbyteshandler.go:24\nnet/http.HandlerFunc.ServeHTTP\n\tH:/z包/Golang/go1.18.1/go/src/net/http/server.go:2084\ngithub.com/zeromicro/go-zero/rest/handler.MetricHandler.func1.1\n\tH:/z包/Golang/go1.18.1/pkg/mod/github.com/zeromicro/[email protected]/rest/handler/metrichandler.go:21\nnet/http.HandlerFunc.ServeHTTP\n\tH:/z包/Golang/go1.18.1/go/src/net/http/server.go:2084\ngithub.com/zeromicro/go-zero/rest/handler.RecoverHandler.func1\n\tH:/z包/Golang/go1.18.1/pkg/mod/github.com/zeromicro/[email protected]/rest/handler/recoverhandler.go:21\nnet/http.HandlerFunc.ServeHTTP\n\tH:/z包/Golang/go1.18.1/go/src/net/http/server.go:2084\ngithub.com/zeromicro/go-zero/rest/handler.(*timeoutHandler).ServeHTTP.func1\n\tH:/z包/Golang/go1.18.1/pkg/mod/github.com/zeromicro/[email protected]/rest/handler/timeouthandler.go:79"}

    dtm server error:

    {"level":"info","ts":"2022-06-18T14:04:25.458Z","caller":"dtmsvr/trans_status.go:27","msg":"TouchCronTime for: {"ID":0,"create_time":"2022-06-18T08:18:28.28577398Z","update_time":"2022-06-18T14:04:25.457628306Z","gid":"gJTkRAxRbomhKBccRqUcxM","trans_type":"tcc","status":"submitted","protocol":"grpc","next_cron_interval":20,"next_cron_time":"2022-06-18T14:04:45.457628Z","wait_result":true}"} {"level":"error","ts":"2022-06-18T14:04:25.458Z","caller":"dtmsvr/trans_process.go:52","msg":"processInner got error: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup host.docker.internal on 192.168.38.2:53: no such host"","stacktrace":"github.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).processInner.func1\n\t/app/dtm/dtmsvr/trans_process.go:52\ngithub.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).processInner\n\t/app/dtm/dtmsvr/trans_process.go:63\ngithub.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).process\n\t/app/dtm/dtmsvr/trans_process.go:38\ngithub.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).Process\n\t/app/dtm/dtmsvr/trans_process.go:20\ngithub.com/dtm-labs/dtm/dtmsvr.CronTransOnce\n\t/app/dtm/dtmsvr/cron.go:35\ngithub.com/dtm-labs/dtm/dtmsvr.CronExpiredTrans\n\t/app/dtm/dtmsvr/cron.go:42"} {"level":"info","ts":"2022-06-18T14:04:25.459Z","caller":"dtmsvr/cron.go:54","msg":"cron job return a trans: {"ID":0,"create_time":"2022-06-18T10:57:47.975316619Z","update_time":"2022-06-18T14:04:13.858789044Z","gid":"tCPN73GUWBSohXjEuQmpzW","trans_type":"msg","steps":[{"action":"127.0.0.1:8001/usercenter/QueryUser"}],"status":"submitted","protocol":"grpc","next_cron_interval":10,"next_cron_time":"2022-06-18T14:04:35.458458466Z","wait_result":true}"} {"level":"error","ts":"2022-06-18T14:04:25.459Z","caller":"dtmgimp/types.go:46","msg":"grpc client called: 127.0.0.1:8001/usercenter/QueryUser "IAEqensicGFzc1dvcmQiOiIxMjM0NTYiLCJtb2JpbGUiOjEzNTY4NjU0NDIsIndlQ2hhdFRva2VuIjoiIiwibW9kZWwiOiJwNTAiLCJicmFuZCI6IuWNjuS4uiIsImlkZW50aWZpY2F0aW9uQ29kZSI6Ijk4NzY1NDMyMSJ9" result: "" err: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:8001: connect: connection refused"","stacktrace":"github.com/dtm-labs/dtm/dtmgrpc/dtmgimp.GrpcClientLog\n\t/app/dtm/dtmgrpc/dtmgimp/types.go:46\ngithub.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryClient.func1.1.1\n\t/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:72\ngithub.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryClient.func1\n\t/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:81\ngoogle.golang.org/grpc.(*ClientConn).Invoke\n\t/go/pkg/mod/google.golang.org/[email protected]/call.go:35\ngithub.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).getURLResult\n\t/app/dtm/dtmsvr/trans_status.go:90\ngithub.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).getBranchResult\n\t/app/dtm/dtmsvr/trans_status.go:123\ngithub.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).execBranch\n\t/app/dtm/dtmsvr/trans_status.go:138\ngithub.com/dtm-labs/dtm/dtmsvr.(*transMsgProcessor).ProcessOnce\n\t/app/dtm/dtmsvr/trans_type_msg.go:69\ngithub.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).processInner\n\t/app/dtm/dtmsvr/trans_process.go:62\ngithub.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).process\n\t/app/dtm/dtmsvr/trans_process.go:38\ngithub.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).Process\n\t/app/dtm/dtmsvr/trans_process.go:20\ngithub.com/dtm-labs/dtm/dtmsvr.CronTransOnce\n\t/app/dtm/dtmsvr/cron.go:35\ngithub.com/dtm-labs/dtm/dtmsvr.CronExpiredTrans\n\t/app/dtm/dtmsvr/cron.go:42"} 
{"level":"info","ts":"2022-06-18T14:04:25.459Z","caller":"dtmsvr/trans_status.go:27","msg":"TouchCronTime for: {"ID":0,"create_time":"2022-06-18T10:57:47.975316619Z","update_time":"2022-06-18T14:04:25.459222335Z","gid":"tCPN73GUWBSohXjEuQmpzW","trans_type":"msg","steps":[{"action":"127.0.0.1:8001/usercenter/QueryUser"}],"status":"submitted","protocol":"grpc","next_cron_interval":20,"next_cron_time":"2022-06-18T14:04:45.459222025Z","wait_result":true}"} {"level":"error","ts":"2022-06-18T14:04:25.459Z","caller":"dtmsvr/trans_process.go:52","msg":"processInner got error: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:8001: connect: connection refused"","stacktrace":"github.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).processInner.func1\n\t/app/dtm/dtmsvr/trans_process.go:52\ngithub.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).processInner\n\t/app/dtm/dtmsvr/trans_process.go:63\ngithub.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).process\n\t/app/dtm/dtmsvr/trans_process.go:38\ngithub.com/dtm-labs/dtm/dtmsvr.(*TransGlobal).Process\n\t/app/dtm/dtmsvr/trans_process.go:20\ngithub.com/dtm-labs/dtm/dtmsvr.CronTransOnce\n\t/app/dtm/dtmsvr/cron.go:35\ngithub.com/dtm-labs/dtm/dtmsvr.CronExpiredTrans\n\t/app/dtm/dtmsvr/cron.go:42"} 这里有个很奇怪的事情 127.0.0.1:8001,但没有地方上传 grpc 服务对应的地址是192.168.0.112:8001

    opened by 309791679 6
  • Initialization of dtm logger

    Hi, I'm experiencing an issue with dtm logging initialization. Basically, whenever dtm is imported, the logger is initialized as follows:

    func init() {
         logger.Init(os.Getenv("LOG_LEVEL"))
    }
    

    Now, the LOG_LEVEL env var is already being used by other packages in my case (logrus) to initialize other logging middleware. Since the value I'm using is TRACE, my code panics inside dtm, because it says that TRACE is not a valid value for the dtm logger. Can we fix this piece of code so that it does not panic when the LOG_LEVEL var contains an invalid value? For example, something like:

    func init() {
         level := os.Getenv("LOG_LEVEL")
         if !isValidDtmLogLevel(level) {
             level = aDefaultLevelValue
         }
         logger.Init(level)
    }
    

    Or we could also handle this inside the logger.Init() method. What do you think?

    opened by ostafen 5
  • Feature request: express parallelism with a more intuitive Concurrent

    The current way to write concurrency is shown below, using the numbers 0, 1, 2, ... to express ordering dependencies:

    	saga := dtmcli.NewSaga(DtmServer, shortuuid.New()).
    		Add(Busi+"/CanRollback1", Busi+"/CanRollback1Revert", req).
    		Add(Busi+"/CanRollback2", Busi+"/CanRollback2Revert", req).
    		Add(Busi+"/UnRollback1", "", req).
    		Add(Busi+"/UnRollback2", "", req).
    		EnableConcurrent().
    		AddBranchOrder(2, []int{0, 1}). // step 2 must run after steps 0 and 1 have completed
    		AddBranchOrder(3, []int{0, 1}) // step 3 must run after steps 0 and 1 have completed
    

    It would be nice to support an expression like the following, using a more intuitive Concurrent to express parallelism:

    	saga := dtmcli.NewSaga(DtmServer, shortuuid.New()).
    		Concurrent(
    			Add(Busi+"/CanRollback1", Busi+"/CanRollback1Revert", req),
    			Add(Busi+"/CanRollback2", Busi+"/CanRollback2Revert", req)
    		).
    		Add(Busi+"/UnRollback1", "", req).
    		Add(Busi+"/UnRollback2", "", req)
    
    opened by speedoops 1
Releases(v1.16.7)