MySQL replication topology management and HA



orchestrator [Documentation]


orchestrator is a MySQL high availability and replication management tool. It runs as a service and provides command-line access, an HTTP API, and a web interface. orchestrator supports:


Discovery

orchestrator actively crawls through your topologies and maps them. It reads basic MySQL info such as replication status and configuration.

It provides you with slick visualization of your topologies, including replication problems, even in the face of failures.


Refactoring

orchestrator understands replication rules. It knows about binlog file:position, GTID, Pseudo-GTID, and Binlog Servers.

Refactoring replication topologies can be a matter of dragging and dropping a replica under another master. Moving replicas around is safe: orchestrator will reject an illegal refactoring attempt.

Fine-grained control is achieved by various command line options.
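As an illustrative sketch only (hostnames are placeholders, and this assumes an orchestrator-client configured against a running orchestrator service), the same refactoring is available from the command line:

```shell
# Relocate a single replica under another instance. orchestrator picks
# the best available method (GTID, Pseudo-GTID, binlog servers or plain
# binlog coordinates) and rejects illegal moves.
orchestrator-client -c relocate -i replica3.example.com:3306 -d replica1.example.com:3306

# Relocate all replicas of an instance under another instance at once.
orchestrator-client -c relocate-replicas -i old-master.example.com:3306 -d new-master.example.com:3306
```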


Recovery

orchestrator uses a holistic approach to detect master and intermediate master failures. Based on information gained from the topology itself, it recognizes a variety of failure scenarios.

It is configurable: it may choose to perform automated recovery, or allow the user to choose the type of manual recovery. Intermediate master recovery is handled internally by orchestrator; master failover is supported by pre/post failure hooks.

The recovery process utilizes orchestrator's understanding of the topology and its ability to perform refactoring. It is based on state rather than configuration: orchestrator picks the best recovery method by investigating and evaluating the topology at the time of recovery itself.
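As a hedged illustration (not taken from this document; the cluster filter values and hook commands below are placeholders), automated recovery is typically switched on with configuration keys along these lines:

```json
{
  "RecoverMasterClusterFilters": ["*"],
  "RecoverIntermediateMasterClusterFilters": ["*"],
  "ApplyMySQLPromotionAfterMasterFailover": true,
  "PreFailoverProcesses": [
    "echo 'Will recover from {failureType} on {failureCluster}' >> /tmp/recovery.log"
  ],
  "PostFailoverProcesses": [
    "echo 'Recovered from {failureType} on {failureCluster}. Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log"
  ]
}
```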

The interface

orchestrator supports:

  • Command line interface (love your debug messages, take control of automated scripting)
  • Web API (HTTP GET access)
  • Web interface, a slick one.
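The web API is plain HTTP GET; for instance (host, port, and cluster alias below are placeholders for illustration):

```shell
# List the clusters orchestrator knows about.
curl -s http://your.orchestrator.service:3000/api/clusters

# Fetch the full topology of one cluster, as JSON.
curl -s http://your.orchestrator.service:3000/api/cluster/alias/mycluster
```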


Additional perks

  • Highly available
  • Controlled master takeovers
  • Manual failovers
  • Failover auditing
  • Audited operations
  • Pseudo-GTID
  • Datacenter/physical location awareness
  • MySQL-Pool association
  • HTTP security/authentication methods
  • There is also an orchestrator-mysql Google groups forum to discuss topics related to orchestrator
  • More...

Read the Orchestrator documentation

Authored by Shlomi Noach.

Related projects


Get started developing Orchestrator by reading the developer docs. Thanks for your interest!


orchestrator is free and open sourced under the Apache 2.0 license.

  • Orchestrator promotes a replica with lag on Mariadb


    We have a 3-node MariaDB 10.5.10 setup on CentOS: 1 primary and 2 replicas, with semi-sync enabled. Our current orchestrator version is 3.2.4.

    We had a scenario where the replicas were lagging by a few hours and the master was not reachable, so one of the replicas was promoted to primary in spite of the huge lag. This resulted in data loss. Ideally, orchestrator should wait for the replica's relay logs to be applied on the replica and only then promote it to master. Based on my testing, this seems to be the behavior on MySQL, but not on MariaDB.

    Test case: tests against MySQL and MariaDB are done with these orchestrator parameters in /etc/orchestrator.conf.json:

    "DelayMasterPromotionIfSQLThreadNotUpToDate": true, "debug": true

    Restart orchestrator on all 3 nodes.

    I) Test on MariaDB: start a 3-node MariaDB cluster (semi-sync enabled).

    1. Create and add data to a test table:

        create table test (colA int, colB int, colC datetime, colD int);
        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);
        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);
        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);
        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);
        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);
        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);
        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);

    2. Stop slave SQL_THREAD on the replicas (nodes 2 and 3).

    3. Wait a few seconds and add some more data on node 1 (the master):

        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);
        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);
        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);
        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);
        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);
        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);
        insert into test values (rand()*100,rand()*1000,now(),rand()*10000);

    4. Stop mysqld on the master (node 1).

    5. You will see orchestrator promoting a replica without the data added in step 3.

    II) Test on MySQL 5.7.32: repeat the same test on a 3-node MySQL cluster. You will notice orchestrator promoting one of the replicas without any data loss, i.e. all 14 rows are present!

    Thank You Mohan

    opened by mohankara 46
  • EnforceSemiSyncReplicas & RecoverLockedSemiSyncMaster - actively enable/disable semi-sync replicas to match master's wait count


    This is a WIP PR that attempts to address

    There are tons of open questions and things missing, but this is the idea.

    Open questions (all answered in

    1. ~EnableSemiSync also manages the master flag. do we really want that? should we not have an EnableSemiSyncReplica?~

    2. ~Should there be two modes: EnforceSemiSyncReplicas: exact|enough (exact would handle MasterWithTooManySemiSyncReplicas and LockedSemiSyncMaster, and enough would only handle LockedSemiSyncMaster)?~

    3. ~LockedSemiSyncMasterHypothesis waits ReasonableReplicationLagSeconds. I'd like there to be another variable to control the wait time. This seems like it's overloaded.~


    • [x] properly succeed failover; currently it kinda retries even though it succeeded, not sure why
    • [x] discuss downtime behavior with shlomi
    • [x] possibly implement MasterWithIncorrectSemiSyncReplicas, see PoC:
    • [x] when a replica is downtimed but replication is enabled, MasterWithTooManySemiSyncReplicas does not behave correctly
    • [x] MaybeEnableSemiSyncReplica does not manage the master flag though it previously did (in the new logic only)
    • [x] excludeNotReplicatingReplicas should be a specific instance, not all non-replicating instances!
    • [x] re-test old logic
    • [x] handle master failover semi-sync enable/disable
    • [x] semi-sync replica priority (or come up with better concept)
    • [x] enabled RecoverLockedSemiSyncMaster without exact mode
    • [x] perform sanity checks in checkAndRecover* functions BEFORE enabling/disabling replicas
    • [x] add ReasonableLockedSemiSyncSeconds with fallback to ReasonableReplicationLagSeconds
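    For context, the flags discussed in this PR would sit in the configuration file next to the existing semi-sync settings. A sketch using the names from the discussion above (the final merged names and semantics may differ):

    ```json
    {
      "EnforceExactSemiSyncReplicas": true,
      "RecoverLockedSemiSyncMaster": true,
      "ReasonableLockedSemiSyncSeconds": 10
    }
    ```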
    opened by binwiederhier 30
  • GTID not found properly (5.7) and some graceful-master-takeover issues



    I am testing orchestrator with 5.7.17: a master and two slaves. I moved one of the slaves to change the topology to A-B-C, and then executed orchestrator -c graceful-master-takeover -alias myclusteralias

    The issues found are:

    1. GTID appears as disabled on the master; the web interface shows the button to enable it, even though it is obviously enabled across the whole replication chain (GTID_MODE=ON). Slaves are shown with GTID enabled.
    2. This issue (I guess) causes the takeover not to use GTID.
    3. Instance B was in read-only before the takeover, and after the takeover read-only is not disabled. Is this a feature, or something I should handle via hooks? It would be nice to have a parameter to end the process in whichever state you prefer, depending on the takeover reasons/conditions.
    4. Also, for some reason the role change old master -> new slave doesn't work. It executes a CHANGE MASTER, but apparently the replication username on the old master is empty, so the change master operation fails (the orchestrator user has SELECT ON mysql.slave_master_info in the cluster).
    5. Finally, it would be nice to have a feature that forces refactoring of the topology when you have one master and several slaves below it: moving the slaves below the newly elected master just before the master takeover. The process would take a bit longer, moving the slaves and waiting until they are ready.

    Thanks for this amazing tool! Regards, Eduardo

    opened by ecortestws 30
  • hook for graceful master switch


    I have been running some graceful master takeover tests using ProxySQL and Orchestrator together, and I believe it would be a good idea to have a hook that is triggered even earlier than PreFailoverProcesses. The issue with PreFailoverProcesses is that it is triggered after the demoted master has already been placed in read_only mode by Orchestrator, as shown by this extract from the log:

    Mar 03 14:25:10 mysql3 orchestrator[25032]: [martini] Started GET /api/graceful-master-takeover/mysql1/3306 for
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Will demote mysql1:3306 and promote mysql2:3306 instead
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Stopped slave on mysql2:3306, Self:mysql-bin.000009:3034573, Exec:mysql-bin.000010:18546301
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Will set mysql1:3306 as read_only
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO instance mysql1:3306 read_only: true
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO auditType:read-only instance:mysql1:3306 cluster:mysql1:3306 message:set as true
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Will advance mysql2:3306 to master coordinates mysql-bin.000010:18546301
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Will start slave on mysql2:3306 until coordinates: mysql-bin.000010:18546301
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Stopped slave on mysql2:3306, Self:mysql-bin.000009:3034573, Exec:mysql-bin.000010:18546301
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO executeCheckAndRecoverFunction: proceeding with DeadMaster detection on mysql1:3306; isActionable?: true; skipProcesses: false
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: detected DeadMaster failure on mysql1:3306
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: Running 1 OnFailureDetectionProcesses hooks
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2055: write-recovery-step
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: Running OnFailureDetectionProcesses hook 1 of 1: echo 'Detected DeadMaster on mysql1:3306. Affected replicas: 1' >> /tmp/recovery.log
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2056: write-recovery-step
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO CommandRun(echo 'Detected DeadMaster on mysql1:3306. Affected replicas: 1' >> /tmp/recovery.log,[])
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO CommandRun/running: bash /tmp/orchestrator-process-cmd-358000144
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO CommandRun successful. exit status 0
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: Completed OnFailureDetectionProcesses hook 1 of 1 in 4.556463ms
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2057: write-recovery-step
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Completed OnFailureDetectionProcesses hook 1 of 1 in 4.556463ms
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: done running OnFailureDetectionProcesses hooks
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2058: write-recovery-step
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2059: register-failure-detection
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO executeCheckAndRecoverFunction: proceeding with DeadMaster recovery on mysql1:3306; isRecoverable?: true; skipProcesses: false
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2060: write-recovery
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: will handle DeadMaster event on mysql1:3306
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2061: write-recovery-step
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO auditType:recover-dead-master instance:mysql1:3306 cluster:mysql1:3306 message:problem found; will recover
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: Running 1 PreFailoverProcesses hooks

    For the ProxySQL use case, this returns errors to the application as soon as the host is set in read_only mode. I would like to use the proposed hook to have ProxySQL set the old master to offline_soft, giving active connections a chance to finish their work and minimizing the errors returned to the application.

    opened by igroene 29
  • MySQL can't recover with GTID


    @shlomi-noach: I'm trying to test orchestrator's recovery and I'm having some problems. When I stop the MySQL master, which has GTID enabled, the slave can't be promoted to master. This content appears in the recovery log: All errors: PseudoGTIDPattern not configured; cannot use Pseudo-GTID. I found this in the documentation: "At this time recovery requires either GTID (Oracle or MariaDB), Pseudo GTID or Binlog Servers." Does it mean that GTID is only useful for Oracle or MariaDB, and not MySQL?

    By the way, this is my first time using raft; do I need to do something extra to start it? I have created and edited /etc/profile.d/ according to the document, but I don't know how to start raft. Commands like "orchestrator-client -c which-api" work, but "orchestrator-client -c raft-leader" fails with "raft-state: not running with raft setup". What can I do about this?


    opened by Xinglao4 26
  • Credentials not set after master takeover


    This issue was introduced between 3.2.3 (working) and 3.2.5 (broken).

    Setup: The replication credentials are stored in a metadata table on the database server, Orchestrator knows about these with ReplicationCredentialsQuery.

    In 3.2.3 - issue graceful-master-takeover and orchestrator automatically connects the old master to the new one.

    In 3.2.5 - after graceful-master-takeover the old master is set up as a slave, but the username is missing (possibly the password too).

    Last IO error | "Fatal error: Invalid (empty) username when attempting to connect to the master server. Connection attempt terminated."
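    For reference, such a setup stores credentials in a metadata table and points orchestrator at it; a sketch (the table and column names here are placeholders, not taken from this report):

    ```json
    {
      "ReplicationCredentialsQuery": "SELECT repl_user, repl_password FROM meta.replication_credentials LIMIT 1"
    }
    ```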
    opened by d-rupp 25
  • 3.0.2 fails to run on RHEL 6.7


    Hi, I was happily running 2.1.5 on RHEL 6.7, but when I install the new 3.0.2 version using rpm I get the following error:

    [[email protected] orchestrator]# cat /etc/system-release
    Red Hat Enterprise Linux Server release 6.7 (Santiago)
    [[email protected] orchestrator]# /usr/local/orchestrator/orchestrator --debug http &
    [1] 1908
    [[email protected] orchestrator]# /usr/local/orchestrator/orchestrator: /lib64/ version `GLIBC_2.14' not found (required by /usr/local/orchestrator/orchestrator)
    [1]+  Exit 1                  /usr/local/orchestrator/orchestrator --debug http

    Is this version meant to be run on RHEL 7?

    opened by deejaykim 23
  • refactor(go mod): migrate to go modules


    During the migration process, some code was updated directly in the vendor folder, causing inconsistencies with the upstream code repository. I moved this part of the code to the migrate folder.

    fixes #1355

    Signed-off-by: cndoit18 [email protected]

    opened by cndoit18 22
  • Slaves lagging by couple of hours are elected as master by orchestrator


    We are seeing instances where slaves with a couple of hours of lag are elected as masters. Is there any configuration to prevent that from happening?


    opened by adkhare 20
  • remote error: tls: bad certificate



    How can I debug the "remote error: tls: bad certificate" below? It's not clear to me which part of orchestrator has TLS problems.


    cat /var/lib/orchestrator/orchestrator-sqlite.conf.json
        "Debug": true,
        "EnableSyslog": false,
        "ListenAddress": ":3000",
        "AutoPseudoGTID": true,
        "RaftEnabled": true,
        "RaftDataDir": "/var/lib/orchestrator",
        "RaftBind": "",
        "RaftNodes": ["", "", ""] ,
        "BackendDB": "sqlite",
        "SQLite3DataFile": "/var/lib/orchestrator/data/orchestrator.sqlite3",
        "MySQLTopologyCredentialsConfigFile": "/var/lib/orchestrator/orchestrator-topology.cnf",
        "InstancePollSeconds": 5,
        "DiscoverByShowSlaveHosts": false,
        "FailureDetectionPeriodBlockMinutes": 60,
        "UseSSL": true,
        "SSLPrivateKeyFile": "/var/lib/orchestrator/pki/mysql-001.livesystem.at_privatekey.pem",
        "SSLCertFile": "/var/lib/orchestrator/pki/mysql-001.livesystem.at_cert.pem",
        "SSLCAFile": "/var/lib/orchestrator/pki/ca_cert.pem",
        "SSLSkipVerify": false,

    debug output

    [email protected]:~# cd /usr/local/orchestrator && orchestrator --debug --config=/var/lib/orchestrator/orchestrator-sqlite.conf.json --stack http
    2019-05-07 10:14:49 INFO starting orchestrator, version: 3.0.14, git commit: f4c69ad05010518da784ce61865e65f0d9e0081c
    2019-05-07 10:14:49 INFO Read config: /var/lib/orchestrator/orchestrator-sqlite.conf.json
    2019-05-07 10:14:49 DEBUG Parsed topology credentials from /var/lib/orchestrator/orchestrator-topology.cnf
    2019-05-07 10:14:49 DEBUG Connected to orchestrator backend: sqlite on /var/lib/orchestrator/data/orchestrator.sqlite3
    2019-05-07 10:14:49 DEBUG Initializing orchestrator
    2019-05-07 10:14:49 DEBUG Migrating database schema
    2019-05-07 10:14:49 DEBUG Migrated database schema to version [3.0.14]
    2019-05-07 10:14:49 INFO Connecting to backend :3306: maxConnections: 128, maxIdleConns: 32
    2019-05-07 10:14:49 INFO Starting Discovery
    2019-05-07 10:14:49 INFO Registering endpoints
    2019-05-07 10:14:49 INFO continuous discovery: setting up
    2019-05-07 10:14:49 DEBUG Setting up raft
    2019-05-07 10:14:49 DEBUG Queue.startMonitoring(DEFAULT)
    2019-05-07 10:14:49 INFO Starting HTTPS listener
    2019-05-07 10:14:49 INFO Read in CA file: /var/lib/orchestrator/pki/ca_cert.pem
    2019-05-07 10:14:49 DEBUG raft: advertise=
    2019-05-07 10:14:49 DEBUG raft: transport=&{connPool:map[] connPoolLock:{state:0 sema:0} consumeCh:0xc42008b500 heartbeatFn:<nil> heartbeatFnLock:{state:0 sema:0} logger:0xc420911400 maxPool:3 shutdown:false shutdownCh:0xc42008b560 shutdownLock:{state:0 sema:0} stream:0xc42026b9a0 timeout:10000000000 TimeoutScale:262144}
    2019-05-07 10:14:49 DEBUG raft: peers=[]
    2019-05-07 10:14:49 DEBUG raft: logStore=&{dataDir:/var/lib/orchestrator backend:<nil>}
    2019-05-07 10:14:50 INFO raft: store initialized at /var/lib/orchestrator/raft_store.db
    2019-05-07 10:14:50 INFO new raft created
    2019/05/07 10:14:50 [INFO] raft: Node at [Follower] entering Follower state (Leader: "")
    2019-05-07 10:14:50 INFO continuous discovery: starting
    2019-05-07 10:14:50 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019/05/07 10:14:51 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2019/05/07 10:14:51 [INFO] raft: Node at [Candidate] entering Candidate state
    2019/05/07 10:14:51 [ERR] raft: Failed to make RequestVote RPC to dial tcp connect: connection refused
    2019/05/07 10:14:51 [ERR] raft: Failed to make RequestVote RPC to dial tcp connect: connection refused
    2019/05/07 10:14:51 [DEBUG] raft: Votes needed: 2
    2019/05/07 10:14:51 [DEBUG] raft: Vote granted from Tally: 1
    2019/05/07 10:14:53 [WARN] raft: Election timeout reached, restarting election
    2019/05/07 10:14:53 [INFO] raft: Node at [Candidate] entering Candidate state
    2019/05/07 10:14:53 [ERR] raft: Failed to make RequestVote RPC to dial tcp connect: connection refused
    2019/05/07 10:14:53 [DEBUG] raft: Votes needed: 2
    2019/05/07 10:14:53 [DEBUG] raft: Vote granted from Tally: 1
    2019/05/07 10:14:53 [DEBUG] raft: Vote granted from Tally: 2
    2019/05/07 10:14:53 [INFO] raft: Election won. Tally: 2
    2019/05/07 10:14:53 [INFO] raft: Node at [Leader] entering Leader state
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to dial tcp connect: connection refused
    2019/05/07 10:14:53 [INFO] raft: pipelining replication to peer
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to dial tcp connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to dial tcp connect: connection refused
    2019/05/07 10:14:53 [DEBUG] raft: Node updated peer set (2): []
    2019-05-07 10:14:53 DEBUG orchestrator/raft: applying command 2: leader-uri
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to dial tcp connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to dial tcp connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to dial tcp connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to dial tcp connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to dial tcp connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to dial tcp connect: connection refused
    2019/05/07 10:14:53 [WARN] raft: Failed to contact in 508.458369ms
    2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to dial tcp connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to dial tcp connect: connection refused
    2019-05-07 10:14:53 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to dial tcp connect: connection refused
    2019/05/07 10:14:54 [ERR] raft: Failed to heartbeat to dial tcp connect: connection refused
    2019/05/07 10:14:54 [WARN] raft: Failed to contact in 998.108974ms
    2019/05/07 10:14:54 [ERR] raft: Failed to AppendEntries to dial tcp connect: connection refused
    2019/05/07 10:14:54 [ERR] raft: Failed to heartbeat to dial tcp connect: connection refused
    2019/05/07 10:14:54 [WARN] raft: Failed to contact in 1.450057377s
    2019-05-07 10:14:54 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019/05/07 10:14:54 [INFO] raft: pipelining replication to peer
    2019-05-07 10:14:55 DEBUG raft leader is (this host); state: Leader
    2019-05-07 10:14:55 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:14:56 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:14:57 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:14:58 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:14:59 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:15:00 DEBUG raft leader is (this host); state: Leader
    2019-05-07 10:15:00 DEBUG orchestrator/raft: applying command 3: request-health-report
    2019/05/07 10:15:00 http: TLS handshake error from remote error: tls: bad certificate
    2019/05/07 10:15:00 http: TLS handshake error from remote error: tls: bad certificate
    2019/05/07 10:15:00 http: TLS handshake error from remote error: tls: bad certificate
    2019-05-07 10:15:00 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:15:01 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:15:02 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:15:03 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:15:05 DEBUG raft leader is (this host); state: Leader
    2019-05-07 10:15:10 DEBUG raft leader is (this host); state: Leader
    2019-05-07 10:15:10 DEBUG orchestrator/raft: applying command 4: request-health-report
    2019/05/07 10:15:10 http: TLS handshake error from remote error: tls: bad certificate
    2019/05/07 10:15:10 http: TLS handshake error from remote error: tls: bad certificate
    2019/05/07 10:15:10 http: TLS handshake error from remote error: tls: bad certificate
    2019-05-07 10:15:15 DEBUG raft leader is (this host); state: Leader
    opened by git001 18
  • Help me


    I'm a bit lost installing Orchestrator. Can I install an instance of it on each slave server? Because if it runs on the master instance and that goes down, who will do the failover?

    opened by JuanLND 0
  • bug report,Cluster alias update failed after master failover


    If this is a bug report, please provide a test case and the error output. Useful information:

    • your orchestrator.conf.json config file/contents: 1. Except for the account configuration, other parameters remain unchanged. (raft: false)
    "HostnameResolveMethod": "none",
    "MySQLHostnameResolveMethod": "@@report_host",
    "RecoverMasterClusterFilters": [
    • your topology (e.g. run orchestrator-client -c topology -alias my-cluster)   [0s,ok,8.0.20-11,rw,ROW,>>,GTID]
    + [0s,ok,8.0.20-11,rw,ROW,>>,GTID]
    + [0s,ok,8.0.20-11,rw,ROW,>>,GTID]
    • what did you do? 1. First, I set the cluster alias (cluster_name:, alias: eeo) via the web API.
      2. After auto master failover, the cluster architecture changed to
    [email protected] bin % ./orchestrator-client -c topology -alias   [0s,ok,8.0.20-11,rw,ROW,>>,GTID]
    + [0s,ok,8.0.20-11,rw,ROW,>>,GTID]
    >  new cluster:(cluster_name:, alias:,alias is cluster_name.
    [email protected] bin % ./orchestrator-client -c topology -alias eeo [unknown,invalid,8.0.20-11,rw,ROW,>>,GTID,downtimed]
    >  failed cluster:(cluster_name:, alias: eeo),alias is eeo.
    • what did you expect to happen? 1. The cluster structure I would like is shown below
    >  new cluster:(cluster_name:, alias: eeo),alias is eeo.
    >  failed cluster:(cluster_name:, alias:,alias is
    • what happened? 1. Since I have aliased the cluster, the code follows this logic:
    if alias := analysisEntry.ClusterDetails.ClusterAlias; alias != "" {
           inst.SetClusterAlias(promotedReplica.Key.StringCode(), alias)
    }

    2. When the above function is executed, the new cluster alias should be eeo.

    new cluster:(cluster_name:, alias: eeo)

    3. During master failover, 'go inst.UpdateClusterAliases()' executes this SQL every 5 seconds:

    replace into cluster_alias (alias, cluster_name, last_registered)
        select cluster_name as alias, cluster_name, now()
        from database_instance
        group by cluster_name
        having sum(suggested_cluster_alias = '') = count(*)

    value: ('', '', now())

    4. The cluster alias was replaced because of the unique constraint on the table. The end result was not what we expected.

    (cluster_name:, alias: eeo)   was replaced to (cluster_name:, alias:
    CREATE TABLE `cluster_alias` (
      `cluster_name` varchar(128) CHARACTER SET ascii COLLATE ascii_general_ci NOT NULL,
      `alias` varchar(128) NOT NULL,
      `last_registered` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
      PRIMARY KEY (`cluster_name`),
      UNIQUE KEY `alias_uidx` (`alias`),
      KEY `last_registered_idx` (`last_registered`)
    • How to fix?
    1. topology_recover.go
    if alias := analysisEntry.ClusterDetails.ClusterAlias; alias != "" {
           + inst.SetClusterNameByAliasOverride(after, alias)
           inst.SetClusterAlias(promotedReplica.Key.StringCode(), alias)
    }
    2. cluster_alias.go
    func SetClusterNameByAliasOverride(newClusterName string, alias string) error {
    	return updateClusterNameByAliasOverride(newClusterName, alias)
    }
    3. cluster_alias_dao.go
    func updateClusterNameByAliasOverride(newClusterName string, alias string) error {
    	writeFunc := func() error {
    		_, err := db.ExecOrchestrator(`
    			update cluster_alias_override set cluster_name = ? where alias=?
    			`, newClusterName, alias)
    		return log.Errore(err)
    	}
    	return ExecDBWriteFunc(writeFunc)
    }

    A unique constraint is recommended for the alias field of cluster_alias_override.

    opened by jxs-2022 0
  • Start replica with credentials


    Hello to all,

    Before asking our question, here is a quick description of our topology. In our environment we have 2 master servers, seen as co-masters by Orchestrator. Behind each master we have several replica servers dedicated to read queries. All servers run Percona Server for MySQL 8.0.27. Orchestrator can connect to all servers and offers a good view of our topology, especially when we have replication lag, a stopped replication process, etc. When we do maintenance on one of our masters, we usually use Orchestrator to move all replicas from one master to the other in one single drag & drop action (we love this feature). We can use this feature because we set up the replication process with credentials, like CHANGE REPLICATION SOURCE TO SOURCE_USER=user1, SOURCE_PASSWORD=password1, ..., so that all replication info is stored in the mysql.slave_master_info table. But such a query generates this kind of message in the error log: [Warning] [MY-010897] [Repl] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.

    To improve our security, we decided to remove the replication credentials from CHANGE REPLICATION SOURCE (so they aren't stored in the system table) and to pass them in the replication start command instead: START REPLICA USER=user1 PASSWORD=password1. By doing this, we can no longer move replicas by drag & drop in Orchestrator (probably because by default Orchestrator runs a plain START REPLICA). So is there a way to tell Orchestrator to use the USER and PASSWORD options with the START REPLICA statement?
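    The two approaches described above can be sketched as follows (host, user, and password are placeholders):

    ```sql
    -- Approach 1: credentials persisted in mysql.slave_master_info
    -- (triggers warning MY-010897 in the error log)
    CHANGE REPLICATION SOURCE TO
      SOURCE_HOST = 'master.example.com',
      SOURCE_USER = 'user1',
      SOURCE_PASSWORD = 'password1';
    START REPLICA;

    -- Approach 2: credentials supplied only at start time, never persisted
    CHANGE REPLICATION SOURCE TO SOURCE_HOST = 'master.example.com';
    START REPLICA USER = 'user1' PASSWORD = 'password1';
    ```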

    opened by tfulcrand 0
  • Unable to determine why it's throwing a super-read-only error


    I am seeing these errors from orchestrator. Can you please help me figure out what is causing them? I have granted the orchestrator user the SUPER privilege and am still seeing this issue.

    Jun  4 09:33:25 k8s-worker3 orchestrator: 2022-06-04 09:33:25 ERROR Error 1290: The MySQL server is running with the --super-read-only option so it cannot execute this statement
    mysql> show grants for orchestrator@'10.%';
    | Grants for orchestrator@10.% |
    | GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE ON *.* TO `orchestrator`@`10.%` |
    2 rows in set (0.00 sec)
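    Worth noting: super_read_only rejects writes even from accounts holding SUPER, so the grant above does not by itself explain away Error 1290. A quick check on the instance in question (a sketch, not part of the original report):

    ```sql
    -- super_read_only = ON blocks writes even for SUPER users;
    -- that is exactly the Error 1290 shown in the log above.
    SELECT @@global.read_only, @@global.super_read_only;

    -- Only if this instance is actually supposed to accept writes
    -- (e.g. the current master) should the flag be cleared:
    SET GLOBAL super_read_only = OFF;
    ```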


    opened by phanimullapudi 0
  • JQuery 1.2 Multiple XSS vulnerabilities

    JQuery 1.2 Multiple XSS vulnerabilities

    I see that the version of jQuery packaged in orchestrator is 1.2. Our security scans flagged it as insecure, with multiple cross-site scripting vulnerabilities. My request is to upgrade the bundled jQuery to at least 3.5.0 in order to remove these vulnerabilities.

    opened by rlittle316 1
  • Graceful master takeover auto constantly sends error

    Graceful master takeover auto constantly sends error

    Hello, I'm having an issue with graceful master takeover (automatic start-replication mode) in Orchestrator. My topology is made of 3 MySQL servers running MariaDB 10.5.15: 1 master and 2 slaves, testmysql1/2/3. Let's say testmysql1 is the master. When I ask for a graceful-master-takeover-auto in order to make testmysql2 the master (with /usr/local/orchestrator/orchestrator -c graceful-master-takeover-auto -alias MyAlias -d testmysql2.mydomain:3306), everything goes fine. (I still get an error in the CLI, ERROR GracefulMasterTakeover: sanity problem. Demoted master's coordinates changed from mysql-bin.000018:32587961 to mysql-bin.000018:32697914 while supposed to have been frozen, but replication is fine.)

    But when I put testmysql1 back as master (so /usr/local/orchestrator/orchestrator -c graceful-master-takeover-auto -alias MyAlias -d testmysql1.mydomain:3306), I don't get any error in the CLI, but in the web UI the two new slave servers show an error. And indeed, when I run SHOW SLAVE STATUS\G on my slave servers, the slave SQL thread is not running and the last error is: Could not execute Delete_rows_v1 event on table orchestrator.cluster_alias; Can't find record in 'cluster_alias', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000018, end_log_pos 33779915. I don't know why, but the table database_instance_maintenance in the orchestrator database is empty on the newly promoted master server (testmysql1). I then just click the "Skip query" button and replication starts again...
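    For reference, the "Skip query" button corresponds roughly to skipping a single event on the stopped replica; a manual sketch (MariaDB; behaviour differs if the replica uses GTID strict mode):

    ```sql
    -- Skip the one failing event (the Delete_rows on cluster_alias)
    -- and resume replication on the stuck replica.
    STOP SLAVE;
    SET GLOBAL sql_slave_skip_counter = 1;
    START SLAVE;

    -- Confirm the SQL thread is running again.
    SHOW SLAVE STATUS\G
    ```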

    I don't really know why I'm getting this error... Here is my configuration:

    • Linux Bullseye 11
    • MariaDB 10.5.15
    • Orchestrator 3.2.6

    My orchestrator.conf.json :

      "Debug": false,
      "EnableSyslog": false,
      "ListenAddress": "",
      "MySQLTopologyUser": "orchestrator_user",
      "MySQLTopologyPassword": "orchestrator_password",
      "MySQLTopologyCredentialsConfigFile": "",
      "AutoPseudoGTID": true,
      "MySQLTopologySSLPrivateKeyFile": "",
      "MySQLTopologySSLCertFile": "",
      "MySQLTopologySSLCAFile": "",
      "MySQLTopologySSLSkipVerify": true,
      "MySQLTopologyUseMutualTLS": false,
      "MySQLOrchestratorHost": "localhost",
      "MySQLOrchestratorPort": 3306,
      "MySQLOrchestratorDatabase": "orchestrator",
      "MySQLOrchestratorUser": "orchestrator_user",
      "MySQLOrchestratorPassword": "orchestrator_password",
      "ReplicationCredentialsQuery": "SELECT repl_user, repl_pass from meta.cluster where anchor=1",
      "MySQLOrchestratorCredentialsConfigFile": "",
      "MySQLOrchestratorSSLPrivateKeyFile": "",
      "MySQLOrchestratorSSLCertFile": "",
      "MySQLOrchestratorSSLCAFile": "",
      "MySQLOrchestratorSSLSkipVerify": true,
      "MySQLOrchestratorUseMutualTLS": false,
      "MySQLConnectTimeoutSeconds": 1,
      "MySQLTopologyUseMixedTLS": false,
      "DefaultInstancePort": 3306,
      "DiscoverByShowSlaveHosts": false,
      "InstancePollSeconds": 5,
      "DetachLostSlavesAfterMasterFailover": true,
      "ApplyMySQLPromotionAfterMasterFailover": false,
      "PreventCrossDataCenterMasterFailover": false,
      "PreventCrossRegionMasterFailover": false,
      "MasterFailoverDetachReplicaMasterHost": false,
      "MasterFailoverLostInstancesDowntimeMinutes": 0,
      "ApplyMySQLPromotionAfterMasterFailover": true,
      "DiscoveryIgnoreReplicaHostnameFilters": [
      "UnseenInstanceForgetHours": 240,
      "SnapshotTopologiesIntervalHours": 0,
      "InstanceBulkOperationsWaitTimeoutSeconds": 10,
      "HostnameResolveMethod": "default",
      "MySQLHostnameResolveMethod": "@@hostname",
      "SkipBinlogServerUnresolveCheck": true,
      "ExpiryHostnameResolvesMinutes": 60,
      "RejectHostnameResolvePattern": "",
      "ReasonableReplicationLagSeconds": 10,
      "ProblemIgnoreHostnameFilters": [],
      "VerifyReplicationFilters": false,
      "ReasonableMaintenanceReplicationLagSeconds": 20,
      "CandidateInstanceExpireMinutes": 60,
      "RemoveTextFromHostnameDisplay": ".mydomain:3306",
      "AuditLogFile": "",
      "AuditToSyslog": false,
      "ReadOnly": false,
      "AuthenticationMethod": "",
      "HTTPAuthUser": "",
      "HTTPAuthPassword": "",
      "AuthUserHeader": "",
      "PowerAuthUsers": [
      "ClusterNameToAlias": {
        "": "test suite"
      "RecoveryPeriodBlockSeconds": 3600,
      "RecoveryIgnoreHostnameFilters": [],
      "RecoverMasterClusterFilters": [
      "RecoverIntermediateMasterClusterFilters": [
      "ReplicationLagQuery": "",
      "DetectClusterAliasQuery": "SELECT cluster_name FROM meta.cluster;",
      "DetectClusterDomainQuery": "",
      "DetectInstanceAliasQuery": "",
      "DetectPromotionRuleQuery": "",
      "DataCenterPattern": "[.]([^.]+)[.][^.]+[.]mydomain[.]com",
      "PhysicalEnvironmentPattern": "[.]([^.]+[.][^.]+)[.]mydomain[.]com",
      "PromotionIgnoreHostnameFilters": [],
      "DetectSemiSyncEnforcedQuery": "",
      "ServeAgentsHttp": false,
      "AgentsServerPort": ":3001",
      "AgentsUseSSL": false,
      "AgentsUseMutualTLS": false,
      "AgentSSLSkipVerify": false,
      "AgentSSLPrivateKeyFile": "",
      "AgentSSLCertFile": "",
      "AgentSSLCAFile": ""

    Any idea what I could have missed or misconfigured? Thanks a lot, I really appreciate Orchestrator, very useful for my needs :+1:

    opened by Bilanda 0
  • v3.2.6(Jul 27, 2021)

    Changes since v3.2.5:


    • EnforceSemiSyncReplicas & RecoverLockedSemiSyncMaster - actively enable/disable semi-sync replicas to match master's wait count #1373, thanks @binwiederhier
    • ReasonableInstanceCheckSeconds: configures how long Orchestrator allows a check to take before considering an instance failed #1368, thanks @binwiederhier
    • ReplicationCredentialsQuery: flexible whether the query returns 2 or 5 values #1378 (bugfix)
    • Handle MariaDB behavior of dropping relay log entries on failure scenarios #1374
    • Adding systemd open file limit #1372, thanks @nivedreddy

    On the build side:

    • refactor(go mod): migrate to go modules #1356, thanks @cndoit18
    • Build/CI: using golang 1.16 #1391
    $ sha256sum *
    aed1016c1d169afbfb93955c1093fba0762383a3f9954a9e672103356f4f39e7  orchestrator-3.2.6-1.x86_64.rpm
    4ef96ed8576ace00e0bba032c7c11e36e8011751a8b4baf0c4b095b87415625b  orchestrator_3.2.6_amd64.deb
    688fd3432a014628692a284d9bee06fbab5c7b5704bd3425d580a37b3eb3a9da  orchestrator-3.2.6-linux-amd64.tar.gz
    9b63b6709423469326986ba1ac8209df983caf3f7dc3186a1f4bec2b2a1acbfc  orchestrator-cli-3.2.6-1.x86_64.rpm
    3a625f7e8ce05546514ae82124b230fd12c1e9b90f68d4785d1d37c525d38c98  orchestrator-cli_3.2.6_amd64.deb
    d4a733ec91ea48322c9ff87ec49d69fc89d9d6348608d3c7d0cde7f4b99c7737  orchestrator-client-3.2.6-1.x86_64.rpm
    53b5bc59fe94850163e84a0eacf035c5a82a1c1eb7e957894da81492e4f0e232  orchestrator-client_3.2.6_amd64.deb
    fbd01b4e428b60227907f476dee0fc6d3442970985bf5a1e063a77d50ccea7a9  orchestrator-client-sysv-3.2.6-1.x86_64.rpm
    a9f0c7bfb416822fe9fcd3ec97bb3d7125b9c4d8ac8c52210141394ef4b06671  orchestrator-client-sysv-3.2.6_amd64.deb
    9c4c06b735679d8ef84593c5b09ca0c0b0e41b50c801b1e2e3f93fb7438ff559  orchestrator-cli-sysv-3.2.6-1.x86_64.rpm
    dcb3dc0bd6fbef90d01ccaddcd9443e2a78f0a11e38728301865d000b18cb8e2  orchestrator-cli-sysv-3.2.6_amd64.deb
    ff8dc70c60d128d628157b7007ef5b8969877cbe69a05034adebc88557d09dc7  orchestrator-sysv-3.2.6-1.x86_64.rpm
    9ce8a18387729350053aaa4db9a6febc5531a4e14c3c5a2feba88824a150b9b5  orchestrator-sysv-3.2.6_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.2.6-1.x86_64.rpm(10.07 MB)
    orchestrator-3.2.6-linux-amd64.tar.gz(10.11 MB)
    orchestrator-cli-3.2.6-1.x86_64.rpm(9.60 MB)
    orchestrator-cli-sysv-3.2.6-1.x86_64.rpm(9.60 MB)
    orchestrator-cli-sysv-3.2.6_amd64.deb(9.65 MB)
    orchestrator-client-3.2.6-1.x86_64.rpm(15.01 KB)
    orchestrator-client-sysv-3.2.6-1.x86_64.rpm(15.01 KB)
    orchestrator-client-sysv-3.2.6_amd64.deb(10.47 KB)
    orchestrator-client_3.2.6_amd64.deb(10.47 KB)
    orchestrator-cli_3.2.6_amd64.deb(9.65 MB)
    orchestrator-sysv-3.2.6-1.x86_64.rpm(10.07 MB)
    orchestrator-sysv-3.2.6_amd64.deb(10.12 MB)
    orchestrator_3.2.6_amd64.deb(10.12 MB)
  • v3.2.5(May 27, 2021)

    Changes since v3.2.4:


    • Introducing RecoverNonWriteableMaster flag #1332
    • Drop fixed list of cipher suites. #1295, thanks @kormat
    • If access to ORCHESTRATOR_API fails do not expose the password(s) #1319, thanks @sjmudd
    • Expose Binlog Coordinates at time of promotion as Environment Variable #1323, thanks @gsraman
    • Fix filter match logic, be strict about IP addresses #1318
    • ConsulTxnStore: batch KV updates by key-prefix to avoid ops limit #1311, thanks @timvaillancourt
    • Use RaftHttpTransport for reverse-proxy #1344, thanks @akatashev
    • Remove resetting Auth credentials in reverse-proxy #1349, thanks @akatashev
    • Bump Bootstrap version, per CVE-2018-14040 #1336
    • docs/spelling, thanks @wreiske, michaelcoburn
    $ sha256sum *
    e91e121fd164b07e23fb462962efc1a62013d8f6c46751ccefc65494b39fe236  orchestrator-3.2.5-1.x86_64.rpm
    2589278e9ae3a71694f60343547dce056f6054bbb0d7e4011f6f37c0c4fe7438  orchestrator_3.2.5_amd64.deb
    5c8f4fb4fd6a8d4b22c39d3ecaa60a8570d8ce5ff616210caf0b6828112d542c  orchestrator-3.2.5-linux-amd64.tar.gz
    5addf1f178421324b01a02b5455eddd9e794242866d34c29d1e0690c9bf3cf8a  orchestrator-cli-3.2.5-1.x86_64.rpm
    45dd5439eb47e7722c0526be4efa618281e9d2a41da0882298443948bcc9fba8  orchestrator-cli_3.2.5_amd64.deb
    28244ea5b785a6bbb0ff5777e9479c448eafecdb909f29e034072dd9e25f036c  orchestrator-client-3.2.5-1.x86_64.rpm
    cf19c1d643c5d16211c71e706953709d187e0ae5c0fd3ed80ee7f5cb7e4b7dd6  orchestrator-client_3.2.5_amd64.deb
    7bd7e5aaf28f83598c0191b0bddc27f762ce0bb3c63b83cbb8a319b5b9ce20ae  orchestrator-client-sysv-3.2.5-1.x86_64.rpm
    9e77ce77cafde5fee080a9ef8236c4cb671fdadad759d42a9fd926f0543808a9  orchestrator-client-sysv-3.2.5_amd64.deb
    2a295fcc540b04cee2c290ce250e5a93e7e2c5c6b66868320ab231b9ccabe69e  orchestrator-cli-sysv-3.2.5-1.x86_64.rpm
    b66bf3be1f0a639f71ccca6fb81a33687a1e1ce6dce71c9b76b5a00d7ed2e7a7  orchestrator-cli-sysv-3.2.5_amd64.deb
    a94d093b695e92020d88c06576b2a61f4c98cfe1f8d6dfeabd97f8ded4d4858b  orchestrator-sysv-3.2.5-1.x86_64.rpm
    4fc3d307fc5e264dfcd4e7b68d7aaf6085f9d38abb8498afae437df6a5f5a5b7  orchestrator-sysv-3.2.5_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.2.5-1.x86_64.rpm(10.53 MB)
    orchestrator-3.2.5-linux-amd64.tar.gz(10.58 MB)
    orchestrator-cli-3.2.5-1.x86_64.rpm(10.06 MB)
    orchestrator-cli-sysv-3.2.5-1.x86_64.rpm(10.06 MB)
    orchestrator-cli-sysv-3.2.5_amd64.deb(10.12 MB)
    orchestrator-client-3.2.5-1.x86_64.rpm(15.01 KB)
    orchestrator-client-sysv-3.2.5-1.x86_64.rpm(15.01 KB)
    orchestrator-client-sysv-3.2.5_amd64.deb(10.47 KB)
    orchestrator-client_3.2.5_amd64.deb(10.47 KB)
    orchestrator-cli_3.2.5_amd64.deb(10.12 MB)
    orchestrator-sysv-3.2.5-1.x86_64.rpm(10.53 MB)
    orchestrator-sysv-3.2.5_amd64.deb(10.59 MB)
    orchestrator_3.2.5_amd64.deb(10.59 MB)
  • v3.2.4(Feb 21, 2021)

    Changes since v3.2.3:


    • Add support for recovery of async/semisync replicas of failed replication group members #1254, thanks @ejortegau
    • Issue #1259 Fix topology-related API endpoints for group replication setups #1263, thanks @ejortegau
    • SSL support in replication config; contribution #1250, thanks @noggi
    • writeManyInstances: improve error message if we run out of placeholders. #1265, thanks @sjmudd
    • add replication delay command #1267, thanks @marcosvm
    • Skip RestartReplicationQuick() on MariaDB with GTID #1264
    • Adjust the discovery uri logging to show the full dsn used (less pass… #1272, thanks @sjmudd
    • Add .PutKVPairs() method to KVStore interface #1274, thanks @timvaillancourt
    • ConsulTxnStore: handle failure in read/get transaction #1301, thanks @timvaillancourt
    • EnableMasterSSL with graceful-master-takeover-auto errors #1280, thanks @dtest
    • Add Consul KV store based on atomic transactions #1276, thanks @timvaillancourt
    • Do not restart SQL thread in RestartReplicationQuick #1309, thanks @gsraman
    • Expire cluster_alias entries #1246
    • Using TopologyRecovery as pointer to avoid go vet lock-copy issues #1242
    • Fix Orchestrator Favicon #1240, thanks @ushuz
    • Add docs for topology-tags command #1269, thanks @nickdelnano
    • XSS: sanitize 'orchestrator-msg' param #1313

    Thank you to many other contributors for documentation/typos fixes

    $ sha256sum *
    50d5d0e7362d004b9bc93785994822c0d3f0d4ba4575445273117e492d89edeb  orchestrator-3.2.4-1.x86_64.rpm
    5882d588c6e863f62fe27c3b20c97671931a98cbae6d26d36489e4bf23efeba9  orchestrator_3.2.4_amd64.deb
    f0e62b2c1a8afe50cc242ecd2177aacb9e7b61602ef1087769cc432907e72850  orchestrator-3.2.4-linux-amd64.tar.gz
    38d29ad8507d382c5a524a2cc9e0b566a1cf49ad7ebd534a5b73dced2307ac80  orchestrator-cli-3.2.4-1.x86_64.rpm
    23bf46214d774ebbb29836deeae1924100b492a6f6daa0e8dc6df1ffcee3586a  orchestrator-cli_3.2.4_amd64.deb
    6a320a299965e5491d67f4a12b0c5d37f4fc5b6f653c1a535e1daa84b13752c8  orchestrator-client-3.2.4-1.x86_64.rpm
    6e082c3910658e59f31be2fa78b6e4ccb61b83501458381a11aec072688df1b3  orchestrator-client_3.2.4_amd64.deb
    5af8202b9aeb445ea98d597e55b645e8ee27bed75a218a9c8b64d8c2fc674ce4  orchestrator-client-sysv-3.2.4-1.x86_64.rpm
    cfffd433a3bb39ebb93dde75d13b49ebeff666d55a4de764418f9f9bb0ff5233  orchestrator-client-sysv-3.2.4_amd64.deb
    39c2df8e9e9c5070d17b058923eb38670425ba4607c14e07655548d9147f2909  orchestrator-cli-sysv-3.2.4-1.x86_64.rpm
    62c64348283a7dec43f929989b514da9a7c73fc9b97d0c2b7b05c5eb38a9d898  orchestrator-cli-sysv-3.2.4_amd64.deb
    099406fe8724373b26905bae5b5f731cf022a74cb75ef974fecfde526894ffd8  orchestrator-sysv-3.2.4-1.x86_64.rpm
    b91ad648cacba590992e4cbd7bfed92895a6c1958a4f0334fedec302b2ce045d  orchestrator-sysv-3.2.4_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.2.4-1.x86_64.rpm(10.46 MB)
    orchestrator-3.2.4-linux-amd64.tar.gz(10.51 MB)
    orchestrator-cli-3.2.4-1.x86_64.rpm(10.05 MB)
    orchestrator-cli-sysv-3.2.4-1.x86_64.rpm(10.05 MB)
    orchestrator-cli-sysv-3.2.4_amd64.deb(10.10 MB)
    orchestrator-client-3.2.4-1.x86_64.rpm(14.91 KB)
    orchestrator-client-sysv-3.2.4-1.x86_64.rpm(14.91 KB)
    orchestrator-client-sysv-3.2.4_amd64.deb(10.37 KB)
    orchestrator-client_3.2.4_amd64.deb(10.37 KB)
    orchestrator-cli_3.2.4_amd64.deb(10.10 MB)
    orchestrator-sysv-3.2.4-1.x86_64.rpm(10.46 MB)
    orchestrator-sysv-3.2.4_amd64.deb(10.51 MB)
    orchestrator_3.2.4_amd64.deb(10.51 MB)
  • v3.2.3(Aug 4, 2020)

    Changes since v3.2.2:


    • Add basic support for group replication #1180 by @ejortegau
    • support for api/raft-add-peer and api/raft-remove-peer #1208, addressing #253
    • Added instance Alias to the topology display from the command line #1215 by @martinarrieta
    • Recovery: relaxed promotion rule check while searching for ideal replica #1222, a performance improvement in recovery process
    • Better analysis of UnreachableMaster scenario #1225
    • Doc updates

    Thanks to all contributors @sjmudd @luisyonaldo @MOON-CLJ @EagleEyeJohn, John Nicholls

    $ sha256sum *
    191b6cb6be6c4b2231a56c154b7544d7190c2e27f0604ca46297ecad02000733  orchestrator-3.2.3-1.x86_64.rpm
    e1e5cb51973c45500c5de7f487378d7dc5a478d468850e9d228d1907e3fe5773  orchestrator_3.2.3_amd64.deb
    73867476805d7cb972d27acdf4f94fa74ddd5cec700ed39bc7dbc932916bb6b5  orchestrator-3.2.3-linux-amd64.tar.gz
    0fb8073ea77dad06deda9b056ee0c1c9d55956cc1900801d89543926f7b0cc6b  orchestrator-cli-3.2.3-1.x86_64.rpm
    f4b6962fb87811350f63e1ee6132b74eeb9a56c710a5bb9d7d43bae4d342e354  orchestrator-cli_3.2.3_amd64.deb
    80b196f130f7db91f7d00bc301aafa5c8129a5fc26cf9eae6659fbc9618d928b  orchestrator-client-3.2.3-1.x86_64.rpm
    b63d5ee473684e69760b65db91bdd83daffbee2b75fb19a2d7a5746e178bc943  orchestrator-client_3.2.3_amd64.deb
    dc03402fd98235842f2470fa9f387e70bfdc40e6084d42d5f8a310bb16994843  orchestrator-client-sysv-3.2.3-1.x86_64.rpm
    f0773d30e531d1f745b9aeaca62fab79a1e521035cb064e1233e774da48b1c75  orchestrator-client-sysv-3.2.3_amd64.deb
    ad00dfe18612e1552521e5734a36a165f65e890376312e4ddca5a03cdce22456  orchestrator-cli-sysv-3.2.3-1.x86_64.rpm
    18dd81ec287b51f56bfe8f4245204759fb40a5c63dc4a83b9134e2963e9eeb69  orchestrator-cli-sysv-3.2.3_amd64.deb
    a75ddb95369b4271ac791e634a09dfb80e209cecc70ff89322e326a040dafe49  orchestrator-sysv-3.2.3-1.x86_64.rpm
    e3930653fd0ec682628e0dfe9b8c025c2637f3d84de51def9d88d811ce57cb2c  orchestrator-sysv-3.2.3_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.2.3-1.x86_64.rpm(9.98 MB)
    orchestrator-3.2.3-linux-amd64.tar.gz(10.03 MB)
    orchestrator-cli-3.2.3-1.x86_64.rpm(9.57 MB)
    orchestrator-cli-sysv-3.2.3-1.x86_64.rpm(9.57 MB)
    orchestrator-cli-sysv-3.2.3_amd64.deb(9.62 MB)
    orchestrator-client-3.2.3-1.x86_64.rpm(14.79 KB)
    orchestrator-client-sysv-3.2.3-1.x86_64.rpm(14.79 KB)
    orchestrator-client-sysv-3.2.3_amd64.deb(10.26 KB)
    orchestrator-client_3.2.3_amd64.deb(10.26 KB)
    orchestrator-cli_3.2.3_amd64.deb(9.62 MB)
    orchestrator-sysv-3.2.3-1.x86_64.rpm(9.98 MB)
    orchestrator-sysv-3.2.3_amd64.deb(10.03 MB)
    orchestrator_3.2.3_amd64.deb(10.03 MB)
  • v3.2.2(Jun 22, 2020)

    Changes since v3.1.4:

    Notable changes:

    core logic, detection & failover:

    • Support for FailMasterPromotionOnLagMinutes #1115
    • introducing graceful-master-takeover-auto: graceful takeover where orchestrator can auto-pick the new master and also start replication on the demoted master.
    • Better semi-sync analysis #1171, introducing NotEnoughValidSemiSyncReplicasStructureWarning
    • Analysis: locked semi sync master #1175, introducing LockedSemiSyncMaster

    development, build & testing:

    • New CI tests: upgrade, system tests; major overhaul of testing, see pospelov-v
    • script/dock to run local/system environments, tests, generate packages. See
    • Expect and use go1.14 #1186


    • Orchestrator systemd dependency #1112
    • Fixed ReadUnambiguousSuggestedClusterAliases logic #1161
    • fix mustPromoteOtherCoMaster debug message #1162
    • ascii topology: indicate errant GTID #1163
    • search recoveries by cluster alias #1090
    • skip AddReplicaKey if it is specified in config.Config.DiscoveryIgnoreReplicaHostnameFilters #1096
    • Support HTTPS for Consul KV #1047
    • Allow sorting clusters on dashboard by count, name, or alias #1054
    • Display region, data center, and environment in UI #1095
    • orchestrator-client: return raw JSON for api call on error #1166
    • Format delays in days / hours / minutes / seconds #1184
    • Skip Verify should be SSLSkipVerify instead in the mysql backend config. #1191


    • slave->replica changes throughout the code #1188
    • API incompatibility: analysis names changed: DeadMasterAndSlaves->DeadMasterAndReplicas etc.
    • Web interface to use "replica" terminology (e.g. "Stop replication" button replaces "Stop slave")
    • API: transition into new terminology #1188:
      • The API for Instance now adds new terminology fields. Replicas is identical to SlaveHosts, ReplicationLagSeconds is identical to SlaveLagSeconds, etc.
      • Users can opt to use the new naming convention. At this point I believe there is no user interaction (command line, API call, parsing API response) that forces the user to use slave terminology.
      • There is no plan at this time for deprecating old names.
      • Internally, the old names have been removed, and are only exposed in the API for backwards compatibility.

    Contributions by @jhriggs, @luisyonaldo, @rluisr, @smirnov-vs, @MaxFedotov, @sjmudd, @cezmunsta , @mcrauwel , @pospelov-v, @martinarrieta - thank you!

    $ sha256sum *
    b7fe2069db0092041d8ec3a427efb8a072773de9c8648962885ad35d4a38b67b  orchestrator-3.2.2-1.x86_64.rpm
    e90fa66a37c8d509e7d4b1c3a4118fd8c8bc8d8b856fa183ddadf31f11a1f3f7  orchestrator_3.2.2_amd64.deb
    334c6f01e05abf428d62625001f0371d117944cf92d12da49bc6ae958501e6e4  orchestrator-3.2.2-linux-amd64.tar.gz
    c7e05ca9b8493e93caedc77a1e39daacf2071f9827166fe296b4b1f93e7075f5  orchestrator-cli-3.2.2-1.x86_64.rpm
    f351e8cac721eea7fa8786954e53f8d514f3673497d017d133a58e38e17a7657  orchestrator-cli_3.2.2_amd64.deb
    ba15f1a2070ffc710f74e25c756d38f82732ea450bed59507b8fc5bfb231b864  orchestrator-client-3.2.2-1.x86_64.rpm
    ac6606ca10fb644315d5f88baa262611cea5136d16e4dcf38f886eee3ee7c854  orchestrator-client_3.2.2_amd64.deb
    b9a381dd64ab218ac87635e6367d51b88629eb5b416b4bce1e5db68121685642  orchestrator-client-sysv-3.2.2-1.x86_64.rpm
    89ef1d3fd9b4476bdb9da92fbceb4321c1cb010573c694836b71336e7ca98d42  orchestrator-client-sysv-3.2.2_amd64.deb
    6294ea0f6b16c8ed0715eb8ba92edfbd9e2326c7b6da7073e296f175a62e0c09  orchestrator-cli-sysv-3.2.2-1.x86_64.rpm
    3d9dd1b0fdcd20688c8c49088a4a11c312af0eaa0d907454e721d8b0cf9a2068  orchestrator-cli-sysv-3.2.2_amd64.deb
    a907a53ab0d630c89b290672ef9c2fbdb5fd06d75b89fe94291e20fff4c515fc  orchestrator-sysv-3.2.2-1.x86_64.rpm
    ab42fad5833efe2d3d737c5dd8fa687188ad007cb63eb6aab060967222da7ddc  orchestrator-sysv-3.2.2_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.2.2-1.x86_64.rpm(9.96 MB)
    orchestrator-3.2.2-linux-amd64.tar.gz(10.01 MB)
    orchestrator-cli-3.2.2-1.x86_64.rpm(9.56 MB)
    orchestrator-cli-sysv-3.2.2-1.x86_64.rpm(9.56 MB)
    orchestrator-cli-sysv-3.2.2_amd64.deb(9.61 MB)
    orchestrator-client-3.2.2-1.x86_64.rpm(14.79 KB)
    orchestrator-client-sysv-3.2.2-1.x86_64.rpm(14.79 KB)
    orchestrator-client-sysv-3.2.2_amd64.deb(10.26 KB)
    orchestrator-client_3.2.2_amd64.deb(10.27 KB)
    orchestrator-cli_3.2.2_amd64.deb(9.61 MB)
    orchestrator-sysv-3.2.2-1.x86_64.rpm(9.96 MB)
    orchestrator-sysv-3.2.2_amd64.deb(10.01 MB)
    orchestrator_3.2.2_amd64.deb(10.01 MB)
  • v3.1.4(Jan 26, 2020)

    Changes since v3.1.3:


    • Support for DiscoverySeeds: a hard-coded list of instances to be discovered upon startup. Good for bootstrapping bundled clusters, e.g. in testing environments (no need to call "discover" after startup), #1036
    • Update from openark/golib to support TZ with logging #1017, thanks @jfudally
    • #1033 fix instance alias, thanks @jfg956
    • #1043 doc updates, thanks @jfg956
    • #960 write buffer metrics - thanks @luisyonaldo, @sjmudd
    • #1034 bugfix, thanks @yangeagle
    • #1044: fix bug where post-failure processes were invoked even if pre-failover processes failed
    • fix return values in defer function, #946
    • Fix systemd on reboot #1012, thanks @Honiix

    There's a bunch of yet unmerged pull requests -- thank you all for your contributions and for your patience!

    $ sha256sum *.*
    2fa460f9684aef2a95884b83225b933b3bcac0935d80df8e9b8690e427298803  orchestrator-3.1.4-1.x86_64.rpm
    e9b2b48b102fa30c64f3a6419185c171d2ab3cc483fc89a6ced1ca51a1f38ef2  orchestrator-3.1.4-linux-amd64.tar.gz
    8def7d4e67824dc27876ad5fdb51d51f9bed6c25be35daa07f464a03925e9554  orchestrator-cli-3.1.4-1.x86_64.rpm
    85d057b814dad4a3a463f819688117ceacf910e0b9e576117e49d6965becc2d5  orchestrator-cli-sysv-3.1.4-1.x86_64.rpm
    3477f26c2292adb4ffcae88ecc33b4230e3662ce95f28fab3f5ecdc60b368353  orchestrator-cli-sysv-3.1.4_amd64.deb
    e809976b99c808c9d30294af9e4bef492adb803f9361a8e2d71e33f0af755f71  orchestrator-cli_3.1.4_amd64.deb
    63f3fc7f188fdd9b167192a00a90bce4d13b419809fdabedaf308dac8b4ec4f5  orchestrator-client-3.1.4-1.x86_64.rpm
    115c3c69eb58f51528b24c81121af601e5348056c02b8b37ba60d0cb530fef40  orchestrator-client-sysv-3.1.4-1.x86_64.rpm
    2155f1d85e8e617885c2c6a7bf0f1c3afdb3064c42fb614df7a83260f8c597bb  orchestrator-client-sysv-3.1.4_amd64.deb
    121b1cdb3387d09b77e4b865ce8205de8d6824225dff1862467497f5f8a20971  orchestrator-client_3.1.4_amd64.deb
    5052b2922e989591905ad67d60e895e3e6997588375014c2c9b16f2f28e9fd90  orchestrator-sysv-3.1.4-1.x86_64.rpm
    220fa41abe00843321b12a5eb5bf006ed9bcef38af83ef32ba2ac9f98eb7b45c  orchestrator-sysv-3.1.4_amd64.deb
    fef3c797a623789b4cc99d09c56fa50ff2e1eb8cebd5388d24552a7dbb8d5210  orchestrator_3.1.4_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.1.4-1.x86_64.rpm(9.82 MB)
    orchestrator-3.1.4-linux-amd64.tar.gz(9.87 MB)
    orchestrator-cli-3.1.4-1.x86_64.rpm(9.42 MB)
    orchestrator-cli-sysv-3.1.4-1.x86_64.rpm(9.42 MB)
    orchestrator-cli-sysv-3.1.4_amd64.deb(9.46 MB)
    orchestrator-client-3.1.4-1.x86_64.rpm(14.71 KB)
    orchestrator-client-sysv-3.1.4-1.x86_64.rpm(14.71 KB)
    orchestrator-client-sysv-3.1.4_amd64.deb(10.16 KB)
    orchestrator-client_3.1.4_amd64.deb(10.16 KB)
    orchestrator-cli_3.1.4_amd64.deb(9.46 MB)
    orchestrator-sysv-3.1.4-1.x86_64.rpm(9.82 MB)
    orchestrator-sysv-3.1.4_amd64.deb(9.87 MB)
    orchestrator_3.1.4_amd64.deb(9.87 MB)
  • v3.1.3(Dec 4, 2019)

    Changes since v3.1.2:

    Notable changes:

    • Add topology-tags command, #942, thanks @nickdelnano
    • support for DiscoveryIgnoreHostnameFilters and DiscoveryIgnoreMasterHostnameFilters, #1018
    • Supporting UnreachableIntermediateMasterWithLaggingReplicas, #1005
    • topology recovery: supporting {instanceType}, {isMaster}, {isCoMaster} placeholders, #1008
    • Recovery processes ending with "&" are executed asynchronously, #968
    • Removed TravisCI builds, now building via GitHub Actions #1007
    • emergentlyRestartReplicationOnTopologyInstance fixes, #1010
    • Implement alias view in web interface, #992 , thanks @jfg956
    • Dockerfile: upgrade to go 1.12.10 and reduce layer churn around packages, #986, thanks @nickvanw
    • MySQLOrchestratorSSLSkipVerify to apply on backend TLS config, #985
    • doc updates

    Also thanks:

    • @MaxFedotov #976 , #972
    • @JoseFeng #1013
    • @amangoel #994
    • @tom--bo #973 , #970

    There are still some outstanding PRs that did not make it into this release. Hope to be able to merge & release them soon.

    $ sha256sum *.*
    132e04a6d1ef05dae268864cdd2eef82f913b3a7c49e89cd72c000e12c1d0a38  orchestrator-3.1.3-1.x86_64.rpm
    6764195e61ca36e0e0096f2bca859f31f31bd742f18c1b3177bfa8478240c402  orchestrator-3.1.3-linux-amd64.tar.gz
    4f25c28a007d5fc16fb268ccffcc0c97a5e75d7734c7ba2f53c8fb788896ebb5  orchestrator-cli-3.1.3-1.x86_64.rpm
    037d00942187d52ff2fd79087beaae1205635f6d2d2439662d0c179d8591816a  orchestrator-cli-sysv-3.1.3-1.x86_64.rpm
    68b2a08de0645ab316dd916cdb69475ffd857db089078f35501074e1e56997b8  orchestrator-cli-sysv-3.1.3_amd64.deb
    9d3a29c29f293819e12fa17b3d48063c2fa606879a44dcdb2b7acf23ef413e49  orchestrator-cli_3.1.3_amd64.deb
    e3a6231c1bdc6f8bda768d3ba0244f6f6855ae6f92cd34706b4056122435aad6  orchestrator-client-3.1.3-1.x86_64.rpm
    26c4f62d917dda1382f01e717c7adf0f778b50a530428e47b3f226ddd47207a1  orchestrator-client-sysv-3.1.3-1.x86_64.rpm
    c72811259b7a79f5009171632a1f5c0111633da5ba0bd3e4fe6d6003ae1a1fcb  orchestrator-client-sysv-3.1.3_amd64.deb
    6fc489fa375919826d5a7cf37ec4bf4c88669d4bc93e965970e8b6c3f47b8260  orchestrator-client_3.1.3_amd64.deb
    662f95e3ad0d5d2b20988bed06760bb45c2f68696fb88f78218b707efa84231a  orchestrator-sysv-3.1.3-1.x86_64.rpm
    4ba954844ce28630423eb82601b91b51ecc960575f713adbdfefd5c80d9220ad  orchestrator-sysv-3.1.3_amd64.deb
    9e1565bb26a107e89d6bda7ee991c594dc49f16a091c5246b1bf386be6c663d7  orchestrator_3.1.3_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.1.3-1.x86_64.rpm(9.81 MB)
    orchestrator-3.1.3-linux-amd64.tar.gz(9.85 MB)
    orchestrator-cli-3.1.3-1.x86_64.rpm(9.41 MB)
    orchestrator-cli-sysv-3.1.3-1.x86_64.rpm(9.41 MB)
    orchestrator-cli-sysv-3.1.3_amd64.deb(9.45 MB)
    orchestrator-client-3.1.3-1.x86_64.rpm(14.71 KB)
    orchestrator-client-sysv-3.1.3-1.x86_64.rpm(14.71 KB)
    orchestrator-client-sysv-3.1.3_amd64.deb(10.16 KB)
    orchestrator-client_3.1.3_amd64.deb(10.17 KB)
    orchestrator-cli_3.1.3_amd64.deb(9.45 MB)
    orchestrator-sysv-3.1.3-1.x86_64.rpm(9.81 MB)
    orchestrator-sysv-3.1.3_amd64.deb(9.86 MB)
    orchestrator_3.1.3_amd64.deb(9.86 MB)
  • v3.1.2(Aug 25, 2019)

    Changes since v3.1.0:

    Notable changes:

    • Using new GitHub Actions CI/CD. Travis builds still working. They will probably be removed once GitHub Actions CI/CD becomes GA and fully available to all.
    • graceful master takeover: reduced some overhead; revert to writable on error #948
    • Override master promotion: apply post-unsuccessful processes #947
    • Include port in DiscoveryIgnoreReplicaHostnameFilters regexp matching, #952 , thanks @dougfales
    • Fixing raft leaderAPI URL when URLPrefix nonempty, #951, thanks @xjxyxgq
    • support searching by cluster alias #936, thanks @MaxFedotov
    $ sha256sum *.*
    6edb952e394e8a6c4dfd91a94ba7bc91918dcc892c41887c86c02eddf50f8c25  orchestrator-3.1.2-1.x86_64.rpm
    4abd0f4bddf7dd5899b4acc0f809b89e941c02cddfda5c33e5998def47b69232  orchestrator-3.1.2-linux-amd64.tar.gz
    7f951ab76243da1ee1cb2d87e3dd9d59bbe7eaf9b8ad676fce424abc23775636  orchestrator-cli-3.1.2-1.x86_64.rpm
    28578833104364cc1e7e59560449c0f33553d34c4248d1dbbf4c5db8e7fddf1c  orchestrator-cli-sysv-3.1.2-1.x86_64.rpm
    5de386c02e53f166dfc2971489d4399f3ca0f73a9839dc8cdd5f375548434de7  orchestrator-cli-sysv-3.1.2_amd64.deb
    d163ac934c4b3b9f7adf87fab5c4f4fa6c38639b3a2fb0fbdbfbec6d6f0559ce  orchestrator-cli_3.1.2_amd64.deb
    fbc9bb0d135306e065d918baf62a0da1236c5b32f997406302cb7ab966bf57fa  orchestrator-client-3.1.2-1.x86_64.rpm
    65dc6dbe9f68c04ca9771a1c26a06daef75ea6a7e73ee51a89d2732826faa1eb  orchestrator-client-sysv-3.1.2-1.x86_64.rpm
    a84742065902809b3c9eaac29944174b30350f54575c7076baa3bdef2e4d012f  orchestrator-client-sysv-3.1.2_amd64.deb
    dc9d35b0d18b431f1a7b92b468b4ac02d30d05706d02e5d88b8be2fe9410cbad  orchestrator-client_3.1.2_amd64.deb
    b64274812cb32046a0abd0e3178c2749008e419af3bdd660182255f61e304d73  orchestrator-sysv-3.1.2-1.x86_64.rpm
    0bb22c1d85b3608721aa4465f6c94f85f1988f0549697afc89db74a32bc629fe  orchestrator-sysv-3.1.2_amd64.deb
    cb66756386466d705f04fe189fb76e83dfa84dc3c5244b091320826aefd0a495  orchestrator_3.1.2_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.1.2-1.x86_64.rpm(9.80 MB)
    orchestrator-3.1.2-linux-amd64.tar.gz(9.85 MB)
    orchestrator-cli-3.1.2-1.x86_64.rpm(9.40 MB)
    orchestrator-cli-sysv-3.1.2-1.x86_64.rpm(9.40 MB)
    orchestrator-cli-sysv-3.1.2_amd64.deb(9.45 MB)
    orchestrator-client-3.1.2-1.x86_64.rpm(14.53 KB)
    orchestrator-client-sysv-3.1.2-1.x86_64.rpm(14.53 KB)
    orchestrator-client-sysv-3.1.2_amd64.deb(10.00 KB)
    orchestrator-client_3.1.2_amd64.deb(10.00 KB)
    orchestrator-cli_3.1.2_amd64.deb(9.45 MB)
    orchestrator-sysv-3.1.2-1.x86_64.rpm(9.80 MB)
    orchestrator-sysv-3.1.2_amd64.deb(9.85 MB)
    orchestrator_3.1.2_amd64.deb(9.85 MB)
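The published checksums above can be fed to `sha256sum -c` to verify a download before installing. A minimal sketch, using a stand-in file since the real packages are not present here; with an actual release you would paste the matching `<sha256>  <filename>` line from the list above:

```shell
# Sketch: verifying a download against a published checksum line.
# /tmp/orchestrator-example.pkg is a stand-in for a downloaded package.
printf 'example payload\n' > /tmp/orchestrator-example.pkg
expected=$(sha256sum /tmp/orchestrator-example.pkg | awk '{print $1}')
# sha256sum -c reads "<hash>  <file>" lines and reports OK/FAILED per file.
echo "${expected}  /tmp/orchestrator-example.pkg" | sha256sum -c -
```

`sha256sum -c` exits non-zero on any mismatch, which makes it suitable for scripted download pipelines.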
  • v3.1.1(Jul 7, 2019)

    This release is identical in code to v3.1.0

    This maintenance release fixes the binary packages: the v3.1.0 packages were built under Alpine Linux against musl, and would not run on glibc-based systems.

    $ sha256sum *.*
    1f11fa2e2be154377d81655bf21b621ab8b6382f01ba2c9b5a56a7ead7d12231  orchestrator-3.1.0-1.x86_64.rpm
    a6f8db1e36a19b103eed2017631ecf0f869445012b0fd7befc9e3df5b26c59fa  orchestrator-3.1.0-linux-amd64.tar.gz
    af6428a158d8bf499cb14698e7891c2edbddf63499f661496986cd9806dd815b  orchestrator-cli-3.1.0-1.x86_64.rpm
    1993fc0661c25492b56b19beb62a85fae822146f31fb0591c3b5e076b97bdd3e  orchestrator-cli-sysv-3.1.0-1.x86_64.rpm
    0b0d7bceb6fdd75bf007dcf4bbcbb4bcfb51076f86b2409ea4beee326589181c  orchestrator-cli-sysv-3.1.0_amd64.deb
    27cba78e9252ce1c672e3cd476c916bdd571a4f06998b0091655988df1ba9faf  orchestrator-cli_3.1.0_amd64.deb
    41582033cdd8351663eb59721c23420e8c81440caa8310c3a7016198c1408483  orchestrator-client-3.1.0-1.x86_64.rpm
    9c6bd0c39fa4e93df9611f15783695af9bfa2367fbba64475dfd49bda230a311  orchestrator-client-sysv-3.1.0-1.x86_64.rpm
    6ae189971fe097bea4745009395b4a663859a46e6c3521a30982258ca3fc42aa  orchestrator-client-sysv-3.1.0_amd64.deb
    40bd54c1e046ca7acdb87b0c72eca48616373639a993981c705e0d653405d7e4  orchestrator-client_3.1.0_amd64.deb
    0f8b2142beb0da42e248ba26f5efd907fb81055b199ed9e9c485daa68d6e7fab  orchestrator-sysv-3.1.0-1.x86_64.rpm
    236f00ec4c7351289fc337457a7cc716d9ac9b4bc9cda28fb349742004d647ef  orchestrator-sysv-3.1.0_amd64.deb
    5b6bd6872bc713495c98d0cec61200398fadb40f6a925ca686cc9e0880e28673  orchestrator_3.1.0_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.1.0-1.x86_64.rpm(9.80 MB)
    orchestrator-3.1.0-linux-amd64.tar.gz(9.85 MB)
    orchestrator-cli-3.1.0-1.x86_64.rpm(9.40 MB)
    orchestrator-cli-sysv-3.1.0-1.x86_64.rpm(9.40 MB)
    orchestrator-cli-sysv-3.1.0_amd64.deb(9.45 MB)
    orchestrator-client-3.1.0-1.x86_64.rpm(14.53 KB)
    orchestrator-client-sysv-3.1.0-1.x86_64.rpm(14.53 KB)
    orchestrator-client-sysv-3.1.0_amd64.deb(9.96 KB)
    orchestrator-client_3.1.0_amd64.deb(9.96 KB)
    orchestrator-cli_3.1.0_amd64.deb(9.45 MB)
    orchestrator-sysv-3.1.0-1.x86_64.rpm(9.80 MB)
    orchestrator-sysv-3.1.0_amd64.deb(9.85 MB)
    orchestrator_3.1.0_amd64.deb(9.85 MB)
  • v3.1.0(Jul 3, 2019)

    Changes since v3.0.14:

    v3.1.0 is now released. It has been a while since the previous release, so this one carries a longer changelog.

    Users will note the jump in minor version from 3.0.14 to 3.1.0. This repository does not use semantic versioning; the version bump mainly reflects:

    • deb and rpm packaging now default to systemd as opposed to sysv
    • golang1.12 is required to build orchestrator

    Notable changes since 3.0.14:

    • Supporting Consul auto cross-DC KV distribution, #819. A new configuration variable, ConsulCrossDataCenterDistribution (bool, default false), is introduced. When enabled, the orchestrator leader (whether raft-based or not), as part of submit-master-kv-stores, will ask Consul to distribute the KV values to all known datacenters.
      • also related: Consul KV consistency checks #894
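A minimal, hypothetical config fragment enabling this behavior might look as follows. ConsulCrossDataCenterDistribution is the variable introduced in #819; the Consul address shown is an illustrative assumption:

```shell
# Hypothetical orchestrator config fragment (values are illustrative):
cat > /tmp/orchestrator-consul-example.json <<'EOF'
{
  "ConsulAddress": "127.0.0.1:8500",
  "ConsulCrossDataCenterDistribution": true
}
EOF
# Confirm the flag is set before merging the fragment into the real config:
grep '"ConsulCrossDataCenterDistribution"' /tmp/orchestrator-consul-example.json
```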
    • purge-binary-logs API, safe operation, #825. Added a purge-binary-logs/:host/:port/:logFile API endpoint, supporting ?force=true. By default, purge-binary-logs now refuses to purge if the host has replicas which have not yet applied events in the binary logs to be purged.
    • Adding locate-gtid-errant command #850
      • locate-gtid-errant reports the names of the binary logs containing errant GTID for a given instance.
      • which-gtid-errant command outputs the errant GTID, if such exists.
    • Adding which-cluster-alias, #900. orchestrator -c which-cluster-alias and orchestrator-client -c which-cluster-alias return the alias for a cluster, given either -c <cluster> or -i <instance>.
    • Create Post Take-Master Processes Hook, #859, thanks @daniel-2647. This PR introduces one new config option: PostTakeMasterProcesses : "some PostTakeMasterHook here"
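A sketch of how such a hook might be configured. The list form assumes PostTakeMasterProcesses follows the shape of orchestrator's other *Processes hooks, and the hook command itself is a placeholder:

```shell
# Hypothetical config fragment; the hook command is a placeholder only.
cat > /tmp/orchestrator-hook-example.json <<'EOF'
{
  "PostTakeMasterProcesses": [
    "echo 'take-master complete' >> /tmp/orchestrator-recovery.log"
  ]
}
EOF
# Show the hook entry we just wrote:
grep -A 2 '"PostTakeMasterProcesses"' /tmp/orchestrator-hook-example.json
```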
    • NoFailoverSupportStructureWarning, NoLoggingReplicasStructureWarning #852
    • Add structure warning to replication-analysis when all masters are read_only #878, thanks @jfudally
    • Adding Region; prevent cross-region failover, #884. Adds a Region field to Instance, and a corresponding region column to the database_instance backend table. A region is a geographic location at a higher level than a DataCenter; e.g., AWS us-east-1 is a region. The region value is supported by the following new configuration variables:
      • RegionPattern: a regexp to extract the region name from the hostname, if possible
      • DetectRegionQuery: alternatively, a query which computes the region
      • PreventCrossRegionMasterFailover: a failover restriction which only allows failovers within the same region as the failed master, or else aborts promotion of a new master (similar in behavior to PreventCrossDataCenterMasterFailover)
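RegionPattern extracts the region from the hostname with a regular expression. A quick way to sanity-check a candidate pattern outside orchestrator is to run it against sample hostnames; the hostname scheme and the pattern below are assumptions for illustration:

```shell
# Hypothetical hostname scheme: <host>.<region>.example.com
hostname="db-1234.us-east-1.example.com"
# Candidate capture pattern for RegionPattern, tried here with sed:
region=$(echo "$hostname" | sed -E 's/^[^.]+\.([a-z]+-[a-z]+-[0-9]+)\..*$/\1/')
echo "region=${region}"   # prints: region=us-east-1
```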
    • Forget instance: accept fuzzy/partial hostnames #886
    • consolidate detach-replica and detach-replica-master-host #801
    • server side problem analysis #793; api owns Problems, as opposed to JS computing them
    • Workaround to bug 83713: GTID, MTR and relay log corruption #807
    • all things equal, prefer promoting instance without errant GTID #812
    • relocate-replicas: sanity check to avoid invalid circular replication #839
    • Modify bulk-promotion-rules api call to return the promotion rule expiry timestamp #843, thanks @sjmudd
    • Provide snapshot-topologies support in orchestrator-client (and via api) #912, thanks @sjmudd
    • orchestrator-client and orchestrator command line usage differences #903, thanks @sjmudd
    • Update support for go 1.12 (also triggers some file reformatting), #861, thanks @sjmudd. This has been expanded to require go1.12 or above, also in the Dockerfile and in Travis.
    • Support countLostReplicas in failover hooks #877
    • Master failover: update alias #913
    • force-master-failover does not require master to be writable
    • Web UI: take-master and graceful-master-takeover, #895. Both take-master and graceful-master-takeover are now supported in all modes (smart, classic, GTID, pseudo-GTID).
    • bugfix: fix executed_gtid_set missing from instances without binlogs #804, thanks @fuyar
    • bugfix: Check and recover random order #800, thanks @yangeagle
    • bugfix: in orchestrator-client for authentication handling #797, thanks @cswingler
    • bugfix: LeaderURI: self identify, avoid infinite forwarding #792
    • bugfix: fix flappy integration test #785, thanks @mialinx
    • bugfix: ChangeMasterCredentials: fixed when server is not a replica #789
    • bugfix: adding bash to final container so examples work #805, thanks @anthonyneto
    • bugfix: add curl and jq for orchestrator-client #863, thanks @marcosvm
    • Add the DiscoveryIgnoreReplicaHostnameFilters in the sample configuration. #815, thanks @jfg956
    • Add MySQLConnectTimeoutSeconds option in sample conf files. #860, thanks @jfg956
    • orchestrator-client gsed support for Darwin/BSD #795, thanks @cswingler
    • Documentation updates & fixes, thanks @seeekr, @utdrmac, @ruleant, @cezmunsta, @sjmudd
    • Build: Dockerfile.packaging now provides a full build cycle to generate binaries and release packages (tgz, deb, rpm)
      • build via docker build . -f Dockerfile.packaging -t orchestrator-packaging
      • docker run --rm -it orchestrator-packaging:latest, find artifacts in /tmp/orchestrator-release
    • Build: generating packages for both systemd (default) and sysv (see package name with -sysv-), thanks @mateusduboli

    See packages in v3.1.1

    Due to an oversight, the packages attached to this release were built with musl as opposed to glibc. Please use the packages from v3.1.1 or later.

    Source code(tar.gz)
    Source code(zip)
  • v3.0.14(Jan 16, 2019)

    Changes since v.3.0.13:

    Some three months since previous release with much going on. Notable changes:

    • New config: PreventCrossDataCenterMasterFailover, #766, which ensures failovers only take place within the same datacenter, if so desired.
    • Tagging: tag, untag, and search instances by tags, via #664; see the documentation.
    • KV store auto-populates master info if it does not exist: #549
    • Multi-values replication thread state, #767, thanks @ggunson
    • GTID: support fixing errant transaction by injecting empty transaction on master, #707 introduces -c gtid-errant-inject-empty command.
    • GTID: fix to master-master, #672, #673
    • Fix to authentication in orchestrator-client, #681 , thanks @cezmunsta
    • orchestrator-client: Add credentials to environment variables, #675 , thanks @mateusduboli
    • FigureClusterName: Avoid heavy queries if clusterHint is empty, #727 , thanks @sjmudd
    • fix create-per-test missing default value, #692, thanks @MOON-CLJ
    • Fix to picking replica on balanced version / binlog_format state at failover, #773 , thanks @yangeagle
    • Fix to take-master in master-master topology, #734
    • raft support for SetClusterAliasManualOverride #776 , thanks @MOON-CLJ
    • Added force-master-takeover API, #745 , thanks @MOON-CLJ
    • Add --depends 'jq >= 1.5' to fpm builds #752 , thanks @tomkrouper
    • Some mitigation for reset-master risk, #706 , #762
    • Web interface global message, #733
    • Updated Docker + build, #774
    • Documentation updates

    And much more...

    $ sha256sum *
    9d63b6c0db3805bc12408b001188701db2889355a70969d9f49d30b66cfd5a2d  orchestrator-3.0.14-1.x86_64.rpm
    a7ef69fcc10d7ce80a414e5c1e28ebf286ca4478d33a290fbdadc4fdbdc5d0e2  orchestrator-3.0.14-linux-amd64.tar.gz
    4146a0aa6dc2f122d137937fb70eaeb4510573f8991727c34c48333bb1644582  orchestrator-cli-3.0.14-1.x86_64.rpm
    6d8a9c42769583ed6366b71840a718c8b035e6d58298f58681f4dc9a42d98868  orchestrator-cli_3.0.14_amd64.deb
    3685339dc83067527af6a8db81a315d40c8427c205894c44a69aa105995d17df  orchestrator-client-3.0.14-1.x86_64.rpm
    0c0e5d7aa77a83f12aeac0d1503fd82151a776579367314a1427e491f2167d4f  orchestrator-client_3.0.14_amd64.deb
    14725ce9ef3a57c2050793af172ab3d14c3abb73442329afdd79e4a44de52c61  orchestrator_3.0.14_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0.14-1.x86_64.rpm(6.84 MB)
    orchestrator-3.0.14-linux-amd64.tar.gz(6.90 MB)
    orchestrator-cli-3.0.14-1.x86_64.rpm(6.44 MB)
    orchestrator-client-3.0.14-1.x86_64.rpm(10.01 KB)
    orchestrator-client_3.0.14_amd64.deb(9.47 KB)
    orchestrator-cli_3.0.14_amd64.deb(6.50 MB)
    orchestrator_3.0.14_amd64.deb(6.91 MB)
  • v3.0.13(Oct 28, 2018)

    Changes since last release:

    TL;DR Much Oracle GTID support, and many other things.

    • GTID: detecting and analyzing errant transactions. On Oracle GTID clusters, orchestrator will:
      • Identify if a replica has errant transactions, and indicate the GTID set for those errant transactions.
      • Web UI will visually indicate a server running with errant transactions.
      • Allow "fixing" of errant transactions via a reset master operation. This auto-calculates the errant range, and correctly sets gtid_executed and gtid_purged. orchestrator will protect against running this "fix" on an intermediate master.
    • GTID failure prediction:
      • Predict replication errors and prevent relocation: orchestrator will not allow relocating server A below server B if A is known to fail replication GTID-wise. MySQL can only tell after the fact; orchestrator can tell before. This protects against loss of relay logs (especially important for SQL_DELAYed servers) and against futile operations. Thanks @MarkLeith, @evanelias, @sjmudd !
    • GTID operations:
      • Better concurrency control; considers postponed functions and utilizes failover optimizations.
    • Support for the ReplicationCredentialsQuery config, which allows orchestrator to grab replication credentials per cluster; these can be used for setting up replication on graceful takeover or in a master-master setup. Thanks @cezmunsta, @igroene !
    • Fix to Zk KV update
    • Fix to analysis GROUP BY statement
    • Much internal refactoring
    • Fixed integration tests on 5.7 (#649, #610), thanks @bbuchalter !
    • Added web UI button for disable/enable global recoveries (#646, #656), thanks @mateusduboli !
    • Added which-broken-replicas to list replicas with errors (#660), thanks @cezmunsta !
    • Added which-cluster-osc-running-replicas command (#663), thanks @cezmunsta !
    • take-siblings now uses "smart" logic
    • raft internal health reports support HTTP auth. Thanks @almeida-pythian !
    • Added MySQLConnectionLifetime, thanks @maciej-dobrzanski !
    • Updates to docs, thanks @bbuchalter !
    • Fix non-zero exit code for redeploy-internal-db (#624), thanks @bbuchalter !
    • Include stdout/stderr in CommandRun's returned error (#635), thanks @bbuchalter !
    • Fix: manual recovery overrides globally-disabled recoveries (#665)
    • More visibility in logs in applying post-recovery steps (#666)

    ...and more!

    $ gsha256sum *
    2ca8f1d72ad1beae6048edc88efd3b0168593e42a2c6928e3cfa997d53062b79  orchestrator-3.0.13-1.x86_64.rpm
    73ad98f312151c8aab5c708b940139fb8aa848c5891d096cf46ea83ce397bbe9  orchestrator-3.0.13-linux-amd64.tar.gz
    9ceb60d669c2509f8d8d81f34591edcaafacf777606a995f742afcb1e40d3369  orchestrator-cli-3.0.13-1.x86_64.rpm
    7b6f03b0b724e91d128b831ade51c1936efb847f0e39b9e749e370d59d631d7a  orchestrator-cli_3.0.13_amd64.deb
    9192acf91ae0ba795918e20799fc6790af9275a3f25af73515e86cc5587dd826  orchestrator-client-3.0.13-1.x86_64.rpm
    f2c13d0f679e8ef02216e2df697841f1ae4267a697c724901c0e5c5e09afa036  orchestrator-client_3.0.13_amd64.deb
    26df8b44fa8bb6187f6b512a8fe4a63324b1faab006ec0c8959198a7b2e99427  orchestrator_3.0.13_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0.13-1.x86_64.rpm(6.53 MB)
    orchestrator-3.0.13-linux-amd64.tar.gz(6.59 MB)
    orchestrator-cli-3.0.13-1.x86_64.rpm(6.12 MB)
    orchestrator-client-3.0.13-1.x86_64.rpm(9.33 KB)
    orchestrator-client_3.0.13_amd64.deb(8.73 KB)
    orchestrator-cli_3.0.13_amd64.deb(6.19 MB)
    orchestrator_3.0.13_amd64.deb(6.59 MB)
  • v3.0.12(Aug 29, 2018)

    Changes since last release:

    • New analysis: UnreachableMasterWithLaggingReplicas handles situations where the master is in lockdown while replicas still think everything is fine; it responds by restarting the IO thread on the replicas, #572
    • Massive reduction and redaction of logs, #555
    • GTID-based promotion utilize immediate master promotion optimization, previously designed for Pseudo-GTID based failovers, #551
    • Docker builds now using current branch as opposed to pulling master, #550
    • Docker image now smaller, thanks @rhoml, #545
    • Fix to basic authentication in orchestrator-client, #581
    • Graceful master takeover: fail when attempting to promote ignored host, #570
    • Graceful master takeover: now works when master is read-only, #548
    • Added FailMasterPromotionIfSQLThreadNotUpToDate configuration, thanks @samiahlroos, #534
    • orchestrator-client uses incremental sleep to auto-recover brief leader loss, #527
    • Fixed timestamp on sqlite in non UTC boxes, #591, thanks @ndelnano
    • start-replica returns an error if replication does not kick in in a timely manner, #590, thanks @cezmunsta
    • Fixed take-master logic when server already is co-master, thanks @Pomyk, #576

    There's much more, and there are outstanding PRs which I am yet to review/test. Thank you to the many contributors!

    $ gsha256sum *
    2fbe927d9ee725d80ff30ba85dba2139c33bde5007f388ede8f4a05ee00799c1  orchestrator-3.0.12-1.x86_64.rpm
    918d6950616b2cf16db019c7af8c7dae223de7b7ff1e3e221cad34baf734b0bb  orchestrator-3.0.12-linux-amd64.tar.gz
    de8e639636d9d7fe17062780d4ad67ae43a9e0d2b913c62bac347aeffcc57dd4  orchestrator-cli-3.0.12-1.x86_64.rpm
    9252b0cd387ea090c01983cb59cb510ad9d725236f3ef6adabff9d3a81ae456a  orchestrator-cli_3.0.12_amd64.deb
    8aeb69c35225e30692ea3c203955314b0571e36539af64c92c76b0c59c745b0a  orchestrator-client-3.0.12-1.x86_64.rpm
    ec8558217103be7a2457aabc6663545757332192f1e71310b59bed3a784fea53  orchestrator-client_3.0.12_amd64.deb
    653ae7b6e2edfffd50cced387ce6d60247ff287fa7a8866d54bceb88d1bcdca6  orchestrator_3.0.12_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0.12-1.x86_64.rpm(6.52 MB)
    orchestrator-3.0.12-linux-amd64.tar.gz(6.58 MB)
    orchestrator-cli-3.0.12-1.x86_64.rpm(6.11 MB)
    orchestrator-client-3.0.12-1.x86_64.rpm(9.11 KB)
    orchestrator-client_3.0.12_amd64.deb(8.48 KB)
    orchestrator-cli_3.0.12_amd64.deb(6.18 MB)
    orchestrator_3.0.12_amd64.deb(6.58 MB)
  • v3.0.11(May 22, 2018)

    Changes since last release:


    • Graceful master takeover:
      • Supports multiple replicas to the master. Supports indicating the explicit replica onto which promotion takes place (orchestrator-client -c graceful-master-takeover -alias mycluster -d ...), #470
      • Graceful and forced failover: better analysis, including a GTID fix, #509. This fixes a couple of issues with GTID-based graceful takeovers.
    • Better visual indication for GTID (indicate if server is configured with GTID but uses MASTER_AUTO_POSITION=0), #493
    • KV support for ZooKeeper, #501
    • Better propagation of errors in raft operations
    • Contribution by @MaxFedotov: ACL support for Consul KV config, #510
    • Contribution by @dveeden: build and doc fixes, #486, #487
    • Contribution by @Slach: update Vagrant and fix problems in the orchestrator-agent API with the sqlite backend, #445
    $ sha256sum *
    58136686a8fbf14444b43f0e8f1a04efb76a9169f31b70590df09786283dd6e9  orchestrator-3.0.11-1.x86_64.rpm
    3d21c14bc68cec1dc9b8cfc0c282e7984d07eda860b4ef340a5cef6e5ca41c32  orchestrator-3.0.11-linux-amd64.tar.gz
    4e25a0f7a1327774f4d64b955eccb3bae4bd459103b3388f29807348ecda404c  orchestrator-centos6-3.0.11-1.x86_64.rpm
    9edcfaf827691b60283e30a70dfa7878d15e05ea815a53064b49062f31fcbb8e  orchestrator-cli-3.0.11-1.x86_64.rpm
    d19614df666ebe1d37a4b636b3e22aaa97926636c8247b2e5ba1458091ff756b  orchestrator-cli-centos6-3.0.11-1.x86_64.rpm
    c9a9eda30f1250b02dc1b79774c8909ebaf5dcdc4917ff9eccda72262d8935cd  orchestrator-cli_3.0.11_amd64.deb
    d2eba023b2f2ce0d0a2b0a5aa6c9e9e4684452d2d7d54868fb6bf864f782ffca  orchestrator-client-3.0.11-1.x86_64.rpm
    2a53b3a39e094898188d994d4a04a569fa1c6b2c3d2817eada30bce44d5aca40  orchestrator-client-centos6-3.0.11-1.x86_64.rpm
    e96190c8c13a0244cdb902bbebe89b6925e6b1830ca8c312d8040db43fcbceb9  orchestrator-client_3.0.11_amd64.deb
    c845c13c0d888aff18174441a5a49c4da8732af904a07add1bcbd42fb031c781  orchestrator_3.0.11_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0.11-1.x86_64.rpm(6.52 MB)
    orchestrator-3.0.11-linux-amd64.tar.gz(6.58 MB)
    orchestrator-centos6-3.0.11-1.x86_64.rpm(6.34 MB)
    orchestrator-cli-3.0.11-1.x86_64.rpm(6.12 MB)
    orchestrator-cli-centos6-3.0.11-1.x86_64.rpm(5.94 MB)
    orchestrator-client-3.0.11-1.x86_64.rpm(8.43 KB)
    orchestrator-client-centos6-3.0.11-1.x86_64.rpm(8.51 KB)
    orchestrator-client_3.0.11_amd64.deb(7.79 KB)
    orchestrator-cli_3.0.11_amd64.deb(6.18 MB)
    orchestrator_3.0.11_amd64.deb(6.58 MB)
  • v3.0.10(Apr 16, 2018)

    Changes since last release:


    • Graceful master takeover: support for PreGracefulTakeoverProcesses, PostGracefulTakeoverProcesses (thanks @igroene)
    • Failure analysis contains command hint, advertised to hooks, #442 (thanks @choadrocker)
    • Support for 'ack-all-recoveries' command (e.g. orchestrator-client -c ack-all-recoveries)
    • Smarter cluster alias evaluation (#462), closes #459
    • Docs: scripting cheatsheet.
    • Contribution by @sjmudd: limit more precisely when special binlog filtering is used (#465)
    $ sha256sum *
    3b63347808a1290aad93b09d1fbe87884c18110f32f635e5280d9c692216d75a  orchestrator-3.0.10-1.x86_64.rpm
    3456d393f23568622ab32b497cb4d2a1ae3f5f33c4cb754e5ba0e85d7b0965b0  orchestrator-3.0.10-linux-amd64.tar.gz
    21abbd0d3b0ec0bdfb1af8fc548c0b9776e123dd6666b228d2e5c46421240d1e  orchestrator-centos6-3.0.10-1.x86_64.rpm
    f90520dc9c514d2353e3099a1cc090499893ebbacc22e828194728b2439b5a55  orchestrator-cli-3.0.10-1.x86_64.rpm
    e088c104ae6884a3b6cc4e1b152dd8ce3d8452ac184cbd7e4a088a7643e769dd  orchestrator-cli-centos6-3.0.10-1.x86_64.rpm
    ea1481a602ba88ad0d1f62635fdfe192598366d3b07a6b0f0d7504377ce296e6  orchestrator-cli_3.0.10_amd64.deb
    250b44d29038cfc08d2ed686d3ca7e860fa4e40ea9d81358d0d2454c27215d55  orchestrator-client-3.0.10-1.x86_64.rpm
    36097b7a62726b104c084df1fbebd0dfa28471fea177239556809d286fa68ba0  orchestrator-client-centos6-3.0.10-1.x86_64.rpm
    33c1218c9105b2e01762bb038f20481416bc5eec413afe9c2a65751a3b4690e8  orchestrator-client_3.0.10_amd64.deb
    1198445ee375c61d716e03ca813940b600d43cf6be47f0fbe62576a8268b4067  orchestrator_3.0.10_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0.10-1.x86_64.rpm(6.44 MB)
    orchestrator-3.0.10-linux-amd64.tar.gz(6.50 MB)
    orchestrator-centos6-3.0.10-1.x86_64.rpm(6.27 MB)
    orchestrator-cli-3.0.10-1.x86_64.rpm(6.04 MB)
    orchestrator-cli-centos6-3.0.10-1.x86_64.rpm(5.86 MB)
    orchestrator-client-3.0.10-1.x86_64.rpm(8.34 KB)
    orchestrator-client-centos6-3.0.10-1.x86_64.rpm(8.42 KB)
    orchestrator-client_3.0.10_amd64.deb(7.71 KB)
    orchestrator-cli_3.0.10_amd64.deb(6.10 MB)
    orchestrator_3.0.10_amd64.deb(6.50 MB)
  • v3.0.9(Mar 12, 2018)

    Changes since last release:


    • Introducing HTTPAdvertise config, #430, useful for orchestrator/raft on kubernetes deployments.
    • Improved support for GTID via gtid_mode, #425, #427, more to come.
    • Solved concurrent circular relocation issue via #420
    • orchestrator/raft: followers report health status back to the leader, #431 . This can pave the way for delegation of tasks from leader to followers.
    • Reduced noise for UnreachableMaster, #436
    • Reducing backend DB write load: #432
    • Fixing a concurrency issue: #435
    $ sha256sum *
    f0e059661f64a5c56c752862de454a2ab00a9eb84df8124e84eaa608a45a1591  orchestrator-3.0.9-1.x86_64.rpm
    d91f3390de8c9847f1ad7198ba3b363f11d873a439dc1b2f14effb6e9cc736d7  orchestrator-3.0.9-linux-amd64.tar.gz
    a305af04487d62586276347466f3a57f1deb979c3b7bae0f8f3d897a74eb3277  orchestrator-centos6-3.0.9-1.x86_64.rpm
    417d7264322baa23cfc6f85e2b4ed8c533b70d79bb9fa4dc4b2504f36db51b88  orchestrator-cli-3.0.9-1.x86_64.rpm
    ab4c70f0208f9a9a0e0e842aee9016963caed0ced0e2c0e1d1ac8368c1343152  orchestrator-cli-centos6-3.0.9-1.x86_64.rpm
    0ea78bda98747e6693a34e58fb6dac5b05eebc590999ac163a84939c89889022  orchestrator-cli_3.0.9_amd64.deb
    510f47dbdbdc9af550864821773c328311506b1d45bb4e2176706ec9f7d7c678  orchestrator-client-3.0.9-1.x86_64.rpm
    02ec4fea3ffe0ca80e2f455377b62d5f00ed28746e9974419453220b15a92d04  orchestrator-client-centos6-3.0.9-1.x86_64.rpm
    df0f1e1e47a40b2ff93cc1021b713e8dbeae5715414e0f459bc0ee55927e27a7  orchestrator-client_3.0.9_amd64.deb
    bccf436cc808b2d219bab786992dd5055c5705c7e727dce30d09590e9a48981f  orchestrator_3.0.9_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0.9-1.x86_64.rpm(6.44 MB)
    orchestrator-3.0.9-linux-amd64.tar.gz(6.50 MB)
    orchestrator-centos6-3.0.9-1.x86_64.rpm(6.26 MB)
    orchestrator-cli-3.0.9-1.x86_64.rpm(6.04 MB)
    orchestrator-cli-centos6-3.0.9-1.x86_64.rpm(5.86 MB)
    orchestrator-client-3.0.9-1.x86_64.rpm(8.31 KB)
    orchestrator-client-centos6-3.0.9-1.x86_64.rpm(8.39 KB)
    orchestrator-client_3.0.9_amd64.deb(7.67 KB)
    orchestrator-cli_3.0.9_amd64.deb(6.10 MB)
    orchestrator_3.0.9_amd64.deb(6.50 MB)
  • v3.0.8(Feb 22, 2018)

    Changes since last release:

    This release consists of bugfixes found in 3.0.7:

    • web interface:
      • Showing error response when submitting empty acknowledgement comment
      • "... blocked due to a ... previous recovery" links to the exact blocking recovery, not to general cluster listing.
      • Fixed typo
    • Fixes raft persistence of cluster_injected_pseudo_gtid
    • Added missing last-pseudo-gtid command in orchestrator-client
    $ sha256sum *.*
    7ff3a142dcd42c1f8f272ae39248134ed26a291537e5a979571df33b0a555d7b  orchestrator-3.0.8-1.x86_64.rpm
    0932adcac3b330cbc533f10e5c4db1e0280efcc01ff9fb754999b47ba9e7c9ca  orchestrator-3.0.8-linux-amd64.tar.gz
    6c5df74832cd4256db4608dea889edd4e18684b6bdccdbf1960727a19486d269  orchestrator-centos6-3.0.8-1.x86_64.rpm
    7093f7665ff971f7c4f00dedb7a9937d1e249149531a79ef7c7eabaffc29eb65  orchestrator-cli-3.0.8-1.x86_64.rpm
    de0815390753488bb64d483d5396d8840134f2990b90aeb728d1c1a48173e0ef  orchestrator-cli-centos6-3.0.8-1.x86_64.rpm
    6a5fb95b53609d60d5b449d77076b8fccd55afe0426387618149c6a0862f17d7  orchestrator-cli_3.0.8_amd64.deb
    eaa4b6a0355da28f60c4c753158fae06694feff4721f759ef58d65f1f6ec863c  orchestrator-client-3.0.8-1.x86_64.rpm
    3b15d55b0886834ff3c1f4d1c57f548bf0e9653739924ffbca9d91943d87f346  orchestrator-client-centos6-3.0.8-1.x86_64.rpm
    4148d4a210c46dca0f6d48fc4e8b6f8dc88bddb05ae1019cbd123e1ec6d8f778  orchestrator-client_3.0.8_amd64.deb
    fe3ebd80088167ee999ba37927e00076170ef9dd489dd3a9dd50eab9e0cfdd6b  orchestrator_3.0.8_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0.8-1.x86_64.rpm(6.43 MB)
    orchestrator-3.0.8-linux-amd64.tar.gz(6.49 MB)
    orchestrator-centos6-3.0.8-1.x86_64.rpm(6.25 MB)
    orchestrator-cli-3.0.8-1.x86_64.rpm(6.03 MB)
    orchestrator-cli-centos6-3.0.8-1.x86_64.rpm(5.85 MB)
    orchestrator-client-3.0.8-1.x86_64.rpm(8.31 KB)
    orchestrator-client-centos6-3.0.8-1.x86_64.rpm(8.39 KB)
    orchestrator-client_3.0.8_amd64.deb(7.68 KB)
    orchestrator-cli_3.0.8_amd64.deb(6.09 MB)
    orchestrator_3.0.8_amd64.deb(6.49 MB)
  • v3.0.7(Feb 18, 2018)

    Changes since last release:


    • Raft: non-leader nodes reverse proxy web/API requests to leader, assuming they're part of the healthy raft group.
      • Introducing new health check, /api/raft-health, responding 200 OK when a node is healthy in the raft group (i.e. not isolated)
    • Support super_read_only: adds UseSuperReadOnly config variables (default false). When true, orchestrator will set super_read_only together with read_only, and in particular on master promotion.
    • Support SSL for master graceful takeover if SSL is enabled (#401) -- thank you @stankevich !
    • Custom JS and CSS (any customizations will not be supported) (#404) -- thank you @berlincount !
    • Better handling of failure analysis when all replicas are downtimed.
    • KV store: breaking down master identity (in addition to the existing entry). Four more key/value pairs are added per master:
      • hostname
      • port
      • IPv4, if available
      • IPv6, if available
    • Fixed lost-in-recovery downtime duration
    • Improved web UI for recovery auditing, plus bugfixes (#413 , #415)
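The KV breakdown described above splits the existing hostname:port master entry into separate hostname and port keys. The split itself is plain shell string handling; the master address used here is a stand-in:

```shell
# Stand-in master entry of the form hostname:port:
master="db-main.example.com:3306"
hostname="${master%:*}"   # strip the trailing :port
port="${master##*:}"      # keep only what follows the last colon
echo "hostname=${hostname} port=${port}"
```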
    $ sha256sum *.*
    c1cfc91d8cbdb8affabec1d9ba299562d5cd7d33b32958087af71d6d5eb96d9e  orchestrator-3.0.7-1.x86_64.rpm
    c61873063c373481f8025a38a8907485526462d88d28f31548e5e17dcd87e6f8  orchestrator-3.0.7-linux-amd64.tar.gz
    022bc1ade26c03620382d9b8ea4e1f17b4c28d91704f70825d89439ad934359e  orchestrator-centos6-3.0.7-1.x86_64.rpm
    f142324a86e6c0df0487905e154e58523f5be75cb11e5b2dc7a7ad144bfdffbe  orchestrator-cli-3.0.7-1.x86_64.rpm
    d58fb35aa84f5b9864f95980efbe5052846fd46f5112718fb0b887a16d643237  orchestrator-cli-centos6-3.0.7-1.x86_64.rpm
    8dbbf11bf5f669c011048f3234e43deda603c1b414ae04271d0cba124d40d365  orchestrator-cli_3.0.7_amd64.deb
    d59659a3c1f30e5f47462365b99513b765ef12d187593d39711ac273943da91c  orchestrator-client-3.0.7-1.x86_64.rpm
    7ec3b6771b1a490a00661472f657fd49e3c37fe8fb4d8f6f62c5cce554cc1dd3  orchestrator-client-centos6-3.0.7-1.x86_64.rpm
    ba6aa6f2f36afa3f6e9e350704c48667a709483b0426c257a87d1a2f5d0ec50f  orchestrator-client_3.0.7_amd64.deb
    7cebf944c500d18cc802bac7150defcebc6d6cdf4b0537be9dc55323a6ac0126  orchestrator_3.0.7_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0.7-1.x86_64.rpm(6.43 MB)
    orchestrator-3.0.7-linux-amd64.tar.gz(6.49 MB)
    orchestrator-centos6-3.0.7-1.x86_64.rpm(6.25 MB)
    orchestrator-cli-3.0.7-1.x86_64.rpm(6.03 MB)
    orchestrator-cli-centos6-3.0.7-1.x86_64.rpm(5.85 MB)
    orchestrator-client-3.0.7-1.x86_64.rpm(8.25 KB)
    orchestrator-client-centos6-3.0.7-1.x86_64.rpm(8.33 KB)
    orchestrator-client_3.0.7_amd64.deb(7.61 KB)
    orchestrator-cli_3.0.7_amd64.deb(6.09 MB)
    orchestrator_3.0.7_amd64.deb(6.49 MB)
  • v3.0.6(Jan 28, 2018)

    Changes since last release:


    Automated Pseudo-GTID injection

    With this release you may instruct orchestrator to inject Pseudo-GTID for you, via the new "AutoPseudoGTID": true configuration variable. You may then drop or ignore any other Pseudo-GTID-related configuration. See Pseudo-GTID configuration.

    orchestrator will auto-inject Pseudo-GTID where allowed to, and will auto-recognize the masters onto which it should inject it. Pseudo-GTID grants much of the power of GTID, without committing to GTID.

    $ sha256sum *
    12b220607943a7e8e8ab8950fc770eb8489cd2e6c6c6a7c93b56375a0b766ae0  orchestrator-3.0.6-1.x86_64.rpm
    bc8f74b155e662817cdb9782576a7bb5ee710ba07f372ffb9a498a921cca7f6b  orchestrator-3.0.6-linux-amd64.tar.gz
    4e138bc3e4cc49745a156c9cc44cd633acf19491e21d2644b6c7fb8052398ede  orchestrator-centos6-3.0.6-1.x86_64.rpm
    73bdf67475389f87393caa6b1496ec883303894b80102cedefed47dd0db1b1f3  orchestrator-cli-3.0.6-1.x86_64.rpm
    5f7ebfa4ba3ad7c656aceea831295a1a4aba03c566854284fcb3752d6242081a  orchestrator-cli-centos6-3.0.6-1.x86_64.rpm
    6fb89dad49618149deadb3f7c3e625e364953384095f3df8a438c449f7e0c95d  orchestrator-cli_3.0.6_amd64.deb
    9fa1515e52764323c56d8b24ae021b100f05162358a93ef618c433fdff7703ee  orchestrator-client-3.0.6-1.x86_64.rpm
    e20281a1e6fa38fff4cc3ad20260bd9286e180c6b0cc82dab83a85be38c9f0d8  orchestrator-client-centos6-3.0.6-1.x86_64.rpm
    5b090c03ee3d5140f8b6c2040a612a27d34586d5c33b4fec10b3f9ca1ef8c1bd  orchestrator-client_3.0.6_amd64.deb
    9d8bca2cdc45e4f54da98db97a86f9f02f2c11fe0c20e7e5b5a144cb0b612c30  orchestrator_3.0.6_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0.6-1.x86_64.rpm(6.41 MB)
    orchestrator-3.0.6-linux-amd64.tar.gz(6.47 MB)
    orchestrator-centos6-3.0.6-1.x86_64.rpm(6.23 MB)
    orchestrator-cli-3.0.6-1.x86_64.rpm(6.01 MB)
    orchestrator-cli-centos6-3.0.6-1.x86_64.rpm(5.84 MB)
    orchestrator-client-3.0.6-1.x86_64.rpm(8.20 KB)
    orchestrator-client-centos6-3.0.6-1.x86_64.rpm(8.28 KB)
    orchestrator-client_3.0.6_amd64.deb(7.56 KB)
    orchestrator-cli_3.0.6_amd64.deb(6.07 MB)
    orchestrator_3.0.6_amd64.deb(6.47 MB)
  • v3.0.5(Jan 17, 2018)

    Changes since last release:


    • Faster failure detection. Recall that orchestrator uses a holistic approach, where it needs agreement (consensus, if you will) from the replicas that a master or an intermediate master is dead. orchestrator requires a single, unanimous agreement; previously there was an implicit and unintentional delay that caused orchestrator to run two detection cycles.
      • As a result, orchestrator will report more UnreachableMaster incidents, seeing that a single network glitch can cause a server probe error. Recall that UnreachableMaster is not an actionable alert; this does not translate to false positives, but to more non-actionable failure detections.
      • This is known to reduce detection time by some 5 seconds, depending on polling interval.
    • Faster failure detection where packets are dropped. Dropped packets typically cause confusion as they are not rejected and TCP takes more time to recognize failure. orchestrator will keep a concurrent stopwatch such that a probe may be hanging or blocked, yet detection proceeds uninterrupted.
      • This is known to reduce detection time by 5 to 15 seconds on packet drop incidents.
    • More aggressive failure detection interval (once per second, considerate of long-running failure detection queries)
    • Faster master failure recoveries. When, during recovery, orchestrator identifies that it has picked the "ideal" replica (the one it really wants to promote, marked as candidate, aka prefer), it immediately promotes it as master and asynchronously attaches the rest of the replicas below the promoted server. This is known to reduce recovery time by 5-6 seconds.
    • Support for semi-sync enable/disable & visibility.
      • introduced orchestrator -c enable-semi-sync-master, orchestrator -c enable-semi-sync-replica, orchestrator -c disable-semi-sync-master, orchestrator -c disable-semi-sync-replica
    • Complete removal of prepared statements. orchestrator now uses client-side query argument interpolation. This saves round trips as well as makes orchestrator queries more easily identifiable.
    • A detection that is failed over is implicitly acknowledged. This means that while the next failover (on the same host) is potentially blocked (per the anti-flapping interval), detection itself is not.
    • Rewrite of failure detection & recovery documentation.
    • Tabulated ASCII topology output, making scripting & automation easier
    • Forcing, validating and documenting RaftDataDir
    • Fixed vagrant networking configuration - thanks @gtowey!
    • UI changes
    • Fixed orchestrator/raft bug where completed failovers were marked as "failed" on other raft member nodes.
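    The client-side query argument interpolation mentioned above can be sketched roughly as follows. This is a deliberately minimal illustration of the idea, not orchestrator's actual implementation (which handles more argument types and escaping rules):

    ```go
    package main

    import (
    	"fmt"
    	"strings"
    )

    // interpolate naively inlines arguments into a query containing '?'
    // placeholders, escaping single quotes in string values on the client
    // side so no prepared-statement round trip is needed.
    func interpolate(query string, args ...interface{}) string {
    	var b strings.Builder
    	argIdx := 0
    	for _, r := range query {
    		if r == '?' && argIdx < len(args) {
    			switch v := args[argIdx].(type) {
    			case string:
    				b.WriteString("'" + strings.ReplaceAll(v, "'", "''") + "'")
    			default:
    				fmt.Fprintf(&b, "%v", v)
    			}
    			argIdx++
    			continue
    		}
    		b.WriteRune(r)
    	}
    	return b.String()
    }

    func main() {
    	q := interpolate("select * from database_instance where hostname = ? and port = ?",
    		"db1.example.com", 3306)
    	fmt.Println(q)
    	// prints: select * from database_instance where hostname = 'db1.example.com' and port = 3306
    }
    ```

    A side benefit noted in the changelog: fully interpolated queries are easier to identify in the MySQL process list and slow log.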
    $ sha256sum *
    7bdd8a264832ae700586893b1814a1f44d387eb3a4cae99494cc03adbdf90a60  orchestrator-3.0.5-1.x86_64.rpm
    434e7843db13c89d29dd03c9be1d8971372643866c0cc00848d9d70996ba1eb9  orchestrator-3.0.5-linux-amd64.tar.gz
    f010ddd846731ce13886e921b8bddc35b7ef783bb53884b25883bf246f1ef153  orchestrator-cli-3.0.5-1.x86_64.rpm
    265275d96346f6d7b582612a284b409458a1b5bb08e152aceae0e439a2b7990c  orchestrator-cli_3.0.5_amd64.deb
    c7b34b19e12f018e7287a6bfb801157c127f1acf35b1add76eb5a00cff5241c0  orchestrator-client-3.0.5-1.x86_64.rpm
    50c292833bd150aceee549ec9e9052ff0fd05b71dc6382bd56cbbc75ce71b033  orchestrator-client_3.0.5_amd64.deb
    c2c5aebe9d4aa90a4decfcfbf747b4b231488c02a33df2264a2c49bfe12224a5  orchestrator_3.0.5_amd64.deb
    35c2204d3d274491855026fcea6321ba1f45ea245f69e235f7373b2b1b3d143d  orchestrator-3.0.5-1.x86_64.rpm
    434268a0db23bae82d0e26acdcdf8ad8e3b157a95221ab193e150ff8ec97e711  orchestrator-cli-3.0.5-1.x86_64.rpm
    b206c28c2df497dd7b573665d1ce06deeedebbec407fb17eef08309791603554  orchestrator-client-3.0.5-1.x86_64.rpm
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0.5-1.x86_64.rpm(6.40 MB)
    orchestrator-3.0.5-linux-amd64.tar.gz(6.46 MB)
    orchestrator-centos6-3.0.5-1.x86_64.rpm(6.23 MB)
    orchestrator-cli-3.0.5-1.x86_64.rpm(6.00 MB)
    orchestrator-cli-centos6-3.0.5-1.x86_64.rpm(5.83 MB)
    orchestrator-client-3.0.5-1.x86_64.rpm(8.20 KB)
    orchestrator-client-centos6-3.0.5-1.x86_64.rpm(8.28 KB)
    orchestrator-client_3.0.5_amd64.deb(7.56 KB)
    orchestrator-cli_3.0.5_amd64.deb(6.06 MB)
    orchestrator_3.0.5_amd64.deb(6.46 MB)
  • v3.0.3(Nov 16, 2017)

    Changes since last release:

    See detailed announcement: orchestrator 3.0.3: auto provisioning raft nodes, native Consul support and more


    • orchestrator/raft auto provisioning nodes: With 3.0.3 the failed node can go down for as long as it wants. Once it comes back, it attempts to join the raft cluster. A node keeps its own snapshots and its raft log outside the relational backend DB. If it has recent-enough data, it just needs to catch up with the raft replication log, which it acquires from one of the active nodes. If it went down for a substantial amount of time, or is completely wiped out, the node gets, and bootstraps from, a valid snapshot from one of the active nodes.
    • Key-Value support for master discovery: orchestrator will write clusters' master identities to KV stores (Consul supported at this time). It will update the KV store upon failover.
    • Web UI improvements: graceful promotion, master takeover, silent mode
    • More.
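    The KV-based master discovery above can be pictured with a minimal sketch. The `KVStore` interface, in-memory store, and `mysql/master/<cluster>` key layout here are illustrative stand-ins; orchestrator's real Consul integration and key prefix may differ:

    ```go
    package main

    import "fmt"

    // KVStore abstracts a key-value backend; orchestrator supports Consul
    // as such a backend (the Consul client API is not reproduced here).
    type KVStore interface {
    	Put(key, value string) error
    	Get(key string) (string, bool)
    }

    type memStore struct{ m map[string]string }

    func (s *memStore) Put(k, v string) error      { s.m[k] = v; return nil }
    func (s *memStore) Get(k string) (string, bool) { v, ok := s.m[k]; return v, ok }

    // publishMaster writes a cluster's master identity under a per-cluster
    // key, as done on discovery and again upon failover.
    func publishMaster(kv KVStore, clusterAlias, masterHostPort string) error {
    	return kv.Put("mysql/master/"+clusterAlias, masterHostPort)
    }

    func main() {
    	kv := &memStore{m: map[string]string{}}
    	publishMaster(kv, "mycluster", "db1.example.com:3306")
    	// a failover simply overwrites the key with the new master:
    	publishMaster(kv, "mycluster", "db2.example.com:3306")
    	v, _ := kv.Get("mysql/master/mycluster")
    	fmt.Println(v) // prints: db2.example.com:3306
    }
    ```

    Clients (proxies, service discovery tooling) can then resolve the current master from the KV store instead of querying orchestrator directly.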


    0748e3e499447b6168dd67de24e562388477bf5b9a51d3783d88f81831cbb259  orchestrator-3.0.3-linux-amd64.tar.gz
    162981cdad50f3f9f163ae1b4cee3e4974b12d62541affcdb506f827a38f3b62  orchestrator-cli_3.0.3_amd64.deb
    329f71888743a95a12ee7c086408bc8516ebd0a5d6583a12072c5e3e80c4ec56  orchestrator_3.0.3_amd64.deb
    4d34757572e784f8042aae9f8e11e8d437f0b9569afdcca4771635e464ebd851  orchestrator-cli-3.0.3-1.x86_64.rpm
    63307f4de051c774ef23074d81d34d8d0ab4ea0684f7c55d8e1166d9f40157ad  orchestrator-client_3.0.3_amd64.deb
    6c3e9ca89188c50132186aaec5a1c4afaf69719c0093439193caf1799bfe691b  orchestrator-client-3.0.3-1.x86_64.rpm
    c6489fd6e4f4382f26a435a4b2b90c1a67f5ab4955eb12815009bb913a20fa7d  orchestrator-3.0.3-1.x86_64.rpm
    5fe7bf4a920643c2bb3309f54945bd99b385b07b8e8955477741592567f02858  orchestrator-centos6-3.0.3-1.x86_64.rpm
    80d7222444d6b40b3e9147e059dd67d49bcc67225bcc137b1867669c06c4ed85  orchestrator-client-centos6-3.0.3-1.x86_64.rpm
    e0da1fb38341cba55e6f02bed5817d11d0ce7572b4f125256dec65e89f259794  orchestrator-cli-centos6-3.0.3-1.x86_64.rpm
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0.3-1.x86_64.rpm(6.41 MB)
    orchestrator-3.0.3-linux-amd64.tar.gz(6.47 MB)
    orchestrator-centos6-3.0.3-1.x86_64.rpm(6.24 MB)
    orchestrator-cli-3.0.3-1.x86_64.rpm(6.01 MB)
    orchestrator-cli-centos6-3.0.3-1.x86_64.rpm(5.84 MB)
    orchestrator-client-3.0.3-1.x86_64.rpm(7.93 KB)
    orchestrator-client-centos6-3.0.3-1.x86_64.rpm(8.01 KB)
    orchestrator-client_3.0.3_amd64.deb(7.26 KB)
    orchestrator-cli_3.0.3_amd64.deb(6.07 MB)
    orchestrator_3.0.3_amd64.deb(6.47 MB)
  • v3.0.2(Sep 12, 2017)

    GA release of orchestrator 3.0, please note v3.0-pre-release and v3.0.1.pre-release

    Re-listing major changes in 3.0 version, followed by specific changes since v3.0.1

    raft consensus

    • orchestrator/raft setup: consensus based leader election & quorum. orchestrator/raft setup achieves high availability without the need for a shared-backend high availability solution such as Galera/InnoDB Cluster.

      Backend DB nodes are independent and do not communicate with each other. All communication takes place between the orchestrator nodes, gossiping via raft.


    • SQLite support: in an orchestrator/raft setup each orchestrator node has a dedicated backend DB. This backend DB can be a standalone MySQL server or an embedded SQLite database. orchestrator now comes with SQLite embedded within its binary.

      SQLite is also supported on single-node setups, useful for local dev machines, and on testing or CI environments.


    • orchestrator-client: a utility script which removes the need for an orchestrator binary on remote boxes.
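    An orchestrator/raft setup with embedded SQLite is driven by configuration. A hedged sketch of the relevant settings (hostnames and paths here are illustrative; consult the configuration documentation for the authoritative key names and defaults):

    ```json
    {
      "RaftEnabled": true,
      "RaftDataDir": "/var/lib/orchestrator",
      "RaftBind": "node1.example.com",
      "RaftNodes": ["node1.example.com", "node2.example.com", "node3.example.com"],
      "BackendDB": "sqlite",
      "SQLite3DataFile": "/var/lib/orchestrator/orchestrator.db"
    }
    ```

    Each node lists all raft members and keeps its own SQLite file; no shared backend is required.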


    Changes since last release:


    • Packaging for orchestrator-client - the standalone client shell script
    • Support for --ignore-raft-setup command line client invocation, overriding raft config; power to the engineers.
    • Failure detection takes notice of downtimed replicas
    • Web/UI fixes, adjustments
    • Web/UI: Better visualization of downtimed problems
    • Fix and reprioritization of promoted servers on DeadIntermediateMaster
    • Misc. bug fixes
    • more...
    $ md5sum *
    ea5391f69e1ada71d3c58f7646f8a048  orchestrator-3.0.2-1.x86_64.rpm
    0030f8379fa870a2e761f178824c9241  orchestrator-3.0.2-linux-amd64.tar.gz
    6ce932f9356baaa2ce8a51f88404cb01  orchestrator-cli-3.0.2-1.x86_64.rpm
    51573a791f3642e75537c36add0b5b08  orchestrator-cli_3.0.2_amd64.deb
    657185cb863363a6291d30591d439c70  orchestrator-client-3.0.2-1.x86_64.rpm
    ced92c8648d28514dd6b99a7a3f87842  orchestrator-client_3.0.2_amd64.deb
    e16e960f3252409e50ff6b9443f0b642  orchestrator_3.0.2_amd64.deb
    $ sha1sum *
    157c6b4d9d3d6c658afa56ec7e6259934c4a48fd  orchestrator-3.0.2-1.x86_64.rpm
    10e66cd09097c2cf1be8e36a5b1f351113a3fe7c  orchestrator-3.0.2-linux-amd64.tar.gz
    40a0d83d764d61eeec364299754a8098258d5206  orchestrator-cli-3.0.2-1.x86_64.rpm
    e2c4e4773d425b021f0031deafcb98e66517b7d9  orchestrator-cli_3.0.2_amd64.deb
    7a92a93a9ea2922e055771b37d0c08b88a3c84f0  orchestrator-client-3.0.2-1.x86_64.rpm
    a58e4b6db3a140ef9e3f32b7eceaa40a6b332c66  orchestrator-client_3.0.2_amd64.deb
    378ca20b5eaf980e83283f36ac2c71ec8bd32bd2  orchestrator_3.0.2_amd64.deb
    $ sha256sum *
    c5ca99a39727d22edebeb0b0f2908a085497d031d6d7b2bf06d41c92282fcb7a  orchestrator-3.0.2-1.x86_64.rpm
    7b43ae94517ead11fbc01c55499d9e19d88adf2be5c40e9c346af202fef78451  orchestrator-3.0.2-linux-amd64.tar.gz
    f7708562abc757ee471f9258c2640cef552254db4b3496f40fb12f073ddbde9d  orchestrator-cli-3.0.2-1.x86_64.rpm
    1c2508d9376f3cdf16b4f8d46b114da1176ab096b3844bd91546a6d464623694  orchestrator-cli_3.0.2_amd64.deb
    fe0665dd1a5d6093fb26c1b7f33b83a9ed42612639d2daf34dd0d40a9337f3c2  orchestrator-client-3.0.2-1.x86_64.rpm
    6991b293435e2482c6e915b725c1a48e6a6128c6c5a08c8b7145d2874be9ee59  orchestrator-client_3.0.2_amd64.deb
    d365f2bf614d659450f3dbad0920175914b02d62a736e465793b9dfdb1e98561  orchestrator_3.0.2_amd64.deb
    $ md5sum *RHEL6*
    ffa077686a54b0e826d2f47253db3349  orchestrator-RHEL6-3.0.2-1.x86_64.rpm
    68d222e0531531ae078fe6616d64dc42  orchestrator-RHEL6-cli-3.0.2-1.x86_64.rpm
    65e03a04c1691cd21d9a98600bb2934d  orchestrator-RHEL6-client-3.0.2-1.x86_64.rpm
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0.2-1.x86_64.rpm(6.00 MB)
    orchestrator-3.0.2-linux-amd64.tar.gz(6.06 MB)
    orchestrator-cli-3.0.2-1.x86_64.rpm(5.60 MB)
    orchestrator-client-3.0.2-1.x86_64.rpm(7.83 KB)
    orchestrator-client_3.0.2_amd64.deb(7.20 KB)
    orchestrator-cli_3.0.2_amd64.deb(5.66 MB)
    orchestrator-RHEL6-3.0.2-1.x86_64.rpm(6.16 MB)
    orchestrator-RHEL6-cli-3.0.2-1.x86_64.rpm(5.76 MB)
    orchestrator-RHEL6-client-3.0.2-1.x86_64.rpm(7.90 KB)
    orchestrator_3.0.2_amd64.deb(6.06 MB)
  • v3.0.1.pre-release(Aug 15, 2017)

    Pre release 3.0.1:

    Changes since last release:

    • orchestrator/raft: configuration supports fqdn (hostnames):
    • Graceful period before running detection/recovery:
    • Support for prefer_not promotion rule:
    • Updated documentation
    • Fixed pem tests: thanks, @maurosr!
    • Fixed handling of ClusterNameToAlias map
    $ md5sum *
    a62bdd734282ad20d2da7313eb416d7a  orchestrator-3.0.1-1.x86_64.rpm
    f9aeb7e74a86400af80958336052a759  orchestrator-3.0.1-linux-amd64.tar.gz
    1a20484abec704480874ca1e342537a8  orchestrator-cli-3.0.1-1.x86_64.rpm
    22ec4b9532e1415608c4e4f6f7cacdd3  orchestrator-cli_3.0.1_amd64.deb
    3e974e6ffaa70076e52f13f1d3490624  orchestrator_3.0.1_amd64.deb
    $ sha1sum *
    4b2c4983c4b7238904fdc45b71dac5dc7806c54a  orchestrator-3.0.1-1.x86_64.rpm
    94e4588cfe016c9d2248679477f75ac796c8f923  orchestrator-3.0.1-linux-amd64.tar.gz
    3456a283b7dc49c2a22a402ce77b533dc325cd8a  orchestrator-cli-3.0.1-1.x86_64.rpm
    87a05f4ab88bfcb47abc503926bf3219bfe4020f  orchestrator-cli_3.0.1_amd64.deb
    6c733d4223bf7b273eb1d98fb2b4fb3f1fc46692  orchestrator_3.0.1_amd64.deb
    $ sha256sum *
    28e64b718d08eac5e299d99d14c922e1b305b242096da8c7672f5dfc2ca0c8ee  orchestrator-3.0.1-1.x86_64.rpm
    a1975798664f008a7a869676e9a00d6c626bb127c915cf87b87395aba3c186f0  orchestrator-3.0.1-linux-amd64.tar.gz
    3d7fcc491a0e105954b76fab7b9c47cac5521f370d6252dfaac6a84ccba83a2e  orchestrator-cli-3.0.1-1.x86_64.rpm
    008ca64abec27620a33c5bd08889796744d40dc8882bcf3d1f5d5b07680c7108  orchestrator-cli_3.0.1_amd64.deb
    7f9784a3462b8c4c5ed2bd6185dfae4c28117000d423c8ea103751f274758238  orchestrator_3.0.1_amd64.deb
    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0.1-1.x86_64.rpm(6.00 MB)
    orchestrator-3.0.1-linux-amd64.tar.gz(6.06 MB)
    orchestrator-cli-3.0.1-1.x86_64.rpm(5.60 MB)
    orchestrator-cli_3.0.1_amd64.deb(5.66 MB)
    orchestrator_3.0.1_amd64.deb(6.06 MB)
  • v3.0.pre-release(Aug 3, 2017)

    Pre release 3.0 offers:

    • orchestrator/raft setup: consensus based leader election & quorum. orchestrator/raft setup achieves high availability without the need for a shared-backend high availability solution such as Galera/InnoDB Cluster.

      Backend DB nodes are independent and do not communicate with each other. All communication takes place between the orchestrator nodes, gossiping via raft.

    • SQLite support: in an orchestrator/raft setup each orchestrator node has a dedicated backend DB. This backend DB can be a standalone MySQL server or an embedded SQLite database. orchestrator now comes with SQLite embedded within its binary.

      SQLite is also supported on single-node setups, useful for local dev machines, and on testing or CI environments.

    • orchestrator-client: a utility script which removes the need for an orchestrator binary on remote boxes.

    • Various other changes; special thanks to @sjmudd, @maurosr and @dveeden for continuous contributions.

    See the announcement.

    md5 checksums:

    c357aef3e6794b7c473042b10b0b49d2  orchestrator-3.0-1.x86_64.rpm
    3d79877ec33d220975c710b148996c49  orchestrator-3.0-linux-amd64.tar.gz
    7ca1d6120162bdda3b576bc6c3eac203  orchestrator-cli-3.0-1.x86_64.rpm
    6620eba6a5c73345928c3c908e1d15c0  orchestrator-cli_3.0_amd64.deb
    9d80bb78e86a01eb54a2872508e9df8b  orchestrator_3.0_amd64.deb

    sha1 checksums:

    67f29ac832f48ea983a3f0d66640ed522ff3f5fb  orchestrator-3.0-1.x86_64.rpm
    5b283e763436326493c655f0a24c3826bd2ea9be  orchestrator-3.0-linux-amd64.tar.gz
    7ec395b5a1c9f0fdebaa95b7133e569590db1a96  orchestrator-cli-3.0-1.x86_64.rpm
    4da53ffebcf0edb5aa84cd5ea1a8211cd7af8b26  orchestrator-cli_3.0_amd64.deb
    939bdebfa5dc15979542e0969710438f0c306a46  orchestrator_3.0_amd64.deb

    sha256 checksums:

    3229aaf90d0ba8cd4408afb619aa9f926060ec84079ea9607faf16b3941527be  orchestrator-3.0-1.x86_64.rpm
    06d87b5de0c2a5145c3a5a74113a61bb1804e971ef68c4dc3fd79f757d595d03  orchestrator-3.0-linux-amd64.tar.gz
    91aec077f417bdec90a172e76e0d30710c5768351d4ca237641c266f4988f011  orchestrator-cli-3.0-1.x86_64.rpm
    90b21f5ed181c54c9e3f2863afe9516888d7b9b3083d547dd77e7b273be78323  orchestrator-cli_3.0_amd64.deb
    b46c987fee44e05a8bce21c918269357ed1ce2d4bc5b7eac571e41507960909d  orchestrator_3.0_amd64.deb


    Source code(tar.gz)
    Source code(zip)
    orchestrator-3.0-1.x86_64.rpm(6.00 MB)
    orchestrator-3.0-linux-amd64.tar.gz(6.05 MB)
    orchestrator-cli-3.0-1.x86_64.rpm(5.60 MB)
    orchestrator-cli_3.0_amd64.deb(5.65 MB)
    orchestrator_3.0_amd64.deb(6.05 MB)
  • v2.1.5(Jun 28, 2017)

    Changes since last release:


    • ReplicationLagSeconds value was inconsistently broken; fixed by #219
    • Fixed XSS flaw with search box, thanks @Oneiroi
    • Capping of idle backend connections by @sjmudd
    • /web/status better visualization via #213 and #216
    • PowerAuthGroups config via #215: authentication based on unix groups, by @sjmudd
    • Other small changes

    md5 checksums:

    ea75098db392fd1aa58348503cffd9b3  orchestrator-2.1.5-1.x86_64.rpm
    088aed4caac9430b00455ac8d0c23acf  orchestrator-2.1.5-linux-amd64.tar.gz
    6725899bd5ac21b603ee0ddb4126b709  orchestrator-cli-2.1.5-1.x86_64.rpm
    f58f27b24851cad956fba70013ca4cf4  orchestrator-cli_2.1.5_amd64.deb
    eef8f39d43c3ad4c874563e4139c5a34  orchestrator_2.1.5_amd64.deb

    sha1 checksums:

    59464521a1b454ed499ff32088fc0bf39432de92  orchestrator-2.1.5-1.x86_64.rpm
    84f101f16fbb8427909cdb8c48b7544c37652285  orchestrator-2.1.5-linux-amd64.tar.gz
    b1e3328d69738cab52333d34c9f54f295c871905  orchestrator-cli-2.1.5-1.x86_64.rpm
    eba21da0445991d063e8c97f962a054c30d3b1e8  orchestrator-cli_2.1.5_amd64.deb
    919f02d413f7c7fbc2bef1c97d492ce75dc2e7b9  orchestrator_2.1.5_amd64.deb


    Source code(tar.gz)
    Source code(zip)
    orchestrator-2.1.5-1.x86_64.rpm(4.43 MB)
    orchestrator-2.1.5-linux-amd64.tar.gz(4.48 MB)
    orchestrator-cli-2.1.5-1.x86_64.rpm(4.04 MB)
    orchestrator-cli_2.1.5_amd64.deb(4.09 MB)
    orchestrator_2.1.5_amd64.deb(4.48 MB)
  • v2.1.4(Jun 13, 2017)

    Changes since last release:


    • Important bugfix in recovery flow; some failure detections could go unhandled depending on concurrency behavior, see
    • Added -c force-master-failover command, which just kicks in a failover for a cluster; there is no designated server to promote, orchestrator just makes believe the master is dead and does whatever it does to fix the situation.
      • likewise /api/force-master-failover/:hostname/:port
    • The Great Configuration Variables Exodus (deprecation) -- deprecating many variables; more work to come
    • lost-in-recovery: masters who have been lost in recovery are:
      • downtimed forever
      • or until found to be healthy in a replication environment
    • Concurrent probing of topology servers: reduces per-server discovery time by some 70% on high latency networks
    • More visibility in failure detection process:
      • Better logging
      • Introduced "is actionable" flag for detection
      • Allows multiple detections in backend database for same instance/incident: up to one detection entry for "actionable", up to one detection entry for "not actionable". This gives some more (though not full) visibility into escalation process.
    • added /api/leader-check: returns 200 when node is the leader, 404 if not.
    • more... thanks @sjmudd for continued contributions
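    The /api/leader-check semantics (200 for the leader, 404 otherwise) can be emulated in a few lines of Go. This is a sketch of the idea for use behind a load balancer health check, not orchestrator's actual handler:

    ```go
    package main

    import (
    	"fmt"
    	"net/http"
    	"net/http/httptest"
    )

    // leaderCheckHandler mimics /api/leader-check: HTTP 200 when this node
    // is the leader, 404 when it is not, so a load balancer can route
    // requests to the leader only.
    func leaderCheckHandler(isLeader func() bool) http.HandlerFunc {
    	return func(w http.ResponseWriter, r *http.Request) {
    		if isLeader() {
    			w.WriteHeader(http.StatusOK)
    			return
    		}
    		w.WriteHeader(http.StatusNotFound)
    	}
    }

    func main() {
    	leader := true
    	srv := httptest.NewServer(leaderCheckHandler(func() bool { return leader }))
    	defer srv.Close()

    	resp, err := http.Get(srv.URL)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(resp.StatusCode) // prints: 200

    	leader = false
    	resp, err = http.Get(srv.URL)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(resp.StatusCode) // prints: 404
    }
    ```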


    Source code(tar.gz)
    Source code(zip)
    orchestrator-2.1.4-1.x86_64.rpm(4.42 MB)
    orchestrator-2.1.4-linux-amd64.tar.gz(4.47 MB)
    orchestrator-cli-2.1.4-1.x86_64.rpm(4.03 MB)
    orchestrator-cli_2.1.4_amd64.deb(4.08 MB)
    orchestrator_2.1.4_amd64.deb(4.47 MB)
  • v2.1.2(Apr 20, 2017)

    Changes since last release:


    • structured, audited recovery steps:
      • each recovery has audited steps, identified by UID
      • UID presented in post failover hooks
      • underlying table: topology_recovery_steps
      • presented in web interface
    • recovery speed improvements:
      • faster, concurrent execution of postponed functions
      • yet more optimization in preferring DC-local operations
      • incremental sleep following START SLAVE reduces wasted sleep time
      • more postponed functions for non-urgent operations
      • recognizing hosts with discovery latencies, pushing them to postponed recovery
    • graceful-master-takeover attempts setting replication credentials on demoted master (doable if master_info_repository=TABLE)
    • FailMasterPromotionIfSQLThreadNotUpToDate config (bool, default false): when true, and the master fails while all replicas are lagging, orchestrator heals the topology but does not proceed with post-failover hooks, and in fact considers the failover as failed.
    • Many metrics added by @sjmudd
    • Fixes and improvements to GTID master recoveries
    • binlog_row_image collected and made visible
    • TravisCI now tests all PRs
    • more...
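    The "incremental sleep following START SLAVE" idea above generalizes to any poll-until-true loop: grow the sleep each round so fast events are caught quickly without busy-waiting on slow ones. A sketch, not orchestrator's exact code:

    ```go
    package main

    import (
    	"fmt"
    	"time"
    )

    // waitForCondition polls with incrementally growing sleeps, returning
    // true as soon as the condition holds or false once maxWait is exceeded.
    func waitForCondition(check func() bool, maxWait time.Duration) bool {
    	waited := time.Duration(0)
    	for i := 1; waited < maxWait; i++ {
    		if check() {
    			return true
    		}
    		sleep := time.Duration(i) * 10 * time.Millisecond // grows each round
    		time.Sleep(sleep)
    		waited += sleep
    	}
    	return check()
    }

    func main() {
    	calls := 0
    	ok := waitForCondition(func() bool { calls++; return calls >= 3 }, time.Second)
    	fmt.Println(ok, calls) // prints: true 3
    }
    ```

    Compared to a fixed polling interval, the early iterations waste almost no time when replication catches up immediately.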


    Source code(tar.gz)
    Source code(zip)
    orchestrator-2.1.2-1.x86_64.rpm(4.42 MB)
    orchestrator-2.1.2-linux-amd64.tar.gz(4.46 MB)
    orchestrator-cli-2.1.2-1.x86_64.rpm(4.03 MB)
    orchestrator-cli_2.1.2_amd64.deb(4.08 MB)
    orchestrator_2.1.2_amd64.deb(4.47 MB)
  • v2.1.0(Mar 21, 2017)

    Changes since last release:

    • Massive SQL related changes, supporting SQLite as backend (SQLite support is tested, but SQLite as backend is not included in this release).
      • Backend schema: changes to index names. On first run of this release you will experience a one-time rebuild of almost all backend tables
      • Queries: many queries simplified. Multi-table DMLs broken into parts. GROUP/GROUP_CONCAT simplified; some logic moved to app.
      • Otherwise operations are unchanged; many tests added
    • Supporting MultiMatchBelowIndependently. This seeming step-back from the multi-server bucket optimization is made possible by binlog indexing, released a year ago, which alleviates the pain of re-scanning master binlogs. Right now, for backwards compatibility, this defaults to false; but potentially in the future it will change to true. Very large installations, where the number of replicas per server is in the many dozens or hundreds, will still benefit from the bucketing approach (MultiMatchBelowIndependently: false)
    • Fixed a bug in identifying Percona Server GTID based replication
    • Support DiscoveryIgnoreReplicaHostnameFilters (thanks @sjmudd). With the growing number of apps that pretend to be replicas (gh-ost, tungsten, others...) orchestrator falls into the trap of trying to discover a "server" where that "server" is really some app. This is an initial, simple approach to filter out such apps based on hostnames. We may iterate on that in the future.
    • More failure analysis scenarios and detection
    • More...
    • Also thanks @samveen (working on build script init script)
    • Thanks to the many collaborators who report issues and discuss potential fixes


    Source code(tar.gz)
    Source code(zip)
    orchestrator-2.1.0-1.x86_64.rpm(4.40 MB)
    orchestrator-2.1.0-linux-amd64.tar.gz(4.45 MB)
    orchestrator-cli-2.1.0-1.x86_64.rpm(4.01 MB)
    orchestrator-cli_2.1.0_amd64.deb(4.06 MB)
    orchestrator_2.1.0_amd64.deb(4.45 MB)
  • v2.0.3(Mar 7, 2017)

    Changes since last release:

    Notable changes:

    • Better support for 5.6->5.7 replication with Pseudo-GTID ( contribution)
    • Docker image improvements (@calind contribution)
    • More detailed timing metrics for discoveries, backend latencies etc. ( contribution)
    • Logging improvements ( contribution)
    • supporting SkipOrchestratorDatabaseUpdate and PanicIfDifferentDatabaseDeploy
    • relaxation of health expiry writes ( contribution)
    • support for PseudoGTIDPreferIndependentMultiMatch

    and many more. Special thanks to @sjmudd!


    Source code(tar.gz)
    Source code(zip)
    orchestrator-2.0.3-1.x86_64.rpm(4.39 MB)
    orchestrator-2.0.3-linux-amd64.tar.gz(4.44 MB)
    orchestrator-cli-2.0.3-1.x86_64.rpm(4.00 MB)
    orchestrator-cli_2.0.3_amd64.deb(4.05 MB)
    orchestrator_2.0.3_amd64.deb(4.44 MB)