A distributed key-value storage system developed by Alibaba Group


Product Overview

Tair is a fast-access in-memory (MDB) and persistent (LDB) storage service. Built on a high-performance, highly available distributed cluster architecture, Tair meets businesses' demanding requirements for read/write performance and scalable capacity.


System architecture

A Tair cluster has three essential modules: the ConfigServer, the DataServer, and the client.

Generally, a Tair cluster includes two ConfigServers and multiple DataServers. The two ConfigServers act as primary and standby. Heartbeat checks between the DataServers and the ConfigServer determine which DataServers in the cluster are live and available, and from this the ConfigServer builds the data distribution table (also called the comparison table) that describes how data is laid out across the cluster. DataServers store, copy, and migrate data as directed by the ConfigServer. When a client starts, it obtains the data distribution table from the ConfigServer and then interacts directly with the corresponding DataServers to carry out the user's requests.

In terms of architecture, the ConfigServer plays a role similar to the central node in traditional application systems, and the whole cluster service depends on it. In practice, however, Tair's ConfigServers are extremely lightweight. When the working ConfigServer goes down, the other takes over automatically within seconds. Even if both ConfigServers go down simultaneously, Tair continues to run normally as long as the set of DataServers does not change. Users only need to point their applications at the ConfigServers and never need to know the details of the internal nodes.
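
Conceptually, a client routes each request by hashing the key into a bucket and looking that bucket up in the distribution table it fetched from the ConfigServer. The sketch below is illustrative only; Tair's actual hash function, bucket count, and table format differ, and all names here are hypothetical:

```python
# Illustrative sketch of client-side routing via a data distribution table.
# The table maps bucket -> DataServer address; names are hypothetical,
# not Tair's actual wire format.
import hashlib

BUCKET_COUNT = 1024  # assumed fixed bucket count, configured cluster-wide

def bucket_of(key: str) -> int:
    """Hash a key into a bucket id."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % BUCKET_COUNT

def route(key: str, distribution_table: dict) -> str:
    """Return the DataServer responsible for this key."""
    return distribution_table[bucket_of(key)]

# A toy table with every bucket assigned round-robin to three DataServers.
servers = ["10.0.0.1:5191", "10.0.0.2:5191", "10.0.0.3:5191"]
table = {b: servers[b % len(servers)] for b in range(BUCKET_COUNT)}

assert route("user:42", table) in servers
```

Because every client holds the same table, any client resolves a given key to the same DataServer without consulting the ConfigServer on each request.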


ConfigServer functions

  • Two ConfigServers act as primary and standby.
  • Live and available DataServer node information for the cluster is determined using a heartbeat check between the ConfigServer and DataServer.
  • Based on the DataServer node information, the ConfigServer constructs a data distribution table which shows how data is distributed in the cluster.
  • The ConfigServer provides a data distribution table query service.
  • The ConfigServer schedules data migration and copying between DataServers.
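
To make the ConfigServer's scheduling role concrete, here is a hypothetical sketch of repairing the bucket-to-server table when a heartbeat check finds a DataServer dead. Tair's real rebalancing algorithm is more sophisticated (it also balances load and minimizes data migration); this only shows the idea:

```python
# Hypothetical sketch: reassign buckets owned by dead DataServers to the
# remaining live ones. Real Tair also triggers data migration/copying for
# every bucket whose owner changes.
def rebalance(table: dict, live_servers: list) -> dict:
    new_table = {}
    i = 0
    for bucket, server in table.items():
        if server in live_servers:
            new_table[bucket] = server  # owner still alive: keep it
        else:
            # dead owner: hand the bucket to a live server
            new_table[bucket] = live_servers[i % len(live_servers)]
            i += 1
    return new_table

old = {0: "ds1", 1: "ds2", 2: "ds3", 3: "ds1"}
new = rebalance(old, ["ds1", "ds3"])  # ds2 failed its heartbeat
assert new[0] == "ds1" and new[2] == "ds3"  # surviving owners unchanged
assert new[1] in ("ds1", "ds3")             # ds2's bucket reassigned
```

The updated table is then pushed to (or pulled by) the clients, which is why a ConfigServer outage is tolerable as long as the DataServer set does not change.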


DataServer functions

  • DataServers provide storage engines.
  • DataServers receive operations initiated by clients, such as put/get/remove.
  • DataServers migrate and copy data.
  • DataServers provide access statistics.


Client functions

  • Clients provide APIs for accessing the Tair cluster.
  • Clients update and cache data distribution tables.
  • Clients provide LocalCache to prevent overheated data access from affecting the Tair cluster service.
  • Clients control traffic.
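
The LocalCache behaviour can be pictured as a small read-through TTL cache in front of the cluster. This is a conceptual sketch with hypothetical names; Tair's client-side implementation and eviction policy differ:

```python
# Tiny TTL cache sketch: absorbs repeated reads of hot keys so they
# don't all hit the Tair cluster.
import time

class LocalCache:
    def __init__(self, ttl_seconds=1.0):
        self.ttl = ttl_seconds
        self.entries = {}  # key -> (value, expire_at)

    def get(self, key, fetch_from_cluster):
        now = time.monotonic()
        hit = self.entries.get(key)
        if hit is not None and hit[1] > now:
            return hit[0]                    # hot key served locally
        value = fetch_from_cluster(key)      # miss: go to the cluster
        self.entries[key] = (value, now + self.ttl)
        return value

calls = []
def fetch(key):
    calls.append(key)        # records each trip to the "cluster"
    return "value-of-" + key

cache = LocalCache(ttl_seconds=60)
cache.get("hot", fetch)
cache.get("hot", fetch)      # second read is served from LocalCache
assert calls == ["hot"]      # the cluster was contacted only once
```

A short TTL keeps the window of staleness small while still shielding the cluster from overheated keys.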

Product features

Distributed architecture

  • A distributed cluster architecture is used to provide automatic disaster recovery and failover.
  • Load balancing is supported and data is distributed evenly.
  • System storage space and throughput performance can be scaled elastically, resolving data volume and QPS performance limitations.

Fully-featured and user-friendly access

  • Rich data structures: single-level key-value structures and secondary-index structures are supported.
  • Multiple usage modes are supported, including counter mode.
  • Data expiration and version control are supported.


Database caching

As business volume grows, the number of concurrent requests to the database increases and its load becomes heavier. When the database is overloaded, response times lengthen and, in extreme cases, the service may even be interrupted. To address this, Tair MDB can be deployed alongside database products to provide high-throughput, low-latency storage. MDB responds quickly, generally completing requests within milliseconds, and it sustains a higher QPS and more concurrent requests than a database. By identifying hotspot data in the business and storing it in MDB, users can significantly lessen the load on the database. This reduces database costs and improves system availability.
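
The usual deployment pattern here is cache-aside: read MDB first, fall back to the database on a miss, and populate MDB with an expiry. A minimal sketch, using dictionaries to stand in for MDB and the database (the actual Tair client calls are not shown):

```python
# Cache-aside sketch: `mdb` stands in for Tair MDB, `database` for the
# backing database. In real code the mdb operations would be Tair calls.
import time

database = {"user:1": "Alice"}  # slow, authoritative store
mdb = {}                        # fast cache: key -> (value, expire_at)
CACHE_TTL = 300                 # seconds

def read(key):
    entry = mdb.get(key)
    if entry and entry[1] > time.monotonic():
        return entry[0]                    # cache hit: milliseconds
    value = database.get(key)              # cache miss: query the DB
    if value is not None:
        mdb[key] = (value, time.monotonic() + CACHE_TTL)
    return value

def write(key, value):
    database[key] = value
    mdb.pop(key, None)  # invalidate so the next read refills the cache

assert read("user:1") == "Alice"  # first read misses and loads from the DB
assert read("user:1") == "Alice"  # second read is served from MDB
```

Invalidating on write (rather than updating the cache in place) keeps the pattern simple and avoids serving a stale value after a failed cache update.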

Temporary data storage

Applications such as social websites, e-commerce websites, games, and advertising need to maintain large volumes of temporary data. Storing temporary data in MDB reduces memory-management overhead and application load. In a distributed environment, MDB can serve as unified global storage, which prevents data loss caused by a single point of failure and avoids synchronization issues between multiple application instances. A common example is using MDB as a session manager: if the website is deployed across many machines and traffic is heavy, different requests from the same user may be sent to different web servers. MDB can then act as a global store for session data, user tokens, permission information, and similar data.
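
As a concrete illustration of the session-manager use case, the sketch below shares one session store between two request handlers that stand in for different web servers behind a load balancer. The dictionary stands in for MDB, and all names are hypothetical:

```python
# Two handler functions stand in for two web servers; both read the same
# global session store (which would be MDB in production).
import uuid

sessions = {}  # session_id -> session data; stands in for MDB

def login(user):
    """Runs on web server A: create a session on login."""
    session_id = str(uuid.uuid4())
    sessions[session_id] = {"user": user, "permissions": ["read"]}
    return session_id

def handle_request(session_id):
    """Runs on web server B: the same session is visible here too."""
    session = sessions.get(session_id)
    if session is None:
        return "please log in"
    return "hello " + session["user"]

sid = login("alice")                         # request routed to server A
assert handle_request(sid) == "hello alice"  # next request hits server B
```

Because the session lives in shared storage rather than in any one server's memory, the load balancer is free to route each request anywhere.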

Data Storage

  • The recommendation and advertising businesses generally need to compute huge amounts of data offline. LDB provides persistent storage with excellent performance and can serve online traffic, so users can regularly import offline-computed data into LDB for online serving.
  • After computing, list businesses can store the final lists in LDB to be directly displayed to front-end apps. In this way, LDB meets storage and high-speed access needs.


Security applications have many blacklist/whitelist scenarios. These are characterized by low hit rates, large access volumes, and a high business cost when data is lost. Because LDB supports data persistence and high access volumes, it is widely used in these scenarios.

Distributed locks

Distributed locks are usually used to prevent the data inconsistency and logical chaos caused by concurrent access from multiple threads or processes. Distributed locks can be implemented using Tair's version feature or its computing function. Thanks to LDB's persistence, locks are not lost and can be released normally even if the service goes down.
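
Tair's version feature means each key carries a version number, and a put that supplies a stale version is rejected, which gives compare-and-swap semantics. The sketch below simulates that idea in-process; a real implementation would use Tair put/get/remove calls and set an expiry on the lock key so a crashed holder cannot block others forever:

```python
# Versioned-store sketch mimicking Tair's version check: put() succeeds
# only if the caller's version matches the stored one. A lock is then
# "create the lock key only if it does not exist yet" (version 0).
class VersionedStore:
    def __init__(self):
        self.data = {}  # key -> (value, version)

    def put(self, key, value, expected_version):
        _, current = self.data.get(key, (None, 0))
        if expected_version != current:
            return False                     # stale version: CAS failed
        self.data[key] = (value, current + 1)
        return True

    def remove(self, key):
        self.data.pop(key, None)

store = VersionedStore()

def try_lock(owner):
    # version 0 means "key must not exist yet": only one caller can win
    return store.put("lock:job", owner, expected_version=0)

def unlock():
    store.remove("lock:job")

assert try_lock("worker-A") is True   # A acquires the lock
assert try_lock("worker-B") is False  # B fails while A holds it
unlock()
assert try_lock("worker-B") is True   # after release, B can acquire
```

The version check is what makes the acquire atomic: two workers racing to create the same key cannot both see version 0 succeed.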


The source code is available under the GPL version 2. We are actively looking for contributors, so if you have any ideas, bug reports, or patches you would like to contribute, please do not hesitate to do so.

  • About tair3.2.4


    Hi team! We are currently running tair2.3.x in production. If we upgrade to tair3.2.4, is the underlying protocol compatible with the old version? I tried connecting a tair2.3.x tairclient to a tair3.2.4 configserver and it works so far. Is there anything we should watch out for? I could not find this covered in the documentation. Also, how big is the difference between the two versions? Thanks, looking forward to a reply.

    opened by rl5c 2
  • A program written with tair_client_api reports an error when it exits!


    A program written with tair_client_api reports an error on exit. The code is as follows:

    #include <iostream>
    #include <boost/timer.hpp>
    #include <boost/progress.hpp>
    #include <boost/shared_ptr.hpp>
    #include <boost/scoped_ptr.hpp>

    #include "Tair/include/tair_client_api.hpp"

    using namespace std;
    using namespace tair;
    using namespace boost;

    int main() {
        cout << "Hello World!" << endl;

        boost::scoped_ptr<tair_client_api> tair_client(new tair_client_api);
        bool bOk = tair_client->startup("", nullptr, "group_1");
        if (bOk) {
            data_entry key("k1");
            data_entry value("v1");
            int ret = 0;
            boost::progress_timer t;
            for (int i = 0; i < 10; ++i) {
                ret = tair_client->put(0, key, value, 0, 0);
                if (ret != 0)
                    cout << "Put Error" << endl;
            }
        } else {
            cout << "Tair Start Error" << endl;
        }
        return 0;
    }



    The Inferior stopped because it received a signal from the operating system. Signal name: SIGABRT. Signal meaning: Aborted.

    Any ideas? Many thanks!

    opened by Landy0104 2
  • Compilation fails


    src/storage/ldb/leveldb/ax_port_leveldb.m4:15: the top level
    configure.ac:113: warning: AC_LANG_CONFTEST: no AC_LANG_SOURCE call detected in body
    ../../lib/autoconf/lang.m4:193: AC_LANG_CONFTEST is expanded from...
    ../../lib/autoconf/general.m4:2590: _AC_COMPILE_IFELSE is expanded from...
    ../../lib/autoconf/general.m4:2606: AC_COMPILE_IFELSE is expanded from...
    ../../lib/m4sugar/m4sh.m4:639: AS_IF is expanded from...
    ../../lib/autoconf/general.m4:2031: AC_CACHE_VAL is expanded from...
    ../../lib/autoconf/general.m4:2052: AC_CACHE_CHECK is expanded from...
    ax_boost.m4:43: AX_BOOST is expanded from...
    configure.ac:113: the top level
    configure.ac:3: warning: AM_INIT_AUTOMAKE: two- and three-arguments forms are deprecated. For more info, see:
    configure.ac:3: http://www.gnu.org/software/automake/manual/automake.html#Modernize-AM_005fINIT_005fAUTOMAKE-invocation
    configure.ac:123: error: required file 'test/Makefile.in' not found
    configure.ac:123: error: required file 'test/interface_test/Makefile.in' not found
    configure.ac:123: error: required file 'test/unit_test/Makefile.in' not found
    configure.ac:123: error: required file 'test/statistics_test/Makefile.in' not found
    configure.ac:123: error: required file 'test/retry_all_test/Makefile.in' not found
    Makefile.am:2: error: required directory ./test does not exist
    Makefile.am:5: error: required directory ./test does not exist

    opened by zorrohahaha 2
  • log bug


          log_error("dataserver: %s UP, accept strategy is: %d illegal.",
              p_server->group_info_data->get_accept_strategy(), tbsys::CNetUtil::addrToString(req->server_id).c_str());
    opened by hljyunxi 2
  • java client


    How can we get the latest Java client? I found one at https://github.com/alibaba/tair-java-client, but it is too old, and in my tests its addItem method is not compatible with tair3.2.4.

    opened by yangjiajun2014 1
  • A question about timeouts


    https://github.com/alibaba/tair/blob/5cce166b779385f422e8c2fce727cd9b4545491c/src/common/wait_object.hpp#L306 If a request times out, it reaches this branch, which appears to do nothing. Shouldn't it explicitly tell the caller that this wait_object has timed out?

    opened by youngzhang26 0
  • Failed to compile libeasy on CentOS 8


    easy_uthread.c:251:17: error: ‘SIG_BLOCK’ undeclared (first use in this function); did you mean ‘F_LOCK’?
         sigprocmask(SIG_BLOCK, &zero, &t->context.uc_sigmask);
                     ^~~~~~~~~


    opened by seeguitar 4