Polaris is a cloud-native service discovery and governance center

Overview

Polaris: Service Discovery and Governance


English | 简体中文


README:

Visit the website to learn more

Introduction

Polaris is a cloud-native service discovery and governance center. It can be used to solve the problems of service connection, fault tolerance, traffic control and security in distributed and microservice architectures.

Functions:

  • basic: service discovery, service registration and health check
  • fault tolerance: circuit breaking and rate limiting
  • traffic control: request routing and load balancing
  • security: authentication

Features:

  • It provides SDKs for high-performance business scenarios and a sidecar for the non-invasive development mode.
  • It provides clients for multiple development languages, such as Java, Go, C++ and Node.js.
  • It can integrate with different service frameworks and gateways, such as Spring Cloud, gRPC and Nginx.
  • It is compatible with Kubernetes and supports automatic injection of K8s service and Polaris sidecar.

Components

server:

client:

ecosystem:

others:

  • website: Source for the polarismesh.cn site
  • samples: Samples for Learning PolarisMesh

Getting started

Preconditions

Prepare database

Please download and install MySQL, version 5.7 or above; downloads are available here: https://dev.mysql.com/downloads/mysql/5.7.html

Import SQL script

Import the script ./store/defaultStore/polaris_server.sql through a MySQL admin tool or the console.
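For example, the script can be imported from the command line (the user and host below are placeholders; whether the script creates its own schema is an assumption):

mysql -u root -p -h 127.0.0.1 < ./store/defaultStore/polaris_server.sql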

Prepare golang compile environment

Building the Polaris server requires a Go compile environment, version 1.12 or above; downloads are available here: https://golang.org/dl/#featured.

Build

chmod +x build.sh
./build.sh

After the build completes, you can find the 'polaris-server-release_${version}.tar.gz' package in the directory listing.

Installation

Unzip package

Obtain polaris-server-release_${version}.tar.gz and unzip it.

Change polaris configuration

After unzipping, edit polaris-server.yaml (e.g. vi polaris-server.yaml) and replace the DB configuration variables with the real database information: ##DB_USER## (database username), ##DB_PWD## (database password), ##DB_ADDR## (database address), ##DB_NAME## (database name).
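For example, the placeholders can be replaced in one pass with sed (the values below are placeholders; on macOS use sed -i '' instead of sed -i):

sed -i 's/##DB_USER##/polaris/; s/##DB_PWD##/polaris_pwd/; s/##DB_ADDR##/127.0.0.1:3306/; s/##DB_NAME##/polaris_server/' polaris-server.yaml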

Execute Installation Script

chmod +x ./tool/*.sh
# install
./tool/install.sh
# check whether the process started successfully
./tool/p.sh

After installation, run ./tool/p.sh; if it prints 'Polaris Server', the installation was successful.

Verify installation

curl http://127.0.0.1:8080

If the returned text is 'Polaris Server', the server is running properly.

Issues
  • Change to CHARSET = utf8mb4 COLLATE = utf8mb4_bin;

    Please provide issue(s) of this PR: Fixes #430

    To help us figure out who should review this PR, please put an X in all the areas that this PR affects.

    • [ ] Configuration
    • [ ] Docs
    • [x] Installation
    • [ ] Performance and Scalability
    • [ ] Naming
    • [ ] HealthCheck
    • [ ] Test and Release

    Please check any characteristics that apply to this pull request.

    • [ ] Does not have any user-facing changes. This may include API changes, behavior changes, performance improvements, etc.
    issue-shoot 
    opened by HIPIAOYI 10
  • feat: refactor redispool

    Please provide issue(s) of this PR: refactors common/redispool, for https://github.com/polarismesh/polaris/issues/404, https://github.com/polarismesh/polaris/pull/403 and https://github.com/polarismesh/polaris/pull/398

    need discuss 
    opened by daheige 9
  • polaris-server: support Redis sentinel mode and cluster mode

    What is the feature you want to add?

    Support Redis sentinel mode and cluster mode in polaris-server.

    Why do you want to add this feature?

    The current polaris-server only supports direct connections to Redis, which makes it heavily dependent on cloud-hosted Redis products. We suggest that polaris-server support sentinel mode and Redis cluster mode, so that it can work with independently deployed Redis; sentinel mode should take priority over cluster mode. Expected versions: 1.9 and 2.0.

    How to implement this feature? See github.com/go-redis/redis/v8 for reference.

    Refactoring ideas: Option 1: rewrite common/redispool/redis_pool.go on top of a UniversalClient implementation that supports all high-availability modes; this requires adjusting the existing configuration. Option 2: add a sentinel-based redis_pool and a cluster-based redis_pool and inject them into heartbeatredis; this requires no changes to the existing configuration, only additions. A minimal sketch of Option 1 follows.
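    As a rough illustration of Option 1, here is a minimal sketch built on go-redis/v8's UniversalClient (the Options type and its wiring are illustrative assumptions, not the actual polaris-server configuration):

    package redispool

    import (
    	"context"

    	"github.com/go-redis/redis/v8"
    )

    // Options is a hypothetical unified configuration; the field names
    // are illustrative only.
    type Options struct {
    	Addrs      []string // one address (standalone), sentinel or cluster addresses
    	MasterName string   // non-empty selects sentinel (failover) mode
    	Password   string
    }

    // NewUniversalPool builds a redis.UniversalClient; go-redis picks the
    // mode: MasterName set => sentinel, several Addrs => cluster,
    // otherwise a standalone client.
    func NewUniversalPool(opts Options) redis.UniversalClient {
    	return redis.NewUniversalClient(&redis.UniversalOptions{
    		Addrs:      opts.Addrs,
    		MasterName: opts.MasterName,
    		Password:   opts.Password,
    	})
    }

    // Ping probes whichever topology was selected.
    func Ping(ctx context.Context, c redis.UniversalClient) error {
    	return c.Ping(ctx).Err()
    }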


    enhancement good first issue issue-shoot 
    opened by davionchen 8
  • [ISSUE #345] Rely on timestamp for incremental cache updates

    Please provide issue(s) of this PR: Fixes #345

    To help us figure out who should review this PR, please put an X in all the areas that this PR affects.

    • [ ] Configuration
    • [ ] Docs
    • [ ] Installation
    • [ ] Performance and Scalability
    • [ ] Naming
    • [ ] HealthCheck
    • [ ] Test and Release

    Please check any characteristics that apply to this pull request.

    • [ ] Does not have any user-facing changes. This may include API changes, behavior changes, performance improvements, etc.
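    The EXPLAIN output below shows the motivation: filtering with mtime > FROM_UNIXTIME(...) lets MySQL use the index on mtime (a range scan), whereas wrapping the column as UNIX_TIMESTAMP(mtime) > ... defeats the index and forces a full table scan.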
    mysql> explain select * from service where mtime > FROM_UNIXTIME(1652286241);
    +----+-------------+---------+------------+-------+---------------+-------+---------+------+------+----------+-----------------------+
    | id | select_type | table   | partitions | type  | possible_keys | key   | key_len | ref  | rows | filtered | Extra                 |
    +----+-------------+---------+------------+-------+---------------+-------+---------+------+------+----------+-----------------------+
    |  1 | SIMPLE      | service | NULL       | range | mtime         | mtime | 4       | NULL |    2 |   100.00 | Using index condition |
    +----+-------------+---------+------------+-------+---------------+-------+---------+------+------+----------+-----------------------+
    1 row in set, 1 warning (0.01 sec)
    
    mysql> explain select * from service where UNIX_TIMESTAMP(mtime) > 1652286241;
    +----+-------------+---------+------------+------+---------------+------+---------+------+------+----------+-------------+
    | id | select_type | table   | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra       |
    +----+-------------+---------+------------+------+---------------+------+---------+------+------+----------+-------------+
    |  1 | SIMPLE      | service | NULL       | ALL  | NULL          | NULL | NULL    | NULL |    7 |   100.00 | Using where |
    +----+-------------+---------+------------+------+---------------+------+---------+------+------+----------+-------------+
    1 row in set, 1 warning (0.00 sec)
    
    
    bug service 
    opened by shichaoyuan 7
  • support prometheus http extension to make sdk can be pulled by prometheus

    What is the feature you want to add? Add a prometheus extension so that users can work with node exporters to get node metrics.


    enhancement keystone 
    opened by andrewshan 7
  • Some minor issues after trying out polaris

    I saw the project introduced on Juejin; many thanks to the colleagues at Tencent for open-sourcing it. 👍 It worked well in my trial, and I hope to get a chance to take part in building the project ecosystem.

    Below are some problems found during the trial:

    1. polaris standalone crashes without apparent cause about 20 minutes after startup

    After startup I created a new service without registering any instances, and then the server crashed.

    Relevant logs:

    stdout
    
    [INFO] load config from polaris-server.yaml
    {Bootstrap:{Logger:{OutputPaths:[] ErrorOutputPaths:[] RotateOutputPath:log/polaris-server.log RotationMaxSize:500 RotationMaxAge:30 RotationMaxBackups:100 JSONEncoding:false LogGrpc:false Level:debug outputLevels: logCallers: stackTraceLevels:} StartInOrder:map[key:sz open:true] PolarisService:{EnableRegister:false ProbeAddress: Isolated:false Services:[0xc000107340 0xc000107380]}} APIServers:[{Name:httpserver Option:map[connLimit:map[maxConnLimit:5120 maxConnPerHost:128 openConnLimit:false purgeCounterExpired:5s purgeCounterInterval:10s whiteList:127.0.0.1] enablePprof:true listenIP:0.0.0.0 listenPort:8090] API:map[admin:{Enable:true Include:[]} client:{Enable:true Include:[discover register healthcheck]} console:{Enable:true Include:[default]}]} {Name:grpcserver Option:map[connLimit:map[maxConnLimit:5120 maxConnPerHost:128 openConnLimit:false] listenIP:0.0.0.0 listenPort:8091] API:map[client:{Enable:true Include:[discover register healthcheck]}]}] Cache:{Open:true Resources:[{Name:service Option:map[disableBusiness:false needMeta:true]} {Name:instance Option:map[disableBusiness:false needMeta:true]} {Name:routingConfig Option:map[]} {Name:rateLimitConfig Option:map[]} {Name:circuitBreakerConfig Option:map[]}]} Naming:{Auth:map[open:false] HealthCheck:{Open:true KvConnNum:0 KvServiceName: KvNamespace: KvPasswd: SlotNum:30 LocalHost: MaxIdle:20 IdleTimeout:120} Batch:map[deregister:map[concurrency:64 maxBatchCount:32 open:true queueSize:10240 waitTime:32ms] register:map[concurrency:64 maxBatchCount:32 open:true queueSize:10240 waitTime:32ms]]} Store:{Name:boltdbStore Option:map[path:./polaris.bolt]} Plugin:{CMDB:{Name: Option:map[]} RateLimit:{Name:token-bucket Option:map[api-limit:map[apis:[map[name:POST:/v1/naming/services rule:store-write] map[name:PUT:/v1/naming/services rule:store-write] map[name:POST:/v1/naming/services/delete rule:store-write] map[name:GET:/v1/naming/services rule:store-read] map[name:GET:/v1/naming/services/count rule:store-read] map[name:]] open:false rules:[map[limit:map[bucket:2000 open:true rate:1000] name:store-read] map[limit:map[bucket:1000 open:true rate:500] name:store-write]]] instance-limit:map[global:map[bucket:2 rate:2] open:true resource-cache-amount:1024] ip-limit:map[global:map[bucket:300 open:true rate:200] open:true resource-cache-amount:1024 white-list:[127.0.0.1]] remote-conf:false]} History:{Name:HistoryLogger Option:map[]} Statis:{Name:local Option:map[interval:60 outputPath:./statis]} DiscoverStatis:{Name:discoverLocal Option:map[interval:60 outputPath:./discover-statis]} ParsePassword:{Name: Option:map[]} Auth:{Name: Option:map[]} MeshResourceValidate:{Name: Option:map[]}}}
    finish starting server
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x16a5fdb]
    
    goroutine 96 [running]:
    github.com/polarismesh/polaris-server/store/boltdbStore.(*serviceStore).GetSourceServiceToken(0xc0001dcbe0, 0xc00030c890, 0x5, 0xc00030c898, 0x4, 0x0, 0x0, 0x0)
            /home/runner/work/polaris/polaris/store/boltdbStore/service.go:181 +0xbb
    github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).batchVerifyInstances(0xc0002d4000, 0xc0000d2bd0, 0xc0000d2bd0, 0x0, 0x0, 0xc00007ae00)
            /home/runner/work/polaris/polaris/naming/batch/instance.go:385 +0x6ce
    github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).registerHandler(0xc0002d4000, 0xc00029a400, 0x1, 0x20, 0xc000107550, 0x1cf2001)
            /home/runner/work/polaris/polaris/naming/batch/instance.go:226 +0x2b5
    github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).storeWorker(0xc0002d4000, 0x1938c58, 0xc000107540, 0xc)
            /home/runner/work/polaris/polaris/naming/batch/instance.go:181 +0x24b
    created by github.com/polarismesh/polaris-server/naming/batch.(*InstanceCtrl).Start
            /home/runner/work/polaris/polaris/naming/batch/instance.go:93 +0x1b5
    
    polaris-server.log
    2021-09-25T00:29:30.821075Z     debug   timewheel/timewheel.go:128      ckv task timewheel task start time:1632500970, use time:470ns, exec num:0
    2021-09-25T00:29:31.819654Z     debug   timewheel/timewheel.go:128      ckv task timewheel task start time:1632500971, use time:397ns, exec num:0
    2021-09-25T00:29:31.819618Z     info    cache/instance.go:129   [Cache][Instance] get more instances    {"update": 0, "delete": 0, "last": "1970-01-01T08:00:00.000000Z", "used": "10.961µs"}
    2021-09-25T00:29:31.819654Z     debug   timewheel/timewheel.go:128      db task timewheel task start time:1632500971, use time:580ns, exec num:0
    2021-09-25T00:29:31.819743Z     info    cache/service.go:129    [Cache][Service] get more services      {"update": 1, "delete": 0, "last": "2021-09-25T00:10:18.000000Z", "used": "154.389µs"}
    2021-09-25T00:29:32.307459Z     info    grpcserver/server.go:376        receive request {"client-address": "127.0.0.1:65095", "user-agent": "grpc-go/1.22.0", "request-id": "52831770710", "method": "/v1.PolarisGRPC/ReportClient"}
    2021-09-25T00:29:32.820542Z     debug   timewheel/timewheel.go:128      db task timewheel task start time:1632500972, use time:1.978µs, exec num:0
    2021-09-25T00:29:32.820986Z     debug   timewheel/timewheel.go:128      ckv task timewheel task start time:1632500972, use time:1.324µs, exec num:0
    2021-09-25T00:29:32.821241Z     info    cache/service.go:129    [Cache][Service] get more services      {"update": 1, "delete": 0, "last": "2021-09-25T00:10:18.000000Z", "used": "416.22µs"}
    2021-09-25T00:29:32.821146Z     info    cache/instance.go:129   [Cache][Instance] get more instances    {"update": 0, "delete": 0, "last": "1970-01-01T08:00:00.000000Z", "used": "239.593µs"}
    2021-09-25T00:29:32.830494Z     info    grpcserver/server.go:376        receive request {"client-address": "127.0.0.1:65096", "user-agent": "grpc-go/1.22.0", "request-id": "1379206009", "method": "/v1.PolarisGRPC/RegisterInstance"}
    2021-09-25T00:29:32.842209Z     info    batch/instance.go:203   [Batch] Start batch creating instances count: 1
    
    2. Could the standalone edition offer a docker-compose startup option?

    Personally, I feel that shipping prometheus/pushgateway and other components in one big zip is not very elegant. Also, if the application crashes and you then stop everything with uninstall.sh, all data is lost. Starting via docker-compose would be more flexible.

    3. The api naming in the go sdk is not very elegant

    A call like api.NewProviderAPI() is a bit confusing: it is not obvious which api it refers to (especially since most applications integrating polaris are server-side programs, and most of them create an api package of their own to expose rpc or http APIs). Following the style of zap.NewProduction() or gin.Default(), writing it as polaris.NewProviderAPI() would be more intuitive.

    4. Some small bugs in the documentation

    For example, in https://polarismesh.cn/zh/doc/%E5%BF%AB%E9%80%9F%E5%85%A5%E9%97%A8/%E4%BD%BF%E7%94%A8polaris-go.html#%E9%85%8D%E7%BD%AE%E6%9C%8D%E5%8A%A1%E7%AB%AF%E5%9C%B0%E5%9D%80, the instruction "in the application's current working directory, add a polaris.yml file to configure the server address" should actually say polaris.yaml.

    5. Namespaces cannot be created from the console?
    question 
    opened by Akitata 7
  • [ISSUE #119] feat: multi-file log

    Please provide issue(s) of this PR: feat #119

    Configuration options

    # server bootstrap configuration
    bootstrap:
      # global logging
      logger:
        path:
          naming: log/polaris-naming.log
          health-check: log/polaris-health-check.log
          store: log/polaris-store.log
          plugin: log/polaris-plugin.log
          server: log/polaris-server.log
          # default log path
          default: log/polaris-default.log
    

    To help us figure out who should review this PR, please put an X in all the areas that this PR affects.

    • [x] Configuration
    • [ ] Docs
    • [ ] Installation
    • [ ] Performance and Scalability
    • [ ] Naming
    • [ ] HealthCheck
    • [ ] Test and Release

    Please check any characteristics that apply to this pull request.

    • [ ] Does not have any user-facing changes. This may include API changes, behavior changes, performance improvements, etc.
    enhancement 
    opened by DHBin 6
  • Failed to build polaris on macOS

    Describe the bug: failed to build polaris on macOS

    To Reproduce

    1. chmod +x build.sh
    2. ./build.sh

    ./build.sh: line 5: realpath: command not found
    usage: dirname string [...]
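    A common workaround (an assumption on my part, not the project's documented fix) is to install GNU coreutils, which provides realpath, and put its unprefixed binaries on PATH:

    # macOS ships without realpath; Homebrew's coreutils provides it
    brew install coreutils
    export PATH="$(brew --prefix)/opt/coreutils/libexec/gnubin:$PATH"
    ./build.sh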

    Expected behavior: after the build completes, the 'polaris-server-release_${version}.tar.gz' package appears in the directory listing.

    Environment

    • Version: latest
    • OS: macOS Monterey
    bug documentation good first issue 
    opened by GuangmingLuo 5
  • Use ConcurrentMap in CacheProvider to reduce lock granularity

    What is the feature you want to add? Use a concurrent map to reduce the granularity of the global read-write lock in healthcheck / CacheProvider (https://github.com/polarismesh/polaris/blob/main/healthcheck/cache.go#L30), splitting it into multiple map+sync.RWMutex shards to improve performance.

    Why do you want to add this feature?

    Use a concurrent map to replace the single map+sync.RWMutex.

    How to implement this feature? Hash the original map key (the instance id) and take the result modulo the number of shards to decide which shard an entry lands on, thereby reducing the granularity of the read-write locks, as in the snippet below.

    package healthcheck

    import "sync"

    // InstanceWithChecker is defined elsewhere in the healthcheck package.

    // sharedMap splits the health-check instances across size shards, each
    // guarded by its own RWMutex, to cut contention on the global lock.
    type sharedMap struct {
    	size   uint32
    	shared []*shared
    }

    type shared struct {
    	healthCheckInstances map[string]*InstanceWithChecker
    	healthCheckMutex     sync.RWMutex
    }

    // getShard routes an instance id to its shard by FNV-1 hash modulo size.
    func (m *sharedMap) getShard(instanceId string) *shared {
    	return m.shared[fnv32(instanceId)%m.size]
    }

    // fnv32 is the 32-bit FNV-1 hash.
    func fnv32(key string) uint32 {
    	hash := uint32(2166136261)
    	const prime32 = uint32(16777619)
    	for i := 0; i < len(key); i++ {
    		hash *= prime32
    		hash ^= uint32(key[i])
    	}
    	return hash
    }
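    Reads and writes then lock only the owning shard, e.g. s := m.getShard(id); s.healthCheckMutex.RLock(); defer s.healthCheckMutex.RUnlock(). (This usage line is an illustration, not code from the issue.)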
    


    enhancement need discuss 
    opened by liu-song 5
  • Can support multiple database storage types

    What is the feature you want to add?

    Support multiple database storage types.

    e.g. sqlserver, PostgreSQL, oracle, kingbase, dameng ...

    • [ ] support store can use sqlserver
    • [ ] support store can use PostgreSQL
    • [ ] support store can use Oracle
    • [ ] support store can use embedded distribute KVStorage or Database

    We hope that Polaris's backend storage will not be limited to MySQL, but can also work with sqlserver, PostgreSQL, oracle, kingbase, dameng and so on; integration with the first three databases can be prioritized. Community members who are interested are welcome to participate and discuss in this issue.

    Why do you want to add this feature?

    1. Support domestic (Chinese) databases
    2. Meet the database selection needs of different users


    enhancement help wanted need discuss good advanced issue 
    opened by chuntaojun 5
  • password is not required

    fixes #99

    mysql password is not required.

    $ mysql -uroot
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 30
    Server version: 8.0.26 Homebrew
    
    Copyright (c) 2000, 2021, Oracle and/or its affiliates.
    
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    mysql>
    
    opened by horizonzy 5
  • Dev/feature log archiving

    Please provide issue(s) of this PR: Fixes #

    To help us figure out who should review this PR, please put an X in all the areas that this PR affects.

    • [ ] Configuration
    • [ ] Docs
    • [ ] Installation
    • [X] Performance and Scalability
    • [ ] Naming
    • [ ] HealthCheck
    • [ ] Test and Release

    Please check any characteristics that apply to this pull request.

    • [ ] Does not have any user-facing changes. This may include API changes, behavior changes, performance improvements, etc.
    bug 
    opened by awdela0202 1
  • Add `http_sd_configs` to prometheus.yml in install.sh

    What is the feature you want to add? Add http_sd_configs to the prometheus.yml file, so that prometheus can fetch all the metrics endpoints that clients report.

    Why do you want to add this feature? Polaris now supports collecting the metrics endpoints reported by clients, but http_sd_configs is not configured in prometheus.yml.

    How to implement this feature? Modify the installPrometheus function in install.sh. A sketch of the resulting scrape configuration follows.
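    For illustration only, a prometheus.yml scrape job using http_sd_configs might look like this (the job name and discovery URL are assumptions, not the endpoint Polaris actually exposes):

    scrape_configs:
      - job_name: polaris-clients
        http_sd_configs:
          # hypothetical service-discovery endpoint
          - url: http://127.0.0.1:8090/prometheus/clients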


    enhancement good first issue install 
    opened by wallezhang 0
  • time.After can cause memory leaks

    Describe the bug: incorrect use of time.After can cause a memory leak. (screenshot)

    Expected behavior: use time.Ticker instead, as in the sketch below.
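    A minimal sketch of the leak and the suggested fix (illustrative code, not from the polaris source). Each time.After call allocates a timer that is not reclaimed until it fires, so calling it inside a loop piles timers up:

    package main

    import (
    	"fmt"
    	"time"
    )

    // leaky allocates a fresh timer on every iteration; under a busy
    // channel these timers accumulate for a full minute each.
    func leaky(ch <-chan int) {
    	for {
    		select {
    		case v := <-ch:
    			fmt.Println("got", v)
    		case <-time.After(time.Minute):
    			return
    		}
    	}
    }

    // fixed reuses a single Ticker and releases it on exit. Note the
    // semantics differ slightly: the ticker is not reset by receives.
    func fixed(ch <-chan int) {
    	ticker := time.NewTicker(time.Minute)
    	defer ticker.Stop()
    	for {
    		select {
    		case v := <-ch:
    			fmt.Println("got", v)
    		case <-ticker.C:
    			return
    		}
    	}
    }

    func main() {
    	ch := make(chan int, 1)
    	ch <- 42
    	go fixed(ch) // drains one value, then waits for the ticker
    	time.Sleep(10 * time.Millisecond)
    }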


    bug 
    opened by liu-song 0
  • Support instance creation events

    What is the feature you want to add? Polaris currently supports the following events:

    1. Service instance offline (deregistration) events
    2. Service instance healthy-to-unhealthy events
    3. Service instance unhealthy-to-healthy events

    Not supported:

    1. Service instance online (creation) events

    Events are very useful when troubleshooting. Supporting instance online events would make it possible to string together an instance's complete lifecycle, which makes problems easier to track down.


    enhancement 
    opened by lepdou 0
  • Support improved fuzzy search in the service list

    What is the feature you want to add? (screenshot)

    As shown in the screenshot above, users currently have to type * by hand to get a fuzzy search, yet the page gives no hint about this, so the fuzzy-search capability is hard to discover.

    Could fuzzy search be the default, with the frontend adding the * automatically?

    enhancement need discuss 
    opened by lepdou 0