Enterprise-grade container platform tailored for multicloud and multi-cluster management

Overview

KubeSphere Container Platform




What is KubeSphere

English | 中文

KubeSphere is a distributed operating system for cloud-native applications, with Kubernetes as its kernel. Its plug-and-play architecture allows third-party applications to be integrated seamlessly into its ecosystem. KubeSphere is also a multi-tenant, enterprise-grade container platform with full-stack automated IT operations and streamlined DevOps workflows. It provides a developer-friendly wizard web UI, helping enterprises build a more robust and feature-rich platform that includes the most common functionalities needed for an enterprise Kubernetes strategy. See the Feature List for details.

The following screenshots offer a closer look at KubeSphere. Please check What is KubeSphere for further information.

Workbench Project Resources
CI/CD Pipeline App Store

Demo Environment

Use the account demo1 / Demo123 to log in to the demo environment. Please note that the account is granted view access only. You can also watch the KubeSphere Demo Video for a quick overview.

Architecture

KubeSphere uses a loosely coupled architecture that separates the frontend from the backend. The backend components are delivered as Docker containers, and external systems can access them through REST APIs. See Architecture for details.

Architecture
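For example, once the platform is running, the backend can be reached with ordinary Kubernetes tooling plus HTTP. A minimal sketch, assuming the default kubesphere-system namespace and ks-apiserver Service of a standard installation; the API path below is illustrative only, so consult the API documentation for the real routes:

# Expose the API server locally (Service name and port assume a default installation).
kubectl -n kubesphere-system port-forward svc/ks-apiserver 9090:80

# In another shell, call a backend REST endpoint with a bearer token issued by the platform.
curl -s -H "Authorization: Bearer $TOKEN" \
  http://localhost:9090/kapis/resources.kubesphere.io/v1alpha3/namespaces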

Features

Feature: Description
Provisioning Kubernetes Clusters: Supports deploying Kubernetes on your infrastructure out of the box, including online and air-gapped installation
Multi-cluster Management: Provides a centralized control plane for managing multiple Kubernetes clusters, and supports distributing applications across multiple clusters and cloud providers
Kubernetes Resource Management: Provides a web console for creating and managing Kubernetes resources, with powerful observability including monitoring, logging, events, alerting, and notification
DevOps System: Provides out-of-the-box CI/CD based on Jenkins, and offers automated workflow tools including binary-to-image (B2I) and source-to-image (S2I)
Application Store: Provides an application store for Helm-based applications, and offers application lifecycle management
Service Mesh (Istio-based): Provides fine-grained traffic management, observability, and tracing for distributed microservice applications, with visualization of the traffic topology
Rich Observability: Provides multi-dimensional monitoring metrics, multi-tenant logging, events, and auditing management, and supports alerting and notification for both applications and infrastructure
Multi-tenant Management: Provides unified authentication with fine-grained roles and a three-tier authorization system, and supports AD/LDAP authentication
Infrastructure Management: Supports node management and monitoring, as well as adding new nodes to a Kubernetes cluster
Storage Support: Supports open source storage solutions such as GlusterFS, Ceph RBD, NFS, and LocalPV (default), and provides CSI plugins to consume storage from cloud providers
Network Support: Supports Calico, Flannel, etc., provides Network Policy management, and includes Porter, a load balancer plug-in for bare metal
GPU Support: Supports adding GPU nodes and vGPU, enabling ML applications such as TensorFlow to run on Kubernetes (a brief example appears at the end of this section)

Please see the Feature and Benefits for further information.
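As one concrete illustration of the GPU row above, GPU scheduling on Kubernetes comes down to requesting the extended resource exposed by the node's device plugin. A minimal sketch, assuming the NVIDIA device plugin is installed and exposes the conventional nvidia.com/gpu resource name; the image and command are only an example:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: tf
    image: tensorflow/tensorflow:latest-gpu
    command: ["python", "-c", "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"]
    resources:
      limits:
        nvidia.com/gpu: 1   # scheduled only onto a node with a free GPU
EOF

kubectl logs pod/gpu-smoke-test   # should list at least one physical GPU device

KubeSphere's workload forms expose the same kind of GPU limit through the console, per the GPU support and GPU-type scheduling features listed in this README.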


Latest Release

KubeSphere 3.0.0 is now generally available! See the Release Notes for 3.0.0 for details on the updates.

Installation

KubeSphere can run anywhere, from on-premises data centers to any cloud to the edge. It can also be deployed on any version-compatible Kubernetes cluster that is already running.

QuickStarts

Quickstarts include six hands-on lab exercises that help you quickly get started with KubeSphere.

Installing on Existing Kubernetes Cluster

Installing on Linux
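As a quick orientation, a hedged sketch of both paths follows; the ks-installer file names assume the v3.0.0 release assets, and the kk flags mirror kk's own usage text, so check the linked guides for the exact commands for your version:

# Option 1: install KubeSphere on an existing (version-compatible) Kubernetes cluster.
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml

# Option 2: install Kubernetes and KubeSphere together on Linux hosts with KubeKey.
./kk create cluster --with-kubernetes v1.18.6 --with-kubesphere v3.0.0

# Either way, follow the installer logs until the console address and credentials are printed
# (the Deployment name assumes a default installation).
kubectl logs -n kubesphere-system deploy/ks-installer -f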

Contributing, Support, Discussion, and Community

We ❤️ your contributions. The community guide walks you through how to get started contributing to KubeSphere, and the development guide explains how to set up a development environment.

Please submit any KubeSphere bugs, issues, and feature requests to KubeSphere GitHub Issues.

Who is using KubeSphere

The user case studies page lists users of the project. If you are using KubeSphere, you are welcome to submit a PR to add your institution's name and homepage.

Landscapes



    

KubeSphere is a member of CNCF and a Kubernetes Conformance Certified platform, which enriches the CNCF CLOUD NATIVE Landscape.

Comments
  • Error occurs on the custom monitoring page


    Describe the Bug: An error occurs on the custom monitoring page.

    For UI issues please also add a screenshot that shows the issue.

    Versions Used KubeSphere: ks3.2 Kubernetes: (If KubeSphere installer used, you can skip this)

    How To Reproduce Steps to reproduce the behavior:

    1. In ks3.1.1, update the images of ks-console, ks-controller-manager, and ks-apiserver to a nightly build (20210915)
    2. Install the ClusterDashboard CRD from monitoring-dashboard-customResourceDefinition.yaml
    3. Enter the custom monitoring page; the error occurs

    Problem analysis: Replace v1alpha1 in the following interface with v1alpha2.
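    A minimal sketch of installing and checking that CRD from the command line (the file name comes from step 2 above; the exact CRD name printed depends on the manifest contents):

    # Apply the dashboard CRD referenced in step 2.
    kubectl apply -f monitoring-dashboard-customResourceDefinition.yaml

    # Confirm the CRD is registered before reopening the custom monitoring page.
    kubectl get crd | grep -i dashboard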

    /assign @zhu733756 @harrisonliu5

    opened by liuyp2018 39
  • Install KubeSphere error: always shows "Waiting for etcd to start"


    [master1 172.16.0.2] MSG: Configuration file already exists Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start WARN[23:31:55 CST] Task failed ...
    WARN[23:31:55 CST] error: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master1.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master1-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://172.16.0.2:2379,https://172.16.0.3:2379,https://172.16.0.4:2379 cluster-health | grep -q 'cluster is healthy'" Error: client: etcd cluster is unavailable or misconfigured; error #0: client: endpoint https://172.16.0.4:2379 exceeded header timeout ; error #1: client: endpoint https://172.16.0.2:2379 exceeded header timeout ; error #2: client: endpoint https://172.16.0.3:2379 exceeded header timeout

    error #0: client: endpoint https://172.16.0.4:2379 exceeded header timeout error #1: client: endpoint https://172.16.0.2:2379 exceeded header timeout error #2: client: endpoint https://172.16.0.3:2379 exceeded header timeout: Process exited with status 1 Error: Failed to start etcd cluster: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master1.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master1-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://172.16.0.2:2379,https://172.16.0.3:2379,https://172.16.0.4:2379 cluster-health | grep -q 'cluster is healthy'" Error: client: etcd cluster is unavailable or misconfigured; error #0: client: endpoint https://172.16.0.4:2379 exceeded header timeout ; error #1: client: endpoint https://172.16.0.2:2379 exceeded header timeout ; error #2: client: endpoint https://172.16.0.3:2379 exceeded header timeout

    error #0: client: endpoint https://172.16.0.4:2379 exceeded header timeout error #1: client: endpoint https://172.16.0.2:2379 exceeded header timeout error #2: client: endpoint https://172.16.0.3:2379 exceeded header timeout: Process exited with status 1 Usage: kk create cluster [flags]

    Flags:
      -f, --filename string          Path to a configuration file
      -h, --help                     help for cluster
          --skip-pull-images         Skip pre pull images
          --with-kubernetes string   Specify a supported version of kubernetes
          --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)
      -y, --yes                      Skip pre-check of the installation

    Global Flags:
          --debug   Print detailed information (default true)

    Failed to start etcd cluster: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master1.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master1-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://172.16.0.2:2379,https://172.16.0.3:2379,https://172.16.0.4:2379 cluster-health | grep -q 'cluster is healthy'" Error: client: etcd cluster is unavailable or misconfigured; error #0: client: endpoint https://172.16.0.4:2379 exceeded header timeout ; error #1: client: endpoint https://172.16.0.2:2379 exceeded header timeout ; error #2: client: endpoint https://172.16.0.3:2379 exceeded header timeout

    error #0: client: endpoint https://172.16.0.4:2379 exceeded header timeout error #1: client: endpoint https://172.16.0.2:2379 exceeded header timeout error #2: client: endpoint https://172.16.0.3:2379 exceeded header timeout: Process exited with status 1
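    For anyone hitting this, a minimal sketch of re-running the same health check by hand on master1, reusing the certificates and endpoints from the failed command above; header timeouts usually mean etcd is not listening on 2379 or a firewall is blocking it:

    export ETCDCTL_API=2
    export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master1.pem'
    export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master1-key.pem'
    export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem'

    # Same command the installer runs, without the grep, so the per-member errors are visible.
    /usr/local/bin/etcdctl --endpoints=https://172.16.0.2:2379,https://172.16.0.3:2379,https://172.16.0.4:2379 cluster-health

    # If members are unhealthy, inspect the etcd service on each master node.
    systemctl status etcd
    journalctl -u etcd --no-pager | tail -n 50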

    area/installation kind/support 
    opened by Widerstehen 34
  • Has anyone installed EdgeMesh successfully in KubeSphere?


    I followed this document: https://www.modb.pro/db/241198. EdgeMesh was installed from the App Store successfully, but the EdgeMesh demo failed. I have run EdgeMesh successfully on KubeEdge without KubeSphere. Does anyone know how to check the error, or has anyone done the same work?

    help wanted stale area/edge 
    opened by alongL 28
  • [Urgent] KubeSphere 2.0.2 offline installation fails on CentOS Linux release 7.5.1804


    Problem description: the installation fails while waiting for openpitrix-db; the full error log is included under "Error messages or screenshots" below.


    Installation environment hardware configuration: 8 vCPU, 32 GB RAM, CentOS Linux release 7.5.1804; offline installation of KubeSphere 2.0.2; installation method: all-in-one offline

    Error messages or screenshots

    TASK [ks-devops/ks-devops : OpenPitrix | Waiting for openpitrix-db] *****************************************************************************************************************************************************************************
    Tuesday 29 October 2019  21:06:32 +0800 (0:00:00.776)       0:06:25.399 ******* 
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (15 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (14 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (13 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (12 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (11 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (10 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (9 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (8 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (7 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (6 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (5 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (4 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (3 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (2 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (1 retries left).
    fatal: [ks-allinone]: FAILED! => {"attempts": 15, "changed": true, "cmd": "/usr/local/bin/kubectl -n openpitrix-system get pod | grep openpitrix-db-deployment | awk '{print $3}'", "delta": "0:00:00.277817", "end": "2019-10-29 21:11:40.450306", "rc": 0, "start": "2019-10-29 21:11:40.172489", "stderr": "", "stderr_lines": [], "stdout": "Pending", "stdout_lines": ["Pending"]}
    
    PLAY RECAP **************************************************************************************************************************************************************************************************************************************
    ks-allinone                : ok=244  changed=90   unreachable=0    failed=1   
    
    Tuesday 29 October 2019  21:11:40 +0800 (0:05:07.919)       0:11:33.318 ******* 
    =============================================================================== 
    ks-devops/ks-devops : OpenPitrix | Waiting for openpitrix-db --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 307.92s
    openpitrix : OpenPitrix | Installing OpenPitrix(2) ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 139.90s
    ks-monitor : ks-monitor | Getting monitor installation files ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 20.57s
    openpitrix : OpenPitrix | Getting OpenPitrix installation files ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 18.29s
    prepare/nodes : Ceph RBD | Installing ceph-common (YUM) --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 14.92s
    prepare/nodes : KubeSphere| Installing JQ (YUM) ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 8.59s
    metrics-server : Metrics-Server | Getting metrics-server installation files -------------------------------------------------------------------------------------------------------------------------------------------------------------- 7.80s
    prepare/base : KubeSphere | Labeling system-workspace ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 6.91s
    ks-monitor : ks-monitor | Creating manifests --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 5.35s
    ks-logging : ks-logging | Creating manifests --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.61s
    prepare/base : KubeSphere | Getting installation init files ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 4.03s
    download : Download items ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.85s
    prepare/base : KubeSphere | Init KubeSphere ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.70s
    prepare/nodes : GlusterFS | Installing glusterfs-client (YUM) ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.38s
    prepare/base : KubeSphere | Create kubesphere namespace ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.32s
    prepare/base : KubeSphere | Creating manifests ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.03s
    ks-console : ks-console | Creating manifests --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.95s
    openpitrix : OpenPitrix | Getting OpenPitrix installation files -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.82s
    ks-devops/s2i : S2I | Creating manifests ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.76s
    ks-monitor : ks-monitor | Installing prometheus-operator --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.73s
    failed!
    [[email protected] scripts]# kubectl -n openpitrix-system get pod
    NAME                                                      READY   STATUS     RESTARTS   AGE
    openpitrix-api-gateway-deployment-6bc9747f6c-xxdl2        0/1     Init:0/2   0          8m43s
    openpitrix-app-manager-deployment-7df95d8848-hv8cj        0/1     Init:0/2   0          8m42s
    openpitrix-category-manager-deployment-694bd85647-6pdwm   0/1     Init:0/2   0          8m42s
    openpitrix-cluster-manager-deployment-5c8c797d59-265wf    0/1     Init:0/2   0          8m42s
    openpitrix-db-deployment-79f9db9dd9-fcp5s                 0/1     Pending    0          8m45s
    openpitrix-etcd-deployment-84d677449b-mt6r5               0/1     Pending    0          8m44s
    openpitrix-iam-service-deployment-6bc657d9c6-khgtk        0/1     Init:0/2   0          8m41s
    openpitrix-job-manager-deployment-d9d966976-7q7dv         0/1     Init:0/2   0          8m41s
    openpitrix-minio-deployment-594df9bb5-wssbw               0/1     Pending    0          8m44s
    openpitrix-repo-indexer-deployment-5856985997-fvh76       0/1     Init:0/2   0          8m40s
    openpitrix-repo-manager-deployment-b9888bf58-87zrs        0/1     Init:0/2   0          8m40s
    openpitrix-runtime-manager-deployment-54c6bb64f4-zmfnd    0/1     Init:0/2   0          8m40s
    openpitrix-task-manager-deployment-5479966bfc-lj6xl       0/1     Init:0/2   0          8m39s
    [[email protected] scripts]# cat /etc/resolv.conf
    

    Installer version: KubeSphere 2.0.2 offline; hardware: 8 vCPU, 32 GB RAM; OS: CentOS Linux release 7.5.1804; installation method: all-in-one offline
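    For the Pending pods above, plain kubectl shows why the scheduler cannot place them; openpitrix-db, etcd, and minio being Pending often points to unbound PersistentVolumeClaims or insufficient resources (pod name taken from the listing above):

    # Show the scheduling events at the end of the describe output.
    kubectl -n openpitrix-system describe pod openpitrix-db-deployment-79f9db9dd9-fcp5s | tail -n 20

    # Check whether the claims are bound and a default StorageClass exists.
    kubectl -n openpitrix-system get pvc
    kubectl get storageclass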

    opened by arraycto 28
  • Install Error


    OS Version: CentOS Linux release 7.5.1804 (Core) Kubesphere Version: kubesphere-all-advanced-2.0.0-dev-20190514

    SELinux is disabled, swap is turned off, and firewalld is disabled

    The error messages are as follows:

    kubernetes/preinstall : Update package management cache (YUM) ------- 25.34s
    kubernetes/preinstall : Install packages requirements ------- 2.96s
    gather facts from all instances ------- 0.72s
    bootstrap-os : Install libselinux-python and yum-utils for bootstrap ------- 0.70s
    bootstrap-os : Check python-pip package ------- 0.69s
    bootstrap-os : Install pip for bootstrap ------- 0.68s
    download : Download items ------- 0.66s
    bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) ------- 0.65s
    kubernetes/preinstall : Create kubernetes directories ------- 0.64s
    download : Sync container ------- 0.64s
    download : Download items ------- 0.62s
    download : Sync container ------- 0.62s
    bootstrap-os : Gather nodes hostnames ------- 0.60s
    kubernetes/preinstall : Set selinux policy ------- 0.54s
    container-engine/docker : Ensure old versions of Docker are not installed. | RedHat ------- 0.44s
    container-engine/docker : ensure service is started if docker packages are already present ------- 0.44s
    bootstrap-os : Install epel-release for bootstrap ------- 0.43s
    kubernetes/preinstall : Create cni directories ------- 0.41s
    kubernetes/preinstall : Hosts | populate inventory into hosts file ------- 0.41s
    kubernetes/preinstall : Remove swapfile from /etc/fstab ------- 0.39s
    failed!

    opened by cnicy 28
  • support OIDC identity provider


    Signed-off-by: hongming [email protected]

    What type of PR is this?

    /kind feature

    What this PR does / why we need it:

    Support OIDC identity provider.

    Which issue(s) this PR fixes:

    Fixes #2941

    Additional documentation, usage docs, etc.:

    See also:

    https://github.com/kubesphere/community/blob/master/sig-multitenancy/auth/how-to-configure-authentication.md

    https://github.com/kubesphere/community/blob/master/sig-multitenancy/auth/oidc-Identity-provider.md

    kind/feature approved size/XXL lgtm dco-signoff: yes 
    opened by wansir 25
  • Machine freezes a few hours after installation on CentOS 7.6


    General remarks

    Freshly installed CentOS 7.6, then installed KubeSphere v2.1. After running normally for a while, the machine freezes.

    Describe the bug

    1. Freshly installed CentOS 7.6, then disabled firewalld and installed the latest KubeSphere v2.1.

    2. After installation it works normally; the load average reported by top stays around 1, as shown in the figure.

    3. After a few hours, the load average reported by top soars above 200 and the machine can no longer be logged in to. After a manual reboot of the server it becomes usable again, with some error messages, as shown in the figure.

    4. The login page displays normally, but logging in with the account and password returns a 500 error, as shown in the figure.

    For UI issues please also add a screenshot that shows the issue.

    Versions used (KubeSphere/Kubernetes): KubeSphere kubesphere-all-v2.1.0

    Environment (hardware configuration)

    1 master: 72 CPU / 32 GB RAM, 0 nodes; CentOS version: CentOS Linux release 7.6.1810 (Core)

    (and other info are welcomed to help us debugging)

    To Reproduce: Steps to reproduce the behavior:

    1. Uninstall KubeSphere; checking the machine the next day, it is usable and the load is normal.
    2. Reinstall KubeSphere; checking the next day, the machine cannot be logged in to, although the login page still displays normally, and the problem reproduces.
    stale 
    opened by huanghe 25
  • upgrade ingress nginx version


    What type of PR is this?

    /kind bug

    What this PR does / why we need it:

    The ingress-nginx controller deployment fails due to API removals in Kubernetes v1.22+

    Which issue(s) this PR fixes:

    Fixes #4548 #4486

    Special notes for reviewers:

    Does this PR introduce a user-facing change?

    None
    

    Additional documentation, usage docs, etc.:

    upgrade ingress nginx version
    https://github.com/kubernetes/ingress-nginx/blob/helm-chart-4.0.13/Changelog.md
    
    kind/bug ok-to-test approved lgtm size/S release-note-none 
    opened by chaunceyjiang 24
  • Proxy DevOps APIs with group name and version


    What type of PR is this?

    /kind api-change /kind cleanup /area devops

    What this PR does / why we need it:

    1. Proxy DevOps APIs with group name and version
    2. Refactor old DevOps API registers in kapis package

    Which issue(s) this PR fixes:

    Fixes #4684

    Special notes for reviewers:

    Docker image for test: johnniang/ks-apiserver:proxy-devops-v1alpha1.

    Does this PR introduce a user-facing change?

    None
    

    Additional documentation, usage docs, etc.:

    
    

    /cc @kubesphere/sig-devops /cc @zryfish /cc @wansir

    area/devops approved lgtm size/L tide/merge-method-squash kind/cleanup kind/api-change release-note-none 
    opened by JohnNiang 23
  • remove capability CRDs and update controller


    Signed-off-by: f10atin9 [email protected]

    What type of PR is this?

    /kind feature /area storage

    What this PR does / why we need it:

    Add new feature to manage storage capabilities in console

    Which issue(s) this PR fixes:

    Fixes #4075

    Special notes for reviewers:

    /cc @stoneshi-yunify @dkeven @kubesphere/sig-storage 
    

    Does this PR introduce a user-facing change?

    None
    

    Additional documentation, usage docs, etc.:

    
    
    kind/feature ok-to-test approved size/XXL lgtm area/storage release-note-none 
    opened by f10atin9 23
  • Add a function to shell access to the node in the kubesphere


    What type of PR is this?

    /kind documentation /kind feature

    Why we need it:

    Add a function to shell access to the node in the kubesphere

    Which issue(s) this PR fixes:

    see #4569

    Add a function to shell access to the node in the kubesphere.
    

    the proposal

    kind/feature ok-to-test approved lgtm size/L kind/api-change kind/documentation release-note 
    opened by lynxcat 22
  • Failed to bind to LDAP: userDnuid=admin,ou=Users,dc=kubesphere,dc=io  username=admin


    Jenkins cannot trigger the pipeline correctly

    kubesphere-version:3.3.0 jenkins-version:ks-jenkins:v3.3.0-2.319.1

    jenkins.log

    2022-09-30 14:18:25.685+0000 [id=35171] WARNING o.s.c.s.ResourceBundleMessageSource#getResourceBundle: ResourceBundle [org.acegisecurity.messages] not found for MessageSource: Can't find bundle for base name org.acegisecurity.messages, locale en 2022-09-30 14:18:25.696+0000 [id=34897] SEVERE i.k.j.d.a.KubesphereApiTokenAuthenticator#authenticate: cannot get the review response or status is null by admin 2022-09-30 14:18:25.700+0000 [id=34897] WARNING o.s.c.s.ResourceBundleMessageSource#getResourceBundle: ResourceBundle [org.acegisecurity.messages] not found for MessageSource: Can't find bundle for base name org.acegisecurity.messages, locale en 2022-09-30 14:18:25.701+0000 [id=34897] WARNING o.a.p.l.a.BindAuthenticator2#handleBindException: Failed to bind to LDAP: userDnuid=admin,ou=Users,dc=kubesphere,dc=io username=admin javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials] at java.naming/com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3259) at java.naming/com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:3205) at java.naming/com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2991) at java.naming/com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2905) at java.naming/com.sun.jndi.ldap.LdapCtx.(LdapCtx.java:348) at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxFromUrl(LdapCtxFactory.java:262) at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:226) at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:280) at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:185) at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:115) at java.naming/javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:730) at java.naming/javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:305) at java.naming/javax.naming.InitialContext.init(InitialContext.java:236) at java.naming/javax.naming.InitialContext.(InitialContext.java:208) at java.naming/javax.naming.directory.InitialDirContext.(InitialDirContext.java:101) at org.acegisecurity.ldap.DefaultInitialDirContextFactory.connect(DefaultInitialDirContextFactory.java:180) at org.acegisecurity.ldap.DefaultInitialDirContextFactory.newInitialDirContext(DefaultInitialDirContextFactory.java:261) at org.acegisecurity.ldap.LdapTemplate.execute(LdapTemplate.java:123) at org.acegisecurity.ldap.LdapTemplate.retrieveEntry(LdapTemplate.java:165) at org.acegisecurity.providers.ldap.authenticator.BindAuthenticator.bindWithDn(BindAuthenticator.java:87) at org.acegisecurity.providers.ldap.authenticator.BindAuthenticator.authenticate(BindAuthenticator.java:72) at org.acegisecurity.providers.ldap.authenticator.BindAuthenticator2.authenticate(BindAuthenticator2.java:49) at org.acegisecurity.providers.ldap.LdapAuthenticationProvider.retrieveUser(LdapAuthenticationProvider.java:233) at org.acegisecurity.providers.dao.AbstractUserDetailsAuthenticationProvider$1.retrieveUser(AbstractUserDetailsAuthenticationProvider.java:52) at org.springframework.security.authentication.dao.AbstractUserDetailsAuthenticationProvider.authenticate(AbstractUserDetailsAuthenticationProvider.java:133) at org.acegisecurity.providers.dao.AbstractUserDetailsAuthenticationProvider.authenticate(AbstractUserDetailsAuthenticationProvider.java:66) at org.acegisecurity.providers.ProviderManager.doAuthentication(ProviderManager.java:200) at 
org.acegisecurity.AbstractAuthenticationManager.authenticate(AbstractAuthenticationManager.java:47) at hudson.security.LDAPSecurityRealm$LDAPAuthenticationManager.authenticate(LDAPSecurityRealm.java:1019) at org.acegisecurity.AuthenticationManager.lambda$toSpring$1(AuthenticationManager.java:48) at jenkins.security.BasicHeaderRealPasswordAuthenticator.authenticate2(BasicHeaderRealPasswordAuthenticator.java:59) at jenkins.security.BasicHeaderProcessor.doFilter(BasicHeaderProcessor.java:83) at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:97) at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:110) at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:80) at hudson.security.HttpSessionContextIntegrationFilter2.doFilter(HttpSessionContextIntegrationFilter2.java:62) at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:97) at hudson.security.ChainedServletFilter.doFilter(ChainedServletFilter.java:109) at hudson.security.HudsonFilter.doFilter(HudsonFilter.java:171) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.kohsuke.stapler.compression.CompressionFilter.doFilter(CompressionFilter.java:51) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at hudson.util.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:85) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.kohsuke.stapler.DiagnosticThreadNameFilter.doFilter(DiagnosticThreadNameFilter.java:30) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at jenkins.security.SuspiciousRequestFilter.doFilter(SuspiciousRequestFilter.java:39) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:578) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:516) at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:388) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:633) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:380) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105) at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:386) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) at java.base/java.lang.Thread.run(Thread.java:829) 2022-09-30 14:18:25.701+0000 [id=34897] WARNING o.s.c.s.ResourceBundleMessageSource#getResourceBundle: ResourceBundle [org.acegisecurity.messages] not found for MessageSource: Can't find bundle for base name org.acegisecurity.messages, locale en
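    One way to narrow this down is to reproduce the exact bind outside Jenkins with the DN from the log. A minimal sketch with standard OpenLDAP tooling; the LDAP host below is only an assumption about an in-cluster openldap Service, so substitute your real host and password:

    # "Invalid Credentials" here means the DN/password pair is rejected by the LDAP server itself,
    # independent of Jenkins or KubeSphere.
    ldapsearch -x \
      -H ldap://openldap.kubesphere-system.svc:389 \
      -D "uid=admin,ou=Users,dc=kubesphere,dc=io" \
      -w 'REPLACE_WITH_PASSWORD' \
      -b "dc=kubesphere,dc=io" "(uid=admin)"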

    kind/bug 
    opened by admintertar 0
  • After upgrading from ks 3.2.1/ks 3.3.0 to ks 3.3.1, workspaces-manager and users-manager still exist


    Describe the Bug: I used KubeKey to upgrade KubeSphere from v3.2.1/v3.3.0 to v3.3.1; the platform roles workspaces-manager and users-manager still exist.

    ./kk upgrade --with-kubesphere v3.3.1-rc.3


    Versions Used KubeSphere: v3.2.1 and v3.3.0

    kind/bug priority/high 
    opened by wenxin-01 0
  • Fix: Can not resolve the resource scope correctly


    Signed-off-by: Wenhao Zhou [email protected]

    What type of PR is this?

    /kind bug

    What this PR does / why we need it:

    The resource scope of clusters.cluster.kubesphere.io cannot be resolved correctly: a Cluster's scope is judged to be cluster-scoped because the resource name of the request was not analyzed before the scope was analyzed.

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for reviewers:

    Does this PR introduce a user-facing change?

    
    

    Additional documentation, usage docs, etc.:

    
    

    /cc @wansir

    kind/bug size/XS do-not-merge/release-note-label-needed 
    opened by zhou1203 2
  • The pod is already running, but the UI shows it as waiting


    Describe the Bug: Status.conditions has an extra ContainerDiskPressure condition set to False, which causes the pod status to be displayed incorrectly in the UI.

    Versions Used KubeSphere: v3.2.1 Kubernetes: 1.20.4

    kind/bug area/console 
    opened by hongzhouzi 1
  • platform-self-provisioner + cluster-admin can not see any clusters when created workspace.


    Describe the Bug: There is a user test whose platform role is platform-self-provisioner and who has been invited to the host cluster as cluster-admin. This user cannot see any clusters when creating a workspace.

    Versions Used KubeSphere: ks v3.3.1-rc.3

    Expected behavior: This user can see the host cluster when creating a workspace.

    kind/bug priority/low 
    opened by wenxin-01 0
  • Cluster-admin can not edit basic information and cluster visibility in cluster details page.


    Describe the Bug: There is a user wx2 whose platform role is platform-self-provisioner and who has been invited to the member-01 cluster as cluster-admin. This user cannot edit basic information or cluster visibility.


    Versions Used KubeSphere: ks v3.3.1-rc.3

    kind/bug priority/high 
    opened by wenxin-01 1
Releases(v3.3.1-rc.3)
  • v3.3.1-rc.3(Sep 29, 2022)

  • v3.3.0-rc.3(Jun 22, 2022)

    What's Changed

    • Add the corresponding label 'kind/bug' to the issue template by @LinuxSuRen in https://github.com/kubesphere/kubesphere/pull/4952

    Full Changelog: https://github.com/kubesphere/kubesphere/compare/v3.3.0-rc.2...v3.3.0-rc.3

  • v3.3.0-rc.2(Jun 9, 2022)

    What's Changed

    • enable es v7 TrackTotalHits parameter, make sure kubesphere cosole wo… by @styshoo in https://github.com/kubesphere/kubesphere/pull/4399
    • Fix sonarqube scan results by @RolandMa1986 in https://github.com/kubesphere/kubesphere/pull/4406
    • add kubesphere icon animation and katacoda scenario by @FeynmanZhou in https://github.com/kubesphere/kubesphere/pull/4401
    • Allows to override nginx ingress controller image in kubesphere config by @RolandMa1986 in https://github.com/kubesphere/kubesphere/pull/4418
    • Update the download version to 3.2.0 by @FeynmanZhou in https://github.com/kubesphere/kubesphere/pull/4419
    • Support query pods by status by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4434
    • fix: users can't login with ldap provider by @RolandMa1986 in https://github.com/kubesphere/kubesphere/pull/4436
    • Namespace should not be filterd for Cluster Gateway by @RolandMa1986 in https://github.com/kubesphere/kubesphere/pull/4457
    • fix : modify the version of ks core components by @123liubao in https://github.com/kubesphere/kubesphere/pull/4443
    • Typo fixing in the README_zh.md file by @Lnek in https://github.com/kubesphere/kubesphere/pull/4402
    • remove obsolete package pkg/simple/client/mysql by @123liubao in https://github.com/kubesphere/kubesphere/pull/4463
    • fix hyperlink format in README_zh.md by @wangyao-cmss in https://github.com/kubesphere/kubesphere/pull/4455
    • fix groupbinding controller unittest by @wansir in https://github.com/kubesphere/kubesphere/pull/4471
    • fix: Account password settings can have no capital letters by @live77 in https://github.com/kubesphere/kubesphere/pull/4481
    • fix: generate manifests by @anhoder in https://github.com/kubesphere/kubesphere/pull/4476
    • remove unused files by @zryfish in https://github.com/kubesphere/kubesphere/pull/4495
    • feat: add ExternalKubeAPIEnabled to cluster by @lxm in https://github.com/kubesphere/kubesphere/pull/4528
    • add --controllers option in ks-controller-manager by @live77 in https://github.com/kubesphere/kubesphere/pull/4512
    • fix: spell mistake: hostClusterNmae -> hostClusterName by @songf0011 in https://github.com/kubesphere/kubesphere/pull/4572
    • fix: All ports should be added to VitrualService by @RolandMa1986 in https://github.com/kubesphere/kubesphere/pull/4560
    • Add update cluster kubeconfig API by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4562
    • upgrade ingress nginx version by @chaunceyjiang in https://github.com/kubesphere/kubesphere/pull/4551
    • Support snapshotclass management by @f10atin9 in https://github.com/kubesphere/kubesphere/pull/4578
    • fix: the configuration of the Istio virtualservice is overwritten by @RolandMa1986 in https://github.com/kubesphere/kubesphere/pull/4581
    • update quick start and feature list in README by @FeynmanZhou in https://github.com/kubesphere/kubesphere/pull/4460
    • Sync the expiration time of kubeconfig cert file of the cluster by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4584
    • fix helm_controller assignment to entry in nil map by @chaunceyjiang in https://github.com/kubesphere/kubesphere/pull/4602
    • Add a function to shell access to the node in the kubesphere by @lynxcat in https://github.com/kubesphere/kubesphere/pull/4579
    • Fix: deepcopy before mutating shared objects by @dkeven in https://github.com/kubesphere/kubesphere/pull/4599
    • Remove unused helm template by @RolandMa1986 in https://github.com/kubesphere/kubesphere/pull/4618
    • Support importing Grafana templates to the workspace level. by @zhu733756 in https://github.com/kubesphere/kubesphere/pull/4446
    • Adjust metrics query for monitoring components upgrade by @junotx in https://github.com/kubesphere/kubesphere/pull/4621
    • Delete gateway when namespace is deleted by @RolandMa1986 in https://github.com/kubesphere/kubesphere/pull/4626
    • Convert compoent kubeedge to edgeruntime by @zhu733756 in https://github.com/kubesphere/kubesphere/pull/4478
    • feat: Serving CRD in ks apiserver by @RolandMa1986 in https://github.com/kubesphere/kubesphere/pull/4617
    • Support snapshotcontent management by @f10atin9 in https://github.com/kubesphere/kubesphere/pull/4596
    • Optimize the error message by @wenchajun in https://github.com/kubesphere/kubesphere/pull/4643
    • fix typo in comment by @lining2020x in https://github.com/kubesphere/kubesphere/pull/4664
    • Update automatically generated files by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4669
    • Fix registry verification failed by @wansir in https://github.com/kubesphere/kubesphere/pull/4678
    • Use the kube-system UID to identify if the member cluster already exists by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4651
    • feat: live-reload when configuration changed by @x893675 in https://github.com/kubesphere/kubesphere/pull/4659
    • Unify the omitempty configuration of YAML annotation by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4681
    • add pvc-autoresizer controller to ks-controller-manager by @f10atin9 in https://github.com/kubesphere/kubesphere/pull/4660
    • Fix the "index out of range" issue when sort metrics by @larryliuqing in https://github.com/kubesphere/kubesphere/pull/4691
    • add node device usage metrics by @junotx in https://github.com/kubesphere/kubesphere/pull/4705
    • Proxy DevOps APIs with group name and version by @JohnNiang in https://github.com/kubesphere/kubesphere/pull/4686
    • Set the name of the current cluster into the kubesphere-config configmap by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4679
    • add container processes/threads metrics by @junotx in https://github.com/kubesphere/kubesphere/pull/4711
    • Fix cannot change user status to disabled by @wansir in https://github.com/kubesphere/kubesphere/pull/4695
    • Add "readyToUse" filter field for volumesnapshotcontent by @f10atin9 in https://github.com/kubesphere/kubesphere/pull/4701
    • chore: add licenses check tools by @mangoGoForward in https://github.com/kubesphere/kubesphere/pull/4706
    • Add "snapshot-count" annotation for volumesnapshotClass by @f10atin9 in https://github.com/kubesphere/kubesphere/pull/4718
    • Refactor workspace API and introduced tenant v1alpha3 version by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4721
    • Add ClusterRole field in the multicluster option by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4710
    • Check if the cluster is the same when updating kubeconfig by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4734
    • Add omitempty option to LoginHistoryMaximumEntries field to avoid it being set to 0 by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4751
    • Update go mod files by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4752
    • Add storageclass accessor to ks by @f10atin9 in https://github.com/kubesphere/kubesphere/pull/4770
    • change the default audit webhook port by @wanjunlei in https://github.com/kubesphere/kubesphere/pull/4784
    • docs: update kubekey version to v2.0.0 by @polym in https://github.com/kubesphere/kubesphere/pull/4803
    • Double check in clusterclient if the cluster exists but is not cached by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4831
    • fix controller-manager Dockerfile kustomize targetos by @SinTod in https://github.com/kubesphere/kubesphere/pull/4833
    • fix: fix the gateway variable name. by @2hangchen in https://github.com/kubesphere/kubesphere/pull/4815
    • Fix crash caused by resouce discovery failed by @wansir in https://github.com/kubesphere/kubesphere/pull/4835
    • Fix typo by @wansir in https://github.com/kubesphere/kubesphere/pull/4838
    • Fix: e2e test failed by @xyz-li in https://github.com/kubesphere/kubesphere/pull/4847
    • Cleanup cluster controller and remove unused code by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4605
    • Fix disabled status not work for OAuth by @wansir in https://github.com/kubesphere/kubesphere/pull/4862
    • Fix gpu null pointer exception by @wenchajun in https://github.com/kubesphere/kubesphere/pull/4866
    • fix tcp match error by @StevenBrown008 in https://github.com/kubesphere/kubesphere/pull/4861
    • fix:modify the default resource reservation of gateway system by @hongzhouzi in https://github.com/kubesphere/kubesphere/pull/4865
    • Fix: deny the blocked user request by @wansir in https://github.com/kubesphere/kubesphere/pull/4868
    • Fix: restricted users cannot activate manually by @wansir in https://github.com/kubesphere/kubesphere/pull/4870
    • Add get workspace API by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4880
    • Reduce unnecessary status updates by @wansir in https://github.com/kubesphere/kubesphere/pull/4877
    • refactor: remove the useless CRD by @wansir in https://github.com/kubesphere/kubesphere/pull/4888
    • fix unformatted log by @xiaoping378 in https://github.com/kubesphere/kubesphere/pull/4879
    • cluster not found and repo not found by @xyz-li in https://github.com/kubesphere/kubesphere/pull/4889
    • WIP: add lint workflow by @xyz-li in https://github.com/kubesphere/kubesphere/pull/4856
    • fix: cluster list granted to users is incorrect by @wansir in https://github.com/kubesphere/kubesphere/pull/4896
    • add workspace to review list by @xyz-li in https://github.com/kubesphere/kubesphere/pull/4904
    • chore: update vendor by @wansir in https://github.com/kubesphere/kubesphere/pull/4915
    • add some unit test for models by @ONE7live in https://github.com/kubesphere/kubesphere/pull/4916
    • fix:goroutine leak when open terminal by @anhoder in https://github.com/kubesphere/kubesphere/pull/4918
    • feature:test functions in package resources/v1alpha3 by building restful's re… by @suwliang3 in https://github.com/kubesphere/kubesphere/pull/4881
    • gateway: avoid pod log connection leak by @qingwave in https://github.com/kubesphere/kubesphere/pull/4927
    • complete the help doc by @xyz-li in https://github.com/kubesphere/kubesphere/pull/4921
    • Fix kubeconfig generate bug by @xyz-li in https://github.com/kubesphere/kubesphere/pull/4936
    • Add agent to report additional information. by @xyz-li in https://github.com/kubesphere/kubesphere/pull/4928
    • add unit test for GetServiceTracing by @zhanghw0354 in https://github.com/kubesphere/kubesphere/pull/4937
    • Unified call WriteEntity func by @SinTod in https://github.com/kubesphere/kubesphere/pull/4941
    • Promptly handle the cluster when it is deleted by @iawia002 in https://github.com/kubesphere/kubesphere/pull/4939
    • fix some typos by @qingwave in https://github.com/kubesphere/kubesphere/pull/4938
    • create default token for service account by @xyz-li in https://github.com/kubesphere/kubesphere/pull/4940

    New Contributors

    • @Lnek made their first contribution in https://github.com/kubesphere/kubesphere/pull/4402
    • @wangyao-cmss made their first contribution in https://github.com/kubesphere/kubesphere/pull/4455
    • @live77 made their first contribution in https://github.com/kubesphere/kubesphere/pull/4481
    • @anhoder made their first contribution in https://github.com/kubesphere/kubesphere/pull/4476
    • @songf0011 made their first contribution in https://github.com/kubesphere/kubesphere/pull/4572
    • @chaunceyjiang made their first contribution in https://github.com/kubesphere/kubesphere/pull/4551
    • @lynxcat made their first contribution in https://github.com/kubesphere/kubesphere/pull/4579
    • @lining2020x made their first contribution in https://github.com/kubesphere/kubesphere/pull/4664
    • @larryliuqing made their first contribution in https://github.com/kubesphere/kubesphere/pull/4691
    • @mangoGoForward made their first contribution in https://github.com/kubesphere/kubesphere/pull/4706
    • @polym made their first contribution in https://github.com/kubesphere/kubesphere/pull/4803
    • @SinTod made their first contribution in https://github.com/kubesphere/kubesphere/pull/4833
    • @2hangchen made their first contribution in https://github.com/kubesphere/kubesphere/pull/4815
    • @StevenBrown008 made their first contribution in https://github.com/kubesphere/kubesphere/pull/4861
    • @hongzhouzi made their first contribution in https://github.com/kubesphere/kubesphere/pull/4865
    • @xiaoping378 made their first contribution in https://github.com/kubesphere/kubesphere/pull/4879
    • @ONE7live made their first contribution in https://github.com/kubesphere/kubesphere/pull/4916
    • @suwliang3 made their first contribution in https://github.com/kubesphere/kubesphere/pull/4881
    • @qingwave made their first contribution in https://github.com/kubesphere/kubesphere/pull/4927
    • @zhanghw0354 made their first contribution in https://github.com/kubesphere/kubesphere/pull/4937

    Full Changelog: https://github.com/kubesphere/kubesphere/compare/v3.2.1...v3.3.0-rc.2

  • v3.2.1(Dec 20, 2021)

    New Features and Enhancements

    New Features

    • Add support for filtering Pods by status. (#4434, @iawia002, #2620, @weili520)
    • Add a tip in the image builder creation dialog box, which indicates that containerd is not supported. (#2734, @weili520)
    • Add information about available quotas in the Edit Project Quotas dialog box. (#2619, @weili520)

    Enhancements

    • Change the password verification rules to prevent passwords without uppercase letters. (#4481, @live77)
    • Fix a login issue, where a user from an LDAP identity provider cannot log in if information about the user does not exist on KubeSphere. (#4436, @RolandMa1986)
    • Fix an issue where cluster gateway metrics cannot be obtained. (#4457, @RolandMa1986)
    • Fix incorrect access modes displayed in the volume list. (#2686, @weili520)
    • Remove the Update button on the Gateway Settings page. (#2608, @weili520)
    • Fix a display error of the time range selection drop-down list. (#2715, @weili520)
    • Fix an issue where Secret data text is not displayed correctly when the text is too long. (#2600, @weili520)
    • Fix an issue where StatefulSet creation fails when a volume template is mounted. (#2730, @weili520)
    • Fix an issue where cluster gateway information fails to be obtained when the user does not have permission to view cluster information. (#2695, @harrisonliu5)
    • Fix an issue where status and run records of pipelines are not automatically updated. (#2594, @harrisonliu5)
    • Add a tip for the kubernetesDeploy pipeline step, which indicates that the step is about to be deprecated. (#2660, @harrisonliu5)
    • Fix an issue where HTTP registry addresses of image registry Secrets cannot be validated. (#2795, @harrisonliu5)
    • Fix the incorrect URL of the Harbor image. (#2784, @harrisonliu5)
    • Fix a display error of log search results. (#2598, @weili520)
    • Fix an error in the volume instance YAML configuration. (#2629, @weili520)
    • Fix incorrect available workspace quotas displayed in the Edit Project Quotas dialog box. (#2613, @weili520)
    • Fix an issue in the Monitoring dialog box, where the time range selection drop-down list does not function properly. (#2722, @weili520)
    • Fix incorrect available quotas displayed in the Deployment creation page. (#2668, @weili520)
    • Change the documentation address to kubesphere.io and kubesphere.com.cn. (#2628, @weili520)
    • Fix an issue where Deployment volume settings cannot be modified. (#2656, @weili520)
    • Fix an issue where the container terminal cannot be accessed when the browser language is not English, Simplified Chinese, or Traditional Chinese. (#2702, @weili520)
    • Fix incorrect volume status displayed in the Deployment editing dialog box. (#2622, @weili520)
    • Remove labels displayed on the credential details page. (#2621, @123liubao)
  • v3.2.0(Nov 3, 2021)

    Multi-tenancy & Multi-cluster

    New Features

    • Add support for setting the host cluster name in multi-cluster scenarios, which defaults to host. (#4211, @yuswift)
    • Add support for setting the cluster name in single-cluster scenarios. (#4220, @yuswift)
    • Add support for initializing the default cluster name by using globals.config. (#2283, @harrisonliu5)
    • Add support for scheduling Pod replicas across multiple clusters when creating a Deployment. (#2191, @weili520)
    • Add support for changing cluster weights on the project details page. (#2192, @weili520)

    Bug Fixes

    • Fix an issue in the Create Deployment dialog box in Cluster Management, where a multi-cluster project can be selected by directly entering the project name. (#2125, @fuchunlan)
    • Fix an error that occurs when workspace or cluster basic information is edited. (#2188, @xuliwenwenwen)
    • Remove information about deleted clusters on the Basic Information page of the host cluster. (#2211, @fuchunlan)
    • Add support for sorting Services and editing Service settings in multi-cluster projects. (#2167, @harrisonliu5)
    • Refactor the gateway feature of multi-cluster projects. (#2275, @harrisonliu5)
    • Fix an issue where multi-cluster projects cannot be deleted after the workspace is deleted. (#4365, @wansir)

    Observability

    New Features

    • Add support for HTTPS communication with Elasticsearch. (#4176, @wanjunlei)
    • Add support for setting GPU types when scheduling GPU workloads (see the example after this list). (#4225, @zhu733756)
    • Add support for validating notification settings. (#4216, @wenchajun)
    • Add support for importing Grafana dashboards by specifying a dashboard URL or by uploading a Grafana dashboard JSON file. KubeSphere automatically converts Grafana dashboards into KubeSphere cluster dashboards. (#4194, @zhu733756)
    • Add support for creating Grafana dashboards in Custom Monitoring. (#2214, @harrisonliu5)
    • Optimize the Notification Configuration feature. (#2261, @xuliwenwenwen)
    • Add support for setting a GPU limit in the Edit Default Container Quotas dialog box. (#2253, @weili520)
    • Add a default GPU monitoring dashboard. (#2580, @harrisonliu5)
    • Add the Leader tag to the etcd leader on the etcd monitoring page. (#2445, @xuliwenwenwen)
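
    The GPU items above ultimately map to standard Kubernetes extended resources. As a minimal sketch of the kind of manifest the console produces, the Deployment below requests one GPU; the nvidia.com/gpu resource name (and any vendor-specific node label) is an assumption that depends on the GPU device plugin installed in your cluster.

    ```yaml
    # Minimal sketch: a Deployment requesting one GPU.
    # Assumes an NVIDIA device plugin exposing the nvidia.com/gpu extended resource;
    # adjust the resource name to match your GPU type and device plugin.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: gpu-demo
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: gpu-demo
      template:
        metadata:
          labels:
            app: gpu-demo
        spec:
          containers:
          - name: cuda
            image: nvidia/cuda:11.0.3-base-ubuntu20.04
            command: ["sleep", "infinity"]
            resources:
              limits:
                nvidia.com/gpu: 1   # GPU quotas and limits in KubeSphere constrain this kind of resource
    ```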

    Bug Fixes

    • Fix the incorrect Pod information displayed on the Alerting Messages page and alerting policy details page. (#2215, @harrisonliu5)

    Authentication & Authorization

    New Features

    • Add a built-in OAuth 2.0 server that supports OpenID Connect. (#3525, @wansir)
    • Remove the information confirmation step required when an external identity provider is used (see the configuration sketch after this list). (#4238, @wansir)
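
    For context on the identity provider items above (and the LDAP login fix listed under v3.2.1), external identity providers in KubeSphere 3.x are typically declared under spec.authentication.oauthOptions.identityProviders in the ks-installer ClusterConfiguration. The LDAP sketch below is illustrative only; verify the exact keys against the KubeSphere documentation for your version.

    ```yaml
    # Illustrative sketch (placeholder values): an LDAP identity provider declared
    # in the ks-installer ClusterConfiguration. Verify key names for your version.
    spec:
      authentication:
        jwtSecret: ""
        oauthOptions:
          identityProviders:
          - name: ldap
            type: LDAPIdentityProvider
            mappingMethod: auto
            provider:
              host: ldap.example.org:389
              managerDN: uid=admin,dc=example,dc=org
              managerPassword: "********"
              userSearchBase: dc=example,dc=org
              loginAttribute: uid
              mailAttribute: mail
    ```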

    Bug Fixes

    • Fix incorrect source IP addresses in the login history. (#4331, @wansir)

    Storage

    New Features

    • Change the parameters that determine whether volume clone, volume snapshot, and volume expansion are allowed. (#2199, @weili520)
    • Add support for setting the volume binding mode during storage class creation (see the example after this list). (#2220, @weili520)
    • Add the volume instance management feature. (#2226, @weili520)
    • Add support for multiple snapshot classes. Users are allowed to select a snapshot type when creating a snapshot. (#2218, @weili520)
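
    The storage class items above combine standard StorageClass fields with KubeSphere-specific annotations that the console reads when deciding which volume actions to offer. The sketch below is illustrative: the storageclass.kubesphere.io/* annotation names and the provisioner are assumptions to check against your CSI driver and KubeSphere version.

    ```yaml
    # Minimal sketch: a StorageClass with an explicit volume binding mode.
    # The storageclass.kubesphere.io/* annotations and the provisioner name are
    # assumptions; adjust them for your CSI driver and KubeSphere version.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-standard
      annotations:
        storageclass.kubesphere.io/allow-clone: "true"
        storageclass.kubesphere.io/allow-snapshot: "true"
    provisioner: csi.example.com          # hypothetical CSI driver
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    reclaimPolicy: Delete
    ```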

    Bug Fixes

    • Change the volume access mode options on the Volume Settings tab page. (#2348, @live77)

    Network

    New Features

    • Add the Route sorting, routing rule editing, and annotation editing features on the Route list page. (#2165, @harrisonliu5)
    • Refactor the cluster gateway and project gateway features. (#2262, @harrisonliu5)
    • Add the service name auto-completion feature in routing rule creation. (#2196, @wengzhisong-hz)
    • DNS optimizations for ks-console:
      • Use the name of the ks-apiserver Service directly instead of ks-apiserver.kubesphere-system.svc as the API URL.
      • Add a DNS cache plugin (dnscache) for caching DNS results. (#2435, @live77)

    Bug Fixes

    • Add a Cancel button in the Enable Gateway dialog box. (#2245, @weili520)

    Apps & App Store

    New Features

    • Add support for setting a synchronization interval during app repository creation and editing. (#2311, @xuliwenwenwen)
    • Add a disclaimer in the App Store. (#2173, @xuliwenwenwen)
    • Add support for dynamically loading community-developed Helm charts into the App Store. (#4250, @xyz-li)

    Bug Fixes

    • Fix an issue where the value of kubesphere_app_template_count is always 0 when GetKubeSphereStats is called. (#4130, @ks-ci-bot)

    DevOps

    New Features

    • Set the system to hide the Branch column on the Run Records tab page when the current pipeline is not a multi-branch pipeline. (#2379, @live77)
    • Add the feature of automatically loading Jenkins configurations from ConfigMaps. (#75, @JohnNiang)
    • Add support for triggering pipelines by manipulating CRDs instead of calling Jenkins APIs (see the sketch after this list). (#41, @rick)
    • Add support for containerd-based pipelines. (#171, @rick)
    • Add Jenkins job metadata into pipeline annotations. (#254, @JohnNiang)
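
    The CRD-based trigger item above means a pipeline run can be started declaratively rather than through the Jenkins REST API. The sketch below assumes the PipelineRun custom resource shipped with ks-devops; the apiVersion and spec fields are assumptions to verify against the CRDs installed by your ks-devops release.

    ```yaml
    # Hypothetical sketch: starting a pipeline by creating a PipelineRun object
    # instead of calling the Jenkins API. Verify the apiVersion and field names
    # against the CRDs installed in your cluster.
    apiVersion: devops.kubesphere.io/v1alpha3
    kind: PipelineRun
    metadata:
      generateName: sample-pipeline-run-
      namespace: demo-devops            # a DevOps project namespace
    spec:
      pipelineRef:
        name: sample-pipeline           # the Pipeline object to run
    ```

    Created with kubectl, such an object is expected to be reconciled by the ks-devops controller, which then triggers the corresponding Jenkins job, so external systems only need cluster credentials rather than Jenkins credentials.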

    Bug Fixes

    • Fix an issue where credential creation and update fail when the value of a parameter is too long. (#123, @shihh)
    • Fix an issue where ks-apiserver crashes when the Run Records tab page of a parallel pipeline is opened. (#93, @JohnNiang)

    Dependency Upgrades

    • Upgrade the version of Configuration as Code to 1.53. (#42, @rick)

    Installation

    New Features

    • Add support for Kubernetes v1.21.5 and v1.22.1. (#634, @pixiake)
    • Add support for automatically setting the container runtime (see the configuration sketch after this list). (#738, @pixiake)
    • Add support for automatically updating Kubernetes certificates. (#705, @pixiake)
    • Add support for installing Docker and containerd using a binary file. (#657, @pixiake)
    • Add support for Flannel VxLAN and direct routing. (#606, @kinglong08)
    • Add support for deploying etcd using a binary file. (#634, @pixiake)
    • Add an internal load balancer for deploying a high availability system. (#567, @24sama)
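
    Several installation items above surface as fields in the KubeKey cluster configuration. The trimmed sketch below assumes KubeKey's v1alpha2 Cluster format; key names can differ between KubeKey releases, so compare it with the config-sample.yaml generated by `kk create config`.

    ```yaml
    # Trimmed sketch of a KubeKey Cluster configuration (v1alpha2 format assumed).
    # Field names may differ across KubeKey releases; verify against the file
    # generated by `kk create config`.
    apiVersion: kubekey.kubesphere.io/v1alpha2
    kind: Cluster
    metadata:
      name: sample
    spec:
      hosts:
      - {name: node1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: root, password: "********"}
      roleGroups:
        etcd: [node1]
        control-plane: [node1]
        worker: [node1]
      controlPlaneEndpoint:
        internalLoadbalancer: haproxy   # internal load balancer for HA control planes (#567)
        domain: lb.kubesphere.local
        address: ""
        port: 6443
      kubernetes:
        version: v1.21.5                # newly supported version (#634)
        containerManager: containerd    # container runtime selection (#738)
        autoRenewCerts: true            # automatic certificate renewal (#705)
      etcd:
        type: kubekey                   # etcd deployed from binaries (#634)
    ```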

    Bug Fixes

    • Fix a runtime.RawExtension serialization error. (#731, @pixiake)
    • Fix the nil pointer error during cluster upgrade. (#684, @24sama)
    • Add support for updating certificates of Kubernetes v1.20.0 and later. (#690, @24sama)
    • Fix a DNS address configuration error. (#637, @pixiake)
    • Fix a cluster creation error that occurs when no default gateway address exists. (#661, @liulangwa)

    User Experience

    • Fix language mistakes and optimize wording. (@Patrick-LuoYu, @Felixnoo, @serenashe)
    • Fix incorrect function descriptions. (@Patrick-LuoYu, @Felixnoo, @serenashe)
    • Remove hard-coded and concatenated UI strings to better support UI localization and internationalization. (@Patrick-LuoYu, @Felixnoo, @serenashe)
    • Add conditional statements to display correct English singular and plural forms. (@Patrick-LuoYu, @Felixnoo, @serenashe)
    • Optimize the Pod Scheduling Rules area in the Create Deployment dialog box. (#2170, @qinyueshang)
    • Fix an issue in the Edit Project Quotas dialog box, where the quota value changes to 0 when it is set to infinity. (#2118, @fuchunlan)
    • Fix an issue in the Create ConfigMap dialog box, where the position of the hammer icon is incorrect when the data entry is empty. (#2206, @fuchunlan)
    • Fix the incorrect default value of the time range drop-down list on the Overview page of projects. (#2340, @fuchunlan)
    • Fix an error that occurs during login redirection, where redirection fails if the referer URL contains an ampersand (&). (#2194, @harrisonliu5)
    • Change 1 hours to 1 hour on the custom monitoring dashboard creation page. (#2276, @live77)
    • Fix the incorrect Service types on the Service list page. (#2178, @xuliwenwenwen)
    • Fix the incorrect traffic data displayed in grayscale release job details. (#2422, @harrisonliu5)
    • Fix an issue in the Edit Project Quotas dialog box, where values with two decimal places and values greater than 8 cannot be set. (#2127, @weili520)
    • Allow the About dialog box to be closed by clicking other areas of the window. (#2114, @fuchunlan)
    • Optimize the project title so that the cursor is changed into a hand when hovering over the project title. (#2128, @fuchunlan)
    • Add support for creating ConfigMaps and Secrets in the Environment Variables area of the Create Deployment dialog box. (#2227, @harrisonliu5)
    • Add support for setting Pod annotations in the Create Deployment dialog box. (#2129, @harrisonliu5)
    • Allow domain names to start with an asterisk (*). (#2432, @wengzhisong-hz)
    • Add support for searching for Harbor images in the Create Deployment dialog box. (#2132, @wengzhisong-hz)
    • Add support for mounting volumes to init containers. (#2166, @Sigboom)
    • Remove the workload auto-restart feature in volume expansion. (#4121, @wenhuwang)

    APIs

    Component Changes

    • kubefed: v0.7.0 -> v0.8.1
    • prometheus-operator: v0.42.1 -> v0.43.2
    • notification-manager: v1.0.0 -> v1.4.0
    • fluent-bit: v1.6.9 -> v1.8.3
    • kube-events: v0.1.0 -> v0.3.0
    • kube-auditing: v0.1.2 -> v0.2.0
    • istio: 1.6.10 -> 1.11.1
    • jaeger: 1.17 -> 1.27
    • kiali: v1.26.1 -> v1.38
    • KubeEdge: v1.6.2 -> v1.7.2
  • v3.1.1(Jul 7, 2021)

    Observability 🔍

    Enhancements

    • Optimize port format restrictions in notification settings. #1885
    • Add the function of specifying an existing Prometheus stack during installation. #1528

    Bug Fixes

    • Fix the mail server synchronization error. #1969
    • Fix an issue where the notification manager is reset after installer restart. #1564
    • Fix an issue where the alerting policy cannot be deleted after the monitored object is deleted. #2045
    • Add a default template for monitoring resource creation. #2029
    • Fix an issue where containers display only outdated logs. #1972
    • Fix the incorrect timestamp in alerting information. #1978
    • Optimize parameter rules in alerting policy creation. #1958
    • Fix an issue in custom monitoring, where metrics are not completely displayed due to the incorrect height of the view area. #1989
    • Adjust the limits of the node exporter and kube-state-metrics. #1537
    • Adjust the selector of the etcdHighNumberOfFailedGRPCRequests rule to prevent incorrect etcd alerts. #1540
    • Fix an issue during system upgrade, where the events ruler component is not upgraded to the latest version. #1594
    • Fix bugs of the kube_node_status_allocatable_memory_bytes and kube_resourcequota selectors. #1560

    Service Mesh 🕸

    Enhancements

    • Add a time range selector to the Tracing tab. #2022

    Metering and Billing 💰

    Enhancements

    • Optimize the metering and billing UI. #1896
    • Change the color of a button on the metering and billing page. #1934

    Bug Fixes

    • Fix an issue where OpenPitrix resources are not included in metering and billing. #3871
    • Fix an error generated in metering and billing for the system-workspace workspace. #2083
    • Fix an issue where projects are not completely displayed in the multi-cluster metering and billing list. #2066
    • Fix an error on the billing page generated when a dependent cluster is not loaded. #2054

    Security 🛅

    Enhancements

    • Switch the branch of jwt-go to fix CVE-2020-26160. #3991
    • Upgrade the Protobuf version to v1.3.2 to fix CVE-2021-3121. #3944
    • Upgrade the Crypto version to the latest version to fix CVE-2020-29652. #3997
    • Remove the yarn.lock file to prevent incorrect CVE bug reports. #2024

    Storage 🗃

    Enhancements

    • Improve the concurrency performance of the S3 uploader. #4011
    • Add preset CSI Provisioner CR settings. #1536

    Bug Fixes

    • Remove the invalid function of automatic storage class detection. #3947
    • Fix incorrect storage resource units of project quotas. #3973

    KubeEdge Integration 🔗

    Bug Fixes

    • Fix the incorrect advertiseAddress setting of the KubeEdge CloudCore component. #1561

  • v3.0.0(Aug 28, 2020)

  • v2.1.1(Feb 24, 2020)

  • v2.1.0(Nov 12, 2019)

  • advanced-2.0.2(Jul 11, 2019)

  • advanced-2.0.0(May 20, 2019)

  • advanced-1.0.1(Jan 29, 2019)

    See Release Note for more details

    SHA512 for kubesphere-advanced-1.0.1.tar.gz : 29da7ecabff9a4541a8bead46f0f9ad81862df14dfafff019c8b9aaff1996861b93a00349dbf977c21024e705239b9605e37fb975bfe4eec385c1b1c9063d97c
