Enterprise-grade container platform tailored for multi-cloud and multi-cluster management

Overview

KubeSphere Container Platform




What is KubeSphere

English | 中文

KubeSphere is a distributed operating system for cloud-native applications, with Kubernetes as its kernel. Its plug-and-play architecture lets third-party applications integrate seamlessly into its ecosystem. KubeSphere is also a multi-tenant, enterprise-grade container platform with full-stack automated IT operations and streamlined DevOps workflows. It provides a developer-friendly wizard web UI that helps enterprises build a robust and feature-rich platform, covering the most common functionality needed for an enterprise Kubernetes strategy; see the Feature List for details.

The following screenshots offer a closer look at KubeSphere. Please check What is KubeSphere for further information.

Screenshots: Workbench, Project Resources, CI/CD Pipeline, App Store

Demo Environment

Use the account demo1 / Demo123 to log in to the demo environment. Please note that this account is granted view access only. You can also take a quick look at the KubeSphere Demo Video.

Architecture

KubeSphere uses a loosely coupled architecture that separates the frontend from the backend. The backend components are delivered as Docker containers, and external systems can access them through their REST APIs. See Architecture for details.
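Because every backend component sits behind ks-apiserver's REST API, an external system can drive KubeSphere the same way the console does. A minimal sketch, assuming the KubeSphere 3.x OAuth token endpoint and the default `kubesphere`/`kubesphere` client credentials (both are assumptions; verify against your release's API documentation):

```shell
# Hedged sketch: request an access token from ks-apiserver's REST API.
# The /oauth/token endpoint and the kubesphere/kubesphere client credentials
# are assumptions based on KubeSphere 3.x defaults -- verify on your release.
KS_APISERVER=${KS_APISERVER:-}   # e.g. http://ks-apiserver.kubesphere-system.svc
if [ -z "$KS_APISERVER" ]; then
  msg="set KS_APISERVER to run this against a live cluster"
else
  msg=$(curl -s -X POST "$KS_APISERVER/oauth/token" \
    -d 'grant_type=password' -d 'username=admin' -d 'password=P@88w0rd' \
    -d 'client_id=kubesphere' -d 'client_secret=kubesphere')
fi
echo "$msg"
```

The returned token can then be passed as a `Bearer` header to the other backend APIs.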


Features

Feature Description
Provisioning Kubernetes Clusters Deploys Kubernetes on your infrastructure out of the box, supporting both online and air-gapped installation
Multi-cluster Management Provides a centralized control plane to manage multiple Kubernetes clusters; supports application distribution across multiple clusters and cloud providers
Kubernetes Resource Management Provides a web console for creating and managing Kubernetes resources, with powerful observability including monitoring, logging, events, alerting, and notification
DevOps System Provides out-of-the-box CI/CD based on Jenkins, along with automated workflow tools including Binary-to-Image (B2I) and Source-to-Image (S2I)
Application Store Provides an application store for Helm-based applications and application lifecycle management
Service Mesh (Istio-based) Provides fine-grained traffic management, observability, and tracing for distributed microservice applications, with visualization of the traffic topology
Rich Observability Provides multi-dimensional monitoring metrics and multi-tenant management of logging, events, and auditing; supports alerting and notification for both applications and infrastructure
Multi-tenant Management Provides unified authentication with fine-grained roles and a three-tier authorization system; supports AD/LDAP authentication
Infrastructure Management Supports node management and monitoring, including adding new nodes to a Kubernetes cluster
Storage Support Supports open source storage solutions such as GlusterFS, Ceph RBD, NFS, and Local PV (default); provides CSI plugins to consume storage from cloud providers
Network Support Supports Calico, Flannel, etc.; provides Network Policy management and the Porter load-balancer plug-in for bare metal
GPU Support Supports adding GPU nodes and vGPU, enabling ML applications such as TensorFlow to run on Kubernetes

Please see Features and Benefits for further information.
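As an illustration of the storage row above, the cluster's default class is an ordinary Kubernetes StorageClass. A hedged sketch of a default Local PV class; the `openebs.io/local` provisioner string is an assumption (check `kubectl get sc` on your own cluster to see what the installer actually created):

```yaml
# Hypothetical default StorageClass; provisioner name is an assumption.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```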


Latest Release

KubeSphere 3.0.0 is now generally available! See the Release Notes for 3.0.0 for details on the updates.

Installation

KubeSphere can run anywhere, from on-premises datacenters to any cloud to the edge. In addition, it can be deployed on any running Kubernetes cluster whose version is compatible.

QuickStarts

Quickstarts include six hands-on lab exercises that help you quickly get started with KubeSphere.

Installing on Existing Kubernetes Cluster

Installing on Linux
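The Linux path is driven by KubeKey (`kk`); besides the all-in-one mode visible in the issue logs below (`kk create cluster --with-kubernetes ... --with-kubesphere`), a multi-node install reads a cluster spec file. A minimal sketch of that spec, with field names taken from kk v1.x's generated config-sample; host names, addresses, and credentials are placeholders:

```yaml
# Hypothetical kk cluster spec; replace hosts and credentials with your own.
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: root, password: changeme}
  - {name: node1, address: 172.16.0.5, internalAddress: 172.16.0.5, user: root, password: changeme}
  roleGroups:
    etcd:
    - master1
    master:
    - master1
    worker:
    - node1
  kubernetes:
    version: v1.18.6
  network:
    plugin: calico
```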

Contributing, Support, Discussion, and Community

We ❤️ your contribution. The community walks you through how to get started contributing to KubeSphere. The development guide explains how to set up a development environment.

Please submit any KubeSphere bugs, issues, and feature requests to the KubeSphere GitHub issue tracker.

Who is using KubeSphere

The user case studies page lists the project's users. You can submit a PR to add your institution's name and homepage if you are using KubeSphere.

Landscapes



    

KubeSphere is a member of CNCF and a Kubernetes Conformance Certified platform, which enriches the CNCF Cloud Native Landscape.

Issues
  • Installing KubeSphere fails, repeatedly showing "Waiting for etcd to start"


    [master1 172.16.0.2] MSG: Configuration file already exists Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start Waiting for etcd to start WARN[23:31:55 CST] Task failed ...
    WARN[23:31:55 CST] error: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master1.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master1-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://172.16.0.2:2379,https://172.16.0.3:2379,https://172.16.0.4:2379 cluster-health | grep -q 'cluster is healthy'" Error: client: etcd cluster is unavailable or misconfigured; error #0: client: endpoint https://172.16.0.4:2379 exceeded header timeout ; error #1: client: endpoint https://172.16.0.2:2379 exceeded header timeout ; error #2: client: endpoint https://172.16.0.3:2379 exceeded header timeout

    error #0: client: endpoint https://172.16.0.4:2379 exceeded header timeout error #1: client: endpoint https://172.16.0.2:2379 exceeded header timeout error #2: client: endpoint https://172.16.0.3:2379 exceeded header timeout: Process exited with status 1 Error: Failed to start etcd cluster: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master1.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master1-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://172.16.0.2:2379,https://172.16.0.3:2379,https://172.16.0.4:2379 cluster-health | grep -q 'cluster is healthy'" Error: client: etcd cluster is unavailable or misconfigured; error #0: client: endpoint https://172.16.0.4:2379 exceeded header timeout ; error #1: client: endpoint https://172.16.0.2:2379 exceeded header timeout ; error #2: client: endpoint https://172.16.0.3:2379 exceeded header timeout

    error #0: client: endpoint https://172.16.0.4:2379 exceeded header timeout error #1: client: endpoint https://172.16.0.2:2379 exceeded header timeout error #2: client: endpoint https://172.16.0.3:2379 exceeded header timeout: Process exited with status 1 Usage: kk create cluster [flags]

    Flags: -f, --filename string Path to a configuration file -h, --help help for cluster --skip-pull-images Skip pre pull images --with-kubernetes string Specify a supported version of kubernetes --with-kubesphere Deploy a specific version of kubesphere (default v3.0.0) -y, --yes Skip pre-check of the installation

    Global Flags: --debug Print detailed information (default true)

    Failed to start etcd cluster: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master1.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master1-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://172.16.0.2:2379,https://172.16.0.3:2379,https://172.16.0.4:2379 cluster-health | grep -q 'cluster is healthy'" Error: client: etcd cluster is unavailable or misconfigured; error #0: client: endpoint https://172.16.0.4:2379 exceeded header timeout ; error #1: client: endpoint https://172.16.0.2:2379 exceeded header timeout ; error #2: client: endpoint https://172.16.0.3:2379 exceeded header timeout

    error #0: client: endpoint https://172.16.0.4:2379 exceeded header timeout error #1: client: endpoint https://172.16.0.2:2379 exceeded header timeout error #2: client: endpoint https://172.16.0.3:2379 exceeded header timeout: Process exited with status 1
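For reference, the installer's health probe is simply grepping `etcdctl cluster-health` output, as the command in the log above shows. The probe logic can be reproduced against canned output to make the failure mode clear; the sample text below is illustrative, and on a real master you would pipe the actual etcdctl command from the log into the same grep:

```shell
# Reproduce the installer's health probe against canned etcdctl output.
# 'exceeded header timeout' on every endpoint, as in the log above, usually
# means the masters cannot reach each other on ports 2379/2380 (firewall,
# or etcd not yet listening).
sample='member 55ab80c1 is healthy: got healthy result from https://172.16.0.2:2379
cluster is healthy'
if echo "$sample" | grep -q 'cluster is healthy'; then
  probe="etcd OK"
else
  probe="etcd unreachable: check connectivity between masters"
fi
echo "$probe"
```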

    area/installation kind/support 
    opened by Widerstehen 33
  • Offline installation of version 2.0 fails on CentOS 7.6


    Using the 2.0 offline installation package (https://kubesphere.io/download/offline/advanced-2.0.0), running install.sh as root fails.

    Screenshot 2019-07-19 10.29.45 AM
    opened by bowenliang123 28
  • Install Error


    OS Version: CentOS Linux release 7.5.1804 (Core); KubeSphere Version: kubesphere-all-advanced-2.0.0-dev-20190514

    SELinux is disabled, swap is off, and firewalld is disabled.

    The error output is as follows:

    kubernetes/preinstall : Update package management cache (YUM) -------- 25.34s
    kubernetes/preinstall : Install packages requirements ----------------- 2.96s
    gather facts from all instances ---------------------------------------- 0.72s
    bootstrap-os : Install libselinux-python and yum-utils for bootstrap --- 0.70s
    bootstrap-os : Check python-pip package --------------------------------- 0.69s
    bootstrap-os : Install pip for bootstrap -------------------------------- 0.68s
    download : Download items ----------------------------------------------- 0.66s
    bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) - 0.65s
    kubernetes/preinstall : Create kubernetes directories ------------------- 0.64s
    download : Sync container ------------------------------------------------ 0.64s
    download : Download items ------------------------------------------------ 0.62s
    download : Sync container ------------------------------------------------ 0.62s
    bootstrap-os : Gather nodes hostnames ------------------------------------ 0.60s
    kubernetes/preinstall : Set selinux policy ------------------------------- 0.54s
    container-engine/docker : Ensure old versions of Docker are not installed. | RedHat - 0.44s
    container-engine/docker : ensure service is started if docker packages are already present - 0.44s
    bootstrap-os : Install epel-release for bootstrap ------------------------ 0.43s
    kubernetes/preinstall : Create cni directories --------------------------- 0.41s
    kubernetes/preinstall : Hosts | populate inventory into hosts file ------- 0.41s
    kubernetes/preinstall : Remove swapfile from /etc/fstab ------------------ 0.39s
    failed!

    opened by cnicy 28
  • [Urgent] KubeSphere 2.0.2 offline installation fails on CentOS Linux release 7.5.1804


    Problem description

    TASK [ks-devops/ks-devops : OpenPitrix | Waiting for openpitrix-db] *****************************************************************************************************************************************************************************
    Tuesday 29 October 2019  21:06:32 +0800 (0:00:00.776)       0:06:25.399 ******* 
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (15 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (14 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (13 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (12 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (11 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (10 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (9 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (8 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (7 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (6 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (5 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (4 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (3 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (2 retries left).
    FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (1 retries left).
    fatal: [ks-allinone]: FAILED! => {"attempts": 15, "changed": true, "cmd": "/usr/local/bin/kubectl -n openpitrix-system get pod | grep openpitrix-db-deployment | awk '{print $3}'", "delta": "0:00:00.277817", "end": "2019-10-29 21:11:40.450306", "rc": 0, "start": "2019-10-29 21:11:40.172489", "stderr": "", "stderr_lines": [], "stdout": "Pending", "stdout_lines": ["Pending"]}
    
    PLAY RECAP **************************************************************************************************************************************************************************************************************************************
    ks-allinone                : ok=244  changed=90   unreachable=0    failed=1   
    
    Tuesday 29 October 2019  21:11:40 +0800 (0:05:07.919)       0:11:33.318 ******* 
    =============================================================================== 
    

    Installation environment: 8 vCPU, 32 GB RAM, CentOS Linux release 7.5.1804; offline installation of KubeSphere 2.0.2 in all-in-one mode.

    Error messages or screenshots

    ks-devops/ks-devops : OpenPitrix | Waiting for openpitrix-db --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 307.92s
    openpitrix : OpenPitrix | Installing OpenPitrix(2) ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 139.90s
    ks-monitor : ks-monitor | Getting monitor installation files ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 20.57s
    openpitrix : OpenPitrix | Getting OpenPitrix installation files ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 18.29s
    prepare/nodes : Ceph RBD | Installing ceph-common (YUM) --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 14.92s
    prepare/nodes : KubeSphere| Installing JQ (YUM) ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 8.59s
    metrics-server : Metrics-Server | Getting metrics-server installation files -------------------------------------------------------------------------------------------------------------------------------------------------------------- 7.80s
    prepare/base : KubeSphere | Labeling system-workspace ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 6.91s
    ks-monitor : ks-monitor | Creating manifests --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 5.35s
    ks-logging : ks-logging | Creating manifests --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.61s
    prepare/base : KubeSphere | Getting installation init files ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 4.03s
    download : Download items ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.85s
    prepare/base : KubeSphere | Init KubeSphere ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.70s
    prepare/nodes : GlusterFS | Installing glusterfs-client (YUM) ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.38s
    prepare/base : KubeSphere | Create kubesphere namespace ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.32s
    prepare/base : KubeSphere | Creating manifests ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.03s
    ks-console : ks-console | Creating manifests --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.95s
    openpitrix : OpenPitrix | Getting OpenPitrix installation files -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.82s
    ks-devops/s2i : S2I | Creating manifests ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.76s
    ks-monitor : ks-monitor | Installing prometheus-operator --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.73s
    failed!
    [[email protected] scripts]# kubectl -n openpitrix-system get pod
    NAME                                                      READY   STATUS     RESTARTS   AGE
    openpitrix-api-gateway-deployment-6bc9747f6c-xxdl2        0/1     Init:0/2   0          8m43s
    openpitrix-app-manager-deployment-7df95d8848-hv8cj        0/1     Init:0/2   0          8m42s
    openpitrix-category-manager-deployment-694bd85647-6pdwm   0/1     Init:0/2   0          8m42s
    openpitrix-cluster-manager-deployment-5c8c797d59-265wf    0/1     Init:0/2   0          8m42s
    openpitrix-db-deployment-79f9db9dd9-fcp5s                 0/1     Pending    0          8m45s
    openpitrix-etcd-deployment-84d677449b-mt6r5               0/1     Pending    0          8m44s
    openpitrix-iam-service-deployment-6bc657d9c6-khgtk        0/1     Init:0/2   0          8m41s
    openpitrix-job-manager-deployment-d9d966976-7q7dv         0/1     Init:0/2   0          8m41s
    openpitrix-minio-deployment-594df9bb5-wssbw               0/1     Pending    0          8m44s
    openpitrix-repo-indexer-deployment-5856985997-fvh76       0/1     Init:0/2   0          8m40s
    openpitrix-repo-manager-deployment-b9888bf58-87zrs        0/1     Init:0/2   0          8m40s
    openpitrix-runtime-manager-deployment-54c6bb64f4-zmfnd    0/1     Init:0/2   0          8m40s
    openpitrix-task-manager-deployment-5479966bfc-lj6xl       0/1     Init:0/2   0          8m39s
    [[email protected] scripts]# cat /etc/resolv.conf
    

    Installer version: 8 vCPU, 32 GB RAM, CentOS Linux release 7.5.1804; offline installation of KubeSphere 2.0.2 in all-in-one mode.
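The `kubectl get pod` output above shows the db, etcd, and minio pods stuck in Pending, which on an all-in-one node usually means no PersistentVolume could be bound or node resources are insufficient; `kubectl describe pod` shows the scheduler's reason. The installer's own probe (the `kubectl ... | grep | awk` command in the fatal message) can be mimicked against canned output:

```shell
# Filter Pending pods the way the failed Ansible task does (column $3 is
# STATUS), here against a canned sample of the pod listing shown above.
sample='openpitrix-db-deployment-79f9db9dd9-fcp5s 0/1 Pending 0 8m45s
openpitrix-etcd-deployment-84d677449b-mt6r5 0/1 Pending 0 8m44s
openpitrix-minio-deployment-594df9bb5-wssbw 0/1 Pending 0 8m44s
openpitrix-api-gateway-deployment-6bc9747f6c-xxdl2 0/1 Init:0/2 0 8m43s'
pending=$(echo "$sample" | awk '$3 == "Pending" {print $1}')
echo "$pending"
```

On a live node, `kubectl -n openpitrix-system describe pod <name>` on each of the printed pods would reveal whether the blocker is unbound PVCs or insufficient CPU/memory.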

    opened by arraycto 28
  • [2.1] Creating a DevOps project reports Service Unavailable after enabling only the DevOps installation


    After enabling only the DevOps installation, creating a DevOps project reports Service Unavailable, even though all the DevOps components are healthy.


    area/devops kind/bug kind/need-to-verify 
    opened by FeynmanZhou 27
  • support OIDC identity provider


    Signed-off-by: hongming [email protected]

    What type of PR is this?

    /kind feature

    What this PR does / why we need it:

    Support OIDC identity provider.

    Which issue(s) this PR fixes:

    Fixes #2941

    Additional documentation, usage docs, etc.:

    See also:

    https://github.com/kubesphere/community/blob/master/sig-multitenancy/auth/how-to-configure-authentication.md

    https://github.com/kubesphere/community/blob/master/sig-multitenancy/auth/oidc-Identity-provider.md

    approved dco-signoff: yes kind/feature lgtm size/XXL 
    opened by wansir 25
  • Machine freezes a few hours after installation on CentOS 7.6


    General remarks

    Fresh install of CentOS 7.6, then installed KubeSphere v2.1; after running normally for a while, the machine freezes.

    Describe the bug(描述下问题)

    1. Freshly installed CentOS 7.6, disabled firewalld, then installed the latest KubeSphere v2.1;

    2. After installation everything works normally; the load average reported by top stays around 1, as shown:

    3. After a few hours, the load average from top spikes above 200 and logging in becomes impossible; after manually rebooting the server it is usable again, with error messages as shown:

    4. The login page renders normally, but logging in with username and password returns a 500, as shown:

    For UI issues please also add a screenshot that shows the issue.

    Versions used (KubeSphere/Kubernetes): KubeSphere kubesphere-all-v2.1.0

    Environment (hardware configuration)

    1 master: 72 CPU / 32 GB, 0 nodes; CentOS version: CentOS Linux release 7.6.1810 (Core)

    (and other info are welcomed to help us debugging)

    To Reproduce: Steps to reproduce the behavior:

    1. Uninstall KubeSphere; checking the machine the next day, it works normally and the load is normal;
    2. Reinstall KubeSphere; the next day the machine cannot be logged into, although the login page still renders, and the problem reproduces.
    stale 
    opened by huanghe 25
  • CronJob scheduled task times default to UTC


    [v2.0.2][cronjob][UTC] I set up a scheduled task today and found it did not run at the expected time; investigation showed the schedule defaults to UTC. Could there be a way to add tasks based on local time?
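Until time-zone-aware schedules are available, the workaround is to shift the cron hour by your UTC offset yourself, since Kubernetes evaluates CronJob schedules on the controller's clock, which is normally UTC. A quick sketch of the conversion for a job intended to run at 09:00 Beijing time (UTC+8):

```shell
# Convert a local cron hour to the UTC hour a Kubernetes CronJob needs.
local_hour=9   # intended local run time (09:00)
utc_offset=8   # Beijing is UTC+8
utc_hour=$(( (local_hour - utc_offset + 24) % 24 ))
echo "schedule: \"0 $utc_hour * * *\""
```

So a 09:00 Beijing job is written as `0 1 * * *` in the CronJob spec.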

    kind/design kind/feature-request priority/high 
    opened by errorcode7 21
  • ks-account pod in CrashLoopBackOff after fresh install of kubesphere v2.1.1


    Describe the Bug: I installed KubeSphere v2.1.1 on a fresh install of rke v1.0.4. Everything seems OK except the "ks-account" pod, which is in "CrashLoopBackOff" mode. The pod fails with "create client certificate failed: <nil>".

    I can display the console login page but can't log in; it fails with "unable to access backend services". I did the procedure twice after resetting the nodes, and the rke cluster is healthy and fully operational.

    Versions Used KubeSphere: 2.1.1 Kubernetes: rancher/rke v1.0.4 fresh install

    Environment 3 masters 8G + 3 workers 8G, all with centos 7.7 fully updated, selinux and firewalld disabled

    How To Reproduce Steps to reproduce the behavior:

    1. Setup 6 nodes with centos 7.7 8G
    2. Install rke with 3 masters and 3 workers
    3. Install KubeSphere by following the instructions here

    Expected behavior all pods in the kubesphere-system up and running, then be able to login to the console

    Logs

    kubectl version
    Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
    
    kubectl get pods -n kubesphere-system
    NAME                                   READY   STATUS             RESTARTS   AGE
    ks-account-789cd8bbd5-nlvg9            0/1     CrashLoopBackOff   20         79m
    ks-apigateway-5664c4b76f-8vsf4         1/1     Running            0          79m
    ks-apiserver-75f468d48b-9dfwb          1/1     Running            0          79m
    ks-console-78bddc5bfb-zlzq9            1/1     Running            0          79m
    ks-controller-manager-d4788677-6pxhd   1/1     Running            0          79m
    ks-installer-75b8d89dff-rl76c          1/1     Running            0          81m
    openldap-0                             1/1     Running            0          80m
    redis-6fd6c6d6f9-6nfmd                 1/1     Running            0          80m
    
    kubectl logs -n kubesphere-system ks-account-789cd8bbd5-nlvg9
    W0226 00:40:43.093650       1 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
    E0226 00:40:44.709957       1 kubeconfig.go:62] create client certificate failed: <nil>
    E0226 00:40:44.710030       1 im.go:1030] create user kubeconfig failed sonarqube create client certificate failed: <nil>
    E0226 00:40:44.710057       1 im.go:197] user init failed sonarqube create client certificate failed: <nil>
    E0226 00:40:44.710073       1 im.go:87] create default users user sonarqube init failed: create client certificate failed: <nil>
    Error: user sonarqube init failed: create client certificate failed: <nil>
    Usage:
      ks-iam [flags]
    Flags:
          --add-dir-header                            If true, adds the file directory to the header
          --admin-email string                        default administrator's email (default "[email protected]")
          --admin-password string                     default administrator's password (default "passw0rd")
    {...}
    
    
    kubectl describe pod ks-account-789cd8bbd5-nlvg9 -n kubesphere-system
    Name:         ks-account-789cd8bbd5-nlvg9
    Namespace:    kubesphere-system
    Priority:     0
    Node:         worker3/192.168.5.47
    Start Time:   Tue, 25 Feb 2020 18:22:55 -0500
    Labels:       app=ks-account
                  pod-template-hash=789cd8bbd5
                  tier=backend
                  version=v2.1.1
    Annotations:  cni.projectcalico.org/podIP: 10.62.5.7/32
    Status:       Running
    IP:           10.62.5.7
    IPs:
      IP:           10.62.5.7
    Controlled By:  ReplicaSet/ks-account-789cd8bbd5
    Init Containers:
      wait-redis:
        Container ID:  docker://1d63b336dac9e322155ee8cc31bc266df5ab4f734de5cf683b33d8cf6abc940b
        Image:         alpine:3.10.4
        Image ID:      docker-pullable://docker.io/[email protected]:7c3773f7bcc969f03f8f653910001d99a9d324b4b9caa008846ad2c3089f5a5f
        Port:          <none>
        Host Port:     <none>
        Command:
          sh
          -c
          until nc -z redis.kubesphere-system.svc 6379; do echo "waiting for redis"; sleep 2; done;
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Tue, 25 Feb 2020 18:22:56 -0500
          Finished:     Tue, 25 Feb 2020 18:22:56 -0500
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-hk59s (ro)
      wait-ldap:
        Container ID:  docker://b51a105434877c6a17cd4cc14bc6ad40e9d06c5542eadf1b62855a1c12cb847c
        Image:         alpine:3.10.4
        Image ID:      docker-pullable://docker.io/alpine@sha256:7c3773f7bcc969f03f8f653910001d99a9d324b4b9caa008846ad2c3089f5a5f
        Port:          <none>
        Host Port:     <none>
        Command:
          sh
          -c
          until nc -z openldap.kubesphere-system.svc 389; do echo "waiting for ldap"; sleep 2; done;
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Tue, 25 Feb 2020 18:22:57 -0500
          Finished:     Tue, 25 Feb 2020 18:23:13 -0500
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-hk59s (ro)
    Containers:
      ks-account:
        Container ID:  docker://033c55c2a717e672d4abe256a9955f01d46ee47e08147a0660470ac0a9ae1055
        Image:         kubesphere/ks-account:v2.1.1
        Image ID:      docker-pullable://docker.io/kubesphere/ks-account@sha256:6fccef53ab7a269160ce7816dfe3583730ac7fe2064ea5c9e3ce5e366f3470eb
        Port:          9090/TCP
        Host Port:     0/TCP
        Command:
          ks-iam
          --logtostderr=true
          --jwt-secret=$(JWT_SECRET)
          --admin-password=$(ADMIN_PASSWORD)
          --enable-multi-login=False
          --token-idle-timeout=40m
          --redis-url=redis://redis.kubesphere-system.svc:6379
          --generate-kubeconfig=true
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Error
          Exit Code:    1
          Started:      Tue, 25 Feb 2020 19:55:58 -0500
          Finished:     Tue, 25 Feb 2020 19:55:59 -0500
        Ready:          False
        Restart Count:  23
        Limits:
          cpu:     1
          memory:  500Mi
        Requests:
          cpu:     20m
          memory:  100Mi
        Environment:
          KUBECTL_IMAGE:   kubesphere/kubectl:v1.0.0
          JWT_SECRET:      <set to the key 'jwt-secret' in secret 'ks-account-secret'>      Optional: false
          ADMIN_PASSWORD:  <set to the key 'admin-password' in secret 'ks-account-secret'>  Optional: false
        Mounts:
          /etc/ks-iam from user-init (rw)
          /etc/kubesphere from kubesphere-config (rw)
          /etc/kubesphere/rules from policy-rules (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-hk59s (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             False
      ContainersReady   False
      PodScheduled      True
    Volumes:
      policy-rules:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      policy-rules
        Optional:  false
      user-init:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      user-init
        Optional:  false
      kubesphere-config:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      kubesphere-config
        Optional:  false
      kubesphere-token-hk59s:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  kubesphere-token-hk59s
        Optional:    false
    QoS Class:       Burstable
    Node-Selectors:  <none>
    Tolerations:     CriticalAddonsOnly
                     node-role.kubernetes.io/master:NoSchedule
                     node.kubernetes.io/not-ready:NoExecute for 60s
                     node.kubernetes.io/unreachable:NoExecute for 60s
    Events:
      Type     Reason   Age                    From              Message
      ----     ------   ----                   ----              -------
      Warning  BackOff  3m46s (x413 over 93m)  kubelet, worker3  Back-off restarting failed container
    
    opened by titou10titou10 19
  • KubeSphere 2.0.1 installation failed: no matches for kind "S2iBuilderTemplate"

    KubeSphere 2.0.1 installation failed: no matches for kind "S2iBuilderTemplate"

    Installing KubeSphere 2.0.1 fails at TASK [ks-devops/s2i : S2I | Deploy S2I template&&operator] (Wednesday 19 June 2019 10:48:52 +0800):

    failed: [ksm1] (item=java.yaml) => {"changed": true, "cmd": "/usr/local/bin/kubectl apply -f /etc/kubesphere/s2i/java.yaml", "msg": "non-zero return code", "rc": 1, "stderr": "error: unable to recognize \"/etc/kubesphere/s2i/java.yaml\": no matches for kind \"S2iBuilderTemplate\" in version \"devops.kubesphere.io/v1alpha1\""}
    failed: [ksm1] (item=nodejs.yaml) => {"changed": true, "cmd": "/usr/local/bin/kubectl apply -f /etc/kubesphere/s2i/nodejs.yaml", "msg": "non-zero return code", "rc": 1, "stderr": "error: unable to recognize \"/etc/kubesphere/s2i/nodejs.yaml\": no matches for kind \"S2iBuilderTemplate\" in version \"devops.kubesphere.io/v1alpha1\""}

    The operating system is CentOS 7.6.

    docker -v

    Docker version 18.06.2-ce, build 6d37f41

    opened by alex8158 19
  • ks-jenkins deploy often fails

    ks-jenkins deploy often fails

    System: CentOS 7; Kubernetes version: 1.19.10; KubeSphere version: v3.1.0

    The resource quota for ks-jenkins has been raised and JVM parameters have been provided, but the deployment still fails. Moreover, even when Jenkins deploys successfully, the UI is very slow, especially the system configuration pages under system management, which take about 50 seconds to open. (screenshots)

    kind/support 
    opened by yyt6200 4
  • go build with vendor by default

    go build with vendor by default

    What type of PR is this?

    /kind feature

    What this PR does / why we need it:

    Set GOFLAGS to -mod=vendor by default to enable go build with vendor.

    Currently, make all, make ks-apiserver, and other make targets download the modules, which is unnecessary since all of the modules are included in the vendor/ dir. This PR exports GOFLAGS=-mod=vendor by default, so go build, go test, and go vet will use the modules in the vendor/ dir.
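    As a minimal sketch (assuming a POSIX shell; GOFLAGS and -mod=vendor are real Go toolchain settings), the default this PR introduces can be reproduced manually:

```shell
# Exporting GOFLAGS makes every subsequent `go build`, `go test`, and
# `go vet` resolve dependencies from the vendor/ directory instead of
# downloading modules over the network.
export GOFLAGS=-mod=vendor
echo "GOFLAGS=$GOFLAGS"
```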

    Which issue(s) this PR fixes:

    Fixes #4094

    Special notes for reviewers:

    do-not-merge/release-note-label-needed kind/feature-request size/XS 
    opened by xyz-li 5
  • ks should build with vendor by default.

    ks should build with vendor by default.

    What's it about?

    What's the reason we need it? Currently, all of the modules are included in the vendor/ dir, but when running make ks-apiserver or make ks-controller-manager in a clean environment, go downloads the modules first, which is unnecessary.

    [email protected]:~/kubesphere/kubesphere# make ks-apiserver
    hack/gobuild.sh cmd/ks-apiserver
    go: downloading github.com/spf13/cobra v0.0.5
    go: downloading k8s.io/apimachinery v0.18.6
    go: downloading k8s.io/component-base v0.18.6
    go: downloading k8s.io/klog v1.0.0
    go: downloading github.com/spf13/pflag v1.0.5
    go: downloading sigs.k8s.io/controller-runtime v0.6.4
    go: downloading github.com/spf13/viper v1.4.0
    go: downloading github.com/emicklei/go-restful v2.14.3+incompatible
    go: downloading github.com/emicklei/go-restful-openapi v1.4.1
    go: downloading github.com/docker/engine v1.4.2-0.20200203170920-46ec8731fbce
    go: downloading istio.io/client-go v0.0.0-20201113183938-0734e976e785
    go: downloading kubesphere.io/monitoring-dashboard v0.1.2
    go: downloading sigs.k8s.io/application v0.8.4-0.20201016185654-c8e2959e57a0
    go: downloading k8s.io/apiserver v0.18.6
    go: downloading github.com/kubernetes-csi/external-snapshotter/client/v3 v3.0.0
    go: downloading github.com/prometheus-operator/prometheus-operator v0.42.2-0.20200928114327-fbd01683839a
    go: downloading k8s.io/apiextensions-apiserver v0.18.6
    
    [email protected]:~/kubesphere/kubesphere# make vet
    go vet ./pkg/... ./cmd/...
    go: downloading k8s.io/api v0.18.6
    go: downloading github.com/prometheus/common v0.10.0
    go: downloading github.com/prometheus/prometheus v1.8.2-0.20200507164740-ecee9c8abfd1
    

    Area Suggestion

    /kind feature-request

    kind/feature-request 
    opened by xyz-li 0
  • Configurations module sometimes disappears

    Configurations module sometimes disappears

    Describe the Bug

    Normal

    (screenshot)

    Abnormal

    (screenshot)

    Versions Used KubeSphere: v3.1

    /kind bug

    area/console kind/bug 
    opened by daniel-hutao 1
  • Support more object fields in console when creating storageclass

    Support more object fields in console when creating storageclass

    What's it about?

    (screenshot)

    The ks-console doesn't support editing the fields below:

    • volumeBindingMode
    • allowedTopologies
    • mountOptions
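    For reference, a hedged sketch of a StorageClass manifest that sets all three fields (the provisioner, mount option, and topology values are illustrative placeholders, not taken from this issue):

```yaml
# Illustrative StorageClass showing the fields ks-console cannot edit yet.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc             # placeholder name
provisioner: example.com/csi   # placeholder provisioner
volumeBindingMode: WaitForFirstConsumer
mountOptions:
  - noatime
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - zone-a
```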

    /area console /milestone v3.2 /kind feature-request

    area/console kind/design kind/feature-request 
    opened by stoneshi-yunify 1
  • Support multiple snapshotclasses in console when taking snapshots

    Support multiple snapshotclasses in console when taking snapshots

    What's it about? (screenshot)

    Currently ks-controller creates a snapshotclass with the same name for each storageclass, and that snapshotclass is used when the user takes a snapshot in the console.

    The problem is that the default snapshotclass ks-console generates might not work; for example, it may lack required fields (see the attached screenshot).

    And just like storageclasses, users might want to create multiple different snapshotclasses and take snapshots with each one, so we need to support this scenario in the console. Maybe just add a dropdown list to choose among the different snapshotclasses.
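    As a hedged sketch (the driver and resource names are placeholders), multiple snapshotclasses at the API level look like this, with each VolumeSnapshot picking one by name, which is the value a console dropdown would select:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: fast-snapclass          # placeholder name
driver: example.com/csi-driver  # placeholder CSI driver
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snap-1
spec:
  volumeSnapshotClassName: fast-snapclass  # what a dropdown would choose
  source:
    persistentVolumeClaimName: example-pvc # placeholder PVC
```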

    /kind feature-request /kind storage /milestone v3.2

    area/console kind/design kind/feature-request 
    opened by stoneshi-yunify 2
  • Remove workload auto-restart function when pvc expanded

    Remove workload auto-restart function when pvc expanded

    Describe the Bug: the controller function in kubesphere/pkg/controller/storage/expansion needs to be removed.

    We cannot stop a workload (by scaling it down to replicas=0) without the user's confirmation, because it may interrupt the user's application and affect their business. The current function can also only handle deployments/statefulsets, so it is incomplete.

    Furthermore, some storage backends support online filesystem expansion, which doesn't require a pod restart at all.

    Versions Used KubeSphere: v3.1.1 and before Kubernetes: (If KubeSphere installer used, you can skip this)

    /kind bug /area storage /milestone v3.2 /good-first-issue

    area/storage good first issue help wanted kind/bug 
    opened by stoneshi-yunify 1
  • ks-console: change pvc access modes to list

    ks-console: change pvc access modes to list

    Describe the Bug (screenshot)

    Currently the access mode is a single selection, but it should in fact be a string array.

    spec: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#persistentvolumeclaimspec-v1-core
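    Per the PersistentVolumeClaimSpec linked above, here is a minimal sketch of a PVC with accessModes as an array (the name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc   # placeholder name
spec:
  accessModes:        # a list in the API, not a single value
    - ReadWriteOnce
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
```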

    Versions Used KubeSphere: v3.1.1 and before Kubernetes: (If KubeSphere installer used, you can skip this)

    /area console /milestone v3.2 /kind bug

    area/console good first issue help wanted kind/bug 
    opened by stoneshi-yunify 2
  • [v3.2] o11y/Edge/GPU plan

    [v3.2] o11y/Edge/GPU plan

    • [ ] https://github.com/kubesphere/kubesphere/issues/4087
    • [ ] https://github.com/kubesphere/kubesphere/issues/4081
    • [ ] https://github.com/kubesphere/kubesphere/issues/4082
    • [ ] https://github.com/kubesphere/kubesphere/issues/4083
    • [ ] https://github.com/kubesphere/kubesphere/issues/4086

    /area console /area edge /area monitoring /area notification /area observability /kind feature-request

    area/console area/edge area/monitoring area/notification area/observability kind/feature-request 
    opened by benjaminhuo 0
  • [v3.2/o11y/notification] Notification Enhancement

    [v3.2/o11y/notification] Notification Enhancement

    • [ ] Notification filter
    • [ ] Notification setting validation

    /kind design /area notification /area observability /kind feature-request

    area/notification area/observability kind/feature-request 
    opened by benjaminhuo 1
Releases (v3.1.1)