Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration

Overview

Karmada

Karmada: Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration

Karmada (Kubernetes Armada) is a Kubernetes management system that enables you to run your cloud-native applications across multiple Kubernetes clusters and clouds, with no changes to your applications. By speaking Kubernetes-native APIs and providing advanced scheduling capabilities, Karmada enables truly open, multi-cloud Kubernetes.

Karmada aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios, with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling.

Why Karmada:

  • K8s Native API Compatible

    • Zero change upgrade, from single-cluster to multi-cluster
    • Seamless integration of existing K8s tool chain
  • Out of the Box

    • Built-in policy sets for common scenarios, including Active-active, Remote DR, Geo Redundant, etc.
    • Cross-cluster application auto-scaling, failover, and load balancing across multiple clusters.
  • Avoid Vendor Lock-in

    • Integration with mainstream cloud providers
    • Automatic allocation and migration across clusters
    • Not tied to proprietary vendor orchestration
  • Centralized Management

    • Location-agnostic cluster management
    • Supports clusters in public cloud, on-premises, or at the edge
  • Rich Multi-Cluster Scheduling Policies

    • Cluster Affinity, Multi-Cluster Splitting/Rebalancing
    • Multi-Dimension HA: Region/AZ/Cluster/Provider
  • Open and Neutral

    • Jointly initiated by Internet, finance, manufacturing, telecom, and cloud provider companies, among others
    • Targets open governance with the CNCF

Notice: this project is developed in continuation of Kubernetes Federation v1 and v2. Some basic concepts are inherited from these two versions.

Architecture

[Architecture diagram]

The Karmada Control Plane consists of the following components:

  • Karmada API Server
  • Karmada Controller Manager
  • Karmada Scheduler

etcd stores the Karmada API objects, the API server is the REST endpoint that all other components talk to, and the Karmada Controller Manager performs operations based on the API objects you create through the API server.

The Karmada Controller Manager runs the various controllers; the controllers watch Karmada objects and then talk to the underlying clusters' API servers to create regular Kubernetes resources.

  1. Cluster Controller: attaches Kubernetes clusters to Karmada and manages the lifecycle of the clusters by creating Cluster objects (a sketch of a Cluster object is shown after this list).

  2. Policy Controller: watches PropagationPolicy objects. When a PropagationPolicy object is added, it selects a group of resources matching the resourceSelector and creates a ResourceBinding for each single resource object.

  3. Binding Controller: watches ResourceBinding objects and creates a Work object for each cluster, containing a single resource manifest.

  4. Execution Controller: watches Work objects. When Work objects are created, it distributes the resources to member clusters.
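
For illustration only, here is a minimal sketch of the Cluster object that the Cluster Controller manages; the cluster name, API endpoint, and secret names below are placeholders, and the exact fields may vary between Karmada versions:

apiVersion: cluster.karmada.io/v1alpha1
kind: Cluster
metadata:
  name: member1                          # placeholder cluster name
spec:
  syncMode: Push                         # Push: the control plane accesses the member cluster directly
  apiEndpoint: https://172.18.0.3:6443   # placeholder member cluster API server address
  secretRef:                             # secret holding credentials for accessing the member cluster
    namespace: karmada-cluster
    name: member1
  impersonatorSecretRef:                 # secret used by the control plane for impersonation
    namespace: karmada-cluster
    name: member1-impersonator

In practice such objects are usually created for you by karmadactl join (Push mode) or by the karmada-agent (Pull mode) rather than written by hand.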

Concepts

Resource template: Karmada uses the Kubernetes Native API definition as the federated resource template, to make it easy to integrate with existing tools that are already built around Kubernetes.

Propagation Policy: Karmada offers a standalone Propagation (placement) Policy API to define multi-cluster scheduling and spreading requirements; a minimal example is sketched after the list below.

  • Supports a 1:n mapping of policy to workloads, so users don't need to specify scheduling constraints every time they create a federated application.
  • With default policies, users can just interact with the K8s API
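
As an illustration, a minimal PropagationPolicy that places an existing nginx Deployment on two member clusters might look like the following sketch (the policy and cluster names are placeholders):

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation          # placeholder policy name
spec:
  resourceSelectors:               # which resource templates this policy applies to
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:                # placeholder member cluster names
        - member1
        - member2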

Override Policy: Karmada provides a standalone Override Policy API for automating cluster-specific configuration; a sketch follows the examples below. E.g.:

  • Override image prefix according to member cluster region
  • Override StorageClass according to cloud provider
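
As a rough sketch of the first case (field names follow the OverridePolicy v1alpha1 API of roughly this release line and may differ in other versions; the policy name, cluster name, and registry are placeholder assumptions), such an override could look like:

apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-registry-override        # placeholder policy name
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  overrideRules:
    - targetCluster:
        clusterNames:                  # placeholder clusters located in one region
          - member1
      overriders:
        imageOverrider:
          - component: Registry        # rewrite only the registry part of the image
            operator: replace
            value: registry.region-a.example.com   # placeholder regional registry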

The following diagram shows how Karmada resources are involved when propagating resources to member clusters.

[karmada-resource-relation diagram]

Quick Start

This guide will cover:

  • Install the karmada control plane components in a Kubernetes cluster, which is known as the host cluster.
  • Join a member cluster to the karmada control plane.
  • Propagate an application with karmada.

Prerequisites

Install karmada control plane

1. Clone this repo to your machine:

git clone https://github.com/karmada-io/karmada

2. Change to karmada directory:

cd karmada

3. Deploy and run karmada control plane:

run the following script:

# hack/local-up-karmada.sh

This script will do the following tasks for you:

  • Start a Kubernetes cluster to run the karmada control plane, a.k.a. the host cluster.
  • Build karmada control plane components based on the current codebase.
  • Deploy karmada control plane components on the host cluster.
  • Create member clusters and join them to Karmada.

If everything goes well, at the end of the script output, you will see similar messages as follows:

Local Karmada is running.

To start using your karmada, run:
  export KUBECONFIG="$HOME/.kube/karmada.config"
Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.

To manage your member clusters, run:
  export KUBECONFIG="$HOME/.kube/members.config"
Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.

There are two contexts for karmada:

  • karmada-apiserver: kubectl config use-context karmada-apiserver
  • karmada-host: kubectl config use-context karmada-host

The karmada-apiserver context is the main one to use when interacting with the karmada control plane, while karmada-host is only used for debugging the karmada installation on the host cluster. You can check all clusters at any time by running kubectl config view. To switch between contexts, run kubectl config use-context [CONTEXT_NAME].

Demo

[Demo animation]

Propagate application

In the following steps, we are going to propagate a deployment with karmada.

1. Create an nginx deployment in karmada.

First, create a deployment named nginx:

kubectl create -f samples/nginx/deployment.yaml

2. Create a PropagationPolicy that will propagate nginx to the member cluster

Then, we need to create a policy to propagate the deployment to our member cluster.

kubectl create -f samples/nginx/propagationpolicy.yaml

3. Check the deployment status from karmada

You can check the deployment status from karmada; there is no need to access the member cluster:

$ kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           20s

Kubernetes compatibility

Kubernetes 1.15 Kubernetes 1.16 Kubernetes 1.17 Kubernetes 1.18 Kubernetes 1.19 Kubernetes 1.20 Kubernetes 1.21
Karmada v0.6 -
Karmada v0.7 -
Karmada v0.8
Karmada HEAD (master)

Key:

  • ✓ Karmada and the Kubernetes version are exactly compatible.
  • + Karmada has features or API objects that may not be present in the Kubernetes version.
  • - The Kubernetes version has features or API objects that Karmada can't use.

Meeting

Regular Community Meeting:

Resources:

Contact

If you have questions, feel free to reach out to us in the following ways:

Contributing

If you're interested in being a contributor and want to get involved in developing the Karmada code, please see CONTRIBUTING for details on submitting patches and the contribution workflow.

License

Karmada is under the Apache 2.0 license. See the LICENSE file for details.

Comments
  • CRD resources lost

    CRD resources lost

    What happened: After the server was restarted, all the CRD resources under karmada were lost. What you expected to happen:

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    Environment:

    • Karmada version:
    • kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version):
    • Others:
    kind/bug 
    opened by zzz-Uzi 43
  • Join cluster error

    Join cluster error

    [[email protected] ~]# kubectl karmada join kubernetes-admin --kubeconfig=/etc/karmada/karmada-apiserver.config --cluster-kubeconfig=$HOME/.kube/config
    W0120 16:17:30.264037 12485 cluster.go:106] failed to create cluster(kubernetes-admin). error: Cluster.cluster.karmada.io "kubernetes-admin" is invalid: [spec.secretRef.namespace: Required value, spec.secretRef.name: Required value, spec.impersonatorSecretRef.namespace: Required value, spec.impersonatorSecretRef.name: Required value]
    W0120 16:17:30.264245 12485 cluster.go:50] failed to create cluster(kubernetes-admin). error: Cluster.cluster.karmada.io "kubernetes-admin" is invalid: [spec.secretRef.namespace: Required value, spec.secretRef.name: Required value, spec.impersonatorSecretRef.namespace: Required value, spec.impersonatorSecretRef.name: Required value]
    Error: failed to create cluster(kubernetes-admin) object. error: Cluster.cluster.karmada.io "kubernetes-admin" is invalid: [spec.secretRef.namespace: Required value, spec.secretRef.name: Required value, spec.impersonatorSecretRef.namespace: Required value, spec.impersonatorSecretRef.name: Required value]

    kubectl-karmada version: built from the latest code on GitHub.
    [[email protected] ~]# kubectl-karmada version
    kubectl karmada version: version.Info{GitVersion:"", GitCommit:"", GitTreeState:"clean", BuildDate:"2022-01-20T02:30:56Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}

    kind/question 
    opened by chenrc0529-ai 42
  • karmadactl support apply command

    karmadactl support apply command

    Signed-off-by: carlory [email protected]

    What type of PR is this?

    /kind feature

    What this PR does / why we need it:

    Some open-source products are deployed with a long YAML manifest, such as Calico (https://docs.projectcalico.org/manifests/calico.yaml). This PR provides an easy way to deploy such manifests to member clusters.

    (⎈ |karmada:default)➜  karmada git:(karmadactl-apply) go run cmd/karmadactl/karmadactl.go apply -h
    Apply a configuration to a resource by file name or stdin and propagate them into member clusters. The resource name must be specified. This resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'.
    
     JSON and YAML formats are accepted.
    
     Alpha Disclaimer: the --prune functionality is not yet complete. Do not use unless you are aware of what the current state is. See https://issues.k8s.io/34274.
    
     Note: It implements the function of 'kubectl apply' by default. If you want to propagate them into member clusters, please use 'kubectl apply --all-clusters'.
    
    Usage:
      karmadactl apply (-f FILENAME | -k DIRECTORY) [flags]
    
    Examples:
      # Apply the configuration without propagation into member clusters. It acts as 'kubectl apply'.
      karmadactl apply -f manifest.yaml
    
      # Apply resources from a directory and propagate them into all member clusters.
      karmadactl apply -f dir/ --all-clusters
    
    Flags:
          --all                             Select all resources in the namespace of the specified resource types.
          --all-clusters                    If present, propagates a group of resources to all member clusters.
          --allow-missing-template-keys     If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. (default true)
          --cascade string[="background"]   Must be "background", "orphan", or "foreground". Selects the deletion cascading strategy for the dependents (e.g. Pods created by a ReplicationController). Defaults to background. (default "background")
          --dry-run string[="unchanged"]    Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. (default "none")
          --field-manager string            Name of the manager used to track field ownership. (default "kubectl-client-side-apply")
      -f, --filename strings                that contains the configuration to apply
          --force                           If true, immediately remove resources from API and bypass graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.
          --force-conflicts                 If true, server-side apply will force the changes against conflicts.
          --grace-period int                Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. Can only be set to 0 when --force is true (force deletion). (default -1)
      -h, --help                            help for apply
          --karmada-context string          Name of the cluster context in control plane kubeconfig file.
      -k, --kustomize string                Process a kustomization directory. This flag can't be used together with -f or -R.
      -n, --namespace string                If present, the namespace scope for this CLI request
          --openapi-patch                   If true, use openapi to calculate diff when the openapi presents and the resource can be found in the openapi spec. Otherwise, fall back to use baked-in types. (default true)
      -o, --output string                   Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).
          --overwrite                       Automatically resolve conflicts between the modified and live configuration by using values from the modified configuration (default true)
          --prune                           Automatically delete resource objects, that do not appear in the configs and are created by either apply or create --save-config. Should be used with either -l or --all.
          --prune-whitelist stringArray     Overwrite the default whitelist with <group/version/kind> for --prune
      -R, --recursive                       Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
      -l, --selector string                 Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.
          --server-side                     If true, apply runs in the server instead of the client.
          --show-managed-fields             If true, keep the managedFields when printing objects in JSON or YAML format.
          --template string                 Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
          --timeout duration                The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object
          --validate string                 Must be one of: strict (or true), warn, ignore (or false).
                                            		"true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.
                                            		"warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise.
                                            		"false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields. (default "strict")
          --wait                            If true, wait for resources to be gone before returning. This waits for finalizers.
    
    Global Flags:
          --add-dir-header                   If true, adds the file directory to the header of the log messages
          --alsologtostderr                  log to standard error as well as files
          --kubeconfig string                Paths to a kubeconfig. Only required if out-of-cluster.
          --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
          --log-dir string                   If non-empty, write log files in this directory
          --log-file string                  If non-empty, use this log file
          --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
          --logtostderr                      log to standard error instead of files (default true)
          --one-output                       If true, only write logs to their native severity level (vs also writing to each lower severity level)
          --skip-headers                     If true, avoid header prefixes in the log messages
          --skip-log-headers                 If true, avoid headers when opening log files
          --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
      -v, --v Level                          number for the log level verbosity
          --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging
    (⎈ |karmada:default)➜  karmada git:(karmadactl-apply) go run cmd/karmadactl/karmadactl.go apply -f ~/manifests.yaml
    deployment.apps/micro-dao-2048 created
    service/micro-dao-2048 created
    (⎈ |karmada:default)➜  karmada git:(karmadactl-apply) go run cmd/karmadactl/karmadactl.go apply -f ~/manifests.yaml --all-clusters
    deployment.apps/micro-dao-2048 unchanged
    propagationpolicy.policy.karmada.io/micro-dao-2048-6d7f8d5f5b created
    service/micro-dao-2048 unchanged
    propagationpolicy.policy.karmada.io/micro-dao-2048-76579ccd86 created
    (⎈ |karmada:default)➜  karmada git:(karmadactl-apply) kubectl get deploy,svc,pp
    NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/micro-dao-2048   0/2     4            0           37s
    
    NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP   2m7s
    service/micro-dao-2048   ClusterIP   10.99.253.139   <none>        80/TCP    37s
    
    NAME                                                            AGE
    propagationpolicy.policy.karmada.io/micro-dao-2048-6d7f8d5f5b   27s
    propagationpolicy.policy.karmada.io/micro-dao-2048-76579ccd86   26s
    

    Which issue(s) this PR fixes: Ref #1934

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    `karmadactl`: Introduced `apply` subcommand to apply a configuration to a resource by file name or stdin.
    
    kind/feature approved size/XXL lgtm 
    opened by carlory 39
  • Reschedule bindings on cluster change

    Reschedule bindings on cluster change

    What happened: Unjoined clusters still remain in binding.spec.clusters

    What you expected to happen: Unjoined clusters should be deleted from binding.spec.clusters

    How to reproduce it (as minimally and precisely as possible): 1. Set up the environment (script v0.8)

    [email protected]:~/karmada# hack/local-up-karmada.sh
    
    [email protected]:~/karmada# hack/create-cluster.sh member1 $HOME/.kube/karmada.config
    
    [email protected]:~/karmada# kubectl config use-context karmada-apiserver
    
    [email protected]:~/karmada# karmadactl join member1 --cluster-kubeconfig=$HOME/.kube/karmada.config
    
    [email protected]:~/karmada# kubectl apply -f samples/nginx
    
    [email protected]:~/karmada# kubectl get deploy
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx   1/1     1            1           47h
    

    2. Unjoin member1

    [email protected]:~/karmada# karmadactl unjoin member1
    
    [email protected]:~/karmada# kubectl get clusters
    No resources found
    

    3. Check binding.spec.clusters

    [email protected]:~/karmada# kubectl describe rb
    ......
    Spec:
      Clusters:
        Name:  member1
    ......
    

    Anything else we need to know?: Is it an expected behavior? If not, who is supposed to take the responsibility to delete unjoined clusters from binding? Scheduler or other controllers (like cluster controller)?

    Environment:

    • Karmada version:v0.8.0
    • Others:
    kind/bug priority/important-soon 
    opened by dddddai 39
  • speed up docker build

    speed up docker build

    Signed-off-by: yingjinhui [email protected]

    What type of PR is this? /kind feature

    What this PR does / why we need it: Speed up docker image building.

    Which issue(s) this PR fixes: Fixes #1729

    Special notes for your reviewer: Implement of https://github.com/karmada-io/karmada/issues/1729#issuecomment-1120238596.

    Does this PR introduce a user-facing change?:

    NONE
    
    kind/feature approved size/XL lgtm 
    opened by ikaven1024 34
  • custom enable or disable of scheduler plugins

    custom enable or disable of scheduler plugins

    Signed-off-by: chaunceyjiang [email protected]

    What type of PR is this? /kind feature

    What this PR does / why we need it: custom enable or disable of scheduler plugins

    Which issue(s) this PR fixes: Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    `karmada-scheduler`: Introduced `--plugins` flag to enable or disable scheduler plugins 
    
    kind/feature approved size/L lgtm 
    opened by chaunceyjiang 33
  • feat: agent report secret

    feat: agent report secret

    Signed-off-by: charlesQQ [email protected]

    What type of PR is this? /kind feature

    What this PR does / why we need it: Allow karmada-agent to report secrets for Pull mode clusters

    Which issue(s) this PR fixes: Part of https://github.com/karmada-io/karmada/issues/1946

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    `karmada-agent`: Introduced `--report-secrets` flag to allow secrets to be reported to the Karmada control plane during registering.
    
    
    kind/feature approved size/XXL lgtm 
    opened by CharlesQQ 32
  • add e2etest for aggregated api endpoint

    add e2etest for aggregated api endpoint

    What type of PR is this?

    What this PR does / why we need it: Add e2e test case for aggregated-api-endpoint Which issue(s) this PR fixes: Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    NONE
    
    approved size/L lgtm 
    opened by wwwnay 31
  • Reschedule ResourceBinding when adding a cluster

    Reschedule ResourceBinding when adding a cluster

    Signed-off-by: chaunceyjiang [email protected]

    What type of PR is this? /kind feature

    What this PR does / why we need it:

    Which issue(s) this PR fixes: Fixes #2261

    Special notes for your reviewer: When a new cluster is joined, if the Placement is empty or the replicaSchedulingType is Duplicated, resources will be propagated to the new cluster.

    Does this PR introduce a user-facing change?:

    `karmada-scheduler`: Now the scheduler starts to re-schedule in case of cluster state changes.
    
    kind/feature approved size/L lgtm 
    opened by chaunceyjiang 30
  • Add karmadactl addons subcommand

    Add karmadactl addons subcommand

    Co-authored-by: duanmeng [email protected] Signed-off-by: wuyingjun [email protected]

    What type of PR is this? /kind feature

    What this PR does / why we need it: Add karmadactl addons subcommand Which issue(s) this PR fixes: Fixes https://github.com/karmada-io/karmada/issues/1957

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    NONE
    
    kind/feature approved size/XXL lgtm 
    opened by wuyingjun-lucky 30
  • set karmadactl default config value from “” to karmada-apiserver.config

    set karmadactl default config value from “” to karmada-apiserver.config

    Signed-off-by: wuyingjun [email protected]

    What type of PR is this? /kind bug

    What this PR does / why we need it: fix unjoin cluster example when dry-run option is set Which issue(s) this PR fixes: Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    NONE
    
    kind/bug size/XS 
    opened by wuyingjun-lucky 29
  • Support Deploy karmada-scheduler-estimator in Physical Machine

    Support Deploy karmada-scheduler-estimator in Physical Machine

    Signed-off-by: raymondmiaochaoyue [email protected]

    What type of PR is this?

    /kind feature

    What this PR does / why we need it:

    See https://github.com/karmada-io/karmada/issues/2487

    Which issue(s) this PR fixes:

    Fixes #2487

    Special notes for your reviewer:

    NONE

    Does this PR introduce a user-facing change?:

    NONE
    
    kind/feature size/M 
    opened by cmicat 1
  • How to create helm application in multi clusters

    How to create helm application in multi clusters

    Please provide an in-depth description of the question you have: I want to create applications in multiple clusters with Argo CD and Karmada. I tried to deploy Grafana from a chart, but it keeps creating secrets for the service account:

    grafana-token-xx9sq   kubernetes.io/service-account-token   3      3s
    grafana-token-z29d5   kubernetes.io/service-account-token   3      7s
    grafana-token-zlq2m   kubernetes.io/service-account-token   3      7s
    grafana-token-zsrp7   kubernetes.io/service-account-token   3      9s
    grafana-token-zxm25   kubernetes.io/service-account-token   3      14s
    grafana-token-znxwf   kubernetes.io/service-account-token   3      0s
    grafana-token-zpvnh   kubernetes.io/service-account-token   3      0s
    grafana-token-zsx79   kubernetes.io/service-account-token   3      0s
    grafana-token-s9mrp   kubernetes.io/service-account-token   3      0s
    grafana-token-xhdnd   kubernetes.io/service-account-token   3      0s
    grafana-token-9dst5   kubernetes.io/service-account-token   3      0s
    grafana-token-6g2nc   kubernetes.io/service-account-token   3      0s
    grafana-token-vwgh6   kubernetes.io/service-account-token   3      0s
    grafana-token-f9h5w   kubernetes.io/service-account-token   3      0s
    grafana-token-5m2dv   kubernetes.io/service-account-token   3      0s
    grafana-token-xtbp9   kubernetes.io/service-account-token   3      0s
    grafana-token-6qbk8   kubernetes.io/service-account-token   3      0s
    grafana-token-kr7gl   kubernetes.io/service-account-token   3      0s
    grafana-token-r7vp6   kubernetes.io/service-account-token   3      0s
    grafana-token-9tmj2   kubernetes.io/service-account-token   3      0s
    grafana-token-q2j9m   kubernetes.io/service-account-token   3      0s
    
    [[email protected] test]# kubectl get secrets | wc -l
    4981
    [[email protected] test]# kubectl get secrets | wc -l
    4991
    [[email protected] test]# kubectl get secrets | wc -l
    5002
    [[email protected] test]# kubectl get secrets | wc -l
    5013
    
    1. I used the command to create it
    argocd app create grafana --repo https://charts.bitnami.com/bitnami --helm-chart grafana --revision 8.1.1 --dest-namespace default --dest-name karmada-apiserver --helm-set service.type=NodePort
    

    2. Apply the propagation policy
    apiVersion: policy.karmada.io/v1alpha1
    kind: PropagationPolicy
    metadata:
      name: grafana
    spec:
      resourceSelectors:
        - apiVersion: apps/v1
          kind: Deployment
          name: grafana
        - apiVersion: v1
          kind: Service
          name: grafana
        - apiVersion: v1
          kind: PersistentVolumeClaim
          name: grafana
        - apiVersion: v1
          kind: ConfigMap
          name: grafana-envvars
        - apiVersion: v1
          kind: Secret
          name: grafana-admin
        - apiVersion: v1
          kind: ServiceAccount
          name: grafana
      placement:
        clusterAffinity:
          clusterNames:
            - member1
    

    And I tried another way to verify the cause of the problem

    1. helm install bitnami/grafana --kubeconfig /etc/karmada/karmada-apiserver.config
    2. apply the propagation

    But the results are the same. I think it is Karmada, not Argo CD, that caused it.

    What do you think about this question?:

    How should I make the application distribute correctly? What is the problem with my method?

    Environment:

    • Karmada version: 1.2.0
    • Kubernetes version: 1.22.9
    • Others: grafana 8.1.1
    kind/question 
    opened by lts0609 5
  • Add component descriptions

    Add component descriptions

    Signed-off-by: Poor12 [email protected]

    What type of PR is this? /kind cleanup

    What this PR does / why we need it: Now the descriptions of our components are too simplistic, and some are not even complete sentences. This PR completes the descriptions of all components so that documentation can be generated.

    Which issue(s) this PR fixes: Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?: None

    size/M kind/cleanup 
    opened by Poor12 2
  • Member cluster health checking does not work

    Member cluster health checking does not work

    Please provide an in-depth description of the question you have: After registering a member cluster to karmada in push mode and running "kubectl get cluster", the cluster status was Ready. Then the member was disconnected by a firewall; after more than 10 minutes the cluster status was still Ready and did not change to failed. Are there any configurations needed for cluster health checking? What do you think about this question?:

    Environment:

    • Karmada version: 1.3.0
    • Kubernetes version: 1.23.4
    • Others:
    kind/question 
    opened by alex-wong123 19
  • karmadactl apply uses factory to access member cluster

    karmadactl apply uses factory to access member cluster

    What type of PR is this? /kind feature

    Which issue(s) this PR fixes: Fixes # Part of #2349. Special notes for your reviewer: Test

    go run ./cmd/karmadactl/karmadactl.go --kubeconfig /etc/karmada/karmada-apiserver.config apply -f ./_tmp/nginx.yaml
    namespace/test1 created
    deployment.apps/nginx1 created
    

    Does this PR introduce a user-facing change?:

    NONE
    
    kind/feature size/M 
    opened by helen-frank 4
Releases (v1.3.0)
  • v1.3.0(Aug 31, 2022)

    What's New

    Taint-based eviction in graceful way

    We introduced a new controller named taint manager, which aims to evict workloads from faulty clusters after a grace period. The scheduler then selects new best-fit clusters for the workloads. In addition, if the GracefulEviction feature is enabled, the eviction is graceful: the removal of evicted workloads is delayed until the workloads are available on new clusters or the maximum grace period is reached. For more details please refer to Failover Overview.

    (Feature contributor: @Garrybest, @XiShanYongYe-Chang)

    Global proxy for resources across multi-clusters

    We introduced a new proxy feature to karmada-search that allows users to access resources in multiple clusters just as if they were accessing resources in a single cluster. Whether or not the resources are managed by Karmada, users can manipulate them from the Karmada control plane by leveraging the proxy. For more details please refer to Global Resource Proxy.

    (Feature contributor: @ikaven1024, @XiShanYongYe-Chang)

    Cluster resource modeling

    To provide a more accurate scheduling basis for the scheduler, we introduced a way to model the cluster's available resources. The cluster status controller will model the resources as per the customized resource models, which is more accurate than the general resource summary. For more details please refer to Cluster Resource Modeling.

    (Feature contributor: @halfrost, @Poor12)
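
    As a rough, hedged sketch of what a customized resource model might look like on a Cluster object (the grades, resource names, and ranges below are illustrative assumptions, not values taken from the release notes):

    apiVersion: cluster.karmada.io/v1alpha1
    kind: Cluster
    metadata:
      name: member1                # placeholder cluster name
    spec:
      resourceModels:              # customized models consumed by the cluster status controller
        - grade: 0                 # smallest model: nodes with up to 1 CPU and 4Gi memory
          ranges:
            - name: cpu
              min: "0"
              max: "1"
            - name: memory
              min: "0"
              max: 4Gi
        - grade: 1                 # next model: 1-2 CPUs and 4-16Gi memory
          ranges:
            - name: cpu
              min: "1"
              max: "2"
            - name: memory
              min: 4Gi
              max: 16Gi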

    Bootstrap token-based cluster registration

    Now for clusters in Pull mode, we provide a way for them to register with the Karmada control plane. By leveraging the token and register commands in karmadactl, the registration process, including deploying the karmada-agent, can be completed very easily. For more details please refer to Register cluster with Pull mode.

    (Feature contributor: @lonelyCZ )

    Significant improvement in system scalability

    We improved the system scalability, such as:

    • Enable pprof(#2008)
    • Introduce cachedRESTMapper(#2187)
    • Adopt the transform function to reduce memory usage(#2383)

    With these improvements, Karmada can easily manage hundreds of huge clusters. The detailed test report will be released soon.

    Other Notable Changes

    API changes

    • The Cluster API now has an optional field ID to uniquely identify the cluster. (@RainbowMango, #2180)
    • The Cluster API now has an optional field ProxyHeader to specify the HTTP header required by the proxy server. (@mrlihanbo, #1874)
    • The Cluster API now has an optional field named ResourceModels to specify resource modeling. (@halfrost, #2386)
    • The Work and ResourceBinding/ClusterResourceBinding APIs now have a field health to represent the state of the workload. (@XiShanYongYe-Chang, #2351)

    Bug Fixes

    • karmadactl: Fixed issue that Kubernetes v1.24 cannot be joined. (@zgfh, #1972)
    • karmadactl: Fixed a panic issue when retrieving resources from an unknown cluster(karmadactl get xxx --cluster=not-exist). (@my-git9, #2171)
    • karmadactl: Fixed failed promoting if a resource with another kind using the same name has been promoted before. (@wuyingjun-lucky, #1824)
    • karmada-search: Fixed panic when the resource annotation is nil. (@XiShanYongYe-Chang, #1921)
    • karmada-search: Fixed panic comparing uncomparable type cache.ResourceEventHandlerFuncs. (@liys87x, #1951)
    • karmada-search: Fixed failed query on a single namespace (@luoMonkeyKing, #2227)
    • karmada-controller-manager: Fixed that Job status might be incorrectly marked as Completed. (@Garrybest, #1987)
    • karmada-controller-manager: Fixed returning err when the interpreter webhook returns nil patch and nil patchType. (@CharlesQQ, #2161)
    • karmada-controller-manager: Fixed that Argo CD cannot assess Deployment health status. (@xuqianjins, #2241)
    • karmada-controller-manager: Fixed that Argo CD cannot assess StatefulSet/DaemonSet health status. (@RainbowMango, #2252)
    • karmada-controller-manager/karmada-agent: Fixed an issue where resource status could not be collected in case the Resource Interpreter returns an error. (@XiShanYongYe-Chang, #2428)
    • karmada-scheduler: Fixed a panic issue when replicaDivisionPreference is Weighted and WeightPreference is nil. (@XiShanYongYe-Chang, #2451)

    Features & Enhancements

    • karmadactl: Added --force flag to deinit to skip confirmation. (@zgfh, #2016)
    • karmadactl: The flag -c of sub-command promote now has been changed to uppercase -C. (@Fish-pro, #2140)
    • karmadactl: Introduced --cluster-zone and --cluster-region flags to the join command to specify the zone and region of the joining cluster. (@chaunceyjiang, #2048)
    • karmadactl: Introduced --namespace flag to exec command to specify the workload namespace. (@carlory, #2092)
    • karmadactl: Allowed reading namespaces from the context field of karmada config for get command. (@carlory, #2148)
    • karmadactl: Introduced apply subcommand to apply a configuration to a resource by file name or stdin. (@carlory, #2000)
    • karmadactl: Introduced --namespace flag to describe command to specify the namespace the workload belongs to. (@TheStylite, #2153)
    • karmadactl: Introduced --cluster flag for apply command to allow users to select one or many member clusters to propagate resources. (@carlory, #2192)
    • karmadactl: Introduced options subcmd to list global command-line options. (@lonelyCZ, #2283)
    • karmadactl: Introduced the token command to manage bootstrap tokens. (@lonelyCZ, #2399)
    • karmadactl: Introduced the register command for joining PULL mode clusters. (@lonelyCZ, #2388)
    • karmada-scheduler: Introduced --enable-empty-workload-propagation flag to enable propagating empty workloads. (@CharlesQQ, #1720)
    • karmada-scheduler: Allowed extended plugins in an out-of-tree mode. (@kerthcet, #1663)
    • karmada-scheduler: Introduced --disable-scheduler-estimator-in-pull-mode flag to disable scheduler-estimator for clusters in pull mode. (@prodanlabs, #2064)
    • karmada-scheduler: Introduced --plugins flag to enable or disable scheduler plugins. (@chaunceyjiang, #2135)
    • karmada-scheduler: Now the scheduler starts to re-schedule in case of cluster state changes. (@chaunceyjiang, #2301)
    • karmada-search: The search API supports searching for resources according to labels. (@XiShanYongYe-Chang, #1917)
    • karmada-search: The annotation cluster.karmada.io/name which is used to represent the source of cache now has been changed to resource.karmada.io/cached-from-cluster. (@calvin0327, #1960)
    • karmada-search: Fixed panic issue when dumping error info. (@AllenZMC, #2231)
    • karmada-controller-manager/karmada-agent: Cluster state controller now able to collect partial API list in the case of discovery failure. (@duanmengkk, #1968)
    • karmada-controller-manager/karmada-agent: Introduced --cluster-success-threshold flag to specify cluster success threshold. Default to 30s. (@dddddai, #1884)
    • karmada-controller-manager/karmada-agent: Added CronJob support to the default resource interpreter framework. (@chaunceyjiang, #2060)
    • karmada-controller-manager/karmada-agent: Introduced --leader-elect-lease-duration, --leader-elect-renew-deadline and --leader-elect-retry-period flags to specify leader election behaviors. (@CharlesQQ, #2056)
    • karmada-controller-manager/karmada-agent : Fixed panic issue when dumping error infos. (@AllenZMC, #2117)
    • karmada-controller-manager/karmada-agent: Supported interpreting health state by leveraging the Resource Interpreter Framework. (@zhuwint, #2329)
    • karmada-controller-manager/karmada-agent: Introduced --enable-cluster-resource-modeling flag to enable or disable cluster modeling feature. (@RainbowMango, #2387)
    • karmada-controller-manager/karmada-agent: Now able to retain .spec.claimRef field of PersistentVolume. (@Garrybest, #2415)
    • karmada-controller-manager: interpreter framework starts to support Pod state aggregation. (@xyz2277, #1913)
    • karmada-controller-manager: interpreter framework starts to support PVC state aggregation. (@chaunceyjiang, #2070)
    • karmada-controller-manager: Stopped reporting and refreshing lease for clusters in Push mode. (@dddddai, #2033)
    • karmada-controller-manager: interpreter framework starts to support Pod Failed and Succeeded aggregation. (@chaunceyjiang, #2146)
    • karmada-controller-manager: namespace controller starts to apply ClusterOverridePolicy when propagating namespaces. (@zirain, #2263)
    • karmada-controller-manager: Propagation dependencies support propagating ServiceAccounts. (@chaunceyjiang, #2035)
    • karmada-agent: Introduced --metrics-bind-address flag to specify the address for serving Prometheus metrics. (@1953, #1953)
    • karmada-agent: Introduced --report-secrets flag to allow secrets to be reported to the Karmada control plane during registering. (@CharlesQQ, #1990)
    • karmada-agent: Introduced --cluster-provider and --cluster-region flags to specify cluster-provider and cluster-region during registering. (@CharlesQQ, #2152)
    • karmada-webhook: Added default tolerations, defaultNotReadyTolerationSeconds, and defaultUnreachableTolerationSeconds, for (Cluster)PropagationPolicy. (@Garrybest, #2284)
    • karmada-webhook: The '.spec.ttlSecondsAfterFinished' field of the Job resource will be removed before propagating to member clusters. (@chaunceyjiang, #2294)
    • karmada-agent/karmadactl: Now an error will be reported when registering the same cluster to Karmada. (@yy158775, #2369)

    Other

    Helm Chart

    • Helm chart: Updated default kube-apiserver from v1.21 to v1.22. (@AllenZMC, #1941)
    • Helm Chart: Added missing APIService configuration for karmada-aggregated-apiserver. (@zhixian82, #2258)
    • Helm Chart: Fixed the webhook service mismatch issue in the case of customized release name. (@calvin0327, #2275)
    • Helm Chart: Introduced --cluster-api-endpoint for karmada-agent. (@my-git9, #2299)
    • Helm Chart: Fixed misconfigured MutatingWebhookConfiguration. (@zhixian82, #2401)

    Dependencies

    • Karmada is now built with Golang 1.18.3. (@RainbowMango, #2032)
    • Kubernetes dependencies are now updated to v1.24.2. (@RainbowMango, #2050)

    Deprecation

    • karmadactl: Removed --dry-run flag from describe, exec and log commands. (@carlory, #2023)
    • karmadactl: Removed the --cluster-namespace flag for get command. (@carlory, #2190)
    • karmadactl: Removed the --cluster-namespace flag for promote command. (@carlory, #2193)

    Contributors

    Thank you to everyone who contributed to this release!

    Users whose commits are in this release (alphabetically by user name)

    • @AllenZMC
    • @calvin0327
    • @carlory
    • @CharlesQQ
    • @Charlie17Li
    • @chaunceyjiang
    • @cutezhangq
    • @dapengJacky
    • @dddddai
    • @duanmengkk
    • @Fish-pro
    • @Garrybest
    • @gy95
    • @halfrost
    • @hanweisen
    • @huntsman-li
    • @ikaven1024
    • @joengjyu
    • @JoshuaAndrew
    • @kerthcet
    • @kevin-wangzefeng
    • @kinzhi
    • @likakuli
    • @lonelyCZ
    • @luoMonkeyKing
    • @maoyangLiu
    • @mathlsj
    • @mikeshng
    • @Momeaking
    • @mrlihanbo
    • @my-git9
    • @nuclearwu
    • @Poor12
    • @prodanlabs
    • @RainbowMango
    • @suwliang3
    • @TheStylite
    • @wawa0210
    • @weilaaa
    • @windsonsea
    • @wlp1153468871
    • @wuyingjun-lucky
    • @XiShanYongYe-Chang
    • @xuqianjins
    • @xyz2277
    • @yusank
    • @yy158775
    • @zgfh
    • @zhixian82
    • @zhuwint
    • @zirain
    Source code(tar.gz)
    Source code(zip)
    crds.tar.gz(33.54 KB)
    karmada-chart-v1.3.0.tgz(66.73 KB)
    karmadactl-darwin-amd64.tgz(26.76 MB)
    karmadactl-darwin-amd64.tgz.sha256(94 bytes)
    karmadactl-darwin-arm64.tgz(25.80 MB)
    karmadactl-darwin-arm64.tgz.sha256(94 bytes)
    karmadactl-linux-amd64.tgz(26.97 MB)
    karmadactl-linux-amd64.tgz.sha256(93 bytes)
    karmadactl-linux-arm64.tgz(25.15 MB)
    karmadactl-linux-arm64.tgz.sha256(93 bytes)
    kubectl-karmada-darwin-amd64.tgz(26.76 MB)
    kubectl-karmada-darwin-amd64.tgz.sha256(99 bytes)
    kubectl-karmada-darwin-arm64.tgz(25.80 MB)
    kubectl-karmada-darwin-arm64.tgz.sha256(99 bytes)
    kubectl-karmada-linux-amd64.tgz(26.97 MB)
    kubectl-karmada-linux-amd64.tgz.sha256(98 bytes)
    kubectl-karmada-linux-arm64.tgz(25.15 MB)
    kubectl-karmada-linux-arm64.tgz.sha256(98 bytes)
  • v1.2.2(Aug 25, 2022)

    Changes since v1.2.1

    Bug Fixes

    • karmadactl: Fixed a panic issue when retrieving resources from an unknown cluster(karmadactl get xxx --cluster=not-exist). (#2171, #2201, @my-git9)
    • karmadactl: Fixed the issue that Kubernetes v1.24 could not be joined. (#1972, @zgfh)
    • karmada-controller-manager: Fixed the issue that Argo CD could not assess Deployment health status. (#2256, @xuqianjins)
    • karmada-controller-manager: Fixed the issue that Argo CD could not assess StatefulSet/DaemonSet health status. (#2264, @RainbowMango)
    • karmada-search: Fixed the issue that a single namespace could not be queried. (#2274, @luoMonkeyKing)
    • karmada-search: Fixed a panic issue when dumping error info. (#2333, @AllenZMC)
    • Helm Chart: Fixed misconfigured MutatingWebhookConfiguration and added missing APIService configuration for karmada-aggregated-apiserver. (#2420, @zhixian82)
    Source code(tar.gz)
    Source code(zip)
    karmada-chart-v1.2.2.tgz(63.50 KB)
    karmadactl-darwin-amd64.tgz(24.61 MB)
    karmadactl-darwin-amd64.tgz.sha256(94 bytes)
    karmadactl-darwin-arm64.tgz(23.90 MB)
    karmadactl-darwin-arm64.tgz.sha256(94 bytes)
    karmadactl-linux-amd64.tgz(24.80 MB)
    karmadactl-linux-amd64.tgz.sha256(93 bytes)
    karmadactl-linux-arm64.tgz(22.96 MB)
    karmadactl-linux-arm64.tgz.sha256(93 bytes)
    kubectl-karmada-darwin-amd64.tgz(24.61 MB)
    kubectl-karmada-darwin-amd64.tgz.sha256(99 bytes)
    kubectl-karmada-darwin-arm64.tgz(23.90 MB)
    kubectl-karmada-darwin-arm64.tgz.sha256(99 bytes)
    kubectl-karmada-linux-amd64.tgz(24.80 MB)
    kubectl-karmada-linux-amd64.tgz.sha256(98 bytes)
    kubectl-karmada-linux-arm64.tgz(22.96 MB)
    kubectl-karmada-linux-arm64.tgz.sha256(98 bytes)
  • v1.1.4(Aug 25, 2022)

  • v1.0.5(Aug 25, 2022)

  • v1.2.1(Jul 14, 2022)

    Changes since v1.2.0

    Bug Fixes

    • karmada-search: Fixed a panic when the resource annotation is nil. (#1939, @XiShanYongYe-Chang)
    • karmada-search: Fixed a panic issue comparing uncomparable type cache.ResourceEventHandlerFuncs. (#1971, @liys87x)
    • karmadactl: Fixed promote failing if a resource of another kind with the same name had been promoted before. (#1983, @wuyingjun-lucky)
    • karmadactl: Removed --dry-run flag from describe, exec and log commands. (#2036, @wlp1153468871)
    • karmada-controller-manager: Fixed the issue that Job status might be incorrectly marked as Completed. (#2007, @Garrybest)
    • karmada-controller-manager/karmada-agent: Fixed a panic issue when dumping error info. (#2127, @AllenZMC)
    Source code(tar.gz)
    Source code(zip)
    karmada-chart-v1.2.1.tgz(63.50 KB)
    karmadactl-darwin-amd64.tgz(24.61 MB)
    karmadactl-darwin-amd64.tgz.sha256(94 bytes)
    karmadactl-darwin-arm64.tgz(23.90 MB)
    karmadactl-darwin-arm64.tgz.sha256(94 bytes)
    karmadactl-linux-amd64.tgz(24.81 MB)
    karmadactl-linux-amd64.tgz.sha256(93 bytes)
    karmadactl-linux-arm64.tgz(22.96 MB)
    karmadactl-linux-arm64.tgz.sha256(93 bytes)
    kubectl-karmada-darwin-amd64.tgz(24.61 MB)
    kubectl-karmada-darwin-amd64.tgz.sha256(99 bytes)
    kubectl-karmada-darwin-arm64.tgz(23.90 MB)
    kubectl-karmada-darwin-arm64.tgz.sha256(99 bytes)
    kubectl-karmada-linux-amd64.tgz(24.81 MB)
    kubectl-karmada-linux-amd64.tgz.sha256(98 bytes)
    kubectl-karmada-linux-arm64.tgz(22.96 MB)
    kubectl-karmada-linux-arm64.tgz.sha256(98 bytes)
  • v1.0.4(Jul 14, 2022)

    Changes since v1.0.3

    Bug Fixes

    • karmadactl: Fixed the issue that the namespace could not be customized. (#1826, @likakuli)
    • karmadactl: Fixed karmadactl taint not working when the karmada control plane config is not located in the default path. (#1837, @wuyingjun-lucky)
    • karmada-controller-manager: Fixed the issue that Job status might be incorrectly marked as Completed. (#2011, @Garrybest)
    • karmada-controller-manager/karmada-agent: Fixed a panic issue when dumping error info. (#2133, @AllenZMC)
    Source code(tar.gz)
    Source code(zip)
    kubectl-karmada-darwin-amd64.tgz(23.41 MB)
    kubectl-karmada-darwin-arm64.tgz(22.87 MB)
    kubectl-karmada-linux-amd64.tgz(23.57 MB)
    kubectl-karmada-linux-arm64.tgz(21.94 MB)
  • v1.1.3(Jul 14, 2022)

    Changes since v1.1.2

    Bug Fixes

    • karmadactl: Fixed the issue that the namespace could not be customized. (#1827, @likakuli)
    • karmadactl: Fixed karmadactl taint not working when the karmada control plane config is not located in the default path. (#1838, @wuyingjun-lucky)
    • karmada-controller-manager: Fixed the issue that Job status might be incorrectly marked as Completed. (#2010, @Garrybest)
    • karmada-controller-manager/karmada-agent: Fixed a panic issue when dumping error info. (#2126, @AllenZMC)
    Source code(tar.gz)
    Source code(zip)
  • v1.2.0(May 28, 2022)

    What's New

    Significant improvement on scheduling capability and scalability

    1. Karmada Descheduler

    A new component karmada-descheduler was introduced, for rebalancing the scheduling decisions over time. One example use case is: it helps evict pending replicas (Pods) from resource-starved clusters so that karmada-scheduler can "reschedule" these replicas (Pods) to a cluster with sufficient resources. For more details please refer to Descheduler user guide.

    (Feature contributor: @Garrybest)

    2. Multi region HA support

    By leveraging the newly added spread-by-region constraint, users are now able to deploy workloads across regions, e.g. people may want their workloads always running in different regions for HA purposes. We also introduced two plugins to karmada-scheduler, which contribute to more accurate scheduling.

    • ClusterLocality is a scoring plugin that favors clusters already assigned.
    • SpreadConstraint is a filter plugin that filters clusters as per spread constraints.

    (Feature contributors: @huone1, @gf457832386)
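
    As an illustration of the spread-by-region constraint, a PropagationPolicy placement might carry a spread constraint like the following hedged sketch (the policy name and group counts are illustrative assumptions):

    apiVersion: policy.karmada.io/v1alpha1
    kind: PropagationPolicy
    metadata:
      name: nginx-propagation          # placeholder policy name
    spec:
      resourceSelectors:
        - apiVersion: apps/v1
          kind: Deployment
          name: nginx
      placement:
        spreadConstraints:
          - spreadByField: region      # spread the workload across regions
            maxGroups: 2               # place it in at most 2 regions
            minGroups: 2               # and in at least 2 regions, for HA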

    We are also in the progress of enhancing the multi-cluster failover mechanism. Part of the work has been included in this release. For example:

    • A new flag (--cluster-failure-threshold) has been added to both karmada-controller-manager and karmada-agent, which specifies the cluster failure threshold (defaults to 30s). A cluster will be considered not-ready only when it stays unhealthy longer than this threshold.
    • A new flag (--failover-eviction-timeout) has been added to karmada-controller-manager, which specifies the grace period of eviction (defaults to 5 minutes). If a cluster stays not-ready longer than this period, the controller taints the cluster. (Note: the taint is essentially the eviction order, and the implementation is planned for the next release.)

    (Feature contributors: @Garrybest, @dddddai)

    Fully adopted aggregated API

    The Aggregated API was initially introduced in Release 1.0, which allows users to access clusters through Karmada by a single aggregated API endpoint. By leveraging this feature, we introduced a lot of interesting features to karmadactl and kubectl-karmada.

    1. The get sub-command now supports clusters both in push and pull mode.

    # karmadactl get deployment -n default
    NAME      CLUSTER   READY   UP-TO-DATE   AVAILABLE   AGE     ADOPTION
    nginx     member1   2/2     2            2           33h     N
    nginx     member2   1/1     1            1           4m38s   Y
    podinfo   member3   2/2     2            2           27h     N
    

    2. The newly added logs command prints the container logs in a specific cluster.

    # ./karmadactl logs nginx-6799fc88d8-9mpxn -c nginx  -C member1
    /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
    /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
    10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
    ...
    

    3. We also added watch and exec commands to karmadactl, in addition to get and logs. They all use the aggregated API.

    (Feature contributor: @lonelyCZ)

    Distributed search and analytics engine for Kubernetes resources (alpha)

    The newly introduced karmada-search caches resources in clusters and allows users to search for resources without directly touching real clusters.

    # kubectl get --raw /apis/search.karmada.io/v1alpha1/search/cache/apis/apps/v1/deployments
    {
    	"apiVersion": "v1",
    	"kind": "List",
    	"metadata": {},
    	"items": [{
    		"apiVersion": "apps/v1",
    		"kind": "Deployment",
    		"metadata": {
    			"annotations": {
    				"cluster.karmada.io/name": "member1",
    			},
    		}
    	},
    	]
    }
    

    The karmada-search component also supports syncing cached resources to backend stores like Elasticsearch or OpenSearch. By leveraging the search engine, you can perform full-text searches with all desired features, by field and by index; rank results by score, sort them by field, and aggregate results.

    (Feature contributors: @huntsman-li, @liys87x)

    Resource Interpreter Webhook enhancement

    Introduced InterpretStatus for the Resource Interpreter Webhook framework, which enables customized resource status collection. Karmada can thereby learn how to collect status for your resources, especially custom resources. For example, a custom resource may have many status fields, and Karmada can collect only the ones you want.

    Refer to Customizing Resource Interpreter (https://github.com/karmada-io/karmada/blob/master/docs/userguide/customizing-resource-interpreter.md) for more details.

    (Feature contributor: @XiShanYongYe-Chang)

    Integrating verification with the ecosystem

    Benefiting from the Kubernetes native APIs, Karmada can easily integrate with the Kubernetes ecosystem. The following components have been verified by the Karmada community:

    (Feature contributors: @Poor12, @learner0810)

    Other Notable Changes

    Bug Fixes

    • karmadactl: Fixed the cluster joining failures in the case of legacy secrets. (@zgfh, #1306)
    • karmadactl: Fixed the issue that you cannot use the '-v 6' log level. (@zgfh, #1426)
    • karmadactl: Fixed the issue that the --namespace flag of init command did not work. (@sayaoailun, #1416)
    • karmadactl: Allowed namespaces to be customized. (@sayaoailun, #1449)
    • karmadactl: Fixed the init failure due to data path not clean. (@prodanlabs, #1455)
    • karmadactl: Fixed the init failure to read the KUBECONFIG environment variable. (@lonelyCZ, #1437)
    • karmadactl: Fixed the init command failure to select the default release version. (@prodanlabs, #1456)
    • karmadactl: Fixed the issue that the karmada-system namespace already exists when deploying karmada-agent. (@hanweisen, #1604)
    • karmadactl: Fixed the issue that the karmada-controller-manager args did not honor customized namespaces. (@prodanlabs, #1683)
    • karmadactl: Fixed a panic due to nil annotation when promoting resources to Karmada. (@duanmengkk, #1759)
    • karmadactl: Fixed the promote command failure to migrate cluster-scoped resources. (@duanmengkk, #1766)
    • karmadactl: fixed the karmadactl taint failure while the karmada control plane config is not located in the default path. (@wuyingjun-lucky, #1825)
    • helm-chart: Fixed the karmada-agent installation failure due to the lack of permission. (@AllenZMC, #1457)
    • helm-chart: Fixed the issue that version constraints skip pre-releases. (@pigletfly, #1444)
    • karmada-controller-manager: Fixed the issue that ResourceBinding may hinder en-queue in the case of schedule failures. (@mrlihanbo, #1499)
    • karmada-controller-manager: Fixed the panic when the interpreter webhook returns nil patch. (@CharlesQQ, #1584)
    • karmada-controller-manager: Fixed the RB/CRB controller failure to aggregate status in the case of work condition update. (@mrlihanbo, #1513)
    • karmada-aggregate-apiserver: Fixed timeout issue when requesting cluster/proxy with options -w or logs -f from karmadactl get. (@XiShanYongYe-Chang, #1620)
    • karmada-aggregate-apiserver: Fixed exec failed: error: unable to upgrade connection: you must specify at least 1 of stdin, stdout, stderr. (@pangsq, #1632)

    Features & Enhancements

    • karmada-controller-manager: Introduced several flags to specify controller's concurrent capacities(--rate-limiter-base-delay, --rate-limiter-max-delay, --rate-limiter-qps, --rate-limiter-bucket-size). (@pigletfly, #1399)
    • karmada-controller-manager: The klog flags now have been grouped for better readability. (@RainbowMango, #1468)
    • karmada-controller-manager: Fixed the FullyApplied condition of ResourceBinding/ClusterResourceBinding mislabeling issue in the case of non-scheduling. (@huone1, #1512)
    • karmada-controller-manager: Added default AggregateStatus webhook for DaemonSet and StatefulSet. (@Poor12, #1586)
    • karmada-controller-manager: OverridePolicy with empty ResourceSelector will be considered to match all resources just like nil. (@likakuli, #1706)
    • karmada-controller-manager: Introduced --failover-eviction-timeout to specify the grace period of eviction. Taints (cluster.karmada.io/not-ready or cluster.karmada.io/unreachable) will be set on unhealthy clusters after the period. (@Garrybest, #1781)
    • karmada-controller-manager/karmada-agent: Introduced --cluster-failure-threshold flag to specify cluster failure threshold. (@dddddai, #1829)
    • karmada-scheduler: Workloads can now be rescheduled after the cluster is unregistered. (@huone1, #1383)
    • karmada-scheduler: The klog flags now have been grouped for better readability. (@jameszhangyukun, #1491)
    • karmada-scheduler: Added a scoring plugin ClusterLocality to favor clusters already requested. (@huone1, #1334)
    • karmada-scheduler: Introduced filter plugin SpreadConstraint to filter clusters that do not meet the spread constraints. (@gf457832386, #1570)
    • karmada-scheduler: Supported spread constraints by region strategy. (@huone1, #1646)
    • karmada-webhook: Introduced --tls-cert-file-name and --tls-private-key-file-name flags to specify the server certificate and private key. (@mrlihanbo, #1464)
    • karmada-agent: The klog flags now have been grouped for better readability. (@lonelyCZ, #1389)
    • karmada-agent: Introduced several flags to specify the controller's concurrent capacities (--rate-limiter-base-delay, --rate-limiter-max-delay, --rate-limiter-qps, --rate-limiter-bucket-size). (@dddddai, #1505)
    • karmada-scheduler-estimator: The klog flags now have been grouped for better readability. (@AllenZMC, #1493)
    • karmadactl: Introduced --context flag to specify the context name to use. (@lonelyCZ, #1748)
    • karmadactl: Introduced --kube-image-mirror-country and --kube-image-registry flags to init subcommand for Chinese mainland users. (@wuyingjun-lucky, #1764)
    • karmadactl: Introduced the deinit sub-command to uninstall Karmada. (@prodanlabs, #1337)
    • Introduced Swagger docs for Karmada API. (@lonelyCZ, #1401)

    Other (Dependencies)

    • The base image alpine has been promoted to v3.15.1. (@RainbowMango, #1519)

    Deprecation

    • karmada-controller-manager: The hpa controller is disabled by default now. (@Poor12, #1580)
    • karmada-aggregated-apiserver: The flags --karmada-config and --master, deprecated in v1.1, have been removed from the codebase. (@AllenZMC, #1834)

    Contributors

    Thank you to everyone who contributed to this release!

    Users whose commits are in this release (alphabetically by user name)

    • @AllenZMC
    • @anu491
    • @carlory
    • @CharlesQQ
    • @chaunceyjiang
    • @chinmaym07
    • @CuiDengdeng
    • @dddddai
    • @duanmeng
    • @duanmengkk
    • @ErikJiang
    • @fanzhihai0215
    • @fleeto
    • @Garrybest
    • @gf457832386
    • @gy95
    • @hanweisen
    • @huiwq1990
    • @huntsman-li
    • @huone1
    • @ikaven1024
    • @jameszhangyukun
    • @kerthcet
    • @learner0810
    • @lfbear
    • @likakuli
    • @liys87x
    • @lonelyCZ
    • @lvyanru8200
    • @mikeshng
    • @mrlihanbo
    • @my-git9
    • @pangsq
    • @pigletfly
    • @Poor12
    • @prodanlabs
    • @RainbowMango
    • @sayaoailun
    • @snowplayfire
    • @stingshen
    • @Tingtal
    • @wuyingjun-lucky
    • @wwwnay
    • @XiShanYongYe-Chang
    • @xyz2277
    • @YueHonghui
    • @zgfh
    • @zirain
    Source code(tar.gz)
    Source code(zip)
    crds.tar.gz(31.69 KB)
    karmada-chart-v1.2.0.tgz(47.54 KB)
    karmadactl-darwin-amd64.tgz(24.57 MB)
    karmadactl-darwin-amd64.tgz.sha256(94 bytes)
    karmadactl-darwin-arm64.tgz(23.88 MB)
    karmadactl-darwin-arm64.tgz.sha256(94 bytes)
    karmadactl-linux-amd64.tgz(24.78 MB)
    karmadactl-linux-amd64.tgz.sha256(93 bytes)
    karmadactl-linux-arm64.tgz(22.93 MB)
    karmadactl-linux-arm64.tgz.sha256(93 bytes)
    kubectl-karmada-darwin-amd64.tgz(24.57 MB)
    kubectl-karmada-darwin-amd64.tgz.sha256(99 bytes)
    kubectl-karmada-darwin-arm64.tgz(23.88 MB)
    kubectl-karmada-darwin-arm64.tgz.sha256(99 bytes)
    kubectl-karmada-linux-amd64.tgz(24.78 MB)
    kubectl-karmada-linux-amd64.tgz.sha256(98 bytes)
    kubectl-karmada-linux-arm64.tgz(22.93 MB)
    kubectl-karmada-linux-arm64.tgz.sha256(98 bytes)
  • v1.1.2(Apr 29, 2022)

    Changes since v1.1.1

    Bug Fixes

    • karmadactl: Fixed the issue that the karmada-system namespace already exists when deploying karmada-agent. (#1608, @hanweisen)
    • karmadactl: Fixed the issue that the karmada-controller-manager args did not honor customized namespaces. (#1689, @prodanlabs)
    • karmada-controller-manager: Fixed the issue that ResourceBinding might be prevented from being re-enqueued in case of schedule failure. (#1507, @mrlihanbo)
    • karmada-controller-manager: Fixed the issue that the FullyApplied condition of ResourceBinding/ClusterResourceBinding was mislabeled when no scheduling took place. (#1517, @huone1)
    • karmada-controller-manager: Fixed the issue that the RB/CRB controller could not aggregate status when the Work condition was updated. (#1523, @mrlihanbo)
    • karmada-controller-manager: Fixed a panic when the interpreter webhook returns a nil patch. (#1592, @CharlesQQ)
    • karmada-aggregated-apiserver: Fixed a timeout issue when requesting cluster/proxy with the options -w or logs -f from karmadactl get. (#1630, @XiShanYongYe-Chang)
    • karmada-aggregated-apiserver: Fixed the exec failure "error: unable to upgrade connection: you must specify at least 1 of stdin, stdout, stderr". (#1642, @XiShanYongYe-Chang)

    Other

    • The base image alpine has been promoted to v3.15.1. (#1583, @RainbowMango)
    Source code(tar.gz)
    Source code(zip)
    kubectl-karmada-darwin-amd64.tgz(23.87 MB)
    kubectl-karmada-darwin-arm64.tgz(23.19 MB)
    kubectl-karmada-linux-amd64.tgz(24.06 MB)
    kubectl-karmada-linux-arm64.tgz(22.26 MB)
  • v1.0.3(Apr 29, 2022)

    Changes since v1.0.2

    Bug Fixes

    • karmadactl: Fixed the issue that the karmada-system namespace already exists when deploying karmada-agent. (#1609, @hanweisen)
    • karmadactl: Fixed the issue that the karmada-controller-manager args did not honor customized namespaces. (#1690, @prodanlabs)
    • karmada-controller-manager: Fixed the issue that the FullyApplied condition of ResourceBinding/ClusterResourceBinding was mislabeled when no scheduling took place. (#1518, @huone1)
    • karmada-controller-manager: Fixed the issue that the RB/CRB controller could not aggregate status when the Work condition was updated. (#1524, @mrlihanbo)
    • karmada-controller-manager: Fixed a panic when the interpreter webhook returns a nil patch. (#1591, @CharlesQQ)
    • karmada-aggregated-apiserver: Fixed a timeout issue when requesting cluster/proxy with the options -w or logs -f from karmadactl get. (#1631, @XiShanYongYe-Chang)
    • karmada-aggregated-apiserver: Fixed the exec failure "error: unable to upgrade connection: you must specify at least 1 of stdin, stdout, stderr". (#1641, @pangsq)

    Other

    • The base image alpine has been promoted to v3.15.1. (#1582, @RainbowMango)
    Source code(tar.gz)
    Source code(zip)
    kubectl-karmada-darwin-amd64.tgz(23.17 MB)
    kubectl-karmada-darwin-arm64.tgz(22.50 MB)
    kubectl-karmada-linux-amd64.tgz(23.37 MB)
    kubectl-karmada-linux-arm64.tgz(21.63 MB)
  • v1.0.2(Mar 18, 2022)

    Changes since v1.0.1

    Bug Fixes

    • karmadactl: Fixed the issue that the --namespace flag of the init command did not work. (#1452, @sayaoailun)
    • karmadactl: Fixed the init failure due to an unclean data path. (#1473, @prodanlabs)
    • karmadactl: Fixed the issue that init could not select the default release version. (#1495, @prodanlabs)
    • karmadactl: Fixed the issue that init could not read the KUBECONFIG environment variable. (#1482, @lonelyCZ)
    • helm chart: Fixed the issue that version constraints skip pre-releases. (#1466, @pigletfly)
    • karmada-controller-manager: Fixed a bug where the resource binding was occasionally not created. (#1384, @dddddai)
    Source code(tar.gz)
    Source code(zip)
    kubectl-karmada-darwin-amd64.tgz(23.41 MB)
    kubectl-karmada-darwin-arm64.tgz(22.86 MB)
    kubectl-karmada-linux-amd64.tgz(23.57 MB)
    kubectl-karmada-linux-arm64.tgz(21.94 MB)
  • v1.1.1(Mar 18, 2022)

    Changes since v1.1.0

    Bug Fixes

    • karmadactl: Fixed the issue that the --namespace flag of the init command did not work. (#1452, @sayaoailun)
    • karmadactl: Fixed the init failure due to an unclean data path. (#1473, @prodanlabs)
    • karmadactl: Fixed the issue that init could not read the KUBECONFIG environment variable. (#1482, @lonelyCZ)
    • karmadactl: Fixed the issue that init could not select the default release version. (#1496, @prodanlabs)
    • helm chart: Fixed the issue that version constraints skip pre-releases. (#1466, @pigletfly)
    Source code(tar.gz)
    Source code(zip)
    kubectl-karmada-darwin-amd64.tgz(23.87 MB)
    kubectl-karmada-darwin-arm64.tgz(23.18 MB)
    kubectl-karmada-linux-amd64.tgz(24.06 MB)
    kubectl-karmada-linux-arm64.tgz(22.26 MB)
  • v1.1.0(Feb 28, 2022)

    What's New

    Multi-Cluster Ingress

    The newly introduced MultiClusterIngress API exposes HTTP and HTTPS routes that target multi-cluster services within the Karmada control plane. The specification of MultiClusterIngress is compatible with Kubernetes Ingress.

    Traffic routing is controlled by rules defined on the MultiClusterIngress resource; a MultiClusterIngress controller is responsible for fulfilling the ingress. The Multi-Cluster-Nginx Ingress Controller is one of the MultiClusterIngress controller implementations maintained by the community.
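
    For illustration, a minimal MultiClusterIngress might look like the YAML sketch below (the host, namespace, and backend service names are hypothetical); it would be applied with kubectl against the Karmada API server:

    apiVersion: networking.karmada.io/v1alpha1
    kind: MultiClusterIngress
    metadata:
      name: demo
      namespace: default
    spec:
      ingressClassName: nginx
      rules:
        - host: demo.example.com
          http:
            paths:
              - path: /web
                pathType: Prefix
                backend:
                  service:
                    name: web        # backing multi-cluster service
                    port:
                      number: 80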

    (Feature contributors: @GitHubxsy @XiShanYongYe-Chang)

    Federated ResourceQuota

    The newly introduced FederatedResourceQuota provides constraints that limit total resource consumption per namespace across all clusters. It can limit the number of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace.
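
    As a rough sketch (the namespace and limits are hypothetical), a FederatedResourceQuota capping total consumption across all clusters might look like:

    apiVersion: policy.karmada.io/v1alpha1
    kind: FederatedResourceQuota
    metadata:
      name: team-quota
      namespace: demo
    spec:
      overall:            # total amount allowed across all member clusters
        cpu: "100"
        memory: 200Gi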

    (Feature contributors: @RainbowMango @XiShanYongYe-Chang)

    Configurability improvement for performance tuning

    The default number of reconciling workers has been increased and is now configurable. A larger number of workers means higher responsiveness but a heavier CPU and network load. The number of concurrent workers can be configured with the flags introduced to karmada-controller-manager and karmada-agent; a minimal sketch follows the flag lists below.

    Flags introduced to karmada-controller-manager:

    • --concurrent-work-syncs
    • --concurrent-namespace-syncs
    • --concurrent-resource-template-syncs
    • --concurrent-cluster-syncs
    • --concurrent-clusterresourcebinding-syncs
    • --concurrent-resourcebinding-syncs

    Flags introduced to karmada-agent:

    • --concurrent-work-syncs
    • --concurrent-cluster-syncs
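
    For instance, the worker counts could be raised by passing these flags on the component command line; the kubeconfig path and the values below are purely illustrative, not recommendations:

    /bin/karmada-controller-manager \
      --kubeconfig=/etc/karmada/karmada-apiserver.config \
      --concurrent-work-syncs=10 \
      --concurrent-resourcebinding-syncs=10 \
      --concurrent-cluster-syncs=5

    karmada-agent accepts its two flags (--concurrent-work-syncs, --concurrent-cluster-syncs) in the same way.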

    (Feature contributor: @pigletfly)

    Resource Interpreter Webhook Enhancement

    Introduced AggregateStatus support for the Resource Interpreter Webhook framework, which enables customized resource status aggregating.

    Introduced InterpreterOperationInterpretDependency support for the Resource Interpreter Webhook framework, which enables propagating workload's dependencies automatically.

    Refer to Customizing Resource Interpreter for more details.
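
    As a rough sketch of how such a webhook might be registered (the custom resource group/kind, URL, and CA bundle are hypothetical placeholders; the referenced docs hold the authoritative schema):

    apiVersion: config.karmada.io/v1alpha1
    kind: ResourceInterpreterWebhookConfiguration
    metadata:
      name: examples
    webhooks:
      - name: workloads.example.com
        rules:
          - operations: ["AggregateStatus", "InterpretDependency"]
            apiGroups: ["workload.example.io"]
            apiVersions: ["v1alpha1"]
            kinds: ["Workload"]
        clientConfig:
          url: https://interpreter.example.com:443/interpreter-workload
          caBundle: <base64-encoded CA bundle>   # placeholder
        interpreterContextVersions: ["v1alpha1"]
        timeoutSeconds: 3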

    (Feature contributors: @iawia002 @mrlihanbo)

    Other Notable Changes

    Bug Fixes

    • karmadactl and kubectl-karmada: Fixed that init cannot update the APIService. (@prodanlabs, #1207)
    • karmada-controller-manager: Fixed the ApplyPolicySucceed event type mistake (it should be Normal rather than Warning). (@Garrybest, #1267)
    • karmada-controller-manager and karmada-agent: Fixed that resync slows down reconciliation. (@Garrybest, #1265)
    • karmada-controller-manager/karmada-agent: Fixed continually updating cluster status due to unordered apiEnablements. (@pigletfly, #1304)
    • karmada-controller-manager: Fixed the issue that replicas set by OverridePolicy would be reset by the ReviseReplica interpreter hook. (@likakuli, #1352)
    • karmada-controller-manager: Fixed that ResourceBinding couldn't be created in a corner case. (@dddddai, #1368)
    • karmada-scheduler: Fixed inaccuracy in requested resources in the case that pod limits are specified but requests are not. (@Garrybest, #1225)
    • karmada-scheduler: Fixed spreadconstraints[i].MaxGroups is invalidated in some scenarios. (@huone1, #1324)

    Features & Enhancements

    • karmadactl: Introduced --tls-min-version flag to specify the minimum TLS version. (@carlory, #1278)
    • karmadactl: Improved the get command to show more useful information. (@lonelyCZ, #1270)
    • karmada-controller-manager/karmada-agent: Introduced --resync-period flag to specify reflector resync period (defaults to 0, meaning no resync). (@Garrybest, #1261)
    • karmada-controller-manager: Introduced --metrics-bind-address flag to specify the customized address for metrics. (@pigletfly, #1341)
    • karmada-webhook: Introduced --metrics-bind-address and --health-probe-bind-address flags. (@mrlihanbo, #1346)

    Instrumentation (Metrics and Events)

    • karmada-controller-manager: Fixed the ApplyPolicySucceed event type mistake (it should be Normal rather than Warning). (@Garrybest, #1267)

    Deprecation

    • OverridePolicy/ClusterOverridePolicy: The .spec.targetCluster and spec.overriders have been deprecated in favor of spec.overrideRules. (@RainbowMango #1238)
    • karmada-aggregate-apiserver: Deprecated --master and --karmada-config flags. Please use --kubeconfig instead. (@carlory, #1336)

    Contributors

    Thank you to everyone who contributed to this release!

    Users whose commits are in this release (alphabetically by user name)

    • @AllenZMC
    • @ashley-rongfang
    • @carlory
    • @CuiDengdeng
    • @dddddai
    • @EvaDD
    • @Fish-pro
    • @Garrybest
    • @helen-frank
    • @huone1
    • @iawia002
    • @jinglina
    • @kerthcet
    • @liangyongzhenya
    • @likakuli
    • @lonelyCZ
    • @mrlihanbo
    • @pigletfly
    • @prodanlabs
    • @RainbowMango
    • @RishiKumarRay
    • @Tingtal
    • @viniciuspietscher
    • @weilaaa
    • @wlp1153468871
    • @XiShanYongYe-Chang
    • @zach593
    • @zgfh
    Source code(tar.gz)
    Source code(zip)
    crds.tar.gz(31.69 KB)
    kubectl-karmada-darwin-amd64.tgz(23.86 MB)
    kubectl-karmada-darwin-arm64.tgz(23.17 MB)
    kubectl-karmada-linux-amd64.tgz(24.07 MB)
    kubectl-karmada-linux-arm64.tgz(22.26 MB)
  • v1.0.1(Jan 21, 2022)

    Changes since v1.0.0

    Bug Fixes

    • karmadactl and kubectl-karmada: Fixed the issue that init could not update the APIService. (#1207, @prodanlabs)
    • karmada-controller-manager: Fixed the ApplyPolicySucceed event type mistake (it should be Normal rather than Warning). (#1267, @Garrybest)
    • karmada-controller-manager and karmada-agent: Fixed the issue that resync slows down reconciliation. (#1265, @Garrybest)
    Source code(tar.gz)
    Source code(zip)
    crds.tar.gz(26.87 KB)
    kubectl-karmada-darwin-amd64.tgz(23.40 MB)
    kubectl-karmada-darwin-arm64.tgz(22.85 MB)
    kubectl-karmada-linux-amd64.tgz(23.56 MB)
    kubectl-karmada-linux-arm64.tgz(21.92 MB)
  • v1.0.0(Dec 31, 2021)

    What's New

    Aggregated Kubernetes API Endpoint

    The newly introduced karmada-aggregated-apiserver component aggregates all registered clusters and allows users to access member clusters through Karmada by the proxy endpoint, e.g.

    - Retrieve `Node` from `member1`:  /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes
    - Retrieve `Pod` from `member2`: /apis/cluster.karmada.io/v1alpha1/clusters/member2/proxy/api/v1/namespaces/default/pods
    

    Please refer to user guide for more details.
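
    For example, assuming a cluster named member1 has been registered and kubectl points at the Karmada API server, its nodes could be listed through the proxy endpoint:

    kubectl get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes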

    (Feature contributor: @kevin-wangzefeng @GitHubxsy @XiShanYongYe-Chang @mrlihanbo @jrkeen @prodanlabs @carlory @RainbowMango)

    Promoting Workloads from Legacy Clusters to Karmada

    Legacy workloads running in Kubernetes can now be promoted to Karmada smoothly, without restarting containers. With the promote command added to the Karmada CLI, any kind of Kubernetes resource can be promoted to Karmada easily, e.g.

    # Promote deployment(default/nginx) from cluster1 to Karmada
    kubectl karmada promote deployment nginx -n default -c cluster1
    

    (Feature contributor: @lonelyCZ @iawia002 @dddddai)

    Verified Integration with Ecosystem

    Benefiting from the Kubernetes native API support, Karmada can easily integrate with the single-cluster ecosystem for multi-cluster, multi-cloud purposes. The following components have been verified by the Karmada community:

    (Feature contributor: @lfbear @learner0810 @zirain @Rains6 @gy95 @XiShanYongYe-Chang )

    OverridePolicy Improvements

    By leveraging the newly introduced RuleWithCluster field in OverridePolicy and ClusterOverridePolicy, users are now able to define override policies for specified workloads with a single policy.

    (Feature contributor: @iawia002 @lfbear @RainbowMango @lonelyCZ @jameszhangyukun )

    Karmada Installation Improvements

    Introduced init command to Karmada CLI. Users are now able to install Karmada by a single command.

    Please refer to Installing Karmada for more details.
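
    A minimal sketch, assuming the current kubeconfig points at the cluster that will host the control plane (the installation docs list the available flags):

    kubectl karmada init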

    (Feature contributor: @prodanlabs @lonelyCZ @jrkeen )

    Configuring Karmada Controllers

    All controllers provided by Karmada now work as plug-ins, and users can turn off any of them from the default enabled list. See the --controllers flag of karmada-controller-manager and karmada-agent for more details.
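
    As a sketch, the flag follows the familiar controller-manager convention: '*' enables the default set and a leading '-' disables a named controller (the controller name below is only an example):

    /bin/karmada-controller-manager --controllers=*,-hpa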

    (Feature contributor: @snowplayfire @iawia002 @jameszhangyukun )

    Resource Interpreter Webhook Enhancement

    Introduced ReviseReplica support for the Resource Interpreter Webhook framework, which enables scheduling all customized workloads just like Kubernetes native ones.

    Refer to Resource Interpreter Webhook Proposal for more design details.

    (Feature contributor: @iawia002)

    Other Notable Changes

    Bug Fixes

    • karmada-controller-manager: Fixed the issue that the annotation of resource template cannot be updated. (@mrlihanbo #1012)
    • karmada-controller-manager: Fixed the issue of generating binding reference key. (@JarHMJ #1003)
    • karmada-controller-manager: Fixed the inefficiency of en-queue failed task issue. (@Garrybest #1068)

    Features & Enhancements

    • Karmada CLI: Introduced --cluster-provider flag to join command to specify provider of joining cluster. (@2hangchen #1025)
    • Karmada CLI: Introduced taint command to set taints for clusters. (@lonelyCZ #889)
    • Karmada CLI: The Applied condition of Work and Scheduled/FullyApplied of ResourceBinding are available for kubectl get. (@lonelyCZ #1110)
    • karmada-controller-manager: The cluster discovery feature now supports v1beta1 of cluster-api. (@iawia002 #1029)
    • karmada-controller-manager: The Job's startTime and completionTime now available at resource template. (@Garrybest #1034)
    • karmada-controller-manager: introduced --controllers flag to enable or disable controllers. (@snowplayfire #1083)
    • karmada-controller-manager: Support retain ownerReference from observed objects. (@snowplayfire #1116)
    • karmada-controller-manager and karmada-agent: Introduced cluster-cache-sync-timeout flag to specify the time waiting for cache sync. (@snowplayfire #1112)

    Instrumentation (Metrics and Events)

    • karmada-scheduler-estimator: Introduced /metrics endpoint to emit metrics. (@Garrybest #1030)
    • Introduced ApplyPolicy and ScheduleBinding events for resource template. (@mrlihanbo #1070)

    Deprecation

    • The ReplicaSchedulingPolicy API deprecated at v0.9.0 now has been removed in favor of ReplicaScheduling of PropagationPolicy. (@iawia002 #1161)

    Contributors

    Thank you to everyone who contributed to this release!

    Users whose commits are in this release (alphabetically by user name)

    • @2hangchen
    • @aven-ai
    • @BDXGD
    • @carlory
    • @dddddai
    • @eightzero
    • @fanzhihai0215
    • @feeltimeQ
    • @fleeto
    • @Garrybest
    • @ghl116
    • @gy95
    • @haiker2011
    • @Haleygo
    • @iawia002
    • @imroc
    • @JackZxj
    • @jameszhangyukun
    • @JarHMJ
    • @jrkeen
    • @kevin-wangzefeng
    • @leonharetd
    • @lfbear
    • @lonelyCZ
    • @mrlihanbo
    • @Phil-sun
    • @pigletfly
    • @prodanlabs
    • @RainbowMango
    • @Rains6
    • @Shike-Ada
    • @snowplayfire
    • @wawa0210
    • @XiShanYongYe-Chang
    • @zirain
    Source code(tar.gz)
    Source code(zip)
    crds.tar.gz(26.87 KB)
    kubectl-karmada-darwin-amd64.tgz(23.40 MB)
    kubectl-karmada-darwin-arm64.tgz(22.85 MB)
    kubectl-karmada-linux-amd64.tgz(23.56 MB)
    kubectl-karmada-linux-arm64.tgz(21.93 MB)
  • v0.10.1(Nov 24, 2021)

  • v0.10.0(Nov 20, 2021)

    What's New

    Resource Interpreter Webhook

    The newly introduced Resource Interpreter Webhook framework allows users to implement their own CRD plugins that will be consulted at all parts of propagation process. With this feature, CRDs and CRs will be propagated just like Kubernetes native resources, which means all scheduling primitives also support custom resources. An example as well as some helpful utilities are provided to help users better understand how this framework works.

    Refer to Proposal for more details.

    (Feature contributor: @RainbowMango, @XiShanYongYe-Chang, @gy95)

    Significant Scheduling Enhancement

    1. Introduced the dynamicWeight primitive to PropagationPolicy and ClusterPropagationPolicy. With this feature, replicas can be divided according to a dynamic weight list, where the weight of each cluster is calculated from its available replicas during scheduling; this can significantly balance cluster utilization. A minimal sketch follows this list. (#841)

    2. Introduced Job schedule (divide) support. A Job that desires many replicas can now be divided across multiple clusters, just like a Deployment. This feature makes it possible to run huge Jobs across small clusters. (#898)
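
    The YAML sketch below shows a PropagationPolicy dividing replicas dynamically by available replicas (the policy, workload, and cluster names are hypothetical):

    apiVersion: policy.karmada.io/v1alpha1
    kind: PropagationPolicy
    metadata:
      name: nginx-propagation
      namespace: default
    spec:
      resourceSelectors:
        - apiVersion: apps/v1
          kind: Deployment
          name: nginx
      placement:
        clusterAffinity:
          clusterNames: [member1, member2]
        replicaScheduling:
          replicaSchedulingType: Divided
          replicaDivisionPreference: Weighted
          weightPreference:
            dynamicWeight: AvailableReplicas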

    (Feature contributor: @Garrybest )

    Workloads Observation from Karmada Control Plane

    After workloads (e.g. Deployments) are propagated to member clusters, users may also want to get the overall workload status across many clusters, especially the status of each pod. In this release, a get subcommand was introduced to kubectl-karmada. With this command, users are now able to get all kinds of resources deployed in member clusters from the Karmada control plane.

    For example (get deployment and pods across clusters):

    $ kubectl karmada get deployment
    NAME    CLUSTER   READY   UP-TO-DATE   AVAILABLE   AGE   ADOPTION
    nginx   member2   1/1     1            1           19m   Y
    nginx   member1   1/1     1            1           19m   Y
    $ kubectl karmada get pods
    NAME                     CLUSTER   READY   STATUS    RESTARTS   AGE
    nginx-6799fc88d8-vzdvt   member1   1/1     Running   0          31m
    nginx-6799fc88d8-l55kk   member2   1/1     Running   0          31m
    

    (Feature contributor: @lfbear @QAQ-rookie)

    Other Notable Changes

    • karmada-scheduler-estimator: The number of pods becomes an important reference when calculating available replicas for the cluster. (@Garrybest, #777)
    • The labels (resourcebinding.karmada.io/namespace, resourcebinding.karmada.io/name, clusterresourcebinding.karmada.io/name) which were previously added on the Work object now have been moved to annotations. (@XiShanYongYe-Chang, #752)
    • Bugfix: Fixed the impact of cluster unjoining on resource status aggregation. (@dddddai, #817)
    • Instrumentation: Introduced events (SyncFailed and SyncSucceed) to the Work object. (@wawa0210, #800)
    • Instrumentation: Introduced condition (Scheduled) to the ResourceBinding and ClusterResourceBinding. (@dddddai, #823)
    • Instrumentation: Introduced events (CreateExecutionNamespaceFailed and RemoveExecutionNamespaceFailed) to the Cluster object. (@pigletfly, #749)
    • Instrumentation: Introduced several metrics (workqueue_adds_total, workqueue_depth, workqueue_longest_running_processor_seconds, workqueue_queue_duration_seconds_bucket) for karmada-agent and karmada-controller-manager. (@Garrybest, #831)
    • Instrumentation: Introduced condition (FullyApplied) to the ResourceBinding and ClusterResourceBinding. (@lonelyCZ, #825)
    • karmada-scheduler: Introduced feature gates. (@iawia002, #805)
    • karmada-controller-manager: Resources deleted from member clusters now use "Background" as the default delete option. (@RainbowMango, #970)

    Contributors

    Thank you to everyone who contributed to this release!

    Users whose commits are in this release (alphabetically by user name)

    • @2hangchen
    • @algebra2k
    • @benjaminhuo
    • @Charlie17Li
    • @ctripcloud
    • @dddddai
    • @duguhaotian
    • @fleeto
    • @Garrybest
    • @gf457832386
    • @gy95
    • @hyschumi
    • @iawia002
    • @jameszhangyukun
    • @kerthcet
    • @kevin-wangzefeng
    • @learner0810
    • @lfbear
    • @lonelyCZ
    • @mrlihanbo
    • @penghuima
    • @Phil-sun
    • @pigletfly
    • @QAQ-rookie
    • @RainbowMango
    • @snowplayfire
    • @TeodoraBoros
    • @wawa0210
    • @wzshiming
    • @XiShanYongYe-Chang
    • @youhonglian
    • @yvoilee
    Source code(tar.gz)
    Source code(zip)
    crds.tar.gz(23.83 KB)
    kubectl-karmada-darwin-amd64.tgz(22.73 MB)
    kubectl-karmada-darwin-arm64.tgz(22.21 MB)
    kubectl-karmada-linux-amd64.tgz(22.89 MB)
    kubectl-karmada-linux-arm64.tgz(21.29 MB)
  • v0.9.0(Sep 30, 2021)

    What's New

    Upgrading support

    Users are now able to upgrade from the previous version smoothly. With the multi-version feature of CRDs, objects with different schemas can be automatically converted between versions. Karmada uses semantic versioning and will provide workarounds for inevitable breaking changes.

    In this release, ResourceBinding and ClusterResourceBinding were promoted to v1alpha2, and the previous v1alpha1 version remains available for one more release. With the upgrading instruction, the previous version of Karmada can be upgraded smoothly.

    (Feature contributor: @RainbowMango )

    Introduced karmada-scheduler-estimator to facilitate end-to-end scheduling accuracy

    The Karmada scheduler aims to assign workloads to clusters according to the constraints and available resources of each member cluster. The kube-scheduler running on each cluster takes the responsibility of assigning Pods to Nodes. Even though Karmada has the capacity to reschedule failed workloads between member clusters, the community still commits a lot of effort to improving the accuracy of end-to-end scheduling.

    The karmada-scheduler-estimator is an effective assistant to karmada-scheduler; it provides prediction-based scheduling decisions that can significantly improve scheduling efficiency and avoid waves of rescheduling among clusters. Note that this feature is implemented as a pluggable add-on. For instructions, please refer to the scheduler estimator guideline.

    (Feature contributor: @Garrybest )

    Maintainability improvements

    A bunch of significant maintainability improvements were added to this release, including:

    • Simplified Karmada installation with helm chart. (Feature contributor: @algebra2k @jrkeen )

    • Provided metrics to observe scheduler status; the metrics are now served at /metrics of karmada-scheduler. With these metrics, users are now able to evaluate the scheduler's performance and identify bottlenecks. (Feature contributor: @qianjun1993 )

    • Provided events to Karmada API objects as supplemental information to debug problems. (Feature contributor: @pigletfly )

    Other Notable Changes

    • karmada-controller-manager: The ResourceBinding/ClusterResourceBinding won't be deleted after the associated PropagationPolicy/ClusterPropagationPolicy is removed; it remains available until the resource template is removed. (@qianjun1993, #601)
    • Introduced --leader-elect-resource-namespace to the karmada-controller-manager/karmada-scheduler/karmada-agent components, which is used to specify the namespace of the election object. (@XiShanYongYe-Chang #698)
    • Deprecation: The API ReplicaSchedulingPolicy has been deprecated and will be removed from the following release. The feature now has been integrated into ReplicaScheduling.
    • Introduced kubectl-karmada commands as the extensions for kubectl. (@XiShanYongYe-Chang #686)
    • karmada-controller-manager introduced a version command to show version information. (@RainbowMango #717)
    • karmada-scheduler/karmada-webhook/karmada-agent/karmada-scheduler-estimator introduced a version command to show version information. (@lonelyCZ #719)
    • Provided instructions about how to use the Submariner to connect the network between member clusters. (@XiShanYongYe-Chang #737 )
    • Added four metrics to the karmada-scheduler to monitor scheduler performance. (@qianjun1993 #747)

    Contributors

    Thank you to everyone who contributed to this release!

    Users whose commits are in this release (alphabetically by user name)

    • @2hangchen
    • @CKchen0726
    • @dddddai
    • @ESonata
    • @fleeto
    • @Garrybest
    • @Hrishikesh156
    • @iawia002
    • @jrkeen
    • @just1900
    • @kerthcet
    • @lfbear
    • @lonelyCZ
    • @mrlihanbo
    • @MukulKolpe
    • @Onyinye91-ctrl
    • @phantooom
    • @pigletfly
    • @qianjun1993
    • @RainbowMango
    • @saiyan86
    • @smartding
    • @XiShanYongYe-Chang
    • @zhuyaguang
    Source code(tar.gz)
    Source code(zip)
    crds.tar.gz(23.05 KB)
    kubectl-karmada-darwin-amd64.tar.gz(19.05 MB)
    kubectl-karmada-darwin-arm64.tar.gz(18.61 MB)
    kubectl-karmada-linux-amd64.tar.gz(19.14 MB)
    kubectl-karmada-linux-arm64.tar.gz(17.79 MB)
  • v0.8.0(Aug 20, 2021)

    What's New

    Automatic cluster discovery with cluster-api

    For users who are using cluster-api (sigs.k8s.io/cluster-api), Karmada is now able to automatically discover & join clusters when they are provisioned, and unjoin them when they are destroyed.

    Note that this feature is implemented as a built-in plugin. To enable it, simply set the following two flags in the karmada-controller-manager config:

    --cluster-api-kubeconfig string        Path to the cluster-api management cluster kubeconfig file.
    --cluster-api-context string           Name of the cluster context in cluster-api management cluster kubeconfig file.
    

    (Feature contributor: @XiShanYongYe-Chang )

    Introduced CommandOverrider and ArgsOverrider to simplify command customization per cluster

    For multi-cluster applications, it's quite common to set different arguments when running on different clusters or environments. In this release, two overrider plugins, CommandOverrider and ArgsOverrider, are introduced based on industry best practices. These handy tools let users declare per-cluster command and argument differences and avoid configuration mistakes.

    Workload types supported now are: Deployment, ReplicaSet, DaemonSet, StatefulSet and Pod; more types, including CRDs, will be supported in later releases.
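
    A rough YAML sketch (the cluster, container, and argument values are hypothetical; at this release overriders were declared under spec.targetCluster/spec.overriders, a layout later superseded by overrideRules as noted in the v1.1.0 deprecation section above):

    apiVersion: policy.karmada.io/v1alpha1
    kind: OverridePolicy
    metadata:
      name: nginx-args-override
      namespace: default
    spec:
      resourceSelectors:
        - apiVersion: apps/v1
          kind: Deployment
          name: nginx
      targetCluster:
        clusterNames: [member1]
      overriders:
        argsOverrider:
          - containerName: nginx
            operator: add            # add | remove
            value: ["--debug"]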

    (Feature contributor: @lfbear @betaincao )

    Better integration support with the Kubernetes ecosystem

    Karmada's support for Kubernetes-native APIs and its patterns for running cloud-native applications make it quite easy to integrate with other projects in the Kubernetes ecosystem for multi-cluster, multi-cloud purposes.

    This release adds several useful features that help Karmada work seamlessly with other systems:

    • ResourceBinding and ClusterResourceBinding now support presenting the applied status. (@pigletfly #595)
    • More types of resources now support aggregating status to the resource template, including Job, Service, and Ingress. (@mrlihanbo #609)
    • argo-cd has also been verified to run fully featured with Karmada to achieve multi-cluster GitOps.

    Other Notable Changes

    • karmadactl: introduced cordon and uncordon commands to mark a cluster schedulable and un-schedulable. (#464, @algebra2k )
    • karmada-controller-manager: introduced --skipped-propagating-namespaces flag to skip resources in certain namespaces from propagating. (#533, @pigletfly )
    • karmada-controller-manager/karmada-agent/karmada-scheduler: Introduced flags to config the QPS and burst which are used to control the client traffic interacting with Karmada or member cluster's kube-apiserver. (#611, @Garrybest )
      • --cluster-api-qps QPS to use while talking with cluster kube-apiserver.
      • --cluster-api-burst Burst to use while talking with cluster kube-apiserver.
      • --kube-api-qps QPS to use while talking with karmada-apiserver.
      • --kube-api-burst Burst to use while talking with karmada-apiserver.
    • Karmada quick-start scripts now support running on Mac OS. (#538, @lfbear )

    Contributors

    Thank you to everyone who contributed to this release!

    Users whose commits are in this release (alphabetically by user name)

    • @algebra2k
    • @betaincao
    • @garfcat
    • @Garrybest
    • @gy95
    • @Iceber
    • @lfbear
    • @lushenle
    • @mrlihanbo
    • @phantooom
    • @pigletfly
    • @qianjun1993
    • @RainbowMango
    • @wawa0210
    • @weilaaa
    • @XiShanYongYe-Chang
    Source code(tar.gz)
    Source code(zip)
  • v0.7.0(Jul 12, 2021)

    What's New

    Support multi-cluster service discovery

    In many cases, a Kubernetes user may want to split their deployments across multiple clusters, but still retain mutual dependencies between workloads running in those clusters.

    Users are now able to export and import services between clusters with Multi-Cluster Service API (MCS-API). (@XiShanYongYe-Chang)

    Support more precise cluster status management

    Besides reporting cluster status, the cluster status controller now also renews the lease. The newly introduced cluster monitor watches the lease and will mark a cluster's ready status as unknown in case the cluster status controller stops working. (@Garrybest)

    Support replica scheduling based on cluster resources

    In some scenarios, users want to divide the replicas in a deployment across multiple clusters if a single cluster doesn't have sufficient resources. Users are now able to declare the replica scheduling preference with the new field ReplicaDivisionPreference in PropagationPolicy and ClusterPropagationPolicy. (@qianjun1993)

    Support more convenient APIs to divide replicas by weight list

    Users are now able to declare cluster weights via ReplicaDivisionPreference in PropagationPolicy and ClusterPropagationPolicy; with the preference Weighted, the scheduler divides replicas according to the WeightPreference. (@qianjun1993)

    This feature is designed to replace the standalone ReplicaSchedulingPolicy API in the future.
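
    A minimal YAML sketch of a static weight list (names are hypothetical; with the weights below, replicas are split between member1 and member2 in a 2:1 ratio):

    apiVersion: policy.karmada.io/v1alpha1
    kind: PropagationPolicy
    metadata:
      name: nginx-propagation
      namespace: default
    spec:
      resourceSelectors:
        - apiVersion: apps/v1
          kind: Deployment
          name: nginx
      placement:
        replicaScheduling:
          replicaSchedulingType: Divided
          replicaDivisionPreference: Weighted
          weightPreference:
            staticWeightList:
              - targetCluster:
                  clusterNames: [member1]
                weight: 2
              - targetCluster:
                  clusterNames: [member2]
                weight: 1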

    Other Notable Changes

    • karmada-agent: Introduced --karmada-context flag to indicate the cluster context in karmada kubeconfig file. (#415, @mrlihanbo)
    • karmada-agent and karmada-controller-manager: Introduced --cluster-lease-duration and --cluster-lease-renew-interval-fraction flags to specify the lease expiration period and renew interval fraction. (#421, @pigletfly)
    • karmada-scheduler: Added a filter plugin to prevent the cluster from scheduling if the required API is not installed. (#470, @vincent-pli)
    • karmada-controller-manager: Introduced --skipped-propagating-apis flag to skip the resources from propagating. (#345, @pigletfly)
    • Installation: Now the hack/deploy-karmada.sh and hack/deploy-karmada-agent.sh scripts support installing Karmada components on both Kind clusters and standalone clusters. (#458, @lfbear)
    • If a resource already exists in a member cluster, Karmada will refuse to propagate and adopt it by default, in order to avoid conflicts. (#471, @mrlihanbo)

    Contributors

    Thank you to everyone who contributed to this release!

    Users whose commits are in this release (alphabetically by user name)

    • @garfcat
    • @Garrybest
    • @gy95
    • @huiwq1990
    • @likakuli
    • @lfbear
    • @mrlihanbo
    • @pigletfly
    • @qianjun1993
    • @RainbowMango
    • @RhnSharma
    • @shinytang6
    • @vincent-pli
    • @XiShanYongYe-Chang
    Source code(tar.gz)
    Source code(zip)
  • v0.6.0(May 29, 2021)

    What's New

    Support syncing with member clusters behind a proxy

    In some scenarios, certain clusters may not be directly reachable from the Internet, such as:

    • The member clusters are behind a NAT gateway from the Karmada control plane
    • The member clusters are in an on-prem Intranet while Karmada runs in the cloud

    By setting proxy-url in the kubeconfig when registering member clusters, Karmada will talk to member clusters through the indicated proxy. (#307, @liufen90)
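
    For example, with a reasonably recent kubectl the proxy could be recorded in the member cluster's kubeconfig before registering it (the server and proxy addresses are hypothetical):

    kubectl config set-cluster member1 \
      --server=https://member1.example.com:6443 \
      --proxy-url=http://proxy.example.com:8080 \
      --kubeconfig=member1.kubeconfig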

    Introduced ImageOverrider for simplifying image replacement

    In most scenarios where clusters run in different clouds or data centers, workloads require different image registries. ImageOverrider is a handy tool to override the images of a workload before it is propagated to clusters. (#370, @XiShanYongYe-Chang)
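
    A rough YAML sketch (the registry and cluster names are hypothetical; like the ArgsOverrider example above, this uses the early targetCluster/overriders layout):

    apiVersion: policy.karmada.io/v1alpha1
    kind: OverridePolicy
    metadata:
      name: nginx-image-override
      namespace: default
    spec:
      resourceSelectors:
        - apiVersion: apps/v1
          kind: Deployment
          name: nginx
      targetCluster:
        clusterNames: [member1]
      overriders:
        imageOverrider:
          - component: Registry      # Registry | Repository | Tag
            operator: replace
            value: registry.member1.example.com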

    Support scheduling based on cluster taint toleration

    karmada-scheduler now takes taints on member clusters and tolerations defined in PropagationPolicy and ClusterPropagationPolicy into account when scheduling resources. (#320, @mrlihanbo)

    Support scheduling based on cluster topology

    karmada-scheduler now supports scheduling resources according to the topology information (cluster/provider/region/zone) defined in cluster objects. (#357, @mrlihanbo)

    Other Notable Changes

    • Installation: introduced hack/remote-up-karmada.sh to install Karmada on a specified Kubernetes as host. (#367, @lfbear)
    • karmadactl: introduced the version command to show the version it is built from. Try it with the command karmadactl version. (#285, @algebra2k)
    • API: added short name for most APIs. (#376, @pigletfly)
    • The resource templates now match PropagationPolicy or ClusterPropagationPolicy in alphabetical order when there are multiple policies that match. (#306, @XiShanYongYe-Chang)
    • Always generates ResourceBinding objects for namespace-scoped resource template. (#315, @vincent-pli)
    • karmada-controller-manager: introduced the leader-elect command line flag to enable or disable leadership election. (#321, @pigletfly)
    • The Work object names now consist of the resource template's name, kind and namespace. (#359, @Garrybest)

    Contributors

    Thank you to everyone who contributed to this release!

    Users whose commits are in this release (alphabetically by user name)

    • @algebra2k
    • @anirudhramnath
    • @daixiang0
    • @futuretea
    • @Garrybest
    • @gy95
    • @hantmac
    • @huiwq1990
    • @Iceber
    • @kevin-wangzefeng
    • @leofang94
    • @LeoLiuYan
    • @liufen90
    • @lfbear
    • @mrlihanbo
    • @pigletfly
    • @RainbowMango
    • @vincent-pli
    • @XiShanYongYe-Chang
    • @yangcheng-icbc
    Source code(tar.gz)
    Source code(zip)
  • v0.5.0(Apr 20, 2021)

    What's New

    Support resource status aggregation from Karmada

    Users are now able to query the aggregated status of resources (propagated by Karmada) from the Karmada API server, with no need to connect to each member cluster. The status of resources in member clusters is aggregated to their binding objects. In addition, if the resource type is Deployment, the deployment status is also reflected.

    karmada-agent to support pull-based synchronization between the control plane and member clusters

    karmada-agent is introduced in this release to support cases where the member clusters are not directly reachable from the Karmada control plane. The agent pulls all useful configurations from the Karmada control plane and applies them to the member clusters it serves. The karmada-agent also completes cluster registration automatically.

    ReplicaSchedulingPolicy API to customize replica scheduling constraints of Deployments

    Users are now able to customize the replica scheduling constraints of Deployments with the ReplicaSchedulingPolicy API. The replicas will be divided among member clusters according to the weight list indicated by the policy.

    Other Notable Changes

    • The labels karmada.io/override and karmada.io/cluster-override have been deprecated and replaced by policy.karmada.io/applied-overrides and policy.karmada.io/applied-cluster-overrides to indicate applied override rules.
    • The ResourceBinding and ClusterResourceBinding names now consist of resource kind and resource name.
    • Both PropagationPolicy and ClusterPropagationPolicy names are now restricted to no more than 63 characters.
    • OverridePolicy and ClusterOverridePolicy changes will take effect immediately now.
    • Users are now able to use new flag --cluster-status-update-frequency when configuring karmada-agent and karmada-controller-manager, to specify cluster status update frequency.

    Contributors

    Thank you to everyone who contributed to this release!

    Users whose commits are in this release (alphabetically by user name)

    • @kevin-wangzefeng
    • @mrlihanbo
    • @RainbowMango
    • @tinyma123
    • @XiShanYongYe-Chang
    • @yangcheng-icbc
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Mar 13, 2021)

    What's New

    New policy APIs have been added to support cluster-level resource propagation and customization

    Users are now able to use ClusterPropagationPolicy to propagate both cluster-scoped and namespace-scoped resources. In addition, users are able to use ClusterOverridePolicy to define an overall policy to realize differentiated propagation.

    Support resource and policy detector

    The detector watches changes of both resources and policies (PropagationPolicy and ClusterPropagationPolicy); all changes on resources or policies take effect immediately.

    Namespace Auto-provision feature gets on board

    Namespaces created on Karmada will be synced to all member clusters automatically. Users don't need to propagate namespaces anymore.

    Scheduler is now able to reschedule resources when a policy changes

    Once the Placement rule in the PropagationPolicy changes, the scheduler will reschedule to meet the declaration.

    Scheduler now supports failure recovery

    Once any of the clusters fails, the scheduler is now able to re-schedule the resources to available clusters. This feature is controlled by the --failover flag and is disabled by default.

    Other Notable Changes

    • The PropagationWork API is now Work and located at the work.karmada.io group.
    • The PropagationBinding API is now ResourceBinding and located at the work.karmada.io group.
    • The label karmada.io/driven-by has been deprecated and replaced by propagationpolicy.karmada.io/namespace, propagationpolicy.karmada.io/name, and clusterpropagationpolicy.karmada.io/name.
    • The label karmada.io/created-by has been deprecated and replaced by propagationpolicy.karmada.io/namespace, propagationpolicy.karmada.io/name, clusterpropagationpolicy.karmada.io/name, resourcebinding.karmada.io/namespace, resourcebinding.karmada.io/name, clusterresourcebinding.karmada.io/name, work.karmada.io/namespace, work.karmada.io/name.
    • Added new annotation policy.karmada.io/applied-placement for both ResourceBinding and ClusterResourceBinding resources, to indicate the placement rule.
    • Added Validating Admission Webhook to restrict resource selector change for PropagationPolicy and ClusterPropagationPolicy objects.

    Contributors

    Thank you to everyone who contributed to this release!

    Users whose commits are in this release (alphabetically by user name)

    • @GitHubxsy
    • @kevin-wangzefeng
    • @mrlihanbo
    • @RainbowMango
    • @tinyma123
    • @XiShanYongYe-Chang
    Source code(tar.gz)
    Source code(zip)
  • v0.3.0(Feb 8, 2021)

    What's New

    Support override resources when propagating to member clusters

    Users are now able to specify override policies to customize specific resource fields for different clusters. (#130, @RainbowMango, @mrlihanbo)

    Support labelselector in cluster affinity

    Users are now able to use ClusterAffinity.LabelSelector in the PropagationPolicy API to restrict target clusters when propagating resources. (#149, @mrlihanbo)

    Support spread constraints

    Users are now able to specify resource spread constraints in propagation policies:
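
    A minimal YAML sketch (the names and numbers are illustrative; at this release only spreading by cluster is available, with the options listed below arriving later):

    apiVersion: policy.karmada.io/v1alpha1
    kind: PropagationPolicy
    metadata:
      name: nginx-propagation
      namespace: default
    spec:
      resourceSelectors:
        - apiVersion: apps/v1
          kind: Deployment
          name: nginx
      placement:
        spreadConstraints:
          - spreadByField: cluster   # schedule to at least 1 and at most 2 clusters
            maxGroups: 2
            minGroups: 1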

    More constraint options will be introduced in the later releases:

    • SpreadByFieldRegion: resource will be spread by region.
    • SpreadByFieldZone: resource will be spread by zone.
    • SpreadByFieldProvider: resource will be spread by cloud providers.

    Added webhook components to mutate and validate resources automatically

    Introduced a new component named karmada-webhook for implementing Mutating and Validating webhooks. (#133, @RainbowMango)

    Other Notable Changes

    • E2E testing time consumption has been significantly reduced. (#119, @mrlihanbo)
    • Provided generic client for operating both Kubernetes and Karmada APIs. (#126, @RainbowMango)
    • The MemberCluster API is now Cluster. (#139, @kevin-wangzefeng)
    • The API group propagationstrategy.karmada.io is now policy.karmada.io. (#142, @kevin-wangzefeng)
    • Supported skipping member cluster TLS verification. (#159, @mrlihanbo)
    • Any unexpected modification of resources in member clusters will be amended automatically. (#127, @mrlihanbo)
    Source code(tar.gz)
    Source code(zip)