OpenYurt - Extending your native Kubernetes to edge (project under CNCF)

Overview

openyurtio/openyurt



English | 简体中文

What is NEW!
Latest Release: September 26th, 2021. OpenYurt v0.5.0. Please check the CHANGELOG for details.
First Release: May 29th, 2020. OpenYurt v0.1.0-beta.1

OpenYurt is built on upstream Kubernetes and is now hosted by the Cloud Native Computing Foundation (CNCF) as a Sandbox Level Project.

OpenYurt has been designed to meet various DevOps requirements of typical edge infrastructures. It provides the same user experience for managing edge applications as if they were running in the cloud. It addresses specific challenges of cloud-edge orchestration in Kubernetes, such as unreliable or disconnected cloud-edge networking, edge node autonomy, edge device management, region-aware deployment and so on. OpenYurt preserves intact Kubernetes API compatibility, is vendor agnostic, and, more importantly, is SIMPLE to use.

Architecture

OpenYurt follows a classic cloud-edge architecture design. It uses a centralized Kubernetes control plane residing in the cloud site to manage multiple edge nodes residing in the edge sites. Each edge node has moderate compute resources available in order to run edge applications plus the required OpenYurt components. The edge nodes in a cluster can span multiple physical regions, which are referred to as Pools in OpenYurt.


The above figure demonstrates the core OpenYurt architecture. The major components consist of:

  • YurtHub: A node daemon that serves as a proxy for the outbound traffic from typical Kubernetes node daemons such as Kubelet, Kubeproxy, CNI plugins and so on. It caches the states of all the API resources that they might access in the edge node's local storage. In case the edge node is disconnected from the cloud, YurtHub can recover the states when the node restarts.
  • Yurt controller manager: It supplements the upstream node controller to support edge computing requirements. For example, Pods on nodes that are in autonomy mode will not be evicted even if the node heartbeats are missing.
  • Yurt app manager: It manages two CRD resources introduced in OpenYurt: NodePool and YurtAppSet (previously UnitedDeployment). The former provides convenient management of a pool of nodes within the same region or site. The latter defines a pool-based application management workload; see the sketch after this list.
  • Yurt tunnel (server/agent): TunnelServer connects with the TunnelAgent daemon running on each edge node via a reverse proxy to establish secure network access between the cloud site control plane and the edge nodes that are connected to the intranet.
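As a minimal sketch of how a NodePool might be declared and a node added to it (the pool name hangzhou, the node name node3, the API version and the label key below are assumptions based on yurt-app-manager; check its documentation for the exact schema of your version):

kubectl apply -f - <<EOF
apiVersion: apps.openyurt.io/v1alpha1   # assumed API group/version from yurt-app-manager
kind: NodePool
metadata:
  name: hangzhou                        # placeholder pool name
spec:
  type: Edge                            # an edge node pool
EOF

# Add an existing node to the pool by labeling it (label key is an assumption).
kubectl label node node3 apps.openyurt.io/desired-nodepool=hangzhou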

In addition, OpenYurt also includes auxiliary controllers for integration and customization purposes.

  • Node resource manager: It manages additional edge node resources such as LVM, QuotaPath and Persistent Memory. Please refer to the node-resource-manager repo for more details.
  • Integration with the EdgeX Foundry platform, using Kubernetes CRDs to manage edge devices!

OpenYurt introduces Yurt-edgex-manager to manage the lifecycle of the EdgeX Foundry software suite, and Yurt-device-controller to manage edge devices hosted by EdgeX Foundry via Kubernetes custom resources. Please refer to the respective repos for more details.

Prerequisites

Please check the resource and system requirements before installing OpenYurt.

Getting started

OpenYurt supports Kubernetes versions up to 1.20. Using higher Kubernetes versions may cause compatibility issues.

You can set up the OpenYurt cluster manually, but we recommend starting OpenYurt with the yurtctl CLI tool. To quickly build and install yurtctl, assuming the build system has golang 1.13+ and bash installed, you can simply do the following:

git clone https://github.com/openyurtio/openyurt.git
cd openyurt
make build WHAT=cmd/yurtctl

The yurtctl binary can be found at _output/bin. The commonly used CLI commands include:

yurtctl convert --provider [minikube|kubeadm|kind]  // To convert an existing Kubernetes cluster to an OpenYurt cluster
yurtctl revert                                      // To uninstall and revert back to the original cluster settings
yurtctl join                                        // To allow a new node to join OpenYurt
yurtctl reset                                       // To revert changes to the node made by the join command
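
For example, to convert an existing kubeadm-based cluster whose control-plane node is named master (the node name is a placeholder for your environment):

yurtctl convert --provider kubeadm --cloud-nodes master   # mark "master" as the cloud node
yurtctl revert                                            # roll back the changes if needed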

Please check the yurtctl tutorial for more details.

Tutorials

To experience the power of OpenYurt, please try the detailed tutorials.

Roadmap

Community

Contributing

If you are willing to contribute to the OpenYurt project, please refer to our CONTRIBUTING document for details. We have also prepared a developer guide to help code contributors.

Meeting

| Item | Value |
| --- | --- |
| APAC Friendly Community meeting | Bi-weekly APAC (starting Sep 2, 2020), Wednesday 11:00 AM GMT+8 |
| Meeting link (APAC Friendly meeting) | https://us02web.zoom.us/j/82828315928?pwd=SVVxek01T2Z0SVYraktCcDV4RmZlUT09 |
| Meeting notes | Notes and agenda |
| Meeting recordings | OpenYurt bilibili Channel |

Contact

If you have any questions or want to contribute, you are welcome to communicate most things via GitHub issues or pull requests. Other active communication channels:

License

OpenYurt is under the Apache 2.0 license. See the LICENSE file for details. Certain implementations in OpenYurt rely on the existing code from Kubernetes and the credits go to the original Kubernetes authors.

Issues
  • [BUG] Cannot setup openyurt with `yurtctl convert --provider kind`

    What happened: Hello, I'd like to deploy the openyurt cluster with yurtctl and kind. It seems that yurtctl supports kind with the option --provider kind. However, when I used the following command, it resulted in an error.

    yurtctl convert -t --provider kind --cloud-nodes ${cloudnodes}
    
    F0921 08:08:07.173618   12871 convert.go:98] fail to complete the convert option: failed to read file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, open /etc/systemd/system/kubelet.service.d/10-kubeadm.conf: no such file or directory
    

    I read the code and found that when yurtctl starts, it reads 10-kubeadm.conf (at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by default) to get the pod manifest path. However, the file and directory do not exist when using kind. Maybe we should come up with a better way to solve it.

    What you expected to happen: We can use yurtctl to deploy openyurt with kind.

    How to reproduce it (as minimally and precisely as possible): Use yurtctl convert -t --provider kind --cloud-nodes ${cloudnodes} to deploy openyurt with kind.

    Environment:

    • OpenYurt version: commit: 797c43d
    • Kubernetes version (use kubectl version): 1.20
    kind/bug 
    opened by Congrool 30
  • Failed to start yurt controller Pod (evicted)

    Which jobs are failing:

    Which test(s) are failing:

    Since when has it been failing:

    Testgrid link:

    Reason for failure:

    Anything else we need to know:

    labels

    /kind failing-test

    kind/failing-test 
    opened by alanz2015 27
  • tunnel-server: server connection closed

    I have set up a k8s cluster with the master and a worker node on separate networks. I referenced this tutorial to set up the tunnel server and agent, but I can't access the pod on the edge node through yurt-tunnel. The logs from the tunnel-server:

    $ kubectl logs yurt-tunnel-server-74cfdd4bc7-7rrmr -n kube-system
    I1110 12:53:57.737387       1 cmd.go:143] server will accept yurttunnel-agent requests at: 192.168.1.101:10262, server will accept master https requests at: 192.168.1.101:10263server will accept master http request at: 192.168.1.101:10264
    W1110 12:53:57.737429       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
    I1110 12:53:57.968315       1 iptables.go:474] clear conntrack entries for ports ["10250" "10255"] and nodes ["192.168.1.101" "192.168.122.55" "127.0.0.1"]
    E1110 12:53:57.992841       1 iptables.go:491] clear conntrack for 192.168.1.101:10250 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
    E1110 12:53:58.011089       1 iptables.go:491] clear conntrack for 192.168.122.55:10250 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
    E1110 12:53:58.025873       1 iptables.go:491] clear conntrack for 127.0.0.1:10250 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
    E1110 12:53:58.035197       1 iptables.go:491] clear conntrack for 192.168.1.101:10255 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
    E1110 12:53:58.042357       1 iptables.go:491] clear conntrack for 192.168.122.55:10255 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
    E1110 12:53:58.048433       1 iptables.go:491] clear conntrack for 127.0.0.1:10255 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
    I1110 12:54:03.073595       1 csrapprover.go:52] starting the crsapprover
    I1110 12:54:03.209064       1 csrapprover.go:174] successfully approve yurttunnel csr(csr-sqfnw)
    I1110 12:54:08.070368       1 anpserver.go:101] start handling request from interceptor
    I1110 12:54:08.070787       1 anpserver.go:137] start handling https request from master at 192.168.1.101:10263
    I1110 12:54:08.070872       1 anpserver.go:151] start handling http request from master at 192.168.1.101:10264
    I1110 12:54:08.071365       1 anpserver.go:189] start handling connection from agents
    I1110 12:54:09.087254       1 server.go:418] Connect request from agent ubuntu-standard-pc-i440fx-piix-1996
    I1110 12:54:09.087319       1 backend_manager.go:99] register Backend &{0xc000158480} for agentID ubuntu-standard-pc-i440fx-piix-1996
    W1110 12:54:24.273510       1 server.go:451] stream read error: rpc error: code = Canceled desc = context canceled
    I1110 12:54:24.273532       1 backend_manager.go:119] remove Backend &{0xc000158480} for agentID ubuntu-standard-pc-i440fx-piix-1996
    I1110 12:54:24.273562       1 server.go:531] <<< Close backend &{0xc000158480} of agent ubuntu-standard-pc-i440fx-piix-1996
    I1110 12:54:37.682857       1 csrapprover.go:174] successfully approve yurttunnel csr(csr-6lcjl)
    I1110 12:54:42.969063       1 server.go:418] Connect request from agent ubuntu-standard-pc-i440fx-piix-1996
    I1110 12:54:42.969111       1 backend_manager.go:99] register Backend &{0xc000158180} for agentID ubuntu-standard-pc-i440fx-piix-1996
    

    Logging in to the edge node, the tunnel-agent container log indicates a "connection closed" error. Any idea how to solve this issue? Thanks.

    I1110 12:54:37.583915       1 cmd.go:106] neither --kube-config nor --apiserver-addr is set, will use /etc/kubernetes/kubelet.conf as the kubeconfig
    I1110 12:54:37.583964       1 cmd.go:110] create the clientset based on the kubeconfig(/etc/kubernetes/kubelet.conf).
    I1110 12:54:37.647689       1 cmd.go:135] yurttunnel-server address: 192.168.1.101:31302
    I1110 12:54:37.647990       1 anpagent.go:54] start serving grpc request redirected from yurttunel-server: 192.168.1.101:31302
    E1110 12:54:37.657318       1 clientset.go:155] rpc error: code = Unavailable desc = connection closed
    I1110 12:54:42.970218       1 stream.go:255] Connect to server 3656d4f5-746f-411f-b0eb-c1c69b1ff2c1
    I1110 12:54:42.970241       1 clientset.go:184] sync added client connecting to proxy server 3656d4f5-746f-411f-b0eb-c1c69b1ff2c1
    I1110 12:54:42.970266       1 client.go:122] Start serving for serverID 3656d4f5-746f-411f-b0eb-c1c69b1ff2c1
    
    opened by ttzeng 27
  • [Vote] About the naming of the networking project.

    Hi, dear community:

    We are going to develop a new networking project to enhance the networking capabilities of OpenYurt (proposal link: https://github.com/openyurtio/openyurt/pull/637).

    We are looking forward to your advice on the project naming. Here are some candidates:

    • lite-vpn
    • NetEdge
    • FusionNet
    • FiberLink
    • Cobweb
    • Pigeon
    • Magpie
    • Wormhole
    • Raven
    • Over-connector
    • Hypernet
    • Roaming
    • PodNet

    Please vote! Other name suggestions are very welcome too.

    opened by DrmagicE 26
  • [vote] the name of component that provide governance ability in nodepool.

    What would you like to be added: In the proposal, a new component (named pool-spirit) will be added in the edge NodePool. The new component mainly provides the following abilities:

    • To store metadata for the NodePool as a kv store
    • To provide a distributed lock for leader election.
    • To use the native Kubernetes API to provide the above two abilities.

    For these abilities, the name pool-spirit may confuse end users, so the OpenYurt community has decided to rename the new component. All candidate names are as follows:

    • pool-spirit (node pool spirit)
    • pool-coordinator (node pool coordinator)
    • pool-linker (node pool connector)
    • pool-fort (base camp)
    • sheepdog
    • yurt-minister (yurt host)
    • shepherd
    • hive
    • pool-harbor (node pool harbor)

    Please select your favourite name and reply to this issue.

    BTW: other names for the new component are also welcome.

    kind/feature 
    opened by rambohe-ch 25
  • After completing the test node autonomy, the edge node status still keep ready

    Situation description

    1. I installed the kubernetes cluster using kubeadm. The version of the cluster is 1.16. The cluster has a master and three nodes.

    2. After I finished installing open-yurt manually, I started trying to test whether the result of my installation was successful

    3. I used the Test node autonomy chapter in https://github.com/alibaba/openyurt/blob/master/docs/tutorial/yurtctl.md to test

    4. After I completed the actions in the Test node autonomy chapter, the edge node status still keeps Ready

    Operation steps

    1. I created a sample pod
    kubectl apply -f-<<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: bbox
    spec:
      nodeName: node3       
      containers:
      - image: busybox
        command:
        - top
        name: bbox
    EOF
    
    • node3 is the edge node. I chose the simplest way to schedule the sample pod to the edge node, although this method is not recommended in the kubernetes documentation
    1. I modified yurt-hub.yaml to make the value of --server-addr a non-existent IP and port
      - --server-addr=https://1.1.1.1:6448
      
    2. Then I used the curl -s http://127.0.0.1:10261 command to test and verify whether the edge node can work normally in offline mode. The result of the command is as expected:
      {
        "kind": "Status",
        "metadata": {
      
        },
        "status": "Failure",
        "message": "request( get : /) is not supported when cluster is unhealthy",
        "reason": "BadRequest",
        "code": 400
      }
      
    3. But node3 status still keeps Ready, and yurt-hub enters the Pending state:
      kubectl get nodes
      NAME     STATUS   ROLES    AGE   VERSION
      master   Ready    master   23h   v1.16.6
      node1    Ready    <none>   23h   v1.16.6
      node2    Ready    <none>   23h   v1.16.6
      node3    Ready    <none>   23h   v1.16.6
      
      # kubectl get pods -n kube-system | grep yurt
      yurt-controller-manager-59544577cc-t948z   1/1     Running   0          5h42m
      yurt-hub-node3                             0/1     Pending   0          5h32m
      

    Some configuration items and logs that may be used as reference

    1. Label information of each node
      [email protected]:~# kubectl describe nodes master | grep Labels
      Labels:             alibabacloud.com/is-edge-worker=false
      [email protected]:~# kubectl describe nodes node1 | grep Labels
      Labels:             alibabacloud.com/is-edge-worker=false
      [email protected]:~# kubectl describe nodes node2 | grep Labels
      Labels:             alibabacloud.com/is-edge-worker=false
      [email protected]:~# kubectl describe nodes node3 | grep Labels
      Labels:             alibabacloud.com/is-edge-worker=true
      
    2. Configuration of kube-controller-manager
          - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
          - --controllers=*,bootstrapsigner,tokencleaner,-nodelifecycle
          - --kubeconfig=/etc/kubernetes/controller-manager.conf
      
    3. /etc/kubernetes/manifests/yurthub.yml
      # cat yurthub.yml
      apiVersion: v1
      kind: Pod
      metadata:
        labels:
          k8s-app: yurt-hub
        name: yurt-hub
        namespace: kube-system
      spec:
        volumes:
        - name: pki
          hostPath:
            path: /etc/kubernetes/pki
            type: Directory
        - name: kubernetes
          hostPath:
            path: /etc/kubernetes
            type: Directory
        - name: pem-dir
          hostPath:
            path: /var/lib/kubelet/pki
            type: Directory
        containers:
        - name: yurt-hub
          image: openyurt/yurthub:latest
          imagePullPolicy: Always
          volumeMounts:
          - name: kubernetes
            mountPath: /etc/kubernetes
          - name: pki
            mountPath: /etc/kubernetes/pki
          - name: pem-dir
            mountPath: /var/lib/kubelet/pki
          command:
          - yurthub
          - --v=2
          - --server-addr=https://1.1.1.1:6448
          - --node-name=$(NODE_NAME)
          livenessProbe:
            httpGet:
              host: 127.0.0.1
              path: /v1/healthz
              port: 10261
            initialDelaySeconds: 300
            periodSeconds: 5
            failureThreshold: 3
          resources:
            requests:
              cpu: 150m
              memory: 150Mi
            limits:
              memory: 300Mi
          securityContext:
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
        hostNetwork: true
        priorityClassName: system-node-critical
        priority: 2000001000
      
    4. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
      # cat  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
      # Note: This dropin only works with kubeadm and kubelet v1.11+
      [Service]
      #Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/var/lib/openyurt/kubelet.conf"
      Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/var/lib/openyurt/kubelet.conf"
      Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
      # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
      EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
      # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
      # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
      EnvironmentFile=-/etc/default/kubelet
      ExecStart=
      ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
      
    5. /var/lib/openyurt/kubelet.conf
      # cat /var/lib/openyurt/kubelet.conf
      apiVersion: v1
      clusters:
      - cluster:
          server: http://127.0.0.1:10261
        name: default-cluster
      contexts:
      - context:
          cluster: default-cluster
          namespace: default
        name: default-context
      current-context: default-context
      kind: Config
      preferences: {}
      users:
      - name: default-auth
      
    6. Use kubectl describe to view yurt-hub pod information
      # kubectl describe pods yurt-hub-node3 -n kube-system
      Name:                 yurt-hub-node3
      Namespace:            kube-system
      Priority:             2000001000
      Priority Class Name:  system-node-critical
      Node:                 node3/
      Labels:               k8s-app=yurt-hub
      Annotations:          kubernetes.io/config.hash: 7be1318d63088969eafcd2fa5887f2ef
                            kubernetes.io/config.mirror: 7be1318d63088969eafcd2fa5887f2ef
                            kubernetes.io/config.seen: 2020-08-18T08:41:27.431580091Z
                            kubernetes.io/config.source: file
      Status:               Pending
      IP:
      IPs:                  <none>
      Containers:
        yurt-hub:
          Image:      openyurt/yurthub:latest
          Port:       <none>
          Host Port:  <none>
          Command:
            yurthub
            --v=2
            --server-addr=https://10.10.13.82:6448
            --node-name=$(NODE_NAME)
          Limits:
            memory:  300Mi
          Requests:
            cpu:     150m
            memory:  150Mi
          Liveness:  http-get http://127.0.0.1:10261/v1/healthz delay=300s timeout=1s period=5s #success=1 #failure=3
          Environment:
            NODE_NAME:   (v1:spec.nodeName)
          Mounts:
            /etc/kubernetes from kubernetes (rw)
            /etc/kubernetes/pki from pki (rw)
            /var/lib/kubelet/pki from pem-dir (rw)
      Volumes:
        pki:
          Type:          HostPath (bare host directory volume)
          Path:          /etc/kubernetes/pki
          HostPathType:  Directory
        kubernetes:
          Type:          HostPath (bare host directory volume)
          Path:          /etc/kubernetes
          HostPathType:  Directory
        pem-dir:
          Type:          HostPath (bare host directory volume)
          Path:          /var/lib/kubelet/pki
          HostPathType:  Directory
      QoS Class:         Burstable
      Node-Selectors:    <none>
      Tolerations:       :NoExecute
      Events:            <none>
      
    7. Use docker ps on the edge node to find the yurt-hub container and view its log (last 20 lines shown)
      # docker logs 0c89efbe949b --tail 20
      I0818 13:54:13.293068       1 health_checker.go:151] ping cluster healthz with result, Get https://1.1.1.1:6448/healthz: dial tcp 1.1.1.1:6448: connect: connection refused
      I0818 13:54:13.561262       1 util.go:177] kubelet get nodes: /api/v1/nodes/node3?resourceVersion=0&timeout=10s with status code 200, spent 331.836µs, left 10 requests in flight
      I0818 13:54:15.746576       1 util.go:177] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node3?timeout=10s with status code 200, spent 83.127µs, left 10 requests in flight
      I0818 13:54:15.828560       1 util.go:177] kubelet get pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3 with status code 200, spent 436.489µs, left 10 requests in flight
      I0818 13:54:15.829628       1 util.go:177] kubelet patch pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3/status with status code 200, spent 307.187µs, left 10 requests in flight
      I0818 13:54:17.831366       1 util.go:177] kubelet delete pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3 with status code 200, spent 147.492µs, left 10 requests in flight
      I0818 13:54:17.833762       1 util.go:177] kubelet create pods: /api/v1/namespaces/kube-system/pods with status code 201, spent 111.762µs, left 10 requests in flight
      I0818 13:54:22.273899       1 health_checker.go:151] ping cluster healthz with result, Get https://1.1.1.1:6448/healthz: dial tcp 1.1.1.1:6448: connect: connection refused
      I0818 13:54:23.486523       1 util.go:177] kubelet watch configmaps: /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=2161&timeout=7m54s&timeoutSeconds=474&watch=true with status code 200, spent 7m54.000780359s, left 9 requests in flight
      I0818 13:54:23.648871       1 util.go:177] kubelet get nodes: /api/v1/nodes/node3?resourceVersion=0&timeout=10s with status code 200, spent 266.182µs, left 10 requests in flight
      I0818 13:54:25.748497       1 util.go:177] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node3?timeout=10s with status code 200, spent 189.694µs, left 10 requests in flight
      I0818 13:54:25.830919       1 util.go:177] kubelet get pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3 with status code 200, spent 1.375535ms, left 10 requests in flight
      I0818 13:54:25.835015       1 util.go:177] kubelet patch pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3/status with status code 200, spent 1.363765ms, left 10 requests in flight
      I0818 13:54:33.733913       1 util.go:177] kubelet get nodes: /api/v1/nodes/node3?resourceVersion=0&timeout=10s with status code 200, spent 303.499µs, left 10 requests in flight
      I0818 13:54:34.261504       1 health_checker.go:151] ping cluster healthz with result, Get https://1.1.1.1:6448/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
      I0818 13:54:35.751002       1 util.go:177] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node3?timeout=10s with status code 200, spent 144.723µs, left 10 requests in flight
      I0818 13:54:35.830895       1 util.go:177] kubelet get pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3 with status code 200, spent 1.146812ms, left 10 requests in flight
      I0818 13:54:35.834366       1 util.go:177] kubelet patch pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3/status with status code 200, spent 744.857µs, left 10 requests in flight
      I0818 13:54:42.274049       1 health_checker.go:151] ping cluster healthz with result, Get https://1.1.1.1:6448/healthz: dial tcp 1.1.1.1:6448: connect: connection refused
      I0818 13:54:43.818381       1 util.go:177] kubelet get nodes: /api/v1/nodes/node3?resourceVersion=0&timeout=10s with status code 200, spent 248.672µs, left 10 requests in flight
      
    8. Use kubectl logs to view the logs of yurt-controller-manager (last 20 lines shown)
      # kubectl logs yurt-controller-manager-59544577cc-t948z -n kube-system --tail 20
      E0818 13:56:07.239721       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:10.560864       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:13.288544       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:16.726605       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:19.623694       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:23.572803       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:26.809117       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:29.021205       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:31.271086       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:34.083918       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:37.493386       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:40.222869       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:44.149011       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:47.699211       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:50.177053       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:52.553163       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:55.573328       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:58.677034       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:57:02.844152       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:57:05.044990       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      

    At last

    I very much hope that you can help me solve the problem or point out my mistakes. If there is any other information that needs to be provided, please let me know.

    opened by HirazawaUi 22
  • ci(master): ln hotfix to build in macOS

    What type of PR is this?

    Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line: /kind bug /kind documentation /kind enhancement /kind good-first-issue /kind feature /kind question /kind design /sig ai /sig iot /sig network /sig storage /sig storage

    /kind feature

    What this PR does / why we need it:

    Hotfix for building the Linux arch on macOS, where the 'ln' step does not behave as expected on this OS.

    Which issue(s) this PR fixes:

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?

    
    On macOS, executing `GOOS=linux GOARCH=amd64 make build WHAT=cmd/yurtctl` ends up symlinking the local dir into the linux dir after the build.
    So I fix the `HOST_PLATFORM` default value to be `$(go env GOOS)/$(go env GOARCH)`.
    
    

    other Note

    approved lgtm size/S kind/feature 
    opened by cuisongliu 21
  • Proposal: OpenYurt Convertor Operator for converting K8S to OpenYurt

    Signed-off-by: nunu [email protected]

    What type of PR is this?

    Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line: /kind bug /kind documentation /kind enhancement /kind good-first-issue /kind feature /kind question /kind design /sig ai /sig iot /sig network /sig storage /sig storage

    What this PR does / why we need it:

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?

    
    

    other Note

    approved lgtm size/L kind/feature 
    opened by gnunu 21
  • add `yurtctl test init` cmd to setup OpenYurt cluster with kind

    What type of PR is this?

    /kind feature

    What this PR does / why we need it:

    As mentioned in #736, we will deprecate yurtctl convert/revert, which is used by GitHub Actions. To run e2e tests in GitHub Actions, we still need a way to set up an OpenYurt cluster with kind. This PR provides it.

    Which issue(s) this PR fixes:

    Fixes #754

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?

    Currently, the yurtctl test command looks as follows:

    $ _output/bin/yurtctl test -h
    Tools for developers to test the OpenYurt Cluster
    
    Usage:
      yurtctl test [flags]
      yurtctl test [command]
    
    Available Commands:
      init        Using kind to setup OpenYurt cluster for test
    
    Flags:
      -h, --help   help for test
    
    $ _output/bin/yurtctl test init -h
    Using kind to setup OpenYurt cluster for test
    
    Usage:
      yurtctl test init [flags]
    
    Flags:
          --cloud-nodes string          Comma separated list of cloud nodes. The control-plane will always be cloud node.If no cloud node specified, the control-plane node will be the only one cloud node.
          --cluster-name string         The cluster name of the new-created kind cluster. (default "openyurt")
      -h, --help                        help for init
          --ignore-error                Igore error when using openyurt version that is not officially released.
          --kind-config-path string     Specify the path where the kind config file will be generated. (default "/tmp/kindconfig.yaml")
          --kube-config string          Path where the kubeconfig file of new cluster will be stored. The default is ${HOME}/.kube/config.
          --kubernetes-version string   The version of kubernetes that the openyurt cluster is based on. (default "v1.21")
          --node-num int                Specify the node number of the kind cluster. (default 2)
          --openyurt-version string     The version of openyurt components. (default "latest")
          --use-local-images            If set, local images stored by docker will be used first.
    

    So developers can set up an OpenYurt cluster with the yurtctl test init command.
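
    For instance, based on the flags shown in the help output above, a developer could bring up a local test cluster with something like:

    # Create a 3-node kind-based OpenYurt test cluster, preferring locally cached images.
    _output/bin/yurtctl test init --node-num 3 --cluster-name openyurt --use-local-images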

    CHANGELOG 2022.4.9: Revise the local_up_openyurt.sh to use yurtctl test init to set up the OpenYurt cluster.

    approved lgtm kind/feature size/XXL 
    opened by Congrool 19
  • [Discussion] Extend the capabilities of YurtTunnel to forward not only O&M and monitoring traffic from cloud to edge

    The YurtTunnel was originally designed to deal with the challenges of O&M and monitoring traffic in edge scenarios. In this context, YurtTunnel can:

    1. handle the traffic to kubelet (to port 10250) on edge nodes through the tunnel by default;
    2. and handle the traffic to hostNetwork pods and localhost on edge nodes with a little configuration.

    With these two fundamental features, users can execute kubectl exec/logs against edge nodes and scrape metrics from exporters on edge nodes, but this does not provide a convenient way to forward other kinds of traffic from cloud to edge.

    Although building a general-purpose proxy may not be the goal of Yurttunnel, I am wondering if we can extend the capabilities of Yurttunnel to forward not only O&M and monitoring traffic (see also https://github.com/openyurtio/openyurt/issues/522).

    For example, we can add more features to YurtTunnel to support these cases:

    1. the cloud components can use service DNS to request services on edge nodes;
    2. and use PodIP to request pods on edge nodes.

    Here are some rough designs:

    For case 1, we can maintain a special zone in CoreDNS using the file plugin (for example: .openyurt.tunnel); all domains belonging to the zone will have an A record that holds the yurttunnel server service IP. The DNS names in this zone will follow this schema: <service_name>.<namespace>.<nodepool_name>.openyurt.tunnel. The components on the cloud side can use this schema to request specific services in specific nodepools. With this method, users can manage their services in the service topology way provided by the data-filtering framework, which means they do not need to create an additional unique service per nodepool.

    For case 2, we can list/watch all pods that do not belong to the podCIDR of the cloud side, and then add them to the DNAT rules according to the yurt-tunnel-server-cfg configmap configuration.
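
    To make case 1 a bit more concrete, here is a rough, purely illustrative sketch (the zone contents, the example name nginx.default.hangzhou, and the tunnel-server service IP 10.96.0.100 are assumptions for the proposed schema, not an implemented feature):

    # Hypothetical zone data for the proposed .openyurt.tunnel schema, served by the CoreDNS "file" plugin.
    # The Corefile would then reference it with:  openyurt.tunnel { file /etc/coredns/openyurt.tunnel.db }
    cat > /etc/coredns/openyurt.tunnel.db <<'EOF'
    openyurt.tunnel.  3600  IN  SOA  ns.openyurt.tunnel. admin.openyurt.tunnel. 1 7200 3600 1209600 3600
    ; every name in this zone resolves to the yurt-tunnel-server service IP (placeholder value below)
    nginx.default.hangzhou.openyurt.tunnel.  3600  IN  A  10.96.0.100
    EOF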

    I am not sure whether we should do this and would like to hear your opinions. Discussion is welcome.

    /kind design

    kind/feature kind/design wontfix 
    opened by DrmagicE 19
  • [BUG] Edgenode Turns to NotReady, kubelet on It Restart Rapidly

    What happened:

    After executing _output/local/bin/linux/amd64/yurtctl convert --provider kubeadm --cloud-nodes master on the master node of an established two-node Kubernetes cluster, all pods are pulled and started as expected, but the status of the edge node turns to NotReady. Checking kubelet's log with journalctl -xeu kubelet on that device, we can see kubelet keeps restarting every few minutes.

    What you expected to happen:

    After conversion, the edge node rejoins the cluster and has its status Ready.

    How to reproduce it (as minimally and precisely as possible): (updated due to further investigation)

    • Setup a Kubernetes cluster (I did it with kubeadm)
    • Execute _output/local/bin/linux/amd64/yurtctl convert --provider kubeadm --cloud-nodes master on master node (called master)
    • Reset yurt with _output/local/bin/linux/amd64/yurtctl convert
    • Reset the cluster (with kubeadm reset)
    • Setup the Kubernetes cluster again with new credential
    • Execute yurtctl convert again
    • Issue happens

    Anything else we need to know?:

    • If I do the conversion manually following https://github.com/openyurtio/openyurt/blob/master/docs/tutorial/manually-setup.md, everything is fine.
    • /etc/kubernetes/manifests/yurt-hub.yaml created by openyurt/yurtctl-servant is different from what is created in this way: https://github.com/openyurtio/openyurt/blob/master/docs/tutorial/manually-setup.md#setup-yurthub
    • If I replace yurt-hub.yaml with the file used in a manual setup, the system recovers from the issue.

    Environment:

    • OpenYurt version: V0.4.0
    • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
    
    • OS (e.g: cat /etc/os-release):
      Cloud node:
    NAME="Ubuntu"
    VERSION="18.04.3 LTS (Bionic Beaver)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 18.04.3 LTS"
    VERSION_ID="18.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=bionic
    UBUNTU_CODENAME=bionic
    

    Edge node:

    NAME="Ubuntu"
    VERSION="20.04.1 LTS (Focal Fossa)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 20.04.1 LTS"
    VERSION_ID="20.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=focal
    UBUNTU_CODENAME=focal
    
    • Kernel (e.g. uname -a):
      Cloud node:
    Linux node-3 5.4.0-74-generic #83~18.04.1-Ubuntu SMP Tue May 11 16:01:00 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    

    Edge node:

    Linux ubuntu 5.4.0-1036-raspi #39-Ubuntu SMP PREEMPT Wed May 12 17:37:51 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
    
    • Install tools: kubeadm
    • Others:

    others https://github.com/openyurtio/openyurt/issues/360#issuecomment-865690544

    /kind bug

    kind/bug 
    opened by Windrow 19
  • [feature request] add grafana to help monitor openyurt itself

    What would you like to be added:

    add grafana page to monitor openyurt.

    Why is this needed:

    to help with monitoring OpenYurt components. I see some, like yurthub and yurttunnel, have exposed metrics already.

    It would be nice if we provided an out-of-the-box Grafana page.

    others /kind feature

    kind/feature 
    opened by adamzhoul 0
  • [feature request] use `kube-webhook-certgen` manage webhook certificate

    What would you like to be added:

    Use kube-webhook-certgen to generate the webhook certificate, like ingress-nginx (https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx/templates/admission-webhooks) or KubeVela.

    Why is this needed: yurt-app-manager and raven-controller-manager implement certificate generation by themselves, which is not easy to maintain. Detailed code is shown in the file webhook_controller.go.

    others /kind feature

    kind/feature 
    opened by huiwq1990 4
  • [BUG]yurt-hub container can't start when rejoin yurt cluster.

    What happened: The yurt-hub container reports the following error when rejoining the yurt cluster: iptables v1.8.7 (legacy): Couldn't load target `NOTRACK':No such file or directory.

    What you expected to happen: The yurt-hub container starts successfully.

    How to reproduce it (as minimally and precisely as possible): Rejoin the cluster and manually install Yurthub.

    Anything else we need to know?: The hardware is arm64.

    Environment:

    • OpenYurt version:
    • Kubernetes version (use kubectl version): v1.19.8
    • OS (e.g: cat /etc/os-release): Ubuntu 18.04.6 LTS (Bionic Beaver)
    • Kernel (e.g. uname -a): 4.9.140-tegra
    • Install tools:
    • Others:

    others

    /kind bug

    kind/bug kind/good-first-issue 
    opened by showeriszero 0
  • yurtadm init failed on sealer v0.8.5 cluster

    yurtadm version (screenshot)

    I have initialized the k8s cluster with sealer v0.8.5, then initializing OpenYurt with yurtadm failed: Error: The existing sealer version v0.8.5 is not supported, please clean it. Valid server versions are [v0.6.1].


    kind/bug 
    opened by jgmiao 3
  • [feature request]optimize `yurtadm join` command

    What would you like to be added: The code of kubeadm join is imported into OpenYurt by the yurtadm join command; how about invoking the kubeadm binary file instead of importing the code? If the kubeadm binary does not exist, we can install it online (a rough sketch of this idea follows the list below). Then we need to make sure that kubelet can access kube-apiserver through yurthub, and we do not need to restart kubelet and other pods.

    Why is this needed: Compared to importing code, using the kubeadm binary directly has the following advantages:

    1. It's very easy to follow K8s kubeadm version updates; there is no need to maintain the code of kubeadm join in OpenYurt.
    2. It reduces the complexity of yurtadm join.
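
    A minimal sketch of the "use the binary if present, otherwise fetch it" idea (the download URL pattern and the version below are assumptions for illustration, not yurtadm behavior):

    # Prefer an existing kubeadm binary; otherwise download one (hypothetical helper, not yurtadm code).
    if ! command -v kubeadm >/dev/null 2>&1; then
      KUBEADM_VERSION="v1.22.0"   # placeholder version
      curl -L -o /usr/local/bin/kubeadm "https://dl.k8s.io/release/${KUBEADM_VERSION}/bin/linux/amd64/kubeadm"
      chmod +x /usr/local/bin/kubeadm
    fi
    kubeadm version   # verify the binary works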

    others /kind feature

    help wanted kind/feature 
    opened by rambohe-ch 1
Releases(v0.7.0)
  • v0.7.0(May 30, 2022)

    This is an official release. Please check the CHANGELOG for a list of changes compared to previous release.

    Images:

    The official openyurt v0.7.0 images are hosted on Docker Hub and Alibaba Cloud, and the component images are listed as follows:

    openyurt/yurt-controller-manager:v0.7.0
    openyurt/yurt-tunnel-server:v0.7.0
    openyurt/yurt-tunnel-agent:v0.7.0
    openyurt/yurthub:v0.7.0
    openyurt/node-servant:v0.7.0
    openyurt/yurt-app-manager:v0.5.0
    

    If pulling images from Docker Hub times out, you can use the v0.7.0 images in the repo registry.cn-hangzhou.aliyuncs.com/openyurt/ hosted on Alibaba Cloud.

    Get Started

    There are several ways to install an OpenYurt cluster, and you can choose the one that matches your situation.

    | Methods | Instruction | Estimated time |
    | --- | --- | --- |
    | Try via OpenYurt experience center | OpenYurt experience center | < 1 minute |
    | Install a new Kubernetes cluster with all OpenYurt components from scratch | yurtadm init/join | < 5 minutes |
    | Install an OpenYurt cluster manually based on a Kubernetes cluster | manual | > 10 minutes |

    Source code(tar.gz)
    Source code(zip)
    openyurt-cni-0.8.7-0.x86_64.rpm(24.97 MB)
    yurtadm(32.12 MB)
  • v0.6.1(Feb 15, 2022)

    What's Changed

    • Fix: panic happened when x-tunnel-server-svc service type is lb by @rambohe-ch in #724
    • Fix: tunnel-server supports to proxy requests that access tunnel-server directly with specified destination by @rambohe-ch in #725
    • Fix: cache-agent for yurthub support '*' by @hhstu in #727
    • Fix: add NoArgs check for cmds by @Congrool in #728
    • Fix: not initialized sets.String cause panic by @DrmagicE in #733

    Images

    The official openyurt v0.6.1 images are hosted on Docker Hub and Alibaba Cloud, and the component images are listed as follows:

    openyurt/yurt-controller-manager:v0.6.1
    openyurt/yurt-tunnel-server:v0.6.1
    openyurt/yurt-tunnel-agent:v0.6.1
    openyurt/yurthub:v0.6.1
    openyurt/yurt-app-manager:v0.5.0
    

    If pulling images from Docker Hub times out, you can use the v0.6.1 images in the repo registry.cn-hangzhou.aliyuncs.com/openyurt/ hosted on Alibaba Cloud.

    Source code(tar.gz)
    Source code(zip)
    openyurt-cni-0.8.7-0.x86_64.rpm(24.97 MB)
    yurtctl(31.28 MB)
  • v0.6.0(Jan 12, 2022)

    This is an official release. Please check the CHANGELOG for a list of changes compared to previous release.

    Images:

    The official openyurt v0.6.0 images are hosted on Docker Hub and Alibaba Cloud, and the component images are listed as follows:

    openyurt/yurt-controller-manager:v0.6.0
    openyurt/yurt-tunnel-server:v0.6.0
    openyurt/yurt-tunnel-agent:v0.6.0
    openyurt/yurthub:v0.6.0
    openyurt/yurt-app-manager:v0.5.0
    

    If pulling images from Docker Hub times out, you can use the v0.6.0 images in the repo registry.cn-hangzhou.aliyuncs.com/openyurt/ hosted on Alibaba Cloud.

    Get Started

    There are several ways to install an OpenYurt cluster, and you can choose the one that matches your situation.

    | Situations | Installation | Link | Installation time |
    | --- | --- | --- | --- |
    | Only have edge worker nodes | OpenYurt Experience Center | https://openyurt.io/docs/next/installation/openyurt-experience-center/overview | < 1 min |
    | Install an OpenYurt cluster from scratch | yurtctl init/join | https://openyurt.io/docs/next/installation/yurtctl-init-join | < 5 min |
    | Convert a Kubernetes cluster to an OpenYurt cluster in a declarative way | yurtcluster-operator | https://openyurt.io/docs/next/installation/yurtcluster | < 5 min |
    | Convert a Kubernetes cluster to an OpenYurt cluster in an imperative way | yurtctl convert/revert | https://openyurt.io/docs/next/installation/yurtctl-convert-revert | < 5 min |
    | Convert a Kubernetes cluster to an OpenYurt cluster manually | - | https://openyurt.io/docs/next/installation/manually-setup | > 10 min |

    Source code(tar.gz)
    Source code(zip)
    openyurt-cni-0.8.7-0.x86_64.rpm(24.97 MB)
    yurtctl(31.28 MB)
  • v0.5.0(Sep 27, 2021)

    This is an official release. Please check the CHANGELOG for a list of changes compared to previous release.

    The official openyurt v0.5.0 images are hosted on Docker Hub and Alibaba Cloud.

    To convert a Kubernetes cluster using the v0.5.0 dockerhub images, use the following command:

    yurtctl convert --deploy-yurttunnel --cloud-nodes {node-name} --provider kubeadm\
     --yurt-controller-manager-image="openyurt/yurt-controller-manager:v0.5.0"\
     --yurt-tunnel-agent-image="openyurt/yurt-tunnel-agent:v0.5.0"\
     --yurt-tunnel-server-image="openyurt/yurt-tunnel-server:v0.5.0"\
     --yurtctl-servant-image="openyurt/yurtctl-servant:v0.5.0"\
     --yurthub-image="openyurt/yurthub:v0.5.0"
    

    If pulling images from Docker Hub times out, you can use the v0.5.0 images hosted on Alibaba Cloud with the following command:

    yurtctl convert --deploy-yurttunnel --cloud-nodes {node-name} --provider kubeadm\
     --yurt-controller-manager-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-controller-manager:v0.5.0"\
     --yurt-tunnel-agent-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-tunnel-agent:v0.5.0"\
     --yurt-tunnel-server-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-tunnel-server:v0.5.0"\
     --yurtctl-servant-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurtctl-servant:v0.5.0"\
     --yurthub-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurthub:v0.5.0"
    
    Source code(tar.gz)
    Source code(zip)
  • v0.4.1(Aug 6, 2021)

    This is an official release. Please check the CHANGELOG for a list of changes compared to previous release.

    The official openyurt v0.4.1 images are hosted on Docker Hub and Alibaba Cloud.

    To convert a Kubernetes cluster using the v0.4.1 dockerhub images, use the following command:

    yurtctl convert --deploy-yurttunnel --cloud-nodes {node-name} --provider kubeadm\
     --yurt-controller-manager-image="openyurt/yurt-controller-manager:v0.4.1"\
     --yurt-tunnel-agent-image="openyurt/yurt-tunnel-agent:v0.4.1"\
     --yurt-tunnel-server-image="openyurt/yurt-tunnel-server:v0.4.1"\
     --yurtctl-servant-image="openyurt/yurtctl-servant:v0.4.1"\
     --yurthub-image="openyurt/yurthub:v0.4.1"
    

    If pulling images from Docker Hub times out, use the v0.4.1 Alibaba Cloud images with the following command:

    yurtctl convert --deploy-yurttunnel --cloud-nodes {node-name} --provider kubeadm\
     --yurt-controller-manager-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-controller-manager:v0.4.1"\
     --yurt-tunnel-agent-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-tunnel-agent:v0.4.1"\
     --yurt-tunnel-server-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-tunnel-server:v0.4.1"\
     --yurtctl-servant-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurtctl-servant:v0.4.1"\
     --yurthub-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurthub:v0.4.1"
    
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(May 26, 2021)

    This is an official release. Please check the CHANGELOG for a list of changes compared to previous release.

    The official openyurt v0.4.0 images are hosted on Docker Hub and Alibaba Cloud.

    To convert a Kubernetes cluster using the v0.4.0 dockerhub images, use the following command:

    yurtctl convert --deploy-yurttunnel --cloud-nodes {node-name} --provider minikube\
     --yurt-controller-manager-image="openyurt/yurt-controller-manager:v0.4.0"\
     --yurt-tunnel-agent-image="openyurt/yurt-tunnel-agent:v0.4.0"\
     --yurt-tunnel-server-image="openyurt/yurt-tunnel-server:v0.4.0"\
     --yurtctl-servant-image="openyurt/yurtctl-servant:v0.4.0"\
     --yurthub-image="openyurt/yurthub:v0.4.0"
    

    If pulling images from Docker Hub times out, use the v0.4.0 Alibaba Cloud images with the following command:

    yurtctl convert --deploy-yurttunnel --cloud-nodes {node-name} --provider minikube\
     --yurt-controller-manager-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-controller-manager:v0.4.0"\
     --yurt-tunnel-agent-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-tunnel-agent:v0.4.0"\
     --yurt-tunnel-server-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-tunnel-server:v0.4.0"\
     --yurtctl-servant-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurtctl-servant:v0.4.0"\
     --yurthub-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurthub:v0.4.0"
    
    Source code(tar.gz)
    Source code(zip)
  • v0.3.0(May 26, 2021)

    This is an official release. Please check the CHANGELOG for a list of changes compared to previous release.

    The official openyurt v0.3.0 images are hosted on Docker Hub and Alibaba Cloud.

    To convert a Kubernetes cluster using the v0.3.0 dockerhub images, use the following command:

    yurtctl convert --deploy-yurttunnel --cloud-nodes {node-name} --provider minikube\
     --yurt-controller-manager-image="openyurt/yurt-controller-manager:v0.3.0"\
     --yurt-tunnel-agent-image="openyurt/yurt-tunnel-agent:v0.3.0"\
     --yurt-tunnel-server-image="openyurt/yurt-tunnel-server:v0.3.0"\
     --yurtctl-servant-image="openyurt/yurtctl-servant:v0.3.0"\
     --yurthub-image="openyurt/yurthub:v0.3.0"
    

    If pulling images from Docker Hub times out, use the v0.3.0 Alibaba Cloud images with the following command:

    yurtctl convert --deploy-yurttunnel --cloud-nodes {node-name} --provider minikube\
     --yurt-controller-manager-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-controller-manager:v0.3.0"\
     --yurt-tunnel-agent-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-tunnel-agent:v0.3.0"\
     --yurt-tunnel-server-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-tunnel-server:v0.3.0"\
     --yurtctl-servant-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurtctl-servant:v0.3.0"\
     --yurthub-image="registry.cn-hangzhou.aliyuncs.com/openyurt/yurthub:v0.3.0"
    
    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(May 26, 2021)

    This is an official release. Please check the CHANGELOG for a list of changes compared to previous release.

    The official openyurt v0.2.0 images are hosted on Docker Hub.

    To convert a Kubernetes cluster using the v0.2.0 images, use the following command:

    yurtctl convert --deploy-yurttunnel --cloud-nodes minikube --provider minikube\
     --yurt-controller-manager-image="openyurt/yurt-controller-manager:v0.2.0-amd64"\
     --yurt-tunnel-agent-image="openyurt/yurt-tunnel-agent:v0.2.0-amd64"\
     --yurt-tunnel-server-image="openyurt/yurt-tunnel-server:v0.2.0-amd64"\
     --yurtctl-servant-image="openyurt/yurtctl-servant:v0.2.0-amd64"\
     --yurthub-image="openyurt/yurthub:v0.2.0-amd64"
    
    Source code(tar.gz)
    Source code(zip)
  • v0.1.0-beta.1(May 26, 2021)
