vcluster - Create fully functional virtual Kubernetes clusters - Each cluster runs inside a Kubernetes namespace and can be started within seconds

Overview

Website · Quickstart · Documentation · Blog · Twitter · Slack


Join us on Slack!

vcluster - Virtual Clusters For Kubernetes

  • Lightweight & Low-Overhead - Based on k3s, bundled in a single pod and with super-low resource consumption
  • No Performance Degradation - Pods are scheduled in the underlying host cluster, so they take no performance hit at all while running
  • Reduced Overhead On Host Cluster - Split up large multi-tenant clusters into smaller vclusters to reduce complexity and increase scalability
  • Flexible & Easy Provisioning - Create via vcluster CLI, helm, kubectl, Argo, or any of your favorite tools (it is basically just a StatefulSet)
  • No Admin Privileges Required - If you can deploy a web app to a Kubernetes namespace, you will be able to deploy a vcluster as well
  • Single Namespace Encapsulation - Every vcluster and all of its workloads are inside a single namespace of the underlying host cluster
  • Easy Cleanup - Delete the host namespace and the vcluster plus all of its workloads will be gone immediately

Learn more on www.vcluster.com.


Architecture

vcluster Intro

vcluster Compatibility

Learn more in the documentation.


⭐️ Do you like vcluster? Support the project with a star ⭐️


Quick Start

To learn more about vcluster, open the full getting started guide.

1. Download vcluster CLI

Use one of the following commands to download the vcluster CLI binary from GitHub:

Mac (Intel/AMD)
curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-darwin-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
Mac (Silicon/ARM)
curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-darwin-arm64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
Linux (AMD)
curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-linux-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
Linux (ARM)
curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-linux-arm64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
Windows (PowerShell)
md -Force "$Env:APPDATA\vcluster"; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12';
Invoke-WebRequest -UseBasicParsing ((Invoke-WebRequest -URI "https://github.com/loft-sh/vcluster/releases/latest" -UseBasicParsing).Content -replace "(?ms).*`"([^`"]*vcluster-windows-amd64.exe)`".*","https://github.com/`$1") -o $Env:APPDATA\vcluster\vcluster.exe;
$env:Path += ";" + $Env:APPDATA + "\vcluster";
[Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User);

If you get an error that Windows cannot find vcluster after installing it, restart your computer so that the changes to the PATH variable are applied.


Alternatively, you can download the binary for your platform from the GitHub Releases page and add this binary to your PATH.
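
Once the binary is installed, a quick sanity check is to print the CLI version (the exact output depends on the release you downloaded):

# confirm the vcluster CLI is reachable on your PATH
vcluster --version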


2. Create a vcluster

vcluster create vcluster-1 -n host-namespace-1
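
To confirm the virtual cluster came up, you can check it from the host cluster side. A minimal check, assuming your current kubectl context points at the host cluster, might look like this:

# list the vclusters the CLI knows about
vcluster list

# the vcluster itself runs as a StatefulSet pod inside the host namespace
kubectl get pods -n host-namespace-1
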
Alternative A: Helm

Create file vcluster.yaml:

vcluster:
  image: rancher/k3s:v1.19.5-k3s2    
  extraArgs:
    - --service-cidr=10.96.0.0/12    
  baseArgs:
    - server
    - --write-kubeconfig=/k3s-config/kube-config.yaml
    - --data-dir=/data
    - --no-deploy=traefik,servicelb,metrics-server,local-storage
    - --disable-network-policy
    - --disable-agent
    - --disable-scheduler
    - --disable-cloud-controller
    - --flannel-backend=none
    - --kube-controller-manager-arg=controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle
storage:
  size: 5Gi

Deploy vcluster via helm:

helm upgrade --install vcluster-1 vcluster \
  --values vcluster.yaml \
  --repo https://charts.loft.sh \
  --namespace vcluster-1 \
  --repository-config=''
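
Once the release is installed, a quick (non-authoritative) way to verify the chart came up is to check the release and the resulting StatefulSet pod:

# confirm the helm release and the vcluster pod in the target namespace
helm status vcluster-1 -n vcluster-1
kubectl get statefulsets,pods -n vcluster-1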

Alternative B: kubectl

Create file vcluster.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: vcluster-1
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vcluster-1
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets", "services", "services/proxy", "pods", "pods/proxy", "pods/attach", "pods/portforward", "pods/exec", "pods/log", "events", "endpoints", "persistentvolumeclaims"]
    verbs: ["*"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["statefulsets"]
    verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vcluster-1
subjects:
  - kind: ServiceAccount
    name: vcluster-1
roleRef:
  kind: Role
  name: vcluster-1
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
  name: vcluster-1
spec:
  type: ClusterIP
  ports:
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
  selector:
    app: vcluster-1
---
apiVersion: v1
kind: Service
metadata:
  name: vcluster-1-headless
spec:
  ports:
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
  clusterIP: None
  selector:
    app: vcluster-1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vcluster-1
  labels:
    app: vcluster-1
spec:
  serviceName: vcluster-1-headless
  replicas: 1
  selector:
    matchLabels:
      app: vcluster-1
  template:
    metadata:
      labels:
        app: vcluster-1
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: vcluster-1
      containers:
      - image: rancher/k3s:v1.19.5-k3s2
        name: virtual-cluster
        command:
          - "/bin/k3s"
        args:
          - "server"
          - "--write-kubeconfig=/k3s-config/kube-config.yaml"
          - "--data-dir=/data"
          - "--disable=traefik,servicelb,metrics-server,local-storage"
          - "--disable-network-policy"
          - "--disable-agent"
          - "--disable-scheduler"
          - "--disable-cloud-controller"
          - "--flannel-backend=none"
          - "--kube-controller-manager-arg=controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle"  
          - "--service-cidr=10.96.0.0/12"  
        volumeMounts:
          - mountPath: /data
            name: data
      - name: syncer
        image: "loftsh/virtual-cluster:0.0.27"
        args:
          - --service-name=vcluster-1
          - --suffix=vcluster-1
          - --owning-statefulset=vcluster-1
          - --out-kube-config-secret=vcluster-1
        volumeMounts:
          - mountPath: /data
            name: data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi

Create vcluster using kubectl:

kubectl apply -f vcluster.yaml
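
Note that the manifests above do not set a namespace, so they land in whatever namespace your current context points to. If you want a dedicated host namespace (the name here is just illustrative), you could apply them like this instead:

# create the host namespace and apply all vcluster-1 resources into it
kubectl create namespace host-namespace-1
kubectl apply -n host-namespace-1 -f vcluster.yaml
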
Alternative C: Other

Get the Helm chart or Kubernetes manifest and use any tool you like to deploy a vcluster, e.g. Argo, Flux etc.

3. Use the vcluster

# Start port-forwarding to the vcluster service + set kube-config file
vcluster connect vcluster-1 -n host-namespace-1
export KUBECONFIG=./kubeconfig.yaml

# OR: Start port-forwarding and add kube-context to current kube-config file
vcluster connect vcluster-1 -n host-namespace-1 --update-current

# Run any kubectl, helm, etc. command in your vcluster
kubectl get namespace
kubectl get pods -n kube-system
kubectl create namespace demo-nginx
kubectl create deployment nginx-deployment -n demo-nginx --image=nginx
kubectl get pods -n demo-nginx
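
Because pods created inside the vcluster are synced down and scheduled on the host cluster, you can also observe them from the host side. The synced pods get rewritten names; the pattern below is inferred from examples elsewhere on this page and may differ between versions:

# from a shell whose kubeconfig points at the host cluster (not the vcluster)
kubectl get pods -n host-namespace-1
# expect names similar to: nginx-deployment-<hash>-x-demo-nginx-x-vcluster-1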

4. Cleanup

vcluster delete vcluster-1 -n host-namespace-1

Alternatively, you could also delete the host-namespace using kubectl.
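
For reference, deleting the namespace from the host side is a one-liner and removes the vcluster together with everything synced into it:

# deletes the vcluster and all of its workloads
kubectl delete namespace host-namespace-1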

Issues
  • DaemonSet Pods stay on node even if no other workload is present

    This might be intentional, so please close the issue if so and thanks for a wonderful project btw.

    What happened: When spinning up a multi-node cluster with e.g. minikube and deploying an nginx:latest deployment with a single replica, I only see the node where the workload is scheduled. I then scale the deployment to 10 replicas and I see more nodes, because the 10 workload Pods are spread out by the scheduler. I then scale back down to a single replica and, after some waiting, I see a single node again. This is all as expected.

    But when I do the same procedure with a DaemonSet deployed, my node count never goes back down, because the DaemonSet workloads keep running on those nodes even though no other relevant workloads are running there.

    Expected behavior: I would have expected the DaemonSet Pods to be terminated (after some time) on the nodes where no relevant workload is running, in order to minimize unnecessary vcluster workload pressure on the "mother ship" cluster.

    Test spec

    • Vcluster version 0.4.5 (latest available version at the time)
    • Minikube version v1.24.0
    • Kubectl version v1.22.4
    # 0) Spin up minikube test cluster
    minikube start --cpus 2 --memory 2048 --nodes=3 --cni=flannel
    
    # 1) Create vcluster
    vcluster create vcluster1 -n vcluster1 --create-namespace --disable-ingress-sync
    
    # 2) Connect to vcluster
    vcluster connect vcluster1 -n vcluster1
    
    # 3) Create deployment
    kubectl --kubeconfig ./kubeconfig.yaml create deployment workload --image=nginx:latest --replicas=1
    
    #  4) Scale deployment
    kubectl --kubeconfig ./kubeconfig.yaml scale deployment workload --replicas=10
    
    # 5) Get nodes
     kubectl --kubeconfig ./kubeconfig.yaml get nodes
    
    # 6) Scale down
    kubectl --kubeconfig ./kubeconfig.yaml scale deployment workload --replicas=1
    
    # 7) Get nodes (OBS: you have to wait for the count to go down)
     kubectl --kubeconfig ./kubeconfig.yaml get nodes
    
    # 8) Apply DaemonSet
    cat <<EOF | kubectl --kubeconfig ./kubeconfig.yaml create -f -
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      labels:
        app: nginx
      name: daemonset
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx:latest
            name: nginx
    EOF
    
    # Repeat from step 4 up to and including step 7
    
    area/syncer 
    opened by wcarlsen 17
  • Istio Injection issue on vcluster >= 0.5.x

    opened by rmathagiarun 15
  • Cannot create vcluster inside either k3d or kind on Linux

    vcluster pod fails with the following error:

    time="2021-12-24T17:56:38.949209082Z" level=fatal msg="failed to evacuate root cgroup: mkdir /sys/fs/cgroup/init: read-only file system"

    ❯ vcluster --version
    vcluster version 0.5.0-beta.0

    ❯ kind --version
    kind version 0.11.1

    ❯ k3d --version
    k3d version v5.2.2
    k3s version v1.21.7-k3s1 (default)

    area/k3s 
    opened by pgagarinov 15
  • k0s support beside k3s?

    Hi everyone,

    could you please also support k0s besides k3s? k0s covers many more use cases (bare-metal, cloud, IoT, edge, etc.) compared to k3s (IoT, edge). In addition, k0s is much less opinionated regarding networking, storage, ingress, etc., and its size is also small (187 MB). Finally, k0s is used for conventional staging & production clusters (bare-metal or cloud), which means that dev vclusters with k0s will be much closer to staging & production. So it would be great if you could support it. Please see the following link: https://k0sproject.io/

    Best regards, Thomas

    area/syncer area/k3s kind/feature 
    opened by ThomasLohmann 12
  • vcluster on k3s on WSL2

    Hello, I'm trying out vcluster. Any idea why my attempt to create a simple vcluster is failing here? I successfully installed k3s in WSL2 and now I'm trying to create my first vcluster inside it...

    k3s version v1.22.2+k3s2 (3f5774b4)
    go version go1.16.8


    This is the k3s log on WSL2:

    ╰─ I1023 18:55:27.145605 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:55:27.145634 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:55:27.145866 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 I1023 18:55:40.145573 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:55:40.145604 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:55:40.145915 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 W1023 18:55:46.540702 31070 sysinfo.go:203] Nodes topology is not available, providing CPU topology I1023 18:55:54.145219 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:55:54.145249 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:55:54.145498 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 E1023 18:55:57.670079 31070 network_policy_controller.go:252] Aborting sync. Failed to run iptables-restore: exit status 2 (iptables-restore v1.8.4 (legacy): Couldn't load matchlimit':No such file or directory

    Error occurred at line: 83 Try iptables-restore -h' or 'iptables-restore --help' for more information. ) *filter :INPUT ACCEPT [40874:12238794] - [0:0] :FORWARD DROP [0:0] - [0:0] :OUTPUT ACCEPT [41155:11345096] - [0:0] :DOCKER - [0:0] - [0:0] :DOCKER-ISOLATION-STAGE-1 - [0:0] - [0:0] :DOCKER-ISOLATION-STAGE-2 - [0:0] - [0:0] :DOCKER-USER - [0:0] - [0:0] :KUBE-EXTERNAL-SERVICES - [0:0] - [0:0] :KUBE-FIREWALL - [0:0] - [0:0] :KUBE-FORWARD - [0:0] - [0:0] :KUBE-KUBELET-CANARY - [0:0] - [0:0] :KUBE-NODEPORTS - [0:0] - [0:0] :KUBE-NWPLCY-DEFAULT - [0:0] - [0:0] :KUBE-PROXY-CANARY - [0:0] - [0:0] :KUBE-ROUTER-FORWARD - [0:0] - [0:0] :KUBE-ROUTER-INPUT - [0:0] - [0:0] :KUBE-ROUTER-OUTPUT - [0:0] - [0:0] :KUBE-SERVICES - [0:0] - [0:0] :KUBE-POD-FW-YTHDYMA2CBWLR2PW - [0:0] :KUBE-POD-FW-CEOFHLPKKYLD56IO - [0:0] :KUBE-POD-FW-NAOEZKKUB5NO4KBI - [0:0] :KUBE-POD-FW-4M52UXR2EWFBQ6QH - [0:0] :KUBE-POD-FW-K2TVNHK5E5ZQHCLK - [0:0] :KUBE-POD-FW-C7NCCGNSUR3CKZKN - [0:0] -A INPUT -m comment --comment "kube-router netpol - 4IA2OSFRMVNDXBVV" -j KUBE-ROUTER-INPUT -A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES -A INPUT -j KUBE-FIREWALL -A FORWARD -m comment --comment "kube-router netpol - TEMCG2JMHZYE7H7T" -j KUBE-ROUTER-FORWARD -A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES -A FORWARD -j DOCKER-USER -A FORWARD -j DOCKER-ISOLATION-STAGE-1 -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -o docker0 -j DOCKER -A FORWARD -i docker0 ! -o docker0 -j ACCEPT -A FORWARD -i docker0 -o docker0 -j ACCEPT -A FORWARD -o br-ba82458eff39 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -o br-ba82458eff39 -j DOCKER -A FORWARD -i br-ba82458eff39 ! -o br-ba82458eff39 -j ACCEPT -A FORWARD -i br-ba82458eff39 -o br-ba82458eff39 -j ACCEPT -A FORWARD -s 10.42.0.0/16 -j ACCEPT -A FORWARD -d 10.42.0.0/16 -j ACCEPT -A OUTPUT -m comment --comment "kube-router netpol - VEAAIY32XVBHCSCY" -j KUBE-ROUTER-OUTPUT -A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT -j KUBE-FIREWALL -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2 -A DOCKER-ISOLATION-STAGE-1 -i br-ba82458eff39 ! -o br-ba82458eff39 -j DOCKER-ISOLATION-STAGE-2 -A DOCKER-ISOLATION-STAGE-1 -j RETURN -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP -A DOCKER-ISOLATION-STAGE-2 -o br-ba82458eff39 -j DROP -A DOCKER-ISOLATION-STAGE-2 -j RETURN -A DOCKER-USER -j RETURN -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP -A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! 
--ctstate RELATED,ESTABLISHED,DNAT -j DROP -A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP -A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A KUBE-NWPLCY-DEFAULT -m comment --comment "rule to mark traffic matching a network policy" -j MARK --set-xmark 0x10000/0x10000 -A KUBE-ROUTER-FORWARD -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT -A KUBE-ROUTER-INPUT -d 10.43.0.0/16 -m comment --comment "allow traffic to cluster IP - M66LPN4N3KB5HTJR" -j RETURN -A KUBE-ROUTER-INPUT -p tcp -m comment --comment "allow LOCAL TCP traffic to node ports - LR7XO7NXDBGQJD2M" -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -j RETURN -A KUBE-ROUTER-INPUT -p udp -m comment --comment "allow LOCAL UDP traffic to node ports - 76UCBPIZNGJNWNUZ" -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -j RETURN -A KUBE-ROUTER-INPUT -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT -A KUBE-ROUTER-OUTPUT -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT -A KUBE-SERVICES -d 10.43.44.208/32 -p udp -m comment --comment "host-namespace-1/kube-dns-x-kube-system-x-vcluster-1:dns has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable -A KUBE-SERVICES -d 10.43.44.208/32 -p tcp -m comment --comment "host-namespace-1/kube-dns-x-kube-system-x-vcluster-1:dns-tcp has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable -A KUBE-SERVICES -d 10.43.44.208/32 -p tcp -m comment --comment "host-namespace-1/kube-dns-x-kube-system-x-vcluster-1:metrics has no endpoints" -m tcp --dport 9153 -j REJECT --reject-with icmp-port-unreachable -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -d 10.42.0.7 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -s 10.42.0.7 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.7 -j ACCEPT -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -d 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -d 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -d 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD 
name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -m comment --comment "rule to log dropped traffic POD name:svclb-traefik-k4whv namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -m comment --comment "rule to REJECT traffic destined for POD name:svclb-traefik-k4whv namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -d 10.42.0.11 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -s 10.42.0.11 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.11 -j ACCEPT -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -d 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -d 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -d 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j 
KUBE-POD-FW-CEOFHLPKKYLD56IO -A KUBE-POD-FW-CEOFHLPKKYLD56IO -m comment --comment "rule to log dropped traffic POD name:vcluster-1-0 namespace: host-namespace-1" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-CEOFHLPKKYLD56IO -m comment --comment "rule to REJECT traffic destined for POD name:vcluster-1-0 namespace: host-namespace-1" -m mark ! --mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-CEOFHLPKKYLD56IO -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-CEOFHLPKKYLD56IO -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -d 10.42.0.8 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -s 10.42.0.8 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.8 -j ACCEPT -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -d 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -d 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -d 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -m comment --comment "rule to log dropped traffic POD name:traefik-74dd4975f9-4fpmk namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -m comment --comment "rule to REJECT traffic destined for POD name:traefik-74dd4975f9-4fpmk namespace: kube-system" -m mark ! 
--mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -d 10.42.0.2 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -s 10.42.0.2 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.2 -j ACCEPT -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -d 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -d 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -d 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -m comment --comment "rule to log dropped traffic POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -m comment --comment "rule to REJECT traffic destined for POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system" -m mark ! 
--mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -d 10.42.0.3 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -s 10.42.0.3 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.3 -j ACCEPT -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -d 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -d 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -d 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -m comment --comment "rule to log dropped traffic POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -m comment --comment "rule to REJECT traffic destined for POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system" -m mark ! 
--mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -d 10.42.0.5 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -s 10.42.0.5 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.5 -j ACCEPT -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -d 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -d 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -d 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -m comment --comment "rule to log dropped traffic POD name:coredns-85cb69466-sfmfm namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -m comment --comment "rule to REJECT traffic destined for POD name:coredns-85cb69466-sfmfm namespace: kube-system" -m mark ! 
--mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 COMMIT I1023 18:56:09.144877 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:09.144909 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" ╭─    ~ ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 1 ✘    at 18:54:23   ╰─ I1023 18:56:20.145216 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:20.145252 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:56:20.145522 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 I1023 18:56:31.145003 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:31.145035 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:56:31.145307 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 I1023 18:56:42.145112 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:42.145144 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:56:42.145447 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer 
pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 Pr

    area/syncer area/k3s 
    opened by pmualaba 10
  • sign vCluster binaries/images with cosign

    Let's sign vCluster binaries with cosign, and make them verifiable for end users by providing clear text on the releases page about how they can do it. AFAIK, vCluster uses GitHub Actions to make a new release, so it is easy to integrate the signing process into the workflow because there is already a GitHub Action for installing cosign.

    But the question is: should we use keyless mode, or public/private key pairs, when signing binaries/images?

    • https://chainguard.dev/posts/2021-12-01-zero-friction-keyless-signing
    • https://github.com/sigstore/cosign-installer cc: @Dentrax

    To complete the issue, the following needs to be done:

    • [x] ~~sign container images~~
    • [x] ~~sign binaries, maybe sha256 file only~~
    kind/enhancement 
    opened by developer-guy 8
  • Deploy workload during vcluster creation

    It would be helpful for people building multi-tenancy on top of vcluster to be able to deploy a workload in the same step as the vcluster creation.

    My use case is that I would like to build an operator that deploys a vcluster and some workload on top of it. The problem that I'm facing is that I need to switch my kubeconfig from the service account to one that's provided mid-script. While this is likely not too hard if I shell out to work with the vcluster cli, I would much rather have a way to pass through rendered yaml to the vcluster helm chart.

    area/syncer kind/feature 
    opened by agracey 7
  • etcd: use pod IP instead of localhost on startup and liveness probes

    I'm not exactly sure why this is the case, but it looks like #212 completely broke etcd startup in my environment, where I use Calico + BGP without any overlay or NAT.

    This was the error:

    kubectl -n vcluster-test describe pod/test-etcd-0
      Normal   Started                 72s                kubelet                  Started container etcd
      Warning  Unhealthy               9s (x6 over 59s)   kubelet                  Startup probe failed: Get "http://127.0.0.1:2381/health": dial tcp 127.0.0.1:2381: connect: connection refused
    

    and if I only updated the startup probe, then the error was:

    kubectl -n vcluster-test describe pod/test-etcd-0
      Warning  Unhealthy               8s    kubelet                  Liveness probe failed: Get "http://127.0.0.1:2381/health": dial tcp 127.0.0.1:2381: connect: connection refused
    

    The solution looks to be not defining the host: value, so the pod IP gets used instead; see https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#http-probes

    opened by olljanat 7
  • Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden

    I'm not sure what kind of issues this causes, but I noticed this error in the syncer log while testing a scenario using k8s with --target-namespace (host cluster is v1.22.3+rke2r1):

    E1130 15:24:10.731530       1 webhook.go:127] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:serviceaccount:vcluster-test-k8s:vc-test" cannot create resource "tokenreview
    s" in API group "authentication.k8s.io" at the cluster scope
    E1130 15:24:10.731872       1 authentication.go:63] "Unable to authenticate the request" err="[[invalid bearer token, Token has been invalidated], tokenreviews.authentication.k8s.io is forbidden: User \"system:serviceaccount:vcluster-test-k8s:vc-test\" cannot create resource \"tokenreviews\" in API group \"authentication.k8s.io\" at the cluster scope]"
    

    I see that logic in the code at https://github.com/loft-sh/vcluster/blob/a76788b12e7349c58874dd49ce4907b6ec1fe86a/pkg/authentication/delegatingauthenticator/delegatingauthenticator.go#L43-L46, so I guess the cluster role system:auth-delegator should be bound to the service account?
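
    If that is the right fix, granting the delegation is typically a single ClusterRoleBinding. A minimal sketch, using the service account name from the error message above purely as an illustration:

    # bind the built-in system:auth-delegator cluster role to the vcluster service account
    kubectl create clusterrolebinding vc-test-auth-delegator \
      --clusterrole=system:auth-delegator \
      --serviceaccount=vcluster-test-k8s:vc-test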

    area/syncer 
    opened by olljanat 7
  • Set server URL in kubeconfig secret to hostname that would be used to access vcluster.

    At present, the --tls-san option can be used to add an alternate hostname to the generated certificate, matching the hostname that will be used when accessing the vcluster.

    If this option is used, then perhaps the kubeconfig secret, if generated, could also use that hostname rather than the default of localhost. That would make the kubeconfig secret usable immediately when mounted into a different deployment, without needing to copy it and run kubectl config set-cluster --server to fix up the server name.

    That said, the default actually includes a port of 8443, which is not how the vcluster is reached when using the Kubernetes service inside the cluster, so piggybacking on --tls-san may not be appropriate. In that case, maybe a separate option should be added specifically to set the server used in the generated kubeconfig secret, although this duplicates the hostname across two options.

    Either way, it would be nice if an option existed to set the server URL in the generated kubeconfig secret so that it works as-is, rather than needing to be copied and modified before it can be used.
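
    For context, the manual fix-up mentioned above usually looks something like the following sketch; the cluster entry name and the in-cluster service hostname are assumptions based on the manifests shown earlier on this page:

    # point the generated kubeconfig at the in-cluster service instead of localhost:8443
    # (the cluster entry name, "default" here, depends on the generated kubeconfig)
    kubectl config set-cluster default \
      --kubeconfig=./kubeconfig.yaml \
      --server=https://vcluster-1.host-namespace-1.svc:443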

    area/syncer kind/feature 
    opened by GrahamDumpleton 7
  • sts syncing ignores limits & requests

    What happened?

    When we sync a sts with (excerpt):

    resources:
      requests:
        memory: "1Gi"
        cpu: "200m"
      limits:
        memory: "2Gi"
        cpu: "1"

    vcluster fails to sync this to the host k8s. Our host k8s has strict enforcement for limits & requests and gives us the following error: Warning SyncError 8s (x13 over 28s) pod-syncer Error syncing to physical cluster: pods "XXXX-x-kayra-x-YYY" is forbidden: failed quota: compute-resources: must specify limits.cpu,limits.memory,requests.cpu,requests.memory

    When we try Deployments with the same requests/limits, vcluster properly syncs them to the host cluster. This only happens with sts; all other constructs (svc, pvc etc.) are properly synced.

    What did you expect to happen?

    Proper syncing of all requests & limits down to host cluster

    How can we reproduce it (as minimally and precisely as possible)?

    Create a sts, and specify requests & limits and see if they're synced to host cluster.
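
    A minimal reproduction, following the heredoc style used earlier on this page (image and names are illustrative):

    cat <<EOF | kubectl --kubeconfig ./kubeconfig.yaml create -f -
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: sts-limits-test
    spec:
      serviceName: sts-limits-test
      replicas: 1
      selector:
        matchLabels:
          app: sts-limits-test
      template:
        metadata:
          labels:
            app: sts-limits-test
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            resources:
              requests:
                memory: "1Gi"
                cpu: "200m"
              limits:
                memory: "2Gi"
                cpu: "1"
    EOF

    # then inspect the resources section of the synced pod on the host cluster
    kubectl get pods -n <host-namespace> -o yaml | grep -A8 'resources:'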

    Anything else we need to know?

    No response

    Host cluster Kubernetes version

    $ kubectl version
    1.16
    

    Host cluster Kubernetes distribution

    icp
    

    vcluster version

    $ vcluster --version
    vcluster version 0.5.3

    Vcluster Kubernetes distribution (k3s (default), k8s, k0s)

    k0s

    OS and Arch

    OS: Arch:
    
    good first issue kind/bug 
    opened by kotaner 6
  • feat: add isolation mode to eks

    What issue type does this pull request address? (keep at least one, remove the others) /kind enhancement /kind feature

    What does this pull request do? Add isolation mode to EKS distro.

    opened by pratikjagrut 0
  • test: add e2e test for rootless mode

    What issue type does this pull request address? (keep at least one, remove the others) /kind test

    What does this pull request do? Which issues does it resolve? (use resolves #<issue_number> if possible) resolves #503

    opened by pratikjagrut 0
  • Exemptions in isolated mode

    Is your feature request related to a problem?

    I want to deploy a manifest which violates the vCluster isolated mode admission requirements. This can be anything, for example mounting a certain directory on the host.

    This is possible with PodSecurity (see suggested solution). Sadly configuring this for the vCluster kube-apiserver doesn't have the desired effect as vCluster isolated mode handles this differently.

    Which solution do you suggest?

    PodSecurity allows specifying users (and more) in an AdmissionConfiguration manifest for the kube-apiserver; manifests deployed as those users are then not checked for any violations.

    For example:

    apiVersion: apiserver.config.k8s.io/v1
    kind: AdmissionConfiguration
    plugins:
      - name: PodSecurity
        configuration:
          apiVersion: pod-security.admission.config.k8s.io/v1beta1
          kind: PodSecurityConfiguration
          exemptions:
            usernames:
              - benchperson
    

    Have this work for the vCluster kube-apiserver too (or use a similar implementation working for the vCluster syncer).

    Which alternative solutions exist?

    None as far as I know. Unable to have violating manifests in an isolated vcluster.

    Additional context

    As discussed with Fabian Kramm and Oleg Matskiv on Slack

    kind/feature 
    opened by rhtenhove 1
  • Support actions and conditions annotations for ALB ingress controller

    Is your feature request related to a problem?

    The actions and conditions ALB annotations are not working as expected for Ingress resources that are synced to the host, where the AWS ALB ingress controller is installed. There are two problems here. Firstly, the annotation key suffix needs to be equal to the service name, which becomes different ("translated") when synced to the host. The annotations have the following format:

    alb.ingress.kubernetes.io/actions.${action-name}
    alb.ingress.kubernetes.io/conditions.${conditions-name}
    

    and as the docs state:

    The action-name in the annotation must match the serviceName in the Ingress rules ...
    The conditions-name in the annotation must match the serviceName in the Ingress rules ...
    

    The second problem is in the value of an action of the forward type, which references serviceName in .forwardConfig.targetGroups, e.g.:

    alb.ingress.kubernetes.io/actions.forward-multiple-tg: >
          {"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"service-1","servicePort":"http" ...
    

    the values of the serviceName fields should be translated the same way we translate the name of the Service that gets synced to the host.

    Which solution do you suggest?

    Implement detection of the annotations in question in the Ingress syncer. Implement the translation for the key/value of these annotations based on the description of the problem above.

    Which alternative solutions exist?

    No response

    Additional context

    No response

    kind/feature 
    opened by matskiv 0
  • Create docs page on how to upgrade vcluster to a newer kubernetes version

    Currently we lack a docs page on how a vcluster can be upgraded (or downgraded) to a different kubernetes version. This differs from distro to distro and we can refer to the k3s, k0s etc. docs, but for vcluster the upgrade process is usually much simpler than for a real kubernetes cluster, as the cluster is usually smaller and it's easier to just update the image version of the deployed vcluster distro.
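
    As a rough illustration of that "just update the image" idea (a sketch only, since the exact values key and a suitable image tag depend on the chart version and distro in use), bumping the k3s image for the helm-deployed example from the quick start might look like:

    # bump only the distro image of an existing vcluster release
    helm upgrade vcluster-1 vcluster \
      --repo https://charts.loft.sh \
      --namespace vcluster-1 \
      --reuse-values \
      --set vcluster.image=rancher/k3s:v1.22.5-k3s1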

    kind/enhancement area/documentation 
    opened by FabianKramm 0
  • Add init helm charts

    What issue type does this pull request address? (keep at least one, remove the others) /kind enhancement /kind feature

    What does this pull request do? Which issues does it resolve? (use resolves #<issue_number> if possible) Adds support to initialize a newly created vcluster with helm charts

    Please provide a short message that should be published in the vcluster release notes: Add capability where vcluster can be initialized with helm charts.

    What else do we need to know?

    opened by ishankhare07 1
Releases: v0.10.2
Owner: Loft Labs - Superpowers for your Kubernetes clusters