vcluster - Create fully functional virtual Kubernetes clusters - Each cluster runs inside a Kubernetes namespace and can be started within seconds

Overview


vcluster - Virtual Clusters For Kubernetes

  • Lightweight & Low-Overhead - Based on k3s, bundled in a single pod and with super-low resource consumption
  • No Performance Degradation - Pods are scheduled in the underlying host cluster, so they get no performance hit at all while running
  • Reduced Overhead On Host Cluster - Split up large multi-tenant clusters into smaller vclusters to reduce complexity and increase scalability
  • Flexible & Easy Provisioning - Create via vcluster CLI, helm, kubectl, Argo, or any of your favorite tools (it is basically just a StatefulSet)
  • No Admin Privileges Required - If you can deploy a web app to a Kubernetes namespace, you will be able to deploy a vcluster as well
  • Single Namespace Encapsulation - Every vcluster and all of its workloads are inside a single namespace of the underlying host cluster
  • Easy Cleanup - Delete the host namespace and the vcluster plus all of its workloads will be gone immediately

Learn more on www.vcluster.com.


Architecture

[Diagram: vcluster Intro]

[Diagram: vcluster Compatibility]

Learn more in the documentation.


⭐️ Do you like vcluster? Support the project with a star ⭐️


Quick Start

To learn more about vcluster, open the full getting started guide.

1. Download vcluster CLI

Use one of the following commands to download the vcluster CLI binary from GitHub:

Mac (Intel/AMD)
curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-darwin-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
Mac (Silicon/ARM)
curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-darwin-arm64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
Linux (AMD)
curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-linux-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
Linux (ARM)
curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-linux-arm64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
Windows (Powershell)
md -Force "$Env:APPDATA\vcluster"; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12';
Invoke-WebRequest -UseBasicParsing ((Invoke-WebRequest -URI "https://github.com/loft-sh/vcluster/releases/latest" -UseBasicParsing).Content -replace "(?ms).*`"([^`"]*vcluster-windows-amd64.exe)`".*","https://github.com/`$1") -o $Env:APPDATA\vcluster\vcluster.exe;
$env:Path += ";" + $Env:APPDATA + "\vcluster";
[Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User);

If you get an error that Windows cannot find vcluster after installing it, restart your computer so that the changes to the PATH variable are applied.


Alternatively, you can download the binary for your platform from the GitHub Releases page and add this binary to your PATH.
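For example, a manual install on Linux AMD64 could look like the following sketch (the release tag is a placeholder; substitute the release you actually downloaded, and note the asset names follow the same vcluster-<os>-<arch> pattern used by the commands above):

VERSION=v0.13.0   # placeholder: pick the actual release tag
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/${VERSION}/vcluster-linux-amd64"
chmod +x vcluster
sudo mv vcluster /usr/local/bin/vcluster
vcluster --version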


2. Create a vcluster

vcluster create vcluster-1 -n host-namespace-1
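Since a vcluster is essentially just a StatefulSet (see the feature list above), you can sanity-check the deployment with plain kubectl; for the command above you would expect something like the following (the sample output is illustrative; the pod name vcluster-1-0 and the two containers match the manifest in Alternative B):

kubectl get pods -n host-namespace-1
# NAME           READY   STATUS    RESTARTS   AGE
# vcluster-1-0   2/2     Running   0          1m
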
Alternative A: Helm

Create file vcluster.yaml:

vcluster:
  image: rancher/k3s:v1.19.5-k3s2    
  extraArgs:
    - --service-cidr=10.96.0.0/12    
  baseArgs:
    - server
    - --write-kubeconfig=/k3s-config/kube-config.yaml
    - --data-dir=/data
    - --no-deploy=traefik,servicelb,metrics-server,local-storage
    - --disable-network-policy
    - --disable-agent
    - --disable-scheduler
    - --disable-cloud-controller
    - --flannel-backend=none
    - --kube-controller-manager-arg=controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle
storage:
  size: 5Gi

Deploy vcluster via helm:

helm upgrade --install vcluster-1 vcluster \
  --values vcluster.yaml \
  --repo https://charts.loft.sh \
  --namespace vcluster-1 \
  --repository-config=''
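
Helm creates the vcluster StatefulSet in the target namespace; you can watch the pod come up before connecting:

kubectl get pods -n vcluster-1 -w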

Alternative B: kubectl

Create file vcluster.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: vcluster-1
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vcluster-1
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets", "services", "services/proxy", "pods", "pods/proxy", "pods/attach", "pods/portforward", "pods/exec", "pods/log", "events", "endpoints", "persistentvolumeclaims"]
    verbs: ["*"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["statefulsets"]
    verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vcluster-1
subjects:
  - kind: ServiceAccount
    name: vcluster-1
roleRef:
  kind: Role
  name: vcluster-1
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
  name: vcluster-1
spec:
  type: ClusterIP
  ports:
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
  selector:
    app: vcluster-1
---
apiVersion: v1
kind: Service
metadata:
  name: vcluster-1-headless
spec:
  ports:
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
  clusterIP: None
  selector:
    app: vcluster-1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vcluster-1
  labels:
    app: vcluster-1
spec:
  serviceName: vcluster-1-headless
  replicas: 1
  selector:
    matchLabels:
      app: vcluster-1
  template:
    metadata:
      labels:
        app: vcluster-1
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: vcluster-1
      containers:
      - image: rancher/k3s:v1.19.5-k3s2
        name: virtual-cluster
        command:
          - "/bin/k3s"
        args:
          - "server"
          - "--write-kubeconfig=/k3s-config/kube-config.yaml"
          - "--data-dir=/data"
          - "--disable=traefik,servicelb,metrics-server,local-storage"
          - "--disable-network-policy"
          - "--disable-agent"
          - "--disable-scheduler"
          - "--disable-cloud-controller"
          - "--flannel-backend=none"
          - "--kube-controller-manager-arg=controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle"  
          - "--service-cidr=10.96.0.0/12"  
        volumeMounts:
          - mountPath: /data
            name: data
      - name: syncer
        image: "loftsh/virtual-cluster:0.0.27"
        args:
          - --service-name=vcluster-1
          - --suffix=vcluster-1
          - --owning-statefulset=vcluster-1
          - --out-kube-config-secret=vcluster-1
        volumeMounts:
          - mountPath: /data
            name: data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi

Create vcluster using kubectl:

kubectl apply -f vcluster.yaml
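
Per the --out-kube-config-secret=vcluster-1 argument in the manifest above, the syncer writes the vcluster's kubeconfig into a secret once the cluster is up. A sketch for extracting it manually, assuming the kubeconfig is stored under the data key config (the key name is an assumption and is not shown in the manifest):

# replace <host-namespace> with the namespace you applied the manifest to
kubectl get secret vcluster-1 -n <host-namespace> -o jsonpath='{.data.config}' | base64 -d > kubeconfig.yaml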
Alternative C: Other

Get the Helm chart or Kubernetes manifest and use any tool you like for the deployment of a vcluster, e.g. Argo, Flux, etc.

3. Use the vcluster

# Start port-forwarding to the vcluster service + set kube-config file
vcluster connect vcluster-1 -n host-namespace-1
export KUBECONFIG=./kubeconfig.yaml

# OR: Start port-forwarding and add kube-context to current kube-config file
vcluster connect vcluster-1 -n host-namespace-1 --update-current

# Run any kubectl, helm, etc. command in your vcluster
kubectl get namespace
kubectl get pods -n kube-system
kubectl create namespace demo-nginx
kubectl create deployment nginx-deployment -n demo-nginx --image=nginx
kubectl get pods -n demo-nginx
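
Switching back to your host-cluster kubeconfig shows the other side of the sync: the workloads you just created appear in the host namespace under rewritten names (the exact pattern below is an assumption, extrapolated from the <name>-x-<namespace>-x-<vcluster> naming visible in the issue reports further down):

# in the HOST cluster context:
kubectl get pods -n host-namespace-1
# e.g. nginx-deployment-<hash>-x-demo-nginx-x-vcluster-1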

4. Cleanup

vcluster delete vcluster-1 -n host-namespace-1

Alternatively, you could also delete the host-namespace using kubectl.
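For example, per the "Easy Cleanup" note above, deleting the namespace removes the vcluster and all of its workloads:

kubectl delete namespace host-namespace-1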

Comments
  • DaemonSet Pods stays on node even if no other workload is present

    This might be intentional, so please close the issue if so and thanks for a wonderful project btw.

    What

    When spinning up a multi-node cluster with e.g. minikube and deploying an nginx:latest deployment with a single replica, I only see the node where the workload is scheduled. I then scale the deployment to 10 replicas and I see more nodes, because the 10 workload Pods are spread out by the scheduler. I then scale down to a single replica again and, after some waiting around, I see a single node. This is all as expected.

    But when I do the same procedure with a DaemonSet deployed, my node count never goes back down, because the DaemonSet workloads keep running on those nodes even though no other relevant workloads are running there.

    Expected behavior

    I would have expected the DaemonSet Pods to be terminated (after some time) on the nodes where no relevant workload is running, in order to minimize unnecessary vcluster workload pressure on the "mother ship" cluster.

    Test spec

    • Vcluster version 0.4.5 (latest available version at the time)
    • Minikube version v1.24.0
    • Kubectl version v1.22.4
    # 0) Spin up minikube test cluster
    minikube start --cpus 2 --memory 2048 --nodes=3 --cni=flannel
    
    # 1) Create vcluster
    vcluster create vcluster1 -n vcluster1 --create-namespace --disable-ingress-sync
    
    # 2) Connect to vcluster
    vcluster connect vcluster1 -n vcluster1
    
    # 3) Create deployment
    kubectl --kubeconfig ./kubeconfig.yaml create deployment workload --image=nginx:latest --replicas=1
    
    #  4) Scale deployment
    kubectl --kubeconfig ./kubeconfig.yaml scale deployment workload --replicas=10
    
    # 5) Get nodes
    kubectl --kubeconfig ./kubeconfig.yaml get nodes
    
    # 6) Scale down
    kubectl --kubeconfig ./kubeconfig.yaml scale deployment workload --replicas=1
    
    # 7) Get nodes (note: you have to wait for the count to go down)
    kubectl --kubeconfig ./kubeconfig.yaml get nodes
    
    # 8) Apply DaemonSet
    cat <<EOF | kubectl --kubeconfig ./kubeconfig.yaml create -f -
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      labels:
        app: nginx
      name: daemonset
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx:latest
            name: nginx
    EOF
    
    # Repeat from step 4 up to and including step 7
    
    area/syncer 
    opened by wcarlsen 17
  • Error: failed pre-install:timed out waiting for condition

    What happened?

    A new install of vcluster fails/times out. Please see the attached logs. We have a successful install on the same VM/RHEL 7 host, but something happened last month that is preventing us from creating a new vcluster.

    What did you expect to happen?

    Should have a new cluster created.

    How can we reproduce it (as minimally and precisely as possible)?

    Tried to create a new Vcluster but it times out.

    Anything else we need to know?

    vcluster-log.pdf ssg-log.pdf

    Host cluster Kubernetes version

    $ kubectl version
    # paste output here
    1.21.1

    Host cluster Kubernetes distribution

    1.21.x

    vcluster version

    $ vcluster --version
    # paste output here
    Vcluster 0.7.0 and 0.10.2

    Vcluster Kubernetes distribution (k3s (default), k8s, k0s)

    K8s

    OS and Arch

    OS: RHEL 7

    kind/bug 
    opened by skhota 16
  • feat(syncer): sync csi objects when scheduler enabled

    Signed-off-by: Rohan CJ [email protected]

    See description in #773

    What issue type does this pull request address? /kind bugfix /kind feature

    What does this pull request do? Which issues does it resolve? resolves #773

    Please provide a short message that should be published in the vcluster release notes Automatically syncs some storage related objects when scheduler is enabled.

    What else do we need to know?

    opened by rohantmp 15
  • Istio Injection issue on vcluster >= 0.5.x

    opened by rmathagiarun 15
  • Cannot create vcluster neither inside k3d nor inside kind on Linux

    vcluster pod fails with the following error:

    time="2021-12-24T17:56:38.949209082Z" level=fatal msg="failed to evacuate root cgroup: mkdir /sys/fs/cgroup/init: read-only file system"

    ❯ vcluster --version
    vcluster version 0.5.0-beta.0

    ❯ kind --version
    kind version 0.11.1

    ❯ k3d --version
    k3d version v5.2.2
    k3s version v1.21.7-k3s1 (default)

    area/k3s 
    opened by pgagarinov 15
  • vcluster does not start in limited RKE cluster

    I got a restricted namespace in our internal RKE cluster managed by Rancher. However, vcluster won't start up. I have no idea what the concrete reason is, given that the log contains a massive output.

    Things seem to start going wrong with this log entry: cluster_authentication_trust_controller.go:493] kube-system/extension-apiserver-authentication failed with : Internal error occurred: resource quota evaluation timed out

    But probably the attached log file will indicate the underlying reason better. vcluster1.log

    The syncer log is very short:

    I0629 13:25:32.393511       1 main.go:223] Using physical cluster at https://10.43.0.1:443
    I0629 13:25:32.575521       1 main.go:254] Can connect to virtual cluster with version v1.20.4+k3s1
    F0629 13:25:32.587987       1 main.go:138] register controllers: register secrets indices: no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"

    Any ideas?

    area/syncer needs information 
    opened by MShekow 14
  • k0s support beside k3s?

    Hi everyone,

    could you also support k0s besides k3s? k0s has many more use cases (bare-metal, cloud, IoT, edge, etc.) compared to k3s (IoT, edge). In addition, k0s is much less opinionated regarding networking, storage, ingress, etc., and its size is also small (187 MB). Finally, k0s is used for conventional staging & production clusters (bare-metal or cloud), which means that dev vclusters with k0s will be much closer to staging & production. So it would be great if you could support it. Please see the following link: https://k0sproject.io/

    Best regards, Thomas

    area/syncer area/k3s kind/feature 
    opened by ThomasLohmann 12
  • vcluster remains in pending after creation, then enters CrashLoopBackOff

    What happened?

    After creating a vcluster using vcluster create csh-vcluster-01 --debug, the setup process hangs here until the command times out:

    [email protected]:~# vcluster create csh-vcluster-01 --debug
    debug  Will use namespace vcluster-csh-vcluster-01 to create the vcluster...
    info   Creating namespace vcluster-csh-vcluster-01
    info   Create vcluster csh-vcluster-01...
    debug  execute command: helm upgrade csh-vcluster-01 https://charts.loft.sh/charts/vcluster-0.10.2.tgz --kubeconfig /tmp/3510279876 --namespace vcluster-csh-vcluster-01 --install --repository-config='' --values /tmp/2170583696
    done √ Successfully created virtual cluster csh-vcluster-01 in namespace vcluster-csh-vcluster-01
    info   Waiting for vcluster to come up...
    

    Error messages are not verbose enough for me to figure out what exactly causes this to hang. Once this process is either interrupted via keyboard interrupt or by letting it time out, the following is visible when the command vcluster list is run:

    [email protected]:~# vcluster list
    
     NAME              NAMESPACE                  STATUS    CONNECTED   CREATED                         AGE
     csh-vcluster-01   vcluster-csh-vcluster-01   Pending               2022-07-10 21:51:02 -0400 EDT   5m45s
    

    Attempts to connect to the vcluster demonstrate that the vcluster is similarly unresponsive:

    [email protected]:~# vcluster connect csh-vcluster-01 --debug
    info   Waiting for vcluster to come up...
    

    What did you expect to happen?

    The vcluster comes up and becomes active, as described in the documentation's getting started guide.

    How can we reproduce it (as minimally and precisely as possible)?

    Install vcluster as outlined in the documentation, then run vcluster create.

    Anything else we need to know?

    This cluster is being virtualized within proxmox. However, all other cluster functions are working as expected.
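
    A generic way to gather more detail than the CLI prints (a sketch, assuming the default StatefulSet layout from the Quick Start above; the syncer container name is taken from the k3s-based manifest and may differ for the k8s distro used here):

    kubectl describe pod csh-vcluster-01-0 -n vcluster-csh-vcluster-01
    kubectl logs csh-vcluster-01-0 -n vcluster-csh-vcluster-01 -c syncer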

    Host cluster Kubernetes version

    [email protected]:~# kubectl version
    WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
    Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:22:29Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
    Kustomize Version: v4.5.4
    Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:15:38Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
    

    Host cluster Kubernetes distribution

    k8s 1.24.2
    

    vcluster version

    [email protected]:~# vcluster --version
    vcluster version 0.10.2
    

    Vcluster Kubernetes distribution (k3s (default), k8s, k0s)

    k8s
    

    OS and Arch

    OS:  Debian GNU/Linux 11 (bullseye) 
    Arch: x86_64
    
    kind/bug 
    opened by viv-codes 10
  • Services syncer cannot be disabled

    What happened?

    I tried to disable the service syncer in favor of a custom syncer implemented as a vcluster plugin (inspired by the original syncer code). But regardless of the --sync flag, services remain synced, even when disabled. It also doesn't matter whether a custom plugin is loaded or not (as a sidecar container).

    What did you expect to happen?

    No service should be synchronized by default (if disabled), at least as long as no custom plugin is added.

    How can we reproduce it (as minimally and precisely as possible)?

    First of all, the tiny piece of extra.yaml configuration:

    syncer:
      extraArgs:
        - "--sync=-services"
    

    And a dummy service service.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: MyApp
      ports:
        - name: http
          protocol: TCP
          port: 80
          targetPort: 9376
    
    1. vcluster create test --namespace test -f extra.yaml
    2. vcluster connect test --namespace test &
    3. kubectl --kubeconfig ./kubeconfig.yaml apply -f service.yaml

    The physical cluster lists the service, even if sync is disabled:

    ...
    test           service/my-service-x-default-x-vc-test   ClusterIP      10.111.80.113    <none>        80/TCP
    ...
    

    Anything else we need to know?

    No response

    Host cluster Kubernetes version

    $ kubectl version
    Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:32:32Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
    

    Host cluster Kubernetes distribution

    Docker4Mac
    

    vcluster version

    $ vcluster --version
    vcluster version 0.6.0
    

    Vcluster Kubernetes distribution (k3s (default), k8s, k0s)

    k3s
    

    OS and Arch

    OS: MacOSX
    Arch: Intel
    
    kind/bug 
    opened by gruberro 10
  • vcluster on k3s on WSL2

    Hello, trying out vcluster. Any idea why my attempt to create a simple vcluster is failing here? I successfully installed k3s in WSL2 and now I'm trying to create my first vcluster inside it...

    k3s version v1.22.2+k3s2 (3f5774b4) go version go1.16.8

    This is the k3s log on WSL2:

    ╰─ I1023 18:55:27.145605 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:55:27.145634 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:55:27.145866 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 I1023 18:55:40.145573 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:55:40.145604 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:55:40.145915 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 W1023 18:55:46.540702 31070 sysinfo.go:203] Nodes topology is not available, providing CPU topology I1023 18:55:54.145219 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:55:54.145249 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:55:54.145498 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 E1023 18:55:57.670079 31070 network_policy_controller.go:252] Aborting sync. Failed to run iptables-restore: exit status 2 (iptables-restore v1.8.4 (legacy): Couldn't load matchlimit':No such file or directory

    Error occurred at line: 83 Try iptables-restore -h' or 'iptables-restore --help' for more information. ) *filter :INPUT ACCEPT [40874:12238794] - [0:0] :FORWARD DROP [0:0] - [0:0] :OUTPUT ACCEPT [41155:11345096] - [0:0] :DOCKER - [0:0] - [0:0] :DOCKER-ISOLATION-STAGE-1 - [0:0] - [0:0] :DOCKER-ISOLATION-STAGE-2 - [0:0] - [0:0] :DOCKER-USER - [0:0] - [0:0] :KUBE-EXTERNAL-SERVICES - [0:0] - [0:0] :KUBE-FIREWALL - [0:0] - [0:0] :KUBE-FORWARD - [0:0] - [0:0] :KUBE-KUBELET-CANARY - [0:0] - [0:0] :KUBE-NODEPORTS - [0:0] - [0:0] :KUBE-NWPLCY-DEFAULT - [0:0] - [0:0] :KUBE-PROXY-CANARY - [0:0] - [0:0] :KUBE-ROUTER-FORWARD - [0:0] - [0:0] :KUBE-ROUTER-INPUT - [0:0] - [0:0] :KUBE-ROUTER-OUTPUT - [0:0] - [0:0] :KUBE-SERVICES - [0:0] - [0:0] :KUBE-POD-FW-YTHDYMA2CBWLR2PW - [0:0] :KUBE-POD-FW-CEOFHLPKKYLD56IO - [0:0] :KUBE-POD-FW-NAOEZKKUB5NO4KBI - [0:0] :KUBE-POD-FW-4M52UXR2EWFBQ6QH - [0:0] :KUBE-POD-FW-K2TVNHK5E5ZQHCLK - [0:0] :KUBE-POD-FW-C7NCCGNSUR3CKZKN - [0:0] -A INPUT -m comment --comment "kube-router netpol - 4IA2OSFRMVNDXBVV" -j KUBE-ROUTER-INPUT -A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES -A INPUT -j KUBE-FIREWALL -A FORWARD -m comment --comment "kube-router netpol - TEMCG2JMHZYE7H7T" -j KUBE-ROUTER-FORWARD -A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES -A FORWARD -j DOCKER-USER -A FORWARD -j DOCKER-ISOLATION-STAGE-1 -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -o docker0 -j DOCKER -A FORWARD -i docker0 ! -o docker0 -j ACCEPT -A FORWARD -i docker0 -o docker0 -j ACCEPT -A FORWARD -o br-ba82458eff39 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -o br-ba82458eff39 -j DOCKER -A FORWARD -i br-ba82458eff39 ! -o br-ba82458eff39 -j ACCEPT -A FORWARD -i br-ba82458eff39 -o br-ba82458eff39 -j ACCEPT -A FORWARD -s 10.42.0.0/16 -j ACCEPT -A FORWARD -d 10.42.0.0/16 -j ACCEPT -A OUTPUT -m comment --comment "kube-router netpol - VEAAIY32XVBHCSCY" -j KUBE-ROUTER-OUTPUT -A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT -j KUBE-FIREWALL -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2 -A DOCKER-ISOLATION-STAGE-1 -i br-ba82458eff39 ! -o br-ba82458eff39 -j DOCKER-ISOLATION-STAGE-2 -A DOCKER-ISOLATION-STAGE-1 -j RETURN -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP -A DOCKER-ISOLATION-STAGE-2 -o br-ba82458eff39 -j DROP -A DOCKER-ISOLATION-STAGE-2 -j RETURN -A DOCKER-USER -j RETURN -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP -A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! 
--ctstate RELATED,ESTABLISHED,DNAT -j DROP -A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP -A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A KUBE-NWPLCY-DEFAULT -m comment --comment "rule to mark traffic matching a network policy" -j MARK --set-xmark 0x10000/0x10000 -A KUBE-ROUTER-FORWARD -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT -A KUBE-ROUTER-INPUT -d 10.43.0.0/16 -m comment --comment "allow traffic to cluster IP - M66LPN4N3KB5HTJR" -j RETURN -A KUBE-ROUTER-INPUT -p tcp -m comment --comment "allow LOCAL TCP traffic to node ports - LR7XO7NXDBGQJD2M" -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -j RETURN -A KUBE-ROUTER-INPUT -p udp -m comment --comment "allow LOCAL UDP traffic to node ports - 76UCBPIZNGJNWNUZ" -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -j RETURN -A KUBE-ROUTER-INPUT -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT -A KUBE-ROUTER-OUTPUT -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT -A KUBE-SERVICES -d 10.43.44.208/32 -p udp -m comment --comment "host-namespace-1/kube-dns-x-kube-system-x-vcluster-1:dns has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable -A KUBE-SERVICES -d 10.43.44.208/32 -p tcp -m comment --comment "host-namespace-1/kube-dns-x-kube-system-x-vcluster-1:dns-tcp has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable -A KUBE-SERVICES -d 10.43.44.208/32 -p tcp -m comment --comment "host-namespace-1/kube-dns-x-kube-system-x-vcluster-1:metrics has no endpoints" -m tcp --dport 9153 -j REJECT --reject-with icmp-port-unreachable -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -d 10.42.0.7 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -s 10.42.0.7 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.7 -j ACCEPT -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -d 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -d 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -d 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD 
name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -m comment --comment "rule to log dropped traffic POD name:svclb-traefik-k4whv namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -m comment --comment "rule to REJECT traffic destined for POD name:svclb-traefik-k4whv namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -d 10.42.0.11 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -s 10.42.0.11 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.11 -j ACCEPT -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -d 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -d 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -d 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j 
KUBE-POD-FW-CEOFHLPKKYLD56IO -A KUBE-POD-FW-CEOFHLPKKYLD56IO -m comment --comment "rule to log dropped traffic POD name:vcluster-1-0 namespace: host-namespace-1" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-CEOFHLPKKYLD56IO -m comment --comment "rule to REJECT traffic destined for POD name:vcluster-1-0 namespace: host-namespace-1" -m mark ! --mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-CEOFHLPKKYLD56IO -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-CEOFHLPKKYLD56IO -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -d 10.42.0.8 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -s 10.42.0.8 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.8 -j ACCEPT -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -d 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -d 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -d 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -m comment --comment "rule to log dropped traffic POD name:traefik-74dd4975f9-4fpmk namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -m comment --comment "rule to REJECT traffic destined for POD name:traefik-74dd4975f9-4fpmk namespace: kube-system" -m mark ! 
--mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -d 10.42.0.2 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -s 10.42.0.2 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.2 -j ACCEPT -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -d 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -d 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -d 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -m comment --comment "rule to log dropped traffic POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -m comment --comment "rule to REJECT traffic destined for POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system" -m mark ! 
--mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -d 10.42.0.3 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -s 10.42.0.3 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.3 -j ACCEPT -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -d 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -d 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -d 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -m comment --comment "rule to log dropped traffic POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -m comment --comment "rule to REJECT traffic destined for POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system" -m mark ! 
--mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -d 10.42.0.5 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -s 10.42.0.5 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.5 -j ACCEPT -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -d 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -d 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -d 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -m comment --comment "rule to log dropped traffic POD name:coredns-85cb69466-sfmfm namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -m comment --comment "rule to REJECT traffic destined for POD name:coredns-85cb69466-sfmfm namespace: kube-system" -m mark ! 
--mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 COMMIT I1023 18:56:09.144877 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:09.144909 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" ╭─    ~ ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 1 ✘    at 18:54:23   ╰─ I1023 18:56:20.145216 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:20.145252 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:56:20.145522 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 I1023 18:56:31.145003 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:31.145035 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:56:31.145307 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 I1023 18:56:42.145112 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:42.145144 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:56:42.145447 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer 
pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 Pr

    area/syncer area/k3s 
    opened by pmualaba 10
  • create PersistentVolume for PersistentVolumeClaim in case of cluster is not having dynamic volume provisioning support

    Hello, the vCluster CLI creates a PVC but does not create a PV; instead it waits for the PV to be created via dynamic volume provisioning (IMHO). But if the cluster only supports static provisioning, the vCluster Pods stay in the Pending state.

    It would be nice to also create the PV to avoid this kind of problem 😋

    area/cli kind/feature 
    opened by developer-guy 10
  • pdb default

    Is your feature request related to a problem?

    Not sure if this is a bug fix or a feature request.

    A user loaded up a vcluster, installed the Percona operator and spun up a DB cluster, then expected everything to work properly.

    But no PDBs were synced from the virtual cluster down to the underlying host cluster, so things would break during rolling upgrades of the host cluster's nodes.

    Which solution do you suggest?

    Defaulting pdb sync to on. Not sure there is a reason not to, and users using operators expect them to "just work" out of the box.
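
    Until such a default lands, a possible opt-in is sketched below, assuming PDBs follow the same --sync flag syntax shown in the services-syncer report above (the resource name poddisruptionbudgets is an assumption, not confirmed here; the create command mirrors the steps in that report):

    cat > extra.yaml <<EOF
    syncer:
      extraArgs:
        - "--sync=poddisruptionbudgets"   # assumed resource name
    EOF
    vcluster create my-vcluster -n my-namespace -f extra.yaml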

    Which alternative solutions exist?

    No response

    Additional context

    No response

    good first issue kind/enhancement help wanted 
    opened by kfox1111 2
  • Feature request: init manifests that would be applied after charts

    Is your feature request related to a problem?

    The "init manifests" are applied by the vcluster before applying charts. I can foresee use cases where this is desired, but also use cases where a manifest needs to be applied after the charts, or different manifests before and after. For example, an operator with CRDs is installed via a helm chart, and operands that use the CRDs can only be installed afterward.

    I created this issue to gather interest in adding such a feature. Please upvote and we will consider it for one of the future releases.

    Which solution do you suggest?

    Add an additional helm chart option to specify manifests that will be applied after helm charts.

    Which alternative solutions exist?

    Alternative - a helm chart can be created from a manifest and added as the last item in the list of init helm charts.

    Additional context

    No response

    kind/feature 
    opened by matskiv 0
  • Add Multiple Vcluster support for hostpath mapping daemonset

    Is your feature request related to a problem?

    Yes, currently when running the hostpath mapper it creates a daemonset with each vcluster install. This means if we are running multiple vclusters against the same set of physical nodes we are duplicating the number of daemonsets we are installing. For environments with high numbers of tenants or customers using vcluster, this creates overhead on the physical nodes that could be unnecessary.

    Which solution do you suggest?

    Suggestion is to modify the logic in the hostpath mapping code to support multiple vclusters, and allow for a single deployment on physical nodes. This allows you to deploy hostpath mapper once, and many vclusters without having to re-deploy.

    Which alternative solutions exist?

    Not that I am aware of.

    Additional context

    As per discussion with @ishankhare07 , this seems like a nice enhancement.

    kind/feature 
    opened by brandonrjacobs 2
  • Allow annotation of all objects created by the eks, k8s and k0s charts

    Signed-off-by: Ab-hishek [email protected]

    What issue type does this pull request address? /kind enhancement

    What does this pull request do? Which issues does it resolve? resolves #714

    Please provide a short message that should be published in the vcluster release notes Added helm value "global.annotations" to the eks, k8s and k0s charts that will add its annotations to all objects created in the chart. Annotations for specific objects take precedence over global ones.

    What else do we need to know?

    opened by Ab-hishek 0
  • Feature gated changes to PVC and storageclass behaviour in v1.25

    Is your feature request related to a problem?

    PVC.Spec.StorageClassName might be required to be upsynced.

    New functional annotation to investigate for StorageClass: https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3333-reconcile-default-storage-class.

    The changes are feature-gated behind RetroactiveDefaultStorageClass.

    Which solution do you suggest?

    I started on changes, but don't have bandwidth to verify and implement status syncing: https://github.com/rohantmp/vcluster/blob/4484fc5c6f7d89481c6fb631169eb653ac960ccb/pkg/controllers/resources/persistentvolumeclaims/translate.go#L135-L140

    If anyone picks this up, that could be a good starting point.

    I will get to this when other tasks allow.

    Which alternative solutions exist?

    No response

    Additional context

    This is documented behavior, controllers might come to rely on it.

    kind/feature 
    opened by rohantmp 1
  • Add Flag to clear node images from node status

    What issue type does this pull request address? /kind enhancement

    What does this pull request do? Which issues does it resolve? This PR adds the capability to clear node images from the physical nodes when syncing down to the virtual node.

    Please provide a short message that should be published in the vcluster release notes Enabling node-clear-image-status clears all status.images from the node when it is synced to the virtual cluster. Images can leak information about a node when running in a multi-tenant environment where customers share the nodes they run on. Enabling this flag helps prevent information leakage.
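
    A hedged usage sketch, assuming the flag is wired up as a syncer argument like the other flags in this README (the spelling node-clear-image-status is taken from the release note above; the exact value syntax is an assumption):

    cat > values.yaml <<EOF
    syncer:
      extraArgs:
        - "--node-clear-image-status=true"   # assumed flag form
    EOF
    vcluster create my-vcluster -n my-namespace -f values.yaml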

    opened by brandonrjacobs 2
Releases: v0.13.0
Owner
Loft Labs
Superpowers for your Kubernetes clusters
PolarDB Stack is a DBaaS implementation for PolarDB-for-Postgres; as an operator it creates and manages PolarDB/PostgreSQL clusters running in Kubernetes. It provides re-construction, failover switch-over, scale up/out, and high-availability capabilities for each cluster.

PolarDB Stack open-source edition lifecycle. 1. System overview: PolarDB is a cloud-native relational database developed in-house by Alibaba Cloud, built on a shared-storage architecture that separates compute from storage. The database moved from the traditional share-nothing architecture to a shared-storage one: from N copies of compute + N copies of storage to N copies of compute + 1 copy of storage.

null 23 Nov 8, 2022
A pod scaler golang app that can scale replicas either inside of cluster or out of the cluster

pod-scaler A simple pod scaler golang application that can scale replicas via manipulating the deployment Technologies The project has been created us

Mert Doğan 0 Oct 24, 2021
kubetnl tunnels TCP connections from within a Kubernetes cluster to a cluster-external endpoint, e.g. to your local machine. (the perfect complement to kubectl port-forward)

kubetnl kubetnl (kube tunnel) is a command line utility to tunnel TCP connections from within a Kubernetes cluster to a cluster-external endpoint, e.g. to you

null 4 Sep 18, 2022
kitex running in kubernetes cluster and discover each other in kubernetes Service way

Using kitex in kubernetes Kitex [kaɪt'eks] is a high-performance and strong-extensibility Golang RPC framework. This go module helps you to build mult

adolli 1 Feb 21, 2022
KinK is a helper CLI that facilitates to manage KinD clusters as Kubernetes pods. Designed to ease clusters up for fast testing with batteries included in mind.

kink A helper CLI that facilitates to manage KinD clusters as Kubernetes pods. Table of Contents kink (KinD in Kubernetes) Introduction How it works ?

Trendyol Open Source 361 Nov 15, 2022
cluster-api-state-metrics (CASM) is a service that listens to the Kubernetes API server and generates metrics about the state of custom resource objects related of Kubernetes Cluster API.

Overview cluster-api-state-metrics (CASM) is a service that listens to the Kubernetes API server and generates metrics about the state of custom resou

Daimler Group 61 Oct 27, 2022
Nebula Operator manages NebulaGraph clusters on Kubernetes and automates tasks related to operating a NebulaGraph cluster

Nebula Operator manages NebulaGraph clusters on Kubernetes and automates tasks related to operating a NebulaGraph cluster. It evolved from NebulaGraph Cloud Service, makes NebulaGraph a truly cloud-native database.

vesoft inc. 57 Nov 30, 2022
KNoC is a Kubernetes Virtual Kubelet that uses an HPC cluster as the container execution environment

Kubernetes Node on Cluster KNoC is a Virtual Kubelet Provider implementation that manages real pods and containers in a remote container runtime by su

Computer Architecture and VLSI Systems (CARV) Laboratory 7 Oct 26, 2022
Simple-go-api - This project deploys a simple go app inside an EKS Cluster

SimpleGoApp This project deploys a simple go app inside an EKS Cluster Prerequisi

null 0 Jan 19, 2022
Rotate is a tool for rotating out AWS Auto-Scaling Groups within a k8s cluster

k8s-r8 rotate is a tool for rotating out AWS Auto-Scaling Groups within a k8s cluster. It was developed to make upgrading AMIs as a one command experi

maikxchd 0 Mar 27, 2022
PolarDB-X Operator is a Kubernetes extension that aims to create and manage PolarDB-X cluster on Kubernetes.

GalaxyKube -- PolarDB-X Operator PolarDB-X Operator is a Kubernetes extension that aims to create and manage PolarDB-X cluster on Kubernetes. It follo

null 63 Nov 3, 2022
Command kube-tmux prints Kubernetes context and namespace to tmux status line.

kube-tmux Command kube-tmux prints Kubernetes context and namespace to tmux status line.

null 7 Sep 10, 2021
Good enough Kubernetes namespace visualization tool

Kubesurveyor Good enough Kubernetes namespace visualization tool. No provisioning to a cluster required, only Kubernetes API is scrapped. Installation

Peter Gasper 80 Aug 6, 2022
K8s-ingress-health-bot - A K8s Ingress Health Bot is a lightweight application to check the health of the ingress endpoints for a given kubernetes namespace.

k8s-ingress-health-bot A K8s Ingress Health Bot is a lightweight application to check the health of qualified ingress endpoints for a given kubernetes

Aaron Tam 0 Jan 2, 2022
Kubernetes Admission Controller Demo: Validating Webhook for Namespace lifecycle events

Kubernetes Admission Controller Based on How to build a Kubernetes Webhook | Admission controllers Local Kuberbetes cluster # create kubernetes cluste

Marco Lehmann 2 Feb 27, 2022
A Terraform module to manage cluster authentication (aws-auth) for an Elastic Kubernetes (EKS) cluster on AWS.

Archive Notice The terraform-aws-modules/eks/aws v.18.20.0 release has brought back support aws-auth configmap! For this reason, I highly encourage us

Aidan Melen 27 Sep 14, 2022
I'd like to share random apps in my spare time. Thus, I'm going to try learning some concepts of Go, and as much as I can I'll try to clarify each line.

go-samples I'd like to share random apps in the spare times. Thus, I'm going to try learning some concepts of Go and as much as I can I try to clarify

Mert Simsek 1 Mar 16, 2022
GoScanPlayers - Hypixel online player tracker. Runs as an executable and can notify a Discord Webhook

GoScanPlayers Hypixel online player tracker. Runs as an executable and can notif

null 2 Oct 16, 2022
Kubegres is a Kubernetes operator allowing to create a cluster of PostgreSql instances and manage databases replication, failover and backup.

Kubegres is a Kubernetes operator allowing to deploy a cluster of PostgreSql pods with data replication enabled out-of-the box. It brings simplicity w

Reactive Tech Ltd 1.1k Dec 2, 2022