Cluster API Provider for KubeVirt

Overview

Kubernetes Template Project

The Kubernetes Template Project is a template for starting new projects in the GitHub organizations owned by Kubernetes. All Kubernetes projects, at minimum, must have the following files:

  • a README.md outlining the project goals, sponsoring sig, and community contact information
  • an OWNERS with the project leads listed as approvers (docs on OWNERS files)
  • a CONTRIBUTING.md outlining how to contribute to the project
  • an unmodified copy of code-of-conduct.md from this repo, which outlines community behavior and the consequences of breaking the code of conduct
  • a LICENSE which must be Apache 2.0 for code projects, or Creative Commons 4.0 for documentation repositories, without any custom content
  • a SECURITY_CONTACTS with the contact points for the Product Security Team to reach out to for triaging and handling of incoming issues. They must agree to abide by the Embargo Policy and will be removed and replaced if they violate that agreement.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

Issues
  • Increase KubevirtMachine coverage

    This PR adds unit tests to the repo.

    Special notes for your reviewer:

    Extracted kubevirt.machine into an interface and replaced the direct call to NewMachine with a MachineFactory interface.
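    Roughly, that testing seam could look like the following Go sketch (names and signatures here are assumptions for illustration, not the repo's exact API):

    package kubevirt

    import "context"

    // MachineInterface is the interface extracted from the concrete machine type
    // so the reconciler can be unit-tested against a fake implementation.
    type MachineInterface interface {
        Exists() bool
        Create(ctx context.Context) error
        IsBootstrapped() bool
    }

    // MachineFactory replaces the direct NewMachine call, letting tests inject a
    // fake MachineInterface into the controller.
    type MachineFactory interface {
        NewMachine(ctx context.Context, namespace, name string) (MachineInterface, error)
    }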

    Release notes:

    cncf-cla: yes approved size/XL ok-to-test lgtm 
    opened by isaacdorfman 22
  • Added integration with the image-builder kubevirt image

    What this PR does / why we need it: I created a kubevirt target in my fork of the image-builder project: https://github.com/isaacdorfman/image-builder/tree/kubevirt-target

    This image is used for the machines in the cluster. I pushed the image I built in my image-builder fork to Quay and updated capk to use this image instead of the kubevirtci one; in this PR I also fixed the problems that this change caused. I also fixed a problem with the local container registry by changing the registry URL from localhost to 127.0.0.1, since the localhost form caused IPv6-related problems on some devices.

    Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #

    Special notes for your reviewer:

    Release notes:

    cncf-cla: yes approved size/S ok-to-test lgtm 
    opened by isaacdorfman 19
  • Do not wait for SSH keys if there is an external controller

    When a KubevirtCluster object is managed by an external controller (meaning not by our capk cluster controller), the SSH keys that our capk cluster controller generates in order to introspect VM guests will not be present. The machine controller needs to expect that these SSH keys won't be available when the cluster is externally managed, and still process the VM.
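    A minimal sketch of such a check, assuming the Cluster API "managed-by" annotation marks externally managed infra clusters (illustrative, not the controller's exact code):

    package example

    // managedByAnnotation is the Cluster API marker for infrastructure objects
    // reconciled by an external controller instead of capk.
    const managedByAnnotation = "cluster.x-k8s.io/managed-by"

    // externallyManaged reports whether the KubevirtCluster's annotations mark it
    // as externally managed; if so, the machine controller should skip waiting
    // for the capk-generated SSH key secret and still process the VM.
    func externallyManaged(annotations map[string]string) bool {
        _, ok := annotations[managedByAnnotation]
        return ok
    }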

    This PR also addresses timing issues where a worker node's provider ID doesn't always get set in certain situations when external clusters are in use.

    In addition to the logical fixes, unit tests and e2e functional tests have been added to exercise the external cluster behavior. I had to add a new workaround for our e2e test suite setup that sources cert-manager from the new cert-manager repo. This addressed a new bug in clusterctl that caused clusterctl init to fail.

    cncf-cla: yes approved size/L ok-to-test lgtm 
    opened by davidvossel 16
  • [Initial implementation migration] API definition

    What this PR does / why we need it: Migrate API definition and cluster definition YAML template

    Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): N/A

    Special notes for your reviewer: N/A

    Release notes:

    N/A
    
    cncf-cla: yes approved tide/merge-method-squash do-not-merge/hold size/XXL lgtm 
    opened by cchengleo 13
  • Clusterctl integration

    What this PR does / why we need it: In order to use capk with clusterctl (i.e. "clusterctl init", "clusterctl generate"), the GitHub repo must contain a release with manifests in the form specified by the clusterctl provider contract: https://cluster-api.sigs.k8s.io/clusterctl/provider-contract.html

    This PR adds a GitHub Action that creates such a release each time a PR is merged into the main branch.

    Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #

    Special notes for your reviewer:

    Release notes:

    cncf-cla: yes approved size/L ok-to-test lgtm 
    opened by isaacdorfman 11
  • Added new CLI named clusterkubevirtadm, to manage the infra cluster

    Added a new CLI tool to manage the infra cluster. For now, the tool supports three commands:

    1. clusterkubevirtadm create credentials: creates a namespace, ServiceAccount, Role and RoleBinding in the infra cluster. If the namespace already exists, the command returns an error.
    2. clusterkubevirtadm apply credentials: creates a namespace, ServiceAccount, Role and RoleBinding in the infra cluster. If the namespace already exists, the command updates the resources to the latest version.
    3. clusterkubevirtadm get credentials: generates a kubeconfig file for using the ServiceAccount created by the first command.

    For all three commands above, the cred and creds aliases can replace credentials.
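    As a rough illustration of the create/apply distinction, a client-go sketch (assumed behavior, not the CLI's actual code) might look like this:

    package example

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureNamespace mirrors the difference between the two commands: "create
    // credentials" fails when the namespace already exists, while "apply
    // credentials" tolerates it and goes on to update the RBAC resources.
    func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string, allowExisting bool) error {
        ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
        _, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
        switch {
        case err == nil:
            return nil
        case apierrors.IsAlreadyExists(err) && allowExisting:
            return nil // "apply credentials": reuse the namespace, update resources
        case apierrors.IsAlreadyExists(err):
            return fmt.Errorf("namespace %q already exists", name) // "create credentials"
        default:
            return err
        }
    }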

    To build the new tool, use:

    • make clusterkubevirtadm-linux - build the Linux binary
    • make clusterkubevirtadm-macos - build the macOS binary
    • make clusterkubevirtadm-win - build the Windows binary
    • make clusterkubevirtadm-all - all of the above, plus run the unit tests
    • make clusterkubevirtadm-test - run the unit tests

    Fixes #106

    Signed-off-by: Nahshon Unna-Tsameret [email protected]

    Release notes:

    Added new CLI named clusterkubevirtadm, to manage the infra cluster
    
    cncf-cla: yes approved size/XXL ok-to-test lgtm 
    opened by nunnatsa 11
  • update README.md

    What this PR does / why we need it:

    Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #

    Special notes for your reviewer:

    Release notes:

    cncf-cla: yes size/M needs-rebase do-not-merge/hold 
    opened by Thephenil 11
  • Cluster deletion race

    Fixes #65

    Overview

    A KubevirtMachine object should be able to successfully terminate and be removed regardless of whether the corresponding Cluster or KubevirtCluster object exists.

    Today, we have no strong guarantee that a corresponding KubevirtCluster will still exist when a KubevirtMachine is being torn down. If the kv cluster object disappears before the kv machine, the machine controller will block indefinitely waiting for the kv cluster object to be accessible before processing the kv machine's deletion.

    Steps to Reproduce

    Delete a KubevirtCluster's namespace and the kubevirt machines will block the namespace from being terminated indefinitely.

    Fix Explanation

    The KubevirtMachine objects depended on the KubevirtCluster in order to retrieve a reference to a secret that represents the kubeconfig used to create/delete VMs. If the KubevirtCluster disappears before the KubevirtMachine, then the kv machine cannot process the VM deletion accurately because it doesn't know which secret ref to use.

    To fix this, the KubevirtMachine object now stores the secret ref itself on its spec, in the KubevirtMachine.Spec.InfraClusterSecretRef field, similar to the KubevirtCluster. This value defaults to the cluster's secret ref when nil, which preserves existing behavior.

    By storing the secret ref on the kv machine itself, the machine can be torn down independently of whether or not the kv cluster still exists.
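    A minimal sketch of that fallback (simplified stand-in types; only the field name follows the PR text):

    package example

    import corev1 "k8s.io/api/core/v1"

    type kubevirtMachineSpec struct {
        InfraClusterSecretRef *corev1.ObjectReference
    }

    type kubevirtClusterSpec struct {
        InfraClusterSecretRef *corev1.ObjectReference
    }

    // infraClusterSecretRef prefers the ref recorded on the machine and falls
    // back to the cluster's ref, preserving the existing behavior when nil.
    func infraClusterSecretRef(machine kubevirtMachineSpec, cluster *kubevirtClusterSpec) *corev1.ObjectReference {
        if machine.InfraClusterSecretRef != nil {
            return machine.InfraClusterSecretRef
        }
        if cluster != nil {
            return cluster.InfraClusterSecretRef
        }
        // The KubevirtCluster is already gone and no ref was recorded: deletion
        // must proceed without it rather than blocking forever.
        return nil
    }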

    Alternate solutions

    An alternate solution that was given consideration was to put a finalizer on the KubevirtCluster object and not allow it to be deleted until all corresponding KubevirtMachines are gone. The problem with this is KubevirtClusters that are externally managed outside of the capk controllers (#79). When a KubevirtCluster is externally managed, the capk controller won't reconcile it, which means we don't maintain control over a finalizer or its usage.

    Fixes race condition between kubevirtcluster and kubevirtmachine during tear down
    
    cncf-cla: yes approved size/L ok-to-test lgtm 
    opened by davidvossel 10
  • fix issue #74

    What this PR does / why we need it: This PR adds another label, cluster.x-k8s.io/cluster-name, to each VM and VMI in the workload cluster, because the existing label cluster.x-k8s.io/role=control-plane is not enough to distinguish between clusters. With the existing label combined with this new one, we can now identify each workload cluster's control-plane nodes in a service selector.
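    For illustration, a selector that combines both labels could be built like this (a sketch, not code from the PR):

    package example

    import "k8s.io/apimachinery/pkg/labels"

    // controlPlaneSelector matches only the control-plane VMs/VMIs of one
    // specific workload cluster, which the role label alone cannot do.
    func controlPlaneSelector(clusterName string) labels.Selector {
        return labels.SelectorFromSet(labels.Set{
            "cluster.x-k8s.io/role":         "control-plane",
            "cluster.x-k8s.io/cluster-name": clusterName,
        })
    }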

    Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): this PR resolves https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt/issues/74 (fixes #74).

    cncf-cla: yes approved size/XS ok-to-test lgtm 
    opened by wangxin311 10
  • Add the new --fail-with-job flag to let mkpj fail when the prowjob fails

    What this PR does / why we need it:

    https://github.com/kubernetes/test-infra/pull/24686 landed a while ago, so the new mkpj flag can be used now. The GitHub Action will now also fail when the prowjob fails, instead of just indicating when the prowjob is done.

    Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #

    Special notes for your reviewer:

    The change will only take effect after this PR is merged.

    Release notes:

    NONE
    
    cncf-cla: yes approved size/XS ok-to-test lgtm 
    opened by rmohr 10
  • Meeting Details

    Added the weekly meeting schedule to the office hours section.

    The meeting schedule was missing from the README.md: fixes #29

    Special notes for your reviewer: Please review

    Release notes:

    cncf-cla: yes size/XS 
    opened by aaseem 10
  • WIP: Graceful deletion of a VirtualMachineInstance

    When a host node is deleted, the guest VMIs are evicted, killing the tenant nodes running in those VMIs.

    This PR deletes the VMI gracefully by first draining the guest node. Only if the drain succeeds will CAPK delete the VMI.

    The PR is based on a feature in KubeVirt whereby, with the eviction strategy set to "external", KubeVirt will not delete the VMI but will set the vmi.Status.EvacuationNodeName field to signal to the external controller (CAPK in this case) that the VMI should be evicted.
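    A rough sketch of how an external controller reacts to that signal (simplified, with the drain step elided; not the PR's actual code):

    package example

    import (
        "context"

        kubevirtv1 "kubevirt.io/api/core/v1"
    )

    // handleEviction shows the intended flow: with evictionStrategy "external",
    // KubeVirt only marks the VMI via status.evacuationNodeName, and CAPK reacts.
    func handleEviction(ctx context.Context, vmi *kubevirtv1.VirtualMachineInstance) error {
        if vmi.Status.EvacuationNodeName == "" {
            return nil // the VMI is not being evacuated, nothing to do
        }
        // 1. Cordon and drain the tenant node backed by this VMI (not shown).
        // 2. Only after a successful drain, delete the VMI so the tenant node is
        //    removed gracefully instead of being killed with the host node.
        return nil
    }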

    Work in progress. TODO:

    • check whether the node can be drained [?]
    • add unit tests
    • add functional tests

    Known Issues

    • does not support external clusters (#100)

    Signed-off-by: Nahshon Unna-Tsameret [email protected]

    Release notes:

    Added graceful eviction of a VMI
    
    cncf-cla: yes approved do-not-merge/work-in-progress size/XL 
    opened by nunnatsa 1
  • Publish arm64 images for the capk-manager

    When downloading the current release and installing it (https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt/releases/download/v0.1.0-rc.0/infrastructure-components.yaml), the manager container goes into CrashLoopBackOff due to "exec /manager: exec format error" on an ARM64 cluster. It seems that only the amd64 image is available on Quay (quay.io/capk/capk-manager-amd64). It looks like there is already automation in place for this, but I can't find the resulting artifacts, and the manifest is set to pull the amd64 image. https://github.com/kubevirt/kubevirt/issues/3558

    https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt/blob/4c4743db82f5e5ed3a74a1e807a5db9d3b5e33d2/Makefile#L61 https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt/blob/4c4743db82f5e5ed3a74a1e807a5db9d3b5e33d2/Makefile#L201-L203

      Normal   Scheduled    57s                default-scheduler  Successfully assigned capk-system/capk-controller-manager-8b94f79db-gf94l to majora
      Normal   Pulled       52s                kubelet            Successfully pulled image "quay.io/capk/capk-manager-amd64:v0.1.0-rc.0" in 3.585999603s
      Normal   Pulling      51s                kubelet            Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0"
      Normal   Pulled       48s                kubelet            Successfully pulled image "gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0" in 3.028312891s
      Normal   Created      48s                kubelet            Created container kube-rbac-proxy
      Normal   Started      48s                kubelet            Started container kube-rbac-proxy
      Normal   Pulled       47s                kubelet            Successfully pulled image "quay.io/capk/capk-manager-amd64:v0.1.0-rc.0" in 447.556578ms
      Normal   Pulling      26s (x3 over 55s)  kubelet            Pulling image "quay.io/capk/capk-manager-amd64:v0.1.0-rc.0"
      Normal   Created      26s (x3 over 51s)  kubelet            Created container manager
      Normal   Pulled       26s                kubelet            Successfully pulled image "quay.io/capk/capk-manager-amd64:v0.1.0-rc.0" in 448.153154ms
      Normal   Started      25s (x3 over 51s)  kubelet            Started container manager
      Warning  BackOff      17s (x7 over 46s)  kubelet            Back-off restarting failed container
    
    opened by jbpratt 0
  • [e2e tests] Cover creating tenant clusters in external infra clusters in the integration tests

    What this PR does / why we need it: Add an integration test for tenant cluster creation on external infrastructure.

    Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #90

    Special notes for your reviewer: Please ensure test logic and structure is consistent with other e2e tests.

    Release notes:

    NONE
    
    cncf-cla: yes size/M approved ok-to-test 
    opened by agradouski 3
  • Machine::IsBootstrapped will wait indefinitely if "kubeadm init" fails

    The command run by cloud-init is "kubeadm init --config /run/kubeadm/kubeadm.yaml && echo success > /run/cluster-api/bootstrap-success.complete", and Machine::IsBootstrapped waits for the existence of /run/cluster-api/bootstrap-success.complete. If kubeadm init exits with an error, the cluster creation will be stuck.
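    For illustration only, a bounded wait would surface the failure instead of hanging forever (hypothetical helper, not the repo's code):

    package example

    import (
        "context"
        "time"
    )

    // waitBootstrapped polls for the bootstrap sentinel but gives up after a
    // deadline; if "kubeadm init" fails, the sentinel is never written and an
    // unbounded poll would otherwise never return.
    func waitBootstrapped(ctx context.Context, sentinelExists func() (bool, error)) error {
        ctx, cancel := context.WithTimeout(ctx, 20*time.Minute)
        defer cancel()
        ticker := time.NewTicker(10 * time.Second)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return ctx.Err() // bootstrap did not complete in time, likely kubeadm init failed
            case <-ticker.C:
                ok, err := sentinelExists()
                if err != nil {
                    continue // transient error (e.g. SSH), try again
                }
                if ok {
                    return nil
                }
            }
        }
    }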

    opened by isaacdorfman 1
  • CAPK depends on KubeVirt running on the management cluster

    Today, the CAPK provider will fail to initialize if the management cluster does not have the KubeVirt CRDs deployed.

    Here's an example (note that the CAPK provider is failing to initialize):

    NAMESPACE                           NAME                                                             READY   STATUS             RESTARTS   AGE
    capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-5c7c767585-mxfjv       1/1     Running            0          34m
    capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-76c769d87c-kjbdp   1/1     Running            0          34m
    capi-system                         capi-controller-manager-66fc7b7785-lznb9                         1/1     Running            0          34m
    capk-system                         capk-controller-manager-59d88f64fb-fk46z                         1/2     CrashLoopBackOff   7          34m
    

    CAPK manager log shows the following error:

    E0222 22:34:33.654495       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachine"}
    

    Full log from CAPK manager:

    I0222 22:34:10.170500       1 request.go:665] Waited for 1.041200068s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/coordination.k8s.io/v1?timeout=32s
    I0222 22:34:10.673334       1 logr.go:249] controller-runtime/metrics "msg"="Metrics server is starting to listen"  "addr"="127.0.0.1:8080"
    I0222 22:34:10.674078       1 logr.go:249] controller-runtime/builder "msg"="skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called"  "GVK"={"Group":"infrastructure.cluster.x-k8s.io","Version":"v1alpha1","Kind":"KubevirtMachineTemplate"}
    I0222 22:34:10.674140       1 logr.go:249] controller-runtime/builder "msg"="Registering a validating webhook"  "GVK"={"Group":"infrastructure.cluster.x-k8s.io","Version":"v1alpha1","Kind":"KubevirtMachineTemplate"} "path"="/validate-infrastructure-cluster-x-k8s-io-v1alpha1-kubevirtmachinetemplate"
    I0222 22:34:10.674259       1 server.go:146] controller-runtime/webhook "msg"="Registering webhook" "path"="/validate-infrastructure-cluster-x-k8s-io-v1alpha1-kubevirtmachinetemplate" 
    I0222 22:34:10.674373       1 logr.go:249] setup "msg"="starting manager"  
    I0222 22:34:10.674461       1 server.go:214] controller-runtime/webhook/webhooks "msg"="Starting webhook server"  
    I0222 22:34:10.674583       1 internal.go:362]  "msg"="Starting server" "addr"={"IP":"127.0.0.1","Port":8080,"Zone":""} "kind"="metrics" "path"="/metrics" 
    I0222 22:34:10.674611       1 leaderelection.go:248] attempting to acquire leader lease capk-system/controller-leader-election-capk...
    I0222 22:34:10.674700       1 internal.go:362]  "msg"="Starting server" "addr"={"IP":"::","Port":9440,"Zone":""} "kind"="health probe" 
    I0222 22:34:10.674734       1 logr.go:249] controller-runtime/certwatcher "msg"="Updated current TLS certificate"  
    I0222 22:34:10.674863       1 logr.go:249] controller-runtime/webhook "msg"="Serving webhook server"  "host"="" "port"=9443
    I0222 22:34:10.674929       1 logr.go:249] controller-runtime/certwatcher "msg"="Starting certificate watcher"  
    I0222 22:34:29.050812       1 leaderelection.go:258] successfully acquired lease capk-system/controller-leader-election-capk
    I0222 22:34:29.051055       1 controller.go:178] controller/kubevirtmachine "msg"="Starting EventSource" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtMachine" "source"="kind source: *v1alpha1.KubevirtMachine"
    I0222 22:34:29.051056       1 controller.go:178] controller/kubevirtcluster "msg"="Starting EventSource" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtCluster" "source"="kind source: *v1alpha1.KubevirtCluster"
    I0222 22:34:29.051080       1 controller.go:178] controller/kubevirtmachine "msg"="Starting EventSource" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtMachine" "source"="kind source: *v1beta1.Machine"
    I0222 22:34:29.051108       1 controller.go:178] controller/kubevirtcluster "msg"="Starting EventSource" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtCluster" "source"="kind source: *v1beta1.Cluster"
    I0222 22:34:29.051137       1 controller.go:186] controller/kubevirtcluster "msg"="Starting Controller" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtCluster" 
    I0222 22:34:29.051117       1 controller.go:178] controller/kubevirtmachine "msg"="Starting EventSource" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtMachine" "source"="kind source: *v1alpha1.KubevirtCluster"
    I0222 22:34:29.051195       1 controller.go:178] controller/kubevirtmachine "msg"="Starting EventSource" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtMachine" "source"="kind source: *v1.VirtualMachineInstance"
    I0222 22:34:29.051222       1 controller.go:178] controller/kubevirtmachine "msg"="Starting EventSource" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtMachine" "source"="kind source: *v1.VirtualMachine"
    I0222 22:34:29.051245       1 controller.go:178] controller/kubevirtmachine "msg"="Starting EventSource" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtMachine" "source"="kind source: *v1beta1.Cluster"
    I0222 22:34:29.051279       1 controller.go:186] controller/kubevirtmachine "msg"="Starting Controller" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtMachine" 
    I0222 22:34:30.102053       1 request.go:665] Waited for 1.04679704s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/cluster.x-k8s.io/v1alpha4?timeout=32s
    E0222 22:34:30.605112       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachineInstance\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachineInstance"}
    I0222 22:34:30.605140       1 controller.go:220] controller/kubevirtcluster "msg"="Starting workers" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtCluster" "worker count"=1
    E0222 22:34:33.654495       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachine"}
    I0222 22:34:41.656424       1 request.go:665] Waited for 1.046721061s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/storage.k8s.io/v1?timeout=32s
    E0222 22:34:42.158825       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachineInstance\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachineInstance"}
    E0222 22:34:45.208681       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachine"}
    I0222 22:34:51.706753       1 request.go:665] Waited for 1.09658386s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/clusterctl.cluster.x-k8s.io/v1alpha3?timeout=32s
    E0222 22:34:52.158035       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachineInstance\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachineInstance"}
    E0222 22:34:55.208798       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachine"}
    I0222 22:35:01.756577       1 request.go:665] Waited for 1.147322771s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/certificates.k8s.io/v1?timeout=32s
    E0222 22:35:02.158878       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachineInstance\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachineInstance"}
    E0222 22:35:05.208588       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachine"}
    I0222 22:35:11.757088       1 request.go:665] Waited for 1.1477616s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/networking.k8s.io/v1beta1?timeout=32s
    E0222 22:35:12.158267       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachineInstance\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachineInstance"}
    E0222 22:35:15.208665       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachine"}
    I0222 22:35:21.806808       1 request.go:665] Waited for 1.197688109s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s
    E0222 22:35:22.159396       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachineInstance\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachineInstance"}
    E0222 22:35:25.209345       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachine"}
    I0222 22:35:31.856282       1 request.go:665] Waited for 1.247766642s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/cluster.x-k8s.io/v1alpha4?timeout=32s
    E0222 22:35:32.158802       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachineInstance\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachineInstance"}
    E0222 22:35:35.208210       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachine"}
    I0222 22:35:41.906657       1 request.go:665] Waited for 1.297384124s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/bootstrap.cluster.x-k8s.io/v1beta1?timeout=32s
    E0222 22:35:42.158276       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachineInstance\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachineInstance"}
    E0222 22:35:45.208500       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachine"}
    I0222 22:35:51.907172       1 request.go:665] Waited for 1.297277575s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/authorization.k8s.io/v1?timeout=32s
    E0222 22:35:52.159635       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachineInstance\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachineInstance"}
    E0222 22:35:55.209096       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachine"}
    I0222 22:36:01.956369       1 request.go:665] Waited for 1.347632595s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/scheduling.k8s.io/v1beta1?timeout=32s
    E0222 22:36:02.158553       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachineInstance\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachineInstance"}
    E0222 22:36:05.208705       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachine"}
    I0222 22:36:12.006399       1 request.go:665] Waited for 1.397125003s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/coordination.k8s.io/v1?timeout=32s
    E0222 22:36:12.158084       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachineInstance\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachineInstance"}
    E0222 22:36:15.208906       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachine"}
    I0222 22:36:22.056959       1 request.go:665] Waited for 1.447452524s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/policy/v1beta1?timeout=32s
    E0222 22:36:22.159369       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachineInstance\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachineInstance"}
    E0222 22:36:25.208501       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachine"}
    I0222 22:36:32.105701       1 request.go:665] Waited for 1.49679813s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/authentication.k8s.io/v1beta1?timeout=32s
    E0222 22:36:32.158523       1 logr.go:265] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind \"VirtualMachineInstance\" in version \"kubevirt.io/v1\""  "kind"={"Group":"kubevirt.io","Kind":"VirtualMachineInstance"}
    E0222 22:36:33.756698       1 controller.go:203] controller/kubevirtmachine "msg"="Could not wait for Cache to sync" "error"="failed to wait for kubevirtmachine caches to sync: timed out waiting for cache to be synced" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtMachine" 
    I0222 22:36:33.756772       1 logr.go:249]  "msg"="Stopping and waiting for non leader election runnables"  
    I0222 22:36:33.756789       1 logr.go:249]  "msg"="Stopping and waiting for leader election runnables"  
    I0222 22:36:33.756815       1 controller.go:240] controller/kubevirtcluster "msg"="Shutdown signal received, waiting for all workers to finish" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtCluster" 
    I0222 22:36:33.756849       1 controller.go:242] controller/kubevirtcluster "msg"="All workers finished" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="KubevirtCluster" 
    I0222 22:36:33.756862       1 logr.go:249]  "msg"="Stopping and waiting for caches"  
    I0222 22:36:33.756940       1 logr.go:249]  "msg"="Stopping and waiting for webhooks"  
    I0222 22:36:33.756982       1 logr.go:249] controller-runtime/webhook "msg"="shutting down webhook server"  
    I0222 22:36:33.758960       1 logr.go:249]  "msg"="Wait completed, proceeding to shutdown the manager"  
    E0222 22:36:33.759092       1 logr.go:265] setup "msg"="problem running manager" "error"="failed to wait for kubevirtmachine caches to sync: timed out waiting for cache to be synced"  
    

    Investigate the possibility of eliminating this dependency on KubeVirt in the management cluster. This is relevant when the management and infra clusters are decoupled; arguably, the KubeVirt deployment requirement should then only be imposed on the infra cluster.
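    One possible direction, sketched as a discovery probe that gates the KubeVirt watches (illustrative only, not the controller's current code):

    package example

    import (
        "k8s.io/client-go/discovery"
        "k8s.io/client-go/rest"
    )

    // kubevirtCRDsInstalled checks whether kubevirt.io/v1 is served before the
    // manager registers VirtualMachine/VirtualMachineInstance watches, so the
    // watches can be skipped or delayed when the CRDs are absent.
    func kubevirtCRDsInstalled(cfg *rest.Config) (bool, error) {
        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            return false, err
        }
        resources, err := dc.ServerResourcesForGroupVersion("kubevirt.io/v1")
        if err != nil {
            // Treat "group/version not found" as "CRDs not installed"; a real
            // implementation should distinguish this from other API errors.
            return false, nil
        }
        return len(resources.APIResources) > 0, nil
    }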

    opened by agradouski 2
  • Documentation - goals, use cases, design decisions

    For the sake of potential users and contributors, it would be good to have a statement on the project goals, prioritized and dismissed use cases, as well as major design decisions made by the core team.

    This would make it much easier for those considering contributing to the project to decide whether it matches their needs.

    opened by mzayats 2