Kubernetes Virtualization API and runtime for defining and managing virtual machines.

Overview

KubeVirt


KubeVirt is a virtual machine management add-on for Kubernetes. The aim is to provide a common ground for virtualization solutions on top of Kubernetes.

Note: KubeVirt is a work in progress and under heavy development.

Introduction

Virtualization extension for Kubernetes

At its core, KubeVirt extends Kubernetes by adding additional virtualization resource types (especially the VM type) through Kubernetes' Custom Resource Definitions (CRD) API. By using this mechanism, the Kubernetes API can be used to manage these VM resources alongside all other resources Kubernetes provides.

The resources themselves are not enough to launch virtual machines. For this to happen, functionality and business logic need to be added to the cluster. The functionality is not added to Kubernetes itself; rather, it is added to a Kubernetes cluster by running additional controllers and agents on top of it.

The necessary controllers and agents are provided by KubeVirt.

As of today, KubeVirt can be used to declaratively:

  • Create a predefined VM
  • Schedule a VM on a Kubernetes cluster
  • Launch a VM
  • Stop a VM
  • Delete a VM

Example:

(asciinema demo recording of the steps above)
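
Since the recording is not reproduced here, below is a minimal sketch of that declarative workflow, modeled on the cirros examples that appear later on this page (name, image, and memory size are illustrative):

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-example
    spec:
      running: false            # set to true to schedule and launch the VM
      template:
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: containerdisk
            resources:
              requests:
                memory: 128Mi
          volumes:
          - containerDisk:
              image: quay.io/kubevirt/cirros-container-disk-demo:latest
            name: containerdisk

Applying the manifest with kubectl creates the VM object; flipping spec.running to true schedules and launches it, and deleting the object stops and removes the VM.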

To start using KubeVirt

Try our quickstart at kubevirt.io.

See our user documentation at kubevirt.io/docs.

Once you have the basics, you can learn more about how to run KubeVirt and its newest features by taking a look at the user documentation.

To start developing KubeVirt

To set up a development environment, please read our Getting Started Guide. To learn how to contribute, please read our contribution guide.

You can learn more about how KubeVirt is designed (and why it is that way), and about its major components, by taking a look at our developer documentation.

Community

If you have had enough of code and want to speak to people, you have a couple of options, such as the project's Slack channel and mailing lists.


Submitting patches

When sending patches to the project, the submitter is required to certify that they have the legal right to submit the code. This is achieved by adding a line

Signed-off-by: Real Name <email@address>

to the bottom of every commit message. The existence of such a line certifies that the submitter has complied with the Developer's Certificate of Origin 1.1 (as defined in the file docs/developer-certificate-of-origin).

This line can be added to a commit automatically and in the correct format by using the '-s' option of 'git commit' (i.e. 'git commit -s').

License

KubeVirt is distributed under the Apache License, Version 2.0.

Copyright 2016

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


Issues
  • Make VMI non-migratable when incompatible CPU

    What this PR does / why we need it: During migration, a compatible CPU model has to be used. This is hard when cpu-passthrough or host-model is used as the CPU choice.

    In the short term, this can be overcome by blocking migration for host-model or cpu-passthrough.
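
    For context, this is where the affected CPU model is selected in a VMI manifest (a sketch; both values inherit the CPU of the node the VM starts on, which is what makes migration compatibility hard to guarantee):

        spec:
          domain:
            cpu:
              model: host-model   # or: host-passthrough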

    Signed-off-by: Petr Kotas [email protected]

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes # https://bugzilla.redhat.com/show_bug.cgi?id=1760028

    Release note:

    NONE
    
    size/L release-note-none do-not-merge/hold kind/api-change approved dco-signoff: yes 
    opened by petrkotas 140
  • Support IPv6 masquerade

    What this PR does / why we need it:

    • Adding masquerade and DNAT rules to ip6tables.
    • Adding masquerade and DNAT rules to the IPv6 NAT nftable.
    • Giving an IPv6 address to the k6t-eth0 bridge.
    • Turning on 'net.ipv6.conf.all.forwarding' on the virt-launcher pod (done via the virt-handler).

    The PR doesn't contain:

    • IPv6 support for bind methods other than masquerade.

    Further changes needed:

    • DHCP server support for IPv6, or deciding on another method to set the IP and default gateway on the VM.
    • Adding vmIpv6NetworkCIDR to podInterface schema.
    • Support migration - PR https://github.com/kubevirt/kubevirt/pull/3221
    • Remove the skip of the PVC tests once PR https://github.com/kubevirt/kubevirt/pull/3171 is merged.
    • Remove the skip of [ref_id:1182]Probes for readiness should succeed [test_id:1200][posneg:positive]with working HTTP probe and http server - once https://issues.redhat.com/browse/CNV-4454 is done.
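
    For reference, masquerade binding is selected per interface in the VMI spec; this sketch uses the same fields as the cirros example further down this page:

        spec:
          domain:
            devices:
              interfaces:
              - masquerade: {}
                name: default
          networks:
          - name: default
            pod: {}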

    The PR was tested by configuring the addresses manually on the VM:

        $ sudo ip -6 addr add fd2e:f1fe:9490:a8ff::2/120 dev eth0
        $ sudo ip -6 route add default via fd2e:f1fe:9490:a8ff::1 src fd2e:f1fe:9490:a8ff::2

    To configure DNS on the VM, append the cluster DNS address to /etc/resolv.conf:

        echo "nameserver <dnsClusterIp>" >> /etc/resolv.conf

    where dnsClusterIp is the ClusterIP of the DNS service shown by kubectl get services -n kube-system.

    Release note:

    Add IPv6 support for masquerade. IPv6 support is still highly experimental and not yet officially supported.
    
    release-note size/XXL lgtm approved dco-signoff: yes 
    opened by AlonaKaplan 125
  • Non root vms

    What this PR does / why we need it: Switch virt-launcher to run as a non-root user (qemu). VMs that need VirtioFS will use the root implementation.

    Existing workloads will keep using the root implementation.

    Limitations:

    • VirtioFS unsupported configuration: virtiofs is not yet supported in session mode.
    • Any device that is not integrated into KubeVirt, for example a device that is passed in by some external device plugin and whose domain is modified by sidecar hooks (see the relevant issue).

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer: I split the PR into commits as small as I could (they are not necessarily ordered, for which I apologize). Release note:

    Virt-launcher can run as non-root user
    
    release-note size/XXL needs-rebase kind/api-change do-not-merge/work-in-progress kind/build-change dco-signoff: no 
    opened by xpivarc 120
  • Test sriov over vlan

    What this PR does / why we need it: We need to improve SR-IOV testing, so in this PR we:

    1. Change pingVirtualMachine to return an error and move the assertion outside of its scope. This is done so that we can also assert that a ping failed.

    2. Wrapped the HTTP requests for creating and deleting network attachment definitions in functions.

    3. Test connectivity between two VMs that share a VLAN over an SR-IOV network. A third VM that does not share the VLAN should not get connectivity.

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer:

    1. The second commit relies on the change in the first one because, as part of the test, we check that a ping to a VM which is not connected to the VLAN fails.
    2. If this PR is approved, we will add a follow-up PR to wrap all of the remaining raw HTTP requests in functions. That seemed unrelated to this PR, so I kept it within the SR-IOV scope.

    Release note:
    NONE
    
    size/M release-note-none lgtm approved dco-signoff: yes 
    opened by alonSadan 110
  • Enable Kubevirt works on ARM64 platform

    What this PR does / why we need it: Make KubeVirt work on the ARM64 platform.

    Special notes for your reviewer: In order to build successfully, an aarch64 equivalent of the builder image needs to be pushed to the KubeVirt Docker Hub.

    For the following reasons, I added new virtual machine config files for the ARM platform to the examples: vm-cirros-arm64.yaml and vmi-fedora-arm64.yaml.

    1. Only UEFI boot is supported on the ARM64 platform.
    2. The CPU model "host-model" is not well supported on the ARM64 platform, so we changed the CPU model to "host-passthrough".
    3. The VGA device is not supported by qemu-kvm in virt-launcher, so we set "autoattachGraphicsDevice" to false (the relevant fields are sketched below). Discussion of the VGA device can be found at: https://github.com/kubevirt/libvirt/issues/49
    4. The minimal memory size for UEFI boot with the edk2 package has to be larger than 128M. Details can be found at: https://github.com/tianocore/edk2/blob/ceacd9e992cd12f3c07ae1a28a75a6b8750718aa/ArmVirtPkg/Library/QemuVirtMemInfoLib/QemuVirtMemInfoPeiLibConstructor.c#L93
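
    Taken together, points 2 and 3 translate into a fragment like the following (a sketch of the relevant fields only, not the full vm-cirros-arm64.yaml):

        spec:
          domain:
            cpu:
              model: host-passthrough          # host-model is not well supported on ARM64
            devices:
              autoattachGraphicsDevice: false  # no VGA device via qemu-kvm in virt-launcher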

    Be cautious:

    1. this PR only prepares the ground for arm64,
    2. before cross-compilation is available, no releases will happen for arm64, and
    3. no one guarantees that arm64 works to any degree.
    Make KubeVirt code fit for arm64 support. No testing is performed against arm64 at this stage.
    
    release-note size/XXL lgtm approved needs-ok-to-test kind/build-change dco-signoff: yes 
    opened by zhlhahaha 98
  • Add discard=unmap option

    What this PR does / why we need it: This PR adds the option discard=unmap, which allows freeing space when the underlying storage supports the trim operation. However, this option is not suited for all disk types; for example, it should be avoided if the disk is preallocated or thick provisioned. The option is ignored if the PVC or DV has an annotation ending in:

    • preallocated disks: /storage.preallocation: true
    • thick provisioned disks: /storage.thick-provisioned: true

    For example, CDI sets the annotation cdi.kubevirt.io/storage.preallocation: true when preallocation is configured. This also allows the user or external tools to set these annotations.

    In the other cases, discard=unmap is set as the default when either disk or lun is used as the disk type.
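
    A sketch of how such an annotation might look on a DataVolume (the name is illustrative, and the spec is omitted):

        apiVersion: cdi.kubevirt.io/v1beta1
        kind: DataVolume
        metadata:
          name: preallocated-disk
          annotations:
            cdi.kubevirt.io/storage.preallocation: "true"   # discard=unmap will not be set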

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):

    Fixes #4781

    Special notes for your reviewer:

    Release note:

    Add discard=unmap option
    
    size/L release-note lgtm approved ok-to-test dco-signoff: yes 
    opened by alicefr 95
  • Functional test for customized XML marshal/unmarshal for user defined aliases

    Signed-off-by: Hao Yu [email protected]

    What this PR does / why we need it:

    This PR adds a functional test to demonstrate and test the effect of the recently merged PR #4927. That PR fixed a VMI migration issue where some device aliases in the domain XML were not kept consistent during the migration.

    Following a sound review suggestion, https://github.com/kubevirt/kubevirt/pull/4927#discussion_r573059909, we added an end-to-end migration test case with a tablet device in the migratable VMI. The newly added test carries out a successful migration with the fix defined in #4927, but fails the migration without the fix.

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer:

    The added functional test directly follows the suggestion at https://github.com/kubevirt/kubevirt/pull/4927#discussion_r573059909 in PR #4927.

    Release note:

    "NONE"
    
    size/M release-note-none lgtm approved ok-to-test dco-signoff: yes 
    opened by yuhaohaoyu 91
  • Initial EFI support

    What this PR does / why we need it:

    This PR adds preliminary support for the UEFI bootloader only.

    Which issue(s) this PR fixes (optional, in fixes #(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer:

    Hi, this PR hopefully adds UEFI support by adding support for the OS BIOS bootloader parameter of libvirt.

    I say hopefully because while the existing tests all pass, I'm extremely unfamiliar with Golang, libvirt, QEMU, and k8s, so I'm unsure how to specifically test this. I realize this is probably not my best first foray into the world of contributing back to open source, however I really need this functionality so I have done my best.

    I'd very much like to add the following tests but am not sure how to, and all my attempts caused existing tests to fail:

    • update schema_tests.go
    • test the updated converter function so that it actually assigns EFI support as requested
    • somehow include an example yaml that can boot an EFI VMI (sketched below).
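
    For the last point, such an example yaml might contain a fragment like this (a sketch; it assumes the firmware/bootloader field shape that KubeVirt eventually settled on, not necessarily this PR's exact schema):

        spec:
          domain:
            firmware:
              bootloader:
                efi: {}   # request UEFI instead of the default BIOS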

    Thank you!

    Release note:

    Add preliminary OS bootloader support for only UEFI.
    
    size/XL release-note kind/api-change 
    opened by gaahrdner 90
  • Create tap device on virt handler

    What this PR does / why we need it: Currently, libvirt is responsible for creating & configuring the tap devices that will be used by the kubevirt VMs. As such, virt-launcher pods require the NET_ADMIN capability.

    This PR moves the tap device creation / configuration stage to the virt-handler pod, which will create / configure the tap devices on behalf of the virt-launchers (and in their netns).

    To allow a non-privileged libvirt to consume a pre-configured tap device, KubeVirt's VM domxml specification has to be updated.

    Which issue(s) this PR fixes: Partially fixes #3085

    Special notes for your reviewer: This PR is part of the change-set to remove the NET_ADMIN capability from the virt-launcher pod.

    Release note:

    Have virt-handler (KubeVirt agent) create the tap devices on behalf of the virt-launchers.
    This is part of the work to remove the NET_ADMIN capability from the virt-launcher pods.
    
    The virt-launcher process now runs as virt_launcher.process selinux context by default.
    
    release-note size/XXL lgtm approved kind/build-change dco-signoff: yes 
    opened by maiqueb 87
  • Enhance support for real time workloads

    What this PR does / why we need it:

    This PR aims to improve support for real-time workloads and tune libvirt's XML for low CPU latency. The motivation for this change is to make KubeVirt aware of realtime workloads and to tune libvirt's XML to reduce CPU latency, following libvirt's recommendations. It achieves this goal by implementing the following changes:

    • Extends the VMI schema with a new knob inside spec.domain.cpu named realtime. The knob is itself a structure that, so far, only contains the field mask. The mask field lets the user define the range of vCPUs to pin via the vcpusched XML element, with scheduler type fifo and priority 1. If the field is left undefined, the logic pins all allocated vCPUs for the real-time workload.

    Example when enabling the realtime knob without defining the mask:

        spec:
          domain:
            cpu:
              realtime: {}
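
    And a sketch with the mask defined, pinning only a subset of the vCPUs (the exact mask syntax, a vCPU range string, is an assumption based on the description above):

        spec:
          domain:
            cpu:
              realtime:
                mask: "0-1"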
    
    • When the realtime knob is enabled, virt-launcher will add the following libvirt XML elements:
    • When huge pages are supported:
      • <nosharepages/>
      • virt-handler will set the memory lock limits for the qemu-kvm process.
    • Disable the Performance Monitoring Unit in the Features section:
      • <pmu state="off"/>
    • Add the vcpusched element in the CPUTune section:
      • <vcpusched scheduler="fifo" priority="1" vcpus="<pinned vcpus>">
    • Nodes that support realtime workloads (the kernel setting kernel.sched_rt_runtime_us=-1 allows unlimited runtime for processes running with realtime scheduling) will be labeled with kubevirt.io/realtime.
    • When deploying a VMI that has the realtime knob enabled in its manifest, the generated pod manifest will be mutated to include the node label selector kubevirt.io/realtime, so that the pod is scheduled on a node that supports realtime workloads.
    • In short, a VMI with the realtime knob enabled requires a node that is capable of running realtime workloads (kubevirt.io/realtime label), is able to isolate CPUs (cpumanager=true label), and has been configured with hugepages.
    • It also includes a commit for a test that runs a realtime CPU latency tool (cyclictest) to measure CPU latency in the VM. There is another PR in project-infra that will trigger the job periodically.

    Example of a complete VM manifest:

    ---
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: vm-realtime
      name: vm-realtime
      namespace: poc
    spec:
      running: true
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm-realtime
        spec:
          domain:
            devices:
              autoattachSerialConsole: true
              autoattachMemBalloon: false
              autoattachGraphicsDevice: false
              disks:
              - disk:
                  bus: virtio
                name: containerdisk      
            machine:
              type: ""
            resources:
              requests:
                memory: 4Gi
                cpu: 2
              limits:
                memory: 4Gi
                cpu: 2
            cpu:
              model: host-passthrough
              cores: 2
              sockets: 1
              threads: 1
              dedicatedCpuPlacement: true
              isolateEmulatorThread: true
              ioThreadsPolicy: auto
              features:
                - name: tsc-deadline
                  policy: require
              numa:
                guestMappingPassthrough: {}
              realtime: {}
            memory:
              hugepages:
                pageSize: 1Gi
              guest: 3Gi
          terminationGracePeriodSeconds: 0
          volumes:
          - containerDisk:
              image: quay.io/jordigilh/centos8-realtime:latest
            name: containerdisk
    

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):

    TBD

    Special notes for your reviewer:

    /cc @rmohr @vladikr @davidvossel /cc @pkliczewski @fabiand

    Release note:

    adds support for real time workloads
    
    release-note size/XXL kind/api-change lgtm approved dco-signoff: yes area/monitoring 
    opened by jordigilh 86
  • Add common labels into alert definitions

    We want to be able to list all KubeVirt alerts, so we added labels to differentiate them.

    Signed-off-by: assafad [email protected]

    Release note:

    Added common labels into alert definitions
    
    size/L release-note do-not-merge/hold needs-ok-to-test dco-signoff: yes area/monitoring 
    opened by assafad 4
  • docs: Elaborate more on disk expansion usage and why-nots

    It occurred to me that many people may be wondering why looking at the DV size, or at the VM size request, is less valid than it was before, so I should document the motivations for this decision. While here, also document why file systems are not expanded (I hear it's possible in some scenarios with guestfs, though), which is another question I have heard.

    Release note:

    Improve documentation for disk expansion
    
    size/M release-note dco-signoff: yes 
    opened by maya-r 2
  • Add severity for outdated workload alert

    Adds a severity (warning) to the alert that fires when workloads have not been updated for over 24 hours.

    NONE
    
    size/XS release-note-none lgtm dco-signoff: yes area/monitoring 
    opened by davidvossel 3
  • VMs cannot reboot when emulation is used

    Is this a BUG REPORT or FEATURE REQUEST?:

    /kind bug

    What happened:

    With useEmulation: true, VMs are unable to soft reboot.

    What you expected to happen:

    It should be slow, but they should eventually reboot.

    How to reproduce it (as minimally and precisely as possible):

    Deploy KubeVirt with:

    ---
    apiVersion: kubevirt.io/v1
    kind: KubeVirt
    metadata:
      name: kubevirt
      namespace: kubevirt
    spec:
      certificateRotateStrategy: {}
      configuration:
        developerConfiguration:
          featureGates: []
          useEmulation: true
      customizeComponents: {}
      imagePullPolicy: IfNotPresent
      workloadUpdateStrategy: {}
    

    Create a cirros VM (although it is reproducible with Fedora too):

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: vm-cirros
      name: vm-cirros
    spec:
      running: true
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm-cirros
        spec:
          domain:
            devices:
              interfaces:
              - masquerade: {}
                name: default
              disks:
              - disk:
                  bus: virtio
                name: containerdisk
              - disk:
                  bus: virtio
                name: cloudinitdisk
            resources:
              requests:
                memory: 2Gi
          terminationGracePeriodSeconds: 0
          networks:
          - name: default
            pod: {}
          volumes:
          - containerDisk:
              image: quay.io/kubevirt/cirros-container-disk-demo:latest
            name: containerdisk
          - cloudInitNoCloud:
              userData: |
                #!/bin/sh
                echo 'printed from cloud-init userdata'
            name: cloudinitdisk
    

    Connect using console and reboot:

    virtctl console vm-cirros
    reboot
    

    It shuts down but never comes back up:

    The system is going down NOW!
    Sent SIGTERM to all processes
    Sent SIGKILL to all processes
    Requesting system reboot
    [  215.325881] reboot: Restarting system
    [  215.326396] reboot: machine restart
    

    Environment:

    • KubeVirt version (use virtctl version): [kubevirt] virtctl version Client Version: version.Info{GitVersion:"v0.43.0", GitCommit:"7c7a2f4ace9ce3a88b164d4d282db55f08b6dc5e", GitTreeState:"clean", BuildDate:"2021-07-09T15:54:26Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{GitVersion:"v0.41.4-dirty", GitCommit:"0b3c97aed8051d2985b3ee2dee944ffa656822bc", GitTreeState:"dirty", BuildDate:"2021-10-20T07:29:44Z", GoVersion:"go1.13.14", Compiler:"gc", Platform:"linux/amd64"}
    • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v0.21.0-beta.1", GitCommit:"96e95cef877ba04872b88e4e2597eabb0174d182", GitTreeState:"clean", BuildDate:"2021-09-10T13:09:35Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.17", GitCommit:"68b4e26caf6ede7af577db4af62fb405b4dd47e6", GitTreeState:"clean", BuildDate:"2021-03-18T00:54:02Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"} WARNING: version difference between client (0.21) and server (1.18) exceeds the supported minor version skew of +/-1
    • VM or VMI specifications: In description above
    • Others: using kubevirtci on KubeVirt's release-0.41
    kind/bug 
    opened by phoracek 2
  • Run bazelisk run //robots/cmd/uploader:uploader -- -workspace /home/prow/go/src/github.com/kubevirt/project-infra/../kubevirt/WORKSPACE -dry-run=false

    Automatic run of "bazelisk run //robots/cmd/uploader:uploader -- -workspace /home/prow/go/src/github.com/kubevirt/project-infra/../kubevirt/WORKSPACE -dry-run=false". Please review

    size/M release-note-none lgtm dco-signoff: yes 
    opened by kubevirt-bot 2
  • Pass the VirtualMachineFlavor Name to cloud-init as instance-type

    KubeVirt recently added a Flavors API https://github.com/kubevirt/kubevirt/pull/6026, which provides a framework to name common VM configurations.

    This PR adds a Flavor field to the VirtualMachineInstance.Spec, which will be used to pass the flavor name as the instance-type field in cloud-init metadata when appropriate.

    I chose instance-type as the field name since I saw other cloud providers use that name.

    Use cases:

    A user has a cloud-init script that consumes instance-type as input to do some work on the guest.

    Parity with other cloud providers' cloud-init solutions, so there is more opportunity to migrate between providers: https://cloudinit.readthedocs.io/en/latest/topics/instancedata.html
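
    A sketch of the NoCloud meta-data a guest might then see (the flavor name "small" and the instance-id value are illustrative):

        # cloud-init instance metadata as exposed to the guest (sketch)
        instance-id: vm-example
        instance-type: small   # populated from the VirtualMachineFlavor name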

    Add instance-type to cloud-init metadata
    
    size/M release-note do-not-merge/hold dco-signoff: yes 
    opened by rthallisey 4
  • tests suite reporter: Collect guest VM cloud-init logs

    Signed-off-by: Or Mergi [email protected]

    What this PR does / why we need it: With this PR, the test suite reporter will collect cloud-init logs from the guest VMs of failing tests and save them under artifacts/cloud-init/vmis.

    Having those logs will help debug tests that fail due to cloud-init configuration problems.

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer: This PR changes will help with working on https://github.com/kubevirt/kubevirt/issues/6776

    Release note:

    NONE
    
    size/M release-note-none dco-signoff: yes 
    opened by ormergi 2
  • tests, SR-IOV: Shorten tests SR-IOV NIC name

    Signed-off-by: Or Mergi [email protected]

    What this PR does / why we need it: We see errors in CI because cloud-init fails to configure the SR-IOV network interface name in connectivity tests, since the name is not valid (see SearchCI). This PR renames the SR-IOV test network interfaces to valid ones.

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer: Network interface name length can be up to 15 characters.

    Release note:

    NONE
    
    size/XS release-note-none lgtm approved dco-signoff: yes 
    opened by ormergi 5
  • Add full Podman support for `make` and `make test` instead of hard-coded docker

    What this PR does / why we need it: This PR uses the determine_cri_bin() function, now defined in hack/common.sh, to dynamically discover the right CRI to use when the KUBEVIRT_CRI environment variable is not defined. Also, all hard-coded docker calls under the hack/ directory are now replaced with ${KUBEVIRT_CRI}.

    This allows KubeVirt developers to develop with either Podman or Docker, as they choose.

    The PR was tested by performing make and make test in an environment without Docker installed and without defining KUBEVIRT_CRI explicitly.

    Note: make cluster-up and make cluster-sync are still not supported. In order to support Podman there as well, further changes in CI are required (for example, https://github.com/kubevirt/kubevirtci/blob/main/cluster-provision/gocli/cmd/run.go uses docker explicitly, which is used in quay.io/kubevirtci/k8s-1.XX).

    Release note:

    Add full Podman support for `make` and `make test`
    
    size/M release-note dco-signoff: yes 
    opened by iholder-redhat 4
  • Fix image upload with force-bind flag

    What this PR does / why we need it:

    This PR adapts the tests to run in environments without ceph. It fixes CI failures in the cgroup v2 periodic job: https://prow.ci.kubevirt.io/view/gs/kubevirt-prow/logs/periodic-kubevirt-e2e-k8s-1.20-cgroupsv2/1464383532250959872

    Additionally, the PR fixes the virtctl image-upload command with the --force-bind flag: previously the flag wasn't checked for DataVolumes, causing an error if the phase was WaitForFirstConsumer (while the flag description states that the WaitForFirstConsumer logic is supposed to be ignored).

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer:

    Release note:

    NONE
    
    size/XS release-note-none lgtm dco-signoff: yes 
    opened by vasiliy-ul 4
Releases: v0.47.1