Kubernetes Virtualization API and runtime for defining and managing virtual machines.

Overview

KubeVirt


KubeVirt is a virtual machine management add-on for Kubernetes. The aim is to provide a common ground for virtualization solutions on top of Kubernetes.

Note: KubeVirt is a heavy work in progress.

Introduction

Virtualization extension for Kubernetes

At its core, KubeVirt extends Kubernetes by adding additional virtualization resource types (especially the VM type) through Kubernetes's Custom Resource Definitions API. By using this mechanism, the Kubernetes API can be used to manage these VM resources alongside all other resources Kubernetes provides.
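
For illustration, a minimal VirtualMachine manifest might look like the following. This is a sketch based on upstream examples; the API version, image, and resource sizes vary by release:

```yaml
# Sketch of a VM resource managed through the Kubernetes API.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false                 # the VM is defined but not started
  template:
    spec:
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 64M
      volumes:
      - name: containerdisk
        containerDisk:
          image: quay.io/kubevirt/cirros-container-disk-demo
```

Once applied with kubectl apply -f, the resource can be listed and inspected like any other Kubernetes object (e.g. kubectl get vms).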

The resources themselves are not enough to launch virtual machines. For this to happen, the functionality and business logic need to be added to the cluster. The functionality is not added to Kubernetes itself; rather, it is added to an existing Kubernetes cluster by running additional controllers and agents on it.

The necessary controllers and agents are provided by KubeVirt.

As of today KubeVirt can be used to declaratively

  • Create a predefined VM
  • Schedule a VM on a Kubernetes cluster
  • Launch a VM
  • Stop a VM
  • Delete a VM
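
The lifecycle above is driven declaratively. As a sketch (assuming a VirtualMachine named testvm as in the upstream examples; newer releases also offer spec.runStrategy as an alternative), launching and stopping amount to toggling one field:

```yaml
# Setting spec.running to true asks KubeVirt's controllers to schedule
# and launch the VM; setting it back to false stops it.
spec:
  running: true
```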

Example:

(asciicast demo)

To start using KubeVirt

Try our quickstart at kubevirt.io.

See our user documentation at kubevirt.io/docs.

Once you have the basics, you can learn more about how to run KubeVirt and its newest features.

To start developing KubeVirt

To set up a development environment please read our Getting Started Guide. To learn how to contribute, please read our contribution guide.

You can learn more about how KubeVirt is designed (and why it is that way), and about its major components, by taking a look at our developer documentation.

Community

If you've had enough of code and want to talk to people, you have a couple of options:

Related resources

Submitting patches

When sending patches to the project, the submitter is required to certify that they have the legal right to submit the code. This is achieved by adding a line

Signed-off-by: Real Name <email@address.com>

to the bottom of every commit message. The existence of such a line certifies that the submitter has complied with the Developer's Certificate of Origin 1.1 (as defined in the file docs/developer-certificate-of-origin).

This line can be added to a commit automatically, in the correct format, by using the '-s' option to 'git commit'.

License

KubeVirt is distributed under the Apache License, Version 2.0.

Copyright 2016

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


Comments
  • Revert

    Revert "Merge pull request #7986 from iholder-redhat/fix/MigrationForVMX"

    This reverts commit f963215842eb86dce271d1ae4bfc320e97079915, reversing changes made to 9b9848d90d6da884e253e0280a2b68973562adac.

    Reverting, as this patch requires at least one node to be capable of starting a Windows VM with Hyper-V re-enlightenment enabled. This is a regression: before this patch, that capability was required only on migration.

    What this PR does / why we need it:

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer:

    Release note:

    Fix the start of Windows VMs with Hyper-V enlightenments on nodes without the 'tsc' counter capability.
    
    size/L release-note do-not-merge/hold dco-signoff: yes release-blocker/release-0.56 
    opened by acardace 399
  • Make VMI non-migratable when incompatible CPU

    Make VMI non-migratable when incompatible CPU

    What this PR does / why we need it: During migration a compatible CPU model has to be used. This is hard when cpu-passthrough or host-model is chosen as the CPU model.

    In the short term, this issue can be overcome by blocking migration for host-model or cpu-passthrough.

    Signed-off-by: Petr Kotas [email protected]

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes # https://bugzilla.redhat.com/show_bug.cgi?id=1760028

    Release note:

    NONE
    
    size/L release-note-none do-not-merge/hold kind/api-change approved dco-signoff: yes 
    opened by petrkotas 140
  • Support IPv6 masquerade

    Support IPv6 masquerade

    What this PR does / why we need it:

    • Adding masquerade and DNAT rules to ip6tables.
    • Adding masquerade and DNAT rules to the IPv6 nat nftable.
    • Giving an IPv6 address to the k6t-eth0 bridge.
    • Turning on 'net.ipv6.conf.all.forwarding' in the virt-launcher pod (done via virt-handler).

    The PR doesn't contain:

    • IPv6 support for bind methods other than masquerade.

    Further changes needed:

    • DHCP server support for IPv6, or deciding on another method to set the IP and default gateway on the VM.
    • Adding vmIpv6NetworkCIDR to the podInterface schema.
    • Migration support - PR https://github.com/kubevirt/kubevirt/pull/3221
    • Remove the skip of the PVC tests once PR https://github.com/kubevirt/kubevirt/pull/3171 is merged.
    • Remove the skip of "[ref_id:1182]Probes for readiness should succeed [test_id:1200][posneg:positive]with working HTTP probe and http server" once https://issues.redhat.com/browse/CNV-4454 is done.

    The PR was tested by configuring the VM manually:

    $ sudo ip -6 addr add fd2e:f1fe:9490:a8ff::2/120 dev eth0
    $ sudo ip -6 route add default via fd2e:f1fe:9490:a8ff::1 src fd2e:f1fe:9490:a8ff::2

    To configure DNS on the VM: echo "nameserver <dnsClusterIp>" >> /etc/resolv.conf, where dnsClusterIp is the ClusterIP shown in the output of kubectl get services -n kube-system.
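
For reference, masquerade is one of the interface binding methods requested on the VMI spec. A minimal sketch, trimmed to the relevant fields (field names follow the KubeVirt networking API; the name and memory value are illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-masquerade
spec:
  domain:
    devices:
      interfaces:
      - name: default
        masquerade: {}        # NAT traffic between the pod and the VM
    resources:
      requests:
        memory: 64M
  networks:
  - name: default
    pod: {}                   # bind to the pod network
```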

    Release note:

    Add IPv6 support for masquerade. IPv6 support is still highly experimental and not supported.
    
    release-note size/XXL lgtm approved dco-signoff: yes 
    opened by AlonaKaplan 125
  • Non root vms

    Non root vms

    What this PR does / why we need it: Switch virt-launcher to run as a non-root user (qemu). VMs that need VirtioFS will use the root implementation.

    Existing workloads will use root implementation.

    Limitations:

    • VirtioFs unsupported configuration: virtiofs is not yet supported in session
    • Any device that is not integrated into KubeVirt; for example, a device that is passed in with some external device plugin and whose domain is modified by sidecar hooks. Relevant Issue

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer: I split the PR into commits as small as I could (they are not necessarily ordered, for which I apologize). Release note:

    Virt-launcher can run as non-root user
    
    release-note size/XXL needs-rebase kind/api-change do-not-merge/work-in-progress kind/build-change dco-signoff: no 
    opened by xpivarc 120
  • Fully support cgroups v2 and introduce new cohesive cgroup package

    Fully support cgroups v2 and introduce new cohesive cgroup package

    What this PR does / why we need it:

    Currently KubeVirt does not fully support cgroups v2, especially when it comes to the devices subsystem. This is mainly because in v2 the devices subsystem is implemented entirely differently (using eBPF programs instead of a file-system interface).

    Furthermore, today's cgroup-related code is spread across different components and is not cohesive. A lot of this code is also duplicated from the runc repository, with small changes that serve as workarounds for runc logic that was severely broken until recently, when they released their first stable version in five years. This code was messy, hard to reason about, unmaintained and scary to touch.

    Therefore, this PR is aimed to accomplish the following goals:

    1. Fully support cgroups v2 (*).
    2. Introduce a new cohesive cgroup package that helps manage cgroup configurations. This package has to be v1/v2 agnostic, easy to use, and serve as a very thin glue layer between us and runc.
    3. Perform major refactoring and cleaning of the old cgroup-related code, deleting a lot of dead and unmaintained code which is duplicated from other libraries and consists of many workarounds.

    (*) container-selinux had a "bug" in its policy - it wasn't possible to support v2 with it, since it does not grant Super Privileged Containers permission to create eBPF programs. I issued this PR to fix their policy; they closed it (for some reason) in favour of another PR that was merged. Once we pick up their changes we should have full cgroups v2 support. I confirm that I have tested v2 locally after setting SELinux to permissive, and everything works as expected.

    The current SELinux denial, which should go away once we pick up the container-selinux changes, is the following:

    ----
    time->Tue Aug 24 23:37:13 2021
    type=PROCTITLE msg=audit(1629848233.345:26872): proctitle=766972742D6368726F6F74007365742D6367726F75707376322D64657669636573002D2D7069640037
    3930373134002D2D70617468002F7379732F66732F6367726F75702F6B756265706F64732F627572737461626C652F706F6465336662623234302D646366612D346134612D62
    3630352D3937393332643534383866642F
    type=SYSCALL msg=audit(1629848233.345:26872): arch=c000003e syscall=321 success=yes exit=8 a0=5 a1=c00018f230 a2=78 a3=10 items=0 ppid=78852
    9 pid=791348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="virt-chroot" exe="/usr/
    bin/virt-chroot" subj=system_u:system_r:virt_launcher.process:s0:c332,c810 key=(null)
    type=AVC msg=audit(1629848233.345:26872): avc:  denied  { prog_run } for  pid=791348 comm="virt-chroot" scontext=system_u:system_r:virt_laun
    cher.process:s0:c332,c810 tcontext=system_u:system_r:virt_launcher.process:s0:c332,c810 tclass=bpf permissive=1
    type=BPF msg=audit(1629848233.345:26872): prog-id=7923 op=LOAD
    type=AVC msg=audit(1629848233.345:26872): avc:  denied  { prog_load } for  pid=791348 comm="virt-chroot" scontext=system_u:system_r:virt_lau
    ncher.process:s0:c332,c810 tcontext=system_u:system_r:virt_launcher.process:s0:c332,c810 tclass=bpf permissive=1
    
    

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): addresses this issue: https://github.com/kubevirt/kubevirt/issues/5336

    Special notes for your reviewer:

    1. Previously all cgroup tests were at the unit test level. This is problematic for many reasons, one of them being that kernel features are very hard to mock in a way that is valuable for testing. IMHO, in this case, unit tests are only valuable to check OUR logic, which is very thin right now since the whole idea is to leverage runc for the heavy lifting. Functional tests are not yet written. I will work on that in parallel to the review and would be happy to hear suggestions regarding it.

    2. I've introduced RunWithChroot() in pkg/virt-handler/cgroup/util.go, which basically chroots into a given path, executes a given function and chroots back to the original path. I tried to place this function in pkg/util/util.go to be more global, as it shouldn't live under the cgroup package, but when I tried to compile I got undefined: syscall.Chroot. After reading a bit online I saw that the syscall package exposes different functions / variables according to the OS for which it is being compiled. When compiling, this appears:

    Use --sandbox_debug to see verbose messages from the sandbox builder failed: error executing command bazel-out/host/bin/external/go_sdk/builder compilepkg -sdk external/go_sdk -installsuffix windows_amd64 -tags selinux -src pkg/util/os_helper.go -src pkg/util/util.go -arc ... (remaining 19 argument(s) skipped)
    

    The -installsuffix windows_amd64 troubles me. Can it be the reason for Chroot not being defined? Do we compile for Windows on purpose? If we do - what's the reason for it? Where would you suggest placing this function then?

    Release note:

    Fully support cgroups v2, include a new cohesive package and perform major refactoring.
    
    release-note size/XXL lgtm approved kind/build-change dco-signoff: yes 
    opened by iholder-redhat 119
  • Test sriov over vlan

    Test sriov over vlan

    What this PR does / why we need it: we need to improve SR-IOV testing, so in this PR we:

    1. Change pingVirtualMachine to return an error and move the assertion outside of its scope. This is done so that we can also assert that a ping failed.

    2. Wrapped the HTTP requests for creating and deleting network attachment definitions in functions.

    3. Test connectivity between two VMs that share a VLAN over an SR-IOV network. A third VM that does not share the VLAN should not be reachable. Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer:

    1. The second commit relies on the change in the first one, because as part of the test we check that a ping to a VM which is not connected to the VLAN fails.
    2. If this PR is approved, we will add a follow-up PR to wrap all the not-so-pretty HTTP requests in functions. I thought that was not related to this PR, so I kept it in the SR-IOV scope. Release note:
    NONE
    
    size/M release-note-none lgtm approved dco-signoff: yes 
    opened by alonSadan 110
  • Enable KubeVirt to work on the ARM64 platform

    Enable KubeVirt to work on the ARM64 platform

    What this PR does / why we need it: Make KubeVirt work on the ARM64 platform.

    Special notes for your reviewer: In order to build successfully, an equivalent aarch64 builder image needs to be pushed to the kubevirt Docker Hub.

    For the following reasons, I added new virtual machine config files for the ARM platform in examples: vm-cirros-arm64.yaml and vmi-fedora-arm64.yaml.

    1. Only UEFI boot is supported on the ARM64 platform.
    2. The CPU model "host-model" is not well supported on the ARM64 platform, so we change the CPU model to "host-passthrough".
    3. The VGA device is not supported by qemu-kvm in virt-launcher, so we set "autoattachGraphicsDevice" to false. Discussion about the VGA device can be found at: https://github.com/kubevirt/libvirt/issues/49
    4. The minimal memory size for UEFI boot with the edk2 package should be larger than 128M. Details can be found at: https://github.com/tianocore/edk2/blob/ceacd9e992cd12f3c07ae1a28a75a6b8750718aa/ArmVirtPkg/Library/QemuVirtMemInfoLib/QemuVirtMemInfoPeiLibConstructor.c#L93
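
The constraints above translate into VMI fields roughly like this. This is a hedged sketch based on the list; the field names follow the KubeVirt API, while the name, memory value, and image are placeholders:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-fedora-arm64
spec:
  domain:
    firmware:
      bootloader:
        efi: {}                        # (1) only UEFI boot is supported
    cpu:
      model: host-passthrough          # (2) host-model is not well supported
    devices:
      autoattachGraphicsDevice: false  # (3) no VGA device on ARM64
      disks:
      - name: containerdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 256M                   # (4) UEFI/edk2 needs more than 128M
  volumes:
  - name: containerdisk
    containerDisk:
      image: example/fedora-arm64-container-disk  # placeholder image
```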

    Be cautious:

    1. This PR only prepares the ground for arm64.
    2. Before cross-compilation is in place, no releases will happen for arm64.
    3. No one guarantees that arm64 works to any degree.
    Make the KubeVirt code fit for arm64 support. No testing is performed against arm64 at this stage.
    
    release-note size/XXL lgtm approved needs-ok-to-test kind/build-change dco-signoff: yes 
    opened by zhlhahaha 98
  • Add discard=unmap option

    Add discard=unmap option

    What this PR does / why we need it: This PR adds the option discard=unmap, which allows freeing space when the underlying storage supports the TRIM operation. However, this option is not suited for all disk types. For example, it should be avoided if the disk is preallocated or thick-provisioned. The option is ignored if the PVC or DV has an annotation that contains:

    • preallocated disks: /storage.preallocation: true
    • thick provisioned disks: /storage.thick-provisioned: true

    For example, CDI sets the annotation cdi.kubevirt.io/storage.preallocation: true when preallocation is configured. Additionally, this mechanism allows the user or external tools to set these annotations.

    In the other cases, discard=unmap is set as the default when either disk or lun is used as the disk type.
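
As a sketch, a PVC carrying the preallocation annotation described above would look roughly like this (the name and sizes are illustrative; the annotation key is the CDI one quoted in the text):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: preallocated-disk
  annotations:
    # With this annotation present, discard=unmap is not applied to the disk.
    cdi.kubevirt.io/storage.preallocation: "true"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```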

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):

    Fixes #4781

    Special notes for your reviewer:

    Release note:

    Add discard=unmap option
    
    size/L release-note lgtm approved ok-to-test dco-signoff: yes 
    opened by alicefr 95
  • Functional test for customized XML marshal/unmarshal for user defined aliases

    Functional test for customized XML marshal/unmarshal for user defined aliases

    Signed-off-by: Hao Yu [email protected]

    What this PR does / why we need it:

    This PR adds a functional test to demonstrate and test the effect of the recently merged PR #4927. That PR fixed a VMI migration issue where some device aliases in the domain XML were not kept consistent during migration.

    Following a sound review suggestion, https://github.com/kubevirt/kubevirt/pull/4927#discussion_r573059909, we added an end-to-end migration test case with a tablet device in the migratable VMI. The newly added test will carry out a successful migration with the fix from #4927, but will fail the migration without it.

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer:

    The added functional test directly follows the suggestion at https://github.com/kubevirt/kubevirt/pull/4927#discussion_r573059909, in PR #4927.

    Release note:

    "NONE"
    
    size/M release-note-none lgtm approved ok-to-test dco-signoff: yes 
    opened by yuhaohaoyu 91
  • Initial EFI support

    Initial EFI support

    What this PR does / why we need it:

    This PR adds preliminary support for the UEFI bootloader only.

    Which issue(s) this PR fixes (optional, in fixes #(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer:

    Hi, this PR hopefully adds UEFI support by adding support for the OS BIOS bootloader parameter of libvirt.

    I say hopefully because while the existing tests all pass, I'm extremely unfamiliar with Golang, libvirt, QEMU, and k8s, so I'm unsure how to specifically test this. I realize this is probably not my best first foray into the world of contributing back to open source, however I really need this functionality so I have done my best.

    I'd very much like to add the following tests but am not sure how to, and all my attempts caused existing tests to fail:

    • update schema_tests.go
    • test the updated converter function so that it actually assigns EFI support as requested
    • somehow include an example yaml that can boot an EFI VMI.

    Thank you!

    Release note:

    Add preliminary OS bootloader support for UEFI only.
    
    size/XL release-note kind/api-change 
    opened by gaahrdner 90
  • rpm: Update virtualization packages

    rpm: Update virtualization packages

    What this PR does / why we need it:

    Updates virtualization packages. Specifically:

    • libvirt 7.6.0-6 → 8.0.0-2
    • QEMU 6.0.0-33 → 6.2.0-5

    The libvirt.org/go/libvirt Go package is also updated to match.

    Release note:

    This version of KubeVirt includes upgraded virtualization technology based on libvirt 8.0.0 and QEMU 6.2.0.
    Each new release of libvirt and QEMU contains numerous improvements and bug fixes.
    
    sig/network release-note size/XXL lgtm approved dco-signoff: yes 
    opened by andreabolognani 87
  • Fix flakiness in vmsnapshot tests due to racy indexer

    Fix flakiness in vmsnapshot tests due to racy indexer

    Revert "Add vm snapshot obj to queue in case of content deletion", with the exception of the callback to handleVMSnapshotContent in case of delete. The change wasn't required, and it also introduced flakiness due to a race between updating the content and updating the VMSnapshot status with the content name; as a result, the indexer didn't always work as expected.

    This reverts commit e7621e58921c8615dbd91747b0b55f63798c1672.

    What this PR does / why we need it: This should fix the flakiness in snapshot tests, especially in the last quarantined test: [test_id:6952][QUARANTINE]snapshot change phase to in progress and succeeded and then should not fail

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer:

    Release note:

    NONE
    
    size/S release-note-none dco-signoff: yes 
    opened by ShellyKa13 1
  • reviewer-guide: Update notes on use of informers

    reviewer-guide: Update notes on use of informers

    /cc @rmohr

    What this PR does / why we need it:

    Based on the following ML thread:

    https://groups.google.com/g/kubevirt-dev/c/q_hR1tFH4Rk

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer:

    Release note:

    NONE
    
    size/XS release-note-none dco-signoff: yes 
    opened by lyarwood 1
  • Instancetype: Reintroduce informers in virt controller

    Instancetype: Reintroduce informers in virt controller

    /area instancetype /cc @rmohr /cc @akrejcir /cc @davidvossel

    What this PR does / why we need it:

    The use of these informers was previously removed by https://github.com/kubevirt/kubevirt/commit/ee4e266dd93b168cc84743a797f64aeb36d75d6b. After further discussions on the ML [1], however, it has become clear that the removal of these informers from virt-controller was not required and they can be reintroduced; this change does that.

    The virt-api continues to use only client calls to look up instancetypes and preferences.

    One small change from the original implementation is that client calls are now made if an object is not found within an informer. This ensures the object is indeed missing and that the failure was not caused by a race to update the associated informer.

    [1] https://groups.google.com/g/kubevirt-dev/c/q_hR1tFH4Rk

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer:

    This is currently based on https://github.com/kubevirt/kubevirt/pull/8282 as that series already refactors lots of what we are touching here, only the final commit in this branch is for this PR.

    Release note:

    NONE
    
    size/XXL release-note-none do-not-merge/work-in-progress 
    opened by lyarwood 1
  • cloud-init: Skip networkData lookup if not needed

    cloud-init: Skip networkData lookup if not needed

    What this PR does / why we need it: When ConfigDrive cloud-init is configured, the networkData from the mounted secret has to be checked only if NetworkDataSecretRef is set.

    Release note:

    Don't show a failure if a ConfigDrive cloud-init has UserDataSecretRef but not NetworkDataSecretRef
    
    size/M release-note dco-signoff: yes 
    opened by qinqon 3
  • Update CDI/CDI-API vendoring to get 1.55.0

    Update CDI/CDI-API vendoring to get 1.55.0

    Signed-off-by: Alexander Wels [email protected]

    What this PR does / why we need it: Currently KubeVirt uses CDI 1.50.0 and CDI-API 1.42.0, which is inconsistent. This updates them both to 1.55.0.

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer: Will need to update CDI version in kubevirtci image to also be 1.55.0

    Release note:

    NONE
    
    size/XXL release-note-none dco-signoff: yes 
    opened by awels 2
  • Fix tests CDI DV GC default reference

    Fix tests CDI DV GC default reference

    Signed-off-by: Arnon Gilboa [email protected]

    What this PR does / why we need it: CDI is enabling DataVolume garbage collection by default since: https://github.com/kubevirt/containerized-data-importer/pull/2421

    To prevent DV-related CI failures due to the old GC disabled default reference, we enabled GC by default for all lanes in cluster-deploy: https://github.com/kubevirt/kubevirt/pull/8479

    With the current fix, we no longer need to explicitly enable it by default in cluster-deploy.

    Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

    Special notes for your reviewer:

    Release note:

    NONE
    
    size/XS release-note-none do-not-merge/hold dco-signoff: yes 
    opened by arnongilboa 3