CSI Driver for dynamic provisioning of Persistent Local Volumes for Kubernetes using LVM.

Overview

OpenEBS LVM CSI Driver



CSI driver for provisioning Local PVs backed by LVM and more.

Project Status

Currently the LVM CSI Driver is in alpha.

Usage

Prerequisites

Before installing the LVM driver, please make sure your Kubernetes cluster meets the following prerequisites:

  1. All the nodes must have the lvm2 utils installed (see the quick check below).
  2. A volume group has been set up for provisioning the volumes.
  3. You have access to install RBAC components into the kube-system namespace. The OpenEBS LVM driver components are installed in the kube-system namespace to allow them to be flagged as system critical components.
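
You can quickly verify the first two prerequisites on each node; this is a minimal check, assuming the volume group is named lvmvg as in the Setup section below:

sudo lvm version   # confirms the lvm2 utils are installed
sudo vgs lvmvg     # confirms the volume group exists (run after the Setup step)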

Supported System

K8S : 1.17+

OS : Ubuntu

LVM : 2

Setup

Find the disk which you want to use for LVM. For testing, you can use a loopback device:

truncate -s 1024G /tmp/disk.img
sudo losetup -f /tmp/disk.img --show

Create the volume group on all the nodes; it will be used by the LVM driver for provisioning the volumes:

sudo pvcreate /dev/loop0
sudo vgcreate lvmvg /dev/loop0

Installation

Deploy the operator YAML:

kubectl apply -f https://raw.githubusercontent.com/openebs/lvm-localpv/master/deploy/lvm-operator.yaml
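
Once the operator is applied, you can check that the driver pods are up before moving on (role=openebs-lvm is the label used on the driver pods):

kubectl get pods -n kube-system -l role=openebs-lvm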

Deployment

Deploy the sample fio application:

kubectl apply -f https://raw.githubusercontent.com/openebs/lvm-localpv/master/deploy/sample/fio.yaml
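
To provision volumes for your own workloads instead of the sample, a StorageClass and PVC along the following lines are typically enough; volgroup must match the volume group created in the Setup step, and the names openebs-lvmpv and csi-lvmpv are illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
provisioner: local.csi.openebs.io
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-lvmpv
spec:
  storageClassName: openebs-lvmpv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi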

Features

  • Access Modes
    • ReadWriteOnce
    • ReadOnlyMany
    • ReadWriteMany
  • Volume modes
    • Filesystem mode
    • Block mode
  • Supports fsTypes: ext4, btrfs, xfs
  • Volume metrics
  • Topology
  • Snapshot
  • Clone
  • Volume Resize (see the example below)
  • Backup/Restore
  • Ephemeral inline volume
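
For example, expanding a volume only requires raising the requested size on the PVC, assuming the StorageClass has allowVolumeExpansion set to true; a minimal sketch with an illustrative claim name:

kubectl patch pvc csi-lvmpv -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'
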
Comments
  • Rancher?

    Rancher?

    What steps did you take and what happened: Installed this in rancher, but I am not getting any volumes provisioned.

    What did you expect to happen: Volumes to be provisioned

    The output of the following commands will help us better understand what's going on: (Pasting long output into a GitHub gist or other Pastebin is fine.)

    • kubectl logs -f openebs-lvm-controller-0 -n kube-system -c openebs-lvm-plugin
    • kubectl logs -f openebs-lvm-node-[xxxx] -n kube-system -c openebs-lvm-plugin
    • kubectl get pods -n kube-system
    • kubectl get lvmvol -A -o yaml

    gist with those

    Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]

    Environment:

    • LVM Driver version
    • Kubernetes version (use kubectl version)
    kubectl version
    Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:41:02Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.8", GitCommit:"5575935422cc1cf5169dfc8847cb587aa47bac5a", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:07Z", GoVersion:"go1.15.13", Compiler:"gc", Platform:"linux/amd64"}
    
    • Kubernetes installer & version: rancher
    • Cloud provider or hardware configuration: KVM on linux
    • OS (e.g. from /etc/os-release):
    cat /etc/os-release 
    NAME="RancherOS"
    VERSION=v1.5.8
    ID=rancheros
    ID_LIKE=
    VERSION_ID=v1.5.8
    PRETTY_NAME="RancherOS v1.5.8"
    HOME_URL="http://rancher.com/rancher-os/"
    SUPPORT_URL="https://forums.rancher.com/c/rancher-os"
    BUG_REPORT_URL="https://github.com/rancher/os/issues"
    BUILD_ID=
    
    $ sudo pvscan
      PV /dev/vda1   VG lvmvg           lvm2 [<33.30 GiB / <33.30 GiB free]
      Total: 1 [<33.30 GiB] / in use: 1 [<33.30 GiB] / in no VG: 0 [0   ]
    $ sudo vgscan
      Reading all physical volumes.  This may take a while...
      Found volume group "lvmvg" using metadata type lvm2
    
    $ cat openebs-lvm-sc.yaml 
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-lvmpv
    parameters:
      storage: "lvm"
      volgroup: "lvmvg"
    provisioner: local.csi.openebs.io
    
    $ k get sc
    NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  5h38m
    openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  5h38m
    openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  5h38m
    openebs-lvmpv (default)     local.csi.openebs.io                                       Delete          Immediate              false                  114m
    openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  5h38m
    
    $ k get pvc
    NAME                                       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
    data-whc01elastic-opendistro-es-data-0     Pending                                      openebs-lvmpv   105m
    data-whc01elastic-opendistro-es-master-0   Pending                                      openebs-lvmpv   105m
    
    $ k describe pvc data-whc01elastic-opendistro-es-data-0
    Name:          data-whc01elastic-opendistro-es-data-0
    Namespace:     default
    StorageClass:  openebs-lvmpv
    Status:        Pending
    Volume:        
    Labels:        app=whc01elastic-opendistro-es
                   heritage=Helm
                   release=whc01elastic
                   role=data
    Annotations:   volume.beta.kubernetes.io/storage-provisioner: local.csi.openebs.io
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      
    Access Modes:  
    VolumeMode:    Filesystem
    Mounted By:    whc01elastic-opendistro-es-data-0
    Events:
      Type    Reason                Age                     From                                                                                Message
      ----    ------                ----                    ----                                                                                -------
      Normal  ExternalProvisioning  4m39s (x401 over 104m)  persistentvolume-controller                                                         waiting for a volume to be created, either by external provisioner "local.csi.openebs.io" or manually created by system administrator
      Normal  Provisioning          91s (x29 over 105m)     local.csi.openebs.io_openebs-lvm-controller-0_33139b9f-e336-4a2b-a90a-33b6bb4a91c3  External provisioner is provisioning volume for claim "default/data-whc01elastic-opendistro-es-data-0"
    
    opened by joshuacox 14
  • openebs-lvm-plugin is busylooping on systems without dm-snapshot kernel module loaded

    openebs-lvm-plugin is busylooping on systems without dm-snapshot kernel module loaded

    What steps did you take and what happened:

    I tried to snapshot a volume created by lvm-localpv.

    I created a Default SnapshotClass Without SnapSize Parameter, and then a VolumeSnapshot resource:

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: lvm-localpv-snap
    spec:
      volumeSnapshotClassName: lvmpv-snapclass
      source:
        persistentVolumeClaimName: datadir-redpanda-0
    

    What did you expect to happen: Snapshot to be created.

    • kubectl logs -f openebs-lvm-controller-0 -n kube-system -c openebs-lvm-plugin
    I1017 09:26:48.057768       1 grpc.go:72] GRPC call: /csi.v1.Controller/CreateSnapshot requests {"name":"snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}
    I1017 09:26:48.057937       1 controller.go:572] CreateSnapshot volume snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 for pvc-d293d56b-45da-412d-ab62-fec20652e71b
    I1017 09:26:48.068293       1 grpc.go:81] GRPC response: {"snapshot":{"creation_time":{"seconds":1634462808},"snapshot_id":"[email protected]4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}}
    I1017 09:26:48.650574       1 grpc.go:72] GRPC call: /csi.v1.Identity/GetPluginInfo requests {}
    I1017 09:26:48.650691       1 grpc.go:81] GRPC response: {"name":"local.csi.openebs.io","vendor_version":"0.8.2"}
    I1017 09:26:48.651376       1 grpc.go:72] GRPC call: /csi.v1.Controller/CreateSnapshot requests {"name":"snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}
    I1017 09:26:48.651521       1 controller.go:572] CreateSnapshot volume snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 for pvc-d293d56b-45da-412d-ab62-fec20652e71b
    I1017 09:26:48.657290       1 grpc.go:81] GRPC response: {"snapshot":{"creation_time":{"seconds":1634462808},"snapshot_id":"[email protected]4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}}
    I1017 09:26:49.258316       1 grpc.go:72] GRPC call: /csi.v1.Identity/GetPluginInfo requests {}
    I1017 09:26:49.258430       1 grpc.go:81] GRPC response: {"name":"local.csi.openebs.io","vendor_version":"0.8.2"}
    I1017 09:26:49.259088       1 grpc.go:72] GRPC call: /csi.v1.Controller/CreateSnapshot requests {"name":"snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}
    I1017 09:26:49.259214       1 controller.go:572] CreateSnapshot volume snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 for pvc-d293d56b-45da-412d-ab62-fec20652e71b
    I1017 09:26:49.274993       1 grpc.go:81] GRPC response: {"snapshot":{"creation_time":{"seconds":1634462809},"snapshot_id":"[email protected]4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}}
    I1017 09:26:49.857480       1 grpc.go:72] GRPC call: /csi.v1.Identity/GetPluginInfo requests {}
    I1017 09:26:49.857635       1 grpc.go:81] GRPC response: {"name":"local.csi.openebs.io","vendor_version":"0.8.2"}
    I1017 09:26:49.858364       1 grpc.go:72] GRPC call: /csi.v1.Controller/CreateSnapshot requests {"name":"snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}
    I1017 09:26:49.858548       1 controller.go:572] CreateSnapshot volume snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 for pvc-d293d56b-45da-412d-ab62-fec20652e71b
    I1017 09:26:49.874360       1 grpc.go:81] GRPC response: {"snapshot":{"creation_time":{"seconds":1634462809},"snapshot_id":"[email protected]4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}}
    I1017 09:26:50.450447       1 grpc.go:72] GRPC call: /csi.v1.Identity/GetPluginInfo requests {}
    I1017 09:26:50.450576       1 grpc.go:81] GRPC response: {"name":"local.csi.openebs.io","vendor_version":"0.8.2"}
    I1017 09:26:50.451314       1 grpc.go:72] GRPC call: /csi.v1.Controller/CreateSnapshot requests {"name":"snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}
    I1017 09:26:50.451583       1 controller.go:572] CreateSnapshot volume snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 for pvc-d293d56b-45da-412d-ab62-fec20652e71b
    I1017 09:26:50.456889       1 grpc.go:81] GRPC response: {"snapshot":{"creation_time":{"seconds":1634462810},"snapshot_id":"[email protected]4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}}
    
    • kubectl logs -f openebs-lvm-node-[xxxx] -n kube-system -c openebs-lvm-plugin
    E1017 09:19:28.198591       1 snapshot.go:242] error syncing 'openebs/snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8': exit status 3, requeuing
    E1017 09:19:48.682737       1 lvm_util.go:501] lvm: could not create snapshot lvmvg/ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 cmd [--snapshot --name ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 --permission r /dev/lvmvg/pvc-d293d56b-45da-412d-ab62-fec20652e71b --size 107374182400b] error: modprobe: can't change directory to '/lib/modules': No such file or directory
      /sbin/modprobe failed: 1
      snapshot: Required device-mapper target(s) not detected in your kernel.
      Run `lvcreate --help' for more information.
    E1017 09:19:48.682764       1 snapshot.go:242] error syncing 'openebs/snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8': exit status 3, requeuing
    I1017 09:20:26.774560       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    E1017 09:20:29.646872       1 lvm_util.go:501] lvm: could not create snapshot lvmvg/ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 cmd [--snapshot --name ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 --permission r /dev/lvmvg/pvc-d293d56b-45da-412d-ab62-fec20652e71b --size 107374182400b] error: modprobe: can't change directory to '/lib/modules': No such file or directory
      /sbin/modprobe failed: 1
      snapshot: Required device-mapper target(s) not detected in your kernel.
      Run `lvcreate --help' for more information.
    E1017 09:20:29.646928       1 snapshot.go:242] error syncing 'openebs/snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8': exit status 3, requeuing
    I1017 09:21:26.746526       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    E1017 09:21:51.571130       1 lvm_util.go:501] lvm: could not create snapshot lvmvg/ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 cmd [--snapshot --name ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 --permission r /dev/lvmvg/pvc-d293d56b-45da-412d-ab62-fec20652e71b --size 107374182400b] error: modprobe: can't change directory to '/lib/modules': No such file or directory
      /sbin/modprobe failed: 1
      snapshot: Required device-mapper target(s) not detected in your kernel.
      Run `lvcreate --help' for more information.
    E1017 09:21:51.571168       1 snapshot.go:242] error syncing 'openebs/snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8': exit status 3, requeuing
    I1017 09:22:26.766557       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    I1017 09:23:26.746535       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    I1017 09:24:26.762518       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    E1017 09:24:35.415262       1 lvm_util.go:501] lvm: could not create snapshot lvmvg/ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 cmd [--snapshot --name ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 --permission r /dev/lvmvg/pvc-d293d56b-45da-412d-ab62-fec20652e71b --size 107374182400b] error: modprobe: can't change directory to '/lib/modules': No such file or directory
      /sbin/modprobe failed: 1
      snapshot: Required device-mapper target(s) not detected in your kernel.
      Run `lvcreate --help' for more information.
    E1017 09:24:35.415304       1 snapshot.go:242] error syncing 'openebs/snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8': exit status 3, requeuing
    I1017 09:25:26.754542       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    I1017 09:26:26.770526       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    I1017 09:27:26.778556       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    
    • kubectl get pods -n kube-system
    NAME                                           READY   STATUS    RESTARTS   AGE
    metrics-server-86cbb8457f-sgxtn                1/1     Running   0          31d
    local-path-provisioner-5ff76fc89d-h5hr5        1/1     Running   0          31d
    kube-vip-ds-cp-hmkdq                           1/1     Running   0          31d
    coredns-7448499f4d-6f76n                       1/1     Running   0          31d
    kube-vip-ds-svc-l4ncf                          1/1     Running   0          3d
    kube-vip-ds-svc-qqtlj                          1/1     Running   0          3d
    kube-vip-ds-svc-vlpcs                          1/1     Running   0          3d
    kube-vip-ds-svc-szsb4                          1/1     Running   0          3d
    openebs-lvm-node-v8wpt                         2/2     Running   0          3d
    openebs-lvm-node-v8f6s                         2/2     Running   0          3d
    openebs-lvm-controller-0                       5/5     Running   0          3d
    openebs-lvm-node-rjtgl                         2/2     Running   0          3d
    kube-state-metrics-5f97897c99-b45mq            1/1     Running   0          3d
    openebs-lvm-node-5rh8l                         2/2     Running   0          3d
    kube-vip-ds-svc-kchn4                          1/1     Running   0          3d
    openebs-lvm-node-2jtn9                         2/2     Running   0          3d
    kube-vip-ds-svc-wp597                          1/1     Running   0          3d
    kube-vip-ds-svc-2qv26                          1/1     Running   0          3d
    openebs-lvm-node-2gb2m                         2/2     Running   0          3d
    openebs-lvm-node-mc5rj                         2/2     Running   0          3d
    cloud-provider-equinix-metal-7fb9654c9-2xzxc   1/1     Running   2          31d
    
    • kubectl get lvmvol -A -o yaml
    apiVersion: v1
    items:
    - apiVersion: local.openebs.io/v1alpha1
      kind: LVMVolume
      metadata:
        creationTimestamp: "2021-10-14T09:48:54Z"
        finalizers:
        - lvm.openebs.io/finalizer
        generation: 3
        labels:
          kubernetes.io/nodename: redpanda-2
        name: pvc-d293d56b-45da-412d-ab62-fec20652e71b
        namespace: openebs
        resourceVersion: "10188280"
        uid: 0f25c600-c69e-459e-8274-b7ed70081f0b
      spec:
        capacity: "107374182400"
        ownerNodeID: redpanda-2
        shared: "no"
        thinProvision: "no"
        vgPattern: ^lvmvg$
        volGroup: lvmvg
      status:
        state: Ready
    kind: List
    metadata:
      resourceVersion: ""
      selfLink: ""
    

    Anything else you would like to add:

    • The openebs-lvm-controller pod is busylooping a lot. It probably shouldn't retry multiple times per second.
    • The logs of the openebs-lvm-node suggest there might be a problem with some missing kernel module: lvm_util.go:501] lvm: could not create snapshot lvmvg/ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 cmd [--snapshot --name ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 --permission r /dev/lvmvg/pvc-d293d56b-45da-412d-ab62-fec20652e71b --size 107374182400b] error: modprobe: can't change directory to '/lib/modules': No such file or directory. Maybe this folder doesn't exist in the pod, it doesn't exist on the host, it's not mounted into the pod, or we need another kernel module on the host?

    If it's some missing kernel feature, this might just need a bit more documentation and some more graceful error handling.
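
    If the dm-snapshot module simply is not loaded on the host, a minimal sketch of the usual remedy is shown below (assuming a systemd-based node; the modules-load.d file name is illustrative):

    sudo modprobe dm-snapshot                                          # load the snapshot target now
    echo dm-snapshot | sudo tee /etc/modules-load.d/dm-snapshot.conf   # load it again after reboot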

    Environment:

    • LVM Driver version: 0.8.2
    • Kubernetes version (use kubectl version): v1.21.2+k3s1
    • Kubernetes installer & version: k3s
    • Cloud provider or hardware configuration: Equinix Metal
    • OS (e.g. from /etc/os-release): Ubuntu 20.04.3 LTS
    enhancement project/community 
    opened by flokli 12
  • Snapshots are not created

    Snapshots are not created

    What steps did you take and what happened: Followed this guide: https://github.com/openebs/lvm-localpv/blob/develop/docs/snapshot.md

    What did you expect to happen: Snapshot should be created

    The output of the following commands will help us better understand what's going on:

    E0501 15:06:50.777462 1 volume.go:270] Get snapshot failed err: lvmsnapshots.local.openebs.io "snapshot-44c1fe7d-f202-4af5-b56e-69081360e95e" not found
    E0501 15:06:50.780672 1 grpc.go:79] GRPC error: rpc error: code = Internal desc = failed to handle CreateSnapshotRequest for pvc-01f43051-83af-409f-922a-9a3653351aad: snapshot-44c1fe7d-f202-4af5-b56e-69081360e95e, {LVMSnapshot.local.openebs.io "snapshot-44c1fe7d-f202-4af5-b56e-69081360e95e" is invalid: [spec.capacity: Required value, spec.vgPattern: Required value]}

    Anything else you would like to add:

    I can create snapshots manually without any problems: lvcreate --size 1G --snapshot --name my-name /dev/tank/pvc-01f43051-83af-409f-922a-9a3653351aad. In the log from the controller I see this name for the snapshot: snapshot-44c1fe7d-f202-4af5-b56e-69081360e95e. When trying to manually create this snapshot I get this error: Names starting "snapshot" are reserved. Please choose a different LV name. Could this be the issue? How do I fix that?

    I have checked all the logs but I cannot find any useful information about why creating the snapshot fails.

    opened by benn0r 10
  • miss csistoragecapacity object.

    miss csistoragecapacity object.

    Describe the problem/challenge you have: I did not see any CSIStorageCapacity objects in kube-system when enabling storageCapacity in the CSIDriver.
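
    For reference, capacity tracking is switched on via the storageCapacity field of the CSIDriver object (the csi-provisioner sidecar typically also needs capacity publishing enabled); a minimal sketch of the object being described, with the other spec fields shipped in lvm-operator.yaml omitted:

    apiVersion: storage.k8s.io/v1
    kind: CSIDriver
    metadata:
      name: local.csi.openebs.io
    spec:
      storageCapacity: true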

    Environment:

    • LVM Driver version: 0.8.0
    • Kubernetes version (use kubectl version): 1.21.3
    • Kubernetes installer & version: kubeadm
    • other:
      kubectl get pods -n kube-system -l role=openebs-lvm
      NAME                       READY   STATUS    RESTARTS   AGE
      openebs-lvm-controller-0   5/5     Running   0          99m
      openebs-lvm-node-2txsf     2/2     Running   0          99m
      openebs-lvm-node-4mjxh     2/2     Running   0          99m
      openebs-lvm-node-kr5jb     2/2     Running   0          99m
    opened by taozix 9
  • CSIDriver is missing fsGroupPolicy

    CSIDriver is missing fsGroupPolicy

    What steps did you take and what happened: Created a new volume with fsGroup specified on pod spec securityContext but permissions were root:root 0755

    What did you expect to happen: Filesystem mounted with group set to GID specified in fsGroup.

    Anything else you would like to add:

    The CSIDriver is missing fsGroupPolicy: File
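
    For context, a minimal sketch of the CSIDriver object with the field being requested (the other spec fields shipped by the operator are omitted):

    apiVersion: storage.k8s.io/v1
    kind: CSIDriver
    metadata:
      name: local.csi.openebs.io
    spec:
      fsGroupPolicy: File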

    Environment:

    • lvm-localpv version: 0.4.0
    opened by mazocode 9
  • lvm-operator.yaml in release doesn't pin container version

    lvm-operator.yaml in release doesn't pin container version

    What steps did you take and what happened: I deployed according to the readme, which directs me to kubectl apply -f deploy/lvm-operator.yaml.

    What did you expect to happen: I checked out a specific tag/release (0.8.5 in that case). I expected the yaml file to pin that version.

    However, https://github.com/openebs/lvm-localpv/blob/lvm-localpv-0.8.5/deploy/lvm-operator.yaml#L1258 uses the ci image tag.

    I'd expect a release to pin the version used explicitly, or the release artifacts to include a rendered lvm-operator.yaml file for that specific release.

    opened by flokli 8
  • Fix(scheduler): use SpaceWeighted as the default scheduler

    Fix(scheduler): use SpaceWeighted as the default scheduler

    Pull Request template

    Please, go through these steps before you submit a PR.

    Why is this PR required? What issue does it fix?: Fix issue https://github.com/openebs/lvm-localpv/issues/188

    What this PR does?: Use SpaceWeighted as the default scheduler

    Does this PR require any upgrade changes?: No

    If the changes in this PR are manually verified, list down the scenarios covered:: 1. Install OpenEBS LVM. 2. Create an LVM StorageClass (using two K8s nodes for LVM provisioning). 3. Create a PVC using the StorageClass above to provision an LVM volume, and verify that SpaceWeighted is used when scheduling the LVM volume.

    Any additional information for your reviewer? : Mention if this PR is part of any design or a continuation of previous PRs

    Checklist:

    • [ ] Fixes #
    • [ ] PR Title follows the convention of <type>(<scope>): <subject>
    • [ ] Has the change log section been updated?
    • [ ] Commit has unit tests
    • [ ] Commit has integration tests
    • [ ] (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
    • [ ] (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them:

    PLEASE REMOVE BELOW INFORMATION BEFORE SUBMITTING

    The PR title message must follow convention: <type>(<scope>): <subject>.

    Where:

    • type is defining if release will be triggering after merging submitted changes, details in CONTRIBUTING.md. Most common types are:

      • feat - for new features, not a new feature for build script
      • fix - for bug fixes or improvements, not a fix for build script
      • chore - changes not related to production code
      • docs - changes related to documentation
      • style - formatting, missing semi colons, linting fix etc; no significant production code changes
      • test - adding missing tests, refactoring tests; no production code change
      • refactor - refactoring production code, eg. renaming a variable or function name, there should not be any significant production code changes
    • scope is a single word that best describes where the changes fit. Most common scopes are like:

      • data engine (localpv, jiva, cstor)
      • feature (provisioning, backup, restore, exporter)
      • code component (api, webhook, cast, upgrade)
      • test (tests, bdd)
      • chores (version, build, log, travis)
    • subject is a single line brief description of the changes made in the pull request.

    opened by nkwangleiGIT 6
  • When custom node Labels has been deleted, LVM LocalPV CSI driver will not schedule the PV to the node

    When custom node Labels has been deleted, LVM LocalPV CSI driver will not schedule the PV to the node

    What steps did you take and what happened:

    • Currently, all custom node labels are registered when the LVM-LocalPV driver restarts, and the CSINode topologyKeys keep all of those keys even if the node labels have changed since. When a custom node label has been deleted, the LVM LocalPV CSI driver will not schedule the PV to the node, and restarting the LVM-LocalPV driver daemon set becomes a hard requirement.

    What did you expect to happen:

    • That is not a reasonable way to register the topologyKeys, because it cannot anticipate labels being deleted.

    The output of the following commands will help us better understand what's going on:

    # kubectl logs -n kube-system openebs-lvm-controller-0  -c csi-provisioner
    
    E0619 07:43:46.123145       1 controller.go:984] error syncing claim "3ad3265d-8479-4485-a7ec-977b585afa19": failed to provision volume with StorageClass "local-lvm": error generating accessibility requirements: topology labels from selected node map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux node-role.kubernetes.io/master: node-role.kubernetes.io/node: openebs.io/nodename:node1] does not match topology keys from CSINode [beta.kubernetes.io/arch beta.kubernetes.io/os **test** kubernetes.io/arch kubernetes.io/hostname node-role.kubernetes.io/master node-role.kubernetes.io/node openebs.io/nodename]
    
    # kubectl get csinode node1 -o yaml
    ......
        topologyKeys:
        - beta.kubernetes.io/arch
        - beta.kubernetes.io/os
        - test
        - kubernetes.io/arch
        - kubernetes.io/hostname
        - kubernetes.io/os
        - node-role.kubernetes.io/master
        - node-role.kubernetes.io/node
        - openebs.io/nodename
    

    Describe the solution you'd like: Would it be better to register topologyKeys from an env variable rather than from all the custom node labels? When you need to change the topologyKeys, just modify the env and restart the LVM-LocalPV driver.

    project/community 
    opened by fangfenghuang 6
  • feat(provisioning): add support for multiple vg to use for provisioning

    feat(provisioning): add support for multiple vg to use for provisioning

    Why is this PR required? What issue does it fix?: See #17 for more details.

    What this PR does?: Since the volgroup param now represents a regex, the controller passes it to the node plugin by setting an additional field called VolGroupRegex on the lvmvolume resource. The node plugin controller then chooses a vg matching the provided regex and sets the VolGroup field on the lvmvolume. If multiple vgs match the regex, the node plugin controller chooses the one with the minimum free space (bin packing) that can still accommodate the volume capacity.

    Does this PR require any upgrade changes?:

    If the changes in this PR are manually verified, list down the scenarios covered:: Consider a k8s cluster having 4 nodes with vgs [lvmvg-a, lvmvg-b, lvmvg-a, xyzvg]. Configure the storage class by setting the volgroup parameter to lvmvg* (a regex denoting the lvmvg prefix). After creating a stateful set of size 4, we'll see each PVC gets scheduled on the first 3 nodes (since xyzvg doesn't match the volgroup regex); see the sketch below.
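
    For illustration, the storage class used in that scenario would look roughly like this (the name is illustrative; volgroup carries the regex described above):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-lvmpv
    parameters:
      storage: "lvm"
      volgroup: "lvmvg*"
    provisioner: local.csi.openebs.io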

    Any additional information for your reviewer? : Mention if this PR is part of any design or a continuation of previous PRs This pull request is dependent on #21. So, we need to close that first before closing this.

    Checklist:

    • [x] Fixes #17
    • [x] PR Title follows the convention of <type>(<scope>): <subject>
    • [ ] Has the change log section been updated?
    • [ ] Commit has unit tests
    • [ ] Commit has integration tests
    • [ ] (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
    • [ ] (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them:
    pr/community 
    opened by iyashu 6
  • feat(snapshot): add snapshot support for LVM PV

    feat(snapshot): add snapshot support for LVM PV

    Why is this PR required? What issue does it fix?: This PR adds support for LVM snapshots to the lvm-localPV CSI driver. The snapshots created will be read-only (as opposed to the default ReadWrite). Also, once snapshots are created for a volume, resize will not work for that volume, since LVM doesn't support that.

    To create a snapshot, create a snapshot class as given below and then create a volumesnapshot resource

    kind: VolumeSnapshotClass
    apiVersion: snapshot.storage.k8s.io/v1
    metadata:
      name: lvm-localpv-snapclass
      annotations:
        snapshot.storage.kubernetes.io/is-default-class: "true"
    driver: local.csi.openebs.io
    deletionPolicy: Delete
    ---
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: lvm-localpv-snap
    spec:
      volumeSnapshotClassName: lvm-localpv-snapclass
      source:
        persistentVolumeClaimName: <pvc-name>
    

    What this PR does?:

    • adds LVMSnapshot CRDs
    • add snapshot controller to watch for LVMSnapshot CRs
    • adds the volumesnapshot related CRDs from storage.k8s.io to the deployment
    • use container images from k8s.gcr.io for CSI components

    Limitation: Volumes with snapshots cannot be resized, as LVM does not support online resize of origin volumes with a snapshot. ControllerExpandVolume will error out if the volume to be resized has any active snapshots.

    Does this PR require any upgrade changes?: dm-snapshot kernel module should be loaded for snapshot to work

    If the changes in this PR are manually verified, list down the scenarios covered::

    1. Snapshot creation
    2. Try to resize volume with snapshot (will error out the volume expansion)
    3. resize should work after snapshots are removed
    4. create multiple snapshots for the same volume

    Any additional information for your reviewer? : Mention if this PR is part of any design or a continuation of previous PRs

    Checklist:

    • [x] Fixes #10
    • [x] PR Title follows the convention of <type>(<scope>): <subject>
    • [x] Has the change log section been updated?
    • [ ] Commit has unit tests
    • [ ] Commit has integration tests
    • [ ] (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
    • [ ] (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them:
    opened by akhilerm 6
  • fix(data engine): moving pkg/config into pkg/driver/config (#8)

    fix(data engine): moving pkg/config into pkg/driver/config (#8)

    Signed-off-by: Oussama Salahouelhadj [email protected]

    moving pkg/config into pkg/driver/config

    • relinking import statement to the new config package path "pkg/driver/config" in the following files:
      • cmd/main.go
      • pkg/driver/driver.go
      • pkg/lvm/iolimiter.go
      • pkg/lvm/iolimiter_test.go

    Why is this PR required? What issue does it fix?: this PR is related to this issue: #8

    What this PR does?: changes the config package path from ...pkg/config to ...pkg/driver/config and fixes/relinks imports to the new path.

    Checklist:

    • [x] Fixes #8
    • [x] PR Title follows the convention of <type>(<scope>): <subject>
    • [ ] Has the change log section been updated?
    • [x] Commit has unit tests
    • [x] Commit has integration tests
    hacktoberfest 
    opened by OussamaSALAHOUELHADJ 5
  • Need to support LVM striping

    Need to support LVM striping

    Describe the problem/challenge you have: LVM supports striping for better I/O performance, using striped logical volumes when there are multiple disks, so supporting this in lvm-localpv would be a useful feature.

    Describe the solution you'd like: The storage class adds parameters regarding:

    1. whether to enable striping
    2. the stripe count (see the lvcreate sketch below)

    Anything else you would like to add: This function also needs modification.
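
    For context, this is the kind of striped LV that plain LVM can already create and that such storage class parameters would map to (-i is the stripe count, -I the stripe size in KiB; the LV name is illustrative):

    sudo lvcreate --size 10G -i 2 -I 64 --name striped-lv lvmvg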

    opened by npu21 0
  • Requesting an update on the FOSSA licensing issue

    Requesting an update on the FOSSA licensing issue

    The GitHub page mentions that there are 22 issues found with the licenses for this repo.

    Is anyone looking at resolving these problems? Many organisations would be wary of adopting this codebase as-is.

    opened by michaeloreillyintel 2
  • CRDs are not installed through Helm while setting crd.enableInstall to true

    CRDs are not installed through Helm while setting crd.enableInstall to true

    What steps did you take and what happened: I tried to install lvm-localpv either through this chart v1.0.0 or through the openebs umbrella chart 3.3.0. The value crd.enableInstall is enabled by default for lvm-localpv, but the CRDs are not installed.
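
    For reference, the kind of Helm install being described looks roughly like this (the repo URL and chart name are assumptions; the release name is illustrative):

    helm repo add openebs-lvmlocalpv https://openebs.github.io/lvm-localpv
    helm install lvm-localpv openebs-lvmlocalpv/lvm-localpv -n openebs --create-namespace \
      --set crd.enableInstall=true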

    What did you expect to happen: CRDs to be installed

    The output of the following commands will help us better understand what's going on:

    Anything else you would like to add: the value crd.enableInstall seems deprecated to me and should be removed from the default values.

    Environment:

    • LVM Driver version: 1.0.0
    • Kubernetes version: v1.20.15
    • OS (e.g. from /etc/os-release): ubuntu 18.04
    opened by abuisine 0
  • feat(lvm-localpv): make the installation of snapshot.storage.k8s.io CRDs optional

    feat(lvm-localpv): make the installation of snapshot.storage.k8s.io CRDs optional

    Why is this PR required? What issue does it fix?: change for #203

    What this PR does?: This PR makes it possible to disable the installation of the volumesnapshot CRDs. To prevent any regressions, the default has been set to true. These resources are usually maintained and managed by the underlying Kubernetes distribution. In the case of OpenShift, the platform will automatically revert the modifications made by this helm chart, which could be problematic if you use ArgoCD with autosync enabled.

    Checklist:

    • [x] Fixes #203
    • [x] PR Title follows the convention of <type>(<scope>): <subject>
    • [ ] Has the change log section been updated?
    • [ ] Commit has unit tests
    • [ ] Commit has integration tests
    • [ ] (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
    • [ ] (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them:
    opened by phhutter 0
  • make the installation of snapshot.storage.k8s.io CRDs optional

    make the installation of snapshot.storage.k8s.io CRDs optional

    Hi,

    Could we make the installation of the following CRDs optional?

    https://github.com/openebs/lvm-localpv/blob/develop/deploy/helm/charts/crds/volumesnapshotclasses.yaml https://github.com/openebs/lvm-localpv/blob/develop/deploy/helm/charts/crds/volumesnapshotcontents.yaml https://github.com/openebs/lvm-localpv/blob/develop/deploy/helm/charts/crds/volumesnapshots.yaml

    Is there a reason why you have included them in the OpenEBS helm chart? These resources are usually maintained and managed by the underlying Kubernetes distribution. In the case of OpenShift, the platform will automatically revert the modifications made by your helm chart, which could be problematic if you use ArgoCD with autosync enabled.

    opened by phhutter 0
  • PodSecurityPolicy is deprecated since k8s 1.21

    PodSecurityPolicy is deprecated since k8s 1.21

    Describe the problem/challenge you have The PodSecurityPolicy is deprecated since 1.21 and will be removed in Kubernetes 1.25.

    src: https://kubernetes.io/docs/concepts/security/pod-security-policy/

    Solution: use SecurityContextConstraints instead.

    opened by phhutter 0