Carina: a high-performance and ops-free local storage system for Kubernetes

Overview


English | 中文

Background

Storage systems are complex! More and more Kubernetes-native storage systems are appearing, and stateful applications such as modern databases and middleware are moving into the cloud-native world. However, modern databases and their storage providers each try to solve some common problems in their own way; for example, both handle data replication and consistency. This wastes a great deal of capacity and performance and adds maintenance effort. Besides that, stateful applications strive to be ever more performant, eliminating every possible source of latency, which is unavoidable with modern distributed storage systems. Enter Carina.

Carina is a standard Kubernetes CSI plugin. Users can request storage media with standard Kubernetes storage resources such as StorageClass/PVC/PV. Its key features include:

  • Completely Kubernetes native and easy to install.
  • Uses local disks and partitions them into groups based on disk type, so users can provision different types of disks through different storage classes (see the StorageClass sketch after this list).
  • Scans physical disks and builds RAID as required. If a disk fails, just plug in a new one and it's done.
  • Node capacity and performance aware, so pods are scheduled more intelligently.
  • Extremely low overhead. Carina sits beside the core data path and provides raw disk performance to applications.
  • Auto-tiering. Admins can configure Carina to combine large-capacity-but-low-performance disks with small-capacity-but-high-performance disks in one storage class, so users benefit from both capacity and performance.
  • If a node fails, Carina automatically detaches the local volume from its pods so the pods can be rescheduled.
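
The sketch below illustrates the per-disk-type StorageClass model described above. It is a minimal sketch modelled on the StorageClass that appears in the issue reports further down this page; the metadata name and the disk-type value are placeholders and must match a disk group defined in your Carina configuration.

    # Minimal StorageClass sketch (assumed names; disk-type must match a configured disk group)
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-carina-ssd
    provisioner: carina.storage.io
    parameters:
      # e.g. "carina-vg-ssd" or "vg_hdd", depending on your diskSelector configuration
      carina.storage.io/disk-type: "carina-vg-ssd"
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    # WaitForFirstConsumer delays PV creation until a pod using the PVC is scheduled
    volumeBindingMode: WaitForFirstConsumer

A PVC that names this StorageClass is then provisioned from the matching disk group on the node chosen by the scheduler.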

Running Environments

  • Kubernetes: >= 1.18 (lowest verified version)
  • Node OS: Linux
  • Filesystems: ext4, xfs

Carina architecture

Carina is built for cloud-native stateful applications, delivering raw disk performance with ops-free maintenance. Carina scans local disks and classifies them by disk type; for example, one node may have 10 HDDs and 2 SSDs. Carina then groups them into different disk pools, and users request a given disk type through the corresponding storage class. For data HA, Carina currently leverages STORCLI to build RAID groups.

[Carina architecture diagram]
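
The disk grouping described above is driven by configuration rather than code changes. The following is a sketch adapted from the carina-csi-config ConfigMap shown in the issue reports further down this page; the group names and the `re` device patterns are illustrative and should be adjusted to the disks actually present on your nodes.

    # Sketch of the carina-csi-config ConfigMap (illustrative group names and device patterns)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: carina-csi-config
      namespace: kube-system
      labels:
        class: carina
    data:
      config.json: |-
        {
          "diskSelector": [
            {
              "name": "carina-vg-ssd",
              "re": ["/dev/sdb", "/dev/sdc"],
              "policy": "LVM",
              "nodeLabel": "kubernetes.io/hostname"
            },
            {
              "name": "carina-vg-hdd",
              "re": ["/dev/sdd", "/dev/sde"],
              "policy": "LVM",
              "nodeLabel": "kubernetes.io/hostname"
            }
          ],
          "diskScanInterval": "300",
          "schedulerStrategy": "spreadout"
        }

Each diskSelector entry becomes a disk pool that a StorageClass can target through its disk-type parameter.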

Carina components

It has three components: carina-scheduler, carina-controller, and carina-node.

  • carina-scheduler is a Kubernetes scheduler plugin that sorts nodes based on the requested PV size, each node's free disk space, and node IO performance stats. By default, carina-scheduler supports the binpack and spreadout policies.
  • carina-controller is the control plane of Carina. It watches PVC resources and maintains the internal LogicVolume objects.
  • carina-node is an agent that runs on each node. It manages local disks using LVM.

Features

Quickstart

  • Install
$ cd deploy/kubernetes
# install
$ ./deploy.sh
# uninstall
$ ./deploy.sh uninstall
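
A quick way to smoke-test the installation is to create a PVC against a Carina StorageClass and reference it from a pod. Below is a minimal sketch, assuming a StorageClass named csi-carina-sc already exists (the same names are used in the issue reports further down this page):

    # Minimal PVC sketch for smoke-testing the install (assumes the csi-carina-sc StorageClass)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-carina-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: csi-carina-sc

Once a pod references this claim (see the nginx Deployment in the first issue below), the volume should appear at the pod's mountPath inside the container.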

Contribution Guide

Typical storage providers

|  | NFS/NAS | SAN | Ceph | Carina |
| --- | --- | --- | --- | --- |
| typical usage | general storage | high-performance block device | extreme scalability | high-performance block device for cloud-native applications |
| filesystem | yes | yes | yes | yes |
| filesystem type | NFS | driver specific | ext4/xfs | ext4/xfs |
| block | no | yes | yes | yes |
| bandwidth | standard | standard | high | high |
| IOPS | standard | high | standard | high |
| latency | standard | low | standard | low |
| CSI support | yes | yes | yes | yes |
| snapshot | no | driver specific | yes | not yet, coming soon |
| clone | no | driver specific | yes | not yet, coming soon |
| quota | no | yes | yes | yes |
| resizing | yes | driver specific | yes | yes |
| data HA | RAID or NAS appliance | yes | yes | RAID |
| ease of maintenance | driver specific | multiple drivers for multiple SANs | high maintenance effort | ops-free |
| budget | high for NAS | high | high | low, using the extra disks in the existing Kubernetes cluster |
| others | data migrates with pods | data migrates with pods | data migrates with pods | data doesn't migrate with pods; in-place rebuild if a pod fails |

FAQ

Similar projects

License

Carina is under the Apache 2.0 license. See the LICENSE file for details.

Issues
  • The PV is not mounted into the container

    The PV is not mounted into the container

    Kubernetes version 1.16. After the controller deployment reported errors, the sidecar csi-provisioner was downgraded to v1.6.1. carina-scheduler is not used; the default scheduler binds the node and the Carina test demo is run. Problem: after the Deployment is created successfully and I enter the container, /var/lib/www/html is not mounted.

    Deployment file

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: carina-deployment
      namespace: carina
      labels:
        app: web-server
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: web-server
      template:
        metadata:
          annotations:
            cni: macvlan
          labels:
            app: web-server
        spec:
          nodeSelector:
            kubernetes.io/hostname: 172.23.36.5
          containers:
            - name: web-server
              image: nginx:latest
              imagePullPolicy: "IfNotPresent"
              volumeMounts:
                - name: mypvc
                  mountPath: /var/lib/www/html
          volumes:
            - name: mypvc
              persistentVolumeClaim:
                claimName: csi-carina-pvc
                readOnly: false
    

    PVs, LVs, and StorageClasses in the cluster: [screenshot]

    kubelet log: kubelet.log; carina-node log: carina-node.log

    Commands run inside the carina-node container

    [email protected]:~ # docker exec -it 0e8b504934d8 bash
    [[email protected] /]# df -h
    Filesystem                                                   Size  Used Avail Use% Mounted on
    overlay                                                      893G   26G  867G   3% /
    udev                                                          63G     0   63G   0% /dev
    shm                                                           64M     0   64M   0% /dev/shm
    /dev/sdb                                                     893G   26G  867G   3% /csi
    tmpfs                                                         13G  1.4G   12G  11% /run/mount
    /dev/sda1                                                     47G  7.5G   40G  16% /var/log/carina
    tmpfs                                                         63G     0   63G   0% /sys/fs/cgroup
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/c6a9e39c-8f82-4738-9629-f4a662fd88bc/volumes/kubernetes.io~secret/default-token-zstfk
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/dfde4bea-cd81-4ec0-ae1b-3d0960fb6cc1/volumes/kubernetes.io~secret/default-token-zstfk
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/86944153-3c9b-11ec-8561-049fca30d189/volumes/kubernetes.io~secret/default-token-gfsdw
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/4f66c584-3e17-11ec-8561-049fca30d189/volumes/kubernetes.io~secret/default-token-dj45d
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/89e4ef0e-58b0-4d09-81aa-21019b58f8b4/volumes/kubernetes.io~secret/volcano-controllers-token-5k4kn
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/4aed7639-44a3-4db2-9d43-153fba35028a/volumes/kubernetes.io~secret/kruise-daemon-token-khwgc
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/4d638e6a-9fa8-4ec7-a010-69798c89bcc4/volumes/kubernetes.io~secret/kubecost-kube-state-metrics-token-czcv2
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/98b87ba2-cf4e-45b0-8962-65a4707a5791/volumes/kubernetes.io~secret/kubecost-prometheus-node-exporter-token-4xgxv
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/ddc94974-7a4e-4d0d-ae7f-9d6af689dfcd/volumes/kubernetes.io~secret/default-token-zstfk
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/8f78b4fb-4092-4630-8b93-ca815561607f/volumes/kubernetes.io~secret/default-token-p54ss
    tmpfs                                                         63G     0   63G   0% /var/lib/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~empty-dir/socket-dir
    tmpfs                                                         63G  8.0K   63G   1% /var/lib/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~secret/certs
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~secret/carina-csi-controller-token-lcrp9
    tmpfs                                                         63G   12K   63G   1% /run/secrets/kubernetes.io/serviceaccount
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/9c9efec6-fdcc-4a18-aff5-dfb8979564be/volumes/kubernetes.io~secret/default-token-6lz94
    /dev/carina/volume-pvc-c01289bc-5188-436b-90fc-6d32cab20fc1  6.0G   33M  6.0G   1% /data/docker/kubelet/pods/9c9efec6-fdcc-4a18-aff5-dfb8979564be/volumes/kubernetes.io~csi/pvc-c01289bc-5188-436b-90fc-6d32cab20fc1/mount
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/d5ddd88d-8e11-4211-8d0f-c1982787d1d9/volumes/kubernetes.io~secret/default-token-zstfk
    
    [[email protected] /]# lvs
      LV                                              VG            Attr       LSize  Pool                                          Origin Data%  Meta%  Move Log Cpy%Sync Convert
      mylv                                            carina-vg-hdd -wi-a----- 10.00g
      thin-pvc-c01289bc-5188-436b-90fc-6d32cab20fc1   carina-vg-hdd twi-aotz--  6.00g                                                      99.47  47.85
      volume-pvc-c01289bc-5188-436b-90fc-6d32cab20fc1 carina-vg-hdd Vwi-aotz--  6.00g thin-pvc-c01289bc-5188-436b-90fc-6d32cab20fc1        99.47
    [[email protected] /]# pvs
      PV                  VG            Fmt  Attr PSize   PFree
      /dev/mapper/loop0p1 carina-vg-hdd lvm2 a--  <19.53g <19.52g
      /dev/mapper/loop1p1 carina-vg-hdd lvm2 a--   24.41g   8.40g
      /dev/mapper/loop2p1 carina-vg-hdd lvm2 a--   29.29g  29.29g
    [[email protected] /]# vgs
      VG            #PV #LV #SN Attr   VSize  VFree
      carina-vg-hdd   3   3   0 wz--n- 73.23g 57.21g
    

    Inside the nginx container

    [email protected]:~ # docker exec -it b3a711b9349c bash
    [email protected]:/# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    overlay         893G   26G  867G   3% /
    tmpfs            64M     0   64M   0% /dev
    tmpfs            63G     0   63G   0% /sys/fs/cgroup
    /dev/sdb        893G   26G  867G   3% /etc/hosts
    shm              64M     0   64M   0% /dev/shm
    tmpfs            63G   12K   63G   1% /run/secrets/kubernetes.io/serviceaccount
    tmpfs            63G     0   63G   0% /proc/acpi
    tmpfs            63G     0   63G   0% /sys/firmware
    [email protected]:/#
    

    df -h on the host

    [email protected]:~ # df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev             63G     0   63G   0% /dev
    tmpfs            13G  1.4G   12G  11% /run
    /dev/sda1        47G  7.5G   40G  16% /
    tmpfs            63G  460K   63G   1% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs            63G     0   63G   0% /sys/fs/cgroup
    /dev/sda3       165G   32G  133G  20% /data
    /dev/sdb        893G   26G  867G   3% /data/docker
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/c6a9e39c-8f82-4738-9629-f4a662fd88bc/volumes/kubernetes.io~secret/default-token-zstfk
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/dfde4bea-cd81-4ec0-ae1b-3d0960fb6cc1/volumes/kubernetes.io~secret/default-token-zstfk
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/86944153-3c9b-11ec-8561-049fca30d189/volumes/kubernetes.io~secret/default-token-gfsdw
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/4f66c584-3e17-11ec-8561-049fca30d189/volumes/kubernetes.io~secret/default-token-dj45d
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/89e4ef0e-58b0-4d09-81aa-21019b58f8b4/volumes/kubernetes.io~secret/volcano-controllers-token-5k4kn
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/4aed7639-44a3-4db2-9d43-153fba35028a/volumes/kubernetes.io~secret/kruise-daemon-token-khwgc
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/4d638e6a-9fa8-4ec7-a010-69798c89bcc4/volumes/kubernetes.io~secret/kubecost-kube-state-metrics-token-czcv2
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/98b87ba2-cf4e-45b0-8962-65a4707a5791/volumes/kubernetes.io~secret/kubecost-prometheus-node-exporter-token-4xgxv
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/ddc94974-7a4e-4d0d-ae7f-9d6af689dfcd/volumes/kubernetes.io~secret/default-token-zstfk
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/8f78b4fb-4092-4630-8b93-ca815561607f/volumes/kubernetes.io~secret/default-token-p54ss
    tmpfs            63G     0   63G   0% /data/docker/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~empty-dir/socket-dir
    tmpfs            63G  8.0K   63G   1% /data/docker/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~secret/certs
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~secret/carina-csi-controller-token-lcrp9
    overlay         893G   26G  867G   3% /data/docker/overlay2/a216d4228c9c3c045c6e4855906444b74ff39a9ee23ec0fb9dd1882aacf2ebf0/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/526b8df3824b74887b7450dfab234ebf109c1bb7e500f866987ce9039269d3d0/merged
    shm              64M     0   64M   0% /data/docker/containers/caa29edd3a5b8e138e37bcf91a95adead757dbf460deb3f7b745a7dfc0c93de7/mounts/shm
    shm              64M     0   64M   0% /data/docker/containers/064c3b334433a68749cd477d1db14cbeb6104dbade3286ebcfc3ea633701233c/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/71e132c2b079223cbc62f4a88de131809c4b7f1611fbcc7719abc4bd46654c87/merged
    shm              64M     0   64M   0% /data/docker/containers/531d44a6048b2ce7e1d2f6a61604ecdecdb906f38ef90e47594665029a3583a7/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/19653af8958402eefff0a01b1f8a8c676bfefc9617207e6fe71eba3bda5d1d46/merged
    shm              64M     0   64M   0% /data/docker/containers/a86dd2b0d1e169680ec3cef8ba3a357b1ea2766d39a920290e9fdc3a6fca865e/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/544bda05f222b55d1126e5f51c1c7559f8db819ab02ea4eb633635d408353e84/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/8cc47db16916c53d3324ad4b8fd251036808602fbe837353c5f70e71efa4d2f4/merged
    shm              64M     0   64M   0% /data/docker/containers/1542e464b9ffa9488478962415ec61589aef02d02f7ceee381837c943772a4ef/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/667083a79276e053ab38db18d459596ebe89aea07bf72897e8cd3d9154f2cb0d/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/541d373f498fd0aae9732569dc9ceb3d5edbf395da34153ce31daca5a6637814/merged
    shm              64M     0   64M   0% /data/docker/containers/617dfa1afde0d1ca3a0dfe17ea96a27ec0ab8ee2536be344a0f31d5d17a76ae3/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/16fae5669dcb9f44aee19b42a340acacede6fdb41f610f178f71785a0bab1d6d/merged
    shm              64M     0   64M   0% /data/docker/containers/4d7b8c7cb079752cd1c2cfcf5ac3d55997696273fc957e286481b923add98b69/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/6e4bd0e7003ffc089171f35c347c7e35f5b39e3c81c48740e09caf2f838f6e0b/merged
    shm              64M     0   64M   0% /data/docker/containers/385987b7f5071e0119c4e1cd67cff21a48898be2252e2fe063102ec10cee42fc/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/703de9e6fa8465ab8cc79c7aac019c9e8cb5bf031352b483d92c4061f6afe64b/merged
    shm              64M     0   64M   0% /data/docker/containers/78c97664414f17c3d2a4b3b3192681793da4fb47e45f4e192761d30a710ac78d/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/7a2f7ac15692e724c2da318427ddacc11badd6dee13bc58eac51aa349ac0c1da/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/69d4eaf83bc1c0fde95f0fbfdaaf6293b8166ffec78a86b0f287ef3bd9793b47/merged
    shm              64M     0   64M   0% /data/docker/containers/c328c93dfcedad046375a6d5c7ae61c159b4a1ccbfabd6cf84ede72fc3af5b80/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/2b66361e05449c67b58666579f7bc763012ed1722599cfcc853adeb91b6eeffe/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/98a9216bf26b3f6fb4e01205e69c6a61fa3946c0e0d4a2ee3cd0166e66921bb5/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/55848bcb70e3f7d69e033ff0279848a1dde960a64e37d738d9dbe7899d6c34e2/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/42e90451be6a7ec4dc9345764e8079d3beee8b205402e90d7db09fa02a260f34/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/fb7a968851ca0f2832fbc7d515c5676ffeb52ba8b63d874c46ef29d43f763d82/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/f404fdff1634045c92e58ea95536fbd7427b295881e4a969c94af608e734aa15/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/3e80823a44e5d21f1138528f30a5e7df33af63e8f6b35706a7ae392fecc59db6/merged
    tmpfs            13G     0   13G   0% /run/user/0
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/5cd34f3e-8f5d-402e-ac45-5129ccc89dea/volumes/kubernetes.io~secret/carina-csi-node-token-mr2fk
    overlay         893G   26G  867G   3% /data/docker/overlay2/d7e31342404d08d5fd4676d41ec7aaaf3d9ee5d8f98c1376cad972613c93a0ac/merged
    shm              64M     0   64M   0% /data/docker/containers/939cdcc03f8e7b986dbe981eaa895de4d25adc0021a5e81cd144b9438adb85f3/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/506e0d518ad983e10a29a2aed73707bdea0f40f70c85408fe5a326ed1e87220b/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/3b5df922c0ce360e132b56c70407fe3c49b991c6bf277af05a06a3533ee985a5/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/5125665eab4d1ed3b046f766588d83576c20e36dd32984520b5a0f852e407d3f/merged
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/9c9efec6-fdcc-4a18-aff5-dfb8979564be/volumes/kubernetes.io~secret/default-token-6lz94
    overlay         893G   26G  867G   3% /data/docker/overlay2/02245acd4ae110d14e805b69ce6fb589d391f9faee669a7659224a6c74c9b30d/merged
    shm              64M     0   64M   0% /data/docker/containers/d6299df3d906b1495e81dc09ba54ea05cac467e4b5f87ae2f8edc8e09b31fe65/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/467f745acd8f320de388690fa330bebf9601570cc199326bde64ba2dd16f0b52/merged
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/5525ffff-228f-403c-8eb3-9fa3764f6779/volumes/kubernetes.io~secret/default-token-zstfk
    overlay         893G   26G  867G   3% /data/docker/overlay2/fce3315104b4a463a8eeba2c57d418e59d82425bdf935dc44c7af9fd4dc7a017/merged
    shm              64M     0   64M   0% /data/docker/containers/ab3ed3e62ad99a7bb4b62312757e7c527f3385e30bb270c80048d164c205a967/mounts/shm
    
    [email protected]:~ # fdisk -l
    Disk /dev/sdb: 893.1 GiB, 958999298048 bytes, 1873045504 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    
    
    Disk /dev/sda: 222.6 GiB, 238999830528 bytes, 466796544 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: dos
    Disk identifier: 0x0f794366
    
    Device     Boot     Start       End   Sectors   Size Id Type
    /dev/sda1  *         2048  97656831  97654784  46.6G 83 Linux
    /dev/sda2        97656832 121094143  23437312  11.2G 82 Linux swap / Solaris
    /dev/sda3       121094144 466794495 345700352 164.9G 83 Linux
    
    
    Disk /dev/loop0: 19.5 GiB, 20971520000 bytes, 40960000 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/loop1: 24.4 GiB, 26214400000 bytes, 51200000 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/loop2: 29.3 GiB, 31457280000 bytes, 61440000 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/mapper/carina--vg--hdd-volume--pvc--c01289bc--5188--436b--90fc--6d32cab20fc1: 6 GiB, 6442450944 bytes, 12582912 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 65536 bytes / 65536 bytes
    
    
    Disk /dev/mapper/carina--vg--hdd-mylv: 10 GiB, 10737418240 bytes, 20971520 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    opened by WulixuanS 9
  • The failover feature is very good, but it has some problems

    The failover feature is very good, but it has some problems

    Our current logic force-deletes the pod first and then deletes the PVC. This logic has two potential problems.

    First problem: after the pod is deleted, the workload controller immediately creates a new pod, which enters the scheduling flow. If the PVC has not been deleted yet, the PV affinity constraint leaves that pod in the Pending state, and because the PVC is in use by the pod, it cannot be deleted successfully, so the migration fails. Therefore, the new pod should not be affected by the previous PVC.

    Second problem: after the PVC is deleted, Carina keeps trying to create a new PVC, without considering scenarios like the PVC template of a StatefulSet (after the pod is deleted, the workload automatically creates the PVC). Therefore, before creating a new PVC, it should first check whether a healthy PVC already exists.

    For the first problem there are two possible improvements: 1) patch the PVC first to remove the pvc-protection finalizer, then delete the PVC, and then create the new PVC; 2) delete the PVC first, then delete the pod, after which the PVC is cleaned up and reclaimed, and finally create the new PVC.

    For the second problem, before creating a new PVC, first check whether a healthy PVC already exists.

    good first issue 
    opened by fanhaouu 5
  • The node has enough resources, but scheduling fails

    The node has enough resources, but scheduling fails

    What happened: pod scheduling failed

    storageclass.yaml

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-carina-sc
    provisioner: carina.storage.io
    parameters:
      carina.storage.io/disk-type: "vg_hdd"
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    # Immediate would create the PV right after the PVC is created; WaitForFirstConsumer creates the PV only after the consuming pod is scheduled
    volumeBindingMode: WaitForFirstConsumer
    mountOptions:
    

    Resource information registered on the node

    allocatable:
        carina.storage.io/vg_hdd: "117"
        carina.storage.io/vg_ssd: "117"
        cpu: 7600m
        ephemeral-storage: 196460520Ki
        hugepages-1Gi: "0"
        hugepages-2Mi: "0"
        memory: "31225464193"
        mixer.io/ext-cpu: "6237"
        mixer.io/ext-memory: "0"
        mixer.kubernetes.io/ext-cpu: "5837"
        mixer.kubernetes.io/ext-memory: "9958435896"
        pods: "62"
      capacity:
        carina.storage.io/vg_hdd: "128"
        carina.storage.io/vg_ssd: "128"
        cpu: "8"
        ephemeral-storage: 196460520Ki
        hugepages-1Gi: "0"
        hugepages-2Mi: "0"
        memory: 32637492Ki
        mixer.io/ext-cpu: "6237"
        mixer.io/ext-memory: "0"
        mixer.kubernetes.io/ext-cpu: "5837"
        mixer.kubernetes.io/ext-memory: "9958435896"
        pods: "62"
    
    

    CSI ConfigMap configuration. I don't need auto-discovery here, so no match patterns are specified.

    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: carina-csi-config
      namespace: kube-system
      labels:
        class: carina
    data:
      config.json: |-
        {
          "diskSelector": [
            {
              "name": "vg_ssd" ,
              "policy": "LVM",
              "nodeLabel": "kubernetes.io/hostname"
            },
            {
              "name": "vg_hdd",
              "policy": "LVM",
              "nodeLabel": "kubernetes.io/hostname"
            }
          ],
          "diskScanInterval": "300",
          "schedulerStrategy": "spreadout"
        }
    

    Scheduler log

    I0222 06:32:04.145738       1 storage-plugins.go:69] filter pod: carina-deployment-b6785745d-29ghc, node: tj1-kubekey-test07.kscn
    I0222 06:32:04.145771       1 storage-plugins.go:130] mismatch pod: carina-deployment-b6785745d-29ghc, node: tj1-kubekey-test07.kscn, request: 1, capacity: 0
    

    It is certain that the node has enough resources.

    storageclass What you expected to happen: the pod is scheduled successfully. How to reproduce it:

    Anything else we need to know?:

    Environment:

    • CSI Driver version: 0.9.1
    • Kubernetes version (use kubectl version):
    • OS (e.g. from /etc/os-release):
    • Kernel (e.g. uname -a):
    • Install tools:
    • Others:
    opened by zwForrest 3
  • refactor nodestorageresource update logic and modify manager.go structure

    refactor nodestorageresource update logic and modify manager.go structure

    What type of PR is this?

    /kind bug /kind cleanup /kind design /kind feature

    What this PR does / why we need it:

    1. Receive timely feedback on changes to LVM and disks; it is no longer necessary to watch PV and NSR.
    2. Resolve NodeStorageResource status being inconsistent with the VG/disk situation.
    3. Remove unnecessary files.
    4. Modify manager.go to make its structure clearer.

    Which issue(s) this PR fixes:

    Fixes #87

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    none
    
    opened by fanhaouu 2
  • nodestorageresource's status is inconsistent with vg/disk situation

    nodestorageresource's status is inconsistent with vg/disk situation

    What happened: nodestorageresource's status is inconsistent with vg/disk situation

    What you expected to happen: nodestorageresource's status should be consistent with vg/disk situation

    How to reproduce it: 1. In the ConfigMap, add the new devices loop5-7

    {
      "diskSelector": [
        {
          "name": "carina-vg-ssd" ,
          "re": ["/dev/vdb","/dev/vdd","loop0+","loop1+","loop3+","loop4+","loop5+","loop6+"],
          "policy": "LVM",
          "nodeLabel": "kubernetes.io/hostname"
        },
        {
          "name": "carina-raw" ,
          "re": ["/dev/vdc","loop2+","loop7+"],
          "policy": "RAW",
          "nodeLabel": "kubernetes.io/hostname"
        }
      ],
      "diskScanInterval": "300",
      "schedulerStrategy": "spreadout"
    }
    

    2、waiting about two minutes, then add new device loop5~7 [[email protected] opt]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT vda 253:0 0 40G 0 disk └─vda1 253:1 0 40G 0 part / vdb 253:16 0 10G 0 disk ├─carina--vg--ssd-thin--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1_tdata 252:1 0 3G 0 lvm
    │ └─carina--vg--ssd-thin--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1-tpool 252:2 0 3G 0 lvm
    │ ├─carina--vg--ssd-thin--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1 252:3 0 3G 1 lvm
    │ └─carina--vg--ssd-volume--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1 252:4 0 3G 0 lvm /var/lib/kubelet/pods/8de7eb1a-53e3- └─carina--vg--ssd-thin--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51_tdata 252:6 0 3G 0 lvm
    └─carina--vg--ssd-thin--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51-tpool 252:7 0 3G 0 lvm
    ├─carina--vg--ssd-thin--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51 252:8 0 3G 1 lvm
    └─carina--vg--ssd-volume--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51 252:9 0 3G 0 lvm /var/lib/kubelet/pods/cfb07e2b-be66- vdc 253:32 0 11G 0 disk └─vdc2 253:34 0 5G 0 part vdd 253:48 0 11G 0 disk loop0 7:0 0 5G 0 loop loop1 7:1 0 15G 0 loop ├─carina--vg--ssd-thin--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1_tmeta 252:0 0 4M 0 lvm
    │ └─carina--vg--ssd-thin--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1-tpool 252:2 0 3G 0 lvm
    │ ├─carina--vg--ssd-thin--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1 252:3 0 3G 1 lvm
    │ └─carina--vg--ssd-volume--pvc--36c6a41d--9712--45a6--9a0d--329bd576beb1 252:4 0 3G 0 lvm /var/lib/kubelet/pods/8de7eb1a-53e3- └─carina--vg--ssd-thin--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51_tmeta 252:5 0 4M 0 lvm
    └─carina--vg--ssd-thin--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51-tpool 252:7 0 3G 0 lvm
    ├─carina--vg--ssd-thin--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51 252:8 0 3G 1 lvm
    └─carina--vg--ssd-volume--pvc--17888c0f--5a11--4384--b97d--b54d46cf4b51 252:9 0 3G 0 lvm /var/lib/kubelet/pods/cfb07e2b-be66- loop2 7:2 0 15G 0 loop loop3 7:3 0 15G 0 loop loop4 7:4 0 15G 0 loop loop5 7:5 0 15G 0 loop loop6 7:6 0 15G 0 loop loop7 7:7 0 15G 0 loop

    3、wait, exec vgs and pvs, devicemanager works fine [[email protected] opt]# vgs VG #PV #LV #SN Attr VSize VFree carina-vg-ssd 7 4 0 wz--n- 95.97g 89.96g

    [[email protected] opt]# pvs PV VG Fmt Attr PSize PFree
    /dev/loop1 carina-vg-ssd lvm2 a-- <15.00g <14.99g /dev/loop3 carina-vg-ssd lvm2 a-- <15.00g <15.00g /dev/loop4 carina-vg-ssd lvm2 a-- <15.00g <15.00g /dev/loop5 carina-vg-ssd lvm2 a-- <15.00g <15.00g /dev/loop6 carina-vg-ssd lvm2 a-- <15.00g <15.00g /dev/vdb carina-vg-ssd lvm2 a-- <10.00g 3.99g /dev/vdd carina-vg-ssd lvm2 a-- <11.00g <11.00g

    4、nodestorageresource's status doesn't have loop5-7 device disks:

    • name: vdc partitions: "2": last: 10738466815 name: carina.io/c352dfc41cdd number: 2 start: 5369757696 path: /dev/vdc sectorSize: 512 size: 11811160064 udevInfo: name: vdc properties: DEVNAME: /dev/vdc DEVPATH: /devices/pci0000:00/0000:00:06.0/virtio3/block/vdc DEVTYPE: disk MAJOR: "253" MINOR: "32" SUBSYSTEM: block sysPath: /devices/pci0000:00/0000:00:06.0/virtio3/block/vdc
    • name: loop2 path: /dev/loop2 sectorSize: 512 size: 16106127360 udevInfo: name: loop2 properties: DEVNAME: /dev/loop2 DEVPATH: /devices/virtual/block/loop2 DEVTYPE: disk MAJOR: "7" MINOR: "2" SUBSYSTEM: block sysPath: /devices/virtual/block/loop2 syncTime: "2022-07-22T09:16:44Z" vgGroups:
    • lvCount: 4 pvCount: 5 pvName: /dev/loop4 pvs:
      • pvAttr: a-- pvFmt: lvm2 pvFree: 4286578688 pvName: /dev/vdb pvSize: 10733223936 vgName: carina-vg-ssd
      • pvAttr: a-- pvFmt: lvm2 pvFree: 11806965760 pvName: /dev/vdd pvSize: 11806965760 vgName: carina-vg-ssd
      • pvAttr: a-- pvFmt: lvm2 pvFree: 16093544448 pvName: /dev/loop1 pvSize: 16101933056 vgName: carina-vg-ssd
      • pvAttr: a-- pvFmt: lvm2 pvFree: 16101933056 pvName: /dev/loop3 pvSize: 16101933056 vgName: carina-vg-ssd
      • pvAttr: a-- pvFmt: lvm2 pvFree: 16101933056 pvName: /dev/loop4 pvSize: 16101933056 vgName: carina-vg-ssd vgAttr: wz--n- vgFree: 64390955008 vgName: carina-vg-ssd vgSize: 70845988864
    bug 
    opened by fanhaouu 2
  • Refactor check create pvc

    Refactor check create pvc

    Check whether the PVC exists before recreating it

    What type of PR is this?

    /kind bug

    What this PR does / why we need it: reduce the number of times the PVC is recreated

    Which issue(s) this PR fixes: Fixes #77

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    none
    
    opened by fanhaouu 2
  • http://www.opencarina.io/ is down

    http://www.opencarina.io/ is down

    What happened:

    What you expected to happen:

    How to reproduce it:

    Anything else we need to know?:

    Environment:

    • CSI Driver version:
    • Kubernetes version (use kubectl version):
    • OS (e.g. from /etc/os-release):
    • Kernel (e.g. uname -a):
    • Install tools:
    • Others:
    opened by lxd5866 2
  • Question about IO throttling

    Question about IO throttling

    I noticed that the io-throttling code does not write the /sys/fs/cgroup/blkio/tasks file. In my tests on cgroup v1, direct-IO throttling only takes effect after this file is set. What was the reasoning for not setting this file in the project?

    https://github.com/carina-io/carina/blob/185cf9bc7e7b7299270df1370fb9190e5a2585d3/controllers/pod_controller.go#L38-L46

    opened by TheBeatles1994 2
  • csi-carina-provisioner cannot be deployed successfully

    csi-carina-provisioner cannot be deployed successfully

    Deployed Carina following the official documentation; the related component status is as follows (Kubernetes version: 1.19.14):

    > kubectl get pods -n kube-system |grep carina
    carina-scheduler-c5bc859d4-5ncl5         1/1     Running             13         71m
    csi-carina-node-rhxwc                    2/2     Running             1          71m
    csi-carina-node-z4kpc                    2/2     Running             0          71m
    csi-carina-provisioner-b54f4b965-m96vn   0/4     ContainerCreating   0          55m
    csi-carina-provisioner-b54f4b965-pr7bc   0/4     Evicted             0          71m
    

    发现csi-carina-provisioner组件没有成功部署,分别查看两个问题pod:

    1. First, the evicted pod:
    > kubectl describe pod csi-carina-provisioner-b54f4b965-pr7bc -n kube-system
    Name:           csi-carina-provisioner-b54f4b965-pr7bc
    Namespace:      kube-system
    Priority:       0
    ...... # other information omitted
    Events:
      Type     Reason               Age                 From               Message
      ----     ------               ----                ----               -------
      Normal   Scheduled            72m                 default-scheduler  Successfully assigned kube-system/csi-carina-provisioner-b54f4b965-pr7bc to u20-m1
      Warning  FailedMount          70m                 kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[carina-csi-controller-token-6fb75 config certs socket-dir]: timed out waiting for the condition
      Warning  FailedMount          65m                 kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[certs socket-dir carina-csi-controller-token-6fb75 config]: timed out waiting for the condition
      Warning  FailedMount          64m (x12 over 72m)  kubelet            MountVolume.SetUp failed for volume "certs" : secret "mutatingwebhook" not found
      Warning  Evicted              64m                 kubelet            The node was low on resource: ephemeral-storage.
      Warning  ExceededGracePeriod  64m                 kubelet            Container runtime did not kill the pod within specified grace period.
      Warning  FailedMount          63m (x2 over 68m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[socket-dir carina-csi-controller-token-6fb75 config certs]: timed out waiting for the condition
      Warning  FailedMount          68s (x31 over 62m)  kubelet            MountVolume.SetUp failed for volume "certs" : object "kube-system"/"mutatingwebhook" not registered
    
    2. Next, the pod stuck in ContainerCreating:
    > kubectl describe po  csi-carina-provisioner-b54f4b965-m96vn -n kube-system
    Name:           csi-carina-provisioner-b54f4b965-m96vn
    Namespace:      kube-system
    Priority:       0
    ...... # other information omitted
    Events:
      Type     Reason       Age                 From               Message
      ----     ------       ----                ----               -------
      Normal   Scheduled    56m                 default-scheduler  Successfully assigned kube-system/csi-carina-provisioner-b54f4b965-m96vn to u20-w1
      Warning  FailedMount  43m (x2 over 54m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[carina-csi-controller-token-6fb75 config certs socket-dir]: timed out waiting for the condition
      Warning  FailedMount  40m (x2 over 45m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[config certs socket-dir carina-csi-controller-token-6fb75]: timed out waiting for the condition
      Warning  FailedMount  20m (x7 over 52m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[socket-dir carina-csi-controller-token-6fb75 config certs]: timed out waiting for the condition
      Warning  FailedMount  18m (x3 over 47m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[certs], unattached volumes=[certs socket-dir carina-csi-controller-token-6fb75 config]: timed out waiting for the condition
      Warning  FailedMount  85s (x35 over 56m)  kubelet            MountVolume.SetUp failed for volume "certs" : secret "mutatingwebhook" not found
    

    Comparing the events of the two pods, they share the same error message:

    MountVolume.SetUp failed for volume "certs" : secret "mutatingwebhook" not found

    I hope to deploy Carina successfully and try it out. Please assist, thank you.

    opened by nevermosby 2
  • modify carina-csi-config validate logic

    modify carina-csi-config validate logic

    What type of PR is this?

    /kind bug /kind cleanup

    What this PR does / why we need it:

    1. Remove the legacy parameter ConfigProvider.
    2. Adjust the scope of some parameters and methods.
    3. When carina-csi-config validation fails while the node server is running, the server currently exits; it is better to ignore the invalid change at runtime. When the node server is starting, it should exit if carina-csi-config validation fails.

    Which issue(s) this PR fixes:

    Fixes #88

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    none
    
    opened by fanhaouu 1
  • refactor rebuild pvc

    refactor rebuild pvc

    What type of PR is this?

    /kind bug

    What this PR does / why we need it: 1) Delete the unused local variable cacheNodeName in node_controller.go. 2) Whether the PV is bound should be used to indirectly judge whether the PVC exists; the global variable cacheNoDeleteLv then has little value, and since there is no mechanism to release its entries, the application would in theory eventually OOM. 3) When the PV reclaim policy is Retain, the PV and LogicVolume are kept, and the newly created PVC would be deleted by mistake.

    Which issue(s) this PR fixes:

    Fixes #81

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    none
    
    opened by fanhaouu 1
  • failed to update logicvolume's status in the first time

    failed to update logicvolume's status in the first time

    What happened: 2022-07-28T20:45:53.638+0800 error controllers/logicvolume_controller.go:98 Operation cannot be fulfilled on logicvolumes.carina.storage.io "pvc-4e7ac7b2-49a1-4511-93c9-ed6f4c321a37": the object has been modified; please apply your changes to the latest version and try again failed to create LV name pvc-4e7ac7b2-49a1-4511-93c9-ed6f4c321a37

    What you expected to happen: the logicvolume status is updated successfully without relying on retry logic

    How to reproduce it: create a PV and you will see it

    opened by fanhaouu 1
  • throw some exception when kubelet terminate csi node container

    throw some exception when kubelet terminate csi node container

    What happened: {"level":"info","ts":1659103499.6545281,"msg":"Stopping and waiting for non leader election runnables"} {"level":"info","ts":1659103529.6545928,"msg":"Stopping and waiting for leader election runnables"} {"level":"info","ts":1659103529.6546292,"msg":"Stopping and waiting for caches"} {"level":"info","ts":1659103529.654654,"msg":"Stopping and waiting for webhooks"} {"level":"info","ts":1659103529.6546695,"msg":"Wait completed, proceeding to shutdown the manager"} {"level":"error","ts":1659103529.654631,"logger":"setup","msg":"problem running manager","error":"failed waiting for all runnables to end within grace period of 30s: context deadline exceeded","stacktrace":"github.com/carina-io/carina/cmd/carina-node/run.subMain\n\t/workspace/github.com/carina-io/carina/cmd/carina-node/run/run.go:163\ngithub.com/carina-io/carina/cmd/carina-node/run.glob..func1\n\t/workspace/github.com/carina-io/carina/cmd/carina-node/run/root.go:48\ngithub.com/spf13/cobra.(*Command).execute\n\t/workspace/github.com/carina-io/carina/vendor/github.com/spf13/cobra/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/workspace/github.com/carina-io/carina/vendor/github.com/spf13/cobra/command.go:974\ngithub.com/spf13/cobra.(*Command).Execute\n\t/workspace/github.com/carina-io/carina/vendor/github.com/spf13/cobra/command.go:902\ngithub.com/carina-io/carina/cmd/carina-node/run.Execute\n\t/workspace/github.com/carina-io/carina/cmd/carina-node/run/root.go:55\nmain.main\n\t/workspace/github.com/carina-io/carina/cmd/carina-node/main.go:29\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:255"} {"level":"info","ts":1659103529.654709,"logger":"controller.nodestorageresource","msg":"Shutdown signal received, waiting for all workers to finish","reconciler group":"carina.storage.io","reconciler kind":"NodeStorageResource"} panic: close of closed channel

    goroutine 1 [running]: github.com/carina-io/carina/cmd/carina-node/run.subMain() /workspace/github.com/carina-io/carina/cmd/carina-node/run/run.go:165 +0xb88 github.com/carina-io/carina/cmd/carina-node/run.glob..func1(0x28b9440, {0x1920d0d, 0x3, 0x3}) /workspace/github.com/carina-io/carina/cmd/carina-node/run/root.go:48 +0x1e github.com/spf13/cobra.(*Command).execute(0x28b9440, {0xc000138050, 0x3, 0x3}) /workspace/github.com/carina-io/carina/vendor/github.com/spf13/cobra/command.go:856 +0x60e github.com/spf13/cobra.(*Command).ExecuteC(0x28b9440) /workspace/github.com/carina-io/carina/vendor/github.com/spf13/cobra/command.go:974 +0x3bc github.com/spf13/cobra.(*Command).Execute(...) /workspace/github.com/carina-io/carina/vendor/github.com/spf13/cobra/command.go:902 github.com/carina-io/carina/cmd/carina-node/run.Execute() /workspace/github.com/carina-io/carina/cmd/carina-node/run/root.go:55 +0x25 main.main() /workspace/github.com/carina-io/carina/cmd/carina-node/main.go:29 +0x1c

    What you expected to happen: no exceptions

    How to reproduce it: kubectl delete csinode-pod -n carina

    opened by fanhaouu 1
  • Carina depends on 3 components with known vulnerabilities

    Carina depends on 3 components with known vulnerabilities

    What happened: Carina depends on 3 components with known vulnerabilities:

    | Component | Current version | Minimum fixed version | Dependency type | Vulnerabilities |
    | -- | -- | -- | -- | -- |
    | github.com/dgrijalva/jwt-go | v3.2.0+incompatible | 4.0.0-preview1 | direct dependency | 1 |
    | github.com/satori/go.uuid | v1.2.0 |  | direct dependency | 1 |
    | github.com/miekg/dns | v1.0.14 | 1.1.25 | direct dependency | 1 |

    Environment: version v0.10.0

    opened by wongearl 1
  • About device registration

    About device registration

    Is your feature request related to a problem? / Why is this needed: In early Carina releases, disks were registered as devices on the Node, but this was removed after Carina v0.10.0, with the NodeStorageResource CRD replacing it.

    Describe the solution you'd like in detail: Some partners in the community reported that carina-scheduler could not be used because their clusters run multiple schedulers. So it would be good to keep the ability to register disk devices on the nodes, so that the scheduling functions of the default scheduler can still be used.

    Describe alternatives you've considered

    So we have several options: 1) device registration co-exists with the NodeStorageResource CRD; 2) start a separate webhook-based scheduling plug-in project.

    Additional context

    good first issue 
    opened by antmoveh 1
  • bcache creation logic

    bcache creation logic

    How is bcache created? I could not find this logic in the code. bcache cannot be created successfully in my test environment.

    Version 0.10, deployed with Helm, with bcache enabled at deployment time.

    [[email protected] ~]# lsmod | grep bcache
    bcache                274432  0
    crc64                  16384  1 bcache
    

    carina-node error output

     Create with no support type  failed to create LV name pvc-5b074f0d-c0ff-46b5-b0b5-7c658e4980d4
    {"level":"error","ts":1654150765.6952772,"logger":"controller.logicvolume","msg":"Reconciler error","reconciler group":"carina.storage.io","reconciler kind":"LogicVolume","name":"pvc-5b074f0d-c0ff-46b5-b0b5-7c658e4980d4","namespace":"default","error":"Create with no support type ","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/workspace/github.com/carina-io/carina/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/workspace/github.com/carina-io/carina/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227"}
    
    opened by jimorsm 1
Releases(v0.10.0)
  • v0.10.0(Apr 28, 2022)

    Brief description

    • Carina has entered the CNCF landscape and is applying to become a sandbox project. This version mainly adds raw disk support and adjusts the project structure

    Support functions

    • Removed csi.proto; upgraded to CSI_VERSION=1.5
    • Removed device registration, using the CRD resource NodeStorageResource instead
    • Added controllers that maintain NodeStorageResource
    • The scheduler supports fetching resources from NodeStorageResource
    • Upgraded go.mod to depend on Kubernetes 1.23
    • The webhook certificate is upgraded using a Job
    • Raw disk support
    • Storage volume backup is supported with Velero
    • More English documentation
    • Standardized fields in annotations

    Contributors

    @ZhangZhenhua @zhangkai8048 @antmoveh

    Image addresses

    • registry.cn-hangzhou.aliyuncs.com/antmoveh/carina:v0.10.0
    • registry.cn-hangzhou.aliyuncs.com/antmoveh/carina-scheduler:v0.10.0

    What's Changed

    • Feature/v0.9.1 node notready by @antmoveh in https://github.com/carina-io/carina/pull/43
    • make migrate pods test for mysql example by @zhangkai8048 in https://github.com/carina-io/carina/pull/45
    • change wx images by @zhangkai8048 in https://github.com/carina-io/carina/pull/46
    • change wx img by @zhangkai8048 in https://github.com/carina-io/carina/pull/48
    • V0.9.1 change example mysql.md by @zhangkai8048 in https://github.com/carina-io/carina/pull/49
    • Feature/v0.9.3 k8s certgen by @antmoveh in https://github.com/carina-io/carina/pull/50
    • Feature/v0.9.4 node crd by @antmoveh in https://github.com/carina-io/carina/pull/55
    • Feature/v0.9.4 crd scheduler by @antmoveh in https://github.com/carina-io/carina/pull/56
    • Feature/v0.9.5 resolve conflict by @antmoveh in https://github.com/carina-io/carina/pull/57
    • Feature/v0.9.4 test node by @antmoveh in https://github.com/carina-io/carina/pull/58
    • [docs] add english manuals by @ZhangZhenhua in https://github.com/carina-io/carina/pull/61
    • [docs] add blogs in readme by @ZhangZhenhua in https://github.com/carina-io/carina/pull/62
    • add CODE_OF_CONDUCT.md by @ZhangZhenhua in https://github.com/carina-io/carina/pull/64
    • [docs] mark RAID management as V1.0 by @ZhangZhenhua in https://github.com/carina-io/carina/pull/65
    • [docs] add CONTRIBUTING.md by @ZhangZhenhua in https://github.com/carina-io/carina/pull/66
    • Feature/batav0.10 raw by @zhangkai8048 in https://github.com/carina-io/carina/pull/67

    Full Changelog: https://github.com/carina-io/carina/compare/v0.9.1...v0.10.0

    Source code(tar.gz)
    Source code(zip)
    carina-csi-driver-v0.10.0.tgz(14.35 KB)
  • v0.9.1(Jan 18, 2022)

    Brief description
    • An LVM-based local storage project providing local storage for Kubernetes. This version has been validated in a number of test environments
    Support functions
    • Carina supports configuring existing local storage volumes
    • Container migration is supported when a node is damaged
    • Helm installation is supported
    • Multi-architecture images are supported: linux/amd64 and linux/arm64
    • Optimized the base image
    Contributors

    @antmoveh @zhangkai8048

    Image addresses
    • registry.cn-hangzhou.aliyuncs.com/antmoveh/carina:v0.9.1
    • registry.cn-hangzhou.aliyuncs.com/antmoveh/carina-scheduler:v0.9.1
    Source code(tar.gz)
    Source code(zip)
  • v0.9.0(Oct 12, 2021)

    Brief description
    • An LVM-based local storage project providing local storage for Kubernetes. This version has been validated in a number of test environments
    Support functions

    | functions | state |
    | --------------------------------- | ----- |
    | Dynamic pv | √ |
    | Local file storage | √ |
    | Local block storage | √ |
    | Storage capacity Limit | √ |
    | Expansion of storage Volume | √ |
    | Store the topology | √ |
    | Local Disk Management | √ |
    | Disk speed limit | √ |
    | Nodes are migrated | √ |
    | Scheduling based on disk capacity | √ |

    Verified version

    | kubernetes | Verified |
    | ------------------ | -------- |
    | kubernetes v1.18.x | √ |
    | kubernetes v1.19.x | √ |
    | kubernetes v1.20.x | √ |

    Contributors

    @antmoveh @ZhangZhenhua

    Image addresses
    • registry.cn-hangzhou.aliyuncs.com/antmoveh/carina:v0.9-20210804141609
    • registry.cn-hangzhou.aliyuncs.com/antmoveh/scheduler:v0.9-20211012111249
    Source code(tar.gz)
    Source code(zip)