Carina: a high-performance, ops-free local storage system for Kubernetes

Overview


Background

Storage systems are complex! More and more Kubernetes-native storage systems are appearing as stateful applications, such as modern databases and middleware, shift into the cloud-native world. However, modern databases and their storage providers each try to solve some common problems in their own way; for example, both handle data replication and consistency. This duplication wastes capacity and performance and demands extra maintenance effort. Besides that, stateful applications strive to be ever more performant, eliminating every possible source of latency, which is unavoidable with modern distributed storage systems. Enter Carina.

Carina is a standard Kubernetes CSI plugin. Users can use standard Kubernetes storage resources such as StorageClass/PVC/PV to request storage media. Its key features include:

  • Completely Kubernetes-native and easy to install.
  • Uses local disks and partitions them into groups by disk type, so users can provision different types of disks through different storage classes.
  • Scans physical disks and builds RAID as required. If a disk fails, just plug in a new one and it's done.
  • Node capacity and performance aware, so pods are scheduled more smartly.
  • Extremely low overhead. Carina sits beside the core data path and delivers raw disk performance to applications.
  • Auto-tiering. Admins can configure Carina to combine large-capacity-but-low-performance disks and small-capacity-but-high-performance disks into one storage class, so users benefit from both capacity and performance.
  • If a node fails, Carina automatically detaches the local volume from pods so the pods can be rescheduled.
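The per-disk-type grouping above maps onto ordinary StorageClass objects. Below is a minimal sketch of what such a class might look like; the provisioner name and the `carina.storage.io/disk-type` parameter key are assumptions modeled on published Carina examples, so verify them against the manifests shipped with your release:

```yaml
# Sketch of a StorageClass backed by Carina's HDD pool.
# The provisioner name and parameter keys are assumptions; check the
# examples shipped with your Carina release for the exact strings.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-carina-hdd
provisioner: carina.storage.io            # assumed CSI driver name
parameters:
  csi.storage.k8s.io/fstype: xfs          # ext4 and xfs are supported
  carina.storage.io/disk-type: hdd        # assumed key selecting the HDD group
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer   # defer binding until a pod is scheduled
```

With `WaitForFirstConsumer`, volume creation is deferred until a pod is scheduled, which lets the capacity-aware scheduler pick a node that actually has free space in the requested pool.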

Running Environments

  • Kubernetes: >= 1.18 (lowest verified version)
  • Node OS: Linux
  • Filesystems: ext4, xfs

Carina architecture

Carina is built for cloud-native stateful applications, delivering raw disk performance with ops-free maintenance. Carina scans local disks and classifies them by disk type; for example, one node might have 10 HDDs and 2 SSDs. Carina then groups them into different disk pools, and users request a specific disk type by using the corresponding storage class. For data HA, Carina currently leverages STORCLI to build RAID groups.
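Disk scanning and grouping are typically driven by node-side configuration. The fragment below is purely illustrative (the ConfigMap name and every key in it are assumptions, not the authoritative schema; check the manifests under deploy/kubernetes for the real one):

```yaml
# Hypothetical Carina node configuration fragment; all names and
# keys here are illustrative assumptions, not the official schema.
apiVersion: v1
kind: ConfigMap
metadata:
  name: carina-csi-config      # assumed name
  namespace: kube-system
data:
  config.json: |-
    {
      "diskSelector": ["loop+|vd+"],
      "diskScanInterval": "300",
      "diskGroupPolicy": "type",
      "schedulerStrategy": "spreadout"
    }
```

In such a scheme, `diskSelector` would pick which block devices Carina claims, `diskGroupPolicy: "type"` would group them into per-media pools (HDD vs. SSD), and `schedulerStrategy` would choose between spreading volumes across nodes or packing them.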

(architecture diagram: carina-arch)

Carina components

It has three components: carina-scheduler, carina-controller and carina-node.

  • carina-scheduler is a Kubernetes scheduler plugin that sorts nodes based on the requested PV size, the node's free disk space and the node's IO performance stats. By default, carina-scheduler supports binpack and spreadout policies.
  • carina-controller is the control plane of Carina; it watches PVC resources and maintains the internal LogicVolume objects.
  • carina-node is an agent running on each node. It manages local disks using LVM.
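Putting the components together: a user submits a PVC, carina-controller reconciles it into a LogicVolume, and carina-node creates the backing LVM volume on the node chosen by carina-scheduler. A minimal PVC sketch (the StorageClass name `csi-carina-hdd` is an assumed example, not one shipped by default):

```yaml
# Minimal PVC sketch; the StorageClass name is an assumption.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-carina-pvc
spec:
  accessModes:
    - ReadWriteOnce                  # local volumes bind to a single node
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-carina-hdd   # assumed StorageClass name
```

Once bound, the volume shows up on the node as an LVM logical volume managed by carina-node.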

Features

Quickstart

  • Install
$ cd deploy/kubernetes
# install
$ ./deploy.sh
# uninstall
$ ./deploy.sh uninstall

Contribution Guide

Typical storage providers

|  | NFS/NAS | SAN | Ceph | Carina |
| --- | --- | --- | --- | --- |
| typical usage | general storage | high performance block device | extreme scalability | high performance block device for cloud-native applications |
| filesystem | yes | yes | yes | yes |
| filesystem type | NFS | driver specific | ext4/xfs | ext4/xfs |
| block | no | yes | yes | yes |
| bandwidth | standard | standard | high | high |
| IOPS | standard | high | standard | high |
| latency | standard | low | standard | low |
| CSI support | yes | yes | yes | yes |
| snapshot | no | driver specific | yes | not yet, coming soon |
| clone | no | driver specific | yes | not yet, coming soon |
| quota | no | yes | yes | yes |
| resizing | yes | driver specific | yes | yes |
| data HA | RAID or NAS appliance | yes | yes | RAID |
| ease of maintenance | driver specific | multiple drivers for multiple SANs | high maintenance effort | ops-free |
| budget | high for NAS | high | high | low, uses the spare disks in an existing kubernetes cluster |
| others | data migrates with pods | data migrates with pods | data migrates with pods | data doesn't migrate with pods; in-place rebuild if a pod fails |

FAQ

Similar projects

License

Carina is under the Apache 2.0 license. See the LICENSE file for details.

Comments
  • PV is not mounted into the container

    Kubernetes version 1.16 is in use. After deploying the controller failed with errors, the sidecar csi-provisioner was downgraded to v1.6.1. carina-scheduler is not used; the default scheduler binds the node and runs the Carina test demo. Problem: after the Deployment is created successfully and I enter the container, /var/lib/www/html is not mounted.

    Deployment file:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: carina-deployment
      namespace: carina
      labels:
        app: web-server
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: web-server
      template:
        metadata:
          annotations:
            cni: macvlan
          labels:
            app: web-server
        spec:
          nodeSelector:
            kubernetes.io/hostname: 172.23.36.5
          containers:
            - name: web-server
              image: nginx:latest
              imagePullPolicy: "IfNotPresent"
              volumeMounts:
                - name: mypvc
                  mountPath: /var/lib/www/html
          volumes:
            - name: mypvc
              persistentVolumeClaim:
                claimName: csi-carina-pvc
                readOnly: false
    

    PVs, LVs and StorageClasses in the cluster: (screenshot)

    kubelet log: kubelet.log; carina-node log: carina-node.log

    Commands run inside the carina-node container:

    [email protected]:~ # docker exec -it 0e8b504934d8 bash
    [[email protected] /]# df -h
    Filesystem                                                   Size  Used Avail Use% Mounted on
    overlay                                                      893G   26G  867G   3% /
    udev                                                          63G     0   63G   0% /dev
    shm                                                           64M     0   64M   0% /dev/shm
    /dev/sdb                                                     893G   26G  867G   3% /csi
    tmpfs                                                         13G  1.4G   12G  11% /run/mount
    /dev/sda1                                                     47G  7.5G   40G  16% /var/log/carina
    tmpfs                                                         63G     0   63G   0% /sys/fs/cgroup
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/c6a9e39c-8f82-4738-9629-f4a662fd88bc/volumes/kubernetes.io~secret/default-token-zstfk
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/dfde4bea-cd81-4ec0-ae1b-3d0960fb6cc1/volumes/kubernetes.io~secret/default-token-zstfk
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/86944153-3c9b-11ec-8561-049fca30d189/volumes/kubernetes.io~secret/default-token-gfsdw
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/4f66c584-3e17-11ec-8561-049fca30d189/volumes/kubernetes.io~secret/default-token-dj45d
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/89e4ef0e-58b0-4d09-81aa-21019b58f8b4/volumes/kubernetes.io~secret/volcano-controllers-token-5k4kn
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/4aed7639-44a3-4db2-9d43-153fba35028a/volumes/kubernetes.io~secret/kruise-daemon-token-khwgc
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/4d638e6a-9fa8-4ec7-a010-69798c89bcc4/volumes/kubernetes.io~secret/kubecost-kube-state-metrics-token-czcv2
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/98b87ba2-cf4e-45b0-8962-65a4707a5791/volumes/kubernetes.io~secret/kubecost-prometheus-node-exporter-token-4xgxv
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/ddc94974-7a4e-4d0d-ae7f-9d6af689dfcd/volumes/kubernetes.io~secret/default-token-zstfk
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/8f78b4fb-4092-4630-8b93-ca815561607f/volumes/kubernetes.io~secret/default-token-p54ss
    tmpfs                                                         63G     0   63G   0% /var/lib/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~empty-dir/socket-dir
    tmpfs                                                         63G  8.0K   63G   1% /var/lib/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~secret/certs
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~secret/carina-csi-controller-token-lcrp9
    tmpfs                                                         63G   12K   63G   1% /run/secrets/kubernetes.io/serviceaccount
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/9c9efec6-fdcc-4a18-aff5-dfb8979564be/volumes/kubernetes.io~secret/default-token-6lz94
    /dev/carina/volume-pvc-c01289bc-5188-436b-90fc-6d32cab20fc1  6.0G   33M  6.0G   1% /data/docker/kubelet/pods/9c9efec6-fdcc-4a18-aff5-dfb8979564be/volumes/kubernetes.io~csi/pvc-c01289bc-5188-436b-90fc-6d32cab20fc1/mount
    tmpfs                                                         63G   12K   63G   1% /var/lib/kubelet/pods/d5ddd88d-8e11-4211-8d0f-c1982787d1d9/volumes/kubernetes.io~secret/default-token-zstfk
    
    [[email protected] /]# lvs
      LV                                              VG            Attr       LSize  Pool                                          Origin Data%  Meta%  Move Log Cpy%Sync Convert
      mylv                                            carina-vg-hdd -wi-a----- 10.00g
      thin-pvc-c01289bc-5188-436b-90fc-6d32cab20fc1   carina-vg-hdd twi-aotz--  6.00g                                                      99.47  47.85
      volume-pvc-c01289bc-5188-436b-90fc-6d32cab20fc1 carina-vg-hdd Vwi-aotz--  6.00g thin-pvc-c01289bc-5188-436b-90fc-6d32cab20fc1        99.47
    [[email protected] /]# pvs
      PV                  VG            Fmt  Attr PSize   PFree
      /dev/mapper/loop0p1 carina-vg-hdd lvm2 a--  <19.53g <19.52g
      /dev/mapper/loop1p1 carina-vg-hdd lvm2 a--   24.41g   8.40g
      /dev/mapper/loop2p1 carina-vg-hdd lvm2 a--   29.29g  29.29g
    [[email protected] /]# vgs
      VG            #PV #LV #SN Attr   VSize  VFree
      carina-vg-hdd   3   3   0 wz--n- 73.23g 57.21g
    

    Inside the nginx container:

    [email protected]:~ # docker exec -it b3a711b9349c bash
    [email protected]:/# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    overlay         893G   26G  867G   3% /
    tmpfs            64M     0   64M   0% /dev
    tmpfs            63G     0   63G   0% /sys/fs/cgroup
    /dev/sdb        893G   26G  867G   3% /etc/hosts
    shm              64M     0   64M   0% /dev/shm
    tmpfs            63G   12K   63G   1% /run/secrets/kubernetes.io/serviceaccount
    tmpfs            63G     0   63G   0% /proc/acpi
    tmpfs            63G     0   63G   0% /sys/firmware
    [email protected]:/#
    

    df -h on the host:

    [email protected]:~ # df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev             63G     0   63G   0% /dev
    tmpfs            13G  1.4G   12G  11% /run
    /dev/sda1        47G  7.5G   40G  16% /
    tmpfs            63G  460K   63G   1% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs            63G     0   63G   0% /sys/fs/cgroup
    /dev/sda3       165G   32G  133G  20% /data
    /dev/sdb        893G   26G  867G   3% /data/docker
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/c6a9e39c-8f82-4738-9629-f4a662fd88bc/volumes/kubernetes.io~secret/default-token-zstfk
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/dfde4bea-cd81-4ec0-ae1b-3d0960fb6cc1/volumes/kubernetes.io~secret/default-token-zstfk
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/86944153-3c9b-11ec-8561-049fca30d189/volumes/kubernetes.io~secret/default-token-gfsdw
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/4f66c584-3e17-11ec-8561-049fca30d189/volumes/kubernetes.io~secret/default-token-dj45d
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/89e4ef0e-58b0-4d09-81aa-21019b58f8b4/volumes/kubernetes.io~secret/volcano-controllers-token-5k4kn
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/4aed7639-44a3-4db2-9d43-153fba35028a/volumes/kubernetes.io~secret/kruise-daemon-token-khwgc
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/4d638e6a-9fa8-4ec7-a010-69798c89bcc4/volumes/kubernetes.io~secret/kubecost-kube-state-metrics-token-czcv2
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/98b87ba2-cf4e-45b0-8962-65a4707a5791/volumes/kubernetes.io~secret/kubecost-prometheus-node-exporter-token-4xgxv
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/ddc94974-7a4e-4d0d-ae7f-9d6af689dfcd/volumes/kubernetes.io~secret/default-token-zstfk
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/8f78b4fb-4092-4630-8b93-ca815561607f/volumes/kubernetes.io~secret/default-token-p54ss
    tmpfs            63G     0   63G   0% /data/docker/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~empty-dir/socket-dir
    tmpfs            63G  8.0K   63G   1% /data/docker/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~secret/certs
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/72f19bce-76f6-400e-8f3e-48fe56c28c78/volumes/kubernetes.io~secret/carina-csi-controller-token-lcrp9
    overlay         893G   26G  867G   3% /data/docker/overlay2/a216d4228c9c3c045c6e4855906444b74ff39a9ee23ec0fb9dd1882aacf2ebf0/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/526b8df3824b74887b7450dfab234ebf109c1bb7e500f866987ce9039269d3d0/merged
    shm              64M     0   64M   0% /data/docker/containers/caa29edd3a5b8e138e37bcf91a95adead757dbf460deb3f7b745a7dfc0c93de7/mounts/shm
    shm              64M     0   64M   0% /data/docker/containers/064c3b334433a68749cd477d1db14cbeb6104dbade3286ebcfc3ea633701233c/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/71e132c2b079223cbc62f4a88de131809c4b7f1611fbcc7719abc4bd46654c87/merged
    shm              64M     0   64M   0% /data/docker/containers/531d44a6048b2ce7e1d2f6a61604ecdecdb906f38ef90e47594665029a3583a7/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/19653af8958402eefff0a01b1f8a8c676bfefc9617207e6fe71eba3bda5d1d46/merged
    shm              64M     0   64M   0% /data/docker/containers/a86dd2b0d1e169680ec3cef8ba3a357b1ea2766d39a920290e9fdc3a6fca865e/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/544bda05f222b55d1126e5f51c1c7559f8db819ab02ea4eb633635d408353e84/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/8cc47db16916c53d3324ad4b8fd251036808602fbe837353c5f70e71efa4d2f4/merged
    shm              64M     0   64M   0% /data/docker/containers/1542e464b9ffa9488478962415ec61589aef02d02f7ceee381837c943772a4ef/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/667083a79276e053ab38db18d459596ebe89aea07bf72897e8cd3d9154f2cb0d/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/541d373f498fd0aae9732569dc9ceb3d5edbf395da34153ce31daca5a6637814/merged
    shm              64M     0   64M   0% /data/docker/containers/617dfa1afde0d1ca3a0dfe17ea96a27ec0ab8ee2536be344a0f31d5d17a76ae3/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/16fae5669dcb9f44aee19b42a340acacede6fdb41f610f178f71785a0bab1d6d/merged
    shm              64M     0   64M   0% /data/docker/containers/4d7b8c7cb079752cd1c2cfcf5ac3d55997696273fc957e286481b923add98b69/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/6e4bd0e7003ffc089171f35c347c7e35f5b39e3c81c48740e09caf2f838f6e0b/merged
    shm              64M     0   64M   0% /data/docker/containers/385987b7f5071e0119c4e1cd67cff21a48898be2252e2fe063102ec10cee42fc/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/703de9e6fa8465ab8cc79c7aac019c9e8cb5bf031352b483d92c4061f6afe64b/merged
    shm              64M     0   64M   0% /data/docker/containers/78c97664414f17c3d2a4b3b3192681793da4fb47e45f4e192761d30a710ac78d/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/7a2f7ac15692e724c2da318427ddacc11badd6dee13bc58eac51aa349ac0c1da/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/69d4eaf83bc1c0fde95f0fbfdaaf6293b8166ffec78a86b0f287ef3bd9793b47/merged
    shm              64M     0   64M   0% /data/docker/containers/c328c93dfcedad046375a6d5c7ae61c159b4a1ccbfabd6cf84ede72fc3af5b80/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/2b66361e05449c67b58666579f7bc763012ed1722599cfcc853adeb91b6eeffe/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/98a9216bf26b3f6fb4e01205e69c6a61fa3946c0e0d4a2ee3cd0166e66921bb5/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/55848bcb70e3f7d69e033ff0279848a1dde960a64e37d738d9dbe7899d6c34e2/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/42e90451be6a7ec4dc9345764e8079d3beee8b205402e90d7db09fa02a260f34/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/fb7a968851ca0f2832fbc7d515c5676ffeb52ba8b63d874c46ef29d43f763d82/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/f404fdff1634045c92e58ea95536fbd7427b295881e4a969c94af608e734aa15/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/3e80823a44e5d21f1138528f30a5e7df33af63e8f6b35706a7ae392fecc59db6/merged
    tmpfs            13G     0   13G   0% /run/user/0
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/5cd34f3e-8f5d-402e-ac45-5129ccc89dea/volumes/kubernetes.io~secret/carina-csi-node-token-mr2fk
    overlay         893G   26G  867G   3% /data/docker/overlay2/d7e31342404d08d5fd4676d41ec7aaaf3d9ee5d8f98c1376cad972613c93a0ac/merged
    shm              64M     0   64M   0% /data/docker/containers/939cdcc03f8e7b986dbe981eaa895de4d25adc0021a5e81cd144b9438adb85f3/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/506e0d518ad983e10a29a2aed73707bdea0f40f70c85408fe5a326ed1e87220b/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/3b5df922c0ce360e132b56c70407fe3c49b991c6bf277af05a06a3533ee985a5/merged
    overlay         893G   26G  867G   3% /data/docker/overlay2/5125665eab4d1ed3b046f766588d83576c20e36dd32984520b5a0f852e407d3f/merged
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/9c9efec6-fdcc-4a18-aff5-dfb8979564be/volumes/kubernetes.io~secret/default-token-6lz94
    overlay         893G   26G  867G   3% /data/docker/overlay2/02245acd4ae110d14e805b69ce6fb589d391f9faee669a7659224a6c74c9b30d/merged
    shm              64M     0   64M   0% /data/docker/containers/d6299df3d906b1495e81dc09ba54ea05cac467e4b5f87ae2f8edc8e09b31fe65/mounts/shm
    overlay         893G   26G  867G   3% /data/docker/overlay2/467f745acd8f320de388690fa330bebf9601570cc199326bde64ba2dd16f0b52/merged
    tmpfs            63G   12K   63G   1% /data/docker/kubelet/pods/5525ffff-228f-403c-8eb3-9fa3764f6779/volumes/kubernetes.io~secret/default-token-zstfk
    overlay         893G   26G  867G   3% /data/docker/overlay2/fce3315104b4a463a8eeba2c57d418e59d82425bdf935dc44c7af9fd4dc7a017/merged
    shm              64M     0   64M   0% /data/docker/containers/ab3ed3e62ad99a7bb4b62312757e7c527f3385e30bb270c80048d164c205a967/mounts/shm
    
    [email protected]:~ # fdisk -l
    Disk /dev/sdb: 893.1 GiB, 958999298048 bytes, 1873045504 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    
    
    Disk /dev/sda: 222.6 GiB, 238999830528 bytes, 466796544 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: dos
    Disk identifier: 0x0f794366
    
    Device     Boot     Start       End   Sectors   Size Id Type
    /dev/sda1  *         2048  97656831  97654784  46.6G 83 Linux
    /dev/sda2        97656832 121094143  23437312  11.2G 82 Linux swap / Solaris
    /dev/sda3       121094144 466794495 345700352 164.9G 83 Linux
    
    
    Disk /dev/loop0: 19.5 GiB, 20971520000 bytes, 40960000 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/loop1: 24.4 GiB, 26214400000 bytes, 51200000 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/loop2: 29.3 GiB, 31457280000 bytes, 61440000 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/mapper/carina--vg--hdd-volume--pvc--c01289bc--5188--436b--90fc--6d32cab20fc1: 6 GiB, 6442450944 bytes, 12582912 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 65536 bytes / 65536 bytes
    
    
    Disk /dev/mapper/carina--vg--hdd-mylv: 10 GiB, 10737418240 bytes, 20971520 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    opened by WulixuanS 9
  • Feature/v0.11 update script

    /kind bug /kind documentation

    1. Added the Carina v0.11 upgrade document
    2. Added the Carina v0.11 changelog
    3. Modified outdated Carina documents
    4. Fixed some description errors
    Updated the Carina documentation
    

    #110 #115

    approved size/L 
    opened by antmoveh 8
  • create deployment failed

    After running ./deploy.sh install, I created a Deployment with the following YAML:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-deployment
      labels:
        app: test-web-server
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: test-web-server
      template:
        metadata:
          labels:
            app: test-web-server
        spec:
          containers:
            - name: test-web-server
              image: nginx:latest
              imagePullPolicy: "IfNotPresent"
    

    Creating the ReplicaSet failed:

    (screenshot)

    Output of describe on this ReplicaSet:

    (screenshot)

    I'd like to know how to resolve this error. Thanks 🙏

    help wanted 
    opened by TensShinet 8
  • add ci.yaml

    What type of PR is this?

    /kind cleanup

    What this PR does / why we need it:

    add ci.yaml for github workflow

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    none
    
    approved lgtm 
    opened by wongearl 5
  • update e2e deploy yaml,modify Makefile and e2e.sh

    What type of PR is this?

    /kind feature

    What this PR does / why we need it:

    update e2e deploy yaml,modify Makefile and e2e.sh

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    none
    
    enhancement approved lgtm 
    opened by wongearl 5
  • The failover feature is very good, but it has some problems

    Our current logic is to force-delete the Pod first and then delete the PVC. This has two potential problems.

    First problem: after the Pod is deleted, the workload controller immediately creates a new Pod, which enters the scheduling flow. If the old PVC has not been deleted yet, the new Pod will stay Pending because of the PV affinity constraint, and since the PVC is still in use by the Pod it cannot be deleted, so the migration fails. The new Pod should therefore not be affected by the previous PVC.

    Second problem: after deleting the PVC, we keep trying to create a new one without considering scenarios like a StatefulSet's PVC template (after the Pod is deleted, the workload automatically creates the PVC). So before creating a new PVC, we should first check whether a healthy PVC already exists.

    For the first problem, there are two possible improvements: 1) patch the PVC to remove the pvc-protection finalizer, then delete the PVC, then create a new PVC; 2) delete the PVC first, then delete the Pod; the PVC will then be cleaned up and reclaimed, and finally a new PVC can be created.

    For the second problem, check for an existing healthy PVC before creating a new one.

    good first issue 
    opened by fanhaouu 5
  • refac: modify scheduler informer

    What type of PR is this?

    /kind bug /kind feature

    What this PR does / why we need it:

    1. refactor scheduler informer(first of all, get nsr/lv from cache, then from apiserver)
    2. fix the bug 'can't use disk part to create pv'

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    none
    
    approved lgtm size/L 
    opened by fanhaouu 4
  • fix bug for delete lv when node is not ready

    What type of PR is this?

    Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespaces from that line:

    /kind bug

    What this PR does / why we need it:

    Which issue(s) this PR fixes:

    Fixes #128

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    skip rebuilding the LV when pod.annotation is absent or no rebuild is needed
    
    approved lgtm ok-to-test size/M 
    opened by zhangkai8048 4
  • fix chart bug with modprobe

    What type of PR is this?

    Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespaces from that line:

    /kind bug What this PR does / why we need it:

    Which issue(s) this PR fixes:

    Fixes #129

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    During initial installation, check the kernel version and load the kernel module if the version is greater than 3.10.
    
    approved lgtm size/M 
    opened by zhangkai8048 4
  • refactor scheduler logic

    What type of PR is this?

    /kind bug /kind cleanup /kind feature

    What this PR does / why we need it:
    1. Reschedule if storage resources are exhausted when the topology feature is enabled and the volume binding mode is WaitForFirstConsumer.
    2. Refactor and optimize the scheduler logic.
    3. Modify the CSI driver's Kubernetes node service.

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer: There are many changes

    Does this PR introduce a user-facing change?:

    none
    
    enhancement approved kind/bug lgtm size/XXL 
    opened by fanhaouu 4
  • Develop/v0.11.0

    What type of PR is this?

    Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespaces from that line:

    /kind bug /kind feature

    What this PR does / why we need it:

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    change chart to version v0.11.0; fix running the make controller command in debug mode; fix the deploy chart version bug in README.md
    
    
    approved lgtm size/XL 
    opened by zhangkai8048 4
  • thin-provisioning support

    Is your feature request related to a problem?/Why is this needed

    Currently, the total free space that can be used is limited to the node's total disk size. In short, it's thick provisioning mode.

    Describe the solution you'd like in detail

    Support thin provisioning: users can request a PV size larger than the total disk size.

    opened by ZhangZhenhua 0
  • Pod Local bcache possible?

    Is your feature request related to a problem?/Why is this needed The ability for Carina to provide tiered storage using bcache is very powerful, especially in context of database operations. However, it currently requires data to reside at the node level rather than leveraging a combination of persistent storage at the pod level and ephemeral NVMe/SSD storage at the node level. This makes it very difficult to move pods to new nodes easily.

    Describe the solution you'd like in detail Would it be possible to construct the bcache volume within a pod so that it would utilize local node ephemeral NVMe/SSD disks but utilize a PV exposed at the pod level? This way, the persistent part of the bcache can move easily with the pod and the cache portion would be discarded and rebuilt once the pod has been rescheduled to a new node.

    For example, in a GCP environment we can create a node with a local 375GB NVMe drive. As pods are scheduled to the node, a portion of the 375GB drive is allocated to the pod as a cache device (raw block device) as well as using a PV (raw block device) attached from the GCP persistent volume service. When the pod is initialized, the bcache device is created pod-local using the two attached block devices.

    The benefit of this is the data is no longer node bound and the pods can be rescheduled easily to new nodes with their persistent data following. It would also enable resizing of individual PVs without worrying about how much disk space is attached at the node level.

    Describe alternatives you've considered

    1. Just sticking with standard network attached PVs. This is not optimal for database operations since having local disk can significantly boost read/write performance.

    2. Try a homegrown version of this local bcache concept using TopoLVM (https://github.com/topolvm/topolvm) and network attached storage PVs.

    3. Also looked at using ZFS ARC but that also requires setting up our own storage layer rather than leveraging GCP, AWS, or Azure managed storage.

    Additional context This would have immediate use for Postgres and Greenplum running in kubernetes. The churn of rebuilding large data drives can be significant for clusters with frequent node terminations (spot instances).

    enhancement 
    opened by inviscid 3
  • About the Extender Webhook scheduler features

    /kind feature /enhancement

    The existing Carina scheduler is extended in scheduler Framework V2 mode, so the pod's scheduler field in the cluster must be carina-scheduler. Let's discuss whether we need to add an Extender Webhook scheduler:

    ① Do not add the Extender Webhook scheduler. ② Add an Extender Webhook scheduler and keep the existing scheduler. ③ Replace the existing scheduler with the Extender Webhook scheduler.

    enhancement help wanted kind/feature kind/support 
    opened by antmoveh 9
  • Compatible with kubernetes 1.25

    Carina v0.11.0 is not compatible with the latest kubernetes 1.25. I upgraded the dependent project and it is running well in my kubernetes 1.25 cluster.

    enhancement do-not-merge/hold kind/design kind/feature 
    opened by duanhongyi 18
Releases(v0.11.1)
  • v0.11.1(Sep 19, 2022)

    Brief description

    Carina has entered the CNCF landscape and is applying to become a sandbox project. This version mainly adds bare-disk support and adjusts the project structure.

    What's Changed

    • Fix the PV being lost due to node restart #144 @zhangkai8048
    • Added the upgrade script #142 @antmoveh
    • Helm chart deployment adds PSP resources #145 @zhangkai8048
    • Clarify that the current version of Carina supports Kubernetes 1.18-1.24 #140 @antmoveh
    • Planning discussion for Carina's Kubernetes 1.25 support #133 @duanhongyi
    • Added e2e unit test scripts #138 @wongearl

    Contributors

    @duanhongyi @zhangkai8048 @wongearl @antmoveh

    Mirror address

    registry.cn-hangzhou.aliyuncs.com/carina/carina:v0.11.1 registry.cn-hangzhou.aliyuncs.com/carina/carina-scheduler:v0.11.1

    Source code(tar.gz)
    Source code(zip)
  • v0.11.0(Aug 31, 2022)

    Brief description

    Carina has entered the CNCF landscape and is applying to become a sandbox project. This version mainly adds bare-disk support and adjusts the project structure.

    What's Changed

    • Support cgroup v2 @fanhaouu
    • Adjusted the project structure @fanhaouu
    • Removed the HTTP server @fanhaouu
    • Changed LogicVolume from namespace-scoped to cluster-scoped @antmoveh
    • Fixed delayed message notifications @fanhaouu
    • Fixed the metrics server panic #91 @fanhaouu
    • Migrated images from a personal repository space to the dedicated carina space @antmoveh
    • To improve LVM volume performance, no longer create a thin-pool when creating an LVM volume #96 @fanhaouu
    • Added the storageclass parameter carina.storage.io/allow-pod-migration-if-notready; when a storageclass has this parameter, the webhook automatically adds the corresponding annotation to Pods #95 @antmoveh
    • NodeStorageResource restructuring and issue fixes #87 @fanhaouu
    • Removed ConfigMap synchronization control #75 @fanhaouu
    • Refined the carina e2e tests @wongearl
    • Promoted carina toward the CNCF sandbox and added a roadmap @ZhangZhenhua
    • Updated outdated documents @antmoveh
    • fix typos #104 @zhanghaizhou
    • Added and modified charts component switches #100 @duanhongyi
    • fix bug when disk has no free size #69 @zhangkai8048
    • doc: correct the chart repo url #93 @CarlJi
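
    The #95 parameter above is set on the storageclass. A hedged sketch of what that could look like; only the carina.storage.io/allow-pod-migration-if-notready parameter is taken from the changelog entry, while the provisioner name and other values are assumptions for illustration:

    ```yaml
    # Sketch of a StorageClass carrying the pod-migration parameter from #95.
    # Provisioner name and metadata are illustrative assumptions.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-carina-migratable     # illustrative name
    provisioner: carina.storage.io    # assumed provisioner name
    parameters:
      # Parameter added in #95: when present, the webhook annotates Pods
      # so they can be migrated if their node becomes NotReady.
      carina.storage.io/allow-pod-migration-if-notready: "true"
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    ```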

    Contributors

    @fanhaouu @duanhongyi @CarlJi @zhanghaizhou @ZhangZhenhua @zhangkai8048 @wongearl @antmoveh

    Image addresses

    registry.cn-hangzhou.aliyuncs.com/carina/carina:v0.11.0
    registry.cn-hangzhou.aliyuncs.com/carina/carina-scheduler:v0.11.0

    Congratulations :tada: :tada: :tada:

    Special congratulations to @fanhaouu on becoming a carina project approver

    Source code(tar.gz)
    Source code(zip)
  • v0.10.0(Apr 28, 2022)

    Brief description

    • Carina has entered the CNCF landscape and is applying to become a sandbox project. This version mainly adds raw-disk support and adjusts the project structure.

    Support functions

    • Removed csi.proto; upgraded CSI_VERSION to 1.5
    • Removed device registration; the CRD NodeStorageResource is used instead
    • Added controllers that maintain NodeStorageResource
    • The scheduler supports fetching resources from NodeStorageResource
    • Upgraded go.mod dependencies to Kubernetes 1.23
    • Renew the webhook certificate using a Job
    • Raw disk support
    • Storage volume backup supported via Velero
    • More English documentation
    • Standardized annotation fields

    Contributors

    @ZhangZhenhua @zhangkai8048 @antmoveh

    Image addresses

    registry.cn-hangzhou.aliyuncs.com/antmoveh/carina:v0.10.0
    registry.cn-hangzhou.aliyuncs.com/antmoveh/carina-scheduler:v0.10.0

    What's Changed

    • Feature/v0.9.1 node notready by @antmoveh in https://github.com/carina-io/carina/pull/43
    • make migrate pods test for mysql example by @zhangkai8048 in https://github.com/carina-io/carina/pull/45
    • change wx images by @zhangkai8048 in https://github.com/carina-io/carina/pull/46
    • change wx img by @zhangkai8048 in https://github.com/carina-io/carina/pull/48
    • V0.9.1 change example mysql.md by @zhangkai8048 in https://github.com/carina-io/carina/pull/49
    • Feature/v0.9.3 k8s certgen by @antmoveh in https://github.com/carina-io/carina/pull/50
    • Feature/v0.9.4 node crd by @antmoveh in https://github.com/carina-io/carina/pull/55
    • Feature/v0.9.4 crd scheduler by @antmoveh in https://github.com/carina-io/carina/pull/56
    • Feature/v0.9.5 resolve conflict by @antmoveh in https://github.com/carina-io/carina/pull/57
    • Feature/v0.9.4 test node by @antmoveh in https://github.com/carina-io/carina/pull/58
    • [docs] add english manuals by @ZhangZhenhua in https://github.com/carina-io/carina/pull/61
    • [docs] add blogs in readme by @ZhangZhenhua in https://github.com/carina-io/carina/pull/62
    • add CODE_OF_CONDUCT.md by @ZhangZhenhua in https://github.com/carina-io/carina/pull/64
    • [docs] mark RAID management as V1.0 by @ZhangZhenhua in https://github.com/carina-io/carina/pull/65
    • [docs] add CONTRIBUTING.md by @ZhangZhenhua in https://github.com/carina-io/carina/pull/66
    • Feature/batav0.10 raw by @zhangkai8048 in https://github.com/carina-io/carina/pull/67

    Full Changelog: https://github.com/carina-io/carina/compare/v0.9.1...v0.10.0

    Source code(tar.gz)
    Source code(zip)
    carina-csi-driver-v0.10.0.tgz(14.35 KB)
  • v0.9.1(Jan 18, 2022)

    Brief description
    • An LVM-based local storage project providing local volumes for Kubernetes. This version has been validated in a number of test environments
    Support functions
    • Supports configuring existing local storage volumes
    • Supports container migration when a node fails
    • Supports Helm installation
    • Multi-architecture images: linux/amd64 and linux/arm64
    • Optimized base images
    Contributors

    @antmoveh @zhangkai8048

    Image addresses
    • registry.cn-hangzhou.aliyuncs.com/antmoveh/carina:v0.9.1
    • registry.cn-hangzhou.aliyuncs.com/antmoveh/carina-scheduler:v0.9.1
    Source code(tar.gz)
    Source code(zip)
  • v0.9.0(Oct 12, 2021)

    Brief description
    • An LVM-based local storage project providing local volumes for Kubernetes. This version has been validated in a number of test environments
    Support functions

    | Functions | State |
    | --------------------------------- | ----- |
    | Dynamic PV | √ |
    | Local file storage | √ |
    | Local block storage | √ |
    | Storage capacity limit | √ |
    | Storage volume expansion | √ |
    | Storage topology | √ |
    | Local disk management | √ |
    | Disk speed limit | √ |
    | Node migration | √ |
    | Scheduling based on disk capacity | √ |

    Verified version

    | kubernetes | Verified |
    | ------------------ | -------- |
    | kubernetes v1.18.x | √ |
    | kubernetes v1.19.x | √ |
    | kubernetes v1.20.x | √ |
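
    The "Dynamic PV" row above corresponds to ordinary PVC-driven provisioning. A minimal sketch of a claim against a carina storageclass; the storageclass name is an assumption for illustration:

    ```yaml
    # Minimal PVC sketch for carina's dynamic provisioning.
    # The storageClassName below is an illustrative assumption.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: csi-carina-sc   # assumed storageclass name
      resources:
        requests:
          storage: 10Gi
    ```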

    Contributors

    @antmoveh @ZhangZhenhua

    Mirror address
    • registry.cn-hangzhou.aliyuncs.com/antmoveh/carina:v0.9-20210804141609
    • registry.cn-hangzhou.aliyuncs.com/antmoveh/scheduler:v0.9-20211012111249
    Source code(tar.gz)
    Source code(zip)