cloud-native local storage management system

Overview

Open-Local - Cloud-Native Local Disk Management System

Open-Local is a local disk management system made up of multiple components. It aims to fill the gap in Kubernetes' local storage capabilities, making local storage as simple to use as centralized storage.

Open-Local is already widely used in production. Products currently using it include:

  • Alibaba Cloud OECP (Enterprise Container Platform)
  • Alibaba Cloud ADP (Cloud-Native Application Delivery Platform)
  • Ant Group AntStack Plus

Features

  • Local storage pool management
  • Dynamic volume provisioning
  • Volume capacity isolation
  • Volume expansion
  • Volume snapshots (see the sketch after this list)
  • Volume monitoring
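
As a quick illustration of the volume snapshot feature, a hedged sketch of a VolumeSnapshot request is shown below. The snapshot.storage.k8s.io/v1 API is the one the project targets (see the v0.6.0 release notes further down); the VolumeSnapshotClass name open-local-lvm and the PVC name existing-pvc are assumptions for illustration only and may not match your deployment.

# Sketch only: the class and PVC names below are assumptions, not values shipped by the chart.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: demo-snapshot
spec:
  volumeSnapshotClassName: open-local-lvm   # assumed class name
  source:
    persistentVolumeClaimName: existing-pvc # assumed existing PVC backed by Open-Local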

Architecture

Open-Local consists of three kinds of components (a consumption sketch follows the list):

  • Scheduler-Extender: an extension of the Kubernetes scheduler, implemented through the Extender mechanism, that adds local-storage-aware scheduling algorithms
  • CSI: implements local disk management capabilities according to the CSI (Container Storage Interface) standard
  • Agent: runs on every node in the cluster and reports the node's local storage device information for the Scheduler-Extender to make scheduling decisions
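
To make the components concrete, here is a minimal, hedged sketch of how a workload might request Open-Local storage. The provisioner name local.csi.aliyun.com and the VG name open-local-pool-0 come from elsewhere on this page; the StorageClass name and the volumeType/vgName parameter keys are assumptions and may differ from what the Helm chart actually installs.

# Sketch under stated assumptions; not the chart's shipped manifests.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: open-local-lvm-example        # assumed name
provisioner: local.csi.aliyun.com     # driver name seen in the issues below
parameters:
  volumeType: "LVM"                   # assumed parameter key/value
  vgName: "open-local-pool-0"         # VG name reported by the agent in the issues below
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: open-local-lvm-example
  resources:
    requests:
      storage: 1Gi

Once the PVC is bound, the Scheduler-Extender accounts for the requested capacity on the chosen node and the CSI driver provisions a logical volume from that node's VG.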

Development

See the documentation for details.

mkdir -p $GOPATH/src/github.com/oecp/
cd $GOPATH/src/github.com/oecp/
git clone https://github.com/oecp/open-local.git
# build binary
make
# build image
make image
Comments
  • Unable to install open-local on Minikube

    Unable to install open-local on Minikube

    Hello,

    I followed the installation guide here

    When I typed kubectl get po -nkube-system -l app=open-local the output was:

    NAME                                              READY   STATUS      RESTARTS   AGE
    open-local-agent-p2xdq                            3/3     Running     0          13m
    open-local-csi-provisioner-59cd8644ff-n52xc       1/1     Running     0          13m
    open-local-csi-resizer-554f54b5b4-xkw97           1/1     Running     0          13m
    open-local-csi-snapshotter-64dff4b689-9g9wl       1/1     Running     0          13m
    open-local-init-job--1-f9vzz                      0/1     Completed   0          13m
    open-local-init-job--1-j7j8b                      0/1     Completed   0          13m
    open-local-init-job--1-lmvqd                      0/1     Completed   0          13m
    open-local-scheduler-extender-5dc8d8bb49-n44pn    1/1     Running     0          13m
    open-local-snapshot-controller-846c8f6578-2bfhx   1/1     Running     0          13m
    

    However, when I typed kubectl get nodelocalstorage, I got this output:

    NAME       STATE   PHASE   AGENTUPDATEAT   SCHEDULERUPDATEAT   SCHEDULERUPDATESTATUS
    minikube                                                       
    

    According to the installation guide, the STATE column should display DiskReady.

    And when I type kubectl get nls -o yaml, the output is:

    apiVersion: v1
    items:
    - apiVersion: csi.aliyun.com/v1alpha1
      kind: NodeLocalStorage
      metadata:
        creationTimestamp: "2021-09-20T13:37:09Z"
        generation: 1
        name: minikube
        resourceVersion: "615"
        uid: 6f193362-e2b2-4053-a6e6-81de35c96eaf
      spec:
        listConfig:
          devices: {}
          mountPoints:
            include:
            - /mnt/open-local/disk-[0-9]+
          vgs:
            include:
            - open-local-pool-[0-9]+
        nodeName: minikube
        resourceToBeInited:
          vgs:
          - devices:
            - /dev/sdb
            name: open-local-pool-0
    kind: List
    metadata:
      resourceVersion: ""
      selfLink: ""
    

    I am running Minikube on my desktop computer, which has an SSD.

    Thank you for your help.

    opened by alex-arica 42
  • Use an existing VG?

    Use an existing VG?

    Question

    Is it possible to use an existing VG with this project? I already have a PV and VG created, and the VG has 100GB free to create LVs.

    Would it be possible to configure open-local to create new LVs in the existing VG? If it's possible, would appreciate any help.
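
    For reference, the NodeLocalStorage objects shown elsewhere on this page whitelist VGs through spec.listConfig.vgs.include. A hedged sketch of pointing Open-Local at an already-existing VG is shown below; the VG name my-existing-vg is hypothetical, and whether the agent will adopt a VG it did not create is exactly the question being asked here, so treat this as an illustration only.

    apiVersion: csi.aliyun.com/v1alpha1
    kind: NodeLocalStorage
    metadata:
      name: <node-name>              # one nls object per node
    spec:
      nodeName: <node-name>
      listConfig:
        vgs:
          include:
          - my-existing-vg           # hypothetical pre-existing VG name
          - open-local-pool-[0-9]+   # default pattern seen in the nls output on this page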

    documentation enhancement 
    opened by murkylife 21
  • Helm install CSIDriver error

    Helm install CSIDriver error

    System info

    Via uname -a && kubectl version && helm version && apt-show-versions lvm2 | grep amd:

    Linux master-node 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T18:03:20Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T17:57:25Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
    version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
    lvm2:amd64/bionic-updates 2.02.176-4.1ubuntu3.18.04.3 uptodate
    

    Bug Description

    Setup

    wget https://github.com/alibaba/open-local/archive/refs/tags/v0.1.1.zip
    unzip v0.1.1.zip
    cd open-local-0.1.1
    

    The Problem

    Using release 0.1.1 of open-local and following the current user guide's instructions, when I run

    helm install open-local ./helm
    

    -- I get the following error:

    Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "CSIDriver" in version "storage.k8s.io/v1beta1"
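
    The storage.k8s.io/v1beta1 CSIDriver API is no longer served by Kubernetes 1.22, which is why the chart fails to install. A hedged sketch of the fix is to bump the apiVersion of the chart's CSIDriver template to storage.k8s.io/v1; the spec fields shown here are assumptions for illustration, not the chart's actual values.

    apiVersion: storage.k8s.io/v1
    kind: CSIDriver
    metadata:
      name: local.csi.aliyun.com     # driver name used elsewhere on this page
    spec:
      attachRequired: false          # assumption: local volumes need no attach step
      podInfoOnMount: true           # assumption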
    
    good first issue 
    opened by synkarius 9
  • device schedule with error Get Response StatusCode 500

    device schedule with error Get Response StatusCode 500

    Ⅰ. Issue Description

    An error is reported when creating a new PVC:

      Normal   WaitForFirstConsumer  2m13s                 persistentvolume-controller                                      waiting for first consumer to be created before binding
      Normal   ExternalProvisioning  11s (x11 over 2m13s)  persistentvolume-controller                                      waiting for a volume to be created, either by external provisioner "local.csi.aliyun.com" or manually created by system administrator
      Normal   Provisioning          6s (x8 over 2m13s)    local.csi.aliyun.com_node1_080636cb-a68d-4ee8-a3a3-db5ae5634cbb  External provisioner is provisioning volume for claim "demo/pvc-open-local-device-hdd-test2-0-d0"
      Warning  ProvisioningFailed    6s (x8 over 2m13s)    local.csi.aliyun.com_node1_080636cb-a68d-4ee8-a3a3-db5ae5634cbb  failed to provision volume with StorageClass "open-local-device-hdd": rpc error: code = InvalidArgument desc = Parse Device part schedule info error rpc error: code = InvalidArgument desc = device schedule with error Get Response StatusCode 500, Response: failed to allocate local storage for pvc demo/pvc-open-local-device-hdd-test2-0-d0: Insufficient Device storage, requested 0, available 0, capacity 0
    

    Ⅱ. Describe what happened

    I deployed open-local in a k3s cluster, but because k3s has no kube-scheduler configuration file, the init-job cannot run properly.

    modifying kube-scheduler.yaml...
    grep: /etc/kubernetes/manifests/kube-scheduler.yaml: No such file or directory
    + sed -i '/  hostNetwork: true/a \  dnsPolicy: ClusterFirstWithHostNet' /etc/kubernetes/manifests/kube-scheduler.yaml
    sed: can't read /etc/kubernetes/manifests/kube-scheduler.yaml: No such file or directory
    

    Everything else related is running fine:

    NAME                                              READY   STATUS    RESTARTS   AGE
    open-local-agent-7sd9d                            3/3     Running   0          22h
    open-local-csi-provisioner-785b7f99bd-hlqdv       1/1     Running   0          22h
    open-local-agent-8kg4r                            3/3     Running   0          22h
    open-local-agent-jljlv                            3/3     Running   0          22h
    open-local-scheduler-extender-5d48bc465c-r42pn    1/1     Running   0          22h
    open-local-snapshot-controller-785987975c-hhgr7   1/1     Running   0          22h
    open-local-csi-snapshotter-5f797c4596-wml76       1/1     Running   0          22h
    open-local-csi-resizer-7c9698976f-f7tzz           1/1     Running   0          22h
    
    master1 [~]$ kubectl get nodelocalstorage -ojson master1|jq .status.filteredStorageInfo
    {
      "updateStatusInfo": {
        "lastUpdateTime": "2021-11-12T11:02:56Z",
        "updateStatus": "accepted"
      },
      "volumeGroups": [
        "open-local-pool-0"
      ]
    }
    master1 [~]$ kubectl get nodelocalstorage -ojson node1|jq .status.filteredStorageInfo
    {
      "updateStatusInfo": {
        "lastUpdateTime": "2021-11-12T11:01:56Z",
        "updateStatus": "accepted"
      },
      "volumeGroups": [
        "open-local-pool-0"
      ]
    }
    master1 [~]$ kubectl get nodelocalstorage -ojson node2|jq .status.filteredStorageInfo
    {
      "updateStatusInfo": {
        "lastUpdateTime": "2021-11-12T11:02:56Z",
        "updateStatus": "accepted"
      },
      "volumeGroups": [
        "open-local-pool-0"
      ]
    }
    

    So I suspect a scheduling problem is preventing the PVC/PV from being created.

    Ⅲ. Describe what you expected to happen

    I hope open-local can be deployed and run normally on k3s.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. Suggest implementing the scheduling customization the way k8s-scheduler-extender does, so that k3s and other platforms are supported (see the sketch below).
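
    A hedged sketch of what that could look like on a scheduler that accepts a KubeSchedulerConfiguration file (for example passed to k3s via --kube-scheduler-arg) is shown below. The extender service name and port 23000 are taken from this page; the URL path, verbs and weight are assumptions, not a verified configuration.

    apiVersion: kubescheduler.config.k8s.io/v1beta3
    kind: KubeSchedulerConfiguration
    extenders:
    - urlPrefix: "http://open-local-scheduler-extender.kube-system:23000/scheduler"   # path is an assumption
      filterVerb: predicates      # assumption
      prioritizeVerb: priorities  # assumption
      weight: 10
      nodeCacheCapable: true
      ignorable: true             # keep scheduling alive if the extender is unreachable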

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • Open-Local version:
    • OS (e.g. from /etc/os-release):
    • Kernel (e.g. uname -a):
    • Install tools:
    • Others:
    opened by Seryta 7
  • [feature] support SPDK

    [feature] support SPDK

    Why you need it?

    vhost-user-blk/scsi is a highly efficient way to transport data in virtualized environments. Open-Local currently doesn't support vhost-user-blk/scsi.

    How it could be?

    The Storage Performance Development Kit (SPDK) can provide vhost support. To support vhost-user-blk/scsi in Open-Local, the node CSI driver should communicate with SPDK. The following is a brief description:

    • NodeStageVolume / NodeUnstageVolume
        n/a
    • NodePublishVolume
        - Create bdev
            # scripts/rpc.py bdev_aio_create <path_to_host_block_dev> <bdev_name>
            # scripts/rpc.py bdev_lvol_create_lvstore <bdev_name> <lvs_name>
            # scripts/rpc.py bdev_lvol_create <lvol_name> <size> -l <lvs_name>
        - Create vhost device
            # scripts/rpc.py vhost_create_blk_controller --cpumask 0x1 vhostblk0 <bdev_name>
            # mknod /var/run/kata-containers/vhost-user/block/devices/vhostblk0 b 241 0
            # mount --bind [...] /var/run/kata-containers/vhost-user/block/devices/vhostblk0 <target_path>
    • NodeUnpublishVolume
           # umount <target_path>
           # scripts/rpc.py bdev_lvol_delete <lvol_name>
           # rm /var/run/kata-containers/vhost-user/block/devices/vhostblk0
    

    Besides, we need to add a field in nlsc and nls to indicate whether the storage is provided by SPDK (a hypothetical sketch follows).
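
    For illustration only, such a field on the nls object might look like the sketch below; the field name spdk is hypothetical and is not part of the current CRD.

    apiVersion: csi.aliyun.com/v1alpha1
    kind: NodeLocalStorage
    metadata:
      name: <node-name>
    spec:
      nodeName: <node-name>
      spdk: true    # hypothetical flag: this node's local storage is exported via SPDK vhost-user-blk/scsi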


    Other related information

    opened by caiwenzh 6
  • snapshot-controller CrashLoopBackOff and pvc keep in pending status

    snapshot-controller CrashLoopBackOff and pvc keep in pending status

    Ⅰ. Issue Description

    I followed the instructions to install open-local, but it does not seem to work correctly; probably some bugs here. The snapshot-controller is in CrashLoopBackOff with the message "Failed to list v1 volumesnapshots with error=the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)". Besides, I created the example yaml and the PV stays in Pending. I'm not sure whether the first problem causes the second.

    Ⅱ. Describe what happened

    The snapshot-controller is in CrashLoopBackOff but the CRDs exist (not sure about the version). (screenshots)

    The open-local-controller also reports "Failed to watch v1.VolumeSnapshotClass". (screenshot)

    Checked the clusterrole of open-local. (screenshot)

    The PVC is in Pending status. (screenshots)

    The extender scheduler reports an error message. (screenshots)

    The pod is also Pending. (screenshot)

    The NodeLocalStorage is not created successfully. (screenshot)

    Double-checked the NodeLocalStorageInitConfig, which is correct. (screenshots)

    Ⅲ. Describe what you expected to happen

    The example yaml given in the instructions is created successfully.

    Ⅵ. Environment:

    • Open-Local version: v0.5.5
    • OS (e.g. from /etc/os-release): centos 7.9
    • Kernel (e.g. uname -a): Linux kube-control-2 3.10.0-1160.el7.x86_64 #1 SMP Wed Nov 18 03:43:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
    • Install tools: helm
    • Others:
    opened by FreshMan123 4
  • device pv fails to mount as fs

    device pv fails to mount as fs

    Ⅰ. Issue Description

    Trying to use the device volumeType and mount it as a filesystem. The PV seems to fail to mount.

    Ⅱ. Describe what happened

    Apply a yaml file like this:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: dev-fs-pvc
    spec:
      volumeMode: Filesystem
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: open-local-device-hdd
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: "open-local-test-dev-fs"
    spec:
      containers:
      - name: dev-fs
        image: busybox
        volumeMounts:
        - mountPath: "/data"
          name: data
        command:
        - stat 
        - /data
      volumes:
       - name: data
         persistentVolumeClaim:
           claimName: dev-fs-pvc
      restartPolicy: Never
    
    
    Events:
      Type     Reason            Age                From               Message
      ----     ------            ----               ----               -------
      Warning  FailedScheduling  38s                default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "dev-fs-pvc"
      Warning  FailedScheduling  35s                default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "dev-fs-pvc"
      Normal   Scheduled         31s                default-scheduler  Successfully assigned open-local/open-local-test-dev-fs to node2
      Warning  FailedMount       13s (x6 over 29s)  kubelet            MountVolume.SetUp failed for volume "local-7859a7ae-c56c-48eb-8c36-dd898f1ab22f" : rpc error: code = Internal desc = NodePublishVolume(FileSystem): mount device volume local-7859a7ae-c56c-48eb-8c36-dd898f1ab22f with path /var/lib/kubelet/pods/249942c1-26ee-41e4-8a05-1e67ee9deaab/volumes/kubernetes.io~csi/local-7859a7ae-c56c-48eb-8c36-dd898f1ab22f/mount with error: rpc error: code = Internal desc = mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t ext4 -o rw,defaults /dev/vdd /var/lib/kubelet/pods/249942c1-26ee-41e4-8a05-1e67ee9deaab/volumes/kubernetes.io~csi/local-7859a7ae-c56c-48eb-8c36-dd898f1ab22f/mount
    Output: mount: wrong fs type, bad option, bad superblock on /dev/vdd,
           missing codepage or helper program, or other error
    
           In some cases useful info is found in syslog - try
           dmesg | tail or so.
    

    Ⅲ. Describe what you expected to happen

    pv should be mounted, pod should run

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. Apply yaml as above
    2. Pod stuck in creating state
    3. kubectl describe the targeting pod and see the error msg

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • Open-Local version: 0.5.5
    • OS (e.g. from /etc/os-release): ubuntu
    • Kernel (e.g. uname -a): 5.15
    • Install tools: helm
    • Others:
    opened by liyimeng 4
  • make failed

    make failed

    Ⅰ. Issue Description

    I checked out the repo and ran make; it seems to fail out of the box. Am I missing something?

    Ⅱ. Describe what happened

    ~/workspace/picloud/open-local$ make
    go test -v ./...
    ?       github.com/alibaba/open-local/cmd       [no test files]
    ?       github.com/alibaba/open-local/cmd/agent [no test files]
    ?       github.com/alibaba/open-local/cmd/controller    [no test files]
    ?       github.com/alibaba/open-local/cmd/csi   [no test files]
    ?       github.com/alibaba/open-local/cmd/doc   [no test files]
    time="2022-06-08T12:15:26+02:00" level=info msg="test noResyncPeriodFunc"
    time="2022-06-08T12:15:26+02:00" level=info msg="test noResyncPeriodFunc"
    time="2022-06-08T12:15:26+02:00" level=info msg="test noResyncPeriodFunc"
    time="2022-06-08T12:15:26+02:00" level=info msg="Waiting for informer caches to sync"
    time="2022-06-08T12:15:26+02:00" level=info msg="starting http server on port 23000"
    time="2022-06-08T12:15:26+02:00" level=info msg="all informer caches are synced"
    === RUN   TestVGWithName
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod testpod with nodes [[node-192.168.0.1 node-192.168.0.2 node-192.168.0.3 node-192.168.0.4]]"
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod default/testpod with node node-192.168.0.1"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="allocating lvm volume for pod default/testpod"
    time="2022-06-08T12:15:26+02:00" level=error msg="Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi"
    time="2022-06-08T12:15:26+02:00" level=info msg="fits: false,failReasons: [Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi], err: Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi"
    time="2022-06-08T12:15:26+02:00" level=info msg="pod=default/testpod, node=node-192.168.0.1,fits: false,failReasons: [Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="node node-192.168.0.1 is not suitable for pod default/testpod, reason: [Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi] "
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod default/testpod with node node-192.168.0.2"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="allocating lvm volume for pod default/testpod"
    time="2022-06-08T12:15:26+02:00" level=info msg="node node-192.168.0.2 is capable of lvm 1 pvcs"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="fits: true,failReasons: [], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="pod=default/testpod, node=node-192.168.0.2,fits: true,failReasons: [], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod default/testpod with node node-192.168.0.3"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="allocating lvm volume for pod default/testpod"
    time="2022-06-08T12:15:26+02:00" level=info msg="node node-192.168.0.3 is capable of lvm 1 pvcs"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="fits: true,failReasons: [], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="pod=default/testpod, node=node-192.168.0.3,fits: true,failReasons: [], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod default/testpod with node node-192.168.0.4"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="allocating lvm volume for pod default/testpod"
    time="2022-06-08T12:15:26+02:00" level=error msg="no vg(LVM) named ssd in node node-192.168.0.4"
    time="2022-06-08T12:15:26+02:00" level=info msg="fits: false,failReasons: [no vg(LVM) named ssd in node node-192.168.0.4], err: no vg(LVM) named ssd in node node-192.168.0.4"
    time="2022-06-08T12:15:26+02:00" level=info msg="pod=default/testpod, node=node-192.168.0.4,fits: false,failReasons: [no vg(LVM) named ssd in node node-192.168.0.4], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="node node-192.168.0.4 is not suitable for pod default/testpod, reason: [no vg(LVM) named ssd in node node-192.168.0.4] "
    unexpected fault address 0x0
    fatal error: fault
    [signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x46845f]
    
    goroutine 91 [running]:
    runtime.throw({0x178205e?, 0x18?})
            /usr/local/go/src/runtime/panic.go:992 +0x71 fp=0xc0004d71e8 sp=0xc0004d71b8 pc=0x4380b1
    runtime.sigpanic()
            /usr/local/go/src/runtime/signal_unix.go:825 +0x305 fp=0xc0004d7238 sp=0xc0004d71e8 pc=0x44e485
    aeshashbody()
            /usr/local/go/src/runtime/asm_amd64.s:1343 +0x39f fp=0xc0004d7240 sp=0xc0004d7238 pc=0x46845f
    runtime.mapiternext(0xc000788780)
            /usr/local/go/src/runtime/map.go:934 +0x2cb fp=0xc0004d72b0 sp=0xc0004d7240 pc=0x411beb
    runtime.mapiterinit(0x0?, 0x8?, 0x1?)
            /usr/local/go/src/runtime/map.go:861 +0x228 fp=0xc0004d72d0 sp=0xc0004d72b0 pc=0x4118c8
    reflect.mapiterinit(0xc000039cf8?, 0xc0004d7358?, 0x461365?)
            /usr/local/go/src/runtime/map.go:1373 +0x19 fp=0xc0004d72f8 sp=0xc0004d72d0 pc=0x464b79
    github.com/modern-go/reflect2.(*UnsafeMapType).UnsafeIterate(...)
            /home/lee/workspace/picloud/open-local/vendor/github.com/modern-go/reflect2/unsafe_map.go:112
    github.com/json-iterator/go.(*sortKeysMapEncoder).Encode(0xc00058f230, 0xc000497f00, 0xc000039ce0)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect_map.go:291 +0x225 fp=0xc0004d7468 sp=0xc0004d72f8 pc=0x8553e5
    github.com/json-iterator/go.(*structFieldEncoder).Encode(0xc00058f350, 0x1436da0?, 0xc000039ce0)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect_struct_encoder.go:110 +0x56 fp=0xc0004d74e0 sp=0xc0004d7468 pc=0x862b36
    github.com/json-iterator/go.(*structEncoder).Encode(0xc00058f3e0, 0x0?, 0xc000039ce0)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect_struct_encoder.go:158 +0x765 fp=0xc0004d75c8 sp=0xc0004d74e0 pc=0x863545
    github.com/json-iterator/go.(*OptionalEncoder).Encode(0xc00013bb80?, 0x0?, 0x0?)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect_optional.go:70 +0xa4 fp=0xc0004d7618 sp=0xc0004d75c8 pc=0x85a744
    github.com/json-iterator/go.(*onePtrEncoder).Encode(0xc0004b3210, 0xc000497ef0, 0xc000497f50?)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect.go:219 +0x82 fp=0xc0004d7650 sp=0xc0004d7618 pc=0x84d7c2
    github.com/json-iterator/go.(*Stream).WriteVal(0xc000039ce0, {0x158a3e0, 0xc000497ef0})
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect.go:98 +0x158 fp=0xc0004d76c0 sp=0xc0004d7650 pc=0x84cad8
    github.com/json-iterator/go.(*frozenConfig).Marshal(0xc00013bb80, {0x158a3e0, 0xc000497ef0})
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/config.go:299 +0xc9 fp=0xc0004d7758 sp=0xc0004d76c0 pc=0x843d89
    github.com/alibaba/open-local/pkg/scheduler/server.PredicateRoute.func1({0x19bfee0, 0xc00019c080}, 0xc000318000, {0x203000?, 0xc00062b928?, 0xc00062b84d?})
            /home/lee/workspace/picloud/open-local/pkg/scheduler/server/routes.go:83 +0x326 fp=0xc0004d7878 sp=0xc0004d7758 pc=0x132d5e6
    github.com/alibaba/open-local/pkg/scheduler/server.DebugLogging.func1({0x19cafb0?, 0xc0005a80e0}, 0xc000056150?, {0x0, 0x0, 0x0})
            /home/lee/workspace/picloud/open-local/pkg/scheduler/server/routes.go:217 +0x267 fp=0xc0004d7988 sp=0xc0004d7878 pc=0x132e4a7
    github.com/julienschmidt/httprouter.(*Router).ServeHTTP(0xc0000b0de0, {0x19cafb0, 0xc0005a80e0}, 0xc000318000)
            /home/lee/workspace/picloud/open-local/vendor/github.com/julienschmidt/httprouter/router.go:387 +0x82b fp=0xc0004d7a98 sp=0xc0004d7988 pc=0x12d61ab
    net/http.serverHandler.ServeHTTP({0x19bc700?}, {0x19cafb0, 0xc0005a80e0}, 0xc000318000)
            /usr/local/go/src/net/http/server.go:2916 +0x43b fp=0xc0004d7b58 sp=0xc0004d7a98 pc=0x7e87fb
    net/http.(*conn).serve(0xc0001da3c0, {0x19cbab0, 0xc0001b68a0})
            /usr/local/go/src/net/http/server.go:1966 +0x5d7 fp=0xc0004d7fb8 sp=0xc0004d7b58 pc=0x7e3cb7
    net/http.(*Server).Serve.func3()
            /usr/local/go/src/net/http/server.go:3071 +0x2e fp=0xc0004d7fe0 sp=0xc0004d7fb8 pc=0x7e914e
    runtime.goexit()
            /usr/local/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0004d7fe8 sp=0xc0004d7fe0 pc=0x46b061
    created by net/http.(*Server).Serve
            /usr/local/go/src/net/http/server.go:3071 +0x4db
    
    goroutine 1 [chan receive]:
    testing.(*T).Run(0xc000103ba0, {0x178cc75?, 0x516ac5?}, 0x18541b0)
            /usr/local/go/src/testing/testing.go:1487 +0x37a
    testing.runTests.func1(0xc0001b69c0?)
            /usr/local/go/src/testing/testing.go:1839 +0x6e
    testing.tRunner(0xc000103ba0, 0xc00064bcd8)
            /usr/local/go/src/testing/testing.go:1439 +0x102
    testing.runTests(0xc00050a0a0?, {0x2540700, 0x7, 0x7}, {0x7fa22c405a68?, 0x40?, 0x2557740?})
            /usr/local/go/src/testing/testing.go:1837 +0x457
    testing.(*M).Run(0xc00050a0a0)
            /usr/local/go/src/testing/testing.go:1719 +0x5d9
    main.main()
            _testmain.go:59 +0x1aa
    
    goroutine 19 [chan receive]:
    k8s.io/klog/v2.(*loggingT).flushDaemon(0x0?)
            /home/lee/workspace/picloud/open-local/vendor/k8s.io/klog/v2/klog.go:1169 +0x6a
    created by k8s.io/klog/v2.init.0
            /home/lee/workspace/picloud/open-local/vendor/k8s.io/klog/v2/klog.go:417 +0xf6
    
    goroutine 92 [IO wait]:
    internal/poll.runtime_pollWait(0x7fa204607b38, 0x72)
            /usr/local/go/src/runtime/netpoll.go:302 +0x89
    internal/poll.(*pollDesc).wait(0xc0003c6100?, 0xc00050c2e1?, 0x0)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
    internal/poll.(*pollDesc).waitRead(...)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
    internal/poll.(*FD).Read(0xc0003c6100, {0xc00050c2e1, 0x1, 0x1})
            /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
    net.(*netFD).Read(0xc0003c6100, {0xc00050c2e1?, 0xc000613628?, 0xc00061e000?})
            /usr/local/go/src/net/fd_posix.go:55 +0x29
    net.(*conn).Read(0xc000612180, {0xc00050c2e1?, 0xc0005147a0?, 0x985846?})
            /usr/local/go/src/net/net.go:183 +0x45
    net/http.(*connReader).backgroundRead(0xc00050c2d0)
            /usr/local/go/src/net/http/server.go:672 +0x3f
    created by net/http.(*connReader).startBackgroundRead
            /usr/local/go/src/net/http/server.go:668 +0xca
    
    goroutine 43 [select]:
    net/http.(*persistConn).roundTrip(0xc00056a360, 0xc0006420c0)
            /usr/local/go/src/net/http/transport.go:2620 +0x974
    net/http.(*Transport).roundTrip(0x25410e0, 0xc0004c6600)
            /usr/local/go/src/net/http/transport.go:594 +0x7c9
    net/http.(*Transport).RoundTrip(0x40f405?, 0x19b3900?)
            /usr/local/go/src/net/http/roundtrip.go:17 +0x19
    net/http.send(0xc0004c6600, {0x19b3900, 0x25410e0}, {0x172b2a0?, 0x178c601?, 0x0?})
            /usr/local/go/src/net/http/client.go:252 +0x5d8
    net/http.(*Client).send(0x2556ec0, 0xc0004c6600, {0xd?, 0x1788f4f?, 0x0?})
            /usr/local/go/src/net/http/client.go:176 +0x9b
    net/http.(*Client).do(0x2556ec0, 0xc0004c6600)
            /usr/local/go/src/net/http/client.go:725 +0x8f5
    net/http.(*Client).Do(...)
            /usr/local/go/src/net/http/client.go:593
    net/http.(*Client).Post(0x17b1437?, {0xc000492480?, 0xc00054bdc8?}, {0x178f761, 0x10}, {0x19b0fe0?, 0xc0001b6a20?})
            /usr/local/go/src/net/http/client.go:858 +0x148
    net/http.Post(...)
            /usr/local/go/src/net/http/client.go:835
    github.com/alibaba/open-local/cmd/scheduler.predicateFunc(0xc0000f9800, {0x253ebe0, 0x4, 0x4})
            /home/lee/workspace/picloud/open-local/cmd/scheduler/extender_test.go:348 +0x1e8
    github.com/alibaba/open-local/cmd/scheduler.TestVGWithName(0x4082b9?)
            /home/lee/workspace/picloud/open-local/cmd/scheduler/extender_test.go:135 +0x17e
    testing.tRunner(0xc000103d40, 0x18541b0)
            /usr/local/go/src/testing/testing.go:1439 +0x102
    created by testing.(*T).Run
            /usr/local/go/src/testing/testing.go:1486 +0x35f
    
    goroutine 87 [IO wait]:
    internal/poll.runtime_pollWait(0x7fa204607d18, 0x72)
            /usr/local/go/src/runtime/netpoll.go:302 +0x89
    internal/poll.(*pollDesc).wait(0xc00003a580?, 0xc000064000?, 0x0)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
    internal/poll.(*pollDesc).waitRead(...)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
    internal/poll.(*FD).Accept(0xc00003a580)
            /usr/local/go/src/internal/poll/fd_unix.go:614 +0x22c
    net.(*netFD).accept(0xc00003a580)
            /usr/local/go/src/net/fd_unix.go:172 +0x35
    net.(*TCPListener).accept(0xc0001301e0)
            /usr/local/go/src/net/tcpsock_posix.go:139 +0x28
    net.(*TCPListener).Accept(0xc0001301e0)
            /usr/local/go/src/net/tcpsock.go:288 +0x3d
    net/http.(*Server).Serve(0xc0000dc2a0, {0x19cada0, 0xc0001301e0})
            /usr/local/go/src/net/http/server.go:3039 +0x385
    net/http.(*Server).ListenAndServe(0xc0000dc2a0)
            /usr/local/go/src/net/http/server.go:2968 +0x7d
    net/http.ListenAndServe(...)
            /usr/local/go/src/net/http/server.go:3222
    github.com/alibaba/open-local/pkg/scheduler/server.(*ExtenderServer).InitRouter.func1()
            /home/lee/workspace/picloud/open-local/pkg/scheduler/server/web.go:185 +0x157
    created by github.com/alibaba/open-local/pkg/scheduler/server.(*ExtenderServer).InitRouter
            /home/lee/workspace/picloud/open-local/pkg/scheduler/server/web.go:182 +0x478
    
    goroutine 49 [IO wait]:
    internal/poll.runtime_pollWait(0x7fa204607c28, 0x72)
            /usr/local/go/src/runtime/netpoll.go:302 +0x89
    internal/poll.(*pollDesc).wait(0xc00003a800?, 0xc000639000?, 0x0)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
    internal/poll.(*pollDesc).waitRead(...)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
    internal/poll.(*FD).Read(0xc00003a800, {0xc000639000, 0x1000, 0x1000})
            /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
    net.(*netFD).Read(0xc00003a800, {0xc000639000?, 0x17814b4?, 0x0?})
            /usr/local/go/src/net/fd_posix.go:55 +0x29
    net.(*conn).Read(0xc000495a38, {0xc000639000?, 0x19ce530?, 0xc000370ea0?})
            /usr/local/go/src/net/net.go:183 +0x45
    net/http.(*persistConn).Read(0xc00056a360, {0xc000639000?, 0x40757d?, 0x60?})
            /usr/local/go/src/net/http/transport.go:1929 +0x4e
    bufio.(*Reader).fill(0xc000522a80)
            /usr/local/go/src/bufio/bufio.go:106 +0x103
    bufio.(*Reader).Peek(0xc000522a80, 0x1)
            /usr/local/go/src/bufio/bufio.go:144 +0x5d
    net/http.(*persistConn).readLoop(0xc00056a360)
            /usr/local/go/src/net/http/transport.go:2093 +0x1ac
    created by net/http.(*Transport).dialConn
            /usr/local/go/src/net/http/transport.go:1750 +0x173e
    
    goroutine 178 [select]:
    net/http.(*persistConn).writeLoop(0xc00056a360)
            /usr/local/go/src/net/http/transport.go:2392 +0xf5
    created by net/http.(*Transport).dialConn
            /usr/local/go/src/net/http/transport.go:1751 +0x1791
    FAIL    github.com/alibaba/open-local/cmd/scheduler     0.177s
    ?       github.com/alibaba/open-local/cmd/version       [no test files]
    ?       github.com/alibaba/open-local/pkg       [no test files]
    ?       github.com/alibaba/open-local/pkg/agent/common  [no test files]
    === RUN   TestNewAgent
    
    

    Ⅲ. Describe what you expected to happen

    make should run through.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. git clone https://github.com/alibaba/open-local.git
    2. cd open-local
    3. make
    4. failed

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • Open-Local version: main branch
    • OS (e.g. from /etc/os-release): ubuntu 22.04
    • Kernel (e.g. uname -a): 5.15.0-33
    • Install tools:
    • Others:
    opened by liyimeng 4
  • [bug] Extender fails to update some nls after the nls resources are deleted and rebuilt in a large-scale cluster

    [bug] Extender fails to update some nls after the nls resources are deleted and rebuilt in a large-scale cluster

    After deleting nls objects in bulk, the nls are recreated normally, but the extender reports errors when patching:

    time="2021-11-26T14:50:31+08:00" level=debug msg="get update on node local cache izbp1277upijzx9vn1t003z"
    time="2021-11-26T14:50:31+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:31+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzx9vn1t003z\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:31+08:00" level=debug msg="get update on node local cache izbp1277upijzwo2cqyljpz"
    time="2021-11-26T14:50:31+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:31+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzwo2cqyljpz\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:31+08:00" level=debug msg="get update on node local cache izbp1277upijzwo2cqyli4z"
    time="2021-11-26T14:50:31+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:31+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzwo2cqyli4z\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:31+08:00" level=debug msg="get update on node local cache izbp1277upijzx9vn1t01pz"
    time="2021-11-26T14:50:31+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:32+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzx9vn1t01pz\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:32+08:00" level=debug msg="get update on node local cache izbp1277upijzwo2cqyljmz"
    time="2021-11-26T14:50:32+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:32+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzwo2cqyljmz\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:32+08:00" level=debug msg="get update on node local cache izbp14kyqi4fdsb7ax48itz"
    time="2021-11-26T14:50:32+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:32+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp14kyqi4fdsb7ax48itz\": the object has been modified; please apply your changes to the latest version and try again"
    

    The impact is that the extender cannot update the nls status, so applications cannot use the storage devices on those nodes.

    The test was done in a large-scale scenario.

    bug 
    opened by TheBeatles1994 4
  • VG  Create  Error

    VG Create Error

    Ⅰ. Issue Description

    The NodeLocalStorage status is null. (screenshot)

    Ⅱ. Describe what happened

    Installed Open-Local via the Helm chart and added a raw device to a Kubernetes worker, but the VG was not created on that worker. Then checked the NLS status and found it was null.

    Ⅲ. Describe what you expected to happen

    The VG is created from the raw device on the worker and works fine.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. helm install open-local
    2. vgs on the worker
    3. kubectl get nls

    Ⅴ. Anything else we need to know?

    Screenshots attached for: 1. Scheduler process; 2. scheduler-policy-config.json; 3. driver-registrar logs; 4. Agent logs; 5. scheduler-extender logs; 6. NodeLocalStorageInitConfig; 7. Raw device of the worker.

    Ⅵ. Environment:

    • Kubernetes version: (screenshot)

    • Open-Local version: (screenshot)

    • OS (e.g. from /etc/os-release): (screenshot)

    • Kernel (e.g. uname -a): (screenshot)

    • Install tools: (screenshot)

    Thanks

    bug 
    opened by liyongxian 3
  • question about monitor

    question about monitor

    Question

    I am wondering which service provides the metrics. I found that the ServiceMonitor open-local has a spec like this:

      spec:
        endpoints:
        - path: /metrics
          port: http-metrics
        jobLabel: app
        namespaceSelector:
          matchNames:
          - kube-system
        selector:
          matchLabels:
            app: open-local

    but there is no svc named open-local; only the open-local-scheduler-extender svc exists. So I am confused about which pod provides the metrics. Is it the open-local-controller, the agent, or the open-local-scheduler-extender? (BTW I have already set monitor to true.)

    opened by FreshMan123 2
  • Sane-behavior cannot set when enabling hierarchical cgroup v1 blkio throttling

    Sane-behavior cannot set when enabling hierarchical cgroup v1 blkio throttling

    Question

    • OS: Ubuntu 20.04, kernel 5.15.0-46-generic
    • docker: 20.10.12
    • containerd: 20.10.12-0
    • runc: 1.1.0-0
    • cgroup version: v1
    • kubernetes: 1.22.0

    Hello,

    I set up Open-Local on Kubernetes 1.22.0 and am trying to throttle LVM block IO following https://github.com/alibaba/open-local/blob/main/docs/user-guide/type-lvm_zh_CN.md.

    The cgroup block IO bps and iops limits are all set at pod level. Since throttling implements hierarchy support, it is reasonable to limit all containers' total block iops and bps under the values set in the pod's blkio.throttle.read_bps_device, blkio.throttle.write_bps_device, blkio.throttle.read_iops_device and blkio.throttle.write_iops_device.

    However, throttling's hierarchy support is enabled only if "sane_behavior" is enabled on the cgroup side, and cgroup.sane_behavior (read-only) is set to 0 by default. Ref: https://www.kernel.org/doc/Documentation/cgroup-v1/blkio-controller.txt To limit a container's blkio, I must enable "sane_behavior" by remounting /sys/fs/cgroup/blkio with the flag -o __DEVEL__sane_behavior. The problem is that the kernel doesn't seem to recognize the flag.

    $ umount -l /sys/fs/cgroup/blkio
    $ mount -t cgroup -o blkio -o __DEVEL__sane_behavior none /sys/fs/cgroup/blkio
    mount: /sys/fs/cgroup: wrong fs type, bad option, bad superblock on none, missing codepage or helper program, or other error

    Has anyone tried the IO throttling function successfully? Could you please share your kernel version, or point out any problem with my configuration? Thanks!

    opened by liuyingshanx 0
  • Simplify setup with using dynamic scheduler extenders

    Simplify setup with using dynamic scheduler extenders

    Why you need it?

    Right now open-local uses an init-job to statically configure the scheduler extender, which is not convenient for some k8s distributions like k3s. I would like to propose doing this at controller startup instead of using an external json/yaml file.

    How it could be?

    When the open-local controller starts up, it uses client-go to create the scheduler config (https://kubernetes.io/docs/reference/config-api/kube-scheduler-config.v1beta3/), and it deletes the config when it quits.

    This should make open-local much smarter and more stable. :)

    Other related information

    https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#extended-resources

    opened by liyimeng 3
  • Rename: VG --> StoragePool; LogicalVolume to LocalVolume

    Rename: VG --> StoragePool; LogicalVolume to LocalVolume

    Ⅰ. Issue Description

    The project is named open-local, not open-lvm; it is supposed to support local storage solutions beyond LVM.

    Consider refactoring the code to make it more generic and ready to accept new local storage types.

    This seems to be a big job, especially since some of this naming is used in yaml files, so it would likely break backward compatibility.

    Not sure if this is feasible.

    opened by liyimeng 2
  • [bug] Pod scheduling problem

    [bug] Pod scheduling problem

    Ⅰ. Issue Description

    Pods with multiple pvc are scheduled to nodes with insufficient capacity, while other nodes can meet the capacity requirements of PVCs.

    Ⅱ. Describe what happened

    I deployed an example sts-nginx with 3 replicas on 4 nodes. Each pod mounts two PVCs, 1T and 100G, which are created on the node's volume groups 'vgdaasdata' and 'vgdaaslogs' respectively.

    1. Initially, the capacity of the 2 volume groups is 1.7T and 560G, which means that only one pod can be scheduled on each node; the cache of scheduler-extender is shown in Figure 1. (screenshot)

    2. Figure 2 shows that 5 PVs were successfully created and one was not created, because two pods were scheduled to the same node. At this time, the cache of scheduler-extender is shown in Figures 3 and 4. (screenshots)

    3. Then I deleted the STS and these PVCs; the cache is shown in Figure 5. (screenshot)

    Ⅲ. Describe what you expected to happen

    Only one pod can be scheduled on each node in this situation.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    Deploy a workload as I described.

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • Open-Local version: 0.5.5
    • OS (e.g. from /etc/os-release): centos 7.9
    • Kernel (e.g. uname -a):
    • Install tools: helm 3.0
    • Others: Kube-scheduler logs shown in Figure 6. (screenshot)
    opened by luokp 0
  • [feature] Extend to support zfs

    [feature] Extend to support zfs

    Why you need it?

    open-local is a general name, while only LVM is covered; extending it to support ZFS would make it better deserve the name ;)

    LVM has better performance and is less resource demanding, but ZFS has some nice features, like ensuring data integrity, that make it still a valid choice in many datacenters.

    How it could be?

    Since LVM and ZFS are so similar from an end-user perspective, I am wondering if it is possible to add a ZFS driver side by side and make open-local support ZFS as a backend storage option. I see OpenEBS has an option named local-zfs, but it looks pretty messy there. This project seems to provide much neater code; it could be a better base to implement a ZFS CSI driver. If ZFS is included, I believe the project will get much more attention.

    What do you think @TheBeatles1994?

    Other related information

    opened by liyimeng 6
Releases(v0.6.0)
  • v0.6.0(Nov 15, 2022)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support framework scheduler @chzhj #169 #174
    • [feature] add SPDK vhost support @caiwenzh #153
    • [enhancement] agent initialize resource when nls is changed @lihezhong93 #182
    • [enhancement] add unit test for csi node @TheBeatles1994 #180
    • [enhancement] add UT for csi controller server @TheBeatles1994 #179
    • [enhancement] remove v1beta1 version of volumesnapshot crds @TheBeatles1994 #175
    • [enhancement] adapter cgroup driver @jinriyang #162
    • [enhancement] adapt build to go module @liyimeng #147
    • [enhancement] bump apiVersion to snapshot.storage.k8s.io/v1 @jenting #146
    • [bugfix] volumesnapshot hang when deleting @TheBeatles1994 #186
    • [bugfix] fix node expand volume error and incorrect cache of extender @TheBeatles1994 #174
    • [bugfix] fix iscsid fail to start issue @caiwenzh #170
    • [bugfix] apk add xfsprogs-extra and blkid in Dockerfile @TheBeatles1994
    • [bugfix] fail to create ephemeral volume @TheBeatles1994
    • [bugfix] fix ephemeral device of direct volume may not be released issue @caiwenzh #160
    • [bugfix] slice bounds out of range when scoring @TheBeatles1994 #154
    • [bugfix] fix grafana servicemonitor port same with scheduler-extender service port name @luokp #149
    Source code(tar.gz)
    Source code(zip)
  • v0.5.5(Jun 1, 2022)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [enhancement] init resource with exponential backoff by @lihezhong93 #130
    • [enhancement] make unpublishvolume more robust by @TheBeatles1994 #137
    • [enhancement] optimize pod list performance under large-scale clusters by @TheBeatles1994 #139
    • [bugfix] fix open-local-device-ssd mediaType Spelling mistake by @jieerliansan #140
    • [bugfix] fix incorrect snapshot size by @TheBeatles1994 #135
    Source code(tar.gz)
    Source code(zip)
  • v0.5.4(Apr 26, 2022)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support ipv6 mode #125 #127
    • [bugfix] fail to wipe start of new LV when creating inline volume #123
    • [bugfix] fail to update updateStatus of nls when deploying in lower k8s version(<1.20) #122
    • [bugfix] slice bounds out of range when length of logicalvolume name is less than 3 #126
    Source code(tar.gz)
    Source code(zip)
  • v0.5.3(Mar 20, 2022)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support deleting orphaned volumesnapshotcontent #116
    • [feature] support turning off nls auto-updation of controller to be compatible with older versions #117
    Source code(tar.gz)
    Source code(zip)
  • v0.5.2(Mar 4, 2022)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support open-local controller #108
    • [feature] add grafana dashboard #113
    • [enhancement] support pushing to dockerhub #109
    • [enhancement] update readme #111 #112
    • [bugfix] build wrong arm64 image #110
    Source code(tar.gz)
    Source code(zip)
  • v0.5.1(Feb 20, 2022)

  • v0.5.0(Feb 16, 2022)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support ephemeral local volumes #101 #102
    • [enhancement] add introduction of vgName in param.md #103
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Dec 29, 2021)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support io-throttling #95
    • [enhancement] slim image size #92
    • [enhancement] remove mutex in agent #93
    Source code(tar.gz)
    Source code(zip)
  • v0.3.2(Dec 10, 2021)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [bugfix] error cache when creating snapshot pv #87
    • [enhancement] upgrade snapshot client to v4.2.0 #89
    Source code(tar.gz)
    Source code(zip)
  • v0.3.1(Dec 3, 2021)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [bugfix] csi-plugin crash when creating mountpoint pv #85
    • [bugfix] clean old data when deleting device pv #84
    • [bugfix] pvc pending when there are available devices on node #81
    • [bugfix] no events report in nls #80
    • [enhancement] refactor codes of metrics #77
    • [enhancement] more details when throwing error #79
    Source code(tar.gz)
    Source code(zip)
  • v0.3.0(Nov 28, 2021)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support raw block volume #57 #69
    • [bugfix] check targetPath is existed when found no mountpoint #71
    • [enhancement] add docs: user-guide and api #59
    • [enhancement] add grpc-connection-timeout param in subcommand csi #58
    • [enhancement] support open-simulator #60
    • [enhancement] Improve the readability of scheduling failure information
    • [enhancement] return fast when scoring
    • [enhancement] adjust the order of locks in the code to improve scheduling concurrency #75
    Source code(tar.gz)
    Source code(zip)
  • v0.2.3(Oct 9, 2021)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support csi.storage.k8s.io/fstype in storageclass #51
    • [bugfix] fix slice bounds out of range bug #56
    • [enhancement] report err event in nls when creating VG failed #49
    • [enhancement] add build/release workflow, support building image of arm64 architecture #53
    • [enhancement] generate markdown docs of commandline automatically #50
    • [enhancement] Update issue templates #23
    Source code(tar.gz)
    Source code(zip)
  • v0.2.2(Sep 20, 2021)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [bugfix] failed to create striped lv when add new PV in VG (tangtx3)
    • [bugfix] failed to mount mountpoint type PV in K8s 1.20 #25 (thebeatles1994)
    • [feature] support creating vg forcefully #18 (thebeatles1994)
    • [enhancement] chore: add markdown, misspell and golangci action for open-local (allencloud)
    • [enhancement] fix golangci-lint error #13 (thebeatles1994)
    Source code(tar.gz)
    Source code(zip)
  • v0.2.1(Sep 1, 2021)

    ChangeLog:

    • [bugfix]get env SCHEDULER_HOST first when expand volume
    • [bugfix]add leases in helm/templates/rbac.yaml
    • fix README-zh_CN doc url broken #4
    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Aug 27, 2021)

    ChangeLog:

    • Change driver name of open-local to local.csi.aliyun.com
    • Refactor code of ExpandVolume
    • Add user guide in English
    • Update chart to support k8s v1.18+ version
    • [Bugfix]Mountpoint does not exist when using PV of device type
    Source code(tar.gz)
    Source code(zip)
  • v0.1.1(Aug 7, 2021)

    ChangeLog:

    • Add README-zh_CN.md
    • Free the dependency from alibaba csi-provisioner
    • [Bugfix]LookupVolumeGroup error: Volume group xxx not found
    • [Bugfix]User open-local support patching resource persistentvolumeclaims/status
    Source code(tar.gz)
    Source code(zip)
  • v0.1.0(Aug 4, 2021)
