Overview

Open-Local

English | 简体中文

Open-Local is a local disk management system composed of multiple components. With Open-Local, using local storage in Kubernetes becomes as simple as using centralized storage.

Features

  • Local storage pool management
  • Dynamic volume provisioning
  • Extended scheduler
  • Volume expansion
  • Volume snapshot
  • Volume metrics

Overall Architecture

Open-Local contains three types of components:

  • Scheduler extender: an extension of the Kubernetes scheduler that adds the local storage scheduling algorithm
  • CSI plugins: provide the ability to create/delete volumes, expand volumes, and take volume snapshots (see the provisioning sketch after this list)
  • Agent: runs on each node in the K8s cluster and reports local storage device information to the scheduler extender
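
As a concrete illustration of dynamic provisioning with these components, a minimal StorageClass and PersistentVolumeClaim could look like the sketch below. This is a hedged example, not copied from the project: the class name, the vgName value, and the requested size are placeholder assumptions; the provisioner name local.csi.aliyun.com and the WaitForFirstConsumer binding mode match what appears in the issues and release notes further down this page.

    # Hedged sketch: class name, vgName value and size are placeholders
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: open-local-lvm
    provisioner: local.csi.aliyun.com
    parameters:
      vgName: open-local-pool-0
    volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: open-local-lvm
      resources:
        requests:
          storage: 10Gi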

Who uses Open-Local

Open-Local has been widely used in production environments. Products currently using Open-Local include:

  • Alibaba Cloud ECP (Enterprise Container Platform)
  • Alibaba Cloud ADP (Cloud-Native Application Delivery Platform)
  • AntStack Plus Products

User guide

More details here

License

Apache 2.0 License

Issues
  • Unable to install open-local on Minikube

    Unable to install open-local on Minikube

    Hello,

    I followed the installation guide here

    When I typed kubectl get po -nkube-system -l app=open-local the output was:

    NAME                                              READY   STATUS      RESTARTS   AGE
    open-local-agent-p2xdq                            3/3     Running     0          13m
    open-local-csi-provisioner-59cd8644ff-n52xc       1/1     Running     0          13m
    open-local-csi-resizer-554f54b5b4-xkw97           1/1     Running     0          13m
    open-local-csi-snapshotter-64dff4b689-9g9wl       1/1     Running     0          13m
    open-local-init-job--1-f9vzz                      0/1     Completed   0          13m
    open-local-init-job--1-j7j8b                      0/1     Completed   0          13m
    open-local-init-job--1-lmvqd                      0/1     Completed   0          13m
    open-local-scheduler-extender-5dc8d8bb49-n44pn    1/1     Running     0          13m
    open-local-snapshot-controller-846c8f6578-2bfhx   1/1     Running     0          13m
    

    However, when I typed kubectl get nodelocalstorage, I got this output:

    NAME       STATE   PHASE   AGENTUPDATEAT   SCHEDULERUPDATEAT   SCHEDULERUPDATESTATUS
    minikube                                                       
    

    According to the installation guide, the STATE column should display DiskReady.

    And if I ran kubectl get nls -o yaml, the output was:

    apiVersion: v1
    items:
    - apiVersion: csi.aliyun.com/v1alpha1
      kind: NodeLocalStorage
      metadata:
        creationTimestamp: "2021-09-20T13:37:09Z"
        generation: 1
        name: minikube
        resourceVersion: "615"
        uid: 6f193362-e2b2-4053-a6e6-81de35c96eaf
      spec:
        listConfig:
          devices: {}
          mountPoints:
            include:
            - /mnt/open-local/disk-[0-9]+
          vgs:
            include:
            - open-local-pool-[0-9]+
        nodeName: minikube
        resourceToBeInited:
          vgs:
          - devices:
            - /dev/sdb
            name: open-local-pool-0
    kind: List
    metadata:
      resourceVersion: ""
      selfLink: ""
    

    I am running Minikube on my desktop computer, which has an SSD.

    Thank you for your help.

    opened by alex-arica 42
  • Use an existing VG?

    Use an existing VG?

    Question

    Is it possible to use an existing VG with this project? I already have a PV and VG created, and the VG has 100GB free to create LVs.

    Would it be possible to configure open-local to create new LVs in the existing VG? If so, I would appreciate any help.
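
    A hedged sketch of one possible approach, based on the NodeLocalStorage spec shown in the Minikube issue above: add the existing VG's name to listConfig.vgs.include on the relevant node. The VG name my-existing-vg and the node name are placeholders, and whether open-local will adopt a VG it did not create itself may depend on the version.

    apiVersion: csi.aliyun.com/v1alpha1
    kind: NodeLocalStorage
    metadata:
      name: <node-name>              # placeholder
    spec:
      nodeName: <node-name>          # placeholder
      listConfig:
        vgs:
          include:
          - open-local-pool-[0-9]+
          - my-existing-vg           # placeholder for the pre-existing VG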

    documentation enhancement 
    opened by murkylife 21
  • Helm install CSIDriver error

    Helm install CSIDriver error

    System info

    Via uname -a && kubectl version && helm version && apt-show-versions lvm2 | grep amd:

    Linux master-node 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T18:03:20Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T17:57:25Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
    version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
    lvm2:amd64/bionic-updates 2.02.176-4.1ubuntu3.18.04.3 uptodate
    

    Bug Description

    Setup

    wget https://github.com/alibaba/open-local/archive/refs/tags/v0.1.1.zip
    unzip v0.1.1.zip
    cd open-local-0.1.1
    

    The Problem

    Using release 0.1.1 of open-local and following the current user guide's instructions, when I run

    helm install open-local ./helm
    

    I get the following error:

    Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "CSIDriver" in version "storage.k8s.io/v1beta1"
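
    The error indicates that the v0.1.1 chart still renders its CSIDriver object against storage.k8s.io/v1beta1, an API that no longer exists on Kubernetes 1.22. A hedged sketch of the GA form the object needs (the field values here are illustrative, not taken from the chart):

    apiVersion: storage.k8s.io/v1    # v1beta1 CSIDriver was removed in Kubernetes 1.22
    kind: CSIDriver
    metadata:
      name: local.csi.aliyun.com
    spec:
      attachRequired: false          # illustrative values, not the chart's
      podInfoOnMount: true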
    
    good first issue 
    opened by synkarius 9
  • device schedule with error Get Response StatusCode 500

    device schedule with error Get Response StatusCode 500

    Ⅰ. Issue Description

    An error is reported when creating a new PVC:

      Normal   WaitForFirstConsumer  2m13s                 persistentvolume-controller                                      waiting for first consumer to be created before binding
      Normal   ExternalProvisioning  11s (x11 over 2m13s)  persistentvolume-controller                                      waiting for a volume to be created, either by external provisioner "local.csi.aliyun.com" or manually created by system administrator
      Normal   Provisioning          6s (x8 over 2m13s)    local.csi.aliyun.com_node1_080636cb-a68d-4ee8-a3a3-db5ae5634cbb  External provisioner is provisioning volume for claim "demo/pvc-open-local-device-hdd-test2-0-d0"
      Warning  ProvisioningFailed    6s (x8 over 2m13s)    local.csi.aliyun.com_node1_080636cb-a68d-4ee8-a3a3-db5ae5634cbb  failed to provision volume with StorageClass "open-local-device-hdd": rpc error: code = InvalidArgument desc = Parse Device part schedule info error rpc error: code = InvalidArgument desc = device schedule with error Get Response StatusCode 500, Response: failed to allocate local storage for pvc demo/pvc-open-local-device-hdd-test2-0-d0: Insufficient Device storage, requested 0, available 0, capacity 0
    

    Ⅱ. Describe what happened

    I deployed open-local in a k3s cluster, but because k3s has no kube-scheduler manifest files, the init-job cannot run properly.

    modifying kube-scheduler.yaml...
    grep: /etc/kubernetes/manifests/kube-scheduler.yaml: No such file or directory
    + sed -i '/  hostNetwork: true/a \  dnsPolicy: ClusterFirstWithHostNet' /etc/kubernetes/manifests/kube-scheduler.yaml
    sed: can't read /etc/kubernetes/manifests/kube-scheduler.yaml: No such file or directory
    

    All other related components are running fine:

    NAME                                              READY   STATUS    RESTARTS   AGE
    open-local-agent-7sd9d                            3/3     Running   0          22h
    open-local-csi-provisioner-785b7f99bd-hlqdv       1/1     Running   0          22h
    open-local-agent-8kg4r                            3/3     Running   0          22h
    open-local-agent-jljlv                            3/3     Running   0          22h
    open-local-scheduler-extender-5d48bc465c-r42pn    1/1     Running   0          22h
    open-local-snapshot-controller-785987975c-hhgr7   1/1     Running   0          22h
    open-local-csi-snapshotter-5f797c4596-wml76       1/1     Running   0          22h
    open-local-csi-resizer-7c9698976f-f7tzz           1/1     Running   0          22h
    
    master1 [~]$ kubectl get nodelocalstorage -ojson master1|jq .status.filteredStorageInfo
    {
      "updateStatusInfo": {
        "lastUpdateTime": "2021-11-12T11:02:56Z",
        "updateStatus": "accepted"
      },
      "volumeGroups": [
        "open-local-pool-0"
      ]
    }
    master1 [~]$ kubectl get nodelocalstorage -ojson node1|jq .status.filteredStorageInfo
    {
      "updateStatusInfo": {
        "lastUpdateTime": "2021-11-12T11:01:56Z",
        "updateStatus": "accepted"
      },
      "volumeGroups": [
        "open-local-pool-0"
      ]
    }
    master1 [~]$ kubectl get nodelocalstorage -ojson node2|jq .status.filteredStorageInfo
    {
      "updateStatusInfo": {
        "lastUpdateTime": "2021-11-12T11:02:56Z",
        "updateStatus": "accepted"
      },
      "volumeGroups": [
        "open-local-pool-0"
      ]
    }
    

    So I suspect a scheduling problem is preventing the PVC/PV from being created properly.

    Ⅲ. Describe what you expected to happen

    I hope open-local can be deployed and run normally on k3s.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. Suggestion: implement the scheduling customization the way k8s-scheduler-extender does, so that k3s and other platforms are also supported.

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • Open-Local version:
    • OS (e.g. from /etc/os-release):
    • Kernel (e.g. uname -a):
    • Install tools:
    • Others:
    opened by Seryta 7
  • make failed

    make failed

    Ⅰ. Issue Description

    I checked out the repo and ran make; it seems to fail out of the box. Am I missing something?

    Ⅱ. Describe what happened

    [email protected]:~/workspace/picloud/open-local$ make
    go test -v ./...
    ?       github.com/alibaba/open-local/cmd       [no test files]
    ?       github.com/alibaba/open-local/cmd/agent [no test files]
    ?       github.com/alibaba/open-local/cmd/controller    [no test files]
    ?       github.com/alibaba/open-local/cmd/csi   [no test files]
    ?       github.com/alibaba/open-local/cmd/doc   [no test files]
    time="2022-06-08T12:15:26+02:00" level=info msg="test noResyncPeriodFunc"
    time="2022-06-08T12:15:26+02:00" level=info msg="test noResyncPeriodFunc"
    time="2022-06-08T12:15:26+02:00" level=info msg="test noResyncPeriodFunc"
    time="2022-06-08T12:15:26+02:00" level=info msg="Waiting for informer caches to sync"
    time="2022-06-08T12:15:26+02:00" level=info msg="starting http server on port 23000"
    time="2022-06-08T12:15:26+02:00" level=info msg="all informer caches are synced"
    === RUN   TestVGWithName
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod testpod with nodes [[node-192.168.0.1 node-192.168.0.2 node-192.168.0.3 node-192.168.0.4]]"
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod default/testpod with node node-192.168.0.1"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="allocating lvm volume for pod default/testpod"
    time="2022-06-08T12:15:26+02:00" level=error msg="Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi"
    time="2022-06-08T12:15:26+02:00" level=info msg="fits: false,failReasons: [Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi], err: Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi"
    time="2022-06-08T12:15:26+02:00" level=info msg="pod=default/testpod, node=node-192.168.0.1,fits: false,failReasons: [Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="node node-192.168.0.1 is not suitable for pod default/testpod, reason: [Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi] "
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod default/testpod with node node-192.168.0.2"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="allocating lvm volume for pod default/testpod"
    time="2022-06-08T12:15:26+02:00" level=info msg="node node-192.168.0.2 is capable of lvm 1 pvcs"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="fits: true,failReasons: [], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="pod=default/testpod, node=node-192.168.0.2,fits: true,failReasons: [], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod default/testpod with node node-192.168.0.3"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="allocating lvm volume for pod default/testpod"
    time="2022-06-08T12:15:26+02:00" level=info msg="node node-192.168.0.3 is capable of lvm 1 pvcs"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="fits: true,failReasons: [], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="pod=default/testpod, node=node-192.168.0.3,fits: true,failReasons: [], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod default/testpod with node node-192.168.0.4"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="allocating lvm volume for pod default/testpod"
    time="2022-06-08T12:15:26+02:00" level=error msg="no vg(LVM) named ssd in node node-192.168.0.4"
    time="2022-06-08T12:15:26+02:00" level=info msg="fits: false,failReasons: [no vg(LVM) named ssd in node node-192.168.0.4], err: no vg(LVM) named ssd in node node-192.168.0.4"
    time="2022-06-08T12:15:26+02:00" level=info msg="pod=default/testpod, node=node-192.168.0.4,fits: false,failReasons: [no vg(LVM) named ssd in node node-192.168.0.4], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="node node-192.168.0.4 is not suitable for pod default/testpod, reason: [no vg(LVM) named ssd in node node-192.168.0.4] "
    unexpected fault address 0x0
    fatal error: fault
    [signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x46845f]
    
    goroutine 91 [running]:
    runtime.throw({0x178205e?, 0x18?})
            /usr/local/go/src/runtime/panic.go:992 +0x71 fp=0xc0004d71e8 sp=0xc0004d71b8 pc=0x4380b1
    runtime.sigpanic()
            /usr/local/go/src/runtime/signal_unix.go:825 +0x305 fp=0xc0004d7238 sp=0xc0004d71e8 pc=0x44e485
    aeshashbody()
            /usr/local/go/src/runtime/asm_amd64.s:1343 +0x39f fp=0xc0004d7240 sp=0xc0004d7238 pc=0x46845f
    runtime.mapiternext(0xc000788780)
            /usr/local/go/src/runtime/map.go:934 +0x2cb fp=0xc0004d72b0 sp=0xc0004d7240 pc=0x411beb
    runtime.mapiterinit(0x0?, 0x8?, 0x1?)
            /usr/local/go/src/runtime/map.go:861 +0x228 fp=0xc0004d72d0 sp=0xc0004d72b0 pc=0x4118c8
    reflect.mapiterinit(0xc000039cf8?, 0xc0004d7358?, 0x461365?)
            /usr/local/go/src/runtime/map.go:1373 +0x19 fp=0xc0004d72f8 sp=0xc0004d72d0 pc=0x464b79
    github.com/modern-go/reflect2.(*UnsafeMapType).UnsafeIterate(...)
            /home/lee/workspace/picloud/open-local/vendor/github.com/modern-go/reflect2/unsafe_map.go:112
    github.com/json-iterator/go.(*sortKeysMapEncoder).Encode(0xc00058f230, 0xc000497f00, 0xc000039ce0)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect_map.go:291 +0x225 fp=0xc0004d7468 sp=0xc0004d72f8 pc=0x8553e5
    github.com/json-iterator/go.(*structFieldEncoder).Encode(0xc00058f350, 0x1436da0?, 0xc000039ce0)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect_struct_encoder.go:110 +0x56 fp=0xc0004d74e0 sp=0xc0004d7468 pc=0x862b36
    github.com/json-iterator/go.(*structEncoder).Encode(0xc00058f3e0, 0x0?, 0xc000039ce0)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect_struct_encoder.go:158 +0x765 fp=0xc0004d75c8 sp=0xc0004d74e0 pc=0x863545
    github.com/json-iterator/go.(*OptionalEncoder).Encode(0xc00013bb80?, 0x0?, 0x0?)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect_optional.go:70 +0xa4 fp=0xc0004d7618 sp=0xc0004d75c8 pc=0x85a744
    github.com/json-iterator/go.(*onePtrEncoder).Encode(0xc0004b3210, 0xc000497ef0, 0xc000497f50?)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect.go:219 +0x82 fp=0xc0004d7650 sp=0xc0004d7618 pc=0x84d7c2
    github.com/json-iterator/go.(*Stream).WriteVal(0xc000039ce0, {0x158a3e0, 0xc000497ef0})
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect.go:98 +0x158 fp=0xc0004d76c0 sp=0xc0004d7650 pc=0x84cad8
    github.com/json-iterator/go.(*frozenConfig).Marshal(0xc00013bb80, {0x158a3e0, 0xc000497ef0})
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/config.go:299 +0xc9 fp=0xc0004d7758 sp=0xc0004d76c0 pc=0x843d89
    github.com/alibaba/open-local/pkg/scheduler/server.PredicateRoute.func1({0x19bfee0, 0xc00019c080}, 0xc000318000, {0x203000?, 0xc00062b928?, 0xc00062b84d?})
            /home/lee/workspace/picloud/open-local/pkg/scheduler/server/routes.go:83 +0x326 fp=0xc0004d7878 sp=0xc0004d7758 pc=0x132d5e6
    github.com/alibaba/open-local/pkg/scheduler/server.DebugLogging.func1({0x19cafb0?, 0xc0005a80e0}, 0xc000056150?, {0x0, 0x0, 0x0})
            /home/lee/workspace/picloud/open-local/pkg/scheduler/server/routes.go:217 +0x267 fp=0xc0004d7988 sp=0xc0004d7878 pc=0x132e4a7
    github.com/julienschmidt/httprouter.(*Router).ServeHTTP(0xc0000b0de0, {0x19cafb0, 0xc0005a80e0}, 0xc000318000)
            /home/lee/workspace/picloud/open-local/vendor/github.com/julienschmidt/httprouter/router.go:387 +0x82b fp=0xc0004d7a98 sp=0xc0004d7988 pc=0x12d61ab
    net/http.serverHandler.ServeHTTP({0x19bc700?}, {0x19cafb0, 0xc0005a80e0}, 0xc000318000)
            /usr/local/go/src/net/http/server.go:2916 +0x43b fp=0xc0004d7b58 sp=0xc0004d7a98 pc=0x7e87fb
    net/http.(*conn).serve(0xc0001da3c0, {0x19cbab0, 0xc0001b68a0})
            /usr/local/go/src/net/http/server.go:1966 +0x5d7 fp=0xc0004d7fb8 sp=0xc0004d7b58 pc=0x7e3cb7
    net/http.(*Server).Serve.func3()
            /usr/local/go/src/net/http/server.go:3071 +0x2e fp=0xc0004d7fe0 sp=0xc0004d7fb8 pc=0x7e914e
    runtime.goexit()
            /usr/local/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0004d7fe8 sp=0xc0004d7fe0 pc=0x46b061
    created by net/http.(*Server).Serve
            /usr/local/go/src/net/http/server.go:3071 +0x4db
    
    goroutine 1 [chan receive]:
    testing.(*T).Run(0xc000103ba0, {0x178cc75?, 0x516ac5?}, 0x18541b0)
            /usr/local/go/src/testing/testing.go:1487 +0x37a
    testing.runTests.func1(0xc0001b69c0?)
            /usr/local/go/src/testing/testing.go:1839 +0x6e
    testing.tRunner(0xc000103ba0, 0xc00064bcd8)
            /usr/local/go/src/testing/testing.go:1439 +0x102
    testing.runTests(0xc00050a0a0?, {0x2540700, 0x7, 0x7}, {0x7fa22c405a68?, 0x40?, 0x2557740?})
            /usr/local/go/src/testing/testing.go:1837 +0x457
    testing.(*M).Run(0xc00050a0a0)
            /usr/local/go/src/testing/testing.go:1719 +0x5d9
    main.main()
            _testmain.go:59 +0x1aa
    
    goroutine 19 [chan receive]:
    k8s.io/klog/v2.(*loggingT).flushDaemon(0x0?)
            /home/lee/workspace/picloud/open-local/vendor/k8s.io/klog/v2/klog.go:1169 +0x6a
    created by k8s.io/klog/v2.init.0
            /home/lee/workspace/picloud/open-local/vendor/k8s.io/klog/v2/klog.go:417 +0xf6
    
    goroutine 92 [IO wait]:
    internal/poll.runtime_pollWait(0x7fa204607b38, 0x72)
            /usr/local/go/src/runtime/netpoll.go:302 +0x89
    internal/poll.(*pollDesc).wait(0xc0003c6100?, 0xc00050c2e1?, 0x0)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
    internal/poll.(*pollDesc).waitRead(...)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
    internal/poll.(*FD).Read(0xc0003c6100, {0xc00050c2e1, 0x1, 0x1})
            /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
    net.(*netFD).Read(0xc0003c6100, {0xc00050c2e1?, 0xc000613628?, 0xc00061e000?})
            /usr/local/go/src/net/fd_posix.go:55 +0x29
    net.(*conn).Read(0xc000612180, {0xc00050c2e1?, 0xc0005147a0?, 0x985846?})
            /usr/local/go/src/net/net.go:183 +0x45
    net/http.(*connReader).backgroundRead(0xc00050c2d0)
            /usr/local/go/src/net/http/server.go:672 +0x3f
    created by net/http.(*connReader).startBackgroundRead
            /usr/local/go/src/net/http/server.go:668 +0xca
    
    goroutine 43 [select]:
    net/http.(*persistConn).roundTrip(0xc00056a360, 0xc0006420c0)
            /usr/local/go/src/net/http/transport.go:2620 +0x974
    net/http.(*Transport).roundTrip(0x25410e0, 0xc0004c6600)
            /usr/local/go/src/net/http/transport.go:594 +0x7c9
    net/http.(*Transport).RoundTrip(0x40f405?, 0x19b3900?)
            /usr/local/go/src/net/http/roundtrip.go:17 +0x19
    net/http.send(0xc0004c6600, {0x19b3900, 0x25410e0}, {0x172b2a0?, 0x178c601?, 0x0?})
            /usr/local/go/src/net/http/client.go:252 +0x5d8
    net/http.(*Client).send(0x2556ec0, 0xc0004c6600, {0xd?, 0x1788f4f?, 0x0?})
            /usr/local/go/src/net/http/client.go:176 +0x9b
    net/http.(*Client).do(0x2556ec0, 0xc0004c6600)
            /usr/local/go/src/net/http/client.go:725 +0x8f5
    net/http.(*Client).Do(...)
            /usr/local/go/src/net/http/client.go:593
    net/http.(*Client).Post(0x17b1437?, {0xc000492480?, 0xc00054bdc8?}, {0x178f761, 0x10}, {0x19b0fe0?, 0xc0001b6a20?})
            /usr/local/go/src/net/http/client.go:858 +0x148
    net/http.Post(...)
            /usr/local/go/src/net/http/client.go:835
    github.com/alibaba/open-local/cmd/scheduler.predicateFunc(0xc0000f9800, {0x253ebe0, 0x4, 0x4})
            /home/lee/workspace/picloud/open-local/cmd/scheduler/extender_test.go:348 +0x1e8
    github.com/alibaba/open-local/cmd/scheduler.TestVGWithName(0x4082b9?)
            /home/lee/workspace/picloud/open-local/cmd/scheduler/extender_test.go:135 +0x17e
    testing.tRunner(0xc000103d40, 0x18541b0)
            /usr/local/go/src/testing/testing.go:1439 +0x102
    created by testing.(*T).Run
            /usr/local/go/src/testing/testing.go:1486 +0x35f
    
    goroutine 87 [IO wait]:
    internal/poll.runtime_pollWait(0x7fa204607d18, 0x72)
            /usr/local/go/src/runtime/netpoll.go:302 +0x89
    internal/poll.(*pollDesc).wait(0xc00003a580?, 0xc000064000?, 0x0)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
    internal/poll.(*pollDesc).waitRead(...)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
    internal/poll.(*FD).Accept(0xc00003a580)
            /usr/local/go/src/internal/poll/fd_unix.go:614 +0x22c
    net.(*netFD).accept(0xc00003a580)
            /usr/local/go/src/net/fd_unix.go:172 +0x35
    net.(*TCPListener).accept(0xc0001301e0)
            /usr/local/go/src/net/tcpsock_posix.go:139 +0x28
    net.(*TCPListener).Accept(0xc0001301e0)
            /usr/local/go/src/net/tcpsock.go:288 +0x3d
    net/http.(*Server).Serve(0xc0000dc2a0, {0x19cada0, 0xc0001301e0})
            /usr/local/go/src/net/http/server.go:3039 +0x385
    net/http.(*Server).ListenAndServe(0xc0000dc2a0)
            /usr/local/go/src/net/http/server.go:2968 +0x7d
    net/http.ListenAndServe(...)
            /usr/local/go/src/net/http/server.go:3222
    github.com/alibaba/open-local/pkg/scheduler/server.(*ExtenderServer).InitRouter.func1()
            /home/lee/workspace/picloud/open-local/pkg/scheduler/server/web.go:185 +0x157
    created by github.com/alibaba/open-local/pkg/scheduler/server.(*ExtenderServer).InitRouter
            /home/lee/workspace/picloud/open-local/pkg/scheduler/server/web.go:182 +0x478
    
    goroutine 49 [IO wait]:
    internal/poll.runtime_pollWait(0x7fa204607c28, 0x72)
            /usr/local/go/src/runtime/netpoll.go:302 +0x89
    internal/poll.(*pollDesc).wait(0xc00003a800?, 0xc000639000?, 0x0)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
    internal/poll.(*pollDesc).waitRead(...)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
    internal/poll.(*FD).Read(0xc00003a800, {0xc000639000, 0x1000, 0x1000})
            /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
    net.(*netFD).Read(0xc00003a800, {0xc000639000?, 0x17814b4?, 0x0?})
            /usr/local/go/src/net/fd_posix.go:55 +0x29
    net.(*conn).Read(0xc000495a38, {0xc000639000?, 0x19ce530?, 0xc000370ea0?})
            /usr/local/go/src/net/net.go:183 +0x45
    net/http.(*persistConn).Read(0xc00056a360, {0xc000639000?, 0x40757d?, 0x60?})
            /usr/local/go/src/net/http/transport.go:1929 +0x4e
    bufio.(*Reader).fill(0xc000522a80)
            /usr/local/go/src/bufio/bufio.go:106 +0x103
    bufio.(*Reader).Peek(0xc000522a80, 0x1)
            /usr/local/go/src/bufio/bufio.go:144 +0x5d
    net/http.(*persistConn).readLoop(0xc00056a360)
            /usr/local/go/src/net/http/transport.go:2093 +0x1ac
    created by net/http.(*Transport).dialConn
            /usr/local/go/src/net/http/transport.go:1750 +0x173e
    
    goroutine 178 [select]:
    net/http.(*persistConn).writeLoop(0xc00056a360)
            /usr/local/go/src/net/http/transport.go:2392 +0xf5
    created by net/http.(*Transport).dialConn
            /usr/local/go/src/net/http/transport.go:1751 +0x1791
    FAIL    github.com/alibaba/open-local/cmd/scheduler     0.177s
    ?       github.com/alibaba/open-local/cmd/version       [no test files]
    ?       github.com/alibaba/open-local/pkg       [no test files]
    ?       github.com/alibaba/open-local/pkg/agent/common  [no test files]
    === RUN   TestNewAgent
    
    

    Ⅲ. Describe what you expected to happen

    make should run through.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. git clone https://github.com/alibaba/open-local.git
    2. cd open-local
    3. make
    4. failed

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • Open-Local version: main branch
    • OS (e.g. from /etc/os-release): ubuntu 22.04
    • Kernel (e.g. uname -a): 5.15.0-33
    • Install tools:
    • Others:
    opened by liyimeng 4
  • [bug] Extender fails to update some nls after the nls resources are deleted and rebuilt in a large-scale cluster

    [bug] Extender fails to update some nls after the nls resources are deleted and rebuilt in a large-scale cluster

    After deleting nls objects in bulk, the nls are recreated normally, but the extender reports errors when patching them:

    time="2021-11-26T14:50:31+08:00" level=debug msg="get update on node local cache izbp1277upijzx9vn1t003z"
    time="2021-11-26T14:50:31+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:31+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzx9vn1t003z\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:31+08:00" level=debug msg="get update on node local cache izbp1277upijzwo2cqyljpz"
    time="2021-11-26T14:50:31+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:31+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzwo2cqyljpz\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:31+08:00" level=debug msg="get update on node local cache izbp1277upijzwo2cqyli4z"
    time="2021-11-26T14:50:31+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:31+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzwo2cqyli4z\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:31+08:00" level=debug msg="get update on node local cache izbp1277upijzx9vn1t01pz"
    time="2021-11-26T14:50:31+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:32+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzx9vn1t01pz\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:32+08:00" level=debug msg="get update on node local cache izbp1277upijzwo2cqyljmz"
    time="2021-11-26T14:50:32+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:32+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzwo2cqyljmz\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:32+08:00" level=debug msg="get update on node local cache izbp14kyqi4fdsb7ax48itz"
    time="2021-11-26T14:50:32+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:32+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp14kyqi4fdsb7ax48itz\": the object has been modified; please apply your changes to the latest version and try again"
    

    The impact is that the extender cannot update the nls status, so applications cannot use the storage devices on those nodes.

    This was tested in a large-scale scenario.

    bug 
    opened by TheBeatles1994 4
  • VG Create Error

    VG Create Error

    Ⅰ. Issue Description

    The NodeLocalStorage status is null (see attached screenshot).

    Ⅱ. Describe what happened

    Open-Local was installed via the Helm chart. A raw device was added to a Kubernetes worker, but the VG on that worker was not created. Checking the NLS status, it was null.

    Ⅲ. Describe what you expected to happen

    The VG is created on the worker from the raw device.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. helm install open-local
    2. Run vgs on the worker
    3. kubectl get nls

    Ⅴ. Anything else we need to know?

    Screenshots are attached for: 1. the scheduler process; 2. scheduler-policy-config.json; 3. driver-registrar logs; 4. agent logs; 5. scheduler-extender logs; 6. NodeLocalStorageInitConfig; 7. the raw device on the worker.

    Ⅵ. Environment:

    • Kubernetes version: (screenshot)
    • Open-Local version: (screenshot)
    • OS (e.g. from /etc/os-release): (screenshot)
    • Kernel (e.g. uname -a): (screenshot)
    • Install tools: (screenshot)

    Thanks

    bug 
    opened by liyongxian 3
  • [bug] create inline volume error

    [bug] create inline volume error

    Ⅰ. Issue Description

    create inline volume error

    Ⅱ. Describe what happened

      Normal   Scheduled    18s               default-scheduler  Successfully assigned default/file-server-6b9b66fb7c-bqbgl to block4
      Warning  FailedMount  1s (x6 over 18s)  kubelet            MountVolume.SetUp failed for volume "webroot" : rpc error: code = Internal desc = NodePublishVolume(mountLvmFS): mount lvm volume csi-993f2d33ec8c5892c833a038cfafa0e364df02be647edc83bc2f66d5435871ae with path /var/lib/kubelet/pods/f65659a8-7271-4600-b43b-39d692967276/volumes/kubernetes.io~csi/webroot/mount with error: rpc error: code = Internal desc = Failed to run cmd: /bin/nsenter --mount=/proc/1/ns/mnt --ipc=/proc/1/ns/ipc --net=/proc/1/ns/net --uts=/proc/1/ns/uts  lvcreate -n csi-993f2d33ec8c5892c833a038cfafa0e364df02be647edc83bc2f66d5435871ae -L 1024m open-local-pool-0, with out: WARNING: ext4 signature detected on /dev/open-local-pool-0/csi-993f2d33ec8c5892c833a038cfafa0e364df02be647edc83bc2f66d5435871ae at offset 1080. Wipe it? [y/n]: [n]
      Aborted wiping of ext4.
      1 existing signature left on the device.
      Failed to wipe signatures on logical volume open-local-pool-0/csi-993f2d33ec8c5892c833a038cfafa0e364df02be647edc83bc2f66d5435871ae.
      Aborting. Failed to wipe start of new LV.
    , with error: exit status 5
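
    For context, a CSI ephemeral (inline) volume such as the "webroot" volume above is declared directly in the pod spec, roughly as in this hedged sketch; the image and the volumeAttributes keys shown are assumptions for illustration, not confirmed parameter names:

      apiVersion: v1
      kind: Pod
      metadata:
        name: file-server
      spec:
        containers:
        - name: web
          image: nginx                  # placeholder image
          volumeMounts:
          - name: webroot
            mountPath: /usr/share/nginx/html
        volumes:
        - name: webroot
          csi:
            driver: local.csi.aliyun.com
            volumeAttributes:
              vgName: open-local-pool-0   # assumed attribute name
              size: 1Gi                   # assumed attribute name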
    

    Ⅲ. Anything else we need to know?

    createLvm(inline volume)

    cmd := fmt.Sprintf("%s lvcreate -n %s -L %d%s %s", localtype.NsenterCmd, volumeID, pvSize, unit, vgName)
    

    CreateLV

    args := []string{localtype.NsenterCmd, "lvcreate", "-n", name, "-L", fmt.Sprintf("%db", size), "-W", "y", "-y"}
    
    bug 
    opened by halfme 2
  • open-local agent produces too many error messages if volume group auto-creation fails

    open-local agent produces too many error messages if volume group auto-creation fails

    Ⅰ. Issue Description

    The open-local agent produces too many error messages if volume group auto-creation fails.

    Ⅱ. Describe what happened

    If the agent fails to create the volume group, the error is generated again and again (see attached screenshot).

    These errors may drown out the system's informational messages.

    Ⅲ. Describe what you expected to happen

    It would be good to introduce a backoff mechanism to reduce the noise.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. specify init configuration to create vg with an occupied disk
    2. check the log in the agent

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • Open-Local version: v0.4.0
    • OS (e.g. from /etc/os-release):
    • Kernel (e.g. uname -a):
    • Install tools:
    • Others:
    enhancement 
    opened by peter-wangxu 2
  • Add value to optionally turn off init-job

    Add value to optionally turn off init-job

    On my single node cluster, the init-job is permanently pending due to a node affinity issue (my node is the master).

    The init-job makes many assumptions about the setup open-local is running on, and should be optional in case it's not needed.

    opened by murkylife 2
  • After a new PV is added to the VG, creating a striped logical volume still fails

    After a new PV is added to the VG, creating a striped logical volume still fails

    I tried to create a 3Gi 'striped' PVC, but it failed; the log is as follows:

    E0811 01:33:55.208394 1 controller.go:920] error syncing claim "90e63057-6723-40fd-83d2-39199018c149": failed to provision volume with StorageClass "test": rpc error: code = Unknown desc = Create Lvm with error rpc error: code = Internal desc = failed to create lv: Failed to run cmd: /nsenter --mount=/proc/1/ns/mnt --ipc=/proc/1/ns/ipc --net=/proc/1/ns/net --uts=/proc/1/ns/uts lvcreate -n disk-90e63057-6723-40fd-83d2-39199018c149 -L 3221225472b -W y -y -i 2 vg_test with out: Using default stripesize 64.00 KiB. Insufficient suitable allocatable extents for logical volume disk-90e63057-6723-40fd-83d2-39199018c149: 494 more required, with error: exit status 5

    PV info

    [[email protected] ~]# pvs -S vg_name=vg_test
      PV         VG      Fmt  Attr PSize   PFree
      /dev/sdb   vg_test lvm2 a--  <20.00g 540m
      /dev/sdc   vg_test lvm2 a--  <30.00g <10.54g

    After I added a 30G PV (/dev/sdd) to the VG, creating the volume still failed; the log is as follows:

    E0811 01:33:55.208394 1 controller.go:920] error syncing claim "4b3a6936-8526-41f0-aa3d-f9e04bad4f3f": failed to provision volume with StorageClass "test": rpc error: code = Unknown desc = Create Lvm with error rpc error: code = Internal desc = failed to create lv: Failed to run cmd: /nsenter --mount=/proc/1/ns/mnt --ipc=/proc/1/ns/ipc --net=/proc/1/ns/net --uts=/proc/1/ns/uts lvcreate -n disk-4b3a6936-8526-41f0-aa3d-f9e04bad4f3f -L 3221225472b -W y -y -i 3 vg_test with out: Using default stripesize 64.00 KiB. Insufficient suitable allocatable extents for logical volume disk-4b3a6936-8526-41f0-aa3d-f9e04bad4f3f: 238 more required, with error: exit status 5

    PV info

    [[email protected] ~]# pvs -S vg_name=vg_test
      PV         VG      Fmt  Attr PSize   PFree
      /dev/sdb   vg_test lvm2 a--  <20.00g 540m
      /dev/sdc   vg_test lvm2 a--  <30.00g <10.54g
      /dev/sdd   vg_test lvm2 a--  <30.00g <30.00g

    Then I looked into the problem and found it is a bug. No matter how many new PVs I added, I couldn't create a 'striped' volume: allocation is limited by the PV with the lowest free capacity.

    The code is as follows: https://github.com/alibaba/open-local/blob/e6feccc9591a755810aab375723aa4514dc6a1f1/pkg/csi/server/commands.go#L100

    If you're creating a 'striped' volume, I don't think it's right to pass the count of all PVs in the VG as the '-i' parameter. I've optimized the code; I hope it helps.

    if striping {
        pvCount, err := getRequiredPVNumber(vg, size)
        if err != nil {
            return "", err
        }
        if pvCount == 0 {
            return "", fmt.Errorf("could not create `striping` logical volume, not enough space")
        }
        args = append(args, "-i", strconv.Itoa(pvCount))
    }
    
    // getRequiredPVNumber returns how many PVs in the VG can each hold an equal
    // share of the requested LV size: PVs whose free space is below the average
    // share are dropped and the share is recomputed until the set is stable.
    func getRequiredPVNumber(vgName string, lvSize uint64) (int, error) {
        pvs, err := ListPV(vgName)
        if err != nil {
            return 0, err
        }
        pvCount := len(pvs)
        for pvCount > 0 {
            avgPvRequest := lvSize / uint64(pvCount)
            // filter into a fresh slice instead of deleting elements while
            // ranging over pvs, which would skip entries
            remaining := pvs[:0]
            for _, pv := range pvs {
                if pv.FreeSize >= avgPvRequest {
                    remaining = append(remaining, pv)
                }
            }
            pvs = remaining
            if pvCount == len(pvs) {
                break
            }
            pvCount = len(pvs)
        }
        return pvCount, nil
    }
    
    
    good first issue 
    opened by tangtx3 2
  • [feature] Extend to support zfs

    [feature] Extend to support zfs

    Why you need it?

    open-local is a general name, but only LVM is covered; extending it to support ZFS would make it better deserve the name ;)

    LVM has better performance and is less resource demanding, but ZFS has nice features, such as data-integrity guarantees, that still make it a valid choice in many datacenters.

    How it could be?

    Given that LVM and ZFS are so similar from an end-user perspective, I am wondering whether it is possible to add a ZFS driver side by side and make open-local support ZFS as a backend storage option. I see OpenEBS has an option named local-zfs, but it looks pretty messy there. This project seems to provide much neater code and could be a better base for implementing a ZFS CSI driver. If ZFS were included, I believe the project would get much more attention.

    What do you think, @TheBeatles1994?

    Other related information

    opened by liyimeng 6
  • [feature] CSI Volume Health

    [feature] CSI Volume Health

    Why you need it?

    Since we are about to migrate to ACK-D 1.22, it would be nice to have the CSIVolumeHealth alpha feature gate enabled, allowing us to detect abnormal volume conditions in a sensible fashion and making Open-Local provide a more complete implementation of the CSI spec.

    Check https://kubernetes.io/docs/concepts/storage/volume-health-monitoring/

    How it could be?

    N/A.

    Other related information

    opened by nashtsai 0
  • [feature] support SPDK

    [feature] support SPDK

    Why you need it?

    vhost-user-blk/scsi is a highly efficient way to transport data in virtualized environments. Open-Local currently doesn't support vhost-user-blk/scsi.

    How it could be?

    The Storage Performance Development Kit (SPDK) can provide vhost support. To support vhost-user-blk/scsi in open-local, the node CSI driver should communicate with SPDK. The following is a brief description:

    .  NodeStageVolume / NodeUnStageVolume
        n/a
    .  NodePublishVolume
        -  Create bdev
            # scripts/rpc.py bdev_aio_create <path_to_host_block_dev> <bdev_name>
            # scripts/rpc.py bdev_lvol_create_lvstore <bdev_name> <lvs_name >
            # scripts/rpc.py bdev_lvol_create  <lvol_name> <size> -l <lvs_name>
        -  Create vhost device
            # scripts/rpc.py vhost_create_blk_controller --cpumask 0x1 vhostblk0 <bdev_name>
            # mknod /var/run/kata-containers/vhost-user/block/devices/vhostblk0 b 241 0
            # mount --bind [...] /var/run/kata-containers/vhost-user/block/devices/vhostblk0 <target_path>
    .  NodeUnPublishVolume
           # umount <target_path>
           # scripts/rpc.py bdev_lvol_delete  <lvol_name>
           # rm /var/run/kata-containers/vhost-user/block/devices/vhostblk0
    

    Besides, we need to add a field in nlsc and nls to indicate whether the storage is provided by SPDK.

    (screenshot of the proposed field attached)

    Other related information

    opened by caiwenzh 5
  • kube-scheduler flag policy-config-file removed from v1.23

    kube-scheduler flag policy-config-file removed from v1.23

    Ⅰ. Issue Description

    The current open-local Helm chart uses a job to append the policy-config-file flag to kube-scheduler, but that flag was removed in v1.23, which causes an error.

    Ⅱ. Describe what happened

    The scheduler crashes:

    Error: unknown flag: --policy-config-file
    

    https://sourcegraph.com/github.com/kubernetes/kubernetes/-/blob/CHANGELOG/CHANGELOG-1.23.md?L1974

    The legacy scheduler policy config is removed in v1.23, the associated flags policy-config-file, policy-configmap, policy-configmap-namespace and use-legacy-policy-config are also removed. Migrate to Component Config instead, see https://kubernetes.io/docs/reference/scheduling/config/ for details. (#105424, @kerthcet) [SIG Scheduling and Testing]
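
    Since the legacy policy API is gone, the extender has to be registered through the scheduler's Component Config instead. A rough, hedged sketch of such a KubeSchedulerConfiguration is shown below; the extender URL, verb names and weight are placeholders rather than values shipped by the chart, and the port simply matches the extender's default 23000 seen elsewhere on this page.

      apiVersion: kubescheduler.config.k8s.io/v1beta3
      kind: KubeSchedulerConfiguration
      clientConnection:
        kubeconfig: /etc/kubernetes/scheduler.conf
      extenders:
      - urlPrefix: "http://open-local-scheduler-extender.kube-system:23000/scheduler"  # placeholder URL
        filterVerb: predicates        # placeholder verb
        prioritizeVerb: priorities    # placeholder verb
        weight: 10
        ignorable: true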

    Ⅲ. Describe what you expected to happen

    The scheduler runs successfully.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. helm template openlocal ./helm > openlocal.yaml
    2. kubectl apply -f openlocal.yaml

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • k8s version: v1.23.6
    • Open-Local version: v0.5.4
    • OS (e.g. from /etc/os-release): centos8
    • Kernel (e.g. uname -a): Linux 4.18.0-305.3.1.el8.x86_64 SMP Tue Jun 1 16:14:33 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    • Install tools:
    • Others:
    opened by chinglinwen 2
  • open-local does not work with kubernetes after v1.23

    open-local does not work with kubernetes after v1.23

    Ⅰ. Issue Description

    Ⅱ. Describe what happened

    I deployed open-local to my k8s cluster, and kube-scheduler failed to restart. The error log:

    Error: unknown flag: --policy-config-file

    I use kubernetes version v1.23.6

    According to the Kubernetes documentation on Scheduling Policies, the policy-config-file flag is no longer supported from version v1.23:

    In Kubernetes versions before v1.23, a scheduling policy can be used to specify the predicates and priorities process. For example, you can set a scheduling policy by running kube-scheduler --policy-config-file or kube-scheduler --policy-configmap .

    This scheduling policy is not supported since Kubernetes v1.23. Associated flags policy-config-file, policy-configmap, policy-configmap-namespace and use-legacy-policy-config are also not supported. Instead, use the Scheduler Configuration to achieve similar behavior.

    It seems the following script in helm/templates/init-job.yaml, which adds '--policy-config-file' to kube-scheduler, causes the problem:

                if ! grep "^\    - --policy-config-file=*" /etc/kubernetes/manifests/kube-scheduler.yaml; then
                    sed -i "/    - --kubeconfig=/a \    - --policy-config-file=/etc/kubernetes/scheduler-policy-config.json" /etc/kubernetes/manifests/kube-scheduler.yaml
                fi
    

    Ⅲ. Describe what you expected to happen

    kube-scheduler keeps running after deploying open-local.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. Use a Kubernetes version of v1.23 or newer
    2. Deploy open-local
    3. Use "kubectl get componentstatus" to check the scheduler status

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • Open-Local version: v0.5.4
    • OS (e.g. from /etc/os-release):
    • Kernel (e.g. uname -a):
    • Install tools: helm
    • Others:
    opened by john-liuqiming 2
Releases(v0.5.5)
  • v0.5.5(Jun 1, 2022)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [enhancement] init resource with exponential backoff by @lihezhong93 #130
    • [enhancement] make unpublishvolume more robust by @TheBeatles1994 #137
    • [enhancement] optimize pod list performance under large-scale clusters by @TheBeatles1994 #139
    • [bugfix] fix open-local-device-ssd mediaType Spelling mistake by @jieerliansan #140
    • [bugfix] fix incorrect snapshot size by @TheBeatles1994 #135
  • v0.5.4(Apr 26, 2022)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support ipv6 mode #125 #127
    • [bugfix] fail to wipe start of new LV when creating inline volume #123
    • [bugfix] fail to update updateStatus of nls when deploying in lower k8s version(<1.20) #122
    • [bugfix] slice bounds out of range when length of logicalvolume name is less than 3 #126
  • v0.5.3(Mar 20, 2022)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support deleting orphaned volumesnapshotcontent #116
    • [feature] support turning off the controller's automatic nls update to be compatible with older versions #117
  • v0.5.2(Mar 4, 2022)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support open-local controller #108
    • [feature] add grafana dashboard #113
    • [enhancement] support pushing to dockerhub #109
    • [enhancement] update readme #111 #112
    • [bugfix] build wrong arm64 image #110
  • v0.5.1(Feb 20, 2022)

  • v0.5.0(Feb 16, 2022)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support ephemeral local volumes #101 #102
    • [enhancement] add introduction of vgName in param.md #103
  • v0.4.0(Dec 29, 2021)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support io-throttling #95
    • [enhancement] slim image size #92
    • [enhancement] remove mutex in agent #93
  • v0.3.2(Dec 10, 2021)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [bugfix] error cache when creating snapshot pv #87
    • [enhancement] upgrade snapshot client to v4.2.0 #89
  • v0.3.1(Dec 3, 2021)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [bugfix] csi-plugin crash when creating mountpoint pv #85
    • [bugfix] clean old data when deleting device pv #84
    • [bugfix] pvc pending when there are available devices on node #81
    • [bugfix] no events report in nls #80
    • [enhancement] refactor codes of metrics #77
    • [enhancement] more details when throwing error #79
  • v0.3.0(Nov 28, 2021)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support raw block volume #57 #69
    • [bugfix] check whether targetPath exists when no mountpoint is found #71
    • [enhancement] add docs: user-guide and api #59
    • [enhancement] add grpc-connection-timeout param in subcommand csi #58
    • [enhancement] support open-simulator #60
    • [enhancement] Improve the readability of scheduling failure information
    • [enhancement] return fast when scoring
    • [enhancement] adjust the order of locks in the code to improve scheduling concurrency #75
  • v0.2.3(Oct 9, 2021)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [feature] support csi.storage.k8s.io/fstype in storageclass #51
    • [bugfix] fix slice bounds out of range bug #56
    • [enhancement] report err event in nls when creating VG failed #49
    • [enhancement] add build/release workflow, support building image of arm64 architecture #53
    • [enhancement] generate markdown docs of commandline automatically #50
    • [enhancement] Update issue templates #23
  • v0.2.2(Sep 20, 2021)

    Welcome to Join Open-Local dev DingTalk group: No.34118035

    • [bugfix] failed to create striped lv when adding a new PV to the VG (tangtx3)
    • [bugfix] failed to mount mountpoint type PV in K8s 1.20 #25 (thebeatles1994)
    • [feature] support creating vg forcibly #18 (thebeatles1994)
    • [enhancement] chore: add markdown, misspell and golangci action for open-local (allencloud)
    • [enhancement] fix golangci-lint error #13 (thebeatles1994)
  • v0.2.1(Sep 1, 2021)

    ChangeLog:

    • [bugfix] get env SCHEDULER_HOST first when expanding volume
    • [bugfix] add leases in helm/templates/rbac.yaml
    • fix README-zh_CN doc url broken #4
  • v0.2.0(Aug 27, 2021)

    ChangeLog:

    • Change driver name of open-local to local.csi.aliyun.com
    • Refactor code of ExpandVolume
    • Add user guide in English
    • Update chart to support k8s v1.18+
    • [Bugfix] Mountpoint does not exist when using PV of device type
  • v0.1.1(Aug 7, 2021)

    ChangeLog:

    • Add README-zh_CN.md
    • Free the dependency from alibaba csi-provisioner
    • [Bugfix] LookupVolumeGroup error: Volume group xxx not found
    • [Bugfix] Make open-local support patching resource persistentvolumeclaims/status
  • v0.1.0(Aug 4, 2021)
