Dynamically provisioning persistent local storage with Kubernetes

Local Path Provisioner

Overview

Local Path Provisioner provides a way for Kubernetes users to utilize the local storage on each node. Based on the user configuration, the Local Path Provisioner will automatically create hostPath-based persistent volumes on the nodes. It builds on the features introduced by the Kubernetes Local Persistent Volume feature, but offers a simpler solution than the built-in local volume feature in Kubernetes.

Comparison with the built-in Local Persistent Volume feature in Kubernetes

Pros

Dynamic provisioning of volumes using hostPath.

Cons

  1. No support for the volume capacity limit currently.
    1. The capacity limit will be ignored for now.

Requirement

Kubernetes v1.12+.

Deployment

Installation

In this setup, the directory /opt/local-path-provisioner will be used across all the nodes as the path for provisioning (i.e., where the persistent volume data is stored). The provisioner will be installed in the local-path-storage namespace by default.

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Or, use kustomize to deploy.

kustomize build "github.com/rancher/local-path-provisioner/deploy?ref=master" | kubectl apply -f -

After installation, you should see something like the following:

$ kubectl -n local-path-storage get pod
NAME                                     READY     STATUS    RESTARTS   AGE
local-path-provisioner-d744ccf98-xfcbk   1/1       Running   0          7m

Check and follow the provisioner log using:

$ kubectl -n local-path-storage logs -f -l app=local-path-provisioner

Usage

Create a hostPath-backed Persistent Volume and a pod that uses it:

kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml

Or, use kustomize to deploy them.

kustomize build "github.com/rancher/local-path-provisioner/examples/pod?ref=master" | kubectl apply -f -

You should see the PV has been created:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                    STORAGECLASS   REASON    AGE
pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   2Gi        RWO            Delete           Bound     default/local-path-pvc   local-path               4s

The PVC has been bound:

$ kubectl get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
local-path-pvc   Bound     pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   2Gi        RWO            local-path     16s

And the Pod started running:

$ kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
volume-test   1/1       Running   0          3s

Write something into the pod

kubectl exec volume-test -- sh -c "echo local-path-test > /data/test"

Now delete the pod using

kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml

After confirming that the pod is gone, recreate the pod using

kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml

Check the volume content:

$ kubectl exec volume-test -- cat /data/test
local-path-test

Delete the pod and the PVC:

kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml

Or, use kustomize to delete them.

kustomize build "github.com/rancher/local-path-provisioner/examples/pod?ref=master" | kubectl delete -f -

The volume content stored on the node will be automatically cleaned up. You can check the log of local-path-provisioner-xxx for details.

Now you've verified that the provisioner works as expected.
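
For reference, provisioning is driven by the local-path StorageClass installed by the deployment manifest. It looks roughly like the following sketch; see deploy/local-path-storage.yaml for the authoritative definition:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

Because of the WaitForFirstConsumer binding mode, a PVC stays Pending until a pod that uses it is scheduled, which is why the PVC in the example above only binds once the pod is created.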

Configuration

Customize the ConfigMap

The configuration of the provisioner consists of a JSON file (config.json), two shell scripts (setup and teardown), and a helper pod template (helperPod.yaml), all stored in a ConfigMap, e.g.:

kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
        {
                "nodePathMap":[
                {
                        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                        "paths":["/opt/local-path-provisioner"]
                },
                {
                        "node":"yasker-lp-dev1",
                        "paths":["/opt/local-path-provisioner", "/data1"]
                },
                {
                        "node":"yasker-lp-dev3",
                        "paths":[]
                }
                ]
        }
  setup: |-
        #!/bin/sh
        while getopts "m:s:p:" opt
        do
            case $opt in
                p)
                absolutePath=$OPTARG
                ;;
                s)
                sizeInBytes=$OPTARG
                ;;
                m)
                volMode=$OPTARG
                ;;
            esac
        done

        mkdir -m 0777 -p ${absolutePath}
  teardown: |-
        #!/bin/sh
        while getopts "m:s:p:" opt
        do
            case $opt in
                p)
                absolutePath=$OPTARG
                ;;
                s)
                sizeInBytes=$OPTARG
                ;;
                m)
                volMode=$OPTARG
                ;;
            esac
        done

        rm -rf ${absolutePath}
  helperPod.yaml: |-
        apiVersion: v1
        kind: Pod
        metadata:
          name: helper-pod
        spec:
          containers:
          - name: helper-pod
            image: busybox

config.json

Definition

nodePathMap is where the user can customize where data is stored on each node.

  1. If a node is not listed in nodePathMap and Kubernetes wants to create a volume on it, the paths specified under DEFAULT_PATH_FOR_NON_LISTED_NODES will be used for provisioning.
  2. If a node is listed in nodePathMap, the paths specified in its paths field will be used for provisioning.
    1. If a node is listed but paths is set to [], the provisioner will refuse to provision on this node.
    2. If more than one path is specified, one of them will be chosen randomly when provisioning.

Rules

The configuration must obey the following rules:

  1. config.json must be a valid JSON file.
  2. A path must start with /, i.e. it must be an absolute path.
  3. The root directory (/) is prohibited.
  4. No duplicate paths are allowed for one node.
  5. No duplicate nodes are allowed.

Scripts setup and teardown and helperPod.yaml

The script setup will be executed before the volume is created, to prepare the directory on the node for the volume.

The script teardown will be executed after the volume is deleted, to cleanup the directory on the node for the volume.

The YAML file helperPod.yaml defines the helper pod that the provisioner creates (in the local-path-storage namespace by default) to execute the setup or teardown script with three parameters, -p <path> -s <size> -m <mode> (a customization example follows the list below):

  • path: the absolute path provisioned on the node
  • size: pvc.Spec.resources.requests.storage in bytes
  • mode: pvc.Spec.VolumeMode
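
For example, a customized setup script could use these parameters to create the volume directory with stricter permissions and log the requested size. This is a sketch only, not the shipped default; it keeps the same getopts parsing as the default scripts above, and the 0700 mode is an assumption that the consuming pod runs as root or that ownership is handled separately:

#!/bin/sh
# Parse the -p/-s/-m parameters passed by the helper pod, as in the default scripts
while getopts "m:s:p:" opt
do
    case $opt in
        p) absolutePath=$OPTARG ;;
        s) sizeInBytes=$OPTARG ;;
        m) volMode=$OPTARG ;;
    esac
done

# Create the directory with 0700 instead of the default 0777 (assumption: see note above)
mkdir -m 0700 -p "${absolutePath}"
echo "provisioned ${absolutePath} (requested ${sizeInBytes} bytes, volumeMode ${volMode})"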

Reloading

The provisioner supports automatic configuration reloading. Users can change the configuration using kubectl apply or kubectl edit with config map local-path-config. There is a delay between when the user updates the config map and the provisioner picking it up.
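
For example, to edit the ConfigMap in place (assuming the default local-path-storage namespace):

$ kubectl -n local-path-storage edit configmap local-path-config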

When the provisioner detects a configuration change, it will try to load the new configuration. Users can observe this in the log:

time="2018-10-03T05:56:13Z" level=debug msg="Applied config: {"nodePathMap":[{"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES","paths":["/opt/local-path-provisioner"]},{"node":"yasker-lp-dev1","paths":["/opt","/data1"]},{"node":"yasker-lp-dev3"}]}"

If the reload fails, the provisioner will log the error and continue using the last valid configuration for provisioning in the meantime.

time="2018-10-03T05:19:25Z" level=error msg="failed to load the new config file: fail to load config file /etc/config/config.json: invalid character '#' looking for beginning of object key string"

time="2018-10-03T05:20:10Z" level=error msg="failed to load the new config file: config canonicalization failed: path must start with / for path opt on node yasker-lp-dev1"

time="2018-10-03T05:23:35Z" level=error msg="failed to load the new config file: config canonicalization failed: duplicate path /data1 on node yasker-lp-dev1

time="2018-10-03T06:39:28Z" level=error msg="failed to load the new config file: config canonicalization failed: duplicate node yasker-lp-dev3"

Uninstall

Before uninstalling, make sure the PVs created by the provisioner have already been deleted. Use kubectl get pv and make sure there are no PVs left with StorageClass local-path.
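
For example, the following should print nothing before you proceed:

$ kubectl get pv | grep local-path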

To uninstall, execute:

kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Debug

This provides an out-of-cluster debug environment for developers.

debug

git clone https://github.com/rancher/local-path-provisioner.git
cd local-path-provisioner
go build
kubectl apply -f debug/config.yaml
./local-path-provisioner --debug start --service-account-name=default

Then follow the example in the Usage section above to verify provisioning.

clear

kubectl delete -f debug/config.yaml

License

Copyright (c) 2014-2020 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Issues
  • Custom teardown script doesn't work

    I've tried to set a custom teardown script using Helm values:

    teardown: |-
      #!/bin/sh
      path=$1
      archived_path="$(dirname ${path})/archived-$(basename ${path})"
      mv ${path} ${archived_path}
    

    Although the config map gets updated to the new teardown script, when I delete a pvc local-path-provisioner still deletes the pv folder instead of running the script.

    Any help would be appreciated :-)

    opened by rxbn 33
  • Documentation: Multiple Local Path Provisioners in the same cluster

    Suppose there are two kinds of drives in the K8s nodes, "fast" (SSD) and "big" (HDD). Suppose I want to create two storage classes, one of which provisions volumes on the "fast" drives and one on the "big" drives. Please document how to achieve this.

    From #80 I gather this is possible by deploying two instances of local-path-provisioner that are backed by directories on the fast and big drives respectively. But how do I specify in the storage class specification which instance of LPP to use? Do I have to change the provisioner value? How do I tell LPP which value of the provisioner field in SC to respond to?

    question 
    opened by japsu 11
  • working: multi-arch-images

    This has been a very interesting area of docker to learn about. Hopefully this helps folks get more things running on ARM and other platforms.

    I copied the basic multi-arch build structure from https://github.com/rancher/dapper along with a couple of their scripts/*.

    I'm not certain the go build ... -o binary.arch suffixes in the scripts/build I included in this PR are 100% correct.

    I chose these so they match the arch+variant names associated with the current alpine manifest, (read: not the same as the bin_arch names used in https://github.com/rancher/dapper/blob/master/scripts/build .)

    Also, It looked like the multi-arch-images make target in rancher/dapper was being manually called -- as well as the resulting push.sh it emitted... so I'm not sure where/how the CI pipelines build and push the multi-arch images... ?

    I'm not sure if dapper handles exposing files created in containers back to the host, but there is now a manifest.yaml file that gets created and should be fed to the push.sh script which invokes manifest-tool.

    If there's a better way, please let me know, or maintainers can push more commits to this branch.

    Relates to #12

    opened by tamsky 10
  • Security context not respected

    I'm trying to use local-path-provisioner with kind. While it seems to generally work with multi-node clusters, security contexts are not respected. Volumes are always mounted with root as group. Here's a simple example that demonstrates this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: local-path-test
      labels:
        app.kubernetes.io/name: local-path-test
    spec:
      containers:
        - name: test
          image: busybox
          command:
            - /config/test.sh
          volumeMounts:
            - name: test
              mountPath: /test
            - name: config
              mountPath: /config
      securityContext:
        fsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
      terminationGracePeriodSeconds: 0
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: local-path-test
        - name: config
          configMap:
            name: local-path-test
            defaultMode: 0555
    
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: local-path-test
      labels:
        app.kubernetes.io/name: local-path-test
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: "1Gi"
      storageClassName: local-path
    
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: local-path-test
      labels:
        app.kubernetes.io/name: local-path-test
    data:
      test.sh: |
        #!/bin/sh
    
        ls -al /test
    
        echo 'Hello from local-path-test'
        cp /config/text.txt /test/test.txt
        touch /test/foo
    
      text.txt: |
        some test content
    

    Here's the log from the container:

    total 4
    drwxr-xr-x    2 root     root            40 Feb 22 09:50 .
    drwxr-xr-x    1 root     root          4096 Feb 22 09:50 ..
    Hello from local-path-test
    cp: can't create '/test/test.txt': Permission denied
    touch: /test/foo: Permission denied
    

    As can be seen, the mounted volume has root as group instead of 1000 as specified by the security context. I also installed local-path-provisioner on Docker4Mac. The result is the same, so it is not a kind issue. Using the default storage class on Docker4Mac, it works as expected.

    opened by unguiculus 10
  • Race condition for helper-pod when multiple pvc's are provisioned

    Hello. I think i have uncovered a bug. If you provision multiple pv's in rapid succession the helper pod will only run for the first one.

    "First" here means the helper pod that gets created first, which may or may not correspond to the PV that should be created.

    How to reproduce: On a single node kubernetes cluster:

    kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
    
    1. Instead of creating one pvc and one pod create 3 of them: pvcs.yaml:
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: local-path-pvc1
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: local-path
      resources:
        requests:
          storage: 2Gi
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: local-path-pvc2
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: local-path
      resources:
        requests:
          storage: 2Gi
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: local-path-pvc3
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: local-path
      resources:
        requests:
          storage: 2Gi
    

    pods.yaml:

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-test01
      namespace: default
    spec:
      containers:
      - name: volume-test
        image: nginx:stable-alpine
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: volv
          mountPath: /data
        ports:
        - containerPort: 80
      volumes:
      - name: volv
        persistentVolumeClaim:
          claimName: local-path-pvc1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-test02
      namespace: default
    spec:
      containers:
      - name: volume-test
        image: nginx:stable-alpine
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: volv
          mountPath: /data
        ports:
        - containerPort: 80
      volumes:
      - name: volv
        persistentVolumeClaim:
          claimName: local-path-pvc2
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-test03
      namespace: default
    spec:
      containers:
      - name: volume-test
        image: nginx:stable-alpine
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: volv
          mountPath: /data
        ports:
        - containerPort: 80
      volumes:
      - name: volv
        persistentVolumeClaim:
          claimName: local-path-pvc3
    

    Then check the /opt/local-path-provisioner path on the node:

    [skiss-dev2 ~]$ ls -la /opt/local-path-provisioner
    total 0
    drwxr-xr-x  5 root root 222 Nov 11 08:56 .
    drwxr-xr-x. 5 root root  65 Nov 11 08:50 ..
    drwxr-xr-x  2 root root   6 Nov 11 08:56 pvc-919db3fc-6c88-446e-b77c-01e7ae260289_default_local-path-pvc1
    drwxrwxrwx  2 root root   6 Nov 11 08:56 pvc-95bc6533-cd29-4f89-adaf-7b740e311969_default_local-path-pvc2
    drwxr-xr-x  2 root root   6 Nov 11 08:56 pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5_default_local-path-pvc3
    

    As you can see, all 3 PVs are being created; however, only one has the correct set of permissions.

    The provisioner logs:

    I1111 08:56:10.202181       1 controller.go:1202] provision "default/local-path-pvc1" class "local-path": started
    I1111 08:56:10.214691       1 controller.go:1202] provision "default/local-path-pvc2" class "local-path": started
    time="2020-11-11T08:56:10Z" level=debug msg="config doesn't contain node master-worker-1487cf6df4b42f3c60ef, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
    time="2020-11-11T08:56:10Z" level=info msg="Creating volume pvc-919db3fc-6c88-446e-b77c-01e7ae260289 at master-worker-1487cf6df4b42f3c60ef:/opt/local-path-provisioner/pvc-919db3fc-6c88-446e-b77c-01e7ae260289_default_local-path-pvc1"
    time="2020-11-11T08:56:10Z" level=info msg="create the helper pod helper-pod into local-path-storage"
    I1111 08:56:10.221367       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc1", UID:"919db3fc-6c88-446e-b77c-01e7ae260289", APIVersion:"v1", ResourceVersion:"26463", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/local-path-pvc1"
    I1111 08:56:10.403174       1 controller.go:1202] provision "default/local-path-pvc3" class "local-path": started
    time="2020-11-11T08:56:10Z" level=debug msg="config doesn't contain node master-worker-1487cf6df4b42f3c60ef, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
    time="2020-11-11T08:56:10Z" level=info msg="Creating volume pvc-95bc6533-cd29-4f89-adaf-7b740e311969 at master-worker-1487cf6df4b42f3c60ef:/opt/local-path-provisioner/pvc-95bc6533-cd29-4f89-adaf-7b740e311969_default_local-path-pvc2"
    time="2020-11-11T08:56:10Z" level=info msg="create the helper pod helper-pod into local-path-storage"
    I1111 08:56:10.410110       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc2", UID:"95bc6533-cd29-4f89-adaf-7b740e311969", APIVersion:"v1", ResourceVersion:"26467", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/local-path-pvc2"
    time="2020-11-11T08:56:10Z" level=debug msg="config doesn't contain node master-worker-1487cf6df4b42f3c60ef, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
    time="2020-11-11T08:56:10Z" level=info msg="Creating volume pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5 at master-worker-1487cf6df4b42f3c60ef:/opt/local-path-provisioner/pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5_default_local-path-pvc3"
    time="2020-11-11T08:56:10Z" level=info msg="create the helper pod helper-pod into local-path-storage"
    I1111 08:56:10.420310       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc3", UID:"c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5", APIVersion:"v1", ResourceVersion:"26472", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/local-path-pvc3"
    time="2020-11-11T08:56:25Z" level=info msg="Volume pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5 has been created on master-worker-1487cf6df4b42f3c60ef:/opt/local-path-provisioner/pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5_default_local-path-pvc3"
    time="2020-11-11T08:56:25Z" level=info msg="Volume pvc-919db3fc-6c88-446e-b77c-01e7ae260289 has been created on master-worker-1487cf6df4b42f3c60ef:/opt/local-path-provisioner/pvc-919db3fc-6c88-446e-b77c-01e7ae260289_default_local-path-pvc1"
    time="2020-11-11T08:56:25Z" level=info msg="Volume pvc-95bc6533-cd29-4f89-adaf-7b740e311969 has been created on master-worker-1487cf6df4b42f3c60ef:/opt/local-path-provisioner/pvc-95bc6533-cd29-4f89-adaf-7b740e311969_default_local-path-pvc2"
    time="2020-11-11T08:56:28Z" level=error msg="unable to delete the helper pod: pods \"helper-pod\" not found"
    I1111 08:56:28.440603       1 controller.go:1284] provision "default/local-path-pvc1" class "local-path": volume "pvc-919db3fc-6c88-446e-b77c-01e7ae260289" provisioned
    I1111 08:56:28.440662       1 controller.go:1301] provision "default/local-path-pvc1" class "local-path": succeeded
    I1111 08:56:28.440687       1 volume_store.go:212] Trying to save persistentvolume "pvc-919db3fc-6c88-446e-b77c-01e7ae260289"
    I1111 08:56:28.444032       1 controller.go:1284] provision "default/local-path-pvc3" class "local-path": volume "pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5" provisioned
    I1111 08:56:28.444058       1 controller.go:1301] provision "default/local-path-pvc3" class "local-path": succeeded
    I1111 08:56:28.444067       1 volume_store.go:212] Trying to save persistentvolume "pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5"
    time="2020-11-11T08:56:28Z" level=error msg="unable to delete the helper pod: pods \"helper-pod\" not found"
    I1111 08:56:28.444364       1 controller.go:1284] provision "default/local-path-pvc2" class "local-path": volume "pvc-95bc6533-cd29-4f89-adaf-7b740e311969" provisioned
    I1111 08:56:28.444381       1 controller.go:1301] provision "default/local-path-pvc2" class "local-path": succeeded
    I1111 08:56:28.444387       1 volume_store.go:212] Trying to save persistentvolume "pvc-95bc6533-cd29-4f89-adaf-7b740e311969"
    I1111 08:56:28.456125       1 volume_store.go:219] persistentvolume "pvc-95bc6533-cd29-4f89-adaf-7b740e311969" saved
    I1111 08:56:28.456388       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc2", UID:"95bc6533-cd29-4f89-adaf-7b740e311969", APIVersion:"v1", ResourceVersion:"26467", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-95bc6533-cd29-4f89-adaf-7b740e311969
    I1111 08:56:28.457477       1 volume_store.go:219] persistentvolume "pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5" saved
    I1111 08:56:28.457550       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc3", UID:"c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5", APIVersion:"v1", ResourceVersion:"26472", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5
    I1111 08:56:28.459975       1 volume_store.go:219] persistentvolume "pvc-919db3fc-6c88-446e-b77c-01e7ae260289" saved
    I1111 08:56:28.460001       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc1", UID:"919db3fc-6c88-446e-b77c-01e7ae260289", APIVersion:"v1", ResourceVersion:"26463", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-919db3fc-6c88-446e-b77c-01e7ae260289
    

    Note the two messages regarding unable to delete the helper pod. That's because it wasn't created for them. However, there is no creation error message because:

    	// If it already exists due to some previous errors, the pod will be cleaned up later automatically
    	// https://github.com/rancher/local-path-provisioner/issues/27
    	logrus.Infof("create the helper pod %s into %s", helperPod.Name, p.namespace)
    	_, err = p.kubeClient.CoreV1().Pods(p.namespace).Create(helperPod)
    	if err != nil && !k8serror.IsAlreadyExists(err) {
    		return err
    	}
    

    From my understanding, all 3 PV provision requests are sent very close together, and because the helper pod is always given the same name it cannot be created multiple times. The first request that gets through creates the pod; the rest fail silently. I'm not entirely sure why the path on the node exists in any case, since the helper pod does not get called.

    However it's pretty clear that only one helper pod runs at a time and only one custom provisioning code (such as the one that sets the permissions) is being run.

    opened by stefan-kiss 9
  • Do not create directory if not found

    Using type: Directory instead of type: DirectoryOrCreate makes it possible to block workloads from running on directories that were not properly provisioned, avoiding situations where the initial storage is unmounted or broken.

    docker image containing the fix:

    kvaps/local-path-provisioner:v0.0.17-fix-137
    

    fixes https://github.com/rancher/local-path-provisioner/issues/137

    opened by kvaps 9
  • Why the provisioner only support ReadWriteOnce

    Why does the provisioner only support ReadWriteOnce PVCs and not ReadOnlyMany/ReadWriteMany?

    Since it's just a node-local directory, there's no problem with having multiple writers/readers as long as the application supports this.

    opened by philpep 9
  • Permission denied when try to create a pvc

    I am following the steps to install/configure local-path-provider.

    I have a local-path-provisioner running.

    I have created pvc and status is pending (waiting for the pod).

    I am trying to create a nginx pod like the example and i am facing this problem:

    When i check create-pvc-33e6692e.... pod i have this error: mkdir: can't create directory '/data/pvc-33e6692e-a32d-11e9-85ac-42010a8e0036': Permission denied

    My local path is already configured like root.root and 777 permissions.

    Anyone can help me?

    opened by danilorsilva 9
  • imagePullPolicy: Always for busybox image will not work in offline enviornments

    The default imagePullPolicy for the busybox helper image which creates the PVC directory on the host is set to "Always". There must be an option to set it to "IfNotPresent" for offline/airgap environments.

    opened by digger18 9
  • High CPU usage

    local-path-provisioner is one of the most CPU consuming processes in my cluster:

    (screenshot showing CPU usage omitted)

    It seemingly ate 20 minutes of CPU time in just 22 hours:

    local-path-storage   local-path-provisioner-5fbd477b57-mpg4s    1/1     Running     5          22h
    

    During that time, it only created 3 PV's:

    ❯ k logs -n local-path-storage local-path-provisioner-5fbd477b57-mpg4s
    time="2019-04-22T03:00:46Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/opt/local-path-provisioner\"]}]}"
    time="2019-04-22T03:00:46Z" level=debug msg="Provisioner started"
    time="2019-04-22T05:13:19Z" level=debug msg="config doesn't contain node kube0, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
    time="2019-04-22T05:13:19Z" level=info msg="Creating volume pvc-5782375f-64bd-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-5782375f-64bd-11e9-a240-525400a0c459"
    time="2019-04-22T05:13:25Z" level=info msg="Volume pvc-5782375f-64bd-11e9-a240-525400a0c459 has been created on kube0:/opt/local-path-provisioner/pvc-5782375f-64bd-11e9-a240-525400a0c459"
    time="2019-04-22T10:37:15Z" level=info msg="Deleting volume pvc-5782375f-64bd-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-5782375f-64bd-11e9-a240-525400a0c459"
    time="2019-04-22T10:37:19Z" level=info msg="Volume pvc-5782375f-64bd-11e9-a240-525400a0c459 has been deleted on kube0:/opt/local-path-provisioner/pvc-5782375f-64bd-11e9-a240-525400a0c459"
    time="2019-04-22T10:38:20Z" level=debug msg="config doesn't contain node kube0, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
    time="2019-04-22T10:38:20Z" level=info msg="Creating volume pvc-bee28903-64ea-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-bee28903-64ea-11e9-a240-525400a0c459"
    time="2019-04-22T10:38:24Z" level=info msg="Volume pvc-bee28903-64ea-11e9-a240-525400a0c459 has been created on kube0:/opt/local-path-provisioner/pvc-bee28903-64ea-11e9-a240-525400a0c459"
    time="2019-04-22T11:25:29Z" level=info msg="Deleting volume pvc-bee28903-64ea-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-bee28903-64ea-11e9-a240-525400a0c459"
    time="2019-04-22T11:25:33Z" level=info msg="Volume pvc-bee28903-64ea-11e9-a240-525400a0c459 has been deleted on kube0:/opt/local-path-provisioner/pvc-bee28903-64ea-11e9-a240-525400a0c459"
    time="2019-04-22T11:26:18Z" level=debug msg="config doesn't contain node kube0, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
    time="2019-04-22T11:26:18Z" level=info msg="Creating volume pvc-72a9e446-64f1-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-72a9e446-64f1-11e9-a240-525400a0c459"
    time="2019-04-22T11:26:23Z" level=info msg="Volume pvc-72a9e446-64f1-11e9-a240-525400a0c459 has been created on kube0:/opt/local-path-provisioner/pvc-72a9e446-64f1-11e9-a240-525400a0c459"
    

    Any ideas what's the matter? Is it doing some kind of suboptimal high-frequency polling? It bugs me that my brand new, almost empty single-node Kubernetes setup already has LA 0.6 without any load coming towards it.

    It was deployed with:

    kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
    

    without any modifications to the manifest.

    opened by IlyaSemenov 9
  • Added options for worker threads and provisioning/deletion retry counts

    The 'sig-storage-lib-external-provisioner' library provides an option for volume provisioning and deletion concurrency, so I added an option like '--worker-threads' to allow users to control provisioning concurrency. The failed provision/delete threshold values are zero, so the provisioner can complete provisioning and deletion. Please review.

    opened by skyoo2003 8
  • use provsioner w/o specifying pvc

    Is there an example of how to use the provisioner in a PodTemplate? For example using NFS storage in a pod template could look like this:

         template:
          spec:
           volumes:
           - name: volv
             nfs:
               server: nfs.example.org
               path: /nfs/path
    
    opened by aintbrokedontfix 0
  • Does local-path super multiple sc for deffrence performance disk

    I have both HDDs and SSDs installed in the host server. I want to create two Kubernetes StorageClasses, one for HDD and one for SSD, to cover different performance requirements. Does local-path support this, and how do I configure it?

    opened by zhfg 2
  • Storage capacity support

    opened by heyu1207 1
  • [Question] fsGroup Support

    I read the interesting ticket #41 about it. It seems the fsGroup can't be natively supported. I dig a bit in the kubernetes PersitentVolumes and found that using local (https://kubernetes.io/docs/concepts/storage/volumes/#local since kubernetes 1.14) instead of hostPath will allow Kubelet to manage the fsGroup support instead of trying to implement a specific CSI for hostPath. https://github.com/kubernetes/kubernetes/blob/3bef1692ef2f4a8af87f03599303dbe794493048/pkg/volume/local/local.go#L612 I found that other provisioner use the local entry but does not provide the great flexibility provided by the setup/teardown ConfigMap :).

    Is the usage of local instead of hostPath can be a possibility for this provisioner ?

    opened by sushiMix 0
  • [Question] Using loopback device to manage space

    Hi folks, I'm looking for help/advice on adapting the setup/teardown scripts to create loopback devices as disks, to put a hard limit on the space allocated for each volume.

    My current rough idea looks like the scripts below (which have only had very light testing so far).

    Mostly, I'm looking for someone to tell me what obvious thing I'm missing that means this hasn't been done already.

    Thanks.

    setup:

    NAME=$( basename $absolutePath )
    # Store images in the folder above $absolutePath
    IMAGE="$absolutePath/../$NAME.img"
    # Create a backing file of the right size
    dd if=/dev/zero of=$IMAGE count=$sizeInBytes iflag=count_bytes status=progress 
    # Give it a filesystem 
    # TODO: Check if we've been asked for a filesystem!
    mkfs.ext4 $IMAGE
    # Add the new filesystem to host/node fstab
    echo "$IMAGE $absolutePath ext4 loop 0 0" >> /etc/fstab
    # Mount the device
    mount $absolutePath
    

    teardown:

    # Find the backing image for this mount in fstab
    IMAGE=$( grep $absolutePath /etc/fstab | awk '{print $1}' )
    # Unmount the volume and remove its fstab entry (| as the sed delimiter, since the path contains /)
    umount $absolutePath
    sed -i "\|$absolutePath|d" /etc/fstab
    # Delete the backing file
    rm $IMAGE
    

    To think about

    • Rather than giving the caller the root (with /lost+found), create a folder in the mount and give them that
    • Permissions?
    • Choose your own filesystem
    • A better way of naming/storing/identifying images
    opened by Moosemorals 1