The Swift Virtual File System

Overview

*** This project is no longer maintained ***


SVFS is a Virtual File System for OpenStack Swift built upon FUSE. It is compatible with hubiC, OVH Public Cloud Storage and essentially any endpoint using a standard OpenStack Swift setup. It brings a layer of abstraction over object storage, making it as accessible and convenient as a filesystem, without being intrusive about the way your data is stored.

Disclaimer

This is not an official project of the OpenStack community.

Installation

Download and install the latest release packaged for your distribution.

Usage

Mount command

On Linux (requires fuse and ruby):

mount -t svfs -o <options> <device> /mountpoint

On OSX (requires osxfuse and ruby):

mount_svfs <device> /mountpoint -o <options>

Notes:

  • You can pick any name you want for the device parameter.
  • All available mount options are described later in this document.
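For instance, a hubiC mount on Linux could look like the following sketch (all option values are placeholders, not real credentials; the hubic_auth and hubic_token options are described under "Mount options" below):

```
mount -t svfs -o hubic_auth=XXXXXXXXXX,hubic_token=XXXXXXXXXX,container=default hubic /mnt/hubic
```

Here hubic is the arbitrary device name and /mnt/hubic must exist before mounting.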

Credentials can be specified in mount options; however, it may be desirable to read them from an external source instead. The following sections describe alternative approaches.

Reading credentials from the environment

SVFS supports reading the following sets of environment variables:

  • If you are using hubiC:
 HUBIC_AUTH
 HUBIC_TOKEN
  • If you are using a vanilla Swift endpoint (like OVH PCS), after sourcing your OpenRC file:
 OS_AUTH_URL
 OS_USERNAME
 OS_PASSWORD
 OS_REGION_NAME
 OS_TENANT_NAME
  • If you have already authenticated to an identity endpoint:
 OS_AUTH_TOKEN
 OS_STORAGE_URL
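For example, for a vanilla Swift endpoint the variables can also be exported by hand instead of sourcing an OpenRC file. A minimal sketch (every value below is a placeholder, not a real credential):

```shell
# Placeholder credentials: substitute the values from your own OpenRC file.
export OS_AUTH_URL=https://auth.cloud.ovh.net/v2.0
export OS_USERNAME=demo-user
export OS_PASSWORD=demo-password
export OS_REGION_NAME=GRA1
export OS_TENANT_NAME=demo-tenant

# With these exported, the mount command needs no credential options, e.g.:
#   mount -t svfs -o container=backups pcs /mnt/backups
echo "$OS_REGION_NAME"
```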

Reading credentials from a configuration file

All environment variables can also be set in a YAML configuration file placed at /etc/svfs.yaml.

For instance:

hubic_auth: XXXXXXXXXX..
hubic_token: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX...
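Keystone credentials would follow the same convention of lowercasing the corresponding environment variable names. A sketch with placeholder values (the key names are assumed from the hubic example above, not confirmed here):

```
os_auth_url: https://auth.cloud.ovh.net/v2.0
os_username: demo-user
os_password: demo-password
os_region_name: GRA1
os_tenant_name: demo-tenant
```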

Usage with OVH products

  • Usage with OVH Public Cloud Storage is explained here.
  • Usage with hubiC is explained here.

Mount options

Keystone options

  • auth_url: keystone URL (default is https://auth.cloud.ovh.net/v2.0).
  • username: your keystone user name.
  • password: your keystone password.
  • tenant: your project name.
  • region: the region where your tenant is.
  • version: authentication version (0 means auto-discovery which is the default).
  • storage_url: the storage endpoint holding your data.
  • internal_endpoint: the storage endpoint type (default is false).
  • token: a valid token.

Options region, version, storage_url and token are guessed during authentication if not provided.

Hubic options

  • hubic_auth: hubiC authorization token, as returned by the hubic-application command.
  • hubic_times: use file times set by hubiC synchronization clients. The attr option should also be set for this to work.
  • hubic_token: hubiC refresh token, as returned by the hubic-application command.

Swift options

  • container: which container should be selected while mounting the filesystem. If not set, all containers within the tenant will be available under the chosen mountpoint.
  • storage_policy: expected container storage policy. This is used to ignore containers that don't match a particular storage policy name. If empty, this setting is ignored (default).
  • segment_size: large object segment size in MB. When an object's content is larger than this setting, it is uploaded in multiple parts of the specified size. Default is 256 MB. Segment size should not exceed 5 GB.
  • connect_timeout: connection timeout to the swift storage endpoint. Default is 15 seconds.
  • request_timeout: timeout of requests sent to the swift storage endpoint. Default is 5 minutes.

Prefetch options

  • block_size: Filesystem block size in bytes. This is only used to report correct stat() results.
  • readahead_size: Readahead size in KB. Default is 128 KB.
  • readdir: Overall concurrency factor when listing segmented objects in directories (default is 20).
  • attr: Handle base attributes.
  • xattr: Handle extended attributes.
  • transfer_mode: Enforce network transfer optimizations. The following flags / features can be combined:
    • 1: disable explicit empty file creation.
    • 2: disable explicit directory creation.
    • 4: disable directory content check on removal.
    • 8: disable file check in read-only opening.
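Since these flags are bit values, a combined transfer_mode is simply their sum. For instance, disabling explicit empty file creation (1), explicit directory creation (2) and directory content checks on removal (4) together:

```shell
# Sum the transfer_mode flags 1, 2 and 4 into a single option value.
mode=$((1 + 2 + 4))
echo "$mode"    # prints 7
# Use it as: mount -t svfs -o transfer_mode=7,... <device> /mountpoint
```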

Cache options

  • cache_access: cache entry access count before refresh. Default is -1 (unlimited access).
  • cache_entries: maximum entry count in cache. Default is -1 (unlimited).
  • cache_ttl: cache entry timeout before refresh. Default is 1 minute.

Access restriction options

  • allow_other: Bypass allow_root.
  • allow_root: Restrict access to root and the user mounting the filesystem.
  • default_perm: Restrict access based on file mode (useful with allow_other).
  • uid: default files uid (defaults to current user uid).
  • gid: default files gid (defaults to current user gid).
  • mode: default files permissions (default is 0700).
  • ro: enable read-only access.
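As a sketch, a read-only mount exposed to all users could combine these options as follows (credentials elided, device and mountpoint names purely illustrative):

```
mount -t svfs -o allow_other,default_perm,uid=1000,gid=1000,mode=0555,ro,container=backups,username=...,password=...,tenant=... pcs /mnt/backups
```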

Debug options

  • debug: enable debug log.
  • stdout: stdout redirection expression (e.g. >/dev/null).
  • stderr: stderr redirection expression (e.g. >>/var/log/svfs.log).
  • profile_addr: Golang profiling information will be served at this address (ip:port) if set.
  • profile_cpu: Golang CPU profiling information will be stored to this file if set.
  • profile_ram: Golang RAM profiling information will be stored to this file if set.

Performance options

  • go_gc: set the garbage collection target percentage. A collection is triggered when the heap size exceeds, by this rate, the live heap size remaining after the previous collection. A lower value triggers frequent collections, lowering memory usage at the cost of higher CPU usage; a higher value lets the heap grow by this percentage without collection, reducing GC frequency. A collection is forced if none has happened for 2 minutes. Note that unused heap memory is not reclaimed immediately after collection: it is returned to the operating system only if it appears unused for 5 minutes.

Limitations

Be aware that SVFS doesn't turn object storage into block storage.

SVFS doesn't support :

  • Opening files in modes other than O_CREAT, O_RDONLY and O_WRONLY.
  • Moving directories.
  • Renaming containers.
  • SLO (but DLO is supported).
  • Per-file uid/gid/permissions (these are set per mountpoint).
  • Symlink targets across containers (symlinks within the same container work).

Take a look at the docs for further discussion of the SVFS approach.

FAQ

Got errors using rsync with svfs? Can't change creation time? Why svfs after all?

Take a look at the FAQ.

Hacking

Make sure to use the latest version of Go and follow the contribution guidelines of SVFS.

License

This work is under the BSD license, see the LICENSE file for details.

Issues
  • Timeout when reading or writing data

    Timeout when reading or writing data

    Hello,

    I encounter an error while mounting my cloud hubic

    Context

    • svfs version : svfs_0.9.1_amd64
    • storage provider : Hubic
    • Tested on :
      • My dedicated server
      • Scaleway C02

    Error

    [email protected]:~# sudo mount -t svfs -o hubic_auth=YXBp******Xo=,hubic_token=xHac***Sv,container=default hubic /mnt/hubic
    
    [email protected]:~# cd /mnt/hubic
    FATA[2017-02-26T13:02:29Z] cannot obtain root node: Timeout when reading or writing data
    -bash: cd: /mnt/hubic: Transport endpoint is not connected
    

    Register application

    [email protected]:~# hubic-application
    Did you registered an application under your hubic account ? (y/N) y
     ~> Application redirect URL: http://localhost/
     ~> Application client_id: api_hubic_S*****0
     ~> Application client_secret:
    1) Setting scope ... OK
     ~> Email: p*******@gmail.com
     ~> Password:
    2) Granting access ... OK
    3) Getting refresh token ... OK
    
     == Your mount options ==
     ~> hubic_auth=YX**Xo=
     ~> hubic_token=xHa****Sv
    

    Do you have an idea ?

    Regards,

    usage 
    opened by spprod35 23
  • Weird behavior during rsync and failure

    Weird behavior during rsync and failure

    Hello, I'm using the latest version of svfs (0.8.2 i386) on Debian stable. I mount a hubiC FUSE filesystem like this:

    hubicfuse on /mnt/hubic type fuse.svfs (rw,nosuid,nodev,relatime,user_id=1000,group_id=100,default_permissions,allow_other,user=mathieu)

    I'm doing an rsync like this:

    rsync -rtW --inplace --progress /mnt/readynas/backup/biniou/2016.10/ /mnt/hubic/backup/biniou/2016.10/

    The command fails. After a timeout error (debug mode), it continues with a weird upload rate (my bandwidth is limited to 600KB/s).

    DEBU[2016-10-12T18:14:12+02:00] -> [ID=0xc43] Write 131064           source=fuse
    DEBU[2016-10-12T18:14:12+02:00] <- Write [ID=0xc44 Node=0x5 Uid=1000 Gid=100 Pid=7479] 0x2 131072 @268042232 fl=WriteLockOwner lock=3159737928114442182 ffl=OpenWriteOnly  source=fuse
    DEBU[2016-10-12T18:14:12+02:00] -> [ID=0xc44] Write 131072           source=fuse
    DEBU[2016-10-12T18:14:12+02:00] <- Write [ID=0xc45 Node=0x5 Uid=1000 Gid=100 Pid=7479] 0x2 8 @268173304 fl=WriteLockOwner lock=3159737928114442182 ffl=OpenWriteOnly  source=fuse
    DEBU[2016-10-12T18:14:12+02:00] -> [ID=0xc45] Write 8                source=fuse
    DEBU[2016-10-12T18:14:12+02:00] <- Write [ID=0xc46 Node=0x5 Uid=1000 Gid=100 Pid=7479] 0x2 131064 @268173312 fl=WriteLockOwner lock=3159737928114442182 ffl=OpenWriteOnly  source=fuse
    DEBU[2016-10-12T18:14:13+02:00] -> [ID=0xc46] Write 131064           source=fuse
    DEBU[2016-10-12T18:14:13+02:00] <- Write [ID=0xc47 Node=0x5 Uid=1000 Gid=100 Pid=7479] 0x2 131072 @268304376 fl=WriteLockOwner lock=3159737928114442182 ffl=OpenWriteOnly  source=fuse
    DEBU[2016-10-12T18:14:13+02:00] -> [ID=0xc47] Write 131072           source=fuse
    DEBU[2016-10-12T18:14:13+02:00] <- Write [ID=0xc48 Node=0x5 Uid=1000 Gid=100 Pid=7479] 0x2 8 @268435448 fl=WriteLockOwner lock=3159737928114442182 ffl=OpenWriteOnly  source=fuse
    DEBU[2016-10-12T18:14:13+02:00] -> [ID=0xc48] Write 8                source=fuse
    DEBU[2016-10-12T18:14:13+02:00] <- Write [ID=0xc49 Node=0x5 Uid=1000 Gid=100 Pid=7479] 0x2 131064 @268435456 fl=WriteLockOwner lock=3159737928114442182 ffl=OpenWriteOnly  source=fuse
    DEBU[2016-10-12T18:14:32+02:00] -> [ID=0xc49] Write error=EIO: Timeout when reading or writing data  source=fuse
    DEBU[2016-10-12T18:14:32+02:00] <- Flush [ID=0xc4a Node=0x5 Uid=1000 Gid=100 Pid=7479] 0x1 fl=0x0 lk=0x2bd9a4e794970fc6  source=fuse
    DEBU[2016-10-12T18:14:32+02:00] -> [ID=0xc4a] Flush                  source=fuse
    DEBU[2016-10-12T18:14:32+02:00] <- Release [ID=0xc4b Node=0x5 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly rfl=0 owner=0x0  source=fuse
    DEBU[2016-10-12T18:14:32+02:00] -> [ID=0xc4b] Release                source=fuse
    DEBU[2016-10-12T18:14:32+02:00] <- Write [ID=0xc4c Node=0x5 Uid=1000 Gid=100 Pid=7479] 0x2 131064 @268435456 fl=WriteLockOwner lock=3159737928114442182 ffl=OpenWriteOnly  source=fuse
    DEBU[2016-10-12T18:14:47+02:00] -> [ID=0xc4c] Write error=EIO: Timeout when reading or writing data  source=fuse
    DEBU[2016-10-12T18:14:47+02:00] <- Flush [ID=0xc4d Node=0x5 Uid=1000 Gid=100 Pid=7479] 0x2 fl=0x0 lk=0x2bd9a4e794970fc6  source=fuse
    DEBU[2016-10-12T18:14:47+02:00] -> [ID=0xc4d] Flush                  source=fuse
        268,795,904  25%   79.31kB/s    2:49:08  DEBU[2016-10-12T18:14:47+02:00] <- Release [ID=0xc4e Node=0x5 Uid=0 Gid=0 Pid=0] 0x2 fl=OpenWriteOnly rfl=0x2 owner=0xc8253648  source=fuse
    DEBU[2016-10-12T18:14:47+02:00] -> [ID=0xc4e] Release                source=fuse
      1,073,741,824 100%    2.05MB/s    0:08:18 (xfr#1, to-chk=61/63)
    2016.10.01.FULL.10.dar
      1,073,741,824 100%   34.95MB/s    0:00:29 (xfr#2, to-chk=60/63)
    2016.10.01.FULL.11.dar
      1,073,741,824 100%   36.16MB/s    0:00:28 (xfr#3, to-chk=59/63)
    2016.10.01.FULL.12.dar
      1,073,741,824 100%   35.74MB/s    0:00:28 (xfr#4, to-chk=58/63)
    2016.10.01.FULL.13.dar
      1,073,741,824 100%   35.59MB/s    0:00:28 (xfr#5, to-chk=57/63)
    2016.10.01.FULL.14.dar
      1,073,741,824 100%   38.07MB/s    0:00:26 (xfr#6, to-chk=56/63)
    2016.10.01.FULL.15.dar
      1,073,741,824 100%   39.27MB/s    0:00:26 (xfr#7, to-chk=55/63)
    2016.10.01.FULL.16.dar
      1,073,741,824 100%   38.56MB/s    0:00:26 (xfr#8, to-chk=54/63)
    2016.10.01.FULL.17.dar
      1,073,741,824 100%   39.44MB/s    0:00:25 (xfr#9, to-chk=53/63)
    
    
    ...
    
    DB.PIWIGO.2016.10.12.sql.gz
                 20 100%    0.03kB/s    0:00:00 (xfr#62, to-chk=0/63)
    rsync: write failed on "/mnt/hubic/backup/biniou/2016.10/2016.10.01.FULL.1.dar": Input/output error (5)
    rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1]
    
    bug usage 
    opened by mathieuruellan 16
  • Packaging

    Packaging

    Hello, I'm using 64-bit Arch Linux. I wanted to know which commands are needed to compile svfs, so that I can build a PKGBUILD and publish it on https://aur.archlinux.org/

    documentation packaging 
    opened by widowild1 15
  • Unable to mount svfs at boot time or with Ansible

    Unable to mount svfs at boot time or with Ansible

    Context

    • svfs version : 0.7.2 & 0.7.3
    • storage provider : OVH
    • product : Object Storage
    • os : Ubuntu 14.04

    Steps to reproduce this issue :

    1. Put this line in /etc/fstab
    svfs_backups /mnt/backups svfs username=MY_USER,password=MY_PWD,tenant=MY_TENANT,region=GRA1,container=backups,uid=deploy,gid=deploy,auto 0 0
    
    2. Reboot
    3. Try to access /mnt/backups

    Results you expected :

    The directory should contain some files

    Results you observed :

    The directory is empty (fs not mounted)

    Debug log :

    I don't know how to find this using /etc/fstab at boot time.

    Additional information :

    When I call mount directly, it works. When the system reboots, nothing happens. When I use Ansible to call mount, I get this:

    ls: cannot access /mnt/backups: Transport endpoint is not connected
    
    wontfix usage 
    opened by jdesboeufs 11
  • ls: reading directory .: Input/output error (recurrent on "root" folder)

    ls: reading directory .: Input/output error (recurrent on "root" folder)

    Hello (I'm not saying how much of a fan I am each time anymore because, well I'm here ;))!

    Whenever listing folders of the hubiC root, I get either the following error:

    ls: reading directory .: Input/output error

    or an incomplete list.

    Same thing happens in file browser.

    Once I get one folder deeper however, no problem at all.

    Is that (once more) something I did not read right?

    Thanks and have a nice day! :)

    can't reproduce undefined 
    opened by beankylla 11
  • Input/output error while initializing a borgbackup repo

    Input/output error while initializing a borgbackup repo

    Hi, thanks for this project!

    I'm running into the following errors when trying to use svfs for regular backups (using borgbackup to an OVH PCS container). FYI, I could successfully cp and rsync several small files between my server and the container. I actually had some 408 issues which I'll post later, but the basic configuration should be OK.

    Thanks in advance for your help, Cheers

    Context

    • svfs version : 0.8.2 on Debian/jessie
    • storage provider : OVH
    • product : Public Cloud Object Storage

    Steps to reproduce this issue :

    1. source your openrc.sh script for the credential envvars to be exported

    2. mount the container with the standard options

       mount -t svfs -o debug,extra_attr,blocksize=4k,container=archives ovh-pcs /mnt/ovh-pcs
      

      I subsequently tried to remove extra_attr and blocksize=4k options: same results as below.

    3. attempt to initialize a borgbackup repo with the following command

       borg init --encryption=none /mnt/ovh-pcs/archives
      

    Results you expected :

    I expected the operation to proceed normally without a nasty Input/output error raising an exception :)

    Results you observed :

    The borg command fails quickly after launch

    borg init --encryption=keyfile /mnt/ovh-pcs/archives
    Enter new passphrase:
    Enter same passphrase again:
    Do you want your passphrase to be displayed for verification? [yN]:
    Local Exception.
    Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/borg/archiver.py", line 1684, in main
        exit_code = archiver.run(args)
      File "/usr/lib/python3/dist-packages/borg/archiver.py", line 1621, in run
        return args.func(args)
      File "/usr/lib/python3/dist-packages/borg/archiver.py", line 85, in wrapper
        return method(self, args, repository=repository, **kwargs)
      File "/usr/lib/python3/dist-packages/borg/archiver.py", line 131, in do_init
        manifest.write()
      File "/usr/lib/python3/dist-packages/borg/helpers.py", line 139, in write
        self.repository.put(self.MANIFEST_ID, self.key.encrypt(data))
      File "/usr/lib/python3/dist-packages/borg/repository.py", line 535, in put
        segment, offset = self.io.write_put(id, data)
      File "/usr/lib/python3/dist-packages/borg/repository.py", line 784, in write_put
        fd = self.get_write_fd(raise_full=raise_full)
      File "/usr/lib/python3/dist-packages/borg/repository.py", line 668, in get_write_fd
        sync_dir(os.path.join(self.path, 'data'))
      File "/usr/lib/python3/dist-packages/borg/platform.py", line 10, in sync_dir
        os.fsync(fd)
    OSError: [Errno 5] Input/output error
    
    Platform: Linux freenas 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64
    Linux: debian 8.6
    Borg: 1.0.7  Python: CPython 3.4.2
    PID: 2618  CWD: /root
    sys.argv: ['/usr/bin/borg', 'init', '--encryption=keyfile', '/mnt/ovh-pcs/archives']
    SSH_ORIGINAL_COMMAND: None
    

    Debug log :

    [email protected]:~# mount -t svfs -o debug,extra_attr,blocksize=4k,container=archives ovh-pcs /mnt/ovh-pcs
    [email protected]:~# DEBU[2016-11-07T07:32:36+01:00] Skipping configuration : open : no such file or directory  source=svfs
    DEBU[2016-11-07T07:33:20+01:00] <- Getattr [ID=0x2 Node=0x1 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x2] Getattr valid=1m0s ino=1 size=4096 mode=drwx------  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Lookup [ID=0x3 Node=0x1 Uid=0 Gid=0 Pid=2618] "archives"  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x3] Lookup 0x2 gen=0 valid=1m0s attr={valid=1m0s ino=1613058114198683883 size=4096 mode=drwx------}  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Open [ID=0x4 Node=0x2 Uid=0 Gid=0 Pid=2618] dir=true fl=OpenReadOnly+OpenDirectory+OpenNonblock  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x4] Open 0x1 fl=0            source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Read [ID=0x5 Node=0x2 Uid=0 Gid=0 Pid=2618] 0x1 4096 @0x0 dir=true fl=0 lock=0 ffl=OpenReadOnly+OpenDirectory+OpenNonblock  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x5] Read 0                   source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Getattr [ID=0x7 Node=0x2 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Release [ID=0x6 Node=0x2 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly+OpenDirectory+OpenNonblock rfl=0 owner=0x0 source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x6] Release                  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x7] Getattr valid=1m0s ino=1613058114198683883 size=4096 mode=drwx------  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Lookup [ID=0x8 Node=0x2 Uid=0 Gid=0 Pid=2618] "README"  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x8] Lookup error=ENOENT      source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Create [ID=0x9 Node=0x2 Uid=0 Gid=0 Pid=2618] "README" fl=OpenWriteOnly+OpenCreate+OpenTruncate mode=-rw------- umask=----rwxrwx  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x9] Create {0x3 gen=0 valid=1m0s attr={valid=1m0s ino=13186341086054391566 size=0 mode=-rwx------}} {0x1 fl=OpenDirectIO+OpenNonSeekable}  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- ID=0xa Node=0x3 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0xa] Header error=ENOSYS      source=fuse
    DEBU[2016-11-07T07:33:20+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- ID=0xb Node=0x3 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0xb] Header error=ENOSYS      source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Write [ID=0xc Node=0x3 Uid=0 Gid=0 Pid=2618] 0x1 26 @0 fl=WriteLockOwner lock=15002273224907307667 ffl=OpenWriteOnly  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0xc] Write 26                 source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Flush [ID=0xd Node=0x3 Uid=0 Gid=0 Pid=2618] 0x1 fl=0x0 lk=0xd032c80339f43a93  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0xd] Flush                    source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Release [ID=0xe Node=0x3 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenWriteOnly rfl=0 owner=0x0  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Getattr [ID=0xf Node=0x2 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0xf] Getattr valid=1m0s ino=1613058114198683883 size=4096 mode=drwx------  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Lookup [ID=0x10 Node=0x2 Uid=0 Gid=0 Pid=2618] "data"  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x10] Lookup error=ENOENT     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Mkdir [ID=0x11 Node=0x2 Uid=0 Gid=0 Pid=2618] "data" mode=drwx------ umask=----rwxrwx  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x11] Mkdir 0x4 gen=0 valid=1m0s attr={valid=1m0s ino=12077135400190652796 size=4096 mode=drwx------}source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Getattr [ID=0x12 Node=0x2 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x12] Getattr valid=1m0s ino=1613058114198683883 size=4096 mode=drwx------  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Lookup [ID=0x13 Node=0x2 Uid=0 Gid=0 Pid=2618] "config"  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x13] Lookup error=ENOENT     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Create [ID=0x14 Node=0x2 Uid=0 Gid=0 Pid=2618] "config" fl=OpenWriteOnly+OpenCreate+OpenTruncate mode=-rw------- umask=----rwxrwx  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0xe] Release                  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x14] Create {0x5 gen=0 valid=1m0s attr={valid=1m0s ino=6011966394027229074 size=0 mode=-rwx------}} {0x1 fl=OpenDirectIO+OpenNonSeekable}  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- ID=0x15 Node=0x5 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x15] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- ID=0x16 Node=0x5 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x16] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Write [ID=0x17 Node=0x5 Uid=0 Gid=0 Pid=2618] 0x1 164 @0 fl=WriteLockOwner lock=15002273224907307667 ffl=OpenWriteOnly  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x17] Write 164               source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Flush [ID=0x18 Node=0x5 Uid=0 Gid=0 Pid=2618] 0x1 fl=0x0 lk=0xd032c80339f43a93  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x18] Flush                   source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Release [ID=0x19 Node=0x5 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenWriteOnly rfl=0 owner=0x0  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Getattr [ID=0x1a Node=0x2 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x1a] Getattr valid=1m0s ino=1613058114198683883 size=4096 mode=drwx------  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Lookup [ID=0x1b Node=0x2 Uid=0 Gid=0 Pid=2618] "lock.exclusive"  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x1b] Lookup error=ENOENT     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Mkdir [ID=0x1c Node=0x2 Uid=0 Gid=0 Pid=2618] "lock.exclusive" mode=drwx------ umask=----rwxrwx  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x1c] Mkdir 0x6 gen=0 valid=1m0s attr={valid=1m0s ino=3025830946549282413 size=4096 mode=drwx------}  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Getattr [ID=0x1d Node=0x2 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x1d] Getattr valid=1m0s ino=1613058114198683883 size=4096 mode=drwx------  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Lookup [ID=0x1e Node=0x6 Uid=0 Gid=0 Pid=2618] "freenas.2618-0"  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x19] Release                 source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x1e] Lookup error=ENOENT     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Create [ID=0x1f Node=0x6 Uid=0 Gid=0 Pid=2618] "freenas.2618-0" fl=OpenWriteOnly+OpenCreate+OpenTruncate mode=-rw------- umask=----rwxrwx  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x1f] Create {0x7 gen=0 valid=1m0s attr={valid=1m0s ino=9561885602189467838 size=0 mode=-rwx------}} {0x1 fl=OpenDirectIO+OpenNonSeekable}  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- ID=0x20 Node=0x7 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x20] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Flush [ID=0x21 Node=0x7 Uid=0 Gid=0 Pid=2618] 0x1 fl=0x0 lk=0xd032c80339f43a93  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x21] Flush                   source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Release [ID=0x22 Node=0x7 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenWriteOnly rfl=0 owner=0x0  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x22] Release                 source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Lookup [ID=0x23 Node=0x2 Uid=0 Gid=0 Pid=2618] "lock.roster"  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x23] Lookup error=ENOENT     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Lookup [ID=0x24 Node=0x2 Uid=0 Gid=0 Pid=2618] "lock.roster"  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x24] Lookup error=ENOENT     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Lookup [ID=0x25 Node=0x2 Uid=0 Gid=0 Pid=2618] "lock.roster"  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x25] Lookup error=ENOENT     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Create [ID=0x26 Node=0x2 Uid=0 Gid=0 Pid=2618] "lock.roster" fl=OpenWriteOnly+OpenCreate+OpenTruncate mode=-rw------- umask=----rwxrwx  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x26] Create {0x8 gen=0 valid=1m0s attr={valid=1m0s ino=2329685365225963220 size=0 mode=-rwx------}} {0x1 fl=OpenDirectIO+OpenNonSeekable}  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- ID=0x27 Node=0x8 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x27] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- ID=0x28 Node=0x8 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x28] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Write [ID=0x29 Node=0x8 Uid=0 Gid=0 Pid=2618] 0x1 37 @0 fl=WriteLockOwner lock=15002273224907307667 ffl=OpenWriteOnly  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x29] Write 37                source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Flush [ID=0x2a Node=0x8 Uid=0 Gid=0 Pid=2618] 0x1 fl=0x0 lk=0xd032c80339f43a93  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x2a] Flush                   source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Release [ID=0x2b Node=0x8 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenWriteOnly rfl=0 owner=0x0  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Getattr [ID=0x2c Node=0x2 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x2c] Getattr valid=1m0s ino=1613058114198683883 size=4096 mode=drwx------  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Getattr [ID=0x2d Node=0x5 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x2d] Getattr valid=1m0s ino=6011966394027229074 size=164 mode=-rwx------  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Open [ID=0x2e Node=0x5 Uid=0 Gid=0 Pid=2618] dir=false fl=OpenReadOnly  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x2e] Open 0x1 fl=0           source=fuse
    DEBU[2016-11-07T07:33:20+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- ID=0x2f Node=0x5 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x2f] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- ID=0x30 Node=0x5 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x30] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Read [ID=0x31 Node=0x5 Uid=0 Gid=0 Pid=2618] 0x1 4096 @0x0 dir=false fl=0 lock=0 ffl=OpenReadOnly  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x31] Read 4096               source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Getattr [ID=0x32 Node=0x5 Uid=0 Gid=0 Pid=2618] 0x1 fl=GetattrFh  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x32] Getattr valid=1m0s ino=6011966394027229074 size=164 mode=-rwx------  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Flush [ID=0x33 Node=0x5 Uid=0 Gid=0 Pid=2618] 0x1 fl=0x0 lk=0xd032c80339f43a93  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x33] Flush                   source=fuse
    DEBU[2016-11-07T07:33:20+01:00] <- Release [ID=0x34 Node=0x5 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly rfl=0 owner=0x0  source=fuse
    DEBU[2016-11-07T07:33:20+01:00] -> [ID=0x34] Release                 source=fuse
    DEBU[2016-11-07T07:33:21+01:00] -> [ID=0x2b] Release                 source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Open [ID=0x35 Node=0x2 Uid=0 Gid=0 Pid=2618] dir=true fl=OpenReadOnly+OpenDirectory+OpenNonblock  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x35] Open 0x1 fl=0           source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Read [ID=0x36 Node=0x2 Uid=0 Gid=0 Pid=2618] 0x1 4096 @0x0 dir=true fl=0 lock=0 ffl=OpenReadOnly+OpenDirectory+OpenNonblock  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x36] Read 176                source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Read [ID=0x37 Node=0x2 Uid=0 Gid=0 Pid=2618] 0x1 4096 @0xb0 dir=true fl=0 lock=0 ffl=OpenReadOnly+OpenDirectory+OpenNonblock  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x37] Read 0                  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Release [ID=0x38 Node=0x2 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly+OpenDirectory+OpenNonblock rfl=0 owner=0x0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x38] Release                 source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Getattr [ID=0x39 Node=0x2 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x39] Getattr valid=1m0s ino=1613058114198683883 size=4096 mode=drwx------  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Open [ID=0x3a Node=0x4 Uid=0 Gid=0 Pid=2618] dir=true fl=OpenReadOnly+OpenDirectory+OpenNonblock  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x3a] Open 0x1 fl=0           source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Read [ID=0x3b Node=0x4 Uid=0 Gid=0 Pid=2618] 0x1 4096 @0x0 dir=true fl=0 lock=0 ffl=OpenReadOnly+OpenDirectory+OpenNonblock  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x3b] Read 0                  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Release [ID=0x3c Node=0x4 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly+OpenDirectory+OpenNonblock rfl=0 owner=0x0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x3c] Release                 source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Open [ID=0x3d Node=0x2 Uid=0 Gid=0 Pid=2618] dir=true fl=OpenReadOnly+OpenDirectory+OpenNonblock  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x3d] Open 0x1 fl=0           source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Read [ID=0x3e Node=0x2 Uid=0 Gid=0 Pid=2618] 0x1 4096 @0x0 dir=true fl=0 lock=0 ffl=OpenReadOnly+OpenDirectory+OpenNonblock  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x3e] Read 176                source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Read [ID=0x3f Node=0x2 Uid=0 Gid=0 Pid=2618] 0x1 4096 @0xb0 dir=true fl=0 lock=0 ffl=OpenReadOnly+OpenDirectory+OpenNonblock  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x3f] Read 0                  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Release [ID=0x40 Node=0x2 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly+OpenDirectory+OpenNonblock rfl=0 owner=0x0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x40] Release                 source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Getattr [ID=0x41 Node=0x2 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x41] Getattr valid=1m0s ino=1613058114198683883 size=4096 mode=drwx------  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Getattr [ID=0x42 Node=0x6 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x42] Getattr valid=1m0s ino=3025830946549282413 size=4096 mode=drwx------  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Getattr [ID=0x43 Node=0x4 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x43] Getattr valid=1m0s ino=12077135400190652796 size=4096 mode=drwx------  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Lookup [ID=0x44 Node=0x4 Uid=0 Gid=0 Pid=2618] "0"  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x44] Lookup error=ENOENT     source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Lookup [ID=0x45 Node=0x4 Uid=0 Gid=0 Pid=2618] "0"  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x45] Lookup error=ENOENT     source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Mkdir [ID=0x46 Node=0x4 Uid=0 Gid=0 Pid=2618] "0" mode=drwx------ umask=----rwxrwx  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x46] Mkdir 0x9 gen=0 valid=1m0s attr={valid=1m0s ino=1571364423535387257 size=4096 mode=drwx------}  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Getattr [ID=0x47 Node=0x4 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x47] Getattr valid=1m0s ino=12077135400190652796 size=4096 mode=drwx------  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Open [ID=0x48 Node=0x4 Uid=0 Gid=0 Pid=2618] dir=true fl=OpenReadOnly  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x48] Open 0x1 fl=0           source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Fsync [ID=0x49 Node=0x4 Uid=0 Gid=0 Pid=2618] Handle 0x1 Flags 0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x49] Fsync error=EIO         source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Release [ID=0x4a Node=0x4 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly rfl=0 owner=0x0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x4a] Release                 source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Getattr [ID=0x4b Node=0x8 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x4b] Getattr valid=1m0s ino=2329685365225963220 size=37 mode=-rwx------  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Open [ID=0x4c Node=0x8 Uid=0 Gid=0 Pid=2618] dir=false fl=OpenReadOnly  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x4c] Open 0x1 fl=0           source=fuse
    DEBU[2016-11-07T07:33:43+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- ID=0x4d Node=0x8 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x4d] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:43+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- ID=0x4e Node=0x8 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x4e] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Read [ID=0x4f Node=0x8 Uid=0 Gid=0 Pid=2618] 0x1 4096 @0x0 dir=false fl=0 lock=0 ffl=OpenReadOnly  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x4f] Read 4096               source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Getattr [ID=0x50 Node=0x8 Uid=0 Gid=0 Pid=2618] 0x1 fl=GetattrFh  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x50] Getattr valid=1m0s ino=2329685365225963220 size=37 mode=-rwx------  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Flush [ID=0x51 Node=0x8 Uid=0 Gid=0 Pid=2618] 0x1 fl=0x0 lk=0xd032c80339f43a93  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x51] Flush                   source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Release [ID=0x52 Node=0x8 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly rfl=0 owner=0x0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x52] Release                 source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Open [ID=0x53 Node=0x8 Uid=0 Gid=0 Pid=2618] dir=false fl=OpenWriteOnly  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x53] Open 0x1 fl=OpenDirectIO+OpenNonSeekable  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Setattr [ID=0x54 Node=0x8 Uid=0 Gid=0 Pid=2618] size=0 handle=INVALID-0x0 lockowner  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x54] Setattr valid=1m0s ino=2329685365225963220 size=0 mode=-rwx------  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- ID=0x55 Node=0x8 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x55] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:43+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- ID=0x56 Node=0x8 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x56] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Write [ID=0x57 Node=0x8 Uid=0 Gid=0 Pid=2618] 0x1 17 @0 fl=WriteLockOwner lock=15002273224907307667 ffl=OpenWriteOnly  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x57] Write 17                source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Flush [ID=0x58 Node=0x8 Uid=0 Gid=0 Pid=2618] 0x1 fl=0x0 lk=0xd032c80339f43a93  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x58] Flush                   source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Release [ID=0x59 Node=0x8 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenWriteOnly rfl=0 owner=0x0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Getattr [ID=0x5a Node=0x8 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x5a] Getattr valid=1m0s ino=2329685365225963220 size=17 mode=-rwx------  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Open [ID=0x5b Node=0x8 Uid=0 Gid=0 Pid=2618] dir=false fl=OpenReadOnly  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x5b] Open 0x1 fl=0           source=fuse
    DEBU[2016-11-07T07:33:43+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- ID=0x5c Node=0x8 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x5c] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:43+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- ID=0x5d Node=0x8 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x5d] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Read [ID=0x5e Node=0x8 Uid=0 Gid=0 Pid=2618] 0x1 4096 @0x0 dir=false fl=0 lock=0 ffl=OpenReadOnly  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x5e] Read 4096               source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Getattr [ID=0x5f Node=0x8 Uid=0 Gid=0 Pid=2618] 0x1 fl=GetattrFh  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x5f] Getattr valid=1m0s ino=2329685365225963220 size=17 mode=-rwx------  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Flush [ID=0x60 Node=0x8 Uid=0 Gid=0 Pid=2618] 0x1 fl=0x0 lk=0xd032c80339f43a93  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x60] Flush                   source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Release [ID=0x61 Node=0x8 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly rfl=0 owner=0x0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x61] Release                 source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Open [ID=0x62 Node=0x8 Uid=0 Gid=0 Pid=2618] dir=false fl=OpenReadOnly  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x62] Open 0x1 fl=0           source=fuse
    DEBU[2016-11-07T07:33:43+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- ID=0x63 Node=0x8 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x63] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:43+01:00] No opcode 39                         source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- ID=0x64 Node=0x8 Uid=0 Gid=0 Pid=2618  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x64] Header error=ENOSYS     source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Read [ID=0x65 Node=0x8 Uid=0 Gid=0 Pid=2618] 0x1 4096 @0x0 dir=false fl=0 lock=0 ffl=OpenReadOnly  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x65] Read 4096               source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Getattr [ID=0x66 Node=0x8 Uid=0 Gid=0 Pid=2618] 0x1 fl=GetattrFh  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x66] Getattr valid=1m0s ino=2329685365225963220 size=17 mode=-rwx------  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Flush [ID=0x67 Node=0x8 Uid=0 Gid=0 Pid=2618] 0x1 fl=0x0 lk=0xd032c80339f43a93  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x67] Flush                   source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Release [ID=0x68 Node=0x8 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly rfl=0 owner=0x0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x68] Release                 source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Remove [ID=0x69 Node=0x2 Uid=0 Gid=0 Pid=2618] "lock.roster" dir=false  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x69] Remove                  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Getattr [ID=0x6a Node=0x2 Uid=0 Gid=0 Pid=2618] 0x0 fl=0  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x6a] Getattr valid=1m0s ino=1613058114198683883 size=4096 mode=drwx------  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Remove [ID=0x6b Node=0x6 Uid=0 Gid=0 Pid=2618] "freenas.2618-0" dir=false  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x6b] Remove                  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Remove [ID=0x6c Node=0x2 Uid=0 Gid=0 Pid=2618] "lock.exclusive" dir=true  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Forget [ID=0x6d Node=0x7 Uid=0 Gid=0 Pid=0] 1  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x6d] Forget                  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x59] Release                 source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Forget [ID=0x6e Node=0x8 Uid=0 Gid=0 Pid=0] 1  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x6e] Forget                  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x6c] Remove                  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] <- Forget [ID=0x6f Node=0x6 Uid=0 Gid=0 Pid=0] 1  source=fuse
    DEBU[2016-11-07T07:33:43+01:00] -> [ID=0x6f] Forget                  source=fuse
    

    Additional information :

    bug usage 
    opened by bosr 10
  • Pseudo directory not recognized by horizon

    Pseudo directory not recognized by horizon

    Context

    • svfs version : svfs_0.5.2
    • storage provider : Swift - Openstack (pcs)
    • product : svfs

    The first issue: the file size reported by the du command is always zero (whereas ls -la shows the correct size).

    Steps to reproduce this issue :

    1. mount swift on your machine
    2. cd to one container
    3. type: du -sh *

    Results you expected :

    /home/ftpuser/cloudfuse/123# du -sh *
    12K     10.jpg
    476M    thumuc

    Results you observed :

    /home/ftpuser/cloudfuse/123# du -sh *
    0       10.jpg
    0       thumuc

    Debug log :

    Additional information :
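    One plausible explanation (an assumption on my part, not confirmed in this issue): du sums st_blocks, not st_size, so a FUSE filesystem that never fills in the Blocks attribute makes every file appear as 0 under du while ls -l still shows the right size. A hypothetical helper sketching the usual fix, rounding the object size up to 512-byte blocks:

```go
package main

import "fmt"

// blocksFor rounds a byte size up to 512-byte blocks, the unit du sums
// (st_blocks). A FUSE Getattr that leaves Blocks at zero makes every
// file show as 0 under du even though ls -l (st_size) looks correct.
// This is an illustrative helper, not the svfs implementation.
func blocksFor(size uint64) uint64 {
	return (size + 511) / 512
}

func main() {
	// A 12K JPEG would report 24 blocks instead of 0.
	fmt.Println(blocksFor(0), blocksFor(1), blocksFor(12*1024))
}
```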

    And the second issue concerns pseudo directories: the pseudo-directory creation function may not be implemented correctly.

    Steps to reproduce this issue :

    1. mount swift on your machine
    2. cd to one container (ex: container1)
    3. mkdir one directory on this container (ex: dir1)
    4. using the swift client, type: swift stat container1 dir1

    Results you expected

    Account: AUTH_751c471132a54a3b9d1730baf6a52cb3
    Container: container1
    Object: dir1/
    Content Type: httpd/unix-directory
    Content Length: 0
    Last Modified: Wed, 08 Jun 2016 01:40:34 GMT
    ETag: d41d8cd98f00b204e9800998ecf8427e
    Accept-Ranges: bytes
    X-Timestamp: 1465350033.69469
    X-Trans-Id: txf2c6546b946c41f6a12c7-005757e4b8

    Results you observed

    Account: AUTH_751c471132a54a3b9d1730baf6a52cb3
    Container: container1
    Object: dir1
    Content Type: application/directory
    Content Length: 0
    Last Modified: Wed, 08 Jun 2016 07:26:52 GMT
    ETag: d41d8cd98f00b204e9800998ecf8427e
    Accept-Ranges: bytes
    X-Timestamp: 1465370811.72025
    X-Trans-Id: tx76b30e75ae41455d841a8-005757e4d8

    Please note the difference in Content Type: it makes Swift treat dir1 as an object rather than a directory when it is retrieved via the API.
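    A minimal sketch of the PUT request that would match the expected output above: a zero-byte marker object whose name ends with a trailing slash and whose Content-Type is httpd/unix-directory. The function name and parameters are hypothetical; svfs itself may build the request differently.

```go
package main

import (
	"fmt"
	"net/http"
)

// buildDirMarkerRequest builds a PUT for a zero-byte pseudo-directory
// marker in the form Horizon recognizes: object name with a trailing
// slash and Content-Type "httpd/unix-directory". Hypothetical helper,
// not the svfs implementation.
func buildDirMarkerRequest(storageURL, token, container, dir string) (*http.Request, error) {
	url := fmt.Sprintf("%s/%s/%s/", storageURL, container, dir)
	req, err := http.NewRequest(http.MethodPut, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("X-Auth-Token", token)
	req.Header.Set("Content-Type", "httpd/unix-directory")
	return req, nil
}

func main() {
	req, err := buildDirMarkerRequest("https://storage.example/v1/AUTH_x", "token", "container1", "dir1")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String(), req.Header.Get("Content-Type"))
}
```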

    bug 
    opened by supnobita 9
  • Rsync errors : symlink and mkstemp

    Rsync errors : symlink and mkstemp

    Hi,

    I just got this:

    rsync: symlink "/mnt/svfs/hubic/.composer/vendor/bin/.laravel.15610" -> "../laravel/installer/laravel" failed: Input/output error (5)

    improvement documentation 
    opened by boscorelly 9
  • Error 401 with Hubic

    Error 401 with Hubic

    Hi,

    For many days now, when I try to mount hubiC with svfs, I have been getting this error:

    2016/04/18 09:02:37 Invalid reply from server when fetching hubic API token : 401

    I use this command:

    mount -t svfs -o hubic_auth=https://lb1040.hubic.ovh.net/v1/AUTH_xyz123,hubic_token=mytoken123,container=default hubic /mnt/hubic

    I use version 0.5.1 on Debian 7.10 x64.

    usage 
    opened by remyj38 9
  • Crash on map write (race condition)

    Crash on map write (race condition)

    Context

    • svfs version : 0.7.0
    • storage provider : ovh
    • product : hubic

    Steps to reproduce this issue :

    1. mount
    2. rsync data to hubic (-rtW --inplace)
    3. after some time, svfs crashes

    Results you expected :

    no crash

    Results you observed :

    crash

    Debug log :

    This is a Go trace. You seem to be missing a mutex when writing to a map. The problem may have existed for a long time, since runtime detection of concurrent map writes was only added in Go 1.6.
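    A minimal sketch of the usual fix, guarding every map access with a sync.Mutex (the struct and methods below are an assumed simplification, not the actual svfs cache code):

```go
package main

import (
	"fmt"
	"sync"
)

// SimpleCache is a minimal stand-in for a shared cache: the point is
// that every map write goes through the mutex, which is what the
// crashing delete in SimpleCache.Remove appears to be missing.
type SimpleCache struct {
	mu      sync.Mutex
	entries map[string]string
}

func (c *SimpleCache) Set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[key] = value
}

func (c *SimpleCache) Remove(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	// Without the lock, concurrent calls here produce
	// "fatal error: concurrent map writes" at runtime.
	delete(c.entries, key)
}

func main() {
	c := &SimpleCache{entries: make(map[string]string)}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			key := fmt.Sprintf("k%d", n)
			c.Set(key, "v")
			c.Remove(key)
		}(i)
	}
	wg.Wait()
	fmt.Println(len(c.entries)) // every entry removed, no crash
}
```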

    fatal error: concurrent map writes
    
    goroutine 191452 [running]:
    runtime.throw(0x999b80, 0x15)
            /home/admin/.gvm/gos/go1.6.2/src/runtime/panic.go:547 +0x90 fp=0xc82ab3b160 sp=0xc82ab3b148
    runtime.mapdelete(0x7cea20, 0xc82001b6b0, 0xc82ab3b1f8)
            /home/admin/.gvm/gos/go1.6.2/src/runtime/hashmap.go:559 +0x5a fp=0xc82ab3b1c0 sp=0xc82ab3b160
    github.com/ovh/svfs/svfs.(*SimpleCache).Remove(0xc82002a058, 0x7ffd94a14406, 0x7, 0xc82aa85200, 0x55)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/cache.go:191 +0x84 fp=0xc82ab3b210 sp=0xc82ab3b1c0
    github.com/ovh/svfs/svfs.(*ObjectHandle).Release(0xc8205a86c0, 0x7f93c5c6a1f0, 0xc823383980, 0xc82ad150e0, 0x0, 0x0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/handle.go:56 +0x1c9 fp=0xc82ab3b278 sp=0xc82ab3b210
    github.com/ovh/svfs/vendor/bazil.org/fuse/fs.(*Server).handleRequest(0xc82113ef00, 0x7f93c5c6a1f0, 0xc823383980, 0x7f93c5c6fce0, 0xc82a9a2de0, 0xc82a9998c0, 0x7f93c5c6a458, 0xc82ad150e0, 0xc82ab3bec8, 0x0, ...)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fs/serve.go:1288 +0x3e50 fp=0xc82ab3bd88 sp=0xc82ab3b278
    github.com/ovh/svfs/vendor/bazil.org/fuse/fs.(*Server).serve(0xc82113ef00, 0x7f93c5c6a458, 0xc82ad150e0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fs/serve.go:871 +0x740 fp=0xc82ab3bf68 sp=0xc82ab3bd88
    github.com/ovh/svfs/vendor/bazil.org/fuse/fs.(*Server).Serve.func1(0xc82113ef00, 0x7f93c5c6a458, 0xc82ad150e0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fs/serve.go:418 +0x6d fp=0xc82ab3bf88 sp=0xc82ab3bf68
    runtime.goexit()
            /home/admin/.gvm/gos/go1.6.2/src/runtime/asm_amd64.s:1998 +0x1 fp=0xc82ab3bf90 sp=0xc82ab3bf88
    created by github.com/ovh/svfs/vendor/bazil.org/fuse/fs.(*Server).Serve
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fs/serve.go:419 +0x580
    
    goroutine 1 [syscall]:
    syscall.Syscall(0x0, 0x7, 0xc821210000, 0x1001000, 0xc84ad1a060, 0x1001000, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/syscall/asm_linux_amd64.s:18 +0x5
    syscall.read(0x7, 0xc821210000, 0x1001000, 0x1001000, 0xc82005e528, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/syscall/zsyscall_linux_amd64.go:783 +0x5f
    syscall.Read(0x7, 0xc821210000, 0x1001000, 0x1001000, 0x20000, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/syscall/syscall_unix.go:161 +0x4d
    github.com/ovh/svfs/vendor/bazil.org/fuse.(*Conn).ReadRequest(0xc82005e4e0, 0x0, 0x0, 0x0, 0x0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fuse.go:555 +0xf3
    github.com/ovh/svfs/vendor/bazil.org/fuse/fs.(*Server).Serve(0xc82113ef00, 0x7f93c5cb06e8, 0xbe7188, 0x0, 0x0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fs/serve.go:407 +0x456
    main.main()
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/main.go:192 +0x6fa
    
    goroutine 17 [syscall, 593 minutes, locked to thread]:
    runtime.goexit()
            /home/admin/.gvm/gos/go1.6.2/src/runtime/asm_amd64.s:1998 +0x1
    
    goroutine 20 [chan receive, 174 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 21 [chan receive, 175 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 22 [chan receive, 173 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 23 [chan receive, 173 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 24 [chan receive, 172 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 25 [chan receive, 172 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 26 [chan receive, 29 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 27 [chan receive, 169 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 28 [chan receive, 28 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 29 [chan receive]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 30 [chan receive]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 31 [chan receive, 10 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 32 [chan receive, 8 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 33 [chan receive, 11 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 34 [chan receive, 29 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 35 [chan receive, 12 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 36 [chan receive, 3 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 37 [chan receive, 5 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 38 [chan receive, 6 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 39 [chan receive, 5 minutes]:
    github.com/ovh/svfs/svfs.processTasks(0xc82112e000)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:48 +0x76
    created by github.com/ovh/svfs/svfs.(*Lister).Start
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/lister.go:31 +0x87
    
    goroutine 191258 [select]:
    github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).doTimeoutRequest(0xc82000ca00, 0xc82a9a1f80, 0xc836f4c7e0, 0xc8201ea160, 0x0, 0x0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:231 +0x21e
    github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).Call(0xc82000ca00, 0xc8201cb220, 0x45, 0x7ffd94a14406, 0x7, 0xc82aa85200, 0x55, 0x93d268, 0x3, 0x0, ...)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:509 +0xa21
    github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).storage(0xc82000ca00, 0x7ffd94a14406, 0x7, 0xc82aa85200, 0x55, 0x93d268, 0x3, 0x0, 0xc820371a10, 0xc82001b650, ...)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:568 +0x128
    github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).ObjectCreate.func1(0xc82002a988, 0xc82000ca00, 0x7ffd94a14406, 0x7, 0xc82aa85200, 0x55, 0xc820371a10, 0xc82002a990)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:1215 +0x102
    created by github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).ObjectCreate
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:1219 +0x436
    
    goroutine 191273 [select]:
    net/http.(*persistConn).writeLoop(0xc82ac665b0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/transport.go:1277 +0x472
    created by net/http.(*Transport).dialConn
            /home/admin/.gvm/gos/go1.6.2/src/net/http/transport.go:858 +0x10cb
    
    goroutine 191478 [IO wait]:
    net.runtime_pollWait(0x7f93c5c61658, 0x77, 0x694ab4)
            /home/admin/.gvm/gos/go1.6.2/src/runtime/netpoll.go:160 +0x60
    net.(*pollDesc).Wait(0xc82322a4c0, 0x77, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/fd_poll_runtime.go:73 +0x3a
    net.(*pollDesc).WaitWrite(0xc82322a4c0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/fd_poll_runtime.go:82 +0x36
    net.(*netFD).connect(0xc82322a460, 0x0, 0x0, 0x7f93c5c69460, 0xc82322c440, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
            /home/admin/.gvm/gos/go1.6.2/src/net/fd_unix.go:127 +0x28e
    net.(*netFD).dial(0xc82322a460, 0x7f93c5ca6f08, 0x0, 0x7f93c5ca6f08, 0xc8201fd740, 0x0, 0xc800000000, 0x0, 0x0, 0x0, ...)
            /home/admin/.gvm/gos/go1.6.2/src/net/sock_posix.go:137 +0x364
    net.socket(0x943200, 0x3, 0x2, 0x1, 0x0, 0xc8201fd700, 0x7f93c5ca6f08, 0x0, 0x7f93c5ca6f08, 0xc8201fd740, ...)
            /home/admin/.gvm/gos/go1.6.2/src/net/sock_posix.go:89 +0x429
    net.internetSocket(0x943200, 0x3, 0x7f93c5ca6f08, 0x0, 0x7f93c5ca6f08, 0xc8201fd740, 0x0, 0xc800000000, 0x0, 0x1, ...)
            /home/admin/.gvm/gos/go1.6.2/src/net/ipsock_posix.go:161 +0x153
    net.dialTCP(0x943200, 0x3, 0x0, 0xc8201fd740, 0x0, 0x0, 0x0, 0x0, 0x10, 0x0, ...)
            /home/admin/.gvm/gos/go1.6.2/src/net/tcpsock_posix.go:171 +0x12b
    net.dialSingle(0xc820457e60, 0x7f93c5ca6e78, 0xc8201fd740, 0x0, 0x7ffd00000000, 0x0, 0x0, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/dial.go:371 +0x40c
    net.dialSerial.func1(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/dial.go:343 +0x75
    net.dial(0x943200, 0x3, 0x7f93c5ca6e78, 0xc8201fd740, 0xc82322c420, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
            /home/admin/.gvm/gos/go1.6.2/src/net/fd_unix.go:40 +0x60
    net.dialSerial(0xc820457e60, 0xc82383fb00, 0xa, 0x10, 0x0, 0x0, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/dial.go:345 +0x7d0
    net.(*Dialer).Dial(0xc82072f868, 0x943200, 0x3, 0xc82ab45620, 0x18, 0x0, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/dial.go:239 +0x512
    net.Dial(0x943200, 0x3, 0xc82ab45620, 0x18, 0x0, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/dial.go:193 +0x96
    net/http.(*Transport).dial(0xc82113e000, 0x943200, 0x3, 0xc82ab45620, 0x18, 0x0, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/transport.go:665 +0x1d3
    net/http.(*Transport).dialConn(0xc82113e000, 0x0, 0xc820fab600, 0x5, 0xc82ab45620, 0x18, 0xc8201fc780, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/transport.go:762 +0x1e3d
    net/http.(*Transport).getConn.func4(0xc82113e000, 0x0, 0xc820fab600, 0x5, 0xc82ab45620, 0x18, 0xc82ad269c0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/transport.go:706 +0x66
    created by net/http.(*Transport).getConn
            /home/admin/.gvm/gos/go1.6.2/src/net/http/transport.go:708 +0x262
    
    goroutine 191454 [chan receive]:
    github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*ObjectCreateFile).Close(0xc82adf91d0, 0x0, 0x0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:1130 +0xa2
    github.com/ovh/svfs/svfs.(*ObjectHandle).Release(0xc8205a8780, 0x7f93c5c6a1f0, 0xc82adcbbc0, 0xc82ad151d0, 0x0, 0x0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/handle.go:51 +0xd0
    github.com/ovh/svfs/vendor/bazil.org/fuse/fs.(*Server).handleRequest(0xc82113ef00, 0x7f93c5c6a1f0, 0xc82adcbbc0, 0x7f93c5c6fce0, 0xc82a9a2de0, 0xc82a9998c0, 0x7f93c5c6a458, 0xc82ad151d0, 0xc820731ec8, 0x0, ...)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fs/serve.go:1288 +0x3e50
    github.com/ovh/svfs/vendor/bazil.org/fuse/fs.(*Server).serve(0xc82113ef00, 0x7f93c5c6a458, 0xc82ad151d0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fs/serve.go:871 +0x740
    github.com/ovh/svfs/vendor/bazil.org/fuse/fs.(*Server).Serve.func1(0xc82113ef00, 0x7f93c5c6a458, 0xc82ad151d0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fs/serve.go:418 +0x6d
    created by github.com/ovh/svfs/vendor/bazil.org/fuse/fs.(*Server).Serve
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fs/serve.go:419 +0x580
    
    goroutine 191477 [select]:
    net/http.(*Transport).getConn(0xc82113e000, 0xc836f4ca80, 0x0, 0xc820fab600, 0x5, 0xc82ab45620, 0x18, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/transport.go:711 +0x4ef
    net/http.(*Transport).RoundTrip(0xc82113e000, 0xc836f4ca80, 0xc82113e000, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/transport.go:311 +0x7e9
    net/http.send(0xc836f4ca80, 0x7f93c5ca5550, 0xc82113e000, 0x0, 0x0, 0x0, 0xc820fab6b0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/client.go:260 +0x6b7
    net/http.(*Client).send(0xc82011c090, 0xc836f4ca80, 0x0, 0x0, 0x0, 0x20, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/client.go:155 +0x185
    net/http.(*Client).doFollowingRedirects(0xc82011c090, 0xc836f4ca80, 0xa23540, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/client.go:475 +0x8a4
    net/http.(*Client).Do(0xc82011c090, 0xc836f4ca80, 0xc82002aa28, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/client.go:191 +0x1e4
    github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).doTimeoutRequest.func1(0xc82000ca00, 0xc836f4ca80, 0xc82ad26960)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:227 +0x36
    created by github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).doTimeoutRequest
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:229 +0x8b
    
    goroutine 191475 [semacquire]:
    sync.runtime_Syncsemacquire(0xc8207caf78)
            /home/admin/.gvm/gos/go1.6.2/src/runtime/sema.go:241 +0x201
    sync.(*Cond).Wait(0xc8207caf68)
            /home/admin/.gvm/gos/go1.6.2/src/sync/cond.go:63 +0x9b
    io.(*pipe).write(0xc8207caf00, 0xc82384e050, 0x1fff0, 0x1000fb0, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/io/pipe.go:94 +0x23a
    io.(*PipeWriter).Write(0xc82002aa30, 0xc82384e050, 0x1fff0, 0x1000fb0, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/io/pipe.go:161 +0x50
    github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*ObjectCreateFile).Write(0xc82ad15400, 0xc82384e050, 0x1fff0, 0x1000fb0, 0x0, 0x0, 0x0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:1105 +0x61
    github.com/ovh/svfs/svfs.(*ObjectHandle).Write(0xc82ad266c0, 0x7f93c5c6a1f0, 0xc82adcbc00, 0xc82ab49dc0, 0xc82ad23d58, 0x0, 0x0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/svfs/handle.go:87 +0x118
    github.com/ovh/svfs/vendor/bazil.org/fuse/fs.(*Server).handleRequest(0xc82113ef00, 0x7f93c5c6a1f0, 0xc82adcbc00, 0x7f93c5c6fce0, 0xc82a9a2ea0, 0xc82adcba00, 0x7f93c5cb4e18, 0xc82ab49dc0, 0xc82a937ec8, 0x0, ...)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fs/serve.go:1252 +0x2538
    github.com/ovh/svfs/vendor/bazil.org/fuse/fs.(*Server).serve(0xc82113ef00, 0x7f93c5cb4e18, 0xc82ab49dc0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fs/serve.go:871 +0x740
    github.com/ovh/svfs/vendor/bazil.org/fuse/fs.(*Server).Serve.func1(0xc82113ef00, 0x7f93c5cb4e18, 0xc82ab49dc0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fs/serve.go:418 +0x6d
    created by github.com/ovh/svfs/vendor/bazil.org/fuse/fs.(*Server).Serve
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/bazil.org/fuse/fs/serve.go:419 +0x580
    
    goroutine 191476 [select]:
    github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).doTimeoutRequest(0xc82000ca00, 0xc82adcbc80, 0xc836f4ca80, 0xc8201ea160, 0x0, 0x0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:231 +0x21e
    github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).Call(0xc82000ca00, 0xc8201cb220, 0x45, 0x7ffd94a14406, 0x7, 0xc82aa85260, 0x55, 0x93d268, 0x3, 0x0, ...)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:509 +0xa21
    github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).storage(0xc82000ca00, 0x7ffd94a14406, 0x7, 0xc82aa85260, 0x55, 0x93d268, 0x3, 0x0, 0xc8201fc780, 0xc82001b650, ...)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:568 +0x128
    github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).ObjectCreate.func1(0xc82002aa20, 0xc82000ca00, 0x7ffd94a14406, 0x7, 0xc82aa85260, 0x55, 0xc8201fc780, 0xc82002aa28)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:1215 +0x102
    created by github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).ObjectCreate
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:1219 +0x436
    
    goroutine 191272 [IO wait]:
    net.runtime_pollWait(0x7f93c5c61c58, 0x72, 0xc8205ae400)
            /home/admin/.gvm/gos/go1.6.2/src/runtime/netpoll.go:160 +0x60
    net.(*pollDesc).Wait(0xc82059b5d0, 0x72, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/fd_poll_runtime.go:73 +0x3a
    net.(*pollDesc).WaitRead(0xc82059b5d0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/fd_poll_runtime.go:78 +0x36
    net.(*netFD).Read(0xc82059b570, 0xc8205ae400, 0x400, 0x400, 0x0, 0x7f93c5ca1050, 0xc8200101a0)
            /home/admin/.gvm/gos/go1.6.2/src/net/fd_unix.go:250 +0x23a
    net.(*conn).Read(0xc820118c50, 0xc8205ae400, 0x400, 0x400, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/net.go:172 +0xe4
    crypto/tls.(*block).readFromUntil(0xc820273230, 0x7f93c5c69508, 0xc820118c50, 0x5, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/crypto/tls/conn.go:460 +0xcc
    crypto/tls.(*Conn).readRecord(0xc8207d7800, 0xa23a17, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/crypto/tls/conn.go:562 +0x2d1
    crypto/tls.(*Conn).Read(0xc8207d7800, 0xc838578000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/crypto/tls/conn.go:939 +0x167
    net/http.noteEOFReader.Read(0x7f93c5cb04a8, 0xc8207d7800, 0xc82ac66618, 0xc838578000, 0x1000, 0x1000, 0x60, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/transport.go:1687 +0x67
    net/http.(*noteEOFReader).Read(0xc838a0bf80, 0xc838578000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
            <autogenerated>:284 +0xd0
    bufio.(*Reader).fill(0xc8205a9260)
            /home/admin/.gvm/gos/go1.6.2/src/bufio/bufio.go:97 +0x1e9
    bufio.(*Reader).Peek(0xc8205a9260, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/bufio/bufio.go:132 +0xcc
    net/http.(*persistConn).readLoop(0xc82ac665b0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/transport.go:1073 +0x177
    created by net/http.(*Transport).dialConn
            /home/admin/.gvm/gos/go1.6.2/src/net/http/transport.go:857 +0x10a6
    
    goroutine 191259 [select]:
    net/http.(*persistConn).roundTrip(0xc82ac665b0, 0xc8205ac010, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/transport.go:1473 +0xf1f
    net/http.(*Transport).RoundTrip(0xc82113e000, 0xc836f4c7e0, 0xc82113e000, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/transport.go:324 +0x9bb
    net/http.send(0xc836f4c7e0, 0x7f93c5ca5550, 0xc82113e000, 0x0, 0x0, 0x0, 0xc820fab550, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/client.go:260 +0x6b7
    net/http.(*Client).send(0xc82011c090, 0xc836f4c7e0, 0x0, 0x0, 0x0, 0x20, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/client.go:155 +0x185
    net/http.(*Client).doFollowingRedirects(0xc82011c090, 0xc836f4c7e0, 0xa23540, 0x0, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/client.go:475 +0x8a4
    net/http.(*Client).Do(0xc82011c090, 0xc836f4c7e0, 0xc82002a990, 0x0, 0x0)
            /home/admin/.gvm/gos/go1.6.2/src/net/http/client.go:191 +0x1e4
    github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).doTimeoutRequest.func1(0xc82000ca00, 0xc836f4c7e0, 0xc82adf3aa0)
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:227 +0x36
    created by github.com/ovh/svfs/vendor/github.com/xlucas/swift.(*Connection).doTimeoutRequest
            /home/admin/.gvm/pkgsets/go1.6.2/global/src/github.com/ovh/svfs/vendor/github.com/xlucas/swift/swift.go:229 +0x8b
    
    bug performance 
    opened by speed47 8
  • /mnt is not accessible: Transport endpoint is not connected

    /mnt is not accessible: Transport endpoint is not connected

    I am using svfs to mount a container, but when copying data, after a few minutes I get: "/mnt is not accessible: Transport endpoint is not connected"

    It would also be great to have df and du working.

    performance usage 
    opened by lvenier 8
  • Upgrade keystone API v2 to v3

    Upgrade keystone API v2 to v3

    What I did

    Be able to connect to keystone API v3.

    How I did it

    Added the missing parameter "os_user_domain_name" and modified auth_v3.go to select the configured region.

    How to verify it

    Add this parameter to your configuration file: OS_USER_DOMAIN_NAME="default". Then switch the auth URL to v3: OS_AUTH_URL=https://auth.cloud.ovh.net/v3
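    The v3 switch described above amounts to the following environment change (a sketch; only OS_AUTH_URL and OS_USER_DOMAIN_NAME differ from a sourced v2 OpenRC file, and the domain value "default" is the one assumed in the PR description):

    ```shell
    # Keystone v3 variant of the OpenRC variables (sketch).
    # OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME and OS_REGION_NAME keep their usual values.
    export OS_AUTH_URL="https://auth.cloud.ovh.net/v3"   # was .../v2.0
    export OS_USER_DOMAIN_NAME="default"                 # new, required by v3
    ```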

    Description for changelog

    Support keystone API v3

    opened by davidbullado 5
  • Lookup error=ENOENT

    Lookup error=ENOENT

    Context

    • svfs version : 0.9.1
    • storage provider : OVH
    • product : Cloud Object Storage

    Steps to reproduce this issue :

    1. mount in /etc/fstab

    backup /backup svfs auth_url=https://auth.cloud.ovh.net/v2.0,username=PRIVATE,password=PRIVATE,tenant=PRIVATE,region=BHS,container=backup,_netdev,auto,uid=1000,gid=1000,allow_other,default_perm=false,mode=0644,debug=true,stderr=/var/log/messages,extra_attr 0 0

    2. Write a file using cURL over SFTP

    Results you expected :

    Write the file properly

    Results you observed :

    I see this error in SFTP(OpenSSH) server

    sftp-server[1286]: error: process_write: seek failed

    Debug log :

    Feb 11 07:07:30 backup-gateway systemd-logind: New session 2 of user centos.
    Feb 11 07:07:30 backup-gateway systemd: Started Session 2 of user centos.
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="<- Open [ID=0x4 Node=0x1 Uid=1000 Gid=1000 Pid=1273] dir=true fl=OpenReadOnly+OpenDirectory+OpenNonblock" source=fuse
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="-> [ID=0x4] Open 0x1 fl=0" source=fuse
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="<- Read [ID=0x5 Node=0x1 Uid=1000 Gid=1000 Pid=1273] 0x1 4096 @0x0 dir=true fl=0 lock=0 ffl=OpenReadOnly+OpenDirectory+OpenNonblock" source=fuse
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="-> [ID=0x5] Read 88" source=fuse
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="<- Lookup [ID=0x6 Node=0x1 Uid=1000 Gid=1000 Pid=1273] \"plesk01.gonkar.com\"" source=fuse
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="-> [ID=0x6] Lookup 0x2 gen=0 valid=1m0s attr={valid=1m0s ino=1822200889090297369 size=4096 mode=drw-r--r--}" source=fuse
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="<- Lookup [ID=0x7 Node=0x1 Uid=1000 Gid=1000 Pid=1273] \"secure-20200119\"" source=fuse
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="-> [ID=0x7] Lookup 0x3 gen=0 valid=1m0s attr={valid=1m0s ino=16608916460319320181 size=295382930 mode=-rw-r--r--}" source=fuse
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="<- Read [ID=0x8 Node=0x1 Uid=1000 Gid=1000 Pid=1273] 0x1 4096 @0x58 dir=true fl=0 lock=0 ffl=OpenReadOnly+OpenDirectory+OpenNonblock" source=fuse
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="-> [ID=0x8] Read 0" source=fuse
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="<- Read [ID=0x9 Node=0x1 Uid=1000 Gid=1000 Pid=1273] 0x1 4096 @0x58 dir=true fl=0 lock=0 ffl=OpenReadOnly+OpenDirectory+OpenNonblock" source=fuse
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="-> [ID=0x9] Read 0" source=fuse
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="<- Release [ID=0xa Node=0x1 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly+OpenDirectory+OpenNonblock rfl=0 owner=0x0" source=fuse
    Feb 11 07:07:30 backup-gateway mount: time="2020-02-11T07:07:30-05:00" level=debug msg="-> [ID=0xa] Release" source=fuse
    Feb 11 07:07:30 backup-gateway systemd-logind: Removed session 2.
    Feb 11 07:07:31 backup-gateway systemd-logind: New session 3 of user centos.
    Feb 11 07:07:31 backup-gateway systemd: Started Session 3 of user centos.
    Feb 11 07:07:31 backup-gateway mount: time="2020-02-11T07:07:31-05:00" level=debug msg="<- Lookup [ID=0xb Node=0x2 Uid=1000 Gid=1000 Pid=1286] \"test-7ed97e18dc7498e06a01bf95d0925ab3\"" source=fuse
    Feb 11 07:07:31 backup-gateway mount: time="2020-02-11T07:07:31-05:00" level=debug msg="-> [ID=0xb] Lookup error=ENOENT" source=fuse
    Feb 11 07:07:31 backup-gateway mount: time="2020-02-11T07:07:31-05:00" level=debug msg="<- Create [ID=0xc Node=0x2 Uid=1000 Gid=1000 Pid=1286] \"test-7ed97e18dc7498e06a01bf95d0925ab3\" fl=OpenWriteOnly+OpenCreate+OpenTruncate mode=-rw-r--r-- umask=--------w-" source=fuse
    Feb 11 07:07:31 backup-gateway mount: time="2020-02-11T07:07:31-05:00" level=debug msg="-> [ID=0xc] Create {0x4 gen=0 valid=1m0s attr={valid=1m0s ino=12840365178257753003 size=0 mode=-rw-r--r--}} {0x1 fl=OpenDirectIO+OpenNonSeekable}" source=fuse
    Feb 11 07:07:31 backup-gateway sftp-server[1286]: error: process_write: seek failed
    Feb 11 07:07:31 backup-gateway mount: time="2020-02-11T07:07:31-05:00" level=debug msg="<- Flush [ID=0xd Node=0x4 Uid=1000 Gid=1000 Pid=1286] 0x1 fl=0x0 lk=0xc46686e82fc5621b" source=fuse
    Feb 11 07:07:31 backup-gateway mount: time="2020-02-11T07:07:31-05:00" level=debug msg="-> [ID=0xd] Flush" source=fuse
    Feb 11 07:07:31 backup-gateway mount: time="2020-02-11T07:07:31-05:00" level=debug msg="<- Release [ID=0xe Node=0x4 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenWriteOnly rfl=0 owner=0x0" source=fuse
    Feb 11 07:07:31 backup-gateway mount: time="2020-02-11T07:07:31-05:00" level=debug msg="-> [ID=0xe] Release" source=fuse
    

    Additional information :

    The error appears when configuring Plesk (Web Hosting Panel) with the SFTP Backup extension, which uses cURL to perform its tasks.

    opened by alebeta90 0
  • Keystone API v2 deprecated, v3 doesn't work

    Keystone API v2 deprecated, v3 doesn't work

    Context

    • svfs version : 0.9.1
    • storage provider : OVH
    • product : OVH Public Cloud Storage

    Steps to reproduce this issue :

    1. auth_url=https://auth.cloud.ovh.net/v3/

    Results you expected :

    It should mount

    Results you observed :

    It doesn't

    Debug log :

    DEBU[2020-02-05T16:04:46-05:00] Skipping configuration : open : no such file or directory  source=svfs
    FATA[2020-02-05T16:04:46-05:00] Bad Request
    

    Additional information :

    OVH said they were deprecating Keystone API v2 by March 24th, 2020, and svfs is not compatible with v3. So will it ever work with v3, and if not, what are the alternatives, if any? This problem is similar to #111, but I'm not sure it's the same thing, as I use OVH as a provider and not my own Swift server.

    opened by juju2143 3
  • Inodes disk are always full

    Inodes disk are always full

    Thank you for an awesome idea - that project!

    Context

    • svfs version : 0.9.1
    • storage provider : swift (Open Stack)

    Steps to reproduce this issue :

    mount -t svfs \
          -o mode=510 \
          -o attr \
          -o container=foo \
          swift \
          /var/spool/foo
    
    # df -i
    Filesystem      Inodes  IUsed   IFree IUse% Mounted on
    swift             3000   3000       0  100% /var/spool/foo
    

    Results you expected :

    mount -t svfs \
          -o mode=510 \
          -o attr \
          -o container=foo \
          -o maxObjects=6000 \
          swift \
          /var/spool/foo
    
    # df -i
    Filesystem      Inodes  IUsed   IFree IUse% Mounted on
    swift             6000   3000   3000  50% /var/spool/foo
    

    Additional information :

    It would be useful to have a mount option specifying how many objects can be created, so that inodes are not always reported as 100% full; a typical health check automatically flags that as a critical issue.

    If there is already an option for steering the inode count, kindly point me to it.

    Could it be that some quota option on Swift is not set, so the default maximum number of objects (max inodes) is just the current number of objects created?
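    One way to investigate that last point is through Swift's per-container object quota (a hedged sketch, assuming the python-swiftclient CLI with sourced credentials and a cluster that enables the container_quotas middleware; the container name foo and the quota value are illustrative):

    ```shell
    # Inspect the container: an existing object quota appears as "Meta Quota-Count".
    swift stat foo

    # If the middleware is enabled, set a per-container object quota
    # (stored as the X-Container-Meta-Quota-Count header):
    swift post foo --meta "Quota-Count:6000"
    ```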

    opened by sielaq 0
  • not synchronize in any way

    not synchronize in any way

    • ubuntu 18.04
    • svfs version : 0.9.1
    • storage provider : OVH
    • product : pcs (public cloud storage)

    I mounted the unit for public object storage according to the manual, as follows:

    sudo mount -t svfs -o username=$OS_USERNAME,password=$OS_PASSWORD,tenant=$OS_TENANT_NAME,region=$OS_REGION_NAME,container=publico pcs /home/user/mount

    where, through the OVH Manager, I created a container named publico whose type is public. The mount succeeds, since I get no error.

    But my problem is that if I create another folder inside /home/user/mount and put a file.txt in it, the file is not synchronized to the container.

    Likewise, if I manually import a file into the publico container from the OVH Manager, for example a .jpg photo, I do not see it in /home/user/mount either. That is, nothing is synchronized in either direction.

    Why does it not work? Why does what I create on the mounted unit not reach the cloud, and vice versa?

    opened by Configuracionpordefecto 0
Releases (v0.9.1)
Owner: OVHcloud