JuiceFS is a distributed POSIX file system built on top of Redis and S3.

Overview

JuiceFS is an open-source POSIX file system built on top of Redis and object storage (e.g. Amazon S3), designed and optimized for cloud-native environments. By using the widely adopted Redis and S3 as persistent storage, JuiceFS serves as stateless middleware that lets many applications share data easily.

The highlighted features are:

  • Fully POSIX-compatible: JuiceFS is a fully POSIX-compatible file system. Existing applications can work with it without any change. See the pjdfstest result below.
  • Fully Hadoop-compatible: The JuiceFS Hadoop Java SDK is compatible with Hadoop 2.x and Hadoop 3.x, as well as a variety of components in the Hadoop ecosystem.
  • S3-compatible: The JuiceFS S3 Gateway provides an S3-compatible interface.
  • Outstanding Performance: Latency can be as low as a few milliseconds, and throughput can be scaled out almost without limit. See the benchmark result below.
  • Cloud Native: JuiceFS provides a Kubernetes CSI driver for anyone who wants to use JuiceFS in Kubernetes.
  • Sharing: JuiceFS is a shared file store that can be read and written by thousands of clients.

In addition, there are some other features:

  • Global File Locks: JuiceFS supports both BSD locks (flock) and POSIX record locks (fcntl).
  • Data Compression: By default JuiceFS uses LZ4 to compress all your data; you can also use Zstandard instead.
  • Data Encryption: JuiceFS supports data encryption in transit and at rest; read the guide for more information.

Architecture | Getting Started | Administration | POSIX Compatibility | Performance Benchmark | Supported Object Storage | Status | Roadmap | Reporting Issues | Contributing | Community | Usage Tracking | License | Credits | FAQ


Architecture

JuiceFS Architecture

JuiceFS relies on Redis to store file system metadata. Redis is a fast, open-source, in-memory key-value data store, well suited to storing metadata. All file data is stored in object storage through the JuiceFS client.

JuiceFS Storage Format

The storage format of a file in JuiceFS consists of three levels. The first level is the "chunk": each chunk has a fixed size of 64 MiB, which cannot be changed. The second level is the "slice": slices are variable in size, and a chunk may contain multiple slices. The third level is the "block": like the chunk, its size is fixed, 4 MiB by default, and can be modified when formatting a volume (see the following section). Finally, each block is compressed, optionally encrypted, and stored in object storage.
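
For example, the block size can be changed when formatting (a sketch that assumes a local Redis server, as in the Getting Started section below; --block-size takes a value in KiB):

$ ./juicefs format --block-size 1024 localhost test   # 1 MiB blocks instead of the default 4 MiB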

Getting Started

Precompiled binaries

You can download precompiled binaries from the releases page.

Building from source

You need to install Go 1.14+ and GCC 5.4+ first, then run the following commands:

$ git clone https://github.com/juicedata/juicefs.git
$ cd juicefs
$ make

Dependency

A Redis server (>= 2.8) is needed for metadata; please follow the Redis Quick Start.
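
If you just want to try JuiceFS, a quick way to get a local Redis is Docker (a sketch, assuming Docker is installed and port 6379 is free):

$ docker run -d --name juicefs-redis -p 6379:6379 redis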

macFUSE is also needed for macOS.

The last thing you need is object storage. There are many options for object storage; a local disk is the easiest way to get started.

Format a volume

Assuming you have a Redis server running locally, you can create a volume called test that uses it to store metadata:

$ ./juicefs format localhost test

This creates a volume with default settings. If the Redis server is not running locally, its address can be specified as a URL, for example redis://user:password@host:6379/1. The password can also be supplied through the environment variable REDIS_PASSWORD to keep it out of command line options.
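
For example, with a remote Redis server (the host and credentials below are placeholders):

$ export REDIS_PASSWORD=mypassword
$ ./juicefs format redis://host:6379/1 test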

As JuiceFS relies on object storage to store data, you can specify an object storage service using --storage, --bucket, --access-key and --secret-key. By default, it uses a local directory as the object store; for all the options, see ./juicefs format -h.
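
For example, a volume backed by Amazon S3 might be formatted like this (a sketch; the bucket URL and credentials are placeholders):

$ ./juicefs format \
    --storage s3 \
    --bucket https://mybucket.s3.us-east-1.amazonaws.com \
    --access-key YOUR_ACCESS_KEY \
    --secret-key YOUR_SECRET_KEY \
    localhost test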

For the details about how to setup different object storage, please read the guide.

Mount a volume

Once a volume is formatted, you can mount it to a directory, which is called the mount point.

$ ./juicefs mount -d localhost ~/jfs

After that you can access the volume just like a local directory.
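
For example (assuming the mount above succeeded):

$ echo hello > ~/jfs/hello.txt
$ cat ~/jfs/hello.txt
hello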

To get all options, just run ./juicefs mount -h.

If you want to mount JuiceFS automatically at boot, please read the guide.
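
In short, the guide boils down to registering juicefs as a mount helper and adding an fstab entry (a sketch; the binary path and options here are illustrative, follow the guide for the exact steps on your system):

$ sudo ln -s /usr/local/bin/juicefs /sbin/mount.juicefs
$ echo "redis://localhost:6379/1  /jfs  juicefs  _netdev  0 0" | sudo tee -a /etc/fstab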

Command Reference

There is a command reference listing all subcommands and their options.

Kubernetes

Using JuiceFS on Kubernetes is easy; give it a try.

Hadoop Java SDK

If you want to use JuiceFS in Hadoop, check the Hadoop Java SDK.

Administration

POSIX Compatibility

JuiceFS passed all of the 8813 tests in the latest pjdfstest.

All tests successful.

Test Summary Report
-------------------
/root/soft/pjdfstest/tests/chown/00.t          (Wstat: 0 Tests: 1323 Failed: 0)
  TODO passed:   693, 697, 708-709, 714-715, 729, 733
Files=235, Tests=8813, 233 wallclock secs ( 2.77 usr  0.38 sys +  2.57 cusr  3.93 csys =  9.65 CPU)
Result: PASS
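
To reproduce this yourself (a sketch based on pjdfstest's standard usage; the pjdfstest path is a placeholder), build pjdfstest and run it as root from inside a mounted JuiceFS directory:

$ cd ~/jfs
$ sudo prove -rv /path/to/pjdfstest/tests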

Besides the things covered by pjdfstest, JuiceFS provides:

  • Close-to-open consistency. Once a file is closed, subsequent opens and reads see the data written before the close. Within the same mount point, a read can see all data written before it.
  • Rename and all other metadata operations are atomic, guaranteed by Redis transactions.
  • Open files remain accessible after unlink from the same mount point.
  • Mmap is supported (tested with FSx).
  • Fallocate with punch hole support.
  • Extended attributes (xattr).
  • BSD locks (flock); a quick way to try these from the shell is shown after this list.
  • POSIX record locks (fcntl).
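
BSD locks can be tried with the flock(1) utility from util-linux (a sketch; run the two commands in separate terminals against the same mounted volume):

$ flock ~/jfs/mylock -c 'echo got the lock; sleep 30'   # terminal 1: holds the lock
$ flock -w 5 ~/jfs/mylock -c 'echo got the lock'        # terminal 2: waits up to 5 seconds for it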

Performance Benchmark

Throughput

We performed a sequential read/write benchmark on JuiceFS, EFS and S3FS with fio; here is the result:

Sequential Read Write Benchmark

It shows that JuiceFS can provide 10x more throughput than the other two; read more details.
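
As a rough way to run a sequential test yourself (a sketch; these fio parameters are illustrative, not the exact ones used in the linked benchmark):

$ fio --name=seq-write --directory=/jfs --rw=write --bs=1m --size=1g
$ fio --name=seq-read --directory=/jfs --rw=read --bs=1m --size=1g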

Metadata IOPS

We performed a simple metadata benchmark on JuiceFS, EFS and S3FS with mdtest; here is the result:

Metadata Benchmark

It shows that JuiceFS can provide significantly more metadata IOPS than the other two; read more details.
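
A rough equivalent you can run yourself (a sketch; mdtest needs an MPI runtime, and the parameters here are illustrative):

$ mkdir -p /jfs/mdtest
$ mpirun -np 2 mdtest -F -d /jfs/mdtest -n 1000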

Analyze performance

There is a virtual file called .accesslog in the root of JuiceFS that shows all operations and the time they take, for example:

$ cat /jfs/.accesslog
2021.01.15 08:26:11.003330 [uid:0,gid:0,pid:4403] write (17669,8666,4993160): OK <0.000010>
2021.01.15 08:26:11.003473 [uid:0,gid:0,pid:4403] write (17675,198,997439): OK <0.000014>
2021.01.15 08:26:11.003616 [uid:0,gid:0,pid:4403] write (17666,390,951582): OK <0.000006>

The last number on each line is the time (in seconds) the operation took. You can use it to debug and analyze performance issues. We will provide more tools to analyze it.
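
For instance, to spot slow operations you can filter on that final field (a sketch; the 10 ms threshold is arbitrary):

$ cat /jfs/.accesslog | awk -F'[<>]' '$2 + 0 > 0.01'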

Supported Object Storage

  • Amazon S3
  • Google Cloud Storage
  • Azure Blob Storage
  • Alibaba Cloud Object Storage Service (OSS)
  • Tencent Cloud Object Storage (COS)
  • QingStor Object Storage
  • Ceph RGW
  • MinIO
  • Local disk
  • Redis

For the detailed list, see this document.

Status

JuiceFS is considered beta quality, and its storage format is not stabilized yet. It is not recommended for deployment in a production environment. Please test it with your use cases and give us feedback.

Roadmap

  • Stabilize storage format
  • Other databases for metadata

Reporting Issues

We use GitHub Issues to track community-reported issues. You can also contact the community to get answers.

Contributing

Thank you for your contribution! Please refer to the CONTRIBUTING.md for more information.

Community

Welcome to join the Discussions and the Slack channel to connect with JuiceFS team members and other users.

Usage Tracking

JuiceFS collects anonymous usage data by default. It only collects core metrics (e.g. version number); no user data or other sensitive data is collected. You can review the related code here.

This data helps us understand how the community is using this project. You can easily disable reporting with the command line option --no-usage-report:

$ ./juicefs mount --no-usage-report

License

JuiceFS is open-sourced under GNU AGPL v3.0, see LICENSE.

Credits

The design of JuiceFS was inspired by Google File System, HDFS and MooseFS, thanks to their great work.

FAQ

Why doesn't JuiceFS support XXX object storage?

JuiceFS already supports many object storage services; please check the list first. If your object storage is compatible with S3, you can treat it as S3. Otherwise, try reporting an issue.

Can I use Redis cluster?

The short answer is no. JuiceFS uses transactions to guarantee the atomicity of metadata operations, which is not well supported in cluster mode. Sentinel or another HA solution for Redis is needed.

See "Redis Best Practices" for more information.

What's the difference between JuiceFS and XXX?

See "Comparison with Others" for more information.

For more FAQs, please see the full list.

Comments
  • [MariaDB] Error 1366: Incorrect string value

    What happened:

    While rsyncing from local disk to a juicefs mount, it suddenly (after several hours) stopped with:

    Failed to sync with 11 errors: last error was: open /mnt/juicefs/folder/file.xls: input/output error

    In the jfs log, I can find these:

    juicefs[187516] <ERROR>: error: Error 1366: Incorrect string value: '\xE9sa sa...' for column `jfsdata`.`jfs_edge`.`name` at row 1
    goroutine 43510381 [running]:
    runtime/debug.Stack()
            /usr/local/go/src/runtime/debug/stack.go:24 +0x65
    github.com/juicedata/juicefs/pkg/meta.errno({0x2df8860, 0xc0251d34a0})
            /go/src/github.com/juicedata/juicefs/pkg/meta/utils.go:76 +0xc5
    github.com/juicedata/juicefs/pkg/meta.(*dbMeta).doMknod(0xc0000e0c40, {0x7fca7e165300, 0xc00ed28040}, 0x399f6f, {0xc025268160, 0x1f}, 0x1, 0x1b4, 0x0, 0x0, ...)
            /go/src/github.com/juicedata/juicefs/pkg/meta/sql.go:1043 +0x29e
    github.com/juicedata/juicefs/pkg/meta.(*baseMeta).Mknod(0xc0000e0c40, {0x7fca7e165300, 0xc00ed28040}, 0x399f6f, {0xc025268160, 0x1f}, 0xc0, 0x7b66, 0x7fca, 0x0, ...)
            /go/src/github.com/juicedata/juicefs/pkg/meta/base.go:594 +0x275
    github.com/juicedata/juicefs/pkg/meta.(*baseMeta).Create(0xc0000e0c40, {0x7fca7e165300, 0xc00ed28040}, 0x26b5620, {0xc025268160, 0x2847500}, 0x8040, 0xed2, 0x8241, 0xc025267828, ...)
            /go/src/github.com/juicedata/juicefs/pkg/meta/base.go:601 +0x109
    github.com/juicedata/juicefs/pkg/vfs.(*VFS).Create(0xc000140640, {0x2e90348, 0xc00ed28040}, 0x399f6f, {0xc025268160, 0x1f}, 0x81b4, 0x22a4, 0xc0)
            /go/src/github.com/juicedata/juicefs/pkg/vfs/vfs.go:357 +0x256
    github.com/juicedata/juicefs/pkg/fuse.(*fileSystem).Create(0xc000153900, 0xc024980101, 0xc022a48a98, {0xc025268160, 0x1f}, 0xc022a48a08)
            /go/src/github.com/juicedata/juicefs/pkg/fuse/fuse.go:221 +0xcd
    github.com/hanwen/go-fuse/v2/fuse.doCreate(0xc022a48900, 0xc022a48900)
            /go/pkg/mod/github.com/juicedata/go-fuse/[email protected]/fuse/opcode.go:163 +0x68
    github.com/hanwen/go-fuse/v2/fuse.(*Server).handleRequest(0xc00179c000, 0xc022a48900)
            /go/pkg/mod/github.com/juicedata/go-fuse/[email protected]/fuse/server.go:483 +0x1f3
    github.com/hanwen/go-fuse/v2/fuse.(*Server).loop(0xc00179c000, 0x20)
            /go/pkg/mod/github.com/juicedata/go-fuse/[email protected]/fuse/server.go:456 +0x110
    created by github.com/hanwen/go-fuse/v2/fuse.(*Server).readRequest
            /go/pkg/mod/github.com/juicedata/go-fuse/[email protected]/fuse/server.go:323 +0x534
     [utils.go:76]
    

    Nothing was logged on the MariaDB side.

    Environment:

    • juicefs version 1.0.0-beta2+2022-03-04T03:00:41Z.9e26080
    • Ubuntu 20.04
    • MariaDB 10.4
    kind/bug 
    opened by solracsf 21
  • failed to create fs on oos

    What happened: I can't create a fs on OOS.

    What you expected to happen: I can create a fs on OOS.

    How to reproduce it (as minimally and precisely as possible):

    juicefs format --storage oos --bucket https://cyn.oos-hz.ctyunapi.cn \
        --access-key xxxxxx \
        --secret-key xxxxxxx \
        redis://:[email protected]:6379/1 \
        myjfs

    Anything else we need to know?

    Environment:

    • JuiceFS version (use juicefs --version) or Hadoop Java SDK version: juicefs version 1.0.0-beta3+2022-05-05.0fb9155
    • Cloud provider or hardware configuration running JuiceFS:
    • OS (e.g cat /etc/os-release): Centos 7
    • Kernel (e.g. uname -a): Linux ecs-df87 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
    • Object storage (cloud provider and region, or self maintained): oos
    • Metadata engine info (version, cloud provider managed or self maintained): redis:6
    • Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage):
    • Others:
    opened by enlindashuaibi 17
  • Performance degenerates a lot when reading from multiple threads compared with a single thread (when running Clickhouse)

    What happened:

    Hi folks,

    We are trying to run the ClickHouse benchmark on juicefs (with OSS as the underlying object storage). In a setup where juicefs has already cached the whole file to the local disk, we notice a huge performance gap (compared with running the benchmark on a local SSD) when executing ClickHouse with 4 threads, but such degradation doesn't happen if we limit ClickHouse to 1 thread.

    More specifically, we are running the clickhouse benchmark with scale factor 1000, executing the 29th query (the involved table Referer is around 24Gi; the query is a full table scan), with 100Gi of local SSD given to clickhouse as the cache directory.

    After several runs to make sure the involved files are fully cached locally by juicefs, we notice the following performance numbers:

    | threads | ssd runtime (seconds) | juicefs runtime (seconds) |
    |:-------:|:---------------------:|:-------------------------:|
    |    4    |          24           |            56             |
    |    1    |          88           |           100             |

    You can see that juicefs suffers much more performance degradation when the workload executes in a multi-threaded fashion. Is that behaviour expected for juicefs?

    Thanks!

    What you expected to happen:

    The performance gap shouldn't be so large with 4 threads.

    How to reproduce it (as minimally and precisely as possible):

    Run the clickhouse benchmark inside a juicefs-mounted directory.

    Anything else we need to know?

    Environment:

    • JuiceFS version (use juicefs --version) or Hadoop Java SDK version: juicefs version 1.0.0-beta2+2022-03-04T03:00:41Z.9e26080
    • Cloud provider or hardware configuration running JuiceFS: aliyun ecs.i3g.2xlarge, (local ssd instance with 4 physical cores and 32Gi memory)
    • OS (e.g cat /etc/os-release): Ubuntu 20.04.3 LTS
    • Kernel (e.g. uname -a): Linux mk1 5.4.0-100-generic #113-Ubuntu SMP Thu Feb 3 18:43:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    • Object storage (cloud provider and region, or self maintained): OSS
    • Metadata engine info (version, cloud provider managed or self maintained): redis
    • Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage): localhost, redis and juicefs are deployed on the same instance
    • Others: clickhouse latest version
    opened by sighingnow 15
  • Out of Memory 0.13.1

    What happened: juicefs eats every last bit of memory + swap, then gets killed by the oomkiller.

    What you expected to happen: shouldn't it use less memory?

    How to reproduce it (as minimally and precisely as possible): store 8 x 104GB files in juicefs, do random reads on them, and watch how memory consistently climbs until exhaustion.

    Anything else we need to know? (image attached)

    Environment:

    • JuiceFS version (use ./juicefs --version) or Hadoop Java SDK version: juicefs version 0.13.1 (2021-05-27T08:14:30Z 1737d4e)
    • Cloud provider or hardware configuration running JuiceFS: 4 core, 16gb ram, backblaze b2
    • OS (e.g: cat /etc/os-release): ubuntu 20.04.2 LTS (Focal Fossa)
    • Kernel (e.g. uname -a): 5.4.0-73-generic #82-Ubuntu SMP Wed Apr 14 17:39:42 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    • Object storage (cloud provider and region): Backblaze B2 Europe
    • Redis info (version, cloud provider managed or self maintained): self maintained latest, from docker
    • Network connectivity (JuiceFS to Redis, JuiceFS to object storage): juicefs and redis are locally, 5gbit to backblaze b2
    • Others:
    kind/bug 
    opened by gallexme 15
  • Performance 3x ~ 8x slower than s5cmd (for large files)

    While comparing basic read/write operations, it appears that s5cmd is 3x ~ 8x faster than juicefs.

    What happened:

    #### WRITE IO ####

    $ time cp 1gb_file.txt /mnt/juicefs0/
    real	0m50.859s
    user	0m0.016s
    sys	0m1.365s
    
    $ time s5cmd cp 1gb_file.txt s3://bucket/path/
    real	0m20.614s
    user	0m9.411s
    sys	0m3.232s
    

    #### READ IO ####

    $ time cp /mnt/juicefs0/1gb_file.txt .
    real	0m45.539s
    user	0m0.014s
    sys	0m1.578s
    
    $ time s5cmd cp s3://bucket/path/1gb_file.txt .
    real	0m6.074s
    user	0m1.186s
    sys	0m2.504s
    

    Environment:

    • JuiceFS version or Hadoop Java SDK version: juicefs version 0.12.1 (2021-04-15T08:18:25Z 7b4df23)
    • Cloud provider or hardware configuration running JuiceFS: Linode 1 GB VM
    • OS: Fedora 33 (Server Edition)
    • Kernel: Linux 5.11.12-200.fc33.x86_64 #1 SMP Thu Apr 8 02:34:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    • Object storage: Linode
    • Redis info: Redis 6.2.1
    • Network connectivity (JuiceFS to Redis, JuiceFS to object storage): redis (local), S3 Object Storage (Linode)
    area/performance 
    opened by cayolblake 15
  • OVH S3 compatible storage AuthorizationHeaderMalformed the region 'us-east-1' is wrong; expecting 'gra'

    Hi, I tried to configure JuiceFS with an S3-compatible vendor, OVH:

    juicefs format --storage s3 \
        --bucket https://s3.gra.io.cloud.ovh.net/mybucket \
        --access-key $ACCESS_KEY \
        --secret-key $SECRET_KEY \
        "redis://127.0.0.1:9190/1" \
        datavault
    

    with the response:

    2022/12/27 01:06:21.060580 juicefs[97465] <INFO>: Meta address: redis://127.0.0.1:9190/1 [interface.go:402]
    2022/12/27 01:06:21.080483 juicefs[97465] <INFO>: Ping redis: 1.546708ms [redis.go:2878]
    2022/12/27 01:06:21.083500 juicefs[97465] <INFO>: Data use s3://mybucket/datavault/ [format.go:435]
    2022/12/27 01:06:33.842261 juicefs[97465] <FATAL>: Storage s3://mybucket/datavault/ is not configured correctly: Failed to create bucket s3://mybucket/datavault/: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'gra'
            status code: 400, request id: tx25660a4d66ac46a994f94-0063aa3702, host id: tx25660a4d66ac46a994f94-0063aa3702, previous error: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'gra'
            status code: 400, request id: tx9552f7f3bed04a04bed96-0063aa3702, host id: tx9552f7f3bed04a04bed96-0063aa3702 [format.go:438]
    

    juicefs version 1.0.2+2022-10-13.514ef03 on macOS 11.4

    I can confirm that rclone works perfectly.

    opened by DavidGOrtega 13
  • Errors on WebDAV operations, but files are created

    JuiceFS (1.0.0-rc1) outputs a 401, but the bucket is created (a folder 'jfs' is created at the root path) on the server, so credentials are OK. Happy to provide more debug info if needed (note I have no admin access on the WebDAV server).

    # juicefs format --storage webdav --bucket https://web.dav.server/ --access-key 307399 --secret-key tTxX12NPMyy --trash-days 0 redis://127.0.0.1:6379/1 jfs
    
    2022/06/17 11:45:19.373300 juicefs[799909] <INFO>: Meta address: redis://127.0.0.1:6379/1 [interface.go:397]
    2022/06/17 11:45:19.375116 juicefs[799909] <INFO>: Ping redis: 82.354µs [redis.go:2869]
    2022/06/17 11:45:19.375437 juicefs[799909] <INFO>: Data use webdav://web.dav.server/jfs/ [format.go:420]
    2022/06/17 11:45:19.529334 juicefs[799909] <WARNING>: List storage webdav://web.dav.server/jfs/ failed: 401 Unauthorized: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>401 Unauthorized</title>
    </head><body>
    <h1>Unauthorized</h1>
    <p>This server could not verify that you
    are authorized to access the document
    requested.  Either you supplied the wrong
    credentials (e.g., bad password), or your
    browser doesn't understand how to supply
    the credentials required.</p>
    </body></html> [format.go:438]
    2022/06/17 11:45:19.535569 juicefs[799909] <INFO>: Volume is formatted as {Name:jfs UUID:05363857-2fce-42dc-94e2-4d41c33172d0 Storage:webdav Bucket:https://web.dav.server/ AccessKey:307399 SecretKey:removed BlockSize:4096 Compression:none Shards:0 HashPrefix:false Capacity:0 Inodes:0 EncryptKey: KeyEncrypted:true TrashDays:0 MetaVersion:1 MinClientVersion: MaxClientVersion:} [format.go:458]
    

    or trying to destroy it:

    # juicefs destroy redis://127.0.0.1:6379/1 05363857-2fce-42dc-94e2-4d41c33172d0
    
    2022/06/17 11:50:24.072353 juicefs[800639] <INFO>: Meta address: redis://127.0.0.1:6379/1 [interface.go:397]
    2022/06/17 11:50:24.073182 juicefs[800639] <INFO>: Ping redis: 67.778µs [redis.go:2869]
     volume name: jfs
     volume UUID: 05363857-2fce-42dc-94e2-4d41c33172d0
    data storage: webdav://web.dav.server/jfs/
      used bytes: 0
     used inodes: 0
    WARNING: The target volume will be destoried permanently, including:
    WARNING: 1. ALL objects in the data storage: webdav://web.dav.server/jfs/
    WARNING: 2. ALL entries in the metadata engine: redis://127.0.0.1:6379/1
    Proceed anyway? [y/N]: y
    2022/06/17 11:50:25.693697 juicefs[800639] <FATAL>: list all objects: 401 Unauthorized: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>401 Unauthorized</title>
    </head><body>
    <h1>Unauthorized</h1>
    <p>This server could not verify that you
    are authorized to access the document
    requested.  Either you supplied the wrong
    credentials (e.g., bad password), or your
    browser doesn't understand how to supply
    the credentials required.</p>
    </body></html> [destroy.go:158]
    

    On file operations, files are created (the chunks folder is populated), but this is logged:

    2022/06/17 12:02:54.141831 juicefs[802424] <INFO>: Mounting volume jfs at /mnt/jfs ... [mount_unix.go:181]
    2022/06/17 12:04:31.053268 juicefs[802424] <WARNING>: Upload chunks/0/0/7_0_140: 403 Forbidden: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>403 Forbidden</title>
    </head><body>
    <h1>Forbidden</h1>
    <p>You don't have permission to access this resource.</p>
    </body></html> (try 1) [cached_store.go:462]
    

    Accessing WebDAV directly (outside juicefs), we can see correct file operations (image attached).

    kind/bug 
    opened by solracsf 13
  • freezing when tikv is down

    With 7 pd-server nodes, after killing 2 pd-server nodes, juicefs freezes.

    [2021/09/02 15:57:58.152 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.35:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:57:59.021 +08:00] [ERROR] [client.go:599] ["[pd] getTS error"] [dc-location=global] [error="[PD:client:ErrClientGetTSO]rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster"] [stack="github.com/tikv/pd/client.(*client).handleDispatcher\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:599"]
    [2021/09/02 15:57:59.022 +08:00] [ERROR] [pd.go:234] ["updateTS error"] [txnScope=global] [error="rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster"] [errorVerbose="rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster\ngithub.com/tikv/pd/client.(*client).processTSORequests\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:717\ngithub.com/tikv/pd/client.(*client).handleDispatcher\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:587\nruntime.goexit\n\t/snap/go/7954/src/runtime/asm_amd64.s:1371\ngithub.com/tikv/pd/client.(*tsoRequest).Wait\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:913\ngithub.com/tikv/pd/client.(*client).GetTS\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:933\ngithub.com/tikv/client-go/v2/util.InterceptedPDClient.GetTS\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/util/pd_interceptor.go:79\ngithub.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).getTimestamp\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:141\ngithub.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).updateTS.func1\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:232\nsync.(*Map).Range\n\t/snap/go/7954/src/sync/map.go:345\ngithub.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).updateTS\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:230\nruntime.goexit\n\t/snap/go/7954/src/runtime/asm_amd64.s:1371"] [stack="github.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).updateTS.func1\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:234\nsync.(*Map).Range\n\t/snap/go/7954/src/sync/map.go:345\ngithub.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).updateTS\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:230"]
    [2021/09/02 15:57:59.317 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:00.608 +08:00] [WARN] [prewrite.go:198] ["slow prewrite request"] [startTS=427443780657872897] [region="{ region id: 4669, ver: 35, confVer: 1007 }"] [attempts=280]
    [2021/09/02 15:58:04.317 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:09.318 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:14.319 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:15.831 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.35:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:20.832 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.35:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:25.834 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:30.834 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"]

    What happened:

    What you expected to happen:

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?

    Environment:

    • JuiceFS version (use ./juicefs --version) or Hadoop Java SDK version:
    • Cloud provider or hardware configuration running JuiceFS:
    • OS (e.g: cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Object storage (cloud provider and region):
    • Redis info (version, cloud provider managed or self maintained):
    • Network connectivity (JuiceFS to Redis, JuiceFS to object storage):
    • Others:
    opened by whans 12
  • Deleted tag cause misbehaviour of `go get`

    What happened:

    go get: added github.com/juicedata/juicefs v0.14.0
    

    What you expected to happen:

    go get: added github.com/juicedata/juicefs v0.13.1
    

    How to reproduce it (as minimally and precisely as possible):

    go get github.com/juicedata/juicefs
    

    Anything else we need to know? It looks like you probably deleted that tag, which is not supported by proxy.golang.org (see the FAQ).

    Also, the current latest tag, v0.14-dev, is not valid semver, so it is being ignored and mangled by the toolchain (try v0.14.0-alpha):

    [user@localhost ~]$ go get github.com/juicedata/[email protected]
    go: downloading github.com/juicedata/juicefs v0.13.2-0.20210527090717-42ac85ce406c
    go get: downgraded github.com/juicedata/juicefs v0.14.0 => v0.13.2-0.20210527090717-42ac85ce406c
    
    opened by colin-sitehost 11
  • Mounting a fs with postgres + search_path throws database is not formatted, please run `juicefs format`

    What happened: Mounting an s3-compatible fs with postgres + schema as meta storage throws "database is not formatted". Executing without setting search_path, and thus using public, works fine.

    What you expected to happen: The fs to be mounted error free

    How to reproduce it (as minimally and precisely as possible):

    juicefs format --storage s3 --bucket https://sos-de-muc-1.exo.io/testjuice-pgsql   --access-key xxx  --secret-key xxx postgres://user:pass@dburl/db?search_path=schema pgjuice
    Volume is formatted as {
      "Name": "pgjuice",
      "UUID": "33e5fe46-0fff-4a06-83c0-64251ef64df4",
      "Storage": "s3",
      "Bucket": "https://sos-de-muc-1.exo.io/testjuice-pgsql",
      "AccessKey": "vvvv",
      "SecretKey": "removed",
      "BlockSize": 4096,
      "Compression": "none",
      "KeyEncrypted": true,
      "TrashDays": 1,
      "MetaVersion": 1
    } [format.go:472]
    
    juicefs mount --background postgres://user:pass@dburl/db?search_path=schema /mnt/pgjuice
    2022/10/22 10:07:12.533714 juicefs[32846] <INFO>: Meta address: postgres://xxxx?search_path=schema [interface.go:402]
    2022/10/22 10:07:12.545518 juicefs[32846] <WARNING>: The latency to database is too high: 11.463676ms [sql.go:203]
    2022/10/22 10:07:12.548892 juicefs[32846] <FATAL>: load setting: database is not formatted, please run `juicefs format ...` first [main.go:31]
    

    Anything else we need to know?

    Environment:

    • juicefs version 1.0.2+2022-10-13.514ef03
    • Debian 11
    • Kernel (e.g. uname -a): 5.10.0-10-amd64 # 1 SMP Debian 5.10.84-1 (2021-12-08) x86_64 GNU/Linux
    • Object storage (cloud provider and region, or self maintained): Managed Exoscale SOS Germany Munich
    • Metadata engine info (version, cloud provider managed or self maintained): Managed Postgres 14 on Exoscale
    opened by danielband 10
  • Metadata Dump doesn't complete on MySQL

    What happened:

    Dump is incomplete.

    What you expected to happen:

    Dump completes successfully.

    How to reproduce it (as minimally and precisely as possible):

    Observe the dumped entries.

    juicefs dump "mysql://mount:Passw0d@(10.1.0.9:3306)/mount" /tmp/juicefs.dump
    
    2022/05/26 15:13:07.548210 juicefs[1785019] <WARNING>: no found chunk target for inode 854691 indx 0 [sql.go:2747]
    2022/05/26 15:13:07.548245 juicefs[1785019] <WARNING>: no found chunk target for inode 854709 indx 0 [sql.go:2747]
    2022/05/26 15:13:07.548264 juicefs[1785019] <WARNING>: no found chunk target for inode 854726 indx 0 [sql.go:2747]
    2022/05/26 15:13:07.548288 juicefs[1785019] <WARNING>: no found chunk target for inode 4756339 indx 0 [sql.go:2747]
     Snapshot keys count: 4727518 / 4727518 [==============================================================]  done
    Dumped entries count: 1370 / 1370 [==============================================================]  done
    

    Anything else we need to know?

    Environment:

    # juicefs --version
    juicefs version 1.0.0-dev+2022-05-26.54ddc5c4
    
    {
      "Setting": {
        "Name": "mount",
        "UUID": "5cd22d3c-12d6-4be4-9a52-19e753c416e9",
        "Storage": "s3",
        "Bucket": "https://data-jfs%d.s3.de",
        "AccessKey": "d61bc82480ab49fb8d",
        "SecretKey": "removed",
        "BlockSize": 4096,
        "Compression": "zstd",
        "Shards": 512,
        "HashPrefix": false,
        "Capacity": 0,
        "Inodes": 0,
        "EncryptKey": "/YShECK6Tirb0uHljlK8PIJ12C4Fj2idW5hbzARwYaGDGoSU>
        "KeyEncrypted": true,
        "TrashDays": 90,
        "MetaVersion": 1,
        "MinClientVersion": "",
        "MaxClientVersion": ""
      },
      "Counters": {
        "usedSpace": 4568750424064,
        "usedInodes": 4723758,
        "nextInodes": 4923302,
        "nextChunk": 4062001,
        "nextSession": 137,
        "nextTrash": 667
      },
    }
    
    opened by solracsf 10
  • Easily Reduce ListObjects Sorting Incompatibilities

    What would you like to be added: Lift "lexicographically sorted" requirement for metadata backup cleanup, gc, fsck, and destroy. It appears to me as though backup-cleanup already sorts the result itself, and that gc, fsck, and destroy do not actually require sorted results to function correctly. Only sync seems to actually care and depend on sorted results. Further, if that interpretation is correct, it also seems that lifting the restriction on these commands (all except sync) can easily be done by adding a simple "fail if unsorted" boolean as input to sync.ListAll(), checking that flag before triggering the "keys out of order" error in sync.ListAll(), and setting that boolean to false for backup/gc/fsck/destroy (true for sync).

    Why is this needed: JuiceFS currently does not support Storj DCS and Cloudflare R2 object stores for several useful features (metadata backup, gc, fsck, destroy, sync), unless using an intermediate gateway that sorts results. Most of these limitations appear superficial and very easily supported per description above. It appears that only sync() would require any non-trivial effort to make compatible, which could be deferred to the future.

    kind/feature 
    opened by seth-hunter 1
  • Writeback cache synchronization issue

    What happened: JuiceFS appears to have a writeback cache synchronization issue. The example below may be exacerbated by slow IO and by intolerance for misbehavior of the storage provider (STORJ), but it seems to me that the root cause is within JuiceFS.

    See chunk 11008_0_4194304 in the attached log; several parallel upload attempts (resulting in server errors), continued attempts to upload after the file has been deleted (at time 21:32:04), and other general signs of missing synchronization. It also appears (not observable in the log below) as though --upload-delay is not being fully obeyed (uploads begin within about one second of file creation). This example is very repeatable. The problem does not occur without --writeback in the mount command.

    What you expected to happen: One concurrent upload attempt per chunk, and no attempt to continue uploading significantly after chunk has been deleted locally.

    How to reproduce it (as minimally and precisely as possible): Mount command: juicefs mount --no-usage-report --cache-size 512 --writeback -o allow_other --upload-delay 5 --backup-meta 0 --max-uploads 5 --verbose sqlite3://storj-test.db /jfs

    Test scenario: create 10 4MB files in rapid succession: for i in {1..10}; do dd if=/dev/urandom bs=1M count=4 of=./test${i}; done

    Environment:

    • JuiceFS version: juicefs version 1.0.3+2022-12-27.ec6c8abd
    • OS: Linux (AlmaLinux 8.7)
    • Kernel: 4.18.0-425.3.1.el8.x86_64
    • Object storage: STORJ
    • Metadata engine info: SQLite

    Log: See juicefs-storj-writeback-issue.log. Reflects that I have placed additional DEBUG statements just before and after s3.go:Put()'s call to s3.PutObject() to help clarify the misbehavior. Redacted my IP with <MY_IP>, juicefs cache path with <JFS_CACHE>

    kind/bug 
    opened by seth-hunter 0
  • juicefs sync with https_proxy

    What happened: With https_proxy set, running juicefs sync from Aliyun OSS to local. When I give a wrong schema it works; when I give a right schema it doesn't work.

    What you expected to happen: juicefs sync works with http_proxy/https_proxy.

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?

    Environment:

    • JuiceFS version (use juicefs --version) or Hadoop Java SDK version:
    • Cloud provider or hardware configuration running JuiceFS:
    • OS (e.g cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Object storage (cloud provider and region, or self maintained):
    • Metadata engine info (version, cloud provider managed or self maintained):
    • Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage):
    • Others:
    kind/feature 
    opened by fxmumu 1
  • spawn background processes with context timeout

    Some background processes may take more than one period. For example, if a subentry of trash is corrupted and cannot be parsed as yyyy-mm-dd-hh, the doCleanupTrash goroutine will run into an infinite loop (fixed in https://github.com/juicedata/juicefs/pull/3032, but it still exists in release-1.0).

    kind/enhancement 
    opened by Hexilee 0
  • storage webdav double encoding uri in func newWebDAV

    The endpoint gets encoded elsewhere, and gowebdav.NewClient double-encodes the endpoint URL. I can't find where the endpoint gets encoded first, sorry, so I am only posting a bug report.

    kind/bug 
    opened by DEvmIb 3
Releases (v1.0.3)
  • v1.0.3(Dec 27, 2022)

    This is the third patch release for JuiceFS v1.0. It has 35 commits from 9 contributors, thanks to @zhijian-pro @SandyXSD @davies @tangyoupeng @Hexilee @baifachuan @neocxf @cmmp6 @zhoucheng361 !

    New

    • Support TOS as object storage (#2909, #2912, #2914)

    Changed

    • cmd: increase default value of max-deletes from 2 to 10 (#3080)
    • cmd/mount: support customizing permission of cache files (#2976)
    • meta: use an independent lock for session so it won't affect workloads (#3000)
    • meta: limit cleanupDelayedSlices by time, which may speed up the cleanup (#3079)
    • meta: force delete file data when cleaning up trash (#3119)
    • meta/sql: set global lock for sqlite3 transactions to reduce conflict (#3046)
    • chunk/cache: cleanup cached objects during scanning (#3113)
    • object: increase buffer size from 4KB to 32KB, which can reduce CPU utilization for syscalls (#3072)
    • object/encrypt: support PEM in PKCS8 (#3065)
    • object/bos: support user choosing http or https (#2978)
    • object/ks3: support both public and private regions (#3104)
    • fuse: reduce number of max readers to save CPU utilization for hosts with many CPU cores (#3069)
    • hadoop: improve the read performance when buffer size < 128 KiB (#3004)
    • hadoop: add timeout for pushgateway (#3044)
    • deps: upgrade tikv client-go to v2.0.2 (#2947)

    Bugfix

    • cmd/objbench: fix the issue that latency is wrongly calculated (#3100)
    • cmd/gateway: fix the issue that HEAD on a non-existent directory returns 200 (#2955)
    • meta: fix the issue that client may crash when the value of slices is corrupt (#2876)
    • meta: fix the issue that password may be exposed when meta engine is not available (#3003)
    • meta: fix the issue that cleanup trash may run infinitely when there is a bad entry (#3032)
    • meta/redis: fix the issue that slice may leak when deleting chunk (#2879)
    • meta/sql: do not use the number of affected rows returned by update (#2986)
    • object/sftp: fix the issue that special characters in username and password are not properly handled (#2981)
    • object/s3: fix the issue that endpoint is wrongly parsed for oracle cloud (#3075)
    • object/obs: fix the issue that etag is not correct when obs enables encryption (#3098)
    • hadoop: fix the issue that the running process may crash if the local juicefs-hadoop.jar is deleted (#2906)
    • hadoop: fix the issue that thread may leak because emptier filesystem is not properly closed (#3090)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(499 bytes)
    juicefs-1.0.3-darwin-amd64.tar.gz(25.31 MB)
    juicefs-1.0.3-darwin-arm64.tar.gz(23.79 MB)
    juicefs-1.0.3-linux-amd64.tar.gz(24.08 MB)
    juicefs-1.0.3-linux-arm64.tar.gz(22.15 MB)
    juicefs-1.0.3-windows-amd64.tar.gz(24.07 MB)
    juicefs-hadoop-1.0.3.jar(122.76 MB)
  • v1.0.2(Oct 14, 2022)

    This is the first patch release for JuiceFS v1.0 (v1.0.1 is broken). It has 32 commits from 10 contributors, thanks to @davies @zhijian-pro @tangyoupeng @SandyXSD @timfeirg @201341 @pigletfly @dugusword @Arvintian @zwwhdls !

    Changed

    • cmd/sync: sync uid/gid even if the user/group does not exist (#2502)
    • meta/badger: improve the performance to reset the database (#2811)
    • vfs: flush data and close fd even if the close is interrupted (#2745)
    • object/minio: support setting MinIO region via an environment variable (#2673)
    • object/ks3: support private region for KS3 (#2812)
    • gateway: support specifying umask for directories (#2445)
    • hadoop: support nest juicefs-hadoop.jar in spring-boot (#2756)

    Bugfix

    • cmd/format: fix the issue that schema in bucket address is not properly handled (#2749)
    • cmd/objbench: fix the issue that bucket is not automatically created when objbench is run with skip-functional-tests (#2773)
    • cmd/sync: fix the issue that '\' is wrongly replaced for non-windows system (#2778)
    • meta/pg: fix the issue that PostgreSQL does not recognize Unix domain socket properly (#2607)
    • chunk/cache: fix the issue that client may crash if there are invalid files in the cache dir (#2686)
    • vfs: fix the issue on Linux that entries are still visible after rmr finishes (#2776, #2838)
    • vfs/backup: fix the issue that metadata backup may fail because the client cannot access /tmp (#2707)
    • utils: fix the issue on Windows that logger prints color control codes (but no color) (#2567)
    • compress: fix the issue that client may crash if input is empty for LZ4 decompressing (#2499)
    • gateway: fix the issue that file descriptors may leak in gateway (#2661)
    • hadoop: fix the issue that trash emptier may fail because of no permission (#2512)
    • hadoop: fix the issue that relative paths are not properly qualified (#2696, #2709)
    • hadoop: fix the issue that juicefs.umask is not properly applied when creating parent dir (#2833)
    • arch: fix the issue on 32bits system that client may crash because of unaligned atomic operation (#2703)
    • arch: fix the issue on 32bits system that readAheadTotal may overflow (#2726)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(499 bytes)
    juicefs-1.0.2-darwin-amd64.tar.gz(24.23 MB)
    juicefs-1.0.2-darwin-arm64.tar.gz(22.70 MB)
    juicefs-1.0.2-linux-amd64.tar.gz(22.76 MB)
    juicefs-1.0.2-linux-arm64.tar.gz(20.98 MB)
    juicefs-1.0.2-windows-amd64.tar.gz(22.80 MB)
    juicefs-hadoop-1.0.2.jar(118.23 MB)
  • v1.0.0(Aug 9, 2022)

    This is the first stable release of JuiceFS, and is an LTS version that will be maintained for 24 months.

    Starting from v1.0.0-rc3 it has 73 commits from 13 contributors, thanks to @SandyXSD @zhijian-pro @xiaogaozi @zhoucheng361 @rayw000 @tangyoupeng @AIXjing @sanwan @davies @yuhr123 @timfeirg @201341 @solracsf !

    New

    • object/redis: support redis sentinel & cluster mode (#2368, #2383)

    Changed

    • cmd/gc: no warning log if no --delete specified (#2476)
    • cmd/load: reset root inode to 1 if loading from a subdir (#2389)
    • meta: check UUID and metadata version after setting reloaded (#2416, #2420)
    • meta: reset variables at the beginning of transactions (#2402, #2409)
    • meta refactor: distinguish Slice from Chunk (#2397)
    • fuse: support CAP_EXPORT_SUPPORT flag (#2382)
    • util: progressbar total equals current max (#2377)

    Bugfix

    • cmd/profile: ignore error of Scanln (#2400)
    • meta: fix potential overflowed size in Fallocate (#2403)
    • chunk: cleanup cache if it's added after removed (#2427)
    • hadoop: do not use local uid/gid if global user/group is specified (#2433)
    • hadoop: update guid and clean old ones (#2407)
    • hadoop: fix make in mac m1 (#2408)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(499 bytes)
    juicefs-1.0.0-darwin-amd64.tar.gz(24.22 MB)
    juicefs-1.0.0-darwin-arm64.tar.gz(22.70 MB)
    juicefs-1.0.0-linux-amd64.tar.gz(22.76 MB)
    juicefs-1.0.0-linux-arm64.tar.gz(20.97 MB)
    juicefs-1.0.0-windows-amd64.tar.gz(22.80 MB)
    juicefs-hadoop-1.0.0.jar(118.21 MB)
  • v1.0.0-rc3(Jul 14, 2022)

    JuiceFS v1.0.0-rc3 is the third release candidate for v1.0. It has 35 commits from 10 contributors, thanks to @zhijian-pro @SandyXSD @davies @tangyoupeng @sanwan @xiaogaozi @chenhaifengkeda @zhoucheng361 @201341 @Suave !

    New

    • Supports using unix socket for Redis (#2319)

    Changed

    • cmd/info: print objects for files, and add --raw option for slices (#2316)
    • fuse/context: ignore interrupt within one second (#2324)

    Bugfix

    • cmd/info: support get trash info (#2363)
    • cmd/bench: fix bench display (#2322)
    • cmd/objbench: fix skip functional test (#2341)
    • meta/redis: fix unlock on not-existed lock (#2325)
    • object: fix skip tls verify (#2317)
    • object/ks3: use virtual hosted-style for ks3 (#2349)
    • object/eos: using the Path style addressing in eos (#2344)
    • object/ceph: fix the arguments of register function (#2306)
    • vfs&fuse: Fix stale attribute cache in kernel (#2336)
    • hadoop: checksum fix for files with size close to blockSize (#2333)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(519 bytes)
    juicefs-1.0.0-rc3-darwin-amd64.tar.gz(24.06 MB)
    juicefs-1.0.0-rc3-darwin-arm64.tar.gz(22.53 MB)
    juicefs-1.0.0-rc3-linux-amd64.tar.gz(22.62 MB)
    juicefs-1.0.0-rc3-linux-arm64.tar.gz(20.83 MB)
    juicefs-1.0.0-rc3-windows-amd64.tar.gz(22.65 MB)
    juicefs-hadoop-1.0.0-rc3.jar(117.38 MB)
  • v1.0.0-rc2(Jun 24, 2022)

    JuiceFS v1.0.0-rc2 is the second release candidate for v1.0. It has 40 commits from 10 contributors, thanks to @davies @zhijian-pro @sanwan @zhoucheng361 @tangyoupeng @SandyXSD @201341 @chnliyong @solracsf @xiaogaozi !

    New

    • Supports using session token for object storage (#2261)

    Changed

    • cmd/info: make output more human friendly (#2303)
    • meta: optimize output of format structure (#2250)
    • meta/sql: support database without read-only transaction (#2259)
    • object/webdav: replace webdav client (#2288)
    • SDK: only release sdk jar (#2289)

    Bugfix

    • cmd: --backup-meta could be 0 to disable backup (#2275)
    • cmd: fix concurrent rmr/warmup in mac (#2265)
    • cmd/fsck: fix object name for Head method (#2281)
    • cmd/info: Fix repeat path info when inode is 1 (#2294)
    • meta: fix GetPaths for mount points with subdir specified (#2298)
    • meta: don't treat trash as dangling inode (#2296)
    • meta: fix index underflow in 32bit arch (#2239)
    • meta/tkv: use absolute path for badgerDB as metadata engine (#2256)
    • object/azure: fix azure list api and incorrect bucket URL (#2263)
    • metrics: use separate metrics per volume (#2253)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(519 bytes)
    juicefs-1.0.0-rc2-darwin-amd64.tar.gz(24.06 MB)
    juicefs-1.0.0-rc2-darwin-arm64.tar.gz(22.53 MB)
    juicefs-1.0.0-rc2-linux-amd64.tar.gz(22.61 MB)
    juicefs-1.0.0-rc2-linux-arm64.tar.gz(20.82 MB)
    juicefs-1.0.0-rc2-windows-amd64.tar.gz(22.65 MB)
    juicefs-hadoop-1.0.0-rc2.jar(117.39 MB)
  • v1.0.0-rc1(Jun 15, 2022)

    JuiceFS v1.0.0-rc1 is the first release candidate for v1.0. It has 184 commits from 17 contributors, thanks to @davies @zhoucheng361 @SandyXSD @zhijian-pro @sanwan @xiaogaozi @tangyoupeng @solracsf @showjason @rayw000 @AIXjing @helix-loop @Suave @zhouaoe @chnliyong @yuhr123 @liufuyang !

    Highlights

    • Dumping metadata from Redis has been improved, massively reducing the memory required to below 1/20 of what it was before. It also relieves the memory spike of a client when doing metadata backup. Metadata from SQL and TiKV is now dumped within a single transaction to ensure consistency.
    • Loading metadata to all engines has been improved as well. Instead of loading the whole dumped file in one step, JuiceFS will now read it in a stream, and simultaneously import metadata to the engine. This saves a lot of memory when the dumped file is huge.
    • Improved stability for SQL engine under heavy workload.
    • Added a new command juicefs objbench that can be used to run basic function tests and benchmarks on object storage, making sure it works as expected.

    New

    • Supports using SQL databases, etcd as data storage (#2003, #2009)
    • Supports finding all paths of an inode with juicefs info command (#2058, #2161, #2193)
    • Supports using Pyroscope to record JuiceFS profiling (#1952)
    • Added progress bar for juicefs rmr and juicefs warmup commands (#2197)
    • Added a new command juicefs objbench to run basic benchmarks on object storage (#2055, #2091)
    • Added a new command juicefs version to print version, as an alternative to --version (#2229)

    Changed

    • cmd: check the range of parameters (#2195)
    • cmd: eliminate panic which is triggered by missing argument (#2183)
    • cmd/mount: warn about the behavior of mounting the same directory multiple times (#2141)
    • cmd/warmup: support warmup from inside of a container (#2056)
    • meta: add delayed slice only when chunkid > 0 (#2231)
    • meta: speed up and reduce memory for loading metadata (#2142, #2148)
    • meta: add a pessimistic lock to reduce conflicted transactions in the database (#2111)
    • meta: limit the number of scanned files in cleanup (#2157)
    • meta: limit number of files when cleanup trash (#2061)
    • meta: limit the number of coroutines to delete file data (#2042)
    • meta: log last error if a transaction has been ever restarted (#2172)
    • meta: log session info when cleaning up a stale one (#2045)
    • meta: skip updating mtime/ctime of the parent if it's updated recently (#1960)
    • meta/redis: check config 'maxmemory-policy' (#2059)
    • meta/redis: Speedup dump for Redis and reduce memory usage (#2156)
    • meta/tkv: Speedup dump for kv storage (#2140)
    • meta/tkv: dump metadata using snapshot (#1961)
    • meta/tkv: use scanRange to get delayed slices (#2057)
    • meta/sql: dump metadata in a single transaction (#2131)
    • chunk/store: keep cache after uploading staging blocks (#2168)
    • object: reload the configuration for data storage (#1995)
    • object/sftp: load default private keys for sftp (#2014)
    • object/oss: add user agent for oss #1992 (#1993)
    • vfs: hide .control from readdir (#1998)
    • gateway: clean up expired temporary files (#2082)
    • SDK: package amd64 and arm64 libjfs (#2198)
    • SDK: don't reuse fd in Java SDK (#2122)
    • Dependency: upgrade coredns for CVE-2019-19794 (#2190)
    • Dependency: upgrade azblob sdk (#1962)
    • meta: keep valid utf8 in dumped JSON (#1973)
    • SDK: mvn shade some dependency to avoid class conflict (#2035)
    • meta: truncate trash entry name if it's too long (#2049)
    • meta/sql: use repeatable-read for transaction (#2128)

    Bugfix

    • cmd: fix not showing arguments for commands without META-URL (#2158)
    • cmd/sync: fix sync lost file (#2106)
    • cmd/warmup: fix warmup on read-only mount point (#2108)
    • meta: skip updating sliceRef if id is 0 (#2096)
    • meta: fix update xattr with the same value (#2078)
    • meta/redis: handle lua result from Redis v7.0+ (#2221)
    • meta/sql: fix dump with more than 10000 files (#2134)
    • meta/sql: one transaction in SQLite at a time (#2126)
    • meta/sql: fix rename with Postgres with repeatable read (#2109)
    • meta/sql: fix deadlock in PG (#2104)
    • meta/sql: ignore error about duplicated index (#2087)
    • meta/sql: read database inside transaction (#2073, #2086)
    • meta/sql: retry transaction on duplicated entry and concurrent update (#2077)
    • meta/sql: fix the deadlock in rename (#2067)
    • meta/sql: retry transaction for duplicate key in PG (#2071)
    • meta/sql: fix update query in SQL transaction (#2024)
    • meta/tkv: fix value of delSliceKey (#2054)
    • meta/tkv: upgrade TiKV client to 2.0.1 to fix nil connection (#2050)
    • chunk/store: fix stats of cached space in writeback mode (#2227)
    • object: delete should be idempotent (#2034)
    • object/file: Head of file should return File (#2133)
    • object/s3: check prefix and marker with returned keys from S3 (#2040)
    • object/prefix: fix with prefix returning nil error for unsupported ops (#2021)
    • object/sftp: fix auth of sftp with multiple keys (#2186)
    • object/sftp: fix prefix of sftp, support ssh-agent (#1954)
    • vfs/backup: skip cleanup if list failed (#2044)
    • SDK: handle atomic rename exception (#2192)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(519 bytes)
    juicefs-1.0.0-rc1-darwin-amd64.tar.gz(24.04 MB)
    juicefs-1.0.0-rc1-darwin-arm64.tar.gz(22.51 MB)
    juicefs-1.0.0-rc1-linux-amd64.tar.gz(22.60 MB)
    juicefs-1.0.0-rc1-linux-arm64.tar.gz(20.82 MB)
    juicefs-1.0.0-rc1-windows-amd64.tar.gz(22.63 MB)
    juicefs-hadoop-1.0.0-rc1-javadoc.jar(193.34 KB)
    juicefs-hadoop-1.0.0-rc1-sources.jar(114.05 MB)
    juicefs-hadoop-1.0.0-rc1.jar(117.34 MB)
  • v1.0.0-beta3(May 5, 2022)

    JuiceFS v1.0.0-beta3 is the third beta release for v1.0. It has 247 commits from 22 contributors, thanks to @SandyXSD @zhoucheng361 @davies @zhijian-pro @yuhr123 @sanwan @AIXjing @rayw000 @xiaogaozi @Suave @showjason @tangyoupeng @201341 @solracsf @guo-sj @chnliyong @DeanThompson @zwwhdls @wph95 @lidaohang @sjp00556 @DEvmIb !

    Highlights

    • Supports etcd as a new metadata engine. It can be a handy choice when you only need a small volume but cares more about the data availability and persistence.
    • Supports Redis Cluster and other compatible services (Amazon MemoryDB for Redis) as metadata engines.
    • When using SQL metadata engines, file names not encoded by UTF-8 can now be properly handled after manual modification to the table schema, see details.
    • A new session management format is introduced. Old clients are unable to detect sessions with version 1.0.0-beta3 or higher via juicefs status or juicefs destroy command, see details.
    • If trash is enabled, compacted slices are kept as well in case they are needed to recover file data. These slices will be cleaned up automatically after trash-days, and can be deleted manually via juicefs gc command.
    • A lot of improvements have been made to juicefs sync command.
    • A lot of protection checks against unintentional misuse have been added.

    New

    • Supports etcd as metadata engine (#1638)
    • Supports Redis in cluster mode while using one slot for each file system (#1696)
    • Supports handling file names not encoded by UTF-8 for SQL metadata engines (#1762)
    • Supports TLS when using TiKV as metadata engine or object storage (#1653, #1778)
    • Supports Oracle Object Storage as data storage (#1516)
    • Supports setting umask for S3 Gateway (#1537)
    • Java SDK now supports pushing metrics to Graphite (#1586)
    • Added a new option --heartbeat for the mount command to adjust heartbeat interval (#1591, #1865)
    • Added many improvements for sync command to make it more handy (#1554, #1619, #1651, #1836, #1897, #1901)
    • Added a new option --hash-prefix for the format command to add a hashed prefix for objects (#1657)
    • Added a new client option --storage to allow customized storage type (#1912)

    Changed

    • compacted slices will be kept for trash-days if trash is enabled (#1790)
    • cmd: support using integer for duration flags (#1796)
    • cmd: use homedir as default working directory for non-root users (#1869)
    • cmd/format: create a uuid object in the target bucket (#1548)
    • cmd/dump&load: dump and load support non-ASCII characters (#1691)
    • cmd/dump: omit empty value in dumped JSON (#1676)
    • cmd/dump: remove secret key (#1569)
    • meta: encrypt the secret-key and encrypt-key in setting (#1562)
    • meta: create subdir automatically (#1712)
    • meta: specify the format field preventing update (#1776)
    • meta: escape meta password from env (#1879)
    • meta/redis: check redis version (#1584)
    • meta/redis: use smaller retry backoff in sentinel mode (#1620)
    • meta/redis: retry transaction for connection error or EXECABORT (#1637)
    • meta/sql: retry transaction after too many connections (#1876)
    • meta/sql: add primary key for all tables (#1913, #1919)
    • meta&chunk: Set max retries of meta & chunk according to the config io-retries (#1713, #1800)
    • chunk: limit number of upload goroutines (#1625)
    • chunk/store: limit max retry for async upload as well (#1673)
    • object/obs: Verify Etag from OBS (#1715)
    • object/redis: implement listAll api for redis (#1777)
    • fuse: automatically add ro option if mount with --read-only (#1661)
    • vfs/backup: reduce the limit for skipping backup (#1659)
    • sync: reduce memory allocation when write into files (#1644)
    • SDK: use uint32 for uid,gid (#1648)
    • SDK: handle removeXAttr return code (#1775)
    • Dependency: switch to Go 1.17 (#1594)
    • Dependency: fix jwt replace (#1534)
    • Dependency: upgrade golang-cross version to v1.17.8 (#1539)
    • Dependency: upgrade tikv to v2.0.0 (#1643)
    • Dependency: reduce dep from minio (#1645)
    • Dependency: upgrade gjson to 1.9.3 (#1647)
    • Dependency: upgrade sdk for object storage (#1665)
    • Dependency: upgrade qiniu sdk (#1697)

    Bugfix

    • cmd/format: fix setting quota (#1658)
    • cmd/mount: fix parsing of cache dir (#1758)
    • cmd/warmup: fix handling of relative paths (#1735)
    • cmd/sync: fix sync command not working when destination is webdav (#1636)
    • cmd/gateway: fix s3 gateway DeleteObjects panic (#1527)
    • meta: forbid empty name for dentry (#1687)
    • meta: lock counters when loading entries (#1703)
    • meta: fix snap not released if error occurs when dumping meta (#1669)
    • meta: don't update parent attribute if it's a trash directory (#1580)
    • meta/redis: fix loading large directory into Redis (#1858)
    • meta/redis: update used space/inodes in memory whenever needed (#1573)
    • meta/sql: use upsert to update xattr for PG (#1825)
    • meta/sql: split insert batch (#1831)
    • meta/sql: fix wrong result from scanning SQL database (#1854)
    • chunk/cache: remove the cache key if the cached file no longer exists on disk (#1677)
    • object: fallback to List only if ListAll is not supported (#1623)
    • object/b2: check returned bucket from B2 (#1745)
    • object/encrypt: fix parse rsa from pem (#1724)
    • object/encrypt: add prompt information for the JFS_RSA_PASSPHRASE environment variable (#1706)
    • object/sharding: fix ListAll returning invalid objects (#1616)
    • object/ceph: fix listAll hangs if there are many objects (#1891)
    • vfs: write control file asynchronously (#1747)
    • vfs: fix getlk in access log (#1788)
    • sync: Fix copied and copiedBytes (#1801)
    • utils: fix the problem that the progress bar drops log messages (#1756)
    • SDK: rename libjfs atomic (#1939)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(529 bytes)
    juicefs-1.0.0-beta3-darwin-amd64.tar.gz(23.75 MB)
    juicefs-1.0.0-beta3-darwin-arm64.tar.gz(22.27 MB)
    juicefs-1.0.0-beta3-linux-amd64.tar.gz(22.31 MB)
    juicefs-1.0.0-beta3-linux-arm64.tar.gz(20.58 MB)
    juicefs-1.0.0-beta3-windows-amd64.tar.gz(22.35 MB)
    juicefs-hadoop-1.0.0-beta3-amd64.jar(51.18 MB)
  • v1.0.0-beta2(Mar 4, 2022)

    JuiceFS v1.0.0-beta2 is the second beta release for v1.0. It has 150+ commits from 16 contributors, thanks to @SandyXSD @zhijian-pro @yuhr123 @xiaogaozi @davies @sanwan @AIXjing @Suave @tangyoupeng @zwwhdls @201341 @zhexuany @chnliyong @liufuyang @rayw000 @fredchen2022 !

    New

    • Supports BadgerDB (an embeddable key-value database) as metadata engine (#1340)
    • Supports WebDAV protocol to access JuiceFS files (see the sketch after this list) (#1444)
    • Supports read-only clients connecting to a Redis Sentinel-controlled replica (#1433)
    • Added version control of metadata and clients (#1469)
    • Added categories and long descriptions for all commands (#1488, #1493)
    • Added a new option no-bgjob for service commands to disable background jobs like clean-up, backup, etc. (#1472)
    • Added metrics for number of rawstaging blocks (#1341)
    • Added cross-platform compile script (#1374)
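
    A minimal sketch of serving a volume over WebDAV with the new subcommand (the meta URL and listen address are hypothetical, and the exact argument form is an assumption):

    $ ./juicefs webdav redis://localhost:6379/1 127.0.0.1:9007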

    Changed

    • cmd: help command is removed; use --help/-h flag to get help information (#1488)
    • cmd: print usage if not enough args (#1491)
    • cmd/format: only try to create the bucket when it really doesn't exist (#1289)
    • cmd/format: prevent reusing object storage when formatting a volume (#1420, #1449)
    • cmd/destroy: show information of active sessions (#1377)
    • cmd/config: add sanity check of new values (#1479)
    • cmd/umount: ignore error from umount (#1425)
    • meta: add progress bar for CompactAll (#1317)
    • meta: hide password in meta-url (#1333, #1361)
    • meta: use META_PASSWORD env and omit unnecessary characters (#1388)
    • meta: limit the number when scanning sessions/files (#1397)
    • meta: limit the number of clients running background cleanup jobs (#1393)
    • meta: continue dump when encountering non-fatal errors (#1462)
    • meta/sql: increase max idle connections to CPU*2 for SQL engine (#1443)
    • object: set ContentType when putting object (#1468)
    • object: support skipping HTTPS certificate verification (#1453)
    • object/s3: format support pvc link (#1382)
    • object/qingstor: support private cloud and replace the sdk repository (#1303)
    • vfs: don't do backup for read-only clients (#1435)
    • vfs: add BackupMeta in config (#1460)
    • utils: log now contains the exact line of caller, and is colorized in terminal (#1404, #1318, #1312)
    • utils: simplify the usage of progress bar (#1325)
    • utils: add SleepWithJitter to reduce collision of background jobs (#1412)
    • SDK: upgrade Hadoop common to 3.1.4 (#1411)
    • SDK: java sdk push gateway support multiple volumes (#1492)
    • Other: updated the chart and moved it to a standalone repository (#1281, #1336, #1348)
    • Improves documentation and coverage of tests

    Bugfix

    • cmd: fix buffer-size in gc and fsck (#1316)
    • cmd/bench: convert PATH to absolute path (#1305)
    • meta: return EROFS as soon as possible (#1477)
    • meta/redis: fix leaked inodes in Redis (#1353)
    • meta/tkv: fix divide by zero error when dumping meta (#1369)
    • meta/tikv: fix scan of tikv, limiting the upper bound (#1455)
    • meta/memkv: fix scanKeys, returning a sorted list (#1381)
    • meta/sql: delete warning message for empty directory (#1442)
    • meta/sql: fix return value of mustInsert (#1429)
    • vfs: fixed deadlock when truncating a released file handle (#1383)
    • vfs/trash: fix access to trash dir (#1356)
    • vfs/backup: skip dir objects when scanning meta backups (#1370)
    • vfs/backup: fix incorrect inode number when using subdir (#1385)
    • utils: fix the contention between progress bar and logger (#1436)
    • Windows: fix rename fails because the chunk file is still open (#1315)
    • Windows: fix mkdir on windows platform (#1327)
    • SDK: hadoop: fix umask apply (#1338, #1394)
    • SDK: hadoop: fix libjfs.so load bug (#1458)
    • other: fix legend of "Operations" panel in Grafana template (#1321)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(423 bytes)
    juicefs-1.0.0-beta2-darwin-amd64.tar.gz(23.31 MB)
    juicefs-1.0.0-beta2-linux-amd64.tar.gz(22.04 MB)
    juicefs-1.0.0-beta2-linux-arm64.tar.gz(20.26 MB)
    juicefs-1.0.0-beta2-windows-amd64.tar.gz(22.07 MB)
    juicefs-hadoop-1.0.0-beta2.jar(47.73 MB)
  • v1.0.0-beta1(Jan 13, 2022)

    JuiceFS v1.0.0-beta1 is the first beta release for v1.0, arrived three months after v0.17. It has 300+ commits from 22 contributors, thanks to @SandyXSD @davies @xiaogaozi @yuhr123 @zhijian-pro @sanwan @zwwhdls @tangyoupeng @Suave @chiyutianyi @201341 @suzaku @reusee @tisonkun @chenjie4255 @dragonly @nature1995 @fredchen2022 @Shoothzj @nsampre @supermario1990 @sjp00556 !

    JuiceFS v1.0.0-beta1 is released under the Apache License 2.0.

    New

    • Backs up the whole metadata as a compressed JSON file into object storage every hour by default, so most of the data can be recovered in case the entire meta database is lost. (#975)
    • Added trash bin: removed or overwritten files are moved to the trash bin and deleted after the configured number of days. (#1066)
    • Added config command to update the configuration of an existing volume (see the sketch after this list) (#1137)
    • Added destroy command to clean up all data & metadata of a volume (#1164)
    • Added an option to limit the concurrent deletes (#917)
    • Added an option to register the Prometheus metrics API to Consul (#910)
    • Added an option to the S3 gateway to keep the ETag information of uploaded objects (#1154)
    • Added --check-all and --check-new to verify data integrity (#1208)
    • sync command supports anonymous access to S3 (#1228)
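
    As a rough sketch, the new volume-management commands can be used like this (the meta URL and values are hypothetical; --trash-days sets the trash retention mentioned above, and destroy expects the volume UUID, elided here):

    $ ./juicefs config redis://localhost:6379/1 --trash-days 7
    $ ./juicefs destroy redis://localhost:6379/1 <volume-UUID>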

    Changed

    • meta: don't return EINVAL when encountering unknown flags (#862)
    • cmd/bench: remove default benchmark PATH (#864)
    • hadoop: sort listing results to be compatible with HDFS (#889)
    • expose metrics in S3 gateway (#897)
    • clean up broken mount point (#912)
    • cmd/info: add '--recursive' flag to get summary of a tree (#935)
    • cache: force sync upload if local disk is too full for writeback (#943)
    • Speed up metadata dump & load for Redis (#954)
    • fsck: list broken files (#958)
    • write to JSON file as a stream (#970)
    • speed up listing on files (#622)
    • meta: unify transaction backoff for redis/tkv (#999)
    • speed up metadata dumping and loading for TiKV V2 (#998)
    • check AK and SK for gateway (#1001)
    • add an option to provide a customized bucket endpoint (#1008)
    • change frsize to 4096 (#1016)
    • Speed up dump for SQL engine (#1006)
    • format: support providing only the bucket name (#1022)
    • meta/tkv: fix clean stale session (#1041)
    • add namespace and label to existing metrics (#1056)
    • release memory after dumping meta (#1093)
    • meta: unify counters for all metadata engines (#1119)
    • Keep SGID for directories (#1133)
    • Speed up hadoop concat by deleting src concurrently (#1163)
    • change default cache-size to 100GB (#1169)
    • metrics: expose only JuiceFS metrics to prometheus (#1185)
    • cleanup reference of unused slice in gc command (#1249)
    • Adjust log level for xorm and TiKV when they are in use (#1229)
    • hadoop: Users in supergroup are superusers (#1202)
    • Check permission with multiple groups (#1205)
    • Get the region of the object storage compatible with the S3 protocol from the env (#1171)
    • Added prefix to metrics in gateway (#1189)
    • improve s3 url parsing (#1234)

    Bugfix

    • go-fuse: return ENOSYS if xattr is disabled (#863)
    • Fix WebDAV backend (#899)
    • meta: always update cache after getting the newest attributes (#902)
    • sql.go: fix delete file range error (#904)
    • meta: update parent when renaming (#905)
    • sqlmeta: fix delete chunk sql error (#931)
    • Fix a bug where file credential authentication fails when using GCS (#933)
    • hadoop: compatible with kitedata in sqoop parquet use case (#941)
    • Fix redis-sentinel addresses parsing (#953)
    • utils: fix cond and its test (#983)
    • workaround for List with marker (#984)
    • sync: ignore broken symlink (#1015)
    • meta/sql: fix delete condition (#1018)
    • metadata: should not skip the error when opening a file that does not exist (#1035)
    • Fix minio with sync (#1055)
    • meta: remove xattr only when nlink <= 0 (#1078)
    • meta/sql: fix parentDst nlink in rename (#1082)
    • Fix upload bandwidth limit (#1086)
    • Fix lua script handling big inode number (#1095)
    • meta: fix lookup of corrupt entries (#1098)
    • Fix potential metadata corrupt in Redis caused by gc (#1110)
    • Fix fmtKey to sliceKey (#1143)
    • Fix mkdir/rename/delete/rmr with trailing slash (#1149)
    • Fix gateway GetObjectInfo http code 500 to 404 (#1158)
    • meta: fix nlink of parentDst in rename (#1170)
    • Sync: distinguish DeleteSrc from DeleteDst (#1197)
    • Fix subdir in S3 gateway (#1201)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(423 bytes)
    juicefs-1.0.0-beta1-darwin-amd64.tar.gz(22.69 MB)
    juicefs-1.0.0-beta1-linux-amd64.tar.gz(21.44 MB)
    juicefs-1.0.0-beta1-linux-arm64.tar.gz(19.73 MB)
    juicefs-1.0.0-beta1-windows-amd64.tar.gz(21.48 MB)
    juicefs-hadoop-1.0.0-beta1-linux-amd64.jar(18.36 MB)
  • v0.17.5(Dec 10, 2021)

    JuiceFS v0.17.5 is a patch release for v0.17, which has the following changes:

    • Fixed sync command crash on broken symlinks (#1028).
    • Fixed the returned code when opening a non-existent file (#1035).
    • Fixed leaked keys after cleaning up stale sessions in TiKV (#1041).
    • Fixed sync from/to certain prefixes of MinIO (#1055).
    • Fixed the problem that extended attributes of hard links are removed unexpectedly (#1078).
    • Fixed the wrong used space in df (introduced in v0.17.2) (#1096).
    • Fixed a bug in the gc command that could corrupt files in Redis (#1110).

    Thanks to @davies @SandyXSD @chiyutianyi for contributions!

    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.17.5-darwin-amd64.tar.gz(24.09 MB)
    juicefs-0.17.5-linux-amd64.tar.gz(22.80 MB)
    juicefs-0.17.5-linux-arm64.tar.gz(20.96 MB)
    juicefs-0.17.5-windows-amd64.tar.gz(22.60 MB)
    juicefs-hadoop-0.17.5-linux-amd64.jar(18.65 MB)
  • v0.17.2(Nov 19, 2021)

    Changelog

    JuiceFS v0.17.2 is the second patch release for v0.17, which has the following changes:

    • Fixed potential data leak for SQL engine (#931 #1018).
    • Fixed a bug in CopyFileRange() (77a3a6252d).
    • Fixed authentication for GCS in container (#933).
    • Don't fill all the disk space in writeback mode (#943).
    • Fixed parsing address for Redis Sentinel (#953).
    • Added a workaround to sync with object storages that are not compatible with S3 (#984).
    • Increase backoff for Redis/TiKV to avoid potential failure under high contention (#999).
    • Added a check for AK/SK for S3 gateway (#1001).
    • Added an option to provide a customized endpoint for object storage (#1008).
    • Changed frsize to 4096 to be compatible with local file systems (#1016).

    Thanks to @SandyXSD @davies @chiyutianyi @zhijian-pro @nsampre for contributions!

    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.17.2-darwin-amd64.tar.gz(24.09 MB)
    juicefs-0.17.2-linux-amd64.tar.gz(22.80 MB)
    juicefs-0.17.2-linux-arm64.tar.gz(20.96 MB)
    juicefs-0.17.2-windows-amd64.tar.gz(22.61 MB)
    juicefs-hadoop-0.17.2-linux-amd64.jar(18.66 MB)
  • v0.17.1(Oct 28, 2021)

    JuiceFS v0.17.1 is a patch release for v0.17, which has the following changes:

    • Return ENOSYS if xattr is disabled (#863).
    • Ignore unknown flags in setattr() (#862).
    • Sort files when listing a directory in Java SDK to be compatible with HDFS (#889).
    • Upgrade nats-server to v2.2+ to address CVE-2021-3127 (#893).
    • Enable metrics for S3 gateway (#897).
    • Fixed WebDAV backend (#899).
    • Refresh cache once new attributes are found (#902).
    • Upgrade dgrijalva/jwt-go to golang-jwt/jwt to address CVE-2020-26160 (#903)
    • Fixed parent of renamed files/dirs (#905).
    • Fixed chunk deletions for SQL engine (#904)
    • Upgrade gjson to fix CVE-2021-42836 (#912).

    Thanks to contributions from @SandyXSD @tangyoupeng @davies @zhijian-pro @chiyutianyi !

    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.17.1-darwin-amd64.tar.gz(23.92 MB)
    juicefs-0.17.1-linux-amd64.tar.gz(22.64 MB)
    juicefs-0.17.1-linux-arm64.tar.gz(20.81 MB)
    juicefs-0.17.1-windows-amd64.tar.gz(22.44 MB)
    juicefs-hadoop-0.17.1-linux-amd64.jar(18.49 MB)
  • v0.17.0(Sep 24, 2021)

    JuiceFS v0.17 arrived one month after 0.16, with 80+ commits from 9 contributors (@SandyXSD, @davies, @xiaogaozi, @yuhr123, @Suave @tangyoupeng @201341 @zwwhdls @allwefantasy), thanks to them!

    This release improved performance for temporary data by adding an in-memory meta engine (memkv) and delayed uploading. For example, JuiceFS can be used as the shuffle and spill disk without worrying about running out of space.
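
    A minimal sketch of such a temporary-data mount (the mount point and delay value are hypothetical; with memkv the metadata lives only in client memory, so the volume is lost on unmount):

    $ ./juicefs mount memkv:// /tmp/jfs --writeback --upload-delay 1h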

    Linux Test Project was used to verify the compatibility of JuiceFS, please check out the current results here.

    This release introduced metadata cache for the Java SDK and S3 gateway (similar to the metadata cache in the kernel), which can be turned on to improve performance significantly.

    New

    • Added an option to delay upload in writeback mode (#736 #743), which is useful for temporary data.
    • Added an in-memory meta engine (memkv://) for temporary data (#751 #779).
    • Support CREATE and REPLACE flags in setxattr (#770).
    • Added metrics for in-memory cache (#776).
    • Support rename flags (NoReplace and Exchange) (#787).
    • New colorful result for bench command (#810).
    • Added entry and attributes cache for Java SDK and S3 gateway (#835).

    Changed

    • The default logging directory on macOS was changed to the user's home directory (#744).
    • Limit the number of retries for listing to 3 to avoid infinite loops (#745).
    • Show total size of valid objects in gc command (#746).
    • Disable SHA256 checksum for S3 and other compatible object stores to reduce CPU usage (#754).
    • Hide operations on internal files from access log (#766).
    • Require Go 1.15 to build JuiceFS and build the release with Go 1.16 (#771).
    • Inherit gid of parent when SGID is set (#772).
    • Keep SGID when file is non-group-executable (#773).
    • Allow removing broken dirs/files (#784).
    • Retry transactions for TxnLockNotFound from TiKV (#789).
    • Clean up the current session during umount (#796).
    • Reduce memory allocation for OBS and WebDAV backends (#800).
    • Support escaped access key for KS3 (#830).
    • Support lookup of . and .. (#842).
    • No warning if compaction fails with missing objects (#844).
    • Increase available inodes based on current usage (#851).
    • Allow updating access key and secret key with the default compress algorithm (#855).

    Bugfix

    • Fixed a leak in SQL engine (#728).
    • Fixed a bug that may crash the client (#729).
    • Fixed valid bytes of progress bar for gc command (#746).
    • Fixed warmup with a long list of files (#752).
    • Fixed support for secured Redis connections (regression in v0.16) (#758).
    • Fixed data corruption in SQL and TiKV engine when some slices are skipped during compaction (#759).
    • Fixed metrics for read bytes in Hadoop SDK (#761).
    • Fixed multipart upload in S3 gateway (#765).
    • Fixed POSIX locks on interleaved regions (#769).
    • Fixed latency metrics for libRADOS (#793).
    • Fixed concat in Java SDK and multipart upload (#817).
    • Fixed nlink of parent when renaming directories (#839).
    • Fixed transaction for read-only mount (#844).
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.17.0-darwin-amd64.tar.gz(23.73 MB)
    juicefs-0.17.0-linux-amd64.tar.gz(22.46 MB)
    juicefs-0.17.0-linux-arm64.tar.gz(20.64 MB)
    juicefs-0.17.0-windows-amd64.tar.gz(22.26 MB)
    juicefs-hadoop-0.17.0-linux-amd64.jar(18.48 MB)
  • v0.16.2(Aug 25, 2021)

    JuiceFS v0.16.2 is a patch release for v0.16; upgrading is recommended.

    Bugfix

    • Retry LIST three times to avoid infinite loops (#745).
    • Fixed valid bytes of progress bar for gc command (#746).
    • Fixed warmup with a long list of files (#752).
    • Fixed support for secured Redis connections (regression in v0.16) (#758).
    • Fixed data corruption in SQL and TiKV engine when some slices are skipped during compaction (#759).
    • Fixed metrics for read bytes in Hadoop SDK (#761).
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.16.2-darwin-amd64.tar.gz(23.71 MB)
    juicefs-0.16.2-linux-amd64.tar.gz(22.43 MB)
    juicefs-0.16.2-linux-arm64.tar.gz(20.62 MB)
    juicefs-0.16.2-windows-amd64.tar.gz(22.24 MB)
    juicefs-hadoop-0.16.2-linux-amd64.jar(18.90 MB)
  • v0.16.1(Aug 16, 2021)

    JuiceFS v0.16.1 arrived one month after 0.15.2, with 80+ commits from 11 contributors (@davies, @Sandy4999 , @xiaogaozi @tangyoupeng @zhijian-pro @chnliyong @Suave @themaxdavitt @xuhui-lu @201341 @zwwhdls ), thanks to them!

    The biggest feature is supporting TiKV as the meta engine, which is a distributed transactional key-value database. With TiKV, JuiceFS can store trillions of files and exabytes of data.
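
    A minimal sketch of formatting a volume against TiKV (the PD addresses and key prefix are hypothetical):

    $ ./juicefs format tikv://192.168.1.6:2379,192.168.1.7:2379/jfs myvolume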

    BREAKING CHANGE

    The meaning of the password in a Redis Sentinel URL is changed from the Sentinel password to the Redis server password; please update the password in the URL if you use Sentinel and the two passwords are different.
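
    For reference, a hypothetical Sentinel meta URL under the new semantics (master name, sentinel addresses, and password are made up; the password now authenticates against the Redis servers):

    redis://:redis-password@mymaster,192.168.1.6:26379,192.168.1.7:26379/1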

    New

    • Supports TiKV as meta engine and object store (#610 #629 #631 #636 #645 #633 #663 #666 #675 #704).
    • Added limit for upload/download bandwidth (#611).
    • Added a virtual file (.config) to show current configurations (#652).
    • Added a subcommand stats to watch performance metrics in realtime (#702 #721).
    • Added progress bar for fsck, gc, load and dump (#683 #684 #724)
    • Disabled updatedb for JuiceFS (#727).

    Changed

    • Speed up listing on file store (#593).
    • Upgrade KS3 SDK to 1.0.12 (#638).
    • Update mtime in fallocate (#602).
    • Improved performance for writing into file store (#621).
    • Changed the password in the Redis URL to mean the Redis server password (#620)
    • Support mixed read/write in Go/Java SDK (#647).
    • Enable debug agent for sync command (#659).
    • Improved stability for random write workload (#664).
    • Avoid some memory copy in block cache (#668).
    • Disable fsync in writeback mode to be closer to local file systems (#696).
    • profile: show final result when interval is set to 0 (#710).

    Bugfix

    • Fixed stats with MySQL engine (#590).
    • Fixed case insensitivity with MySQL engine (#591).
    • Fixed atime of files in Java SDK (#597).
    • Fixed a bug that blocked writes when memory cache is enabled (#667).
    • Fixed fd leak in block cache (#672).
    • Fixed stale result for restarted transaction (#678)
    • Fixed a bug under mixed write and truncate (#677).
    • Fixed race condition under mixed read/write (#681).
    • Fixed compatibility with Redis clones.
    • Fixed data usage in read-only mode (#698).
    • Fixed key leak in Redis (#694).
    • Fixed a bug about collate with MySQL 5.6 (#697).
    • Fixed a bug that may cause crash when writeback_cache is used (#705).
    • Fixed pid of updated POSIX lock (#708).
    • Added a workaround for a data loss bug in Huawei OBS SDK (#720).
    • Fixed the metrics of uploaded bytes (#726).
    • Fixed a leak of chunk and sustained inode in SQL engine (#728).
    • Fixed a bug that may crash client (#729).
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.16.1-darwin-amd64.tar.gz(23.71 MB)
    juicefs-0.16.1-linux-amd64.tar.gz(22.43 MB)
    juicefs-0.16.1-linux-arm64.tar.gz(20.62 MB)
    juicefs-0.16.1-windows-amd64.tar.gz(22.24 MB)
    juicefs-hadoop-0.16.1-linux-amd64.jar(18.90 MB)
  • v0.15.2(Jul 7, 2021)

    JuiceFS v0.15.2 arrived one month after v0.14.2, with 60+ changes from 8 contributors (@davies, @Sandy4999, @xiaogaozi, @yuhr123, @Suave, @zzcclp, @tangyoupeng, @chnliyong), thanks to them.

    This release introduced a new tool to back up and restore metadata, which can also be used to migrate metadata between different meta engines; check Backup and Restore Metadata for details and see the sketch below.
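
    A minimal sketch of the backup/restore flow, including migrating to a different meta engine (addresses, credentials, and the file name are hypothetical):

    $ ./juicefs dump redis://localhost:6379/1 meta-dump.json
    $ ./juicefs load mysql://user:password@(localhost:3306)/juicefs meta-dump.json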

    This release also significantly improved performance for read/write-heavy workloads by utilizing the kernel page cache.

    This release is backward-compatible with previous releases and should be safe to upgrade.

    New

    • Added dump and load commands to back up and restore metadata (#510, #521, #529, #535, #551).
    • Added an option (--read-only) to mount as read-only (#520).
    • Support command auto-completion (#530, #534).
    • Run benchmark in parallel (-p N) (#545).
    • Support PostgreSQL as meta engine (#542).
    • Added an option (--subdir) to mount a sub-directory (#550).
    • Support WebDAV as an object store (#573).
    • Allow enable writeback cache in kernel by -o writeback_cache (#576).
    • Added an option to redirect logging into a file in background mode (#575).

    Changed

    • Changed the batch size of LIST requests to 1000 (some object storages may fail with 400 for a larger limit) (1385587).
    • Exclude log4j from Java SDK to avoid potential conflict (#501).
    • Exit when an unknown option is found (d6a39f11db).
    • Report type of meta engine and storage together with usage (#504).
    • Changed the default configuration of Java SDK and S3 gateway to be consistent with juicefs mount (#517).
    • Keep page cache in kernel when files opened without changes (#528, #537).
    • Change REDIS-URL to META-URL in docs (#552).

    Bugfix

    • Fixed the memory leak in B2 client (#500).
    • Handle BucketAlreadyExists errors for all object storages (#561).
    • Fixed a bug with SQLite and PostgreSQL engines when the high bit of the lock owner is 1 (#588)
    • Fixed updating stats for MySQL engine (#590).
    • Fixed case sensitivity for MySQL engine (#591).
    • Fixed potential leak for files overwritten by rename (#594, #495)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.15.2-darwin-amd64.tar.gz(21.66 MB)
    juicefs-0.15.2-linux-amd64.tar.gz(20.52 MB)
    juicefs-0.15.2-linux-arm64.tar.gz(18.91 MB)
    juicefs-0.15.2-windows-amd64.tar.gz(20.32 MB)
    juicefs-hadoop-0.15.2-linux-amd64.jar(16.93 MB)
  • v0.14.2(Jun 4, 2021)

    JuiceFS v0.14.2 received 30+ contributions from @davies @xiaogaozi @tangyoupeng @Sandy4999 @chnliyong @yuhr123 @xyb @meilihao @frankxieke , thanks to them!

    New

    • Added quota for space and inodes for whole volume (#495).
    • Lists all the client sessions in juicefs status (#491).
    • Clean up any leaked inodes in Redis with juicefs gc (#494).
    • Supports sticky bits in Java SDK (#475).
    • Added juicefs.umask for Java SDK (#462).
    • Empty the Hadoop trash automatically (#456).

    Changed

    • Returns ENOSPC rather than IOError when Redis runs out of memory (#479).

    Bugfix

    • Allow superuser to change file mode in Java SDK (#467).
    • Allow the owner to change the group of a file (#470).
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.14.2-darwin-amd64.tar.gz(21.62 MB)
    juicefs-0.14.2-linux-amd64.tar.gz(20.48 MB)
    juicefs-0.14.2-linux-arm64.tar.gz(18.87 MB)
    juicefs-0.14.2-windows-amd64.tar.gz(20.27 MB)
    juicefs-hadoop-0.14.2-linux-amd64.jar(16.82 MB)
  • v0.13.1(May 27, 2021)

    JuiceFS v0.13.1 is a bugfix release for v0.13. We have created the first release branch for 0.13, which will only receive bugfixes in future patch releases.

    New

    • Support flock for SQL engine (#422).
    • Support POSIX locks for SQL engine (#425).
    • Global user/group ID mapping for Java SDK (#439, #447)
    • Added benchmark results for different meta engines (#445).

    Bugfix

    • Fixed transaction conflict for SQL engine (#448).
    • Fixed build on macOS 11.4 (#452).
    • Fixed parsing Redis versions (1c945d746be376831e706761a6a566d236123be3).
    • Ignore deleted files during sync (6e06c0ebd6a2e906235a80807418d18f5ea8a84a).
    • Fixed permission check in Lua script (used by Java SDK and S3 Gateway) (#430).
    • Fixed juicefs sync in distributed mode (#424)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.13.1-darwin-amd64.tar.gz(21.60 MB)
    juicefs-0.13.1-linux-amd64.tar.gz(20.47 MB)
    juicefs-0.13.1-linux-arm64.tar.gz(18.87 MB)
    juicefs-0.13.1-windows-amd64.tar.gz(20.26 MB)
    juicefs-hadoop-0.13.1-linux-amd64.jar(16.81 MB)
  • v0.13.0(May 19, 2021)

    JuiceFS v0.13 arrived 1 month after v0.12.1, with more than 80 changes from 9 contributors (@davies, @Sandy4999 , @xiaogaozi, @yuhr123, @polyrabbit, @suzaku, @tangyoupeng, @angristan, @chnliyong), thanks to them.

    The biggest feature in v0.13 is using a SQL database as the meta engine; SQLite, MySQL and TiDB are supported right now, and we will add others later. Using a SQL database will be slower than using Redis, but SQL databases offer better persistence and scalability, making them a better fit when data safety and the number of files matter more than performance, for example for backups (see the sketch below).
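
    A minimal sketch of formatting a volume with MySQL as the meta engine (host, credentials, and database name are hypothetical):

    $ ./juicefs format mysql://user:password@(127.0.0.1:3306)/juicefs myvolume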

    New

    • Support SQL database (SQLite, MySQL and TiDB) as meta engine (#375).
    • Added profile to analyze access log (#344).
    • Added status to show the setting and status (#368).
    • Added warmup to build cache for files/directory (#409).
    • Build Java SDK for Windows (#362).
    • Use multiple buckets as object store (#349).
    • Collect metrics for Java SDK (#327).
    • Added virtual file /.stats to show the internal metrics (#314).
    • Allow building a minimized binary without the S3 gateway and other object storages (#324).

    Changed

    • Enable checksum for Swift (a16b106808aa1).
    • Added more buckets for object latency distribution (#321).
    • Added an option (--no-agent) to disable the debug agent (#328).
    • Added internal details at the end of benchmark (#352).
    • More metrics for block cache (#387).
    • Speed up path resolution for Java SDK and S3 gateway using Lua script (#394).
    • Restart the Redis transaction after some known failures (#397).
    • Remove the limit on the number of cached blocks (#401).

    Bugfix

    • Fixed a bug in SetAttr to refresh newly written data (429ce80100).
    • Fixed overflow in StatFS (abcb5c652b).
    • Fixed a bug when using MinIO with juicefs sync (199b4d35b).
    • Fixed a bug in CopyFileRange, which may affect multipart uploads and concat of Java SDK (fb611b0825).
    • Fixed deadlock when truncate happens together with read (2f8a8d9).
    • Fixed stale read after truncate (226b6a7e).
    • Fixed downloading a directory using S3 gateway (#378).
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.13.0-darwin-amd64.tar.gz(21.59 MB)
    juicefs-0.13.0-linux-amd64.tar.gz(20.45 MB)
    juicefs-0.13.0-linux-arm64.tar.gz(18.85 MB)
    juicefs-0.13.0-windows-amd64.tar.gz(20.25 MB)
    juicefs-hadoop-0.13.0-linux-amd64.jar(16.79 MB)
  • v0.12.1(Apr 15, 2021)

    JuiceFS v0.12.1 fixed a few bugs and made improvements to scalability.

    Changes

    • Only clean up leaked chunks in juicefs gc, since doing more may overload Redis on larger clusters (6358e388416c).
    • Improve session cleanup to avoid scanning all keys (#293).
    • Use hash set for refcount of slices to avoid scanning all keys (#294).
    • Clean up slices with zero refcount to save memory (#295).
    • Support case insensitivity in Windows (#303).

    Bugfix

    • Fixed ranged GET for Swift (dcd705714f8f).
    • Fixed random read benchmark on files larger than 2G (#305).
    • Fixed listing of files in the root directory on Windows (#306)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.12.1-darwin-amd64.tar.gz(20.24 MB)
    juicefs-0.12.1-linux-amd64.tar.gz(19.34 MB)
    juicefs-0.12.1-linux-arm64.tar.gz(17.78 MB)
    juicefs-0.12.1-windows-amd64.tar.gz(19.15 MB)
    juicefs-hadoop-0.12.1-linux-amd64.jar(15.30 MB)
  • v0.12.0(Apr 12, 2021)

    JuiceFS v0.12 arrived one month after v0.11, with more than 70 changes from 7 contributors (@davies @xiaogaozi @chnliyong @tangyoupeng @Arvintian @luohy15 @angristan), thanks to them.

    New

    • Supports Windows (#195, #268 #271).
    • Added juicefs gc to collect garbage in object store (#248, #290).
    • Added juicefs fsck to check the consistency of the file system (#253).
    • Added juicefs info to show internal information for file/directory (slow for large directory)(#288).

    Changes

    • Added prefix (juicefs_) and labels (vol_name and mp) for exposed metrics.
    • Support path-style endpoint for S3 compatible storage (#175).
    • Added --verbose as an alias to --debug.
    • Support proxy setting from environment variables for OBS (#245).
    • Wait for blocks to be persisted to disk in writeback mode (#255).
    • Change the default number of prefetch threads to 1.
    • Fail the mount if the mount point is not ready within 10 seconds in daemon mode.
    • Speed up juicefs rmr by parallelizing it.
    • Limit the used memory to 200% of --buffer-size (slow down if it's above 100%).
    • Reload the Lua scripts after Redis is restarted.
    • Improved compaction to skip first few large slices to reduce traffic to object store (#276).
    • Added logging when operation is interrupted.
    • Disable compression by default (#286).
    • Limit the concurrent deletion of objects to 2 (#282).
    • Accept named options after positional argument (#274).

    Bugfix

    • Accept malformed responses from UFile (f4f5f53).
    • Fixed juicefs umount in Linux (#242).
    • Fixed order of returned objects from listing on SCS (#240).
    • Fixed fetching list of nodes in Java SDK when a customized URL handler is set (#247).
    • Support IPv6 addresses for sftp (#259).
    • Fixed build with librados (#260).
    • Support relative paths when mounting in background (#266).
    • Fixed juicefs rmr with relative path.
    • Cleanup unused objects after failed compaction.
    • Fixed updated files and permissions in sftp.
    Source code(tar.gz)
    Source code(zip)
  • v0.11.0(Mar 1, 2021)

    New

    • Added S3-compatible gateway based on MinIO (#178).
    • Expose metrics for Prometheus (#181), also a Grafana dashboard template.
    • Support copy_file_range in FUSE (#172) and concat in Hadoop SDK (#173).
    • Support data encryption at-rest (#185).
    • Support China Mobile Cloud (ECloud) (#206).
    • Added juicefs rmr to remove all files in a directory recursively (#207).
    • Support Redis with Sentinel (#180).
    • Support multiple directories for disk cache.

    Changed

    • Increased read timeout for Redis from 3 seconds to 30 seconds to avoid IO error (#196).

    Bugfix

    • Fixed a bug that caused recently written data to be unreadable (#157).
    Source code(tar.gz)
    Source code(zip)
  • v0.10.0(Jan 29, 2021)

    New

    1. A Java SDK was released to provide an HDFS-compatible API for the Hadoop ecosystem.
    2. Added juicefs umount to umount a volume.
    3. Added juicefs benchmark to do simple benchmarks (read/write on big/small files).
    4. Merged juicesync into juicefs sync.
    5. Support Sina Cloud Storage.
    6. Support Swift as object store.

    Changed

    1. Release memory in read buffer after being idle for 1 minute.
    2. Ignore invalid IPs (for example, IPv6) returned from DNS of object store.
    3. Improve performance for huge directories (over 1 million entries).
    4. Retry operations after Redis is disconnected or restarted.
    5. Added cache for symlinks.

    Bugfix

    1. Fixed errno for getattr on Darwin when no extended attributes are found.
    2. Updated length in readers to allow reading the latest data written by other clients.
    Source code(tar.gz)
    Source code(zip)
  • v0.9.3(Jan 18, 2021)

    This release includes BREAKING changes and a few bugfixes:

    New

    1. Support rados in Ceph.
    2. Allow updating the access key and secret key of an existing volume.

    Changes

    1. Changed to log into syslog only when running in background.
    2. Changed command options to lower case.
    3. Disabled extended attributes by default, since they slow down operations even when not used.

    Improvements

    1. Allow non-root users to mount in macOS.
    2. Check Redis configs and show a warning when there are risks to data safety or functionality.
    3. Added volume name in macOS.
    4. Wait for the mount point to be ready when it's mounted in background.

    Bugfix

    1. Fixed a bug in ReaddirPlus which slowed down tree traversal.
    2. Fixed leaking of extended attributes in Redis.
    3. Fixed a bug where truncate may delete slices written in the future.
    Source code(tar.gz)
    Source code(zip)
  • v0.9.2(Jan 15, 2021)

    This release addressed all the known issues reported by the community since the first public release.

    1. Added a flag -d to mount in background.
    2. Finished compaction part 2: copy small objects into bigger ones.
    3. Added pessimistic locks to reduce contention in Redis transactions, to avoid failure under high pressure.
    4. Show . and .. when listing a directory with -a in Linux.
    5. Fixed errno for getxattr on internal files.
    6. Fixed local block cache.
    7. Support Scaleway and MinIO as object stores.
    8. Support mounting via /etc/fstab and auto-mount after boot (see the sketch after this list).
    9. Added an option --force to overwrite existing format in Redis.
    10. Log INFO into syslog, and also DEBUG in daemon mode.
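
    A hedged sketch of an /etc/fstab entry for auto-mounting (the meta URL, mount point, and options are hypothetical; this assumes the juicefs binary is also linked as /sbin/mount.juicefs, the usual convention for FUSE mount helpers):

    redis://localhost:6379/1  /jfs  juicefs  _netdev,writeback,cache-size=102400  0 0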
    Source code(tar.gz)
    Source code(zip)