JuiceFS is a distributed POSIX file system built on top of Redis and S3.

Overview


JuiceFS is an open-source POSIX file system built on top of Redis and object storage (e.g. Amazon S3), designed and optimized for cloud-native environments. By using the widely adopted Redis and S3 as the persistent storage, JuiceFS serves as a stateless middleware that enables many applications to share data easily.

The highlighted features are:

  • Fully POSIX-compatible: JuiceFS is a fully POSIX-compatible file system. Existing applications can work with it without any change. See pjdfstest result below.
  • Outstanding Performance: The latency can be as low as a few milliseconds and the throughput can be expanded to nearly unlimited. See benchmark result below.
  • Cloud Native: By utilizing cloud object storage, you can scale storage and compute independently, a.k.a. disaggregated storage and compute architecture.
  • Sharing: JuiceFS is a shared file storage that can be read and written by many clients.
  • Global File Locks: JuiceFS supports both BSD locks (flock) and POSIX record locks (fcntl).
  • Data Compression: By default JuiceFS uses LZ4 to compress all your data; you can also use Zstandard instead.

Architecture | Getting Started | Administration | POSIX Compatibility | Performance Benchmark | Supported Object Storage | Status | Roadmap | Reporting Issues | Contributing | Community | Usage Tracking | License | Credits | FAQ


Architecture

JuiceFS Architecture

JuiceFS relies on Redis to store file system metadata. Redis is a fast, open-source, in-memory key-value data store, well suited for storing metadata. All file data is stored in object storage through the JuiceFS client.

JuiceFS Storage Format

The storage format of a file in JuiceFS has three levels. The first level is called "Chunk": each chunk has a fixed size of 64 MiB, which cannot be changed. The second level is called "Slice": slices have variable sizes, and a chunk may contain multiple slices. The third level is called "Block": like chunks, blocks have a fixed size, 4 MiB by default, which you can modify when formatting a volume (see the following section). Finally, each block is compressed, optionally encrypted, and stored into object storage.
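The chunk/block arithmetic above can be illustrated with a small sketch (not JuiceFS code; it only assumes the default sizes described in this section):

```python
# Sketch: map a file offset to its chunk and block under JuiceFS's
# default layout (64 MiB chunks, 4 MiB blocks). Slices are variable-
# sized and created per write, so they cannot be derived from an
# offset alone.
CHUNK_SIZE = 64 << 20   # 64 MiB, fixed
BLOCK_SIZE = 4 << 20    # 4 MiB by default, settable at format time

def locate(offset: int) -> tuple[int, int, int]:
    """Return (chunk index, block index within chunk, offset within block)."""
    chunk_idx = offset // CHUNK_SIZE
    within_chunk = offset % CHUNK_SIZE
    return chunk_idx, within_chunk // BLOCK_SIZE, within_chunk % BLOCK_SIZE

# A 100 MiB offset lands in the second chunk (index 1), in the
# tenth block of that chunk (index 9 == (100 - 64) // 4).
print(locate(100 << 20))  # (1, 9, 0)
```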

Getting Started

Precompiled binaries

You can download precompiled binaries from releases page.

Building from source

You need to install Go 1.13+ first, then run the following commands:

$ git clone https://github.com/juicedata/juicefs.git
$ cd juicefs
$ make

Dependency

A Redis server (>= 2.2) is needed for metadata; please follow the Redis Quick Start.

macFUSE is also needed for macOS.

The last thing you need is object storage. There are many options for object storage; a local disk is the easiest one to get started with.

Format a volume

Assuming you have a Redis server running locally, we can create a volume called test that uses it to store metadata:

$ ./juicefs format localhost test

It will create a volume with default settings. If the Redis server is not running locally, its address can be specified as a URL, for example redis://user:password@host:6379/1. The password can also be supplied through the environment variable REDIS_PASSWORD to keep it out of command line options.

As JuiceFS relies on object storage to store data, you can specify an object storage service using --storage, --bucket, --access-key and --secret-key. By default, it uses a local directory to serve as the object store. For all the options, please see ./juicefs format -h.

For the details about how to setup different object storage, please read the guide.

Mount a volume

Once a volume is formatted, you can mount it to a directory, which is called the mount point.

$ ./juicefs mount -d localhost ~/jfs

After that you can access the volume just like a local directory.

To get all options, just run ./juicefs mount -h.

If you want to mount JuiceFS automatically at boot, please read the guide.

Command Reference

There is a command reference listing all the options of each subcommand.

Kubernetes

There is a Kubernetes CSI driver to use JuiceFS in Kubernetes easily.

Hadoop Java SDK

If you want to use JuiceFS in Hadoop, check the Hadoop Java SDK.

Administration

POSIX Compatibility

JuiceFS passed all of the 8813 tests in the latest pjdfstest.

All tests successful.

Test Summary Report
-------------------
/root/soft/pjdfstest/tests/chown/00.t          (Wstat: 0 Tests: 1323 Failed: 0)
  TODO passed:   693, 697, 708-709, 714-715, 729, 733
Files=235, Tests=8813, 233 wallclock secs ( 2.77 usr  0.38 sys +  2.57 cusr  3.93 csys =  9.65 CPU)
Result: PASS

Besides the things covered by pjdfstest, JuiceFS provides:

  • Close-to-open consistency. Once a file is closed, subsequent opens and reads can see the data written before the close. Within the same mount point, reads can see all data written before them.
  • Rename and all other metadata operations are atomic, guaranteed by Redis transactions.
  • Open files remain accessible after being unlinked, from the same mount point.
  • Mmap is supported (tested with fsx).
  • Fallocate with punch hole support.
  • Extended attributes (xattr).
  • BSD locks (flock).
  • POSIX record locks (fcntl).
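Both locking flavors above can be exercised from Python's standard fcntl module. A minimal sketch (run against a temporary file here for portability; the same calls work on a file inside a JuiceFS mount point):

```python
import fcntl
import os
import tempfile

def demo_locks() -> bool:
    """Take and release a BSD flock and a POSIX record lock on a scratch file."""
    # On JuiceFS you would open a path under the mount point (e.g. ~/jfs) instead.
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"hello")
        # BSD lock (flock): advisory whole-file exclusive lock.
        fcntl.flock(fd, fcntl.LOCK_EX)
        fcntl.flock(fd, fcntl.LOCK_UN)
        # POSIX record lock (fcntl): lock only the first 3 bytes of the file.
        fcntl.lockf(fd, fcntl.LOCK_EX, 3, 0, os.SEEK_SET)
        fcntl.lockf(fd, fcntl.LOCK_UN, 3, 0, os.SEEK_SET)
    finally:
        os.close(fd)
        os.unlink(path)
    return True

print(demo_locks())
```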

Performance Benchmark

Throughput

We performed a sequential read/write benchmark on JuiceFS, EFS and S3FS using fio; here is the result:

Sequential Read Write Benchmark

It shows JuiceFS can provide 10X more throughput than the other two, read more details.

Metadata IOPS

We performed a simple metadata benchmark on JuiceFS, EFS and S3FS using mdtest; here is the result:

Metadata Benchmark

It shows JuiceFS can provide significantly more metadata IOPS than the other two, read more details.

Analyze performance

There is a virtual file called .accesslog in the root of JuiceFS that shows all operations and the time each one takes, for example:

$ cat /jfs/.accesslog
2021.01.15 08:26:11.003330 [uid:0,gid:0,pid:4403] write (17669,8666,4993160): OK <0.000010>
2021.01.15 08:26:11.003473 [uid:0,gid:0,pid:4403] write (17675,198,997439): OK <0.000014>
2021.01.15 08:26:11.003616 [uid:0,gid:0,pid:4403] write (17666,390,951582): OK <0.000006>

The last number on each line is the time (in seconds) the operation took. You can use this to debug and analyze performance issues. We will provide more tools to analyze it.
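In the meantime, the per-operation latency can be pulled out of .accesslog with a few lines of script. A sketch, with the log format inferred from the sample above:

```python
import re

# Matches the operation name and the trailing "<seconds>" latency field
# in .accesslog lines such as:
# 2021.01.15 08:26:11.003330 [uid:0,gid:0,pid:4403] write (17669,...): OK <0.000010>
LINE_RE = re.compile(r"\] (\w+) \(.*<([\d.]+)>$")

def slowest_ops(log_lines, top=3):
    """Return (latency_seconds, operation) pairs, slowest first."""
    ops = []
    for line in log_lines:
        m = LINE_RE.search(line)
        if m:
            ops.append((float(m.group(2)), m.group(1)))
    return sorted(ops, reverse=True)[:top]

sample = [
    "2021.01.15 08:26:11.003330 [uid:0,gid:0,pid:4403] write (17669,8666,4993160): OK <0.000010>",
    "2021.01.15 08:26:11.003473 [uid:0,gid:0,pid:4403] write (17675,198,997439): OK <0.000014>",
]
print(slowest_ops(sample))  # [(1.4e-05, 'write'), (1e-05, 'write')]
```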

Supported Object Storage

  • Amazon S3
  • Google Cloud Storage
  • Azure Blob Storage
  • Alibaba Cloud Object Storage Service (OSS)
  • Tencent Cloud Object Storage (COS)
  • QingStor Object Storage
  • Ceph RGW
  • MinIO
  • Local disk
  • Redis

For the detailed list, see this document.

Status

JuiceFS is considered beta quality and the storage format is not stabilized yet. It is not recommended to deploy it into a production environment. Please test it with your use cases and give us feedback.

Roadmap

  • Stabilize storage format
  • S3 gateway
  • Windows client
  • Encryption at rest
  • Other databases for metadata

Reporting Issues

We use GitHub Issues to track community-reported issues. You can also contact the community to get answers.

Contributing

Thank you for your contribution! Please refer to the CONTRIBUTING.md for more information.

Community

Welcome to join the Discussions and the Slack channel to connect with JuiceFS team members and other users.

Usage Tracking

JuiceFS collects anonymous usage data by default. It only collects core metrics (e.g. version number); no user data or other sensitive data is collected. You can review the related code here.

This data helps us understand how the community is using this project. You can disable reporting with the command line option --no-usage-report:

$ ./juicefs mount --no-usage-report

License

JuiceFS is open-sourced under GNU AGPL v3.0, see LICENSE.

Credits

The design of JuiceFS was inspired by Google File System, HDFS and MooseFS, thanks to their great work.

FAQ

Why doesn't JuiceFS support XXX object storage?

JuiceFS already supports many object storage services; please check the list first. If the object storage is S3-compatible, you can treat it as S3. Otherwise, try reporting an issue.

Can I use Redis cluster?

The simple answer is no. JuiceFS uses transactions to guarantee the atomicity of metadata operations, which are not well supported in cluster mode. Redis Sentinel or another HA solution for Redis is needed instead.

Issues
  • [MariaDB] Error 1366: Incorrect string value


    What happened:

    While rsyncing from local disk to a juicefs mount, it suddenly (after several hours) stopped with

    Failed to sync with 11 errors: last error was: open /mnt/juicefs/folder/file.xls: input/output error

    In the jfs log, I can find these:

    juicefs[187516] <ERROR>: error: Error 1366: Incorrect string value: '\xE9sa sa...' for column `jfsdata`.`jfs_edge`.`name` at row 1
    goroutine 43510381 [running]:
    runtime/debug.Stack()
            /usr/local/go/src/runtime/debug/stack.go:24 +0x65
    github.com/juicedata/juicefs/pkg/meta.errno({0x2df8860, 0xc0251d34a0})
            /go/src/github.com/juicedata/juicefs/pkg/meta/utils.go:76 +0xc5
    github.com/juicedata/juicefs/pkg/meta.(*dbMeta).doMknod(0xc0000e0c40, {0x7fca7e165300, 0xc00ed28040}, 0x399f6f, {0xc025268160, 0x1f}, 0x1, 0x1b4, 0x0, 0x0, ...)
            /go/src/github.com/juicedata/juicefs/pkg/meta/sql.go:1043 +0x29e
    github.com/juicedata/juicefs/pkg/meta.(*baseMeta).Mknod(0xc0000e0c40, {0x7fca7e165300, 0xc00ed28040}, 0x399f6f, {0xc025268160, 0x1f}, 0xc0, 0x7b66, 0x7fca, 0x0, ...)
            /go/src/github.com/juicedata/juicefs/pkg/meta/base.go:594 +0x275
    github.com/juicedata/juicefs/pkg/meta.(*baseMeta).Create(0xc0000e0c40, {0x7fca7e165300, 0xc00ed28040}, 0x26b5620, {0xc025268160, 0x2847500}, 0x8040, 0xed2, 0x8241, 0xc025267828, ...)
            /go/src/github.com/juicedata/juicefs/pkg/meta/base.go:601 +0x109
    github.com/juicedata/juicefs/pkg/vfs.(*VFS).Create(0xc000140640, {0x2e90348, 0xc00ed28040}, 0x399f6f, {0xc025268160, 0x1f}, 0x81b4, 0x22a4, 0xc0)
            /go/src/github.com/juicedata/juicefs/pkg/vfs/vfs.go:357 +0x256
    github.com/juicedata/juicefs/pkg/fuse.(*fileSystem).Create(0xc000153900, 0xc024980101, 0xc022a48a98, {0xc025268160, 0x1f}, 0xc022a48a08)
            /go/src/github.com/juicedata/juicefs/pkg/fuse/fuse.go:221 +0xcd
    github.com/hanwen/go-fuse/v2/fuse.doCreate(0xc022a48900, 0xc022a48900)
            /go/pkg/mod/github.com/juicedata/go-fuse/[email protected]/fuse/opcode.go:163 +0x68
    github.com/hanwen/go-fuse/v2/fuse.(*Server).handleRequest(0xc00179c000, 0xc022a48900)
            /go/pkg/mod/github.com/juicedata/go-fuse/[email protected]/fuse/server.go:483 +0x1f3
    github.com/hanwen/go-fuse/v2/fuse.(*Server).loop(0xc00179c000, 0x20)
            /go/pkg/mod/github.com/juicedata/go-fuse/[email protected]/fuse/server.go:456 +0x110
    created by github.com/hanwen/go-fuse/v2/fuse.(*Server).readRequest
            /go/pkg/mod/github.com/juicedata/go-fuse/[email protected]/fuse/server.go:323 +0x534
     [utils.go:76]
    

    Nothing was logged at MariaDB side.

    Environment:

    • juicefs version 1.0.0-beta2+2022-03-04T03:00:41Z.9e26080
    • Ubuntu 20.04
    • MariaDB 10.4
    kind/bug 
    opened by solracsf 21
  • failed to create fs on oos


    What happened: I can't create a file system on OOS.
    What you expected to happen: I can create a file system on OOS.
    How to reproduce it (as minimally and precisely as possible):

    juicefs format --storage oos --bucket https://cyn.oos-hz.ctyunapi.cn \
    --access-key xxxxxx \
    --secret-key xxxxxxx \
    redis://:[email protected]:6379/1 \
    myjfs

    Anything else we need to know?

    Environment:

    • JuiceFS version (use juicefs --version) or Hadoop Java SDK version: juicefs version 1.0.0-beta3+2022-05-05.0fb9155
    • Cloud provider or hardware configuration running JuiceFS:
    • OS (e.g cat /etc/os-release): Centos 7
    • Kernel (e.g. uname -a): Linux ecs-df87 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
    • Object storage (cloud provider and region, or self maintained): oos
    • Metadata engine info (version, cloud provider managed or self maintained): redis:6
    • Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage):
    • Others:
    opened by enlindashuaibi 17
  • Performance degenerates a lot when reading from multiple threads compared with a single thread (when running Clickhouse)


    What happened:

    Hi folks,

    We are trying to run ClickHouse benchmark on juicefs (with OSS as the underlying object storage), and under the settings that juicefs has already cached the whole file to the local disk we notice a huge performance gap (compared with running the benchmark on Local SSD) when executing ClickHouse with 4 threads, but such degeneration doesn't happen if we limit the ClickHouse thread to 1.

    More specifically, we are running the clickhouse benchmark with scale factor 1000 and running the 29th query (the involved table Referer is around 24Gi; the query is a full table scan), giving clickhouse 100Gi of local SSD as the cache directory.

    After several runs to make sure the involved files are fully cached locally by juicefs, we notice the following performance numbers:

    | threads | ssd runtime (seconds) | juicefs runtime (seconds) |
    |:-------:|:---------------------:|:-------------------------:|
    |    4    |          24           |             56            |
    |    1    |          88           |            100            |

    You can see that juicefs suffers much more performance degradation when the workload executes in a multi-threaded fashion. Is that behaviour expected for juicefs?

    Thanks!

    What you expected to happen:

    The performance gap shouldn't be so large for the 4-thread setting.

    How to reproduce it (as minimally and precisely as possible):

    Playing the clickhouse benchmark inside a juicefs mounted directory.

    Anything else we need to know?

    Environment:

    • JuiceFS version (use juicefs --version) or Hadoop Java SDK version: juicefs version 1.0.0-beta2+2022-03-04T03:00:41Z.9e26080
    • Cloud provider or hardware configuration running JuiceFS: aliyun ecs.i3g.2xlarge, (local ssd instance with 4 physical cores and 32Gi memory)
    • OS (e.g cat /etc/os-release): Ubuntu 20.04.3 LTS
    • Kernel (e.g. uname -a): Linux mk1 5.4.0-100-generic #113-Ubuntu SMP Thu Feb 3 18:43:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    • Object storage (cloud provider and region, or self maintained): OSS
    • Metadata engine info (version, cloud provider managed or self maintained): redis
    • Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage): localhost, redis and juicefs are deployed on the same instance
    • Others: clickhouse latest version
    opened by sighingnow 15
  • Out of Memory 0.13.1


    What happened: juicefs eats every last bit of memory + swap, then gets killed by the OOM killer.

    What you expected to happen: shouldn't it use less memory?

    How to reproduce it (as minimally and precisely as possible): store 8 files of ~104 GB each in juicefs, do random reads on them, and watch how memory consistently climbs until exhaustion.

    Anything else we need to know? [screenshot]

    Environment:

    • JuiceFS version (use ./juicefs --version) or Hadoop Java SDK version: juicefs version 0.13.1 (2021-05-27T08:14:30Z 1737d4e)
    • Cloud provider or hardware configuration running JuiceFS: 4 core, 16gb ram, backblaze b2
    • OS (e.g: cat /etc/os-release): ubuntu 20.04.2 LTS (Focal Fossa)
    • Kernel (e.g. uname -a): 5.4.0-73-generic #82-Ubuntu SMP Wed Apr 14 17:39:42 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    • Object storage (cloud provider and region): Backblaze B2 Europe
    • Redis info (version, cloud provider managed or self maintained): self-maintained latest, from Docker
    • Network connectivity (JuiceFS to Redis, JuiceFS to object storage): juicefs and redis are locally, 5gbit to backblaze b2
    • Others:
    kind/bug 
    opened by gallexme 15
  • Performance 3x ~ 8x slower than s5cmd (for large files)


    While comparing basic read/write operations, it appears that s5cmd is 3x ~ 8x faster than juicefs.

    What happened:

    #### WRITE IO ####

    $ time cp 1gb_file.txt /mnt/juicefs0/
    real	0m50.859s
    user	0m0.016s
    sys	0m1.365s
    
    $ time s5cmd cp 1gb_file.txt s3://bucket/path/
    real	0m20.614s
    user	0m9.411s
    sys	0m3.232s
    

    #### READ IO ####

    $ time cp /mnt/juicefs0/1gb_file.txt .
    real	0m45.539s
    user	0m0.014s
    sys	0m1.578s
    
    $ time s5cmd cp s3://bucket/path/1gb_file.txt .
    real	0m6.074s
    user	0m1.186s
    sys	0m2.504s
    

    Environment:

    • JuiceFS version or Hadoop Java SDK version: juicefs version 0.12.1 (2021-04-15T08:18:25Z 7b4df23)
    • Cloud provider or hardware configuration running JuiceFS: Linode 1 GB VM
    • OS: Fedora 33 (Server Edition)
    • Kernel: Linux 5.11.12-200.fc33.x86_64 #1 SMP Thu Apr 8 02:34:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    • Object storage: Linode
    • Redis info: Redis 6.2.1
    • Network connectivity (JuiceFS to Redis, JuiceFS to object storage): redis (local), S3 Object Storage (Linode)
    area/performance 
    opened by cayolblake 15
  • Errors on WebDAV operations, but files are created


    Juicefs (1.0.0-rc1) outputs a 401, but the bucket is created (folder 'jfs' is created at the root path) on the server, so credentials are OK. Happy to provide more debug info if needed (note I have no admin access on the WebDAV server).

    # juicefs format --storage webdav --bucket https://web.dav.server/ --access-key 307399 --secret-key tTxX12NPMyy --trash-days 0 redis://127.0.0.1:6379/1 jfs
    
    2022/06/17 11:45:19.373300 juicefs[799909] <INFO>: Meta address: redis://127.0.0.1:6379/1 [interface.go:397]
    2022/06/17 11:45:19.375116 juicefs[799909] <INFO>: Ping redis: 82.354µs [redis.go:2869]
    2022/06/17 11:45:19.375437 juicefs[799909] <INFO>: Data use webdav://web.dav.server/jfs/ [format.go:420]
    2022/06/17 11:45:19.529334 juicefs[799909] <WARNING>: List storage webdav://web.dav.server/jfs/ failed: 401 Unauthorized: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>401 Unauthorized</title>
    </head><body>
    <h1>Unauthorized</h1>
    <p>This server could not verify that you
    are authorized to access the document
    requested.  Either you supplied the wrong
    credentials (e.g., bad password), or your
    browser doesn't understand how to supply
    the credentials required.</p>
    </body></html> [format.go:438]
    2022/06/17 11:45:19.535569 juicefs[799909] <INFO>: Volume is formatted as {Name:jfs UUID:05363857-2fce-42dc-94e2-4d41c33172d0 Storage:webdav Bucket:https://web.dav.server/ AccessKey:307399 SecretKey:removed BlockSize:4096 Compression:none Shards:0 HashPrefix:false Capacity:0 Inodes:0 EncryptKey: KeyEncrypted:true TrashDays:0 MetaVersion:1 MinClientVersion: MaxClientVersion:} [format.go:458]
    

    or trying to destroy it:

    # juicefs destroy redis://127.0.0.1:6379/1 05363857-2fce-42dc-94e2-4d41c33172d0
    
    2022/06/17 11:50:24.072353 juicefs[800639] <INFO>: Meta address: redis://127.0.0.1:6379/1 [interface.go:397]
    2022/06/17 11:50:24.073182 juicefs[800639] <INFO>: Ping redis: 67.778µs [redis.go:2869]
     volume name: jfs
     volume UUID: 05363857-2fce-42dc-94e2-4d41c33172d0
    data storage: webdav://web.dav.server/jfs/
      used bytes: 0
     used inodes: 0
    WARNING: The target volume will be destoried permanently, including:
    WARNING: 1. ALL objects in the data storage: webdav://web.dav.server/jfs/
    WARNING: 2. ALL entries in the metadata engine: redis://127.0.0.1:6379/1
    Proceed anyway? [y/N]: y
    2022/06/17 11:50:25.693697 juicefs[800639] <FATAL>: list all objects: 401 Unauthorized: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>401 Unauthorized</title>
    </head><body>
    <h1>Unauthorized</h1>
    <p>This server could not verify that you
    are authorized to access the document
    requested.  Either you supplied the wrong
    credentials (e.g., bad password), or your
    browser doesn't understand how to supply
    the credentials required.</p>
    </body></html> [destroy.go:158]
    

    On file operations, files are created (the chunks folder is populated), but this is logged:

    2022/06/17 12:02:54.141831 juicefs[802424] <INFO>: Mounting volume jfs at /mnt/jfs ... [mount_unix.go:181]
    2022/06/17 12:04:31.053268 juicefs[802424] <WARNING>: Upload chunks/0/0/7_0_140: 403 Forbidden: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>403 Forbidden</title>
    </head><body>
    <h1>Forbidden</h1>
    <p>You don't have permission to access this resource.</p>
    </body></html> (try 1) [cached_store.go:462]
    

    Accessing WebDAV directly (outside juicefs), we can see correct file operations. [screenshot]

    kind/bug 
    opened by solracsf 13
  • freezing when tikv is down


    With 7 pd-server nodes, after killing 2 pd-server nodes, juicefs freezes.

    [2021/09/02 15:57:58.152 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.35:20160] [forwardedHost=] [error="context deadline exceeded"] [2021/09/02 15:57:59.021 +08:00] [ERROR] [client.go:599] ["[pd] getTS error"] [dc-location=global] [error="[PD:client:ErrClientGetTSO]rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster"] [stack="github.com/tikv/pd/client.(*client).handleDispatcher\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:599"] [2021/09/02 15:57:59.022 +08:00] [ERROR] [pd.go:234] ["updateTS error"] [txnScope=global] [error="rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster"] [errorVerbose="rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster\ngithub.com/tikv/pd/client.(*client).processTSORequests\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:717\ngithub.com/tikv/pd/client.(*client).handleDispatcher\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:587\nruntime.goexit\n\t/snap/go/7954/src/runtime/asm_amd64.s:1371\ngithub.com/tikv/pd/client.(*tsoRequest).Wait\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:913\ngithub.com/tikv/pd/client.(*client).GetTS\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:933\ngithub.com/tikv/client-go/v2/util.InterceptedPDClient.GetTS\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/util/pd_interceptor.go:79\ngithub.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).getTimestamp\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:141\ngithub.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).updateTS.func1\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email 
protected]/oracle/oracles/pd.go:232\nsync.(*Map).Range\n\t/snap/go/7954/src/sync/map.go:345\ngithub.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).updateTS\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:230\nruntime.goexit\n\t/snap/go/7954/src/runtime/asm_amd64.s:1371"] [stack="github.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).updateTS.func1\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:234\nsync.(*Map).Range\n\t/snap/go/7954/src/sync/map.go:345\ngithub.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).updateTS\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:230"] [2021/09/02 15:57:59.317 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"] [2021/09/02 15:58:00.608 +08:00] [WARN] [prewrite.go:198] ["slow prewrite request"] [startTS=427443780657872897] [region="{ region id: 4669, ver: 35, confVer: 1007 }"] [attempts=280] [2021/09/02 15:58:04.317 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"] [2021/09/02 15:58:09.318 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"] [2021/09/02 15:58:14.319 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"] [2021/09/02 15:58:15.831 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.35:20160] [forwardedHost=] [error="context deadline exceeded"] [2021/09/02 15:58:20.832 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.35:20160] [forwardedHost=] [error="context deadline exceeded"] [2021/09/02 15:58:25.834 +08:00] [WARN] 
[client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"] [2021/09/02 15:58:30.834 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"]

    What happened:

    What you expected to happen:

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?

    Environment:

    • JuiceFS version (use ./juicefs --version) or Hadoop Java SDK version:
    • Cloud provider or hardware configuration running JuiceFS:
    • OS (e.g: cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Object storage (cloud provider and region):
    • Redis info (version, cloud provider managed or self maintained):
    • Network connectivity (JuiceFS to Redis, JuiceFS to object storage):
    • Others:
    opened by whans 12
  • Deleted tag cause misbehaviour of `go get`


    What happened:

    go get: added github.com/juicedata/juicefs v0.14.0
    

    What you expected to happen:

    go get: added github.com/juicedata/juicefs v0.13.1
    

    How to reproduce it (as minimally and precisely as possible):

    go get github.com/juicedata/juicefs
    

    Anything else we need to know? it looks like you probably deleted that tag, which is not supported by proxy.golang.org (see the FAQ)

    also, the current latest tag, v0.14-dev, is not valid semver, so it is ignored and mangled by the toolchain (try v0.14.0-alpha):

    [[email protected] ~]$ go get github.com/juicedata/[email protected]
    go: downloading github.com/juicedata/juicefs v0.13.2-0.20210527090717-42ac85ce406c
    go get: downgraded github.com/juicedata/juicefs v0.14.0 => v0.13.2-0.20210527090717-42ac85ce406c
    
    opened by colin-sitehost 11
  • Metadata Dump doesn't complete on MySQL


    What happened:

    Dump is incomplete.

    What you expected to happen:

    Dump completes successfully.

    How to reproduce it (as minimally and precisely as possible):

    Observe the dumped entries.

    juicefs dump "mysql://mount:[email protected](10.1.0.9:3306)/mount" /tmp/juicefs.dump
    
    2022/05/26 15:13:07.548210 juicefs[1785019] <WARNING>: no found chunk target for inode 854691 indx 0 [sql.go:2747]
    2022/05/26 15:13:07.548245 juicefs[1785019] <WARNING>: no found chunk target for inode 854709 indx 0 [sql.go:2747]
    2022/05/26 15:13:07.548264 juicefs[1785019] <WARNING>: no found chunk target for inode 854726 indx 0 [sql.go:2747]
    2022/05/26 15:13:07.548288 juicefs[1785019] <WARNING>: no found chunk target for inode 4756339 indx 0 [sql.go:2747]
     Snapshot keys count: 4727518 / 4727518 [==============================================================]  done
    Dumped entries count: 1370 / 1370 [==============================================================]  done
    

    Anything else we need to know?

    Environment:

    # juicefs --version
    juicefs version 1.0.0-dev+2022-05-26.54ddc5c4
    
    {
      "Setting": {
        "Name": "mount",
        "UUID": "5cd22d3c-12d6-4be4-9a52-19e753c416e9",
        "Storage": "s3",
        "Bucket": "https://data-jfs%d.s3.de",
        "AccessKey": "d61bc82480ab49fb8d",
        "SecretKey": "removed",
        "BlockSize": 4096,
        "Compression": "zstd",
        "Shards": 512,
        "HashPrefix": false,
        "Capacity": 0,
        "Inodes": 0,
        "EncryptKey": "/YShECK6Tirb0uHljlK8PIJ12C4Fj2idW5hbzARwYaGDGoSU>
        "KeyEncrypted": true,
        "TrashDays": 90,
        "MetaVersion": 1,
        "MinClientVersion": "",
        "MaxClientVersion": ""
      },
      "Counters": {
        "usedSpace": 4568750424064,
        "usedInodes": 4723758,
        "nextInodes": 4923302,
        "nextChunk": 4062001,
        "nextSession": 137,
        "nextTrash": 667
      },
    }
    
    opened by solracsf 10
  • I | invalid character 'S' after object key on `juicefs load`


    I've tried to dump from MySQL and load to Redis, but I got this output on load:

    $ juicefs dump "mysql://jfs:[email protected](127.0.0.1:3306)/jfs_data" dump.json
    
    <INFO>: Meta address: mysql://jfs:****@(127.0.0.1:3306)/jfs_data [interface.go:383]
     Snapshot keys count: 10101347 / 10101347 [==============================================================]  done
     Dumped entries count: 3580869 / 3580869 [==============================================================]  done
    <INFO>: Dump metadata into dump.json succeed [dump.go:76]
    
    $ juicefs load redis://127.0.0.1:6379 dump.json
    
    <INFO>: Meta address: redis://127.0.0.1:6379 [interface.go:383]
    <INFO>: Ping redis: 194.003µs [redis.go:2497]
    I | invalid character 'S' after object key
    
    kind/bug 
    opened by solracsf 9
  • Support RocksDB as Metadata Engine


    This could be supported, at least, for standalone deployments. There are some Go wrappers out there too.

    http://rocksdb.org/ and http://myrocks.io/ and https://mariadb.com/kb/en/myrocks/

    High Performance

    RocksDB uses a log structured database engine, written entirely in C++, for maximum performance. Keys and values are just arbitrarily-sized byte streams.

    Optimized for Fast Storage

    RocksDB is optimized for fast, low latency storage such as flash drives and high-speed disk drives. RocksDB exploits the full potential of high read/write rates offered by flash or RAM.

    Adaptable

    RocksDB is adaptable to different workloads. From database storage engines such as MyRocks to application data caching to embedded workloads, RocksDB can be used for a variety of data needs.

    Basic and Advanced Database Operations

    RocksDB provides basic operations such as opening and closing a database, reading and writing to more advanced operations such as merging and compaction filters.

    kind/feature priority/low 
    opened by solracsf 9
  • ltp syscall middle test failed


    What happened: https://github.com/juicedata/juicefs/runs/7347698805?check_suite_focus=true

    ```
    <<<test_start>>>
    tag=msync04 stime=1657831727 cmdline="msync04" contacts="" analysis=exit
    <<<test_output>>>
    tst_device.c:88: TINFO: Found free device 3 '/dev/loop3'
    tst_supported_fs_types.c:89: TINFO: Kernel supports ext2
    tst_supported_fs_types.c:51: TINFO: mkfs.ext2 does exist
    tst_supported_fs_types.c:89: TINFO: Kernel supports ext3
    tst_supported_fs_types.c:51: TINFO: mkfs.ext3 does exist
    tst_supported_fs_types.c:89: TINFO: Kernel supports ext4
    tst_supported_fs_types.c:51: TINFO: mkfs.ext4 does exist
    tst_supported_fs_types.c:89: TINFO: Kernel supports xfs
    tst_supported_fs_types.c:51: TINFO: mkfs.xfs does exist
    tst_supported_fs_types.c:89: TINFO: Kernel supports btrfs
    tst_supported_fs_types.c:51: TINFO: mkfs.btrfs does exist
    tst_supported_fs_types.c:89: TINFO: Kernel supports vfat
    tst_supported_fs_types.c:51: TINFO: mkfs.vfat does exist
    tst_supported_fs_types.c:115: TINFO: Filesystem exfat is not supported
    tst_supported_fs_types.c:119: TINFO: FUSE does support ntfs
    tst_supported_fs_types.c:51: TINFO: mkfs.ntfs does exist
    tst_supported_fs_types.c:147: TINFO: Skipping tmpfs as requested by the test
    tst_test.c:1431: TINFO: Testing on ext2
    tst_test.c:932: TINFO: Formatting /dev/loop3 with ext2 opts='' extra opts=''
    mke2fs 1.45.5 (07-Jan-2020)
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:72: TPASS: msync() working correctly
    tst_test.c:1431: TINFO: Testing on ext3
    tst_test.c:932: TINFO: Formatting /dev/loop3 with ext3 opts='' extra opts=''
    mke2fs 1.45.5 (07-Jan-2020)
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:72: TPASS: msync() working correctly
    tst_test.c:1431: TINFO: Testing on ext4
    tst_test.c:932: TINFO: Formatting /dev/loop3 with ext4 opts='' extra opts=''
    mke2fs 1.45.5 (07-Jan-2020)
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:72: TPASS: msync() working correctly
    tst_test.c:1431: TINFO: Testing on xfs
    tst_test.c:932: TINFO: Formatting /dev/loop3 with xfs opts='' extra opts=''
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:59: TFAIL: Expected dirty bit to be set after writing to mmap()-ed area
    tst_test.c:1431: TINFO: Testing on btrfs
    tst_test.c:932: TINFO: Formatting /dev/loop3 with btrfs opts='' extra opts=''
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:72: TPASS: msync() working correctly
    tst_test.c:1431: TINFO: Testing on vfat
    tst_test.c:932: TINFO: Formatting /dev/loop3 with vfat opts='' extra opts=''
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:72: TPASS: msync() working correctly
    tst_test.c:1431: TINFO: Testing on ntfs
    tst_test.c:932: TINFO: Formatting /dev/loop3 with ntfs opts='' extra opts=''
    The partition start sector was not specified for /dev/loop3 and it could not be obtained automatically. It has been set to 0.
    The number of sectors per track was not specified for /dev/loop3 and it could not be obtained automatically. It has been set to 0.
    The number of heads was not specified for /dev/loop3 and it could not be obtained automatically. It has been set to 0.
    To boot from a device, Windows needs the 'partition start sector', the 'sectors per track' and the 'number of heads' to be set.
    Windows will not be able to boot from this device.
    tst_test.c:946: TINFO: Trying FUSE...
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:72: TPASS: msync() working correctly
    ```

    Summary: passed 6, failed 1, broken 0, skipped 0, warnings 0

    What you expected to happen: the case passes.

    How to reproduce it (as minimally and precisely as possible): follow the action.

    Anything else we need to know? No.

    opened by sanwan 0
  • add / update fstab entry with --update-fstab

    • [x] abs path check for sqlite and similars
    • [x] correctly handle slice arguments, even though they don't exist in juicefs mount yet
    • [x] do not run in containers
    • [x] unit tests

    PR code produces the result shown in the attached screenshot.

    closes https://github.com/juicedata/juicefs/issues/2432

    opened by timfeirg 1
  • pkg/gateway: Support specifying umask for directories

    Currently, the gateway does not respect the umask flag specified in the start command; instead, it hard-codes 0755 for all the directories it creates. I have added an option, umask-dir, which dictates the umask for all mkdir calls. I keep the two as separate flags in case some users want different umasks for files and directories.

    The following screenshots show the different behaviors when the gateway is started with umask-dir 022 vs 000.
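    The effect of a directory umask can be illustrated with a short sketch (plain Python; the function name is hypothetical, not part of JuiceFS):

    ```python
    def dir_mode(umask: int, base: int = 0o777) -> int:
        # POSIX semantics: the bits set in the umask are cleared from the
        # requested mode when the directory is created.
        return base & ~umask

    print(oct(dir_mode(0o022)))  # 0o755 -> drwxr-xr-x
    print(oct(dir_mode(0o000)))  # 0o777 -> drwxrwxrwx
    ```

    So umask-dir 022 yields 0755 directories, while umask-dir 000 yields 0777.
    
    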

    opened by dugusword 3
Releases (v1.0.0)
  • v1.0.0(Aug 9, 2022)

    This is the first stable release of JuiceFS. It has 73 commits from 13 contributors, thanks to @SandyXSD @zhijian-pro @xiaogaozi @zhoucheng361 @rayw000 @tangyoupeng @AIXjing @sanwan @davies @yuhr123 @timfeirg @201341 @solracsf !

    New

    • object/redis: support redis sentinel & cluster mode (#2368, #2383)

    Changed

    • cmd/gc: no warning log if no --delete specified (#2476)
    • cmd/load: reset root inode to 1 if loading from a subdir (#2389)
    • meta: check UUID and metadata version after setting reloaded (#2416, #2420)
    • meta: reset variables at the beginning of transactions (#2402, #2409)
    • meta refactor: distinguish Slice from Chunk (#2397)
    • fuse: support CAP_EXPORT_SUPPORT flag (#2382)
    • util: progressbar total equals current max (#2377)

    Bugfix

    • cmd/profile: ignore error of Scanln (#2400)
    • meta: fix potential overflowed size in Fallocate (#2403)
    • chunk: cleanup cache if it's added after removed (#2427)
    • hadoop: do not use local uid/gid if global user/group is specified (#2433)
    • hadoop: update guid and clean old ones (#2407)
    • hadoop: fix make in mac m1 (#2408)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(499 bytes)
    juicefs-1.0.0-darwin-amd64.tar.gz(24.22 MB)
    juicefs-1.0.0-darwin-arm64.tar.gz(22.70 MB)
    juicefs-1.0.0-linux-amd64.tar.gz(22.76 MB)
    juicefs-1.0.0-linux-arm64.tar.gz(20.97 MB)
    juicefs-1.0.0-windows-amd64.tar.gz(22.80 MB)
    juicefs-hadoop-1.0.0.jar(118.21 MB)
  • v1.0.0-rc3(Jul 14, 2022)

    JuiceFS v1.0.0-rc3 is the third release candidate for v1.0. It has 35 commits from 10 contributors, thanks to @zhijian-pro @SandyXSD @davies @tangyoupeng @sanwan @xiaogaozi @chenhaifengkeda @zhoucheng361 @201341 @Suave !

    New

    • Supports using unix socket for Redis (#2319)

    Changed

    • cmd/info: print objects for files, and add --raw option for slices (#2316)
    • fuse/context: ignore interrupt within one second (#2324)

    Bugfix

    • cmd/info: support get trash info (#2363)
    • cmd/bench: fix bench display (#2322)
    • cmd/objbench: fix skip functional test (#2341)
    • meta/redis: fix unlock on not-existed lock (#2325)
    • object: fix skip tls verify (#2317)
    • object/ks3: use virtual hosted-style for ks3 (#2349)
    • object/eos: using the Path style addressing in eos (#2344)
    • object/ceph: fix the arguments of register function (#2306)
    • vfs&fuse: Fix stale attribute cache in kernel (#2336)
    • hadoop: checksum fix for files with size close to blockSize (#2333)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(519 bytes)
    juicefs-1.0.0-rc3-darwin-amd64.tar.gz(24.06 MB)
    juicefs-1.0.0-rc3-darwin-arm64.tar.gz(22.53 MB)
    juicefs-1.0.0-rc3-linux-amd64.tar.gz(22.62 MB)
    juicefs-1.0.0-rc3-linux-arm64.tar.gz(20.83 MB)
    juicefs-1.0.0-rc3-windows-amd64.tar.gz(22.65 MB)
    juicefs-hadoop-1.0.0-rc3.jar(117.38 MB)
  • v1.0.0-rc2(Jun 24, 2022)

    JuiceFS v1.0.0-rc2 is the second release candidate for v1.0. It has 40 commits from 10 contributors, thanks to @davies @zhijian-pro @sanwan @zhoucheng361 @tangyoupeng @SandyXSD @201341 @chnliyong @solracsf @xiaogaozi !

    New

    • Supports using session token for object storage (#2261)

    Changed

    • cmd/info: make output more human friendly (#2303)
    • meta: optimize output of format structure (#2250)
    • meta/sql: support database without read-only transaction (#2259)
    • object/webdav: replace webdav client (#2288)
    • SDK: only release sdk jar (#2289)

    Bugfix

    • cmd: --backup-meta could be 0 to disable backup (#2275)
    • cmd: fix concurrent rmr/warmup in mac (#2265)
    • cmd/fsck: fix object name for Head method (#2281)
    • cmd/info: Fix repeat path info when inode is 1 (#2294)
    • meta: fix GetPaths for mount points with subdir specified (#2298)
    • meta: don't treat trash as dangling inode (#2296)
    • meta: fix index underflow in 32bit arch (#2239)
    • meta/tkv: use absolute path for badgerDB as metadata engine (#2256)
    • object/azure: fix azure list api and incorrect bucket URL (#2263)
    • metrics: use separate metrics per volume (#2253)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(519 bytes)
    juicefs-1.0.0-rc2-darwin-amd64.tar.gz(24.06 MB)
    juicefs-1.0.0-rc2-darwin-arm64.tar.gz(22.53 MB)
    juicefs-1.0.0-rc2-linux-amd64.tar.gz(22.61 MB)
    juicefs-1.0.0-rc2-linux-arm64.tar.gz(20.82 MB)
    juicefs-1.0.0-rc2-windows-amd64.tar.gz(22.65 MB)
    juicefs-hadoop-1.0.0-rc2.jar(117.39 MB)
  • v1.0.0-rc1(Jun 15, 2022)

    JuiceFS v1.0.0-rc1 is the first release candidate for v1.0. It has 184 commits from 17 contributors, thanks to @davies @zhoucheng361 @SandyXSD @zhijian-pro @sanwan @xiaogaozi @tangyoupeng @solracsf @showjason @rayw000 @AIXjing @helix-loop @Suave @zhouaoe @chnliyong @yuhr123 @liufuyang !

    Highlights

    • Dumping metadata from Redis has been improved, reducing the memory required to less than 1/20 of what it was before. This also relieves the client's memory spike when doing metadata backups. Metadata from SQL and TiKV is now dumped within a single transaction to ensure consistency.
    • Loading metadata to all engines has been improved as well. Instead of loading the whole dumped file in one step, JuiceFS will now read it in a stream, and simultaneously import metadata to the engine. This saves a lot of memory when the dumped file is huge.
    • Improved stability for SQL engine under heavy workload.
    • Added a new command juicefs objbench that can be used to run basic function tests and benchmarks on object storage, making sure it works as expected.
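    The streaming-load idea in the second highlight can be sketched as follows (illustrative Python only; JuiceFS's actual dump format and loader are different): instead of reading the whole dump into memory, items of a top-level JSON array are decoded one at a time from buffered reads.

    ```python
    import json

    def iter_json_array(fp, chunk_size=4096):
        """Yield the items of a top-level JSON array without loading it whole.

        Illustrative sketch only: JuiceFS's real dump format and loader differ,
        and bare numbers split across chunk boundaries are not handled.
        """
        dec = json.JSONDecoder()
        buf = ""
        started = False
        while True:
            chunk = fp.read(chunk_size)
            buf = (buf + chunk).lstrip()
            if not started:
                if buf.startswith("["):
                    buf = buf[1:].lstrip()
                    started = True
                elif not chunk:
                    return  # empty or malformed input
                else:
                    continue
            while buf:
                if buf.startswith("]"):
                    return  # end of the array
                if buf.startswith(","):
                    buf = buf[1:].lstrip()
                    continue
                try:
                    item, end = dec.raw_decode(buf)
                except ValueError:
                    break  # item incomplete, read more data
                yield item
                buf = buf[end:].lstrip()
            if not chunk:
                return  # EOF
    ```

    Because each item is imported as soon as it is decoded, peak memory is bounded by the largest single item rather than the whole dump.
    
    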

    New

    • Supports using SQL databases, etcd as data storage (#2003, #2009)
    • Supports finding all paths of an inode with juicefs info command (#2058, #2161, #2193)
    • Supports using Pyroscope to record JuiceFS profiling (#1952)
    • Added progress bar for juicefs rmr and juicefs warmup commands (#2197)
    • Added a new command juicefs objbench to run basic benchmarks on object storage (#2055, #2091)
    • Added a new command juicefs version to print version, as an alternative to --version (#2229)

    Changed

    • cmd: check the range of parameters (#2195)
    • cmd: eliminate panic which is triggered by missing argument (#2183)
    • cmd/mount: warn about the behavior of mounting the same directory multiple times (#2141)
    • cmd/warmup: support warmup from inside of a container (#2056)
    • meta: add delayed slice only when chunkid > 0 (#2231)
    • meta: speed up and reduce memory for loading metadata (#2142, #2148)
    • meta: add a pessimistic lock to reduce conflicted transactions in the database (#2111)
    • meta: limit the number of scanned files in cleanup (#2157)
    • meta: limit number of files when cleanup trash (#2061)
    • meta: limit the number of coroutines to delete file data (#2042)
    • meta: log last error if a transaction has been ever restarted (#2172)
    • meta: log session info when cleaning up a stale one (#2045)
    • meta: skip updating mtime/ctime of the parent if it's updated recently (#1960)
    • meta/redis: check config 'maxmemory-policy' (#2059)
    • meta/redis: Speedup dump for Redis and reduce memory usage (#2156)
    • meta/tkv: Speedup dump for kv storage (#2140)
    • meta/tkv: dump metadata using snapshot (#1961)
    • meta/tkv: use scanRange to get delayed slices (#2057)
    • meta/sql: dump metadata in a single transaction (#2131)
    • chunk/store: keep cache after uploading staging blocks (#2168)
    • object: reload the configuration for data storage (#1995)
    • object/sftp: load default private keys for sftp (#2014)
    • object/oss: add user agent for oss #1992 (#1993)
    • vfs: hide .control from readdir (#1998)
    • gateway: clean up expired temporary files (#2082)
    • SDK: package amd64 and arm64 libjfs (#2198)
    • SDK: don't reuse fd in Java SDK (#2122)
    • Dependency: upgrade coredns for CVE-2019-19794 (#2190)
    • Dependency: upgrade azblob sdk (#1962)
    • meta: keep valid utf8 in dumped JSON (#1973)
    • SDK: mvn shade some dependency to avoid class conflict (#2035)
    • meta: truncate trash entry name if it's too long (#2049)
    • meta/sql: use repeatable-read for transaction (#2128)

    Bugfix

    • cmd: fix not showing arguments for commands without META-URL (#2158)
    • cmd/sync: fix sync lost file (#2106)
    • cmd/warmup: fix warmup on read-only mount point (#2108)
    • meta: skip updating sliceRef if id is 0 (#2096)
    • meta: fix update xattr with the same value (#2078)
    • meta/redis: handle lua result from Redis v7.0+ (#2221)
    • meta/sql: fix dump with more than 10000 files (#2134)
    • meta/sql: one transaction in SQLite at a time (#2126)
    • meta/sql: fix rename with Postgres with repeatable read (#2109)
    • meta/sql: fix deadlock in PG (#2104)
    • meta/sql: ignore error about duplicated index (#2087)
    • meta/sql: read database inside transaction (#2073, #2086)
    • meta/sql: retry transaction on duplicated entry and concurrent update (#2077)
    • meta/sql: fix the deadlock in rename (#2067)
    • meta/sql: retry transaction for duplicate key in PG (#2071)
    • meta/sql: fix update query in SQL transaction (#2024)
    • meta/tkv: fix value of delSliceKey (#2054)
    • meta/tkv: upgrade TiKV client to 2.0.1 to fix nil connection (#2050)
    • chunk/store: fix stats of cached space in writeback mode (#2227)
    • object: delete should be idempotent (#2034)
    • object/file: Head of file should return File (#2133)
    • object/s3: check prefix and marker with returned keys from S3 (#2040)
    • object/prefix: fix with prefix returning nil error for unsupported ops (#2021)
    • object/sftp: fix auth of sftp with multiple keys (#2186)
    • object/sftp: fix prefix of sftp, support ssh-agent (#1954)
    • vfs/backup: skip cleanup if list failed (#2044)
    • SDK: handle atomic rename exception (#2192)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(519 bytes)
    juicefs-1.0.0-rc1-darwin-amd64.tar.gz(24.04 MB)
    juicefs-1.0.0-rc1-darwin-arm64.tar.gz(22.51 MB)
    juicefs-1.0.0-rc1-linux-amd64.tar.gz(22.60 MB)
    juicefs-1.0.0-rc1-linux-arm64.tar.gz(20.82 MB)
    juicefs-1.0.0-rc1-windows-amd64.tar.gz(22.63 MB)
    juicefs-hadoop-1.0.0-rc1-javadoc.jar(193.34 KB)
    juicefs-hadoop-1.0.0-rc1-sources.jar(114.05 MB)
    juicefs-hadoop-1.0.0-rc1.jar(117.34 MB)
  • v1.0.0-beta3(May 5, 2022)

    JuiceFS v1.0.0-beta3 is the third beta release for v1.0. It has 247 commits from 22 contributors, thanks to @SandyXSD @zhoucheng361 @davies @zhijian-pro @yuhr123 @sanwan @AIXjing @rayw000 @xiaogaozi @Suave @showjason @tangyoupeng @201341 @solracsf @guo-sj @chnliyong @DeanThompson @zwwhdls @wph95 @lidaohang @sjp00556 @DEvmIb !

    Highlights

    • Supports etcd as a new metadata engine. It can be a handy choice when you only need a small volume but care more about data availability and persistence.
    • Supports Redis Cluster and other compatible services (Amazon MemoryDB for Redis) as metadata engines.
    • When using SQL metadata engines, file names not encoded by UTF-8 can now be properly handled after manual modification to the table schema, see details.
    • A new session management format is introduced. Old clients are unable to detect sessions with version 1.0.0-beta3 or higher via juicefs status or juicefs destroy command, see details.
    • If trash is enabled, compacted slices are kept as well in case they are needed to recover file data. These slices will be cleaned up automatically after trash-days, and can be deleted manually via juicefs gc command.
    • A lot of improvements have been made to juicefs sync command.
    • A lot of protection checks against unintentional misuse have been added.

    New

    • Supports etcd as metadata engine (#1638)
    • Supports Redis in cluster mode while using one slot for each file system (#1696)
    • Supports handling file names not encoded by UTF-8 for SQL metadata engines (#1762)
    • Supports TLS when using TiKV as metadata engine or object storage (#1653, #1778)
    • Supports Oracle Object Storage as data storage (#1516)
    • Supports setting umask for S3 Gateway (#1537)
    • Java SDK now supports pushing metrics to Graphite (#1586)
    • Added a new option --heartbeat for the mount command to adjust heartbeat interval (#1591, #1865)
    • Added many improvements for sync command to make it more handy (#1554, #1619, #1651, #1836, #1897, #1901)
    • Added a new option --hash-prefix for the format command to add a hashed prefix for objects (#1657)
    • Added a new client option --storage to allow customized storage type (#1912)
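    The idea behind --hash-prefix can be illustrated with a sketch (hypothetical scheme; the prefix JuiceFS actually computes may differ): a stable hash of the object name becomes a short prefix, spreading keys evenly across the object store's keyspace, which helps stores that partition by key prefix.

    ```python
    import hashlib

    def prefixed_key(name: str) -> str:
        # Hypothetical illustration of a hashed prefix: the first two hex
        # digits of an MD5 digest give 256 evenly distributed "buckets".
        prefix = hashlib.md5(name.encode("utf-8")).hexdigest()[:2]
        return f"{prefix}/{name}"

    print(prefixed_key("chunks/0/0/1_0_4194304"))
    ```

    The prefix is derived deterministically from the name, so reads can recompute it without any extra lookup.
    
    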

    Changed

    • compacted slices will be kept for trash-days if trash is enabled (#1790)
    • cmd: support using integer for duration flags (#1796)
    • cmd: use homedir as default working directory for non-root users (#1869)
    • cmd/format: create a uuid object in the target bucket (#1548)
    • cmd/dump&load: dump/load now supports non-ASCII characters (#1691)
    • cmd/dump: omit empty value in dumped JSON (#1676)
    • cmd/dump: remove secret key (#1569)
    • meta: encrypt the secret-key and encrypt-key in setting (#1562)
    • meta: create subdir automatically (#1712)
    • meta: specify the format field preventing update (#1776)
    • meta: escape meta password from env (#1879)
    • meta/redis: check redis version (#1584)
    • meta/redis: use smaller retry backoff in sentinel mode (#1620)
    • meta/redis: retry transaction for connection error or EXECABORT (#1637)
    • meta/sql: retry transaction after too many connections (#1876)
    • meta/sql: add primary key for all tables (#1913, #1919)
    • meta&chunk: Set max retries of meta & chunk according to the config io-retries (#1713, #1800)
    • chunk: limit number of upload goroutines (#1625)
    • chunk/store: limit max retry for async upload as well (#1673)
    • object/obs: Verify Etag from OBS (#1715)
    • object/redis: implement listAll api for redis (#1777)
    • fuse: automatically add ro option if mount with --read-only (#1661)
    • vfs/backup: reduce the limit for skipping backup (#1659)
    • sync: reduce memory allocation when write into files (#1644)
    • SDK: use uint32 for uid,gid (#1648)
    • SDK: handle removeXAttr return code (#1775)
    • Dependency: switch to Go 1.17 (#1594)
    • Dependency: fix jwt replace (#1534)
    • Dependency: upgrade golang-cross version to v1.17.8 (#1539)
    • Dependency: upgrade tikv to v2.0.0 (#1643)
    • Dependency: reduce dep from minio (#1645)
    • Dependency: upgrade gjson to 1.9.3 (#1647)
    • Dependency: upgrade sdk for object storage (#1665)
    • Dependency: upgrade qiniu sdk (#1697)

    Bugfix

    • cmd/format: fix setting quota (#1658)
    • cmd/mount: fix parsing of cache dir (#1758)
    • cmd/warmup: fix handling of relative paths (#1735)
    • cmd/sync: fix sync command not working when destination is webdav (#1636)
    • cmd/gateway: fix s3 gateway DeleteObjects panic (#1527)
    • meta: forbid empty name for dentry (#1687)
    • meta: lock counters when loading entries (#1703)
    • meta: fix snap not released if error occurs when dumping meta (#1669)
    • meta: don't update parent attribute if it's a trash directory (#1580)
    • meta/redis: fix loading large directory into Redis (#1858)
    • meta/redis: update used space/inodes in memory whenever needed (#1573)
    • meta/sql: use upsert to update xattr for PG (#1825)
    • meta/sql: split insert batch (#1831)
    • meta/sql: fix wrong result from scanning SQL database (#1854)
    • chunk/cache: remove the cache key if reading finds the disk cache file no longer exists (#1677)
    • object: fallback to List only if ListAll is not supported (#1623)
    • object/b2: check returned bucket from B2 (#1745)
    • object/encrypt: fix parse rsa from pem (#1724)
    • object/encrypt: Add JFS_RSA_PASSPHRASE environment variable prompt information (#1706)
    • object/sharding: fix ListAll returning invalid objects (#1616)
    • object/ceph: fix listAll hangs if there are many objects (#1891)
    • vfs: write control file asynchronously (#1747)
    • vfs: fix getlk in access log (#1788)
    • sync: Fix copied and copiedBytes (#1801)
    • utils: fix the problem that the progress bar loses the log (#1756)
    • SDK: rename libjfs atomic (#1939)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(529 bytes)
    juicefs-1.0.0-beta3-darwin-amd64.tar.gz(23.75 MB)
    juicefs-1.0.0-beta3-darwin-arm64.tar.gz(22.27 MB)
    juicefs-1.0.0-beta3-linux-amd64.tar.gz(22.31 MB)
    juicefs-1.0.0-beta3-linux-arm64.tar.gz(20.58 MB)
    juicefs-1.0.0-beta3-windows-amd64.tar.gz(22.35 MB)
    juicefs-hadoop-1.0.0-beta3-amd64.jar(51.18 MB)
  • v1.0.0-beta2(Mar 4, 2022)

    JuiceFS v1.0.0-beta2 is the second beta release for v1.0. It has 150+ commits from 16 contributors, thanks to @SandyXSD @zhijian-pro @yuhr123 @xiaogaozi @davies @sanwan @AIXjing @Suave @tangyoupeng @zwwhdls @201341 @zhexuany @chnliyong @liufuyang @rayw000 @fredchen2022 !

    New

    • Supports BadgerDB (an embeddable key-value database) as metadata engine (#1340)
    • Supports WebDAV protocol to access JuiceFS files (#1444)
    • Supports read-only clients connecting to a Redis Sentinel-controlled replica (#1433)
    • Added version control of metadata and clients (#1469)
    • Added categories and long descriptions for all commands (#1488, #1493)
    • Added a new option no-bgjob for service commands to disable background jobs like clean-up, backup, etc. (#1472)
    • Added metrics for number of rawstaging blocks (#1341)
    • Added cross-platform compile script (#1374)

    Changed

    • cmd: help command is removed; use --help/-h flag to get help information (#1488)
    • cmd: print usage if not enough args (#1491)
    • cmd/format: only try to create bucket when it really doesn't exist (#1289)
    • cmd/format: prevent reusing object storage when formatting a volume (#1420, #1449)
    • cmd/destroy: show information of active sessions (#1377)
    • cmd/config: add sanity check of new values (#1479)
    • cmd/umount: ignore error from umount (#1425)
    • meta: add progress bar for CompactAll (#1317)
    • meta: hide password in meta-url (#1333, #1361)
    • meta: use META_PASSWORD env and omit unnecessary characters (#1388)
    • meta: limit the number when scanning sessions/files (#1397)
    • meta: limit the number of clients running background cleanup jobs (#1393)
    • meta: continue dump when encountering non-fatal errors (#1462)
    • meta/sql: increase max idle connections to CPU*2 for SQL engine (#1443)
    • object: set ContentType when putting object (#1468)
    • object: support skipping HTTPS certificate verification (#1453)
    • object/s3: format support pvc link (#1382)
    • object/qingstor: support private cloud and replace the sdk repository (#1303)
    • vfs: don't do backup for read-only clients (#1435)
    • vfs: add BackupMeta in config (#1460)
    • utils: log now contains the exact line of caller, and is colorized in terminal (#1404, #1318, #1312)
    • utils: simplify the usage of progress bar (#1325)
    • utils: add SleepWithJitter to reduce collision of background jobs (#1412)
    • SDK: upgrade Hadoop common to 3.1.4 (#1411)
    • SDK: java sdk push gateway support multiple volumes (#1492)
    • Other: update chart and move it to a standalone repository (#1281, #1336, #1348)
    • Improves documentation and coverage of tests

    Bugfix

    • cmd: fix buffer-size in gc and fsck (#1316)
    • cmd/bench: convert PATH to absolute path (#1305)
    • meta: return EROFS as soon as possible (#1477)
    • meta/redis: fix leaked inodes in Redis (#1353)
    • meta/tkv: fix divide by zero error when dumping meta (#1369)
    • meta/tikv: fix scan of tikv, limiting the upperbound (#1455)
    • meta/memkv: fix scanKeys, returning a sorted list (#1381)
    • meta/sql: delete warning message for empty directory (#1442)
    • meta/sql: fix return value of mustInsert (#1429)
    • vfs: fixed deadlock when truncating a released file handle (#1383)
    • vfs/trash: fix access to trash dir (#1356)
    • vfs/backup: skip dir objects when scanning meta backups (#1370)
    • vfs/backup: fix incorrect inode number when using subdir (#1385)
    • utils: fix the contention between progress bar and logger (#1436)
    • Windows: fix rename failing because the chunk file is still open (#1315)
    • Windows: fix mkdir on windows platform (#1327)
    • SDK: hadoop: fix umask apply (#1338, #1394)
    • SDK: hadoop: fix libjfs.so load bug (#1458)
    • other: fix legend of "Operations" panel in Grafana template (#1321)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(423 bytes)
    juicefs-1.0.0-beta2-darwin-amd64.tar.gz(23.31 MB)
    juicefs-1.0.0-beta2-linux-amd64.tar.gz(22.04 MB)
    juicefs-1.0.0-beta2-linux-arm64.tar.gz(20.26 MB)
    juicefs-1.0.0-beta2-windows-amd64.tar.gz(22.07 MB)
    juicefs-hadoop-1.0.0-beta2.jar(47.73 MB)
  • v1.0.0-beta1(Jan 13, 2022)

    JuiceFS v1.0.0-beta1 is the first beta release for v1.0, arrived three months after v0.17. It has 300+ commits from 22 contributors, thanks to @SandyXSD @davies @xiaogaozi @yuhr123 @zhijian-pro @sanwan @zwwhdls @tangyoupeng @Suave @chiyutianyi @201341 @suzaku @reusee @tisonkun @chenjie4255 @dragonly @nature1995 @fredchen2022 @Shoothzj @nsampre @supermario1990 @sjp00556 !

    JuiceFS v1.0.0-beta1 is released under the Apache License 2.0.

    New

    • Backs up the whole metadata as a compressed JSON file into the object storage every hour by default, so most of the data can be recovered even if the entire meta database is lost. (#975)
    • Added trash bin: removed or overwritten files are moved to the trash bin and deleted after the configured number of days. (#1066)
    • Added config command to update configuration of an existing volume (#1137)
    • Added destroy command to clean up all data & metadata of a volume (#1164)
    • Added an option to limit the concurrent deletes (#917)
    • Added an option to register the prometheus metrics API to consul (#910)
    • Added an option to the S3 gateway to keep the etag information of uploaded objects (#1154)
    • Added --check-all and --check-new to verify data integrity (#1208)
    • sync command supports anonymous access to S3 (#1228)
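    The hourly metadata backup in the first item boils down to "serialize, compress, upload"; a minimal sketch of the first two steps (illustrative Python, not the actual JuiceFS code or dump schema):

    ```python
    import gzip
    import json

    def backup_meta(meta: dict, path: str) -> None:
        # Serialize metadata to JSON and gzip-compress it; JuiceFS then
        # uploads the result to the object storage under a backup prefix.
        with gzip.open(path, "wt", encoding="utf-8") as f:
            json.dump(meta, f)

    def restore_meta(path: str) -> dict:
        with gzip.open(path, "rt", encoding="utf-8") as f:
            return json.load(f)
    ```

    A restore from such a backup recovers everything up to the last backup point, which is why it protects against total loss of the meta database.
    
    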

    Changed

    • meta: don't return EINVAL when encountering unknown flags (#862)
    • cmd/bench: remove default benchmark PATH (#864)
    • hadoop: sort list results to be compatible with HDFS (#889)
    • expose metrics in S3 gateway (#897)
    • cleanup broken mountpoint (#912)
    • cmd/info: add '--recursive' flag to get summary of a tree (#935)
    • cache: force sync upload if local disk is too full for writeback (#943)
    • Speed up metadata dump&load for redis (#954)
    • fsck: list broken files (#958)
    • write to json file by stream (#970)
    • speedup listing on file (#622)
    • meta: unify transaction backoff for redis/tkv (#999)
    • speed up metadata dumping and loading for Tikv V2 (#998)
    • check AK and SK for gateway (#1001)
    • add an option to provide a customized bucket endpoint (#1008)
    • change frsize to 4096 (#1016)
    • Speed up dump for sql engine (#1006)
    • format support provide only bucket name (#1022)
    • meta/tkv: fix clean stale session (#1041)
    • add namespace and label to existing metrics (#1056)
    • release memory after dumping meta (#1093)
    • meta: unify counters for all metadata engines (#1119)
    • Keep SGID for directory (#1133)
    • Speedup hadoop concat by deleting src concurrently (#1163)
    • change default cache-size to 100GB (#1169)
    • metrics: expose only JuiceFS metrics to prometheus (#1185)
    • cleanup reference of unused slice in gc command (#1249)
    • Adjust log level for xorm and TiKV when they are in use (#1229)
    • hadoop: Users in supergroup are superusers (#1202)
    • Check permission with multiple groups (#1205)
    • Get the region of S3-compatible object storage from the env (#1171)
    • Added prefix to metrics in gateway (#1189)
    • improve s3 url parsing (#1234)

    Bugfix

    • go-fuse: return ENOSYS if xattr is disabled (#863)
    • Fix WebDAV backend (#899)
    • meta: always update cache after got newest attributes (#902)
    • sql.go: fix delete file range error (#904)
    • meta: update parent when rename (#905)
    • sqlmeta: fix delete chunk sql error (#931)
    • Fix the bug that file credential authentication fails when using GCS (#933)
    • hadoop: compatible with kitedata in sqoop parquet use case (#941)
    • Fix redis-sentinel addresses parsing (#953)
    • utils: fix cond and its test (#983)
    • workaround for List with marker (#984)
    • sync: ignore broken symlink (#1015)
    • meta/sql: fix delete condition (#1018)
    • metadata: should not skip the error when opening a file that does not exist (#1035)
    • Fix minio with sync (#1055)
    • meta: remove xattr only when nlink <= 0 (#1078)
    • meta/sql: fix parentDst nlink in rename (#1082)
    • Fix upload bandwidth limit (#1086)
    • Fix lua script handling big inode number (#1095)
    • meta: fix lookup of corrupt entries (#1098)
    • Fix potential metadata corrupt in Redis caused by gc (#1110)
    • Fix fmtKey to sliceKey (#1143)
    • Fix mkdir/rename/delete/rmr with trailing slash (#1149)
    • Fix gateway GetObjectInfo http code 500 to 404 (#1158)
    • meta: fix nlink of parentDst in rename (#1170)
    • Sync: distinguish DeleteSrc from DeleteDst (#1197)
    • Fix subdir in S3 gateway (#1201)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(423 bytes)
    juicefs-1.0.0-beta1-darwin-amd64.tar.gz(22.69 MB)
    juicefs-1.0.0-beta1-linux-amd64.tar.gz(21.44 MB)
    juicefs-1.0.0-beta1-linux-arm64.tar.gz(19.73 MB)
    juicefs-1.0.0-beta1-windows-amd64.tar.gz(21.48 MB)
    juicefs-hadoop-1.0.0-beta1-linux-amd64.jar(18.36 MB)
  • v0.17.5(Dec 10, 2021)

    JuiceFS v0.17.5 is a patch release for v0.17, which has the following changes:

    • Fixed sync command crash on broken symlinks (#1028).
    • Fixed the return code when opening a non-existent file (#1035).
    • Fixed the leaked key after cleaning up stale sessions in TiKV (#1041).
    • Fixed sync from/to some prefixes of MinIO (#1055).
    • Fixed the problem that extended attributes of hard links were removed unexpectedly (#1078).
    • Fixed the wrong used space in df (introduced in v0.17.2) (#1096).
    • Fixed a bug in gc command that could corrupt file in Redis (#1110).

    Thanks to @davies @SandyXSD @chiyutianyi for contributions!

    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.17.5-darwin-amd64.tar.gz(24.09 MB)
    juicefs-0.17.5-linux-amd64.tar.gz(22.80 MB)
    juicefs-0.17.5-linux-arm64.tar.gz(20.96 MB)
    juicefs-0.17.5-windows-amd64.tar.gz(22.60 MB)
    juicefs-hadoop-0.17.5-linux-amd64.jar(18.65 MB)
  • v0.17.2(Nov 19, 2021)

    Changelog

    JuiceFS v0.17.2 is the second patch release for v0.17, which has the following changes:

    • Fixed potential data leak for SQL engine (#931 #1018).
    • Fixed a bug in CopyFileRange() (77a3a6252d).
    • Fixed authentication for GCS in container (#933).
    • Don't fill all the disk space in writeback mode (#943).
    • Fixed parsing address for Redis Sentinel (#953).
    • Added a workaround to sync with object storages that are not compatible with S3 (#984).
    • Increase backoff for Redis/TiKV to avoid potential failure under high contention (#999).
    • Added a check for AK/SK for S3 gateway (#1001).
    • Added an option to provide a customized endpoint for object storage (#1008).
    • Changed frsize to 4096 to be compatible with local filesystems (#1016).

    Thanks to @SandyXSD @davies @chiyutianyi @zhijian-pro @nsampre for contributions!

    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.17.2-darwin-amd64.tar.gz(24.09 MB)
    juicefs-0.17.2-linux-amd64.tar.gz(22.80 MB)
    juicefs-0.17.2-linux-arm64.tar.gz(20.96 MB)
    juicefs-0.17.2-windows-amd64.tar.gz(22.61 MB)
    juicefs-hadoop-0.17.2-linux-amd64.jar(18.66 MB)
  • v0.17.1(Oct 28, 2021)

    JuiceFS v0.17.1 is a patch release for v0.17, which has the following changes:

    • Return ENOSYS if xattr is disabled (#863).
    • Ignore unknown flags in setattr() (#862).
    • Sort files when listing a directory in the Java SDK to be compatible with HDFS (#889).
    • Upgrade nats-server to v2.2+ to address CVE-2021-3127 (#893).
    • Enable metrics for S3 gateway (#897).
    • Fixed WebDAV backend (#899).
    • Refresh cache once new attributes found (#902).
    • Upgrade dgrijalva/jwt-go to golang-jwt/jwt to address CVE-2020-26160 (#903)
    • Fixed parent of renamed files/dirs (#905).
    • Fixed chunk deletions for SQL engine (#904)
    • Upgrade gjson to fix CVE-2021-42836 (#912).

    Thanks to contributions from @SandyXSD @tangyoupeng @davies @zhijian-pro @chiyutianyi !

    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.17.1-darwin-amd64.tar.gz(23.92 MB)
    juicefs-0.17.1-linux-amd64.tar.gz(22.64 MB)
    juicefs-0.17.1-linux-arm64.tar.gz(20.81 MB)
    juicefs-0.17.1-windows-amd64.tar.gz(22.44 MB)
    juicefs-hadoop-0.17.1-linux-amd64.jar(18.49 MB)
  • v0.17.0(Sep 24, 2021)

    JuiceFS v0.17 arrived one month after 0.16, with 80+ commits from 9 contributors (@SandyXSD, @davies, @xiaogaozi, @yuhr123, @Suave, @tangyoupeng, @201341, @zwwhdls, @allwefantasy), thanks to them!

    This release improves performance when JuiceFS is used for temporary data, via a new in-memory meta engine (memkv) and delayed uploading. For example, JuiceFS can serve as the shuffle and spill disk without worrying about running out of space.
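
    A minimal sketch of that setup, assuming the delayed-upload flag is named --upload-delay and the in-memory engine is addressed as memkv:// (the URL form and paths here are illustrative placeholders, not verified syntax):

    ```shell
    # Mount a volume backed by the in-memory meta engine in writeback mode,
    # delaying uploads so temporary files that are deleted within the delay
    # window never reach object storage.
    juicefs mount --writeback --upload-delay 1h "memkv://localhost/1" /mnt/tmpjfs
    ```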

    The Linux Test Project was used to verify the compatibility of JuiceFS; please check out the current results here.

    This release introduces a metadata cache for the Java SDK and S3 gateway (similar to the metadata cache in the kernel), which can be turned on to improve performance significantly.

    New

    • Added an option to delay uploads in writeback mode (#736 #743), which is useful for temporary data.
    • Added an in-memory meta engine (memkv://) for temporary data (#751 #779).
    • Support CREATE and REPLACE flag in setxattr (#770).
    • Added metrics for in-memory cache (#776).
    • Support rename flags (NoReplace and Exchange) (#787).
    • New colorful result for bench command (#810).
    • Added entry and attributes cache for Java SDK and S3 gateway (#835).

    Changed

    • Default logging directory in macOS was changed to user's home directory (#744).
    • Limit the number of retries for listing to 3 to avoid an infinite loop (#745).
    • Show total size of valid objects in gc command (#746).
    • Disable SHA256 checksum for S3 and other compatible object store to reduce CPU usage (#754).
    • Hide operations on internal files from access log (#766).
    • Require Go 1.15 to build JuiceFS and build the release with Go 1.16 (#771).
    • Inherit gid of parent when SGID is set (#772).
    • Keep SGID when file is non-group-executable (#773).
    • Allow removing broken dirs/files (#784).
    • Retry transactions for TxnLockNotFound from TiKV (#789).
    • Cleanup current session during umount (#796).
    • Reduce memory allocation for OBS and Webdav backend (#800).
    • Support escaped access key for KS3 (#830).
    • Support lookup . and .. (#842).
    • No warning if compaction fails with missing objects (#844).
    • Increase available inodes based on current usage (#851).
    • Allow updating the access key and secret key with the default compress algorithm (#855).

    Bugfix

    • Fixed a leak in SQL engine (#728).
    • Fixed a bug that may crash client (#729).
    • Fixed valid bytes of progress bar for gc command (#746).
    • Fixed warmup with a long list of files (#752).
    • Fixed support for secured Redis connections (regression in v0.16) (#758).
    • Fixed data corruption in SQL and TiKV engine when some slices are skipped during compaction (#759).
    • Fixed metrics for read bytes in Hadoop SDK (#761).
    • Fixed multipart upload in S3 gateway (#765).
    • Fixed POSIX locks on interweaved regions (#769).
    • Fixed latency metrics for libRADOS (#793).
    • Fixed concat in Java SDK and multipart upload (#817).
    • Fixed nlink of the parent when renaming directories (#839).
    • Fixed transaction for read-only mount (#844).
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.17.0-darwin-amd64.tar.gz(23.73 MB)
    juicefs-0.17.0-linux-amd64.tar.gz(22.46 MB)
    juicefs-0.17.0-linux-arm64.tar.gz(20.64 MB)
    juicefs-0.17.0-windows-amd64.tar.gz(22.26 MB)
    juicefs-hadoop-0.17.0-linux-amd64.jar(18.48 MB)
  • v0.16.2(Aug 25, 2021)

    JuiceFS v0.16.2 is a patch release for v0.16; upgrading is recommended.

    Bugfix

    • Retries LIST three times to avoid infinite loop (#745).
    • Fixed valid bytes of progress bar for gc command (#746).
    • Fixed warmup with a long list of files (#752).
    • Fixed support for secured Redis connections (regression in v0.16) (#758).
    • Fixed data corruption in SQL and TiKV engine when some slices are skipped during compaction (#759).
    • Fixed metrics for read bytes in Hadoop SDK (#761).
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.16.2-darwin-amd64.tar.gz(23.71 MB)
    juicefs-0.16.2-linux-amd64.tar.gz(22.43 MB)
    juicefs-0.16.2-linux-arm64.tar.gz(20.62 MB)
    juicefs-0.16.2-windows-amd64.tar.gz(22.24 MB)
    juicefs-hadoop-0.16.2-linux-amd64.jar(18.90 MB)
  • v0.16.1(Aug 16, 2021)

    JuiceFS v0.16.1 arrived one month after 0.15.2, with 80+ commits from 11 contributors (@davies, @Sandy4999, @xiaogaozi, @tangyoupeng, @zhijian-pro, @chnliyong, @Suave, @themaxdavitt, @xuhui-lu, @201341, @zwwhdls), thanks to them!

    The biggest feature is support for TiKV as the meta engine, a distributed transactional key-value database. With TiKV, JuiceFS can store trillions of files and exabytes of data.

    BREAKING CHANGE

    The meaning of the password in a Redis Sentinel URL has changed from the Sentinel password to the Redis server password. Please update the password in the URL if you use Sentinel and the two passwords differ.
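
    For illustration (host names, the master name, and the password are placeholders, and the exact Sentinel URL syntax is an assumption): under the new semantics the password embedded in the URL authenticates against the Redis servers, not the Sentinels.

    ```shell
    # META URL for a Sentinel-managed Redis: "mymaster" is the master name,
    # followed by the Sentinel addresses. The password here is now the
    # Redis *server* password, not the Sentinel password.
    juicefs mount "redis://:redis-server-pass@mymaster,sentinel1:26379,sentinel2:26379/1" /mnt/jfs
    ```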

    New

    • Supports TiKV as meta engine and object store (#610 #629 #631 #636 #645 #633 #663 #666 #675 #704).
    • Added limit for upload/download bandwidth (#611).
    • Added a virtual file (.config) to show current configurations (#652).
    • Added a subcommand stats to watch performance metrics in realtime (#702 #721).
    • Added progress bar for fsck, gc, load and dump (#683 #684 #724)
    • Disabled updatedb for JuiceFS (#727).

    Changed

    • Speedup listing on file store (#593).
    • Upgrade KS3 SDK to 1.0.12 (#638).
    • Update mtime in fallocate (#602).
    • Improved performance for writing into file store (#621).
    • Changed the password in the Redis URL to mean the Redis server password (#620).
    • Support mixed read/write in Go/Java SDK (#647).
    • Enable debug agent for sync command (#659).
    • Improved stability for random write workload (#664).
    • Avoid some memory copy in block cache (#668).
    • Disable fsync in writeback mode to behave closer to a local file system (#696).
    • profile: show the final result when interval is set to 0 (#710).

    Bugfix

    • Fixed stats with MySQL engine (#590).
    • Fixed case insensitivity with MySQL engine (#591).
    • Fixed atime of file in Java SDK (#597).
    • Fixed a bug that block write when memory cache is enabled (#667).
    • Fixed fd leak in block cache (#672).
    • Fixed stale result for restarted transaction (#678)
    • Fixed a bug under mixed write and truncate (#677).
    • Fixed race condition under mixed read/write (#681).
    • Fixed compatibility with Redis clones.
    • Fixed data usage in read-only mode (#698).
    • Fixed key leak in Redis (#694).
    • Fixed a bug about collate with MySQL 5.6 (#697).
    • Fixed a bug that may cause crash when writeback_cache is used (#705).
    • Fixed pid of updated POSIX lock (#708).
    • Added a workaround for a data loss bug in Huawei OBS SDK (#720).
    • Fixed the metrics of uploaded bytes (#726).
    • Fixed a leak of chunk and sustained inode in SQL engine (#728).
    • Fixed a bug that may crash client (#729).
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.16.1-darwin-amd64.tar.gz(23.71 MB)
    juicefs-0.16.1-linux-amd64.tar.gz(22.43 MB)
    juicefs-0.16.1-linux-arm64.tar.gz(20.62 MB)
    juicefs-0.16.1-windows-amd64.tar.gz(22.24 MB)
    juicefs-hadoop-0.16.1-linux-amd64.jar(18.90 MB)
  • v0.15.2(Jul 7, 2021)

    JuiceFS v0.15.2 arrived one month after v0.14.2, with 60+ changes from 8 contributors (@davies, @Sandy4999, @xiaogaozi, @yuhr123, @Suave, @zzcclp, @tangyoupeng, @chnliyong), thanks to them.

    This release introduces a new tool to back up and restore metadata, which can also be used to migrate metadata between different meta engines; check Backup and Restore Metadata for details.

    This release also improved the performance significantly for read/write heavy workload by utilizing page cache in kernel.

    This release is backward-compatible with previous releases and should be safe to upgrade.

    New

    • Added command dump and load to backup and restore metadata (#510, #521, #529, #535, #551).
    • Added an option (--read-only) to mount as read-only (#520).
    • Support command auto-completion (#530, #534).
    • Run benchmark in parallel (-p N) (#545).
    • Support PostgreSQL as meta engine (#542).
    • Added an option (--subdir) to mount a sub-directory (#550).
    • Support WebDAV as an object store (#573).
    • Allow enable writeback cache in kernel by -o writeback_cache (#576).
    • Added an option to redirect logging to a file in background mode (#575).

    Changed

    • Changed the batch size of LIST request to 1000 (some object storage may fail with 400 for larger limit) (1385587).
    • Exclude log4j from Java SDK to avoid potential conflict (#501).
    • Exit when unknown option found (d6a39f11db).
    • Report type of meta engine and storage together with usage (#504).
    • Changed the default configuration of Java SDK and S3 gateway to be consistent with juicefs mount (#517).
    • Keep page cache in kernel when files opened without changes (#528, #537).
    • Change REDIS-URL to META-URL in docs (#552).

    Bugfix

    • Fixed the memory leak in B2 client (#500).
    • Handle BucketAlreadyExists error for all object storage (#561).
    • Fixed a bug with SQLite and PostgreSQL engine when high bit of lock owner is 1 (#588)
    • Fixed updating stats for MySQL engine (#590).
    • Fixed case sensitivity for MySQL engine (#591).
    • Fixed potential leak for files overwritten by rename (#594, #495)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.15.2-darwin-amd64.tar.gz(21.66 MB)
    juicefs-0.15.2-linux-amd64.tar.gz(20.52 MB)
    juicefs-0.15.2-linux-arm64.tar.gz(18.91 MB)
    juicefs-0.15.2-windows-amd64.tar.gz(20.32 MB)
    juicefs-hadoop-0.15.2-linux-amd64.jar(16.93 MB)
  • v0.14.2(Jun 4, 2021)

    JuiceFS v0.14.2 received 30+ contributions from @davies @xiaogaozi @tangyoupeng @Sandy4999 @chnliyong @yuhr123 @xyb @meilihao @frankxieke , thanks to them!

    New

    • Added quota for space and inodes for whole volume (#495).
    • Lists all the client sessions in juicefs status (#491).
    • Cleanup any leaked inodes in Redis with juicefs gc (#494).
    • Supports sticky bits in Java SDK (#475).
    • Added juicefs.umask for Java SDK (#462).
    • Empty the Hadoop trash automatically (#456).

    Changed

    • Returns ENOSPC rather than IOError when Redis runs out of memory (#479).

    Bugfix

    • Allow superuser to change file mode in Java SDK (#467).
    • Allow the owner to change the group of a file (#470).
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.14.2-darwin-amd64.tar.gz(21.62 MB)
    juicefs-0.14.2-linux-amd64.tar.gz(20.48 MB)
    juicefs-0.14.2-linux-arm64.tar.gz(18.87 MB)
    juicefs-0.14.2-windows-amd64.tar.gz(20.27 MB)
    juicefs-hadoop-0.14.2-linux-amd64.jar(16.82 MB)
  • v0.13.1(May 27, 2021)

    JuiceFS v0.13.1 is a bugfix release for v0.13. We have created the first release branch for 0.13, which will only receive bugfixes in future patch releases.

    New

    • Support flock for SQL engine (#422).
    • Support posix lock for SQL engine (#425).
    • Global user/group ID mapping for Java SDK (#439, #447).
    • Added benchmark results for different meta engines (#445).

    Bugfix

    • Fixed transaction conflict for SQL engine (#448).
    • Fixed build on macOS 11.4 (#452).
    • Fixed parsing redis versions (1c945d746be376831e706761a6a566d236123be3).
    • Ignore deleted file during sync (6e06c0ebd6a2e906235a80807418d18f5ea8a84a).
    • Fixed permission check in Lua script (used by Java SDK and S3 Gateway) (#430).
    • Fixed juicefs sync in distributed mode (#424)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.13.1-darwin-amd64.tar.gz(21.60 MB)
    juicefs-0.13.1-linux-amd64.tar.gz(20.47 MB)
    juicefs-0.13.1-linux-arm64.tar.gz(18.87 MB)
    juicefs-0.13.1-windows-amd64.tar.gz(20.26 MB)
    juicefs-hadoop-0.13.1-linux-amd64.jar(16.81 MB)
  • v0.13.0(May 19, 2021)

    JuiceFS v0.13 arrived 1 month after v0.12.1, with more than 80 changes from 9 contributors (@davies, @Sandy4999 , @xiaogaozi, @yuhr123, @polyrabbit, @suzaku, @tangyoupeng, @angristan, @chnliyong), thanks to them.

    The biggest feature in v0.13 is support for SQL databases as the meta engine; SQLite, MySQL and TiDB are supported right now, and others will be added later. A SQL database is slower than Redis, but offers better persistence and scalability, making it a better fit when data safety and the number of files matter more than performance, for example for backups.

    New

    • Support SQL database (SQLite, MySQL and TiDB) as meta engine (#375).
    • Added profile to analyze access logs (#344).
    • Added status to show the settings and status (#368).
    • Added warmup to build cache for files/directories (#409).
    • Build Java SDK for Windows (#362).
    • Use multiple buckets as object store (#349).
    • Collect metrics for Java SDK (#327).
    • Added virtual file /.stats to show the internal metrics (#314).
    • Allow building a minimized binary without S3 gateway and other object storages (#324).

    Changed

    • Enable checksum for Swift (a16b106808aa1).
    • Added more buckets for object latency distribution (#321).
    • Added an option (--no-agent) to disable the debug agent (#328).
    • Added internal details at the end of benchmark (#352).
    • More metrics for block cache (#387).
    • Speed up path resolution for Java SDK and S3 gateway using Lua script (#394).
    • Restart the Redis transaction after some known failures (#397).
    • Remove the limit on the number of cached blocks (#401).

    Bugfix

    • Fixed a bug in SetAttr to refresh new written data (429ce80100).
    • Fixed overflow in StatFS (abcb5c652b).
    • Fixed a bug when use MinIO with juicefs sync (199b4d35b).
    • Fixed a bug in CopyFileRange, which may affect multipart uploads and concat of Java SDK (fb611b0825).
    • Fixed deadlock when truncate together with read (2f8a8d9).
    • Fixed stale read after truncate (226b6a7e).
    • Fixed downloading a directory using S3 gateway (#378).
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.13.0-darwin-amd64.tar.gz(21.59 MB)
    juicefs-0.13.0-linux-amd64.tar.gz(20.45 MB)
    juicefs-0.13.0-linux-arm64.tar.gz(18.85 MB)
    juicefs-0.13.0-windows-amd64.tar.gz(20.25 MB)
    juicefs-hadoop-0.13.0-linux-amd64.jar(16.79 MB)
  • v0.12.1(Apr 15, 2021)

    JuiceFS v0.12.1 fixes a few bugs and improves scalability.

    Changes

    • Only clean up leaked chunks in juicefs gc, which may otherwise overload Redis on larger clusters (6358e388416c).
    • Improve sessions cleanup to avoid scanning all keys (#293).
    • Use hash set for refcount of slices to avoid scanning all keys (#294).
    • Cleanup zero refcount of slices to save memory (#295).
    • Support case insensitivity in Windows (#303).

    Bugfix

    • Fixed ranged get for swift (dcd705714f8f).
    • Fixed random read benchmark on files larger than 2G (#305).
    • Fixed listing of files on root directory in Windows (#306)
    Source code(tar.gz)
    Source code(zip)
    checksums.txt(403 bytes)
    juicefs-0.12.1-darwin-amd64.tar.gz(20.24 MB)
    juicefs-0.12.1-linux-amd64.tar.gz(19.34 MB)
    juicefs-0.12.1-linux-arm64.tar.gz(17.78 MB)
    juicefs-0.12.1-windows-amd64.tar.gz(19.15 MB)
    juicefs-hadoop-0.12.1-linux-amd64.jar(15.30 MB)
  • v0.12.0(Apr 12, 2021)

    JuiceFS v0.12 arrived one month after v0.11, with more than 70 changes from 7 contributors (@davies, @xiaogaozi, @chnliyong, @tangyoupeng, @Arvintian, @luohy15, @angristan), thanks to them.

    New

    • Supports Windows (#195, #268 #271).
    • Added juicefs gc to collect garbage in object store (#248, #290).
    • Added juicefs fsck to check the consistency of file system (#253).
    • Added juicefs info to show internal information for a file/directory (slow for large directories) (#288).

    Changes

    • Added prefix (juicefs_) and labels (vol_name and mp) for exposed metrics.
    • Support path-style endpoint for S3 compatible storage (#175).
    • Added --verbose as an alias to --debug.
    • Support proxy setting from environment variables for OBS (#245).
    • Wait for block to be persistent in disk in writeback mode (#255).
    • Change the default number of prefetch threads to 1.
    • Fail the mount if the mount point is not ready within 10 seconds in daemon mode.
    • Speed up juicefs rmr by parallelizing it.
    • Limit the used memory to 200% of --buffer-size (slow down if it's above 100%).
    • Reload the Lua scripts after Redis is restarted.
    • Improved compaction to skip first few large slices to reduce traffic to object store (#276).
    • Added logging when operation is interrupted.
    • Disable compression by default (#286).
    • Limit the concurrent deletion of objects to 2 (#282).
    • Accept named options after positional argument (#274).

    Bugfix

    • Accept malformed responses from UFile (f4f5f53).
    • Fixed juicefs umount in Linux (#242).
    • Fixed order of returned objects from listing on SCS (#240).
    • Fixed fetching list of nodes in Java SDK when a customized URL handler is set (#247).
    • Support IPv6 addresses for sftp (#259).
    • Fixed build with librados (#260).
    • Supports relative path when mount in background (#266).
    • Fixed juicefs rmr with relative path.
    • Cleanup unused objects after failed compaction.
    • Fixed updated files and permissions in sftp.
    Source code(tar.gz)
    Source code(zip)
  • v0.11.0(Mar 1, 2021)

    New

    • Added S3-compatible gateway based on MinIO (#178).
    • Expose metrics for Prometheus (#181), along with a Grafana dashboard template.
    • Support copy_file_range in FUSE (#172) and concat in Hadoop SDK (#173).
    • Support data encryption at-rest (#185).
    • Support China Mobile Cloud (ECloud) (#206).
    • Added juicefs rmr to remove all files in a directory recursively (#207).
    • Support Redis with Sentinel (#180).
    • Support multiple directories for disk cache.

    Changed

    • Increased read timeout for Redis from 3 seconds to 30 seconds to avoid IO error (#196).

    Bugfix

    • Fixed a bug that caused recently written data to be unreadable (#157).
    Source code(tar.gz)
    Source code(zip)
  • v0.10.0(Jan 29, 2021)

    New

    1. A Java SDK was released to provide an HDFS-compatible API for the Hadoop ecosystem.
    2. Added juicefs umount to umount a volume.
    3. Added juicefs benchmark to run a simple benchmark (read/write on big/small files).
    4. Merged juicesync into juicefs sync.
    5. Support Sina Cloud Storage.
    6. Support Swift as object store.

    Changed

    1. Release memory in read buffer after idle for 1 minute.
    2. Ignore invalid IP (for example, IPv6) returned from DNS of object store.
    3. Improve performance for huge directories (over 1 million entries).
    4. Retry operations after Redis is disconnected or restarted.
    5. Added cache for symlink.

    Bugfix

    1. Fixed errno for getattr on Darwin when no extended attributes are found.
    2. Updated length in readers to allow reading the latest data written by other clients.
    Source code(tar.gz)
    Source code(zip)
  • v0.9.3(Jan 18, 2021)

    This release includes BREAKING changes and a few bugfixes:

    New

    1. Support rados in Ceph.
    2. Allow updating the access key and secret key of an existing volume.

    Changes

    1. Changed to log into syslog only when running in background.
    2. Changed to use lower case for command options.
    3. Disabled extended attributes by default, which slow down operations even when not used.

    Improvements

    1. Allow non-root users to mount on macOS.
    2. Check Redis configs and show warnings when there are risks to data safety or functionality.
    3. Added volume name on macOS.
    4. Wait for the mount point to be ready when mounted in background.

    Bugfix

    1. Fixed a bug in ReaddirPlus which slowed down tree traversal.
    2. Fixed leaking of extended attributes in Redis.
    3. Fixed a bug where truncate may delete slices written in the future.
    Source code(tar.gz)
    Source code(zip)
  • v0.9.2(Jan 15, 2021)

    This release addresses all the known issues reported by the community since the first public release.

    1. Added a flag -d to mount in background.
    2. Finished compaction part 2: copy small objects into bigger ones.
    3. Added pessimistic locking to reduce contention in Redis transactions, avoiding failures under high pressure.
    4. Show . and .. when listing a directory with -a in Linux.
    5. Fixed errno for getxattr on internal files.
    6. Fixed local block cache.
    7. Support Scaleway and MinIO as object stores.
    8. Support mounting via /etc/fstab and auto mount after boot.
    9. Added an option --force to overwrite an existing format in Redis.
    10. Log INFO into syslog, and DEBUG in daemon mode.
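
    Mounting via /etc/fstab (item 8 above) can be sketched as the following fstab entry; the META-URL, mount point, and mount options here are illustrative placeholders, not verified values:

    ```
    # /etc/fstab — mount a JuiceFS volume at boot (placeholders):
    # <META-URL>  <mount point>  juicefs  <options>  0  0
    # _netdev delays mounting until the network is up.
    redis://localhost:6379/1  /jfs  juicefs  _netdev,writeback,cache-size=102400  0  0
    ```
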
    Source code(tar.gz)
    Source code(zip)