"rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Wasabi, Google Cloud Storage, Yandex Files

Overview

rclone logo

Website | Documentation | Download | Contributing | Changelog | Installation | Forum


Rclone

Rclone ("rsync for cloud storage") is a command-line program to sync files and directories to and from different cloud storage providers.

Storage providers

Please see the full list of all storage providers and their features

Features

  • MD5/SHA-1 hashes checked at all times for file integrity
  • Timestamps preserved on files
  • Partial syncs supported on a whole file basis
  • Copy mode to just copy new/changed files
  • Sync (one way) mode to make a directory identical
  • Check mode to check for file hash equality
  • Can sync to and from network, e.g. two different cloud accounts
  • Optional large file chunking (Chunker)
  • Optional transparent compression (Compress)
  • Optional encryption (Crypt)
  • Optional FUSE mount (rclone mount)
  • Multi-threaded downloads to local disk
  • Can serve local or remote files over HTTP/WebDAV/FTP/SFTP/DLNA

Installation & documentation

Please see the rclone website for:

Downloads

License

This is free software under the terms of the MIT license (check the COPYING file included in this package).

Issues
  • Any plan to add support to Google Photos?

    Any plan to add support to Google Photos?

    If possible, please add support for uploading photo/video files to Google Photos directly!

    It's possible to add a "Google Photos" folder in Google Drive, and all your Google Photos will be there (organized by date folders). However, photos uploaded into this folder do not seem to show up in Google Photos.

    Also, if we upload in "High Quality" then there is unlimited storage for photos and video. I am not sure whether the "down-sizing" is done locally or remotely by the Google Photos server, however...

    I realize Google Photos is not a good place to organize photos, but it's a good place to share photos with others. And with a stock of 300k+ photos I really don't want to have my PC running for God-knows-how-long for the upload... That's a job for the RPi!

    new backend 
    opened by lssong99 171
  • Google Drive (encrypted):

    Google Drive (encrypted): "failed to authenticate decrypted block - bad password?" on files during reading

    What is your rclone version (eg output from rclone -V)

    rclone 1.35

    Which OS you are using and how many bits (eg Windows 7, 64 bit)

    Devuan Linux 1.0 (systemd-free fork of Debian Jessie).

    Which cloud storage system are you using? (eg Google Drive)

    Google Drive, with the built-in rclone encryption.

    The command you were trying to run (eg rclone copy /tmp remote:tmp)

    rclone -v --dump-headers --log-file=LOGFILE copy egd:REDACTED/REDACTED/REDACTED/REDACTED.mp4 /tmp/REDACTED.mp4

    A log from the command with the -v flag (eg output from rclone -v copy /tmp remote:tmp)

    Please find it attached: LOGFILE.txt

    Note 1: This is related to #677, which is closed and I cannot reopen. Note 2: These errors are 100% reproducible.

    Cheers, Durval.

    bug 
    opened by DurvalMenezes 107
  • rclone still using too much memory

    rclone still using too much memory

    https://github.com/ncw/rclone/issues/2157

    Referencing the above ticket.
    My version rclone v1.40-034-g06a8d301β

    • os/arch: linux/amd64
    • go version: go1.10

    Still seeing the above issue, but it happens less frequently. My setup is exactly the same as in that ticket, but I've now upgraded the version. What can I provide to help troubleshoot whether it is the same issue or a different one?

    I've just increased --attr-timeout to 5s just to try it. I'll see if that helps as a shot in the dark.

    bug 
    opened by calisro 97
  • Are we safe? Amazon Cloud Drive

    Are we safe? Amazon Cloud Drive

    I mean, could the same scenario happen where Amazon disables the rclone app? Or does rclone handle it differently than acd_cli did?

    ACD_CLI weird:

    https://github.com/yadayada/acd_cli/pull/562 - "I created this pull request only to ask what happend to acd_cli's issues page?! It just vanished! "

    opened by scriptzteam 95
  • CRITICAL: Amazon Drive does not work anymore with rclone 429:

    CRITICAL: Amazon Drive does not work anymore with rclone 429: "429 Too Many Requests" / Rate exceeded

    It seems Amazon Drive has blocked rclone; I tested it on 4 different servers and tried reauthorizing the app, but with no success.

    Any rclone command will deliver the following errors:

    2017/05/18 11:19:14 DEBUG : pacer: Rate limited, sleeping for 666.145821ms (1 consecutive low level retries)
    2017/05/18 11:19:14 DEBUG : pacer: low level retry 1/10 (error HTTP code 429: "429 Too Many Requests": response body: "{\"message\":\"Rate exceeded\"}")
    
    opened by ajkis 93
  • Can't connect to SharePoint Online team sites such as https://orgname.sharepoint.com/sites/Site-Name

    Can't connect to SharePoint Online team sites such as https://orgname.sharepoint.com/sites/Site-Name

    I’ve been able to successfully connect to the default https://orgname-my.sharepoint.com/ personal SharePoint Site...

    $ rclone lsd sp3:
    -1 2017-01-04 22:16:34         0 Attachments
    -1 2015-01-23 11:13:10         0 Shared with Everyone
    

    But I'm having difficulty figuring out how to connect to team sites on URLs such as: https://orgname.sharepoint.com/sites/Site-Name etc.

    The "rclone config" guided process doesn't let you set the resource_url when setting it up. So I've tried editing ~/.config/rclone.conf using a few different methods, changing the resource_url and then reauthorizing. I've tried a number of different addresses like...

    For the main/default team site:

    https://orgname.sharepoint.com/ 
    https://orgname.sharepoint.com/Shared Documents
    

    For separate team sites, or what Microsoft call "site collections":

    https://orgname.sharepoint.com/sites/Site-Name
    https://orgname.sharepoint.com/sites/Site-Name/
    https://orgname.sharepoint.com/sites/Site-Name/Shared Documents
    https://orgname.sharepoint.com/sites/Site-Name/Shared Documents/
    https://orgname.sharepoint.com/sites/Site-Name/Shared%20Documents
    https://orgname.sharepoint.com/sites/Site-Name/Shared%20Documents/
    

    I'm not sure which address format I'm meant to use (for either the main team site, or all the other ones under /sites/).

    I always get the error:

    $ rclone -vv lsd sp3:
    2017/10/25 03:17:18 DEBUG : Using config file from "/home/user/.config/rclone/rclone.conf"
    2017/10/25 03:17:18 DEBUG : rclone: Version "v1.38" starting with parameters ["rclone" "-vv" "lsd" "sp3:"]
    2017/10/25 03:17:19 Failed to create file system for "sp3:": failed to get root: 401 Unauthorized: 
    

    (there's nothing after that last colon)

    Does anyone know how I access team SharePoint sites?

    My rclone version is:

    rclone v1.38
    - os/arch: linux/amd64
    - go version: go1.9
    

    ...on Manjaro 64bit, installed from the distro's repos.

    I'm choosing the "business" option when asked in rclone config.

    enhancement Remote: One Drive 
    opened by hi2u 86
  • Two-way (bidirectional) synchronization

    Two-way (bidirectional) synchronization

    I'm sorry if this is answered elsewhere, but if so I couldn't find it.

    I want to replace my current Owncloud + Owncloud client (Linux) + FolderSync (Android) setup with Drive + Rclone + FolderSync. But there is one thing I can't figure out how to do with rclone: smart two-way deletion synchronization. Which means: if a file was present on both the server (Drive) and the local machine, and then was deleted on either of them, the file will eventually be removed on both, regardless of which direction you run the sync first. Similarly, if a file was added on either the server or the client, it will be uploaded to the other one.

    Can rclone do that, and if it doesn't, is there a chance of such functionality in the future? (A rough sketch of the snapshot-based change detection this requires follows after this issue.)

    IMPORTANT new feature change detection bisync 
    opened by alexander-yakushev 85
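    The behaviour asked for above generally needs a snapshot of the previous sync, so a file missing on one side can be recognised as "deleted there" rather than "never existed there". A minimal, hypothetical sketch of that decision logic in Go (names and types are illustrative only, not rclone's actual bisync implementation):

    package main

    import "fmt"

    // Action is what a bidirectional sync would do for a single path.
    type Action string

    const (
        Keep         Action = "keep"           // present on both sides (content check would follow)
        CopyToRemote Action = "copy to remote" // new local file
        CopyToLocal  Action = "copy to local"  // new remote file
        DeleteLocal  Action = "delete local"   // deleted on the remote since the last sync
        DeleteRemote Action = "delete remote"  // deleted locally since the last sync
    )

    // decide compares the presence of a path on both sides with a snapshot
    // taken at the end of the previous sync. The snapshot is what lets a
    // deletion propagate in either direction instead of the file reappearing.
    func decide(inLocal, inRemote, inSnapshot bool) Action {
        switch {
        case inLocal && inRemote:
            return Keep
        case !inLocal && !inRemote:
            return Keep // nothing on either side, nothing to do
        case inLocal && inSnapshot: // missing remotely, but existed at last sync
            return DeleteLocal
        case inLocal:
            return CopyToRemote
        case inSnapshot: // missing locally, but existed at last sync
            return DeleteRemote
        default:
            return CopyToLocal
        }
    }

    func main() {
        fmt.Println(decide(true, false, true))  // delete local: the remote copy was removed
        fmt.Println(decide(true, false, false)) // copy to remote: brand new local file
    }

    This is only the core comparison; a real implementation also has to resolve modification-time conflicts and keep the snapshot (prior listings) up to date after every run.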
  • Manage folders

    Manage folders

    Some of rclone's remote filesystems do understand the concept of folders, eg

    • drive
    • local
    • dropbox

    Make optional interfaces (eg Mkdir, Rmdir) for these filesystems to manage the creation and deletion of folders. This would enable empty folders, and deletion of empty folders on sync. (A sketch of how such optional interfaces could look follows after this issue.)

    enhancement metadata 
    opened by ncw 81
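    In Go, optional backend capabilities like this are usually modelled as small extra interfaces that a remote may or may not implement, discovered with a type assertion. A rough sketch of the idea (the interface and method names below are illustrative, not rclone's actual fs package API):

    package main

    import "fmt"

    // Fs is the core interface every remote implements (greatly simplified here).
    type Fs interface {
        Name() string
    }

    // Direr is an optional interface for remotes that understand folders
    // (eg drive, local, dropbox). Implementing it lets sync create empty
    // folders and delete folders that became empty.
    type Direr interface {
        Mkdir(dir string) error
        Rmdir(dir string) error
    }

    // removeEmptyDir prunes dir only if the remote actually supports folders.
    func removeEmptyDir(f Fs, dir string) error {
        if d, ok := f.(Direr); ok { // capability discovered via type assertion
            return d.Rmdir(dir)
        }
        fmt.Printf("%s: remote has no concept of folders, nothing to do\n", f.Name())
        return nil
    }

    // driveFs stands in for a folder-aware remote.
    type driveFs struct{}

    func (driveFs) Name() string           { return "drive" }
    func (driveFs) Mkdir(dir string) error { fmt.Println("mkdir", dir); return nil }
    func (driveFs) Rmdir(dir string) error { fmt.Println("rmdir", dir); return nil }

    func main() {
        _ = removeEmptyDir(driveFs{}, "now-empty-folder")
    }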
  • On-the-fly encryption support

    On-the-fly encryption support

    I've seen a comment in the thread about ACD support regarding plans for an encryption mechanism in rclone. Could you please elaborate on that? When could this possibly become available?

    enhancement 
    opened by TripleEmcoder 73
  • Support for OpenDrive storage

    Support for OpenDrive storage

    I was just looking at OpenDrive as a potential storage provider. They offer pretty competitive prices already, but they also claim to do competitor price matching, so may be a viable alternative to ACD's unlimited storage.

    Their API documentation is linked here: https://www.opendrive.com/api

    They also claim to have (only beta so far) support for Webdav, so Webdav support (#580) may avoid the need for native support.

    new backend 
    opened by eharris 71
  • [GDrive + FUSE] 403 Forbidden Errors - API daily limit exceeded

    [GDrive + FUSE] 403 Forbidden Errors - API daily limit exceeded

    As discussed in the forum (https://forum.rclone.org/t/google-drive-vs-acd-for-plex/471), users are getting 403 Forbidden errors and are unable to access files when using an rclone FUSE mount. This happens especially when using Plex to access the mount. It appears to be related to exceeding the daily API limit: https://developers.google.com/drive/v3/web/handle-errors . Users get a temporary ban from accessing files via the rclone FUSE mount or downloading files. Access to the Google Drive website still seems to work and uploads still work without issue. It seems the only viable solution is to have a local cache as mentioned in #897.

    What is your rclone version (eg output from rclone -V)

    v1.34-75-gcbfec0dβ

    Which OS you are using and how many bits (eg Windows 7, 64 bit)

    Linux Ubuntu

    Which cloud storage system are you using? (eg Google Drive)

    Google Drive

    The command you were trying to run (eg rclone copy /tmp remote:tmp)

    rclone copy --verbose --no-traverse gdrive:test/jellyfish-40-mbps-hd-h264.mkv ~/tmp

    A log from the command with the -v flag (eg output from rclone -v copy /tmp remote:tmp)

    2016/12/26 05:43:12 Local file system at /home/xxxxxx/tmp: Modify window is 1ms
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for checks to finish
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for transfers to finish
    2016/12/26 05:43:13 jellyfish-40-mbps-hd-h264.mkv: Failed to copy: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Attempt 1/3 failed with 1 errors and: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for checks to finish
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for transfers to finish
    2016/12/26 05:43:13 jellyfish-40-mbps-hd-h264.mkv: Failed to copy: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Attempt 2/3 failed with 1 errors and: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for checks to finish
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for transfers to finish
    2016/12/26 05:43:13 jellyfish-40-mbps-hd-h264.mkv: Failed to copy: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Attempt 3/3 failed with 1 errors and: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Failed to copy: failed to open source object: bad response: 403: 403 Forbidden 
    
    opened by natoriousbigg 65
  • rclone not working with Ceph S3 RGW

    rclone not working with Ceph S3 RGW

    What is the problem you are having with rclone?

    rclone not working with Ceph S3 RGW

    What is your rclone version (output from rclone version)

    rclone-v1.58.1-linux-amd64.deb rclone v1.58.1

    • os/version: ubuntu 22.04 (64 bit)
    • os/kernel: 5.15.0-37-generic (x86_64)
    • os/type: linux
    • os/arch: amd64
    • go/version: go1.17.9
    • go/linking: static
    • go/tags: none

    Which OS you are using and how many bits (e.g. Windows 7, 64 bit)

    ubuntu 22.04 (64 bit)

    Which cloud storage system are you using? (e.g. Google Drive)

    Ceph

    The command you were trying to run (e.g. rclone copy /tmp remote:tmp)

    rclone --dump bodies copy -vv cephS3:mybucket/test.xml .

    A log from the command with the -vv flag (e.g. output from rclone -vv copy /tmp remote:tmp)

    [email protected]:~# rclone --dump bodies copy -vv cephS3:mybucket/test.xml .
    2022/06/29 15:04:39 DEBUG : rclone: Version "v1.58.1" starting with parameters ["rclone" "--dump" "bodies" "copy" "-vv" "cephS3:mybucket/test.xml" "."]
    2022/06/29 15:04:39 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
    2022/06/29 15:04:39 DEBUG : Creating backend with remote "cephS3:mybucket/test.xml"
    2022/06/29 15:04:39 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
    2022/06/29 15:04:39 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    2022/06/29 15:04:39 DEBUG : HTTP REQUEST (req 0xc000218d00)
    2022/06/29 15:04:39 DEBUG : HEAD /mybucket/test.xml HTTP/1.1
    Host: my.ceph3.host:18080
    User-Agent: rclone/v1.58.1
    Authorization: XXXX
    X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    X-Amz-Date: 20220629T120439Z
    
    2022/06/29 15:04:39 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    2022/06/29 15:04:39 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    2022/06/29 15:04:39 DEBUG : HTTP RESPONSE (req 0xc000218d00)
    2022/06/29 15:04:39 DEBUG : HTTP/1.1 200 OK
    Content-Length: 165
    Accept-Ranges: bytes
    Content-Type: application/xml
    Date: Wed, 29 Jun 2022 12:04:39 GMT
    Etag: "05bb8ceb97e8a21b103a81cd521ada5c"
    Last-Modified: Tue, 28 Jun 2022 21:21:35 GMT
    X-Amz-Expiration: expiry-date="Wed, 06 Jul 2022 00:00:00 GMT", rule-id="c879tn0mccft1f8ul3a0"
    X-Amz-Meta-Mtime: 1656448085.172
    X-Amz-Object-Lock-Mode: GOVERNANCE
    X-Amz-Object-Lock-Retain-Until-Date: Wed, 29 Jun 2022 21:21:35 GMT
    X-Amz-Request-Id: tx00000783e5660a83a5e9f-0062bc3fd7-13a7215-default
    X-Amz-Version-Id: dCfK-BpydcH4miYgbwVeNLXEoGLjXJj
    X-Rgw-Object-Type: Normal
    
    2022/06/29 15:04:39 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    2022/06/29 15:04:39 DEBUG : pacer: low level retry 1/10 (error SerializationError: empty response payload
    	status code: 200, request id: tx00000783e5660a83a5e9f-0062bc3fd7-13a7215-default, host id: 
    caused by: EOF)
    2022/06/29 15:04:39 DEBUG : pacer: Rate limited, increasing sleep to 10ms
    

    full log: https://paste.debian.net/1245595/

    Config

    [cephS3]
    type = s3
    provider = Ceph
    access_key_id = ***************
    secret_access_key = ******************
    endpoint = http://my.ceph3.host:18080
    acl = private
    

    opened by DySprozin 1
  • operations: fix possible unintended --size-only skip with multithread…

    operations: fix possible unintended --size-only skip with multithread copy

    What is the purpose of this change?

    Fix #6206 by withholding the very last byte until the rest of the transfer is complete. (A simplified illustration of the idea follows after this PR.)

    Was the change discussed in an issue or in the forum before?

    #6206

    Checklist

    • [x] I have read the contribution guidelines.
    • [ ] I have added tests for all changes in this PR if appropriate.
    • [ ] I have added documentation for the changes if appropriate.
    • [x] All commit messages are in house style.
    • [x] I'm done, this Pull Request is ready for review :-)
    opened by 0evan 0
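    For context, the trick described in this PR works because the destination file stays one byte short of its final size until every chunk has been written, so a later --size-only comparison cannot mistake an interrupted multi-thread copy for a finished one. A simplified, hypothetical illustration of the idea (not the actual rclone operations code):

    package main

    import (
        "fmt"
        "os"
        "sync"
    )

    // writeMultithread writes data to path from several goroutines, but holds
    // back the very last byte until all other chunks have landed. Until that
    // final write the file is one byte short, so a size-based check cannot
    // treat a partially transferred file as complete.
    func writeMultithread(path string, data []byte, chunk int) error {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()
        if len(data) == 0 {
            return nil
        }

        body, last := data[:len(data)-1], data[len(data)-1:]

        var wg sync.WaitGroup
        for off := 0; off < len(body); off += chunk {
            end := off + chunk
            if end > len(body) {
                end = len(body)
            }
            wg.Add(1)
            go func(off int64, part []byte) {
                defer wg.Done()
                f.WriteAt(part, off) // error handling elided for brevity
            }(int64(off), body[off:end])
        }
        wg.Wait()

        // Only now does the file reach its final size.
        _, err = f.WriteAt(last, int64(len(body)))
        return err
    }

    func main() {
        if err := writeMultithread("out.bin", []byte("hello multi-thread copy"), 8); err != nil {
            fmt.Println("write failed:", err)
        }
    }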
  • drive: add Workload Identity Federation support

    drive: add Workload Identity Federation support

    What is the purpose of this change?

    To support Workload Identity Federation, use google.CredentialsFromJSON instead of google.JWTConfigFromJSON. (A short sketch of the difference follows after this PR.)

    This change makes it possible to use GitHub Actions with Workload Identity Federation, as shown below.

        steps:
          - uses: actions/[email protected]
          - id: 'auth'
            name: 'Authenticate to Google Cloud'
            uses: 'google-github-actions/[email protected]'
            with:
              workload_identity_provider: 'your identity provider'
              service_account: 'your service account email'
          - run: rclone ls "drive:"
            env:
               RCLONE_DRIVE_SERVICE_ACCOUNT_FILE: "${{steps.auth.outputs.credentials_file_path}}" 
    

    Was the change discussed in an issue or in the forum before?

    No.

    Checklist

    • [x] I have read the contribution guidelines.
    • [x] I have added tests for all changes in this PR if appropriate.
    • [x] I have added documentation for the changes if appropriate.
    • [x] All commit messages are in house style.
    • [x] I'm done, this Pull Request is ready for review :-)
    opened by send 0
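    The practical difference is that google.JWTConfigFromJSON only understands service account key files, while google.CredentialsFromJSON also accepts the external_account credential files produced by Workload Identity Federation (eg by google-github-actions/auth). A rough, self-contained sketch of the swap (simplified; not the actual rclone drive backend code):

    package main

    import (
        "context"
        "fmt"
        "os"

        "golang.org/x/oauth2/google"
    )

    const driveScope = "https://www.googleapis.com/auth/drive"

    func main() {
        ctx := context.Background()

        // Credentials file path, eg the one produced by google-github-actions/auth
        // and passed to rclone via RCLONE_DRIVE_SERVICE_ACCOUNT_FILE.
        data, err := os.ReadFile(os.Getenv("RCLONE_DRIVE_SERVICE_ACCOUNT_FILE"))
        if err != nil {
            fmt.Println("read credentials:", err)
            return
        }

        // Old approach: only works for service account keys.
        //   conf, err := google.JWTConfigFromJSON(data, driveScope)
        //   client := conf.Client(ctx)

        // New approach: also accepts external_account (Workload Identity
        // Federation) credential files, not just service account keys.
        creds, err := google.CredentialsFromJSON(ctx, data, driveScope)
        if err != nil {
            fmt.Println("parse credentials:", err)
            return
        }
        _ = creds.TokenSource // would be plugged into the HTTP client / Drive service
        fmt.Println("credentials loaded for project:", creds.ProjectID)
    }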
  • s3: extend metadata support for FSx for Lustre compatibility

    s3: extend metadata support for FSx for Lustre compatibility

    Now that metadata support has been introduced by #111, it would be useful if rclone could translate the POSIX attributes from the local backend into attributes understood by FSx for Lustre.

    | key   | description                 | example                                                               |
    |-------|-----------------------------|-----------------------------------------------------------------------|
    | mode  | x-amz-meta-file-permissions | 0100664                                                               |
    | uid   | x-amz-meta-file-owner       | in: 500 => out: 500                                                   |
    | gid   | x-amz-meta-file-group       | in: 500 => out: 500                                                   |
    | atime | x-amz-meta-file-atime       | in: 2006-01-02T15:04:05.999999999Z07:00 => out: 1595002920000000000ns |
    | mtime | x-amz-meta-file-mtime       | in: 2006-01-02T15:04:05.999999999Z07:00 => out: 1595002920000000000ns |

    https://docs.aws.amazon.com/datasync/latest/userguide/special-files.html

    When copying between self-managed NFS, FSx for Lustre, FSx for OpenZFS, or Amazon EFS and Amazon S3 – In this case, the following metadata is stored as Amazon S3 user metadata:

    • File and folder modification timestamps
    • User ID and group ID
    • POSIX permissions

    https://docs.aws.amazon.com/fsx/latest/LustreGuide/overview-dra-data-repo.html#posix-metadata-support

    • x-amz-meta-file-permissions – The file type and permissions in the format , consistent with st_mode in the Linux stat(2) man page.
    • x-amz-meta-file-owner – The owner user ID (UID) expressed as an integer.
    • x-amz-meta-file-group – The group ID (GID) expressed as an integer.
    • x-amz-meta-file-atime – The last-accessed time in nanoseconds. Terminate the time value with ns; otherwise Amazon FSx interprets the value as milliseconds.
    • x-amz-meta-file-mtime – The last-modified time in nanoseconds. Terminate the time value with ns; otherwise, Amazon FSx interprets the value as milliseconds.

    Note: FSx for Lustre does not import or retain setuid information.

    https://docs.aws.amazon.com/fsx/latest/LustreGuide/attach-s3-posix-permissions.html

        "Metadata": {
            "file-atime": "1595002920000000000ns",
            "file-owner": "500",
            "file-permissions": "0100664",
            "file-group": "500",
            "file-mtime": "1595002920000000000ns"
        }
    

    To make this work really well, it would also need to handle directories. (A sketch of the file-attribute mapping follows after this issue.)

    enhancement Remote: S3 
    opened by ncw 0
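    A possible shape for that translation, assuming the POSIX attributes are already available from the local backend. The helper below only illustrates building the x-amz-meta-file-* values documented for FSx for Lustre (note the required "ns" suffix on the time values, as quoted above); it is not rclone's actual s3 backend code:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // fsxMetadata translates POSIX-style attributes into the S3 user-metadata
    // keys that FSx for Lustre understands. Times are expressed in nanoseconds
    // and suffixed with "ns"; without the suffix FSx would interpret the value
    // as milliseconds.
    func fsxMetadata(mode uint32, uid, gid int, atime, mtime time.Time) map[string]string {
        return map[string]string{
            "file-permissions": fmt.Sprintf("%07o", mode),                      // eg 0100664, like st_mode
            "file-owner":       strconv.Itoa(uid),                              // UID as integer
            "file-group":       strconv.Itoa(gid),                              // GID as integer
            "file-atime":       strconv.FormatInt(atime.UnixNano(), 10) + "ns", // last access
            "file-mtime":       strconv.FormatInt(mtime.UnixNano(), 10) + "ns", // last modification
        }
    }

    func main() {
        t := time.Unix(0, 1595002920000000000)
        // When uploading, each key would be sent as an x-amz-meta-file-* header.
        fmt.Println(fsxMetadata(0100664, 500, 500, t, t))
    }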
  • Reduce default parallelism to improve stability on slow network backends; Document significance of --checkers and/or split them into finer controls

    Reduce default parallelism to improve stability on slow network backends; Document significance of --checkers and/or split them into finer controls

    What is your current rclone version (output from rclone version)?

    / $ rclone version
    rclone v1.58.1-DEV
    - os/version: alpine 3.15.4 (64 bit)
    - os/kernel: 3.10.0-957.27.2.el7.x86_64 (x86_64)
    - os/type: linux
    - os/arch: amd64
    - go/version: go1.18.2
    - go/linking: dynamic
    - go/tags: none
    

    What problem are you trying to solve?

    Trying to verify whether any file in a large remote repo (a few hundred files) needs any updates. The remote server is an S3-compatible MinIO (containerized, mirekphd/ml-minio22:latest), with distributed network storage on an NFS server (slow). The container with MinIO has a 1000 CPU millicore request (i.e. a soft, burstable limit, not a hard one; not exhausted - this server has many CPU threads available) and a 4 to 40 GiB RAM limit (also not exhausted - most of the limit is free).

    The default rclone multi-threading setting of 8 threads (as controlled by the --checkers argument) is giving us consistent crashes (crash loop back-offs) of the MinIO servers during rclone's "checking" phase, in multiple namespaces and nodes. Error messages in the MinIO 2022 server log indicate an excessive number of threads spawned by the server when servicing rclone requests (approaching a "fork bomb"), e.g. as follows:

    [..]
    runtime: failed to create new OS thread (have 4097 already; errno=11)
    runtime: may need to increase max user processes (ulimit -u)
    fatal error: newosproc
    runtime: failed to create new OS thread (have 4098 already; errno=11)
    runtime: may need to increase max user processes (ulimit -u)
    fatal error: newosproc
    [..]
    

    How do you think rclone should be changed to solve that?

    The default value of the --checkers argument should be reduced, at least to 4 threads, on par with the default number of threads used for transferring files in rclone copy. Such a reduced setting (4 instead of the default 8) has been tested to prevent any such errors for MinIO on NFS in our problematic use cases - namespaces with hundreds of thousands of files (in multiple subfolders and totalling up to 1.2 TB of data in each namespace) - but of course there's no guarantee that it will also hold for millions of files and petabytes of data; maybe a single thread would be most prudent, like in restic?

    I'd also advertise in the rclone documentation that the innocuous --checkers setting controls parallelism in multiple (or even lots of) places in rclone, and ideally add separate, intuitively named arguments for the various places where multi-threading is used, e.g. --threads-num-for-x-feature, --threads-num-for-y-feature, etc. (A sketch of why each checker maps to one concurrent request against the backend follows after this issue.)

    Related

    https://forum.rclone.org/t/rclone-sync-webdav-nextcloud-cpu-max-during-checks/27749

    doc fix 
    opened by mirekphd 6
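    For context on why --checkers maps so directly onto backend load: the checking phase is essentially a pool of N workers, each issuing its own request, so every increment of the flag adds another concurrent request stream hitting the server. A hypothetical sketch of that pattern (illustrative only, not rclone's actual implementation):

    package main

    import (
        "fmt"
        "sync"
    )

    // checkAll runs one check per object, but never more than `checkers`
    // checks at the same time. Each in-flight check corresponds to one
    // concurrent request against the remote, which is why a high value can
    // overwhelm a slow backend such as MinIO on NFS.
    func checkAll(objects []string, checkers int, check func(string)) {
        sem := make(chan struct{}, checkers) // counting semaphore
        var wg sync.WaitGroup
        for _, obj := range objects {
            wg.Add(1)
            sem <- struct{}{} // blocks while `checkers` checks are already running
            go func(obj string) {
                defer wg.Done()
                defer func() { <-sem }()
                check(obj)
            }(obj)
        }
        wg.Wait()
    }

    func main() {
        objects := []string{"a.bin", "b.bin", "c.bin", "d.bin", "e.bin"}
        // With checkers=4 (the proposed default) at most 4 HEAD/list requests
        // would be outstanding at once, instead of the current default of 8.
        checkAll(objects, 4, func(obj string) { fmt.Println("checking", obj) })
    }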