SFTPGo - Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support - S3, Google Cloud Storage, Azure Blob

Overview

SFTPGo

CI Status Code Coverage License: AGPL v3 Docker Pulls Mentioned in Awesome Go

Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support, written in Go. Several storage backends are supported: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, SFTP.

Features

  • Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
  • Virtual folders are supported: a virtual folder can use any of the supported storage backends. So you can have, for example, an S3 user that exposes a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another one. Virtual folders can be private or shared among multiple users; for shared virtual folders you can define different quota limits for each user.
  • Configurable custom commands and/or HTTP hooks on file upload, pre-upload, download, pre-download, delete, pre-delete, rename, mkdir and rmdir, on SSH commands and on user add, update and delete.
  • Virtual accounts stored within a "data provider".
  • SQLite, MySQL, PostgreSQL, CockroachDB, Bolt (key/value store in pure Go) and in-memory data providers are supported.
  • Chroot isolation for local accounts. Cloud-based accounts can be restricted to a certain base path.
  • Per-user and per-directory virtual permissions: for each exposed path you can allow or deny directory listing, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group/file mode.
  • REST API for users and folders management, backup, restore and real-time reports of the active connections, with the ability to forcibly close a connection.
  • Web based administration interface to easily manage users, folders and connections.
  • Web client interface so that end users can change their credentials and browse their files.
  • Public key and password authentication. Multiple public keys per user are supported.
  • SSH user certificate authentication.
  • Keyboard interactive authentication. You can easily set up customizable multi-factor authentication.
  • Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
  • Per user authentication methods.
  • Two-factor authentication based on time-based one-time passwords (RFC 6238), which works with Authy, Google Authenticator and other compatible apps.
  • Custom authentication via external programs/HTTP API.
  • Data At Rest Encryption.
  • Dynamic user modification before login via external programs/HTTP API.
  • Quota support: accounts can have individual quota expressed as max total size and/or max number of files.
  • Bandwidth throttling, with distinct settings for upload and download.
  • Per-protocol rate limiting is supported and can be optionally connected to the built-in defender to automatically block hosts that repeatedly exceed the configured limit.
  • Per user maximum concurrent sessions.
  • Per user and global IP filters: login can be restricted to specific ranges of IP addresses or to a specific IP address.
  • Per-user and per-directory shell-like pattern filters: files can be allowed or denied based on shell-like patterns.
  • Automatic termination of idle connections.
  • Automatic blocklist management using the built-in defender.
  • Atomic uploads are configurable.
  • Per user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (*NIX only).
  • Support for Git repositories over SSH.
  • SCP and rsync are supported.
  • FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
  • WebDAV is supported.
  • Two-Way TLS authentication, aka TLS with client certificate authentication, is supported for REST API/Web Admin, FTPS and WebDAV over HTTPS.
  • Per user protocols restrictions. You can configure the allowed protocols (SSH/FTP/WebDAV) for each user.
  • Prometheus metrics are exposed.
  • Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP/WebDAV service without losing the information about the client's address.
  • Easy migration from Linux system user accounts.
  • Portable mode: a convenient way to share a single directory on demand.
  • SFTP subsystem mode: you can use SFTPGo as OpenSSH's SFTP subsystem.
  • Performance analysis using built-in profiler.
  • Configuration format of your choice: JSON, TOML, YAML, HCL and envfile are supported.
  • Log files are accurate and saved in an easily parsable JSON format (more information).
  • SFTPGo supports a plugin system and therefore can be extended using external plugins.

Platforms

SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS and Windows using a GitHub Action. The test cases are regularly manually executed and passed on FreeBSD. Other *BSD variants should work too.

Requirements

  • Go as a build-only dependency. We support the Go version(s) used in the continuous integration workflows.
  • A suitable SQL server to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x or CockroachDB stable.
  • The SQL server is optional: you can choose to use an embedded bolt database as a key/value store or an in-memory data provider.

Installation

Binary releases for Linux, macOS, and Windows are available. Please visit the releases page.

An official Docker image is available. Documentation is here.

Some Linux distro packages are available:

  • For Arch Linux via AUR:
    • sftpgo. This package follows stable releases. It requires git, gcc and go to build.
    • sftpgo-bin. This package follows stable releases, downloading the prebuilt Linux binary from GitHub. It does not require git, gcc and go to build.
    • sftpgo-git. This package builds and installs the latest git main branch. It requires git, gcc and go to build.
  • Deb and RPM packages are built after each commit and for each release.
  • For Ubuntu a PPA is available here.

SFTPGo is also available on AWS Marketplace; purchasing from there will help keep SFTPGo a sustainable long-term project.

On FreeBSD you can install from the SFTPGo port.

On Windows you can use:

  • The Windows installer to install and run SFTPGo as a Windows service.
  • The portable package to start SFTPGo on demand.

You can easily test new features by selecting a commit from the Actions page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.

Alternatively, you can build from source.

Getting Started Guide for the Impatient.

Configuration

A full explanation of all configuration methods can be found here.

Please make sure to initialize the data provider before running the daemon.

To start SFTPGo with the default settings, simply run:

sftpgo serve

Check out this documentation if you want to run SFTPGo as a service.

Data provider initialization and management

Before starting the SFTPGo server please ensure that the configured data provider is properly initialized/updated.

For the PostgreSQL, MySQL and CockroachDB providers, you need to create the configured database. For SQLite, the configured database will be automatically created at startup. The memory and bolt data providers do not require initialization, but they may require an update to the existing data after upgrading SFTPGo.

SFTPGo will attempt to automatically detect if the data provider is initialized/updated and, if not, will attempt to initialize/update it on startup as needed.

Alternatively, you can create/update the required data provider structures yourself using the initprovider command.

For example, you can simply execute the following command from the configuration directory:

sftpgo initprovider

Take a look at the CLI usage to learn how to specify a different configuration file:

sftpgo initprovider --help

You can disable automatic data provider checks/updates at startup by setting the update_mode configuration key to 1.
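
For example, assuming the JSON configuration format, a minimal fragment that disables the automatic checks might look like this; the update_mode key belongs to the data provider section, but verify the exact layout against the configuration documentation for your version:

```json
{
  "data_provider": {
    "driver": "sqlite",
    "update_mode": 1
  }
}
```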

Create the first admin

To start using SFTPGo you need to create an admin user. You can do this in several ways:

  • by using the web admin interface. The default URL is http://127.0.0.1:8080/web/admin
  • by loading initial data
  • by enabling create_default_admin in your configuration file. In this case the credentials are admin/password
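
If you go the initial-data route, the dump is a JSON document. A minimal, illustrative sketch containing a single admin might look like the following; treat the field names as assumptions and verify them against the backup/restore format of your version:

```json
{
  "admins": [
    {
      "username": "admin",
      "password": "changeme",
      "status": 1,
      "permissions": ["*"]
    }
  ]
}
```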

Upgrading

SFTPGo supports upgrading from the previous release branch to the current one. Some examples for supported upgrade paths are:

  • from 1.2.x to 2.0.x
  • from 2.0.x to 2.1.x and so on.

For supported upgrade paths, the data and schema are migrated automatically; alternatively, you can use the initprovider command.

So if, for example, you want to upgrade from a version before 1.2.x to 2.0.x, you must first install version 1.2.x, update the data provider and finally install version 2.0.x. It is recommended to always install the latest available minor version, i.e. do not install 1.2.0 if 1.2.2 is available.

Loading data from a provider-independent JSON dump is supported from the previous release branch to the current one too. After upgrading SFTPGo it is advisable to regenerate the JSON dump from the new version.

Downgrading

If for some reason you want to downgrade SFTPGo, you may need to downgrade your data provider schema and data as well. You can use the revertprovider command for this task.

As with upgrading, SFTPGo supports downgrading from the current release branch to the previous one.

So, if you plan to downgrade from 2.0.x to 1.2.x, before uninstalling the 2.0.x version you can prepare your data provider by executing the following command from the configuration directory:

sftpgo revertprovider --to-version 4

Take a look at the CLI usage to see the supported values for the --to-version argument and to learn how to specify a different configuration file:

sftpgo revertprovider --help

The revertprovider command is not supported for the memory provider.

Please note that we only support the current release branch and the current main branch; if you find a bug it is better to report it than to downgrade to an older, unsupported version.

Users and folders management

After starting SFTPGo you can manage users and folders using the web-based administration interface or the REST API.

To support embedded data providers like bolt and SQLite we can't have a CLI that writes users and folders directly to the data provider; we always have to use the REST API.

Full details for users, folders, admins and other resources are documented in the OpenAPI schema. If you want to render the schema without importing it manually, you can explore it on Stoplight.
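
As an illustration, a minimal user definition you might POST to the users endpoint of the REST API could look like this; the field names follow the OpenAPI schema, but treat the exact endpoint path and required fields as version-dependent:

```json
{
  "username": "alice",
  "password": "secret",
  "status": 1,
  "home_dir": "/srv/sftpgo/data/alice",
  "permissions": {
    "/": ["*"]
  }
}
```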

Tutorials

Some step-by-step tutorials can be found inside the source tree howto directory.

Authentication options

External Authentication

Custom authentication methods can easily be added. SFTPGo supports external authentication modules, and writing a new backend can be as simple as a few lines of shell script. More information can be found here.

Keyboard Interactive Authentication

Keyboard interactive authentication is, in general, a series of questions asked by the server with responses provided by the client. This authentication method is typically used for multi-factor authentication.

More information can be found here.

Dynamic user creation or modification

A user can be created or modified by an external program just before the login. More information about this can be found here.

Custom Actions

SFTPGo allows you to configure custom commands and/or HTTP hooks to receive notifications about file uploads, deletions and several other events.

More information about custom actions can be found here.

Virtual folders

Directories outside the user's home directory, or based on a different storage provider, can be exposed as virtual folders. More information can be found here.

Other hooks

You can get notified as soon as a new connection is established using the Post-connect hook and after each login using the Post-login hook. You can use your own hook to check passwords.

Storage backends

S3 Compatible Object Storage backends

Each user can be mapped to the whole bucket or to a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about S3 integration can be found here.
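
As a sketch, a user mapped to an S3 bucket carries the S3 settings in its filesystem configuration. The fragment below is illustrative only; the field names and the numeric provider value are assumptions that may differ between versions, so check the linked documentation:

```json
{
  "filesystem": {
    "provider": 1,
    "s3config": {
      "bucket": "my-bucket",
      "region": "eu-west-1",
      "access_key": "AKIAEXAMPLE",
      "access_secret": "secret",
      "key_prefix": "users/alice/"
    }
  }
}
```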

Google Cloud Storage backend

Each user can be mapped to a Google Cloud Storage bucket or a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Google Cloud Storage integration can be found here.

Azure Blob Storage backend

Each user can be mapped to an Azure Blob Storage container or a container virtual folder. This way, the mapped container/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Azure Blob Storage integration can be found here.

SFTP backend

Each user can be mapped to another SFTP server account or a subfolder of it. More information can be found here.

Encrypted backend

Data at-rest encryption is supported via the cryptfs backend.

Other Storage backends

Adding new storage backends is quite easy:

  • implement the Fs interface
  • update the user method GetFilesystem to return the new backend
  • update the web interface and the REST API CLI
  • add the flags for the new storage backend to the portable mode

Note that some backends require a pay-per-use account (or they offer a free account for a limited time period only). To be able to add support for such backends or to review pull requests, please provide a test account. The test account must be available for enough time to maintain the backend and do basic tests before each new release.

Brute force protection

The logged connection failures can be used for integration with tools such as Fail2ban. Examples of jails and filters working with systemd/journald are available in the fail2ban directory.
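
As an illustrative sketch, a jail entry in jail.local could look like the following; the filter name and thresholds here are assumptions, so use the filters actually shipped in the fail2ban directory:

```
[sftpgo]
enabled  = true
backend  = systemd
filter   = sftpgo
maxretry = 3
bantime  = 600
```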

You can also use the built-in defender.

Account's configuration properties

Detailed information about account configuration properties can be found here.

Performance

SFTPGo can easily saturate a Gigabit connection on low-end hardware with no special configuration; this is generally enough for most use cases.

More in-depth analysis of performance can be found here.

Release Cadence

SFTPGo releases are feature-driven; we don't have a fixed, time-based schedule. As a rough estimate, you can expect 1 or 2 new releases per year.

Acknowledgements

SFTPGo makes use of the third party libraries listed inside go.mod.

We are very grateful to all the people who contributed with ideas and/or pull requests.

Thank you ysura for granting me stable access to a test AWS S3 account.

Sponsors

I'd like to make SFTPGo into a sustainable long term project and your sponsorship will really help ❤️

Thank you to our sponsors!

License

GNU AGPLv3

Comments
  • Occasional Malfunction of Connection

    Occasionally (about once every 12-18 hours) the server will stop accepting the correct username and password. It requires a service reboot to fix. This is the closest set of logs to the latest attempt to login while in this error state:

    {"level":"debug","sender":"sftpd","connection_id":"","time":"2019-09-11T03:27.05.832","message":"idle connections check ticker 2019-09-11 03:27:05.832582724 +0000 UTC m=+44100.001673267"}
    {"level":"info","sender":"sftpd","connection_id":"6508f97ceb9b67f8e7fc4928cd9b0cd10615a47024a8279611e9e8f3f6ef24b4","time":"2019-09-11T03:27.05.832","message":"close idle connection, idle time: 6h27m4.548987039s"}
    {"level":"warn","sender":"sftpd","connection_id":"6508f97ceb9b67f8e7fc4928cd9b0cd10615a47024a8279611e9e8f3f6ef24b4","time":"2019-09-11T03:27.05.832","message":"idle connection close failed: close tcp 172.31.18.230:22->148.59.44.16:64168: use of closed network connection"}
    {"level":"info","sender":"sftpd","connection_id":"8fe250f6489ada67fcb4161871743f4358892324ce8841dcbe4903eabfcc2a59","time":"2019-09-11T03:27.05.832","message":"close idle connection, idle time: 6h27m4.535060735s"}
    {"level":"warn","sender":"sftpd","connection_id":"8fe250f6489ada67fcb4161871743f4358892324ce8841dcbe4903eabfcc2a59","time":"2019-09-11T03:27.05.832","message":"idle connection close failed: close tcp 172.31.18.230:22->148.59.44.16:64169: use of closed network connection"}
    {"level":"debug","sender":"sftpd","connection_id":"","time":"2019-09-11T03:27.05.832","message":"check idle connections ended"}
    

    Any ideas? Is there any way to stop it trying to close idle connections? Perhaps that's the problem?

    bug 
    opened by jdmcd 48
  • Relatively lower performance than OpenSSH

    Hi, Thanks for this great project !

    I did some tests in my environment and the transfer speed is much lower than OpenSSH's.

    Server:

    • OS: Debian 10.2 x64
    • CPU: Ryzen5 3600
    • RAM: 64GB ECC
    • Disk: 3x Intel P4510 4TB RAID0
    • Ethernet: Mellanox ConnectX-3 40GbE

    Client:

    • OS: Windows 10 1909 x64
    • CPU: Threadripper 1920X
    • RAM: 64GB ECC
    • Disk: Samsung 960EVO 1TB
    • Ethernet: Mellanox ConnectX-3 40GbE

    Under Filezilla I can get 500MB/s with OpenSSH, but only about 200MB/s with sftpgo.

    In both cases I'm using AES256-CTR as the cipher and SHA-256 as the MAC; I've also tried AES128-CTR but nothing changes.

    CPU usage of sftpgo is higher than OpenSSH:

      PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND 
     4527 sftp      20   0 1795576  52044   8628 R 133.5   0.6   2:12.13 sftpgo 
    
      PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND 
    27934 xxxxxx    20   0   17112   5360   4188 R  67.8   0.1   0:10.01 sshd                                                  
    27942 xxxxxx    20   0   17112   5344   4176 R  27.4   0.1   0:12.52 sshd 
    

    In both cases I've got a maximum TCP window size of 4MB.

    performance 
    opened by HiFiPhile 45
  • Account Permissions and sub-dirs

    I have a strange issue with an account:

    My user defitinion is as follows:

          {
            "id": 1,
            "status": 1,
            "username": "test",
            "password": "***",
            "home_dir": "/sftp",
            "uid": 1100,
            "gid": 1100,
            "permissions": {
              "/": [
                "list"
              ],
              "/sftp/test/upload: [
                "list",
                "upload",
                "download",
                "rename",
                "create_dirs",
                "delete"
              ]
            }
          },
    

    Although I can list the contents of all directories in /sftp, I wanted to be able to upload to /sftp/test/upload with the above definition. Am I doing something incorrect here?

    Using edge

    enhancement 
    opened by ryjogo 44
  • Add Dockerfile and GitHub Actions workflow

    This PR adds a Dockerfile to the project. It's largely based on my previous Go applications and the existing examples in the repo. It's not perfect though, so we might need some fine tuning in the future, particularly around volumes and configuration.

    Open questions:

    • What are backups? Are they supposed to be saved to persistent storage or are they only meaningful during the lifecycle of the application?
    • Where does the credential path point? What's being read from there? (Existing dockerfiles contain no reference)

    Currently the above settings can only be overridden from environment variables, because defaults are defined for them in env vars. (I think that's an acceptable compromise though).

    I'll work with this image anyway, so I'll provide patches along the way.

    The PR also adds a GitHub Actions workflow for building Docker images based on the official example (I removed a few parts):

    https://github.com/docker/build-push-action#complete-workflow

    After merging the PR, @drakkan , you'll need to do some manual steps though:

    1. Login to ghcr.io: docker login ghcr.io. You'll need a GH PAT for that (with package write scope)
    2. Build the image locally: docker build -t ghcr.io/drakkan/sftpgo:edge .
    3. Push the image docker push ghcr.io/drakkan/sftpgo:edge
    4. Go to packages under your account
    5. Make the package public
    6. Configure a secret called CR_PAT with package write scope under secrets
    7. Uncomment the relevant parts in docker.yml (login and push) (happy to review a PR)

    These steps are necessary, because GitHub packages seem to be super private for the first time, so you need to push and publish manually. (Although rereading the above: you should also be fine to do step 6 and 7 first, since CR_PAT will be your token anyway, and then step 5 to publish the package. I had some trouble with packages under organizations, but this is under your personal account, so it should be fine)

    Sorry about the complicated process, let me know if you need any help.

    opened by sagikazarmark 42
  • How to install sftpGO with Traefik

    Hello, I would like to install SFTPGo with Traefik; the problem is that it does not work.

    I access the dashboard without problems, but when I connect with FileZilla over SFTP it doesn't work and I really don't understand why...?

    My configuration :

    traefik.yml (screenshot)

    docker-compose.yml

    version: '3.3'
    services:
      ftp:
        container_name: ftp
        image: drakkan/sftpgo:edge-alpine-slim
        networks:
          network-traefik:
            ipv4_address: 192.168.128.xxx
        labels:
          - "traefik.enable=true"
          - "traefik.tcp.routers.ftp.rule=HostSNI(*)"
          - "traefik.tcp.routers.ftp.entrypoints=sftp"
          - "traefik.tcp.services.ftp.loadbalancer.server.port=2022"
          - "traefik.http.routers.uiftp.rule=Host(MYDOMAIN.FR)"
          - "traefik.http.routers.uiftp.entrypoints=https"
          - "traefik.http.routers.uiftp.tls.certresolver=http"
          - "traefik.http.services.uiftp.loadbalancer.server.port=8080"

    networks:
      network-traefik:
        external: true

    I'm really stuck help me! :D

    opened by WiFEED 37
  • Error with WebDAV to SFTP Backend, works with SFTP to SFTP Backend

    Hey there @drakkan, thanks so much for your work on this, it's a great tool!

    I'm trying to connect to an SFTP server on the backend (server using OpenSSH) but am getting error SSH_FX_FAILURE when doing so. It only affects WebDAV. Without changing any settings, if I open the virtual folder using an SFTP connection, it shows the backend server's file structure fine.

    Computer via SFTP > SFTPGo > Virtual Folder SFTP Backend > Success
    Computer via WebDAV > SFTPGo > Virtual Folder SFTP Backend > Fails with SSH_FX_FAILURE


    Figured out what is causing this. If a folder is symlinked, SFTP doesn't find out about it until that folder is opened, whereas WebDAV, at least in Windows, tries to access that folder upon opening its parent folder.

    Computer via SFTP > SFTPGo > Virtual Folder SFTP Backend: accessing /parent with the /parent/inaccessible subdirectory inside of it works until I open /parent/inaccessible; any accessible folders are fine

    Computer via WebDAV > SFTPGo > Virtual Folder SFTP Backend: accessing /parent with the /parent/inaccessible subdirectory inside of it does not let me open /parent at all; I cannot access any folders as long as at least 1 folder is inaccessible

    Error from log: Oct 16 00:07:35 control sftpgo[29746]: {"level":"info","time":"2021-10-16T00:07:35.724","sender":"webdavd","elapsed_ms":86,"method":"PROPFIND","proto":"HTTP/2.0","remote_addr":"<redacted>","request_id":"c5l5s5p6q9grffh6gui0","uri":"https://mySFTPGOserver/virtualfolder","user_agent":"Microsoft-WebDAV-MiniRedir/10.0.19043","error":"sftp: \"Failure\" (SSH_FX_FAILURE)"}

    Oct 16 00:43:43 control sftpgo[31376]: {"level":"warn","time":"2021-10-16T00:43:43.931","sender":"sftpfs \"<redacted>"","connection_id":"DAV_c5l6d3p6q9gri73uier0","message":"Invalid path resolution, dir \"/real/path/to/404\" original path \"/website/public_html/404\" resolved \"/home/user/web/website/public_html/404\" err: path \"/real/path/to/404\" is not inside: \"/home/user/web\""}


    Also discovered another issue. If you're using WebDAV -> SFTPGo -> S3 Backend, a password of Abcde!+ will not work but Abcde will. If using Abcde!+, the logs say "invalid credentials". If you remove the S3 backend virtual folder, without changing anything else, you can successfully log in without error (local folders work fine).

    Thank you

    opened by asheroto 30
  • Consider making the sdk package a separate module

    It's a common practice to make packages that are supposed to be consumed by third party code (eg. plugins) separate modules. (Example: api packages in most Hashicorp software).

    This would make the SDK package lighter in terms of dependencies (it doesn't need all deps of SFTPGo).

    At the same time, until plugins are experimental, it might be easier to keep it in the same module (although making an existing package a module later has its own quirks).

    opened by sagikazarmark 29
  • Symlinks are not supported

    I am running a server on a local dir:

    sftpgo portable -d ~/Base/shared --permissions 'list,download' --username 'x' --password 'x' --webdav-port 8114 --sftpd-port 8115 --ftpd-port 8116 --log-verbose --log-file-path sftpgo.log
    

    I symlink other directories I want to share into ~/Base/shared, but sftpgo does not handle the symlinks correctly. (I know sftpgo supports virtual dirs, but I use that ~/Base/shared for other services, too, and I do not want to duplicate my config.)

    PS: That command also does not start the webdav server, saying

    {"level":"debug","time":"2021-03-05T10:49:41.514","sender":"service","message":"WebDAV server not started, disabled in config file"}
    
    wontfix 
    opened by NightMachinery 29
  • [FTP] Invalid passive IP gives error `index out of range`

    I configured the passive port range, but first forgot, then misplaced the passive ip configuration. Sftpgo then gives a confusing "index out of range" error, probably because it's searching for the passive ip and not finding it?

    opened by jovandeginste 25
  • Simplify configuration file/dir

    Currently configuring SFTPGo is a bit complicated in the various environments it might run in (thanks to the available Docker image).

    Namely, the fact that the config dir is tied to a bunch of other things (host keys, sqlite, etc) and that you cannot pass a full path to a config file is a bit uncomfortable.

    To give you a concrete example: when running on Kubernetes, I can mount a config map to a certain path (eg. /etc/sftpgo), containing an sftpgo.yaml file. The problem is, by default, SFTPGo will try to write default host keys to the config dir, but that mount is read only. Right now I solve this by manually setting the path to host keys.

    I see two potential improvements:

    • allow passing a full path to SFTPGo as a flag (analogous to viper.SetConfigFile)
    • introduce a separate directory for host keys and other stuff (although there are already enough directories)

    The first approach would allow mounting a config map under something like /etc/sftpgo/config (and since --config-file is already taken it would allow passing --config-path /etc/sftpgo/config/sftpgo.yaml) and would not interfere with the current config dir concept (/etc/sftpgo would still be writable).

    enhancement 
    opened by sagikazarmark 25
  • webdav

    hello, I use sftpgo as a bridge to an S3 endpoint that moves a file elsewhere when it is uploaded. Using WebDAV and uploading a file with Windows, Explorer says:

    (screenshot)

    this happens because the file is no longer available and I guess a stat is done that fails. Using curl, for obvious reasons, it is OK:

    > PUT /fake0/out/test.bin HTTP/1.1
    > Authorization: Basic XYZ
    > User-Agent: curl/7.29.0
    > Host: FQDN:2080
    > Accept: */*
    > Content-Length: 441
    > Expect: 100-continue
    >
    < HTTP/1.1 100 Continue
    * We are completely uploaded and fine
    < HTTP/1.1 201 Created
    < Etag: "1641d9285d8c3d961b9"
    < Date: Tue, 27 Oct 2020 12:28:29 GMT
    < Content-Length: 7
    < Content-Type: text/plain; charset=utf-8
    <
    * Connection #0 to host FQDN left intact
    
    

    could it be a cache problem (assuming you use one) in the WebDAV driver that sftpgo uses?

    thnx a lot.

    enhancement 
    opened by fdefilippo 25
  • Unable to remove per-directory permissions on group via REST API

    When I update a secondary group's ACL with the REST API call PUT /groups/{name}, new per-directory permissions are merged into the existing ones: I'm able to add or change them but not to delete them.

    This happens because the request's JSON is unmarshalled directly into the pre-existing group in the httpd.updateGroup func.

    I submitted a new PR with the same code taken from httpd.updateUser.

    opened by g-t-u 0
  • Bump golang.org/x/time from 0.2.0 to 0.3.0

    Bumps golang.org/x/time from 0.2.0 to 0.3.0.

    Commits
    • 2c09566 rate: the state of the limiter should not be changed when the requests failed
    • See full diff in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies go 
    opened by dependabot[bot] 1
  • Bump golang.org/x/sys from 0.2.0 to 0.3.0

    Bumps golang.org/x/sys from 0.2.0 to 0.3.0.

    Commits
    • 3ca3b18 windows: add GetLargePageMinimum
    • d684c6f execabs: isGo119ErrDot: use errors.Is instead of string-matching
    • See full diff in compare view

    Labels: dependencies, go · opened by dependabot[bot] · 1 comment
  • How to support OAuth2 external login

    How to support OAuth2 external login

    Thank you very much for developing this tool. I read the documentation carefully: the admin/client page external login only supports OpenID Connect, but our internal authentication is OAuth2. Can you give me some tips? Thank you.

    Opened by a497134710 · 1 comment
  • Duplicate Upload Actions for same file

    Duplicate Upload Actions for same file

    Hi!

    I'm trying to investigate an issue where we are seeing two 'upload' actions get dispatched for the same file. The payload of each request contains identical data, including the same session ID. [screenshots]

    For this file, we only got one 'pre-upload' action. The payloads of both upload actions indicate that the file was successfully uploaded with the correct file size.

    I seem to notice this issue more when the system is under load via JMeter. We are using the default DB (SQLite) and I have seen errors related to SQLite 'context deadline exceeded'. We have considered moving to either the in-memory or Bolt DB. I'm not sure if the duplicate hooks getting dispatched could be related to the DB being under extreme load - perhaps triggering a retry of the upload hook.

    Let me know what I can do to help :)

    Thanks!

    Edit: system info: running on Azure "Premium SSD". [screenshots]

    Opened by JoeAtThru · 6 comments
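The Dependabot PRs listed above are typical of the Go-modules update flow. As a hedged sketch only, a minimal `.github/dependabot.yml` that would produce PRs like these might look as follows; the schedule, directory, and labels here are assumptions for illustration, not SFTPGo's actual repository settings:

```yaml
# Hypothetical Dependabot configuration; values are illustrative
# assumptions, not taken from the SFTPGo repository.
version: 2
updates:
  - package-ecosystem: "gomod"   # watch go.mod / go.sum
    directory: "/"               # repository root
    schedule:
      interval: "weekly"
    labels:
      - "dependencies"
      - "go"
```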
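A common way to make a hook endpoint robust against a duplicated or retried 'upload' notification, as described in the issue above, is to deduplicate on a key derived from the payload. The sketch below is only an illustration of that idea: the fields `Action`, `SessionID`, and `Path` are assumptions about the relevant parts of the hook payload, not SFTPGo's exact schema.

```go
package main

import (
	"fmt"
	"sync"
)

// hookEvent models the parts of a notification payload used for
// deduplication. The field names are illustrative assumptions,
// not SFTPGo's actual hook schema.
type hookEvent struct {
	Action    string
	SessionID string
	Path      string
}

// dedup remembers which events have already been processed, so a
// retried or duplicated hook delivery becomes a no-op.
type dedup struct {
	mu   sync.Mutex
	seen map[string]bool
}

func newDedup() *dedup {
	return &dedup{seen: make(map[string]bool)}
}

// firstDelivery reports whether this event has not been seen before,
// recording it as seen if so.
func (d *dedup) firstDelivery(e hookEvent) bool {
	key := e.Action + "|" + e.SessionID + "|" + e.Path
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.seen[key] {
		return false
	}
	d.seen[key] = true
	return true
}

func main() {
	d := newDedup()
	e := hookEvent{Action: "upload", SessionID: "abc123", Path: "/file.txt"}
	fmt.Println(d.firstDelivery(e)) // true: first delivery, process it
	fmt.Println(d.firstDelivery(e)) // false: duplicate, skip it
}
```

In a real deployment the seen-set would need a size bound or TTL (and persistence, if duplicates can span restarts); the in-memory map here only demonstrates the idempotency pattern.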
Releases (v2.4.2)

Owner: Nicola Murino
Related projects

• TurtleDex: a decentralized cloud storage platform that radically alters the landscape of cloud storage. By leveraging smart contracts, client-side encryption, and sophisticated redundancy (via Reed-Solomon codes), TurtleDex allows users to safely store their data with hosts that they do not know or trust. (TurtleDev, 531 stars, May 29, 2021)
• go-services-qingstor: QingStor Object Storage service support for go-storage. Install with go get github.com/minhjh/go-service-qingstor/v3. (minhjh, 0 stars, Dec 13, 2021)
• Simple File Storage Server: a simple file storage server; users can upload, delete, and list files on the server. (BH_Lin, 0 stars, Jan 19, 2022)
• Longhorn: cloud-native distributed storage built on and for Kubernetes. (Longhorn.io, 4.2k stars, Nov 30, 2022)
• Storj: ongoing Storj v3 development; decentralized cloud object storage that is affordable, easy to use, private, and secure. (Storj, 2.6k stars, Dec 6, 2022)
• Rook: an open source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments. (Rook, 27 stars, Oct 25, 2022)
• s3git: git for cloud storage; distributed version control for data. Create decentralized and versioned repos that scale infinitely to 100s of millions of files; clone huge PB-scale repos on your local SSD to make changes, commit and push back. It dedupes too and offers directory versioning. (s3git, 1.4k stars, Nov 27, 2022)
• postgredis: a Redis-compatible server with a PostgreSQL storage backend. (Ivan Elfimov, 1 star, Nov 8, 2021)
• Perkeep (née Camlistore): your personal storage system for life; a way to store, sync, share, import, model, and back up content. (Perkeep, 6.1k stars, Dec 3, 2022)
• TGStore: an encrypted object storage system with unlimited space backed by Telegram. (The golang.design Initiative, 77 stars, Nov 28, 2022)
• tstorage: a lightweight local on-disk storage engine for time-series data with a straightforward API. (Ryo Nakao, 833 stars, Dec 7, 2022)
• storage: a storage interface for local disk or the AWS S3 (or Minio) platform. (Bo-Yi Wu, 14 stars, Apr 19, 2022)
• terraform-provider-minio: a Terraform provider for Minio, a self-hosted object storage server compatible with S3. (Refaktory, 9 stars, Dec 1, 2022)
• sbercloud-csi-obs: a Container Storage Interface (CSI) for the S3-compatible SberCloud Object Storage Service. (Vitaly, 2 stars, Feb 17, 2022)
• void: a zero-storage-cost large file sharing system. (Changkun Ou, 6 stars, Nov 22, 2021)
• MinIO: high-performance, Kubernetes-native object storage released under the GNU Affero General Public License v3.0, API compatible with Amazon S3. (Multi-Cloud Object Storage, 36.5k stars, Dec 2, 2022)
• Rook (storage orchestration for Kubernetes): an open source cloud-native storage orchestrator for Kubernetes. (Rook, 10.6k stars, Dec 7, 2022)
• MinIO (fork): high-performance object storage released under Apache License v2.0, API compatible with Amazon S3. (null, 1 star, Sep 30, 2021)