Syncthing is a continuous file synchronization program.

Overview

Syncthing



Goals

Syncthing is a continuous file synchronization program. It synchronizes files between two or more computers. We strive to fulfill the goals below. The goals are listed in order of importance, the most important one being the first. This is the summary version of the goal list - for more commentary, see the full Goals document.

Syncthing should be:

  1. Safe From Data Loss

    Protecting the user's data is paramount. We take every reasonable precaution to avoid corrupting the user's files.

  2. Secure Against Attackers

    Again, protecting the user's data is paramount. Regardless of our other goals we must never allow the user's data to be susceptible to eavesdropping or modification by unauthorized parties.

  3. Easy to Use

    Syncthing should be approachable, understandable and inclusive.

  4. Automatic

    User interaction should be required only when absolutely necessary.

  5. Universally Available

    Syncthing should run on every common computer. We are mindful that the latest technology is not always available to any given individual.

  6. For Individuals

    Syncthing is primarily about empowering the individual user with safe, secure and easy to use file synchronization.

  7. Everything Else

    There are many things we care about that don't make it on to the list. It is fine to optimize for these values, as long as they are not in conflict with the stated goals above.

Getting Started

Take a look at the getting started guide.

There are a few examples for keeping Syncthing running in the background on your system in the etc directory. There are also several GUI implementations for Windows, Mac and Linux.

Docker

To run Syncthing in Docker, see the Docker README.

Vote on features/bugs

We'd like to encourage you to vote on issues that matter to you. This helps the team understand what the biggest pain points are for our users, and could potentially influence what is worked on next.

Getting in Touch

The first and best point of contact is the Forum. There is also an IRC channel, #syncthing on freenode (with a web client), for talking directly to developers and users. If you've found something that is clearly a bug, feel free to report it in the GitHub issue tracker.

Building

Building Syncthing from source is easy, and there's a guide that describes it for both Unix and Windows systems.

Signed Releases

As of v0.10.15 and onwards, release binaries are GPG signed with the key D26E6ED000654A3E, available from https://syncthing.net/security.html and most key servers.

There is also a built-in automatic upgrade mechanism (disabled in some distribution channels) which uses a compiled-in ECDSA signature. macOS binaries are also properly code signed.

Documentation

Please see the Syncthing documentation site.

All code is licensed under the MPLv2 License.

Issues
  • Filesystem Notification - Continuation

    This is the continuation of plouj's PR #2807 on a branch in the syncthing repo for ease of maintenance. Refer to that PR for general information on what this is about and for the commits prior to https://github.com/syncthing/syncthing/commit/1d424d84609a40d2a11ac8e06cb87aac6b130629.

    frozen-due-to-age pr-merged 
    opened by imsodin 184
  • Support for file encryption (e.g. non-trusted servers)

    So I have had a look at BitTorrent Sync, syncthing and alternatives, and what I always wondered about was the possibility of syncing not only between resources I own and trust, but also to external resources/servers which I do NOT trust with my data, at least up to a certain extent.

    One way to do this is using ecryptfs or encfs, but this has many obvious downsides: it is not an interoperable solution (only works on Linux), the files are actually stored in encrypted form on the disk (even if the resource is trusted and this is not necessary, for instance because of the file system being encrypted already), etc.

    What I propose is somehow configuring nodes which are only sent the files in an encrypted format, with all file contents (and potentially file/directory names as well; or even permissions) being encrypted. This way, if I want to store my private files on a fast server in a datacenter to access them from anywhere, I could do this with syncthing without essentially giving up ownership of those files. I could also prevent that particular sync node from being allowed/able to make any changes to the files without me noticing.

    I realize that this requires a LOT of additional effort, but it would be a killer feature that seems not to be available in any other "private cloud" solution so far. What are your thoughts on this feature? (A rough, purely illustrative sketch of the idea follows this item.)

    EDIT: BitTorrent sync mentions a feature like this in their API docs: "Encryption secret API users can generate folder secrets with encrypted peer support. Encryption secrets are read-only. They make Sync data encrypted on the receiver’s side. Recipients can sync files, but they can’t see file content, and they can’t modify the files. Encryption secrets come in handy if you need to sync to an untrusted location." (from http://www.bittorrent.com/intl/de/sync/developers/api)

    enhancement 
    opened by Natanji 143
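
    The following is a purely illustrative sketch of the idea, not Syncthing's actual protocol or storage format: derive a content key from a per-folder secret and seal file data with AES-GCM before it ever reaches the untrusted device, so that device only stores ciphertext. All names and the key derivation are invented for the example; a real design would use a proper KDF and authenticated metadata.

        // Illustrative only: encrypt file content with a key derived from a
        // folder secret before sending it to an untrusted device. This is NOT
        // Syncthing's actual scheme.
        package main

        import (
            "crypto/aes"
            "crypto/cipher"
            "crypto/rand"
            "crypto/sha256"
            "fmt"
            "io"
        )

        // contentKey derives a 32-byte AES key from the folder secret.
        // (A real implementation would use a proper KDF, not a bare hash.)
        func contentKey(folderSecret string) []byte {
            sum := sha256.Sum256([]byte("content-key:" + folderSecret))
            return sum[:]
        }

        // seal encrypts plaintext so the untrusted device only ever sees ciphertext.
        func seal(key, plaintext []byte) ([]byte, error) {
            block, err := aes.NewCipher(key)
            if err != nil {
                return nil, err
            }
            gcm, err := cipher.NewGCM(block)
            if err != nil {
                return nil, err
            }
            nonce := make([]byte, gcm.NonceSize())
            if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
                return nil, err
            }
            // Prepend the nonce so a trusted device can decrypt later.
            return gcm.Seal(nonce, nonce, plaintext, nil), nil
        }

        // open reverses seal on a trusted device.
        func open(key, ciphertext []byte) ([]byte, error) {
            block, err := aes.NewCipher(key)
            if err != nil {
                return nil, err
            }
            gcm, err := cipher.NewGCM(block)
            if err != nil {
                return nil, err
            }
            if len(ciphertext) < gcm.NonceSize() {
                return nil, fmt.Errorf("ciphertext too short")
            }
            nonce, data := ciphertext[:gcm.NonceSize()], ciphertext[gcm.NonceSize():]
            return gcm.Open(nil, nonce, data, nil)
        }

        func main() {
            key := contentKey("example folder secret")
            ct, err := seal(key, []byte("file contents the server must not read"))
            if err != nil {
                panic(err)
            }
            pt, _ := open(key, ct)
            fmt.Printf("ciphertext: %d bytes, roundtrip: %q\n", len(ct), pt)
        }
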
  • Inotify support

    To notice changes more quickly.

    enhancement 
    opened by jpjp 116
  • all: Store pending devices and folders in database (fixes #7178)

    Purpose

    As discussed in #5758 and mentioned on the forum, storing the pending (offered from remote but not yet added locally) devices and folders in the XML configuration is not a nice and scalable design. Instead, the information should live in the database, properly structured, and made available over dedicated API endpoints.

    This is also ground work and practice to finally approach an acceptable implementation of the prototype in #5758, extending the concept to other devices we have heard about in ClusterConfig messages. That is deliberately saved for another PR though.

    Testing

    All code paths have undergone extensive manual testing within an existing setup of four instances (one compiled from this PR), especially regarding the clean-up of pending entries upon config changes, both online through the POST /rest/system/config REST API and offline by modifying the XML while Syncthing is not running. Notifications in the GUI look as before, and the API endpoints have been verified with curl (a rough example of querying them follows this item).

    ~~Unit tests would be rather complicated and were discussed as not strictly needed considering how low the importance of the handled information is.~~ Unit tests included, with a few basic test cases.

    Documentation

    The envisioned API for this stuff is described in https://github.com/syncthing/docs/pull/498.

    The commit messages contain much of the rationale for each change, so I'd be happy to keep them intact instead of squash-merging in a single commit. If desired, I can rebase to clean out obvious fixup commits and end up with a nice patch series of self-contained changes.

    Breaking Changes For The User

    Existing <pendingDevice> / <pendingFolder> entries in the XML config are not carried over to the database. The corresponding notifications will disappear until the next connection attempt with the offering device. It has been discussed that the low importance of this information does not warrant separate logic for config-to-database migrations. One possibility to minimize disruption would be to keep the XML entries intact for some releases and hold back commit 8ae24612397fea6182cfb3d6dcce592c78c6df5d until then.

    opened by acolomb 109
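
    For reference, a rough sketch of querying such an endpoint from Go. The endpoint path below is an assumption based on the API proposed in the docs PR; only the X-API-Key header is standard Syncthing REST usage. Adjust the address, key and path for your own instance.

        // Illustrative only: query a pending-devices style endpoint.
        // The path is assumed from the proposed API, not guaranteed.
        package main

        import (
            "fmt"
            "io"
            "net/http"
            "os"
        )

        func main() {
            req, err := http.NewRequest("GET",
                "http://localhost:8384/rest/cluster/pending/devices", nil)
            if err != nil {
                panic(err)
            }
            // The API key is configured in Syncthing's GUI settings.
            req.Header.Set("X-API-Key", os.Getenv("SYNCTHING_API_KEY"))

            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                panic(err)
            }
            defer resp.Body.Close()

            body, _ := io.ReadAll(resp.Body)
            fmt.Println(resp.Status)
            fmt.Println(string(body)) // JSON describing offered-but-not-added devices
        }
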
  • Option to not sync mtime

    Trying to sync my Android device and my NAS (armv5), both using syncthing v0.10.0:

    On the NTFS mount of the NAS, the file /my/directory/.syncthing.example.test is created, then I get:

    puller: final: chtimes /my/directory/.syncthing.example.test: invalid argument

    File /my/directory/example.test is not created. Syncthing continues with the next file, but the same error happens for all the files in the directory.

    enhancement frozen-due-to-age 
    opened by otrag 108
  • GUI display of global changes

    Purpose

    This PR adds a button to the device section of the GUI that the user can click to see all filesystem changes that originated/occurred on that machine, that is to say, changes not copied from the network. This should help people discover more easily which computer made a certain change that was propagated in larger P2P swarms.

    Screenshots

    (screenshot)

    frozen-due-to-age 
    opened by nrm21 107
  • Folder isn't making progress

    All nodes are on v0.10.2. The GUI says:

    9:35:58: Folder "default" isn't making progress - check logs for possible root cause. Pausing puller for 1m0s.
    9:36:41: Folder "PIA" isn't making progress - check logs for possible root cause. Pausing puller for 1m0s.
    

    In the log I see the "hash mismatch" info, but the file is not changed during the pull. It was probably in that state the whole night, but I don't have more logs; I should probably increase the log size... Logs from the pulling node: http://alex-graf.de/public/st/no-progress.tar.gz

    Restarting also does not fix it.

    opened by alex2108 104
  • lib/connections: Add KCP support (fixes #804)

    This is mostly for benchmarks and testing.

    Seems to connect, and not crash immediately, which is good news.

    Known issues:

    1. No timeouts as part of the protocol, so we usually end up with a dead connection for 5-7.5 minutes until it times out. This could be addressed by always using the new connection that we get, as long as the priority is right.
    2. Has some weirdness around closing connections. See https://github.com/xtaci/kcp-go/issues/18

    The diff is massive due to deps, so I suggest you pull the PR locally and diff inside the lib and cmd/syncthing dirs.

    frozen-due-to-age pr-merged 
    opened by AudriusButkevicius 91
  • gui: add advance config port mapping to gui

    Purpose

    https://github.com/syncthing/syncthing/issues/4824

    Gives the user the option to display a link to the web GUI of remote machines. The user can choose the port and set whether the link is displayed.

    Testing

    I just checked to see whether the link was shown depending on the setting.

    Screenshots

    (two screenshots, taken 2020-09-29 at 11:13 PM and 11:34 PM)

    Documentation

    https://github.com/syncthing/docs/pull/573

    Authorship

    Your name and email will be added automatically to the AUTHORS file based on the commit metadata.

    opened by rjpruitt16 91
  • Memory and CPU usage is prohibitively high

    I've been testing syncthing across 3 machines, a laptop with 8GB of RAM and two NAS-style servers with 1GB and 2GB of RAM. My repositories have the following sizes:

    • 19 items, 536 KiB
    • 83471 items, 16.2 GiB
    • 181482 items, 387 GiB

    To sync these three repositories, syncthing 0.9.0 uses a bit over 700 MB of RAM and, while syncing, continuously pegs the CPU at 150% on all nodes.

    While I could manage the CPU usage during the initial sync, the memory usage is simply too high. A typical NAS server like the two I have holds >4 TB of storage; at the current level of usage that would require ~8 GB of memory just for syncthing.

    Without looking at the code, I assume an index is being kept in memory for all the repository contents. 700 MB is about 2.6 KB per item on disk, which seems way too high. The index should only really need to store the filename, parent, permissions, size and timestamp; on average (depending on filename sizes) that should be only 50-60 bytes per item, which would be only about 13 MB (a back-of-the-envelope sketch follows this item). Moving that index to disk would also make a lot more sense.

    I assume the CPU usage comes from hashing lots of files; there, using librsync might be a better option.

    enhancement frozen-due-to-age 
    opened by pedrocr 86
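
    A back-of-the-envelope sketch of the reporter's estimate. The entry layout below is hypothetical, not Syncthing's actual index schema; it only shows that a minimal in-memory record per file is tens of bytes, not kilobytes.

        // Back-of-the-envelope only: a hypothetical minimal index entry,
        // not Syncthing's actual in-memory or on-disk schema.
        package main

        import (
            "fmt"
            "unsafe"
        )

        type indexEntry struct {
            name    string // file name; the bytes it points to are extra
            parent  uint32 // index of the parent directory entry
            mode    uint32 // permissions
            size    int64
            modTime int64 // Unix timestamp
        }

        func main() {
            // Fixed fields: string header (16) + 2x uint32 + 2x int64 = 40 bytes on 64-bit.
            fmt.Printf("fixed fields: %d bytes per entry, plus the name itself\n",
                int(unsafe.Sizeof(indexEntry{})))

            const items = 19 + 83471 + 181482 // the three repositories above
            const bytesPerItem = 50           // fixed fields + a short average name
            fmt.Printf("%d items x %d B ≈ %.1f MB\n",
                items, bytesPerItem, float64(items*bytesPerItem)/1e6)
        }
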
  • Consistently check config only once

    Currently the config is checked through .prepare multiple times: both when calling ReadJSON and when modifying the config in the wrapper. I propose removing the check on deserialization/reading and doing it only inside the wrapper (when wrapping or modifying/replacing a config). This needs some care to ensure a "raw" config object loaded from disk isn't used anywhere before being wrapped. That use case could be handled by exposing .prepare as e.g. CheckConfig(cfg *Configuration); a rough sketch of that shape follows this item.

    bug needs-triage 
    opened by imsodin 0
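
    A rough sketch of the proposed shape; the names and signatures are assumptions for illustration, not the existing lib/config API.

        // Illustrative only: validation happens exactly once, inside the wrapper,
        // instead of on every read. Names and signatures are assumptions.
        package main

        import "fmt"

        type Configuration struct {
            Version int
            // ... folders, devices, options ...
        }

        // CheckConfig exposes the validation/fix-up step (today's unexported
        // prepare) for the rare callers that must use a raw, unwrapped config.
        func CheckConfig(cfg *Configuration) error {
            if cfg.Version <= 0 {
                return fmt.Errorf("invalid config version %d", cfg.Version)
            }
            // ... migrations, defaults, other sanity checks ...
            return nil
        }

        // Wrapper is the single owner of a checked configuration.
        type Wrapper struct {
            cfg Configuration
        }

        // Wrap checks the config once, as it enters the wrapper.
        func Wrap(cfg Configuration) (*Wrapper, error) {
            if err := CheckConfig(&cfg); err != nil {
                return nil, err
            }
            return &Wrapper{cfg: cfg}, nil
        }

        // Replace re-checks on modification; plain deserialization (ReadJSON)
        // would no longer run the check itself.
        func (w *Wrapper) Replace(cfg Configuration) error {
            if err := CheckConfig(&cfg); err != nil {
                return err
            }
            w.cfg = cfg
            return nil
        }

        func main() {
            w, err := Wrap(Configuration{Version: 1})
            fmt.Println(w != nil, err)
        }
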
  • build(deps): bump golang.org/x/tools from 0.1.5 to 0.1.6

    Bumps golang.org/x/tools from 0.1.5 to 0.1.6.

    Commits
    • 2758b04 gopls/api-diff: create api-diff command for gopls api
    • aba0c5f go/callgraph/vta: optimize scc type initialization
    • 1a7ca93 internal/lsp/source: update SuggestedFixFunc to accept source.Snapshot
    • 76d4494 internal/lsp: fix panic in find references on Error
    • 4ba3eff gopls/doc: remove -tags=typeparams from generic build instructions
    • 02e5238 go/internal/gcimporter: rename instType to instanceType
    • 0cffec9 go/internal/gcimporter: update iimport.go to support type parameters
    • 5492d01 internal/lsp/testdata: update inferred.go to use hoverdef
    • 9207707 go/internal/gcimporter: skip TestIExportData_stdlib on go1.18
    • cd7c003 internal/lsp: add support for hovering runes
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 0
  • Feature/docs request: Add systemd instructions to APT download page

    Just a suggestion to add instructions for enabling the systemd service that comes with the syncthing package to this page (https://apt.syncthing.net/). Otherwise, users may install syncthing and not know why they can't access it in their browser:

    sudo systemctl enable syncthing@<username>.service
    sudo systemctl start syncthing@<username>.service

    opened by springfielddatarecovery 0
  • fs events have wrong paths after renaming a directory, then back to original name

    I'm observing that files aren't syncing immediately from my Linux laptop to my Android (Syncthing-fork) phone. Triggering a rescan on the laptop makes the file sync immediately. This is leading to a lot of sync conflicts in my use case.

    • Syncthing 0.18.1 Arch Linux 5.13.12-arch1-1
    • ext4 fs (root) under lvm under dmcrypt

    Log on changing test file (wikis/ericwiki/2021-09-15.md under Documents folder)

    2021-09-15 13:49:33 aggregator/Documents: Creating eventDir at: c_wikis
    2021-09-15 13:49:33 aggregator/Documents: Creating eventDir at: c_wikis/ericwiki
    2021-09-15 13:49:33 aggregator/Documents: Tracking (type non-remove): c_wikis/ericwiki/2021-09-15.md
    2021-09-15 13:49:33 aggregator/Documents: Already tracked (type non-remove): c_wikis/ericwiki/2021-09-15.md
    2021-09-15 13:49:35 aggregator/Documents: No old fs events
    2021-09-15 13:49:35 aggregator/Documents: Resetting notifyTimer to 10s
    2021-09-15 13:49:45 aggregator/Documents: Notifying about 1 fs events
    2021-09-15 13:49:45 aggregator/Documents: Resetting notifyTimer to 10s
    2021-09-15 13:49:45 sendreceive/[email protected] Scan due to watcher
    2021-09-15 13:49:45 sendreceive/[email protected] scanning
    2021-09-15 13:49:45 log 15483 StateChanged map[duration:19.363062358 folder:Documents from:idle to:scan-waiting]
    2021-09-15 13:49:45 log 15484 StateChanged map[duration:0.363243098 folder:Documents from:scan-waiting to:scanning]
    2021-09-15 13:49:45 sendreceive/[email protected] finished scanning, detected 0 changes
    2021-09-15 13:49:45 log 15485 StateChanged map[duration:0.366131453 folder:Documents from:scanning to:idle]
    2021-09-15 13:49:55 aggregator/Documents: No old fs events
    2021-09-15 13:49:55 aggregator/Documents: Resetting notifyTimer to 10s
    

    Log on triggering rescan:

    2021-09-15 13:50:59 sendreceive/[email protected] Running something due to request
    2021-09-15 13:50:59 sendreceive/[email protected] scanning
    2021-09-15 13:50:59 log 15486 StateChanged map[duration:74.05606869 folder:Documents from:idle to:scan-waiting]
    2021-09-15 13:50:59 log 15487 StateChanged map[duration:0.056147477 folder:Documents from:scan-waiting to:scanning]
    2021-09-15 13:50:59 log 15488 LocalIndexUpdated map[filenames:[wikis/ericwiki/2021-09-15.md] folder:Documents items:1 sequence:1351352 version:1351352]
    2021-09-15 13:50:59 log 15489 LocalChangeDetected map[action:modified folder:Documents folderID:Documents label: modifiedBy:SXOQY6C path:wikis/ericwiki/2021-09-15.md type:file]
    
    bug needs-triage 
    opened by edrex 14
  • "Watch for Changes" does not work if the folder is at the root of the Windows drive.

    Simply nothing happens when I delete files on the remote machine.

    A rescan works and helps as a workaround.

    Could it be an incompatibility with DiscCryptor?

    bug needs-triage 
    opened by hardhub 13
  • Folder marker not located in the folder root is accepted

    A folder marker pointing at the folder root or even outside of it (e.g. ../foo/bar) is accepted. This is unsafe, as it breaks the check against fully disappeared folder contents. If it should be possible to disable that check, then that should be an explicit option (I don't think it should be). A sketch of the kind of path check that could reject such markers follows this item.

    bug needs-triage 
    opened by imsodin 4
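
    A sketch of the kind of path check that could reject such markers; the helper below is hypothetical and not the existing folder-marker code.

        // Illustrative only: reject a folder marker path that is the folder root
        // itself or escapes the root.
        package main

        import (
            "fmt"
            "path/filepath"
            "strings"
        )

        // markerInsideRoot reports whether the (possibly relative) marker path
        // stays strictly inside the folder root.
        func markerInsideRoot(root, marker string) bool {
            root = filepath.Clean(root)
            abs := filepath.Clean(filepath.Join(root, marker))
            if abs == root {
                return false // marker pointing at the root defeats the safety check
            }
            rel, err := filepath.Rel(root, abs)
            if err != nil {
                return false
            }
            return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
        }

        func main() {
            root := "/data/folder"
            for _, m := range []string{".stfolder", "sub/.marker", ".", "../foo/bar"} {
                fmt.Printf("%-14q inside root: %v\n", m, markerInsideRoot(root, m))
            }
        }
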
  • lib/config: Decouple VerifyConfiguration from Committer

    ... and remove 8/10 implementations, which were no-ops. This saves code and the time spent copying configurations. A rough sketch of the proposed interface split follows this item.

    opened by greatroar 0
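
    Roughly the kind of split being proposed; the interfaces below only approximate lib/config and should be read as a sketch, not the real API.

        // Illustrative only: verification becomes an optional interface, so
        // subscribers that never veto a config no longer need a no-op method.
        package main

        import "fmt"

        type Configuration struct{ Version int }

        // Committer is implemented by everything that reacts to config changes.
        type Committer interface {
            CommitConfiguration(from, to Configuration) (handled bool)
        }

        // Verifier is implemented only by the few subscribers that can veto a config.
        type Verifier interface {
            VerifyConfiguration(from, to Configuration) error
        }

        // commit verifies with the Verifiers first, then commits everywhere.
        func commit(subs []Committer, from, to Configuration) error {
            for _, s := range subs {
                if v, ok := s.(Verifier); ok {
                    if err := v.VerifyConfiguration(from, to); err != nil {
                        return err
                    }
                }
            }
            for _, s := range subs {
                s.CommitConfiguration(from, to)
            }
            return nil
        }

        // folderRunner only commits; with the split it needs no VerifyConfiguration stub.
        type folderRunner struct{}

        func (folderRunner) CommitConfiguration(from, to Configuration) bool { return true }

        func main() {
            err := commit([]Committer{folderRunner{}}, Configuration{Version: 1}, Configuration{Version: 2})
            fmt.Println("commit error:", err)
        }
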
  • lib/model, lib/protocol: Don't generate encrypted filenames that are reserved on Windows (fixes #7808)

    Purpose

    Avoid creating reserved filenames by trivially escaping them; a rough sketch of such an escaping step follows this item.

    Testing

    I don't know? Hence this is a draft. How do we test this?

    Badness

    This handles the practicalities, but not really all the metadata. If we have already announced a file under a given bad name, the next update will use a better name, but the old metadata will be around forever, causing a conflict if a new trusted device is set up and receives all metadata from the untrusted device. There will be two entries representing the same trusted-side file.

    opened by calmh 1
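
    A rough sketch of what such trivial escaping could look like; the "~" prefix and the handling below are invented for the example and are not the encoding the PR actually uses.

        // Illustrative only: avoid generating Windows-reserved file names by
        // escaping them. The escape marker is invented for this example.
        package main

        import (
            "fmt"
            "strings"
        )

        // Reserved device names on Windows (case-insensitive, with or without extension).
        var reserved = map[string]bool{
            "CON": true, "PRN": true, "AUX": true, "NUL": true,
            "COM1": true, "COM2": true, "COM3": true, "COM4": true, "COM5": true,
            "COM6": true, "COM7": true, "COM8": true, "COM9": true,
            "LPT1": true, "LPT2": true, "LPT3": true, "LPT4": true, "LPT5": true,
            "LPT6": true, "LPT7": true, "LPT8": true, "LPT9": true,
        }

        // escapeReserved prefixes a marker so the generated name is never reserved.
        func escapeReserved(name string) string {
            base := name
            if i := strings.IndexByte(base, '.'); i >= 0 {
                base = base[:i]
            }
            if reserved[strings.ToUpper(base)] {
                return "~" + name // invented marker; a real scheme must also be reversible
            }
            return name
        }

        func main() {
            for _, n := range []string{"CON", "nul.txt", "report.txt", "com1.dat"} {
                fmt.Printf("%-12s -> %s\n", n, escapeReserved(n))
            }
        }
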
  • build(deps): bump github.com/lib/pq from 1.10.1 to 1.10.3

    Bumps github.com/lib/pq from 1.10.1 to 1.10.3.

    Release notes

    Sourced from github.com/lib/pq's releases.

    v1.10.3

    • implement ConnPrepareContext/StmtQueryContext/StmtExecContext interfaces (context.Cancel() now ends connections)
    • Avoid type assertion to the same type
    • Fix build for illumos and solaris

    v1.10.2

    • fix TimeTZ with second offsets
    • fix GOOS compilation
    Commits
    • 756b4d7 Merge pull request #1053 from EnergySRE/patch-1
    • 8667c6b Merge pull request #1047 from michaelshobbs/feature/context-interfaces
    • 9fa33e2 implement ConnPrepareContext/StmtQueryContext/StmtExecContext interfaces
    • 2140507 Merge pull request #1054 from michaelshobbs/feature/gh-actions
    • e10fdd5 remove travis ci artifacts
    • 7da16d9 mv certs Makefile to certs dir and add explanation
    • b81abde update goimports formatting for go1.17
    • 1c85910 implement gh actions workflow
    • 6ede22e Clarify maintenance mode
    • 9e747ca Merge pull request #1044 from xhit/fix-build-solaris-illumos
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 0
Releases (v1.18.3-rc.1)
transfer.sh - Easy and fast file sharing from the command-line.

Easy and fast file sharing from the command-line. This code contains the server with everything you need to create your own instance.

Dutchcoders 11.7k Sep 19, 2021
A download manager package for Go

grab Downloading the internet, one goroutine at a time! $ go get github.com/cavaliercoder/grab Grab is a Go package for downloading files from the in

Ryan Armstrong 912 Sep 17, 2021
pcp - 📦 Command line peer-to-peer data transfer tool based on libp2p.

pcp - Command line peer-to-peer data transfer tool based on libp2p.

Dennis Trautwein 787 Sep 16, 2021
⟁ Tendermint Core (BFT Consensus) in Go

Tendermint Byzantine-Fault Tolerant State Machines. Or Blockchain, for short. Branch Tests Coverage Linting master Tendermint Core is Byzantine Fault

Tendermint 4.3k Sep 25, 2021
JuiceFS is a distributed POSIX file system built on top of Redis and S3.

JuiceFS is an open-source POSIX file system built on top of Redis and object storage

Juicedata, Inc 3.7k Sep 24, 2021
oDrop, a fast efficient cross-platform file transfer software for server and home environments

oDrop is a cross-platform LAN file transfer software to efficiently transfer files between computers, oDrop is useful in environments where GUI is not available.

Flew Software 15 Aug 26, 2021
Send and receive files securely through Tor.

onionbox A basic implementation of OnionShare in Go. Mostly built as a fun project, onionbox is still a WIP so usage is not guaranteed secure, yet. Ke

Ryan Ciehanski 35 Aug 8, 2021
A convenient file sharing tool

A file sharing tool for sharing files within a local network, directly saturating local bandwidth

JustSong 46 Sep 11, 2021
🐑 Share files at once

🐑 share Share files at once Getting started Step 1. Install go install Step 2. Set your account Create these environment variables. export CH

E99p1ant 4 Jul 26, 2021
A(nother) Bittorrent client written in the go programming language

Taipei Torrent This is a simple command-line-interface BitTorrent client coded in the go programming language. Features: Supports multiple torrent fil

Jack Palevich 802 Aug 16, 2021
Yet another netcat for fast file transfer

nyan Yet another netcat for fast file transfer When I need to transfer a file in safe environment (e.g. LAN / VMs), I just want to use a simple comman

null 5 May 6, 2021
proxyd proxies data between TCP, TLS, and unix sockets

proxyd proxyd proxies data between TCP, TLS, and unix sockets TLS termination: Connecting to a remote application's unix socket: +---------+

Hayden Parker 14 Nov 22, 2019
🌧 BitTorrent client and library in Go

rain BitTorrent client and library in Go. Running in production at put.io. Features Core protocol Fast extension Magnet links Multiple trackers UDP tr

Cenk Altı 631 Sep 23, 2021
fsync - a file sync server

fsync - a file sync server

null 4 May 24, 2021
A web based drag and drop file transfer tool for sending files across the internet.

DnD A file transfer tool. Demo Usage Get go get github.com/0xcaff/dnd or download the latest release (you don't need go to run it) Run dnd Now navig

Martin Charles 16 Feb 10, 2021
FSS3 is an S3 filesystem abstraction layer for Golang

FSS3 is an S3 filesystem abstraction layer for Golang

Ayman Bagabas 45 Aug 30, 2021
BitTorrent DHT Protocol && DHT Spider.

See the video on YouTube. Chinese-language README Introduction DHT implements the bittorrent DHT protocol in Go. Now it includes: BEP-3 (part) BEP-5 BEP-9 BEP-1

Lime 2.4k Sep 24, 2021
a ed2k link crawler written in Go / golang

Install/Run Example $ git clone git://github.com/kevinwatt/ed2kcrawler.git Build ed2kcrawler $ make Create an config.cfg file [GenSetting] OThread =

Kevin Watt 29 Oct 11, 2019