IPFS implementation in Go


go-ipfs


What is IPFS?

IPFS is a global, versioned, peer-to-peer filesystem. It combines good ideas from previous systems such as Git, BitTorrent, Kademlia, SFS, and the Web. It is like a single BitTorrent swarm, exchanging git objects. IPFS provides an interface as simple as the HTTP web, but with permanence built-in. You can also mount the world at /ipfs.

For more info see: https://docs.ipfs.io/introduction/overview/

Before opening an issue, consider using one of the following locations to ensure you are opening your thread in the right place:


Security Issues

The IPFS protocol and its implementations are still in heavy development. This means that there may be problems in our protocols, or there may be mistakes in our implementations. And -- though IPFS is not production-ready yet -- many people are already running nodes on their machines. So we take security vulnerabilities very seriously. If you discover a security issue, please bring it to our attention right away!

If you find a vulnerability that may affect live deployments -- for example, by exposing a remote execution exploit -- please send your report privately to [email protected]. Please DO NOT file a public issue.

If the issue is a protocol weakness that cannot be immediately exploited or something not yet deployed, just discuss it openly.

Install

The canonical download instructions for IPFS are over at: https://docs.ipfs.io/guides/guides/install/. It is highly recommended you follow those instructions if you are not interested in working on IPFS development.

System Requirements

IPFS can run on most Linux, macOS, and Windows systems. We recommend running it on a machine with at least 2 GB of RAM and 2 CPU cores (go-ipfs is highly parallel). On systems with less memory, it may not be completely stable.

If your system is resource-constrained, we recommend:

  1. Installing OpenSSL and rebuilding go-ipfs manually with make build GOTAGS=openssl. See the download and compile section for more information on compiling go-ipfs.
  2. Initializing your daemon with ipfs init --profile=lowpower
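
Together, those two steps look like this:

make build GOTAGS=openssl
ipfs init --profile=lowpower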

Install prebuilt packages

We host prebuilt binaries over at our distributions page.

From there:

  • Click the blue "Download go-ipfs" button on the right side of the page.
  • Open/extract the archive.
  • Move the ipfs binary into your $PATH (install.sh can do this for you).

You can also download go-ipfs from this project's GitHub releases page if you are unable to access ipfs.io.

From Linux package managers

Arch Linux

On Arch Linux, go-ipfs is available as the go-ipfs package.

$ sudo pacman -S go-ipfs

The development version of go-ipfs is also available on the AUR as go-ipfs-git. You can install it using your favorite AUR helper or manually from the AUR.

Nix

On Linux and macOS you can use the purely functional package manager Nix:

$ nix-env -i ipfs

You can also install the package by its attribute name, which is also ipfs.

Guix

GNU's functional package manager, Guix, also provides a go-ipfs package:

$ guix package -i go-ipfs

Solus

On Solus, go-ipfs is available in the main repository as go-ipfs.

$ sudo eopkg install go-ipfs

You can also install it through the Solus software center.

Snap

With snap, in any of the supported Linux distributions:

$ sudo snap install ipfs

From Windows package managers

Chocolatey

The ipfs package currently points to go-ipfs and is actively maintained.

PS> choco install ipfs

Scoop

Scoop provides go-ipfs in its 'extras' bucket.

PS> scoop bucket add extras
PS> scoop install go-ipfs

Build from Source

go-ipfs's build system requires Go 1.14.4 or higher and some standard POSIX build tools:

  • GNU make
  • Git
  • GCC (or another Go-compatible C compiler) (optional)

To build without GCC, build with CGO_ENABLED=0 (e.g., make build CGO_ENABLED=0).

Install Go

The build process for ipfs requires Go 1.14.4 or higher. If you don't have it: Download Go 1.14+.

You'll need to add Go's bin directories to your $PATH environment variable, e.g., by adding these lines to your /etc/profile (for a system-wide installation) or $HOME/.profile:

export PATH=$PATH:/usr/local/go/bin
export PATH=$PATH:$GOPATH/bin

(If you run into trouble, see the Go install instructions).
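
You can then confirm the toolchain is visible (the version output shown is illustrative):

$ go version
go version go1.14.4 linux/amd64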

Download and Compile IPFS

$ git clone https://github.com/ipfs/go-ipfs.git

$ cd go-ipfs
$ make install

Alternatively, you can run make build to build the go-ipfs binary (storing it in cmd/ipfs/ipfs) without installing it.

NOTE: If you get an error along the lines of "fatal error: stdlib.h: No such file or directory", you're missing a C compiler. Either re-run make with CGO_ENABLED=0 or install GCC.

Cross Compiling

Compiling for a different platform is as simple as running:

make build GOOS=myTargetOS GOARCH=myTargetArchitecture

OpenSSL

To build go-ipfs with OpenSSL support, append GOTAGS=openssl to your make invocation. Building with OpenSSL should significantly reduce the background CPU usage on nodes that frequently make or receive new connections.

Note: OpenSSL requires CGO support and, by default, CGO is disabled when cross-compiling. To cross-compile with OpenSSL support, you must:

  1. Install a compiler toolchain for the target platform.
  2. Set the CGO_ENABLED=1 environment variable.
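
Putting that together, a cross-compile with OpenSSL support might look like this (the target and compiler name are illustrative; substitute your platform's toolchain):

make build GOOS=linux GOARCH=arm64 GOTAGS=openssl CGO_ENABLED=1 CC=aarch64-linux-gnu-gcc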

Troubleshooting

  • Separate instructions are available for building on Windows.
  • git is required in order for go get to fetch all dependencies.
  • Package managers often contain out-of-date golang packages. Ensure that go version reports at least 1.14.4. See above for how to install go.
  • If you are interested in development, please install the development dependencies as well.
  • WARNING: Older versions of OSX FUSE (for Mac OS X) can cause kernel panics when mounting! We strongly recommend you use the latest version of OSX FUSE. (See https://github.com/ipfs/go-ipfs/issues/177)
  • For more details on setting up FUSE (so that you can mount the filesystem), see the docs folder.
  • Shell command completion is available in misc/completion/ipfs-completion.bash. Read docs/command-completion.md to learn how to install it.
  • See the misc folder for how to connect IPFS to systemd or whatever init system your distro uses.

Updating go-ipfs

Using ipfs-update

IPFS has an updating tool that can be accessed through ipfs update. The tool is not installed alongside IPFS in order to keep that logic independent of the main codebase. To install ipfs-update, download it here.

Downloading IPFS builds using IPFS

List the available versions of go-ipfs:

$ ipfs cat /ipns/dist.ipfs.io/go-ipfs/versions

Then, to view available builds for a version from the previous command ($VERSION):

$ ipfs ls /ipns/dist.ipfs.io/go-ipfs/$VERSION

To download a given build of a version:

$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_darwin-386.tar.gz # darwin 32-bit build
$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_darwin-amd64.tar.gz # darwin 64-bit build
$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_freebsd-amd64.tar.gz # freebsd 64-bit build
$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_linux-386.tar.gz # linux 32-bit build
$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_linux-amd64.tar.gz # linux 64-bit build
$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_linux-arm.tar.gz # linux arm build
$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_windows-amd64.zip # windows 64-bit build

Getting Started

See also: https://docs.ipfs.io/introduction/usage/

To start using IPFS, you must first initialize IPFS's config files on your system; this is done with ipfs init. See ipfs init --help for information on the optional arguments it takes. After initialization is complete, you can use ipfs mount, ipfs add, and any of the other commands to explore!
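
For example, a first session looks like this (the daemon runs in the foreground, so give it its own terminal):

ipfs init
ipfs daemon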

Some things to try

Basic proof of 'ipfs working' locally:

echo "hello world" > hello
ipfs add hello
# This should output a hash string that looks something like:
# QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o
ipfs cat <that hash>

Usage

  ipfs - Global p2p merkle-dag filesystem.

  ipfs [<flags>] <command> [<arg>] ...

SUBCOMMANDS
  BASIC COMMANDS
    init          Initialize local IPFS configuration
    add <path>    Add a file to IPFS
    cat <ref>     Show IPFS object data
    get <ref>     Download IPFS objects
    ls <ref>      List links from an object
    refs <ref>    List hashes of links from an object

  DATA STRUCTURE COMMANDS
    dag           Interact with IPLD DAG nodes
    files         Interact with files as if they were a unix filesystem
    object        Interact with dag-pb objects (deprecated, use 'dag' or 'files')
    block         Interact with raw blocks in the datastore

  ADVANCED COMMANDS
    daemon        Start a long-running daemon process
    mount         Mount an IPFS read-only mount point
    resolve       Resolve any type of name
    name          Publish and resolve IPNS names
    key           Create and list IPNS name keypairs
    dns           Resolve DNS links
    pin           Pin objects to local storage
    repo          Manipulate the IPFS repository
    stats         Various operational stats
    p2p           Libp2p stream mounting
    filestore     Manage the filestore (experimental)

  NETWORK COMMANDS
    id            Show info about IPFS peers
    bootstrap     Add or remove bootstrap peers
    swarm         Manage connections to the p2p network
    dht           Query the DHT for values or peers
    ping          Measure the latency of a connection
    diag          Print diagnostics

  TOOL COMMANDS
    config        Manage configuration
    version       Show IPFS version information
    update        Download and apply go-ipfs updates
    commands      List all available commands
    cid           Convert and discover properties of CIDs
    log           Manage and show logs of running daemon

  Use 'ipfs <command> --help' to learn more about each command.

  ipfs uses a repository in the local file system. By default, the repo is located at
  ~/.ipfs. To change the repo location, set the $IPFS_PATH environment variable:

    export IPFS_PATH=/path/to/ipfsrepo

Running IPFS inside Docker

An IPFS docker image is hosted at hub.docker.com/r/ipfs/go-ipfs. To make files visible inside the container you need to mount a host directory with the -v option to docker. Choose a directory that you want to use to import/export files from IPFS. You should also choose a directory to store IPFS files that will persist when you restart the container.

export ipfs_staging=</absolute/path/to/somewhere/>
export ipfs_data=</absolute/path/to/somewhere_else/>

Start a container running ipfs and expose ports 4001, 5001 and 8080:

docker run -d --name ipfs_host -v $ipfs_staging:/export -v $ipfs_data:/data/ipfs -p 4001:4001 -p 4001:4001/udp -p 127.0.0.1:8080:8080 -p 127.0.0.1:5001:5001 ipfs/go-ipfs:latest

Watch the ipfs log:

docker logs -f ipfs_host

Wait for ipfs to start. ipfs is running when you see:

Gateway (readonly) server
listening on /ip4/0.0.0.0/tcp/8080

You can now stop watching the log.

Run ipfs commands:

docker exec ipfs_host ipfs <args...>

For example: connect to peers

docker exec ipfs_host ipfs swarm peers

Add files:

cp -r <something> $ipfs_staging
docker exec ipfs_host ipfs add -r /export/<something>

Stop the running container:

docker stop ipfs_host

When starting a container running ipfs for the first time with an empty data directory, it will call ipfs init to initialize configuration files and generate a new keypair. At this time, you can choose which profile to apply using the IPFS_PROFILE environment variable:

docker run -d --name ipfs_host -e IPFS_PROFILE=server -v $ipfs_staging:/export -v $ipfs_data:/data/ipfs -p 4001:4001 -p 4001:4001/udp -p 127.0.0.1:8080:8080 -p 127.0.0.1:5001:5001 ipfs/go-ipfs:latest

Private swarms inside Docker

It is possible to initialize the container with a swarm key file (/data/ipfs/swarm.key) using the variables IPFS_SWARM_KEY and IPFS_SWARM_KEY_FILE. IPFS_SWARM_KEY creates swarm.key with the contents of the variable itself, whilst IPFS_SWARM_KEY_FILE copies the key from a path stored in the variable. If both are set, IPFS_SWARM_KEY_FILE overrides the key provided by IPFS_SWARM_KEY.

docker run -d --name ipfs_host -e IPFS_SWARM_KEY=<your swarm key> -v $ipfs_staging:/export -v $ipfs_data:/data/ipfs -p 4001:4001 -p 4001:4001/udp -p 127.0.0.1:8080:8080 -p 127.0.0.1:5001:5001 ipfs/go-ipfs:latest

The swarm key initialization can also be done using docker secrets (requires docker swarm or docker-compose):

cat your_swarm.key | docker secret create swarm_key_secret -
docker run -d --name ipfs_host --secret swarm_key_secret -e IPFS_SWARM_KEY_FILE=/run/secrets/swarm_key_secret -v $ipfs_staging:/export -v $ipfs_data:/data/ipfs -p 4001:4001 -p 4001:4001/udp -p 127.0.0.1:8080:8080 -p 127.0.0.1:5001:5001 ipfs/go-ipfs:latest

Key rotation inside Docker

If needed, it is possible to do key rotation in an ephemeral container that is temporarily executing against a volume that is mounted under /data/ipfs:

# given container named 'ipfs-test' that persists repo at /path/to/persisted/.ipfs
$ docker run -d --name ipfs-test -v /path/to/persisted/.ipfs:/data/ipfs ipfs/go-ipfs:v0.7.0 
$ docker stop ipfs-test  

# key rotation works like this (old key saved under 'old-self')
$ docker run --rm -it -v /path/to/persisted/.ipfs:/data/ipfs ipfs/go-ipfs:v0.7.0 key rotate -o old-self -t ed25519
$ docker start ipfs-test # will start with the new key

Troubleshooting

If you have installed IPFS before and you are running into problems getting a newer version to work, try deleting (or backing up somewhere else) your IPFS config directory (~/.ipfs by default) and rerunning ipfs init. This will reinitialize the config file to its defaults and clear out the local datastore of any bad entries.
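
A conservative way to do that, assuming the default repo location:

mv ~/.ipfs ~/.ipfs.backup
ipfs init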

Please direct general questions and help requests to our forum or our IRC channel (freenode #ipfs).

If you believe you've found a bug, check the issues list and, if you don't see your problem there, either come talk to us on IRC (freenode #ipfs) or file an issue of your own!

Packages

This table is generated using the module package-table with package-table --data=package-list.json.

Listing of the main packages used in the IPFS ecosystem. There are also three specifications worth linking here:

Libp2p

  • go-libp2p: p2p networking library
  • go-libp2p-pubsub: pubsub built on libp2p
  • go-libp2p-kad-dht: dht-backed router
  • go-libp2p-pubsub-router: pubsub-backed router

Multiformats

  • go-cid: CID implementation
  • go-multiaddr: multiaddr implementation
  • go-multihash: multihash implementation
  • go-multibase: multibase implementation

Files

  • go-unixfs: the core 'filesystem' logic
  • go-mfs: a mutable filesystem editor for unixfs
  • go-ipfs-posinfo: helper datatypes for the filestore
  • go-ipfs-chunker: file chunkers

Exchange

  • go-ipfs-exchange-interface: exchange service interface
  • go-ipfs-exchange-offline: (dummy) offline implementation of the exchange service
  • go-bitswap: bitswap protocol implementation
  • go-blockservice: service that plugs a blockstore and an exchange together

Datastores

  • go-datastore: datastore interfaces, adapters, and basic implementations
  • go-ipfs-ds-help: datastore utility functions
  • go-ds-flatfs: a filesystem-based datastore
  • go-ds-measure: a metric-collecting database adapter
  • go-ds-leveldb: a leveldb-based datastore
  • go-ds-badger: a badgerdb-based datastore

Namesys

  • go-ipns: IPNS datastructures and validation logic

Repo

  • go-ipfs-config: go-ipfs config file definitions
  • go-fs-lock: lockfile management functions
  • fs-repo-migrations: repo migrations

IPLD

  • go-block-format: block interfaces and implementations
  • go-ipfs-blockstore: blockstore interfaces and implementations
  • go-ipld-format: IPLD interfaces
  • go-ipld-cbor: IPLD-CBOR implementation
  • go-ipld-git: IPLD-Git implementation
  • go-merkledag: IPLD-Merkledag implementation (and then some)

Commands

  • go-ipfs-cmds: CLI & HTTP commands library
  • go-ipfs-files: CLI & HTTP commands library
  • go-ipfs-api: an old, stable shell for the IPFS HTTP API
  • go-ipfs-http-client: a new, unstable shell for the IPFS HTTP API
  • interface-go-ipfs-core: core go-ipfs API interface definitions

Metrics & Logging

  • go-metrics-interface: metrics collection interfaces
  • go-metrics-prometheus: prometheus-backed metrics collector
  • go-log: logging framework

Generics/Utils

  • go-ipfs-routing: routing (content, peer, value) helpers
  • go-ipfs-util: the kitchen sink
  • go-ipfs-addr: utility functions for parsing IPFS multiaddrs

For brevity, we've omitted most go-libp2p, go-ipld, and go-multiformats packages. These package tables can be found in their respective projects' READMEs:

Development

Some places to get you started on the codebase:

Map of go-ipfs Subsystems

WIP: This is a high-level architecture diagram of the various sub-systems of go-ipfs. To be updated with how they interact. Anyone who has suggestions is welcome to comment here on how we can improve this!

CLI, HTTP-API, Architecture Diagram

Description: Dotted means "likely going away". The "Legacy" parts are thin wrappers around some commands to translate between the new system and the old system. The grayed-out parts on the "daemon" diagram are there to show that the code is all the same, it's just that we turn some pieces on and some pieces off depending on whether we're running on the client or the server.

Testing

make test

Development Dependencies

If you make changes to the protocol buffers, you will need to install the protoc compiler.

Developer Notes

Find more documentation for developers in the docs folder.

Contributing

We ❤️ all our contributors; this project wouldn’t be what it is without you! If you want to help out, please see CONTRIBUTING.md.

This repository falls under the IPFS Code of Conduct.

You can contact us on the freenode #ipfs-dev channel or attend one of our weekly calls.

License

The go-ipfs project is dual-licensed under Apache 2.0 and MIT terms:

Comments
  • Extract and rework commands package


    This PR rips out commands/, changes a lot of stuff and puts it in go-ipfs-cmds. This PR is WIP and primarily here for reviewing.

    see #3524 and go-ipfs-cmds

    Some remarks:

    Backwards compatibility

    We don't want to rewrite all of core/commands at once, so I built a shimming layer that allows us to use the current go-ipfs/commands.Command structs (with some limitations). All the shimming code is in go-ipfs-cmds/legacy.go. That is a lot of code and in the medium term it would be great to dump it. For that we need to use the new commands library in all of core/commands. That is not a short-term project though.

    Shared code

    There is quite a lot of shared code between go-ipfs/commands and go-ipfs-cmds. To reduce the number of type conversions, I moved most of it to go-ipfs-cmds/cmdsutil. I don't think this is a very good name though and it might change in the future. $pkg/${pkg}util usually contains code that operates on types in $pkg, and here it's the other way around: cmdsutil contains a lot of basic tools (e.g. an error type) that are used by both go-ipfs-cmds and go-ipfs/commands. Maybe go-ipfs-cmds/core is better?

    Basic model

    Most changes I made affected the type Response, which I split up into Response and ResponseEmitter. A producer is given a ResponseEmitter on which it can call Emit(v) several times (essentially a channel write) - or just once if v is an io.Reader. These values are received by a consumer holding the corresponding Response, by calling v, err := res.Next(). These can be chained at will - which actually happens when there is a PostRun.
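
    To make this concrete, here's a minimal channel-backed sketch of the model; the type and function names mirror the description but are illustrative only, not the actual go-ipfs-cmds API:

    package main

    import "fmt"

    // Illustrative only: a channel-backed ResponseEmitter/Response pair.
    type ResponseEmitter struct{ ch chan interface{} }
    type Response struct{ ch chan interface{} }

    func NewChanResponsePair() (*ResponseEmitter, *Response) {
        ch := make(chan interface{})
        return &ResponseEmitter{ch}, &Response{ch}
    }

    func (re *ResponseEmitter) Emit(v interface{}) { re.ch <- v }
    func (re *ResponseEmitter) Close()             { close(re.ch) }

    // Next blocks until the producer emits another value or closes the stream.
    func (r *Response) Next() (interface{}, error) {
        v, ok := <-r.ch
        if !ok {
            return nil, fmt.Errorf("end of response stream")
        }
        return v, nil
    }

    func main() {
        re, res := NewChanResponsePair()
        go func() { // producer: a few Emit calls, then close
            re.Emit("hello")
            re.Emit("world")
            re.Close()
        }()
        for { // consumer: drain until the stream ends
            v, err := res.Next()
            if err != nil {
                break
            }
            fmt.Println(v)
        }
    }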

    Marshalers, Encoders, PostRun

    When I started this project I complained about PostRun. I still think it's a bit weird, but it fulfills an important use case. In go-ipfs/commands, the Marshal function takes the value passed to SetOutput and does its magic to build a bytestream from that. In go-ipfs-cmds there is no singular value; instead we operate on streams. Encode() is very similar, but it operates on a single emitted value and is called once per call to Emit. This means that no state is kept between emitted values. If you want to, e.g., not abort on errors and print an error digest after everything completes, you need to use PostRun.

    PostRun takes a request and the actual ResponseEmitter and returns a new ResponseEmitter. Usually the first thing that happens in a PostRun is to create a ResponseEmitter/Response pair backed by a channel. In a goroutine we read from that Response, do our thing, and send it to the underlying ResponseEmitter. After calling the goroutine we return the channel-backed ResponseEmitter. Usually the call to PostRun looks like this:

    re = cmd.PostRun[enc](req, re)
    // ...
    cmd.Run(req, re)
    

    That way the returned ResponseEmitter will be used by the Run function and all emitted values will pass through PostRun before ending up at the final destination (usually a cmds/cli.ResponseEmitter). This allows building flexible pipelines.

    Changes to the sharness tests

    I changed some small stuff in the sharness tests. I don't intend to keep these changes, but they are currently handy. The changes basically include a lot of debug output and a few disabled tests, because my computer doesn't like tests that use nc. I don't know why, but they also fail for master. Maybe you can try to run them? That is t0060, t0061 and t0235. Remove the test_done from the test's preamble.

    I also wasn't able to test fuse and docker.


    That was a lot of explanation. Let me know what you think!

    opened by keks 122
  • Standard URI for ipfs and ipns protocols (Discussion)


    I would like to add ipfs support to a tool that expects a URL-format specification. Hypothetically, let's say I wanted to add ipfs support to curl. I would need a scheme:data format specification that follows the standard URL format.

    I asked about this on irc and immediately folks started trying to direct me away from URLs to the multiaddr spec. Setting aside for the moment that I'm not clear what problem multiaddr is trying to solve or why URLs aren't appropriate, some tools will simply require a URL format to operate.

    In the absence of any other suggestions, I would like to suggest that we document the following standard forms:

    • ipfs:<hash>[/<path>] for IPFS objects, as in:

      ipfs:QmPXME1oRtoT627YKaDPDQ3PwA8tdP9rWuAAweLzqSwAWT/readme
      
    • ipns:<hash>[/path] for IPNS names:

      ipns:QmXfrS3pHerg44zzK6QKQj6JDk8H6cMtQS7pdXbohwNQfK/pages/gpg.md
      
    need/community-input 
    opened by larsks 90
  • Pinning new cbor object doesn't appear to work


    Version information:

    go-ipfs version: 0.4.5-dev-4cb236c
    Repo version: 4
    System version: amd64/linux
    Golang version: go1.7.1

    Type:

    Bug

    Priority:

    P0

    Description:

    Pinning a new cbor object created using block.put doesn't appear to work. To reproduce:

    >> echo -e "\x4b\x67\x27\x64\x61\x79\x20\x49\x50\x46\x53\x21" | ipfs block put --format=cbor
    zdpuAue4NBRG6ZH5M7aJvvdjdNbFkwZZCooKWM1m2faRAodRe
    >> echo -e "\xd9\x01\x02\x58\x25\xa5\x03\x22\x12\x20\x65\x96\x50\xfc\x34\x43\xc9\x16\x42\x80\x48\xef\xc5\xba\x45\x58\xdc\x86\x35\x94\x98\x0a\x59\xf5\xcb\x3c\x4d\x84\x86\x7e\x6d\x31" | ipfs block put --format=cbor
    zdpuApNFmG7PZ53BWxwix4HztiVDHomrvdJLTegycZb8YU5Qr
    >> ipfs pin add -r zdpuApNFmG7PZ53BWxwix4HztiVDHomrvdJLTegycZb8YU5Qr
    >> ipfs repo gc
    >> ipfs block get zdpuApNFmG7PZ53BWxwix4HztiVDHomrvdJLTegycZb8YU5Qr
    >> ipfs block get zdpuAue4NBRG6ZH5M7aJvvdjdNbFkwZZCooKWM1m2faRAodRe
    

    The gc should NOT remove the two blocks added (it currently removes both). And the subsequent gets should succeed. The first block is just a cbor byte array of 'gday IPFS!' The second is just a cbor merkle link to /ipfs/zdpuAue4N...

    N.B. I may not have the correct serialization for the merkle link, but as far as I can tell it is correct (a cbor tag of 258 for the multiaddr)

    kind/bug topic/repo 
    opened by ianopolous 88
  • Windows mount support


    Tracking/discussion issue for the topic of implementing ipfs mount on Windows. This is a broad topic, all comments and criticisms are welcome here as long as they help drive us towards a solution. Nothing is set in stone yet and we'd like to implement this correctly.

    Currently we have no first party support for this:

    Error: Mount isn't compatible with Windows yet

    For third party, I'm aware of these projects: https://github.com/alexpmorris/dipfs https://github.com/richardschneider/net-ipfs-mount both utilize the IPFS API and Dokany(a Windows FUSE alternative).

    dipfs does not appear to be maintained. In my experience it will succeed in mounting IPFS as a drive letter and allows you to traverse IPFS paths via the CLI; however, it hangs in Explorer (likely from trying to access metadata) or when trying to read data through any means.

    net-ipfs-mount is currently being maintained. This is our best contender: everything appears to work as intended unless you have a non-tiny pinset. When trying to traverse IPFS paths with a large enough pinset, passing the list of pins from the API to net-ipfs-mount can take long enough for Windows to deem /ipfs inaccessible. For local testing you can run ipfs pin ls and see how long it takes to return.
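
    A quick way to measure this locally:

    time ipfs pin ls --type=recursive > /dev/null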


    I would like to start an initiative aimed at getting Windows mounting on par with the other platforms. That is to say, at the very least, exposing read-only access to /ipfs and /ipns constructs. If possible, it would be nice to extend the feature set to expose a writable MFS root as well, similar to this https://github.com/tableflip/ipfs-fuse

    Most likely, this will mean implementing first party support for mount in go-ipfs, utilizing core APIs where possible.

    Mention of https://github.com/billziss-gh/winfsp has come up as an alternative to Dokany. It's likely that winfsp's native API will be our target; however, this is not locked down yet. If you have opinions on Windows filesystem APIs, positive or negative, please post them here.

    cc: @mgoelzer, @alanshaw, @dryajov, @mrlindblom, @Kubuxu, @whyrusleeping

    Edit: forgot to cc: @alexpmorris, @richardschneider

    topic/windows 
    opened by djdv 85
  • Reduce memory usage


    As part of our resource consumption reduction milestone, let's make an effort to get the idle memory usage of an IPFS node down below 100 MB.

    Things that could help here are:

    • writing the peerstore to disk
    • smarter garbage collection of provider records
    • fewer goroutines per peer connection
    • writing bitswap wantlists to disk

    READ BEFORE COMMENTING

    Please make sure to upgrade to the latest version of go-ipfs before chiming in. Memory usage still needs to be reduced but this gets better every release.

    status/deferred 
    opened by whyrusleeping 75
  • [http_proxy_over_p2p]


    This implements an http-proxy over p2p-streams, for context see https://github.com/ipfs/go-ipfs/issues/5341.

    This script is a useful test of the functionality. In case it causes portability issues I've not included it as a sharness test (since it uses python to serve HTTP content, although I'm happy to add it since python is ~ as available as bash).

    (inline since GH doesn't support *.sh as an attachment)

    #!/bin/bash

    #
    # clean up all the things started in this script
    #
    function teardown() {
        jobs -p | xargs kill -9 ;
    }
    trap teardown INT EXIT

    #
    # serve the thing over HTTP
    #
    SERVE_PATH=$(mktemp -d)
    echo "YOU ARE THE CHAMPION MY FRIEND" > $SERVE_PATH/index.txt
    cd $SERVE_PATH
    # serve this on port 8000
    python -m SimpleHTTPServer 8000 &
    
    
    cd -
    
    IPFS=cmd/ipfs/ipfs
    
    PATH1=$(mktemp -d)
    PATH2=$(mktemp -d)
    
    RECEIVER_LOG=$PATH1/log.log
    SENDER_LOG=$PATH2/log.log
    
    export IPFS_PATH=$PATH1
    
    #
    # start RECEIVER IPFS daemon
    #
    $IPFS init >> $RECEIVER_LOG 2>&1
    $IPFS config --json Experimental.Libp2pStreamMounting true >> $RECEIVER_LOG 2>&1
    $IPFS config --json Addresses.API "\"/ip4/127.0.0.1/tcp/6001\"" >> $RECEIVER_LOG 2>&1
    $IPFS config --json Addresses.Gateway "\"/ip4/127.0.0.1/tcp/8081\"" >> $RECEIVER_LOG 2>&1
    $IPFS config --json Addresses.Swarm "[\"/ip4/0.0.0.0/tcp/7001\", \"/ip6/::/tcp/7001\"]" >> $RECEIVER_LOG 2>&1
    $IPFS daemon >> $RECEIVER_LOG 2>&1 &
    # wait for daemon to start.. maybe?
    # ipfs id returns empty string if we don't wait here..
    sleep 5
    
    #
    # start a p2p listener on RECEIVER to the HTTP server with our content
    #
    $IPFS p2p listen /x/test /ip4/127.0.0.1/tcp/8000 >> $RECEIVER_LOG 2>&1
    FIRST_ID=$($IPFS id -f "<id>")
    
    export IPFS_PATH=$PATH2
    $IPFS init >> $SENDER_LOG 2>&1
    $IPFS config --json Experimental.Libp2pStreamMounting true >> $SENDER_LOG 2>&1
    $IPFS daemon >> $SENDER_LOG 2>&1 &
    # wait for daemon to start.. maybe?
    sleep 5
    
    
    
    # send an http request to SENDER via proxy to RECEIVER that will proxy to the web-server
    
    echo "******************"
    echo proxy response
    echo "******************"
    curl http://localhost:5001/proxy/http/$FIRST_ID/test/index.txt
    
    
    
    echo "******************"
    echo link http://localhost:5001/proxy/http/$FIRST_ID/test/index.txt
    echo "******************"
    echo "RECEIVER IPFS LOG " $RECEIVER_LOG
    echo "******************"
    cat $RECEIVER_LOG
    
    echo "******************"
    echo "SENDER IPFS LOG " $SENDER_LOG
    echo "******************"
    cat $SENDER_LOG
    
    
    opened by cboddy 69
  • Go-IPFS 0.5.0 Release


    Release: https://dist.ipfs.io#go-ipfs

    We're happy to announce go-ipfs 0.5.0, ...

    🗺 What's left for release

    • [x] Merge https://github.com/ipfs/go-ipfs/issues/6870 (punted till RC2 as these changes should be pretty safe).
    • [x] Fix libp2p address filtering https://github.com/ipfs/go-ipfs/issues/6995
      • Will be fixed by upgrading QUIC, see below.
    • [x] Upgrade the QUIC transport (downgraded as it was incompatible with go 1.13).
    • [x] Prevent unreachable nodes from joining the DHT by default, and ensure that nodes running in VPNs, disconnected LANs, etc. continue working. See: https://github.com/libp2p/go-libp2p/issues/803 and https://github.com/libp2p/go-libp2p-kad-dht/issues/564 for context.
    • [x] Bitswap leak when sending cancels https://github.com/ipfs/go-bitswap/issues/341. Only shows up with long-lived peers in bitswap heavy machines (e.g., gateways).
    • [x] Sharness test passing that shouldn't be passing: https://github.com/ipfs/go-ipfs/issues/7117
    • [x] Change in bitswap (?) is causing issues writing to flatfs on windows: https://github.com/ipfs/go-ipfs/issues/7115.
    • [x] Pubsub race condition https://github.com/libp2p/go-libp2p-pubsub/issues/294
    • [x] Websocket race: https://github.com/libp2p/go-ws-transport/issues/86
    • [x] QUIC update to fix small bugs.

    🔦 Highlights

    UNDER CONSTRUCTION

    This release includes many important changes users should be aware of.

    New DHT

    This release includes an almost completely rewritten DHT implementation with a new protocol version. From a user's perspective, providing content, finding content, and resolving IPNS records should simply get faster. However, this is a significant (albeit well tested) change and significant changes are always risky, so heads up.

    Old v. New

    The current DHT suffers from three core issues addressed in this release:

    1. Most peers in the DHT cannot be dialed (e.g., due to firewalls and NATs). Much of a DHT query's time is wasted trying to connect to peers that cannot be reached.
    2. The DHT query logic doesn't properly terminate when it hits the end of the query and, instead, aggressively keeps on searching.
    3. The routing tables are poorly maintained. This can cause a search that should be logarithmic in the size of the network to be linear.

    Reachable

    We have addressed the problem of undialable nodes by having nodes wait to join the DHT as "server" nodes until they've confirmed that they are reachable from the public internet. Additionally, we've introduced:

    • A new libp2p protocol to push updates to our peers when we start/stop listening on protocols.
    • A libp2p event bus for processing updates like these.
    • A new DHT protocol version. New DHT nodes will not admit old DHT nodes into their routing tables. Old DHT nodes will still be able to issue queries against the new DHT, they just won't be queried or referred by new DHT nodes. This way, old, potentially unreachable nodes with bad routing tables won't pollute the new DHT.

    Unfortunately, there's a significant downside to this approach: VPNs, offline LANs, etc. where all nodes on the network have private IP addresses and never communicate over the public internet. In this case, none of these nodes would be "publicly reachable".

    To address this last point, go-ipfs 0.5.0 will run two DHTs: one for private networks and one for the public internet. That is, every node will participate in a LAN DHT and a public WAN DHT.

    RC2 NOTE: All the features not enabled in RC1 have been enabled in RC2.

    RC1 NOTE: Several of these features have not been enabled in RC1:

    1. We haven't yet switched the protocol version and will be running the DHT in "compatibility mode" with the old DHT. Once we flip the switch and enable the new protocol version, we will need to ensure that at least 20% of the publicly reachable DHT speaks the new protocol, all at once. The plan is to introduce a large number of "booster" nodes while the network transitions.
    2. We haven't yet introduced the split LAN/WAN DHTs. We're still testing this approach and considering alternatives.
    3. Because we haven't introduced the LAN/WAN DHT split, IPFS nodes running in DHT server mode will continue to run in DHT server mode without waiting to confirm that they're reachable from the public internet. Otherwise, we'd break IPFS nodes running DHTs in VPNs and disconnected LANs.

    Query Logic

    We've fixed the DHT query logic by correctly implementing Kademlia (with a few tweaks). This should significantly speed up:

    • Publishing IPNS & provider records. We previously continued searching for closer and closer peers to the "target" until we timed out, then we put to the closest peers we found.
    • Resolving IPNS addresses. We previously continued IPNS record searches until we ran out of peers to query, timed out, or found 16 records.

    In both cases, we now continue until we find the closest peers, then stop.

    Routing Tables

    Finally, we've addressed the poorly maintained routing tables by:

    • Reducing the likelihood that the connection manager will kill connections to peers in the routing table.
    • Keeping peers in the routing table, even if we get disconnected from them.
    • Actively and frequently querying the DHT to keep our routing table full.

    Testing

    The DHT rewrite was made possible by our new testing framework, testground, which allows us to spin up multi-thousand node tests with simulated real-world network conditions. With testground and some custom analysis tools, we were able to gain confidence that the new DHT implementation behaves correctly.

    Refactored Bitswap

    This release includes a major bitswap refactor running a new, but backwards compatible, bitswap protocol. We expect these changes to improve performance significantly.

    With the refactored bitswap, we expect:

    • Few to no duplicate blocks when fetching data from other nodes speaking the new protocol.
    • Better parallelism when fetching from multiple peers.

    Note: the new bitswap won't magically make downloading content any faster until both seeds and leeches have updated. If you're one of the first to upgrade to 0.5.0 and try downloading from peers that haven't upgraded, you're unlikely to see much of a performance improvement, if any.

    Provider Record Changes

    When you add content to your IPFS node, you advertise this content to the network by announcing it in the DHT. We call this "providing".

    However, go-ipfs has multiple ways to address the same underlying bytes. Specifically, we address content by content ID (CID), and the same underlying bytes can be addressed using (a) two different versions of CIDs (CIDv0 and CIDv1) and (b) different "codecs" depending on how we're interpreting the data.

    Prior to go-ipfs 0.5.0, we used the content id (CID) in the DHT when sending out provider records for content. Unfortunately, this meant that users trying to find data announced using one CID wouldn't find nodes providing the content under a different CID.

    In go-ipfs 0.5.0, we're announcing data by multihash, not CID. This way, regardless of the CID version used by the peer adding the content, the peer trying to download the content should still be able to find it.
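
    One way to see that both CID versions wrap the same multihash is to convert between them (the CID is the readme example used earlier in this document; output omitted):

    $ ipfs cid format -v 1 -b base32 QmPXME1oRtoT627YKaDPDQ3PwA8tdP9rWuAAweLzqSwAWT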

    Warning: as the network upgrades, this could impact finding content added with CIDv1. Because go-ipfs 0.5.0 will announce and search for content using the bare multihash (equivalent to the v0 CID), go-ipfs 0.5.0 will be unable to find CIDv1 content published by nodes prior to go-ipfs 0.5.0, and vice versa. As CIDv1 is not enabled by default, we believe this will have minimal impact. However, users are strongly encouraged to upgrade as soon as possible.

    IPFS/Libp2p Address Format

    If you've ever run a command like ipfs swarm peers, you've likely seen paths that look like /ip4/193.45.1.24/tcp/4001/ipfs/QmSomePeerID. These paths are not file paths; they're multiaddrs, addresses of peers on the network.

    Unfortunately, /ipfs/Qm... is also the same path format we use for files. This release changes the multiaddr format from /ip4/193.45.1.24/tcp/4001/ipfs/QmSomePeerID to /ip4/193.45.1.24/tcp/4001/p2p/QmSomePeerID to make the distinction clear.

    What this means for users:

    • Old-style multiaddrs will still be accepted as inputs to IPFS.
    • If you were using a multiaddr library (go, js, etc.) to name files because /ipfs/QmSomePeerID looks like /ipfs/QmSomeFile, your tool may break if you upgrade this library.
    • If you're manually parsing multiaddrs and are searching for the string /ipfs/..., you'll need to search for /p2p/....

    Minimum RSA Key Size

    Previously, IPFS did not enforce a minimum RSA key size. In this release, we've introduced a minimum 2048 bit RSA key size. IPFS generates 2048 bit RSA keys by default so this shouldn't be an issue for anyone in practice. However, users who explicitly chose a smaller key size will not be able to communicate with new nodes.

    Unfortunately, some of the bootstrap peers intentionally generated 1024 bit RSA keys so they'd have vanity peer addresses (starting with QmSoL for "solar net"). All IPFS nodes should also have peers with >= 2048 bit RSA keys in their bootstrap list, but we've introduced a migration to ensure this.

    We implemented this change to follow security best practices and to remove a potential foot-gun. However, in practice, the security impact of allowing insecure RSA keys should have been next to none because IPFS doesn't trust other peers on the network anyways.

    Subdomain Gateway

    The gateway will redirect from http://localhost:8080/ipfs/CID/... to http://CID.ipfs.localhost:8080/... by default. This will:

    • Ensure that every dapp gets its own browser origin.
    • Make it easier to write websites that "just work" with IPFS because absolute paths will now work.

    Paths addressing the gateway by IP address (http://127.0.0.1:8080/ipfs/CID) will not be altered, as IP addresses can't have subdomains.

    Note: cURL doesn't follow redirects by default. To avoid breaking cURL and other clients that don't support redirects, go-ipfs will return the requested file along with the redirect. Browsers will follow the redirect and abort the download while cURL will ignore the redirect and finish the download.
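
    For example, you can ask cURL to follow the redirect explicitly (the CID is a placeholder):

    curl -L http://localhost:8080/ipfs/<CID>/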

    TLS By Default

    In this release, we're switching TLS to be the default transport. This means we'll try to encrypt the connection with TLS before re-trying with SECIO.

    Contrary to the announcement in the go-ipfs 0.4.23 release notes, this release does not remove SECIO support to maintain compatibility with js-ipfs.

    SECIO Deprecation Notice

    SECIO should be considered well on the way to deprecation; it will be completely disabled in either the next release (0.6.0, ~mid May) or the one following (0.7.0, ~end of June). Before SECIO is disabled, support will be added for the NOISE transport for compatibility with other IPFS implementations.

    QUIC Upgrade

    If you've been using the experimental QUIC support, this release upgrades to a new and incompatible version of the QUIC protocol (draft 27). Old and new go-ipfs nodes will still interoperate, but not over the QUIC transport.

    We intend to standardize on this draft of the QUIC protocol and enable QUIC by default in the next release if all goes well.

    RC2 NOTE: QUIC has been upgraded back to the latest version.

    RC1 NOTE: We've temporarily backed out of the new QUIC version because it currently requires go 1.14 and go 1.14 has some scheduler bugs that go-ipfs can reliably trigger.

    Badger Datastore

    In this release, we're declaring the badger datastore (enabled at initialization with ipfs init --profile=badgerds) stable. However, we're not yet enabling it by default.

    The benefit of badger is that adding/fetching data to/from badger is significantly faster than adding/fetching data to/from the default datastore, flatfs. In some tests, adding data to badger is 32x faster than flatfs (in this release).

    However,

    1. Badger is complicated, while flatfs pushes all the complexity down into the filesystem itself. That means flatfs is only likely to lose your data if your underlying filesystem gets corrupted, while there are more opportunities for badger itself to get corrupted.
    2. Badger can use a lot of memory. In this release, we've tuned badger to use very little (~20MiB) of memory by default. However, it can still produce large (1GiB) spikes in memory usage when garbage collecting.
    3. Badger isn't very aggressive when it comes to garbage collection and we're still investigating ways to get it to more aggressively clean up after itself.

    TL;DR: Use badger if performance is your main requirement, you rarely/never delete anything, and you have some memory to spare.

    Systemd Support

    For Linux users, this release includes support for two systemd features: socket activation and startup/shutdown notifications. This makes it possible to:

    • Start IPFS on demand on first use.
    • Wait for IPFS to finish starting before starting services that depend on it.

    You can find the new systemd units in the go-ipfs repo under misc/systemd.

    IPFS API Over Unix Domain Sockets

    This release supports exposing the IPFS API over a unix domain socket in the filesystem. To use this feature, run:

    > ipfs config Addresses.API "/unix/path/to/socket/location"
    

    Repo Migration

    IPFS uses repo migrations to make structural changes to the "repo" (the config, data storage, etc.) on upgrade.

    This release includes two very simple repo migrations: a config migration to ensure that the config contains working bootstrap nodes and a keystore migration to base32 encode all key filenames.

    In general, migrations should not require significant manual intervention. However, you should be aware of migrations and plan for them.

    • If you update go-ipfs with ipfs update, ipfs update will run the migration for you.
    • If you start the ipfs daemon with ipfs daemon --migrate, ipfs will migrate your repo for you on start.

    Otherwise, if you want more control over the repo migration process, you can manually install and run the repo migration tool.
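
    For example (the ipfs-update invocation follows that tool's usual install syntax; treat the exact command as an assumption):

    # run migrations implicitly by updating in place
    ipfs update install latest
    # or run them at daemon startup
    ipfs daemon --migrate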

    Bootstrap Peer Changes

    AUTOMATIC MIGRATION REQUIRED

    The first migration will update the bootstrap peer list to:

    1. Replace the old bootstrap nodes (ones with peer IDs starting with QmSoL) with new bootstrap nodes (ones with addresses that start with /dnsaddr/bootstrap.libp2p.io).
    2. Rewrite the address format from /ipfs/QmPeerID to /p2p/QmPeerID.

    We're migrating addresses for a few reasons:

    1. We're using DNS to address the new bootstrap nodes so we can change the underlying IP addresses as necessary.
    2. The new bootstrap nodes use 2048 bit keys while the old bootstrap nodes use 1024 bit keys.
    3. We're normalizing the address format to /p2p/Qm....

    Note: This migration won't add the new bootstrap peers to your config if you've explicitly removed the old bootstrap peers. It will also leave custom entries in the list alone. In other words, if you've customized your bootstrap list, this migration won't clobber your changes.

    Keystore Changes

    AUTOMATIC MIGRATION REQUIRED

    Go-IPFS stores additional keys (i.e., all keys other than the "identity" key) in the keystore. You can list these keys with ipfs key list.

    Currently, the keystore stores keys as regular files, named after the key itself. Unfortunately, filename restrictions and case-insensitivity are platform-specific. To avoid platform-specific issues, we're base32-encoding all key names and renaming all keys on disk.
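
    As a sketch of what the rename does to a key name, in Go (the exact base32 alphabet, casing, and any on-disk filename prefix go-ipfs uses are assumptions here and may differ):

    package main

    import (
        "encoding/base32"
        "fmt"
        "strings"
    )

    func main() {
        name := "my-key" // hypothetical key name
        // Unpadded base32, lowercased; go-ipfs's exact encoding may differ.
        enc := base32.StdEncoding.WithPadding(base32.NoPadding)
        fmt.Println(strings.ToLower(enc.EncodeToString([]byte(name))))
    }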

    Changelog

    TODO

    ✅ Release Checklist

    For each RC published in each stage:

    • version string in version.go has been updated
    • tag commit with vX.Y.Z-rcN
    • upload to dist.ipfs.io
      1. Build: https://github.com/ipfs/distributions#usage.
      2. Pin the resulting release.
      3. Make a PR against ipfs/distributions with the updated versions, including the new hash in the PR comment.
      4. Ask the infra team to update the DNSLink record for dist.ipfs.io to point to the new distribution.
    • cut a pre-release on github and upload the result of the ipfs/distributions build in the previous step.
    • Announce the RC:

    Checklist:

    • [x] Stage 0 - Automated Testing
      • [x] Feature freeze. If any "non-trivial" changes (see the footnotes of docs/releases.md for a definition) get added to the release, uncheck all the checkboxes and return to this stage.
      • [x] Automated Testing (already tested in CI) - Ensure that all tests are passing, this includes:
    • [x] Stage 1 - Internal Testing
      • [x] CHANGELOG.md has been updated
      • [x] Network Testing:
        • [x] test lab things - TBD
      • [x] Infrastructure Testing:
        • [x] Deploy new version to a subset of Bootstrappers
        • [x] Deploy new version to a subset of Gateways
        • [x] Deploy new version to a subset of Preload nodes
        • [x] Collect metrics every day. Work with the Infrastructure team to learn of any hiccup
      • [x] IPFS Application Testing - Run the tests of the following applications:
    • [x] Stage 2 - Community Dev Testing
      • [x] Reach out to the IPFS early testers listed in docs/EARLY_TESTERS.md for testing this release (check when no more problems have been reported). If you'd like to be added to this list, please file a PR.
      • [x] Reach out on IRC for beta testers.
      • [x] Run tests available in the following repos with the latest beta (check when all tests pass):
    • [x] Stage 3 - Community Prod Testing
      • [x] Documentation
        • [x] Ensure that CHANGELOG.md is up to date
        • [x] Ensure that README.md is up to date
        • [x] Ensure that all the examples we have produced for go-ipfs run without problems
        • [x] Update HTTP-API Documentation on the Website using https://github.com/ipfs/http-api-docs
      • [x] Invite the IPFS early testers to deploy the release to part of their production infrastructure.
      • [ ] Invite the wider community through (link to the release issue):
    • [x] Stage 4 - Release
      • [x] Final preparation
      • [ ] Publish a Release Blog post (at minimum, a c&p of this release issue with all the highlights, API changes, link to changelog and thank yous)
      • [ ] Broadcasting (link to blog post)
    • [ ] Post-Release
      • [ ] Bump the version in version.go to vX.(Y+1).0-dev.
      • [ ] Create an issue using this release issue template for the next release.
      • [ ] Make sure any last-minute changelog updates from the blog post make it back into the CHANGELOG.

    ❤️ Contributors

    < list generated by bin/mkreleaselog >

    Would you like to contribute to the IPFS project and don't know how? Well, there are a few places you can get started:

    • Check the issues with the help wanted label in the go-ipfs repo
    • Join an IPFS All Hands, introduce yourself and let us know where you would like to contribute - https://github.com/ipfs/team-mgmt/#weekly-ipfs-all-hands
    • Hack with IPFS and show us what you made! The All Hands call is also the perfect venue for demos, join in and show us what you built
    • Join the discussion at discuss.ipfs.io and help users find their answers.
    • Join the 🚀 IPFS Core Implementations Weekly Sync 🛰 and be part of the action!

    ⁉️ Do you have questions?

    The best place to ask your questions about IPFS, how it works and what you can do with it is at discuss.ipfs.io. We are also available at the #ipfs channel on Freenode, which is also accessible through our Matrix bridge.

    kind/enhancement 
    opened by Stebalien 66
  • Daemon triggers a Netscan alert from hosting company


    Solution

    Use ipfs init --profile=server

    ~ Kubuxu


    I just installed go-ipfs, did an init, and started the daemon. A couple minutes later, my hosting provider sent me an abuse email indicating that a "Netscan" was coming from my host and asked me to stop. Here is the log they sent me (edited for privacy).

    ##########################################################################
    #               Netscan detected from host my.host.i.p                   #
    ##########################################################################
    
    time                protocol src_ip src_port          dest_ip dest_port
    ---------------------------------------------------------------------------
    Sun May 10 02:31:32 2015 UDP my.host.i.p 56809 => 192.internal.i.p 49939
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 100.external.i.p 12644
    Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:29 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:40 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:26 2015 TCP my.host.i.p 35879 =>  10.internal.i.p 4001 
    Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:29 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:38 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:46 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:36 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:39 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:52 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:48 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:36 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:36 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:41 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:41 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:41 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:46 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:41 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:53 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
    Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  =>  25.external.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:33 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:35 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:36 2015 TCP my.host.i.p 44194 => 172.internal.i.p 4001 
    Sun May 10 02:31:39 2015 TCP my.host.i.p 44194 => 172.internal.i.p 4001 
    Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:52 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:53 2015 TCP my.host.i.p 49417 => 172.internal.i.p 4001 
    Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:52 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:52 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:33 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:39 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:35 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:40 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:46 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:40 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:40 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:40 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:53 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:27 2015 TCP my.host.i.p 50861 => 172.internal.i.p 4001 
    Sun May 10 02:31:27 2015 TCP my.host.i.p 50863 => 172.internal.i.p 4001 
    Sun May 10 02:31:29 2015 TCP my.host.i.p 50863 => 172.internal.i.p 4001 
    Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:48 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:53 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
    Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:48 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:51 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:40 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:52 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:36 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:36 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:39 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:39 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:29 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
    Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 100.external.i.p 47389
    Sun May 10 02:31:20 2015 TCP my.host.i.p 56610 =>  10.internal.i.p 55511
    Sun May 10 02:31:22 2015 TCP my.host.i.p 56610 =>  10.internal.i.p 55511
    

    Notice that all but 3 destination addresses are internal network destinations. There are also many repeats (the same internal destination IP), and all of this happened within 33 seconds. Nearly all of it involved port 4001 as well, reinforcing that this was IPFS doing this.

    How does ipfs currently find peers to swarm with? Is there a way to throttle back the peer discovery process? Why is it even trying to scan internal IPs? (I'm on an externally facing machine.)
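
    For a node on a hosted, externally facing machine, the usual mitigation in later go-ipfs releases is the server configuration profile, which populates Swarm.AddrFilters so the node never dials private address ranges (a sketch; profile support postdates this report):

    $ ipfs init --profile=server          # when creating a new repo
    $ ipfs config profile apply server    # or on an existing repo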

    opened by cinderblock 60
  • [WIP] Filestore Implementation

    [WIP] Filestore Implementation

    Closes issue #875 (avoid duplicating files added to ipfs)

    NOT READY FOR MERGE

    Rebased #2600 on master.

    Quicklinks: Code, README,

    TODO to get this merged:

    • [ ] Rebase on master, turning the chain of commits into a reasonable ChangeSet
    • [ ] Agree on major infrastructure changes
      • [ ] Multi-blockstore (#3119)
    • [x] Separate out non-filestore bits of the infrastructure change into their own pull request
    • [ ] Code review
    • [ ] Merge

    Note: The filestore is very basic right now, but it is functional. I will likely continue to improve it and submit new pull requests for the enhanced functionality, but right now I feel it is important that a basic implementation gets in so that it will get used. It can be labeled as an experimental feature and disabled by default, but available for those who want to use it. I consider the code production ready.

    Resolves #875
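
    For context, a minimal sketch of how the experimental feature is enabled as it later shipped (flag and config names as in later go-ipfs releases, which may differ from this WIP branch):

    $ ipfs config --json Experimental.FilestoreEnabled true
    $ ipfs add --nocopy somefile.bin    # reference the file in place instead of copying it into the datastore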

    kind/enhancement 
    opened by kevina 58
  • Resource Constraints + Limits

    Resource Constraints + Limits

    We need a number of configurable resource limits. This issue will serve as a meta-issue to track them all and discuss a consistent way to configure/handle them.

    I'm going to use a notation like thingA.subthingB.subthingC. We don't have to keep this at all; it just helps us bind scoped names to things. (I'm using . instead of /, as the . could reflect JSON hierarchy in the config, but it doesn't have to; e.g. repo.storage_max and repo.datastore.storage_gc_watermark could appear in the config as Repo.StorageMax and Repo.StorageGC, or something similar.)

    Possible Limits

    This is a list of possible limits. I don't think we need all of them, as other tools could limit this more, particularly in server scenarios. But please keep in mind that some users/use cases of ipfs demand that we have some limits in place ourselves, as many end users cannot be expected to even know what a terminal is (e.g. if they run ipfs as an electron app or as a browser extension).

    • [ ] node.repo.storage_max: this affects the physical storage that a repo takes up. this must include all the storage, datastore + config file size (ok to pre-allocate more if needed), so that people can set a maximum. (MUST be user configurable) #972
      • [ ] node.repo.datastore.storage_max: hard limit on datastore storage size. could be computed as repo.storage_max - configsize where configsize could be live, or could be a reasonable bound. #972
      • [x] node.repo.datastore.storage_gc_watermark: soft limit on datastore storage size. after passing this threshold, automatically run gc. could be computed as node.repo.datastore.storage_max - 1MB or something. #972
    • [ ] node.network_bandwidth_max: limit on network bandwidth used.
      • [ ] node.gateway.bandwidth_max: limit on bandwidth allocated to running the gateway. this could be calculated from node.network_bandwidth_max - all other bandwidth use. #1070
      • [ ] node.swarm.bandwidth_max: limit on network bandwidth allocated to running the ipfs protocol. this could be calculated from node.network_bandwidth_max - all other bandwidth use.
      • [ ] node.dht.bandwidth_max: limit on network bandwidth allocated to running the dht protocol. this could be calculated from node.network_bandwidth_max - all other bandwidth use.
      • [ ] node.bitswap.bandwidth_max: limit on network bandwidth allocated to running the bitswap protocol. this could be calculated from node.network_bandwidth_max - all other bandwidth use.
    • [ ] node.swarm.connections: soft limit on ipfs protocol network connections to make. the reason for this limit is that there is overhead to every connection kept alive. the node could try to stay within this limit.
    • [ ] node.gateway.ratelimit: a number of requests per second. with this limit, the user could reduce the load the gateway accepts. #1070
    • [ ] node.memlimit: a limit on the memory allocated to ipfs. could try to use smaller buffers if under different constraints. this is hard to do, probably won't be used end-user-side, and likely easier to do with tools around it sysadmin-side (docker, etc).

    note on config: the above keys need not be the config keys, but we should figure out some keys that make sense hierarchically.
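
    Of the limits above, the storage ones can be illustrated with the config keys they eventually mapped to (these keys appear in the full config quoted further down this page; values illustrative):

    $ ipfs config Datastore.StorageMax 20GB
    $ ipfs config --json Datastore.StorageGCWatermark 90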

    What other things are we interested in limiting?

    need/community-input 
    opened by jbenet 58
  • avoid duplicating files added to ipfs

    avoid duplicating files added to ipfs

    it would be very useful to have files that are passed through ipfs add not copied into the datastore. for example, here I added a 3.2GB file, which meant the disk usage for that file doubled!

    Basically, it would be nice if the space usage for adding files would be O(1) instead of O(n), where n is the total size of the files added...
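
    The duplication is easy to observe against a default repo (a sketch; paths assume the default repo location and datastore):

    $ du -sh ~/.ipfs            # note the size before
    $ ipfs add big-file.iso
    $ du -sh ~/.ipfs            # grows by roughly the size of the file just added

    The filestore work in the pull request above addresses this by referencing files in place via ipfs add --nocopy.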

    kind/enhancement topic/repo 
    opened by anarcat 56
  • 0.18.0-rc1: 400 when Accept header starts with application/json,

    0.18.0-rc1: 400 when Accept header starts with application/json,

    Checklist

    Installation method

    ipfs-desktop

    Version

    docker run -p 8080:8080 --rm ipfs/kubo:v0.18.0-rc1
    

    Config

    Default for docker container.
    

    Description

    Pinata made my Christmas interesting by making rc0.18 their prod-facing release, so apologies if this RC isn't to the point of stability where you care about this sort of thing, but since I did all the work to track it down, here's the info. :) Best!

    $ docker run -p 8080:8080 --rm ipfs/kubo:v0.18.0-rc1 &
    $ http localhost:8080/ipfs/bafybeieznmzzuxoxqwmkgy5yycefaehgoswvc6yiu6mfhbblzkkj77w7oa Accept:'application/json'
    > [200 OK, ...successful output...]
    $ http localhost:8080/ipfs/bafybeieznmzzuxoxqwmkgy5yycefaehgoswvc6yiu6mfhbblzkkj77w7oa Accept:'application/json, text/plain'
    > HTTP/1.1 400 Bad Request
    Content-Length: 87
    Content-Type: text/plain; charset=utf-8
    Date: Mon, 26 Dec 2022 12:02:19 GMT
    Location: http://bafybeieznmzzuxoxqwmkgy5yycefaehgoswvc6yiu6mfhbblzkkj77w7oa.ipfs.localhost:8080/
    X-Content-Type-Options: nosniff
    
    error while processing the Accept header: mime: unexpected content after media subtype
    $ http localhost:8080/ipfs/bafybeieznmzzuxoxqwmkgy5yycefaehgoswvc6yiu6mfhbblzkkj77w7oa Accept:'foo/bar, application/json, text/plain'
    > [200 OK, ...successful output...]
    

    Suspect this was introduced in https://github.com/ipfs/kubo/commit/fdd19656c465f07ff6d7be653f5438d74a0c0c2f with error on line https://github.com/ipfs/kubo/blob/fdd19656c465f07ff6d7be653f5438d74a0c0c2f/core/corehttp/gateway_handler.go#L898
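
    The suspected failure is reproducible with the standard library alone: mime.ParseMediaType parses a single media type with optional ;-parameters, not a comma-separated Accept list, so feeding it the raw header value fails in exactly this way (a minimal sketch):

    package main

    import (
        "fmt"
        "mime"
    )

    func main() {
        // A single media type parses fine.
        mt, _, err := mime.ParseMediaType("application/json")
        fmt.Println(mt, err) // application/json <nil>

        // A full Accept header value does not: the comma after the
        // subtype is rejected as unexpected content.
        _, _, err = mime.ParseMediaType("application/json, text/plain")
        fmt.Println(err) // mime: unexpected content after media subtype
    }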

    kind/bug need/triage 
    opened by hamptonsmith 1
  • `ipfs config` executes badger things.

    `ipfs config` executes badger things.

    Checklist

    Installation method

    ipfs-update or dist.ipfs.tech

    Version

    Kubo version: 0.17.0
    Repo version: 12
    System version: amd64/linux
    Golang version: go1.19.1
    

    Config

    {
      "API": {
        "HTTPHeaders": {
          "Access-Control-Allow-Methods": [
            "*"
          ],
          "Access-Control-Allow-Origin": [
            "*"
          ]
        }
      },
      "Addresses": {
        "API": "/ip4/0.0.0.0/tcp/80",
        "Announce": [],
        "AppendAnnounce": [],
        "Gateway": "/ip4/0.0.0.0/tcp/8080",
        "NoAnnounce": [
          "/ip4/10.0.0.0/ipcidr/8",
          "/ip4/100.64.0.0/ipcidr/10",
          "/ip4/169.254.0.0/ipcidr/16",
          "/ip4/172.16.0.0/ipcidr/12",
          "/ip4/192.0.0.0/ipcidr/24",
          "/ip4/192.0.2.0/ipcidr/24",
          "/ip4/192.168.0.0/ipcidr/16",
          "/ip4/198.18.0.0/ipcidr/15",
          "/ip4/198.51.100.0/ipcidr/24",
          "/ip4/203.0.113.0/ipcidr/24",
          "/ip4/240.0.0.0/ipcidr/4",
          "/ip6/100::/ipcidr/64",
          "/ip6/2001:2::/ipcidr/48",
          "/ip6/2001:db8::/ipcidr/32",
          "/ip6/fc00::/ipcidr/7",
          "/ip6/fe80::/ipcidr/10"
        ],
        "Swarm": [
          "/ip4/0.0.0.0/tcp/4001",
          "/ip6/::/tcp/4001",
          "/ip4/0.0.0.0/udp/4001/quic",
          "/ip6/::/udp/4001/quic"
        ]
      },
      "AutoNAT": {},
      "Bootstrap": [
        "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
        "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
        "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
        "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
        "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
        "/ip4/104.131.131.82/udp/4001/quic/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ"
      ],
      "DNS": {
        "Resolvers": {}
      },
      "Datastore": {
        "BloomFilterSize": 0,
        "GCPeriod": "1h",
        "HashOnRead": false,
        "Spec": {
          "child": {
            "path": "badgerds",
            "syncWrites": false,
            "truncate": true,
            "type": "badgerds"
          },
          "prefix": "badger.datastore",
          "type": "measure"
        },
        "StorageGCWatermark": 90,
        "StorageMax": "100GB"
      },
      "Discovery": {
        "MDNS": {
          "Enabled": false
        }
      },
      "Experimental": {
        "AcceleratedDHTClient": false,
        "FilestoreEnabled": false,
        "GraphsyncEnabled": false,
        "Libp2pStreamMounting": false,
        "P2pHttpProxy": false,
        "StrategicProviding": false,
        "UrlstoreEnabled": false
      },
      "Gateway": {
        "APICommands": [],
        "HTTPHeaders": {
          "Access-Control-Allow-Headers": [
            "X-Requested-With"
          ],
          "Access-Control-Allow-Methods": [
            "GET"
          ],
          "Access-Control-Allow-Origin": [
            "*"
          ]
        },
        "NoDNSLink": false,
        "NoFetch": false,
        "PathPrefixes": [],
        "PublicGateways": null,
        "RootRedirect": "",
        "Writable": false
      },
      "Identity": {
        "PeerID": "12D3KooWLhtyzbCz5eECCj1zzqh9z8pFFBKT1QiCkoXjMh3Wm9Ev"
      },
      "Internal": {},
      "Ipns": {
        "RecordLifetime": "",
        "RepublishPeriod": "",
        "ResolveCacheSize": 128
      },
      "Migration": {
        "DownloadSources": [],
        "Keep": ""
      },
      "Mounts": {
        "FuseAllowOther": false,
        "IPFS": "/ipfs",
        "IPNS": "/ipns"
      },
      "Peering": {
        "Peers": [
          {
            "Addrs": [
              "/ip4/139.178.68.217/tcp/6744"
            ],
            "ID": "12D3KooWCVXs8P7iq6ao4XhfAmKWrEeuKFWCJgqe9jGDMTqHYBjw"
          }
        ]
      },
      "Pinning": {
        "RemoteServices": {}
      },
      "Plugins": {
        "Plugins": null
      },
      "Provider": {
        "Strategy": ""
      },
      "Pubsub": {
        "DisableSigning": false,
        "Router": ""
      },
      "Reprovider": {
        "Interval": "12h",
        "Strategy": "all"
      },
      "Routing": {
        "Methods": null,
        "Routers": null,
        "Type": "dht"
      },
      "Swarm": {
        "AddrFilters": [
          "/ip4/10.0.0.0/ipcidr/8",
          "/ip4/100.64.0.0/ipcidr/10",
          "/ip4/169.254.0.0/ipcidr/16",
          "/ip4/172.16.0.0/ipcidr/12",
          "/ip4/192.0.0.0/ipcidr/24",
          "/ip4/192.0.2.0/ipcidr/24",
          "/ip4/192.168.0.0/ipcidr/16",
          "/ip4/198.18.0.0/ipcidr/15",
          "/ip4/198.51.100.0/ipcidr/24",
          "/ip4/203.0.113.0/ipcidr/24",
          "/ip4/240.0.0.0/ipcidr/4",
          "/ip6/100::/ipcidr/64",
          "/ip6/2001:2::/ipcidr/48",
          "/ip6/2001:db8::/ipcidr/32",
          "/ip6/fc00::/ipcidr/7",
          "/ip6/fe80::/ipcidr/10"
        ],
        "ConnMgr": {
          "GracePeriod": "20s",
          "HighWater": 900,
          "LowWater": 600,
          "Type": "basic"
        },
        "DisableBandwidthMetrics": false,
        "DisableNatPortMap": true,
        "RelayClient": {},
        "RelayService": {},
        "ResourceMgr": {
          "Limits": {
            "System": {
              "ConnsInbound": 1000
            }
          }
        },
        "Transports": {
          "Multiplexers": {},
          "Network": {},
          "Security": {}
        }
      }
    }
    

    Description

    ipfs config executes badger (datastore) operations; as a result, it takes a long time to run.

    $ time ipfs -D config --json API.HTTPHeaders.Access-Control-Allow-Methods '["*"]'
    2022-12-23T08:48:19.833Z        DEBUG   cmd/ipfs        ipfs/main.go:140        config path is /root/.ipfs
    2022-12-23T08:48:22.907Z        INFO    badger  [email protected]/logger.go:46      45 tables out of 422 opened in 3.015s
    
    2022-12-23T08:48:25.910Z        INFO    badger  [email protected]/logger.go:46      110 tables out of 422 opened in 6.018s
    
    2022-12-23T08:48:28.900Z        INFO    badger  [email protected]/logger.go:46      173 tables out of 422 opened in 9.008s
    
    2022-12-23T08:48:31.898Z        INFO    badger  [email protected]/logger.go:46      238 tables out of 422 opened in 12.005s
    
    2022-12-23T08:48:34.911Z        INFO    badger  [email protected]/logger.go:46      304 tables out of 422 opened in 15.019s
    
    2022-12-23T08:48:37.911Z        INFO    badger  [email protected]/logger.go:46      373 tables out of 422 opened in 18.019s
    
    2022-12-23T08:48:40.257Z        INFO    badger  [email protected]/logger.go:46      All 422 tables opened in 20.364s
    
    2022-12-23T08:48:40.258Z        INFO    badger  [email protected]/logger.go:46      Replaying file id: 280 at offset: 353313029
    
    2022-12-23T08:48:40.258Z        INFO    badger  [email protected]/logger.go:46      Replay took: 3.229µs
    
    2022-12-23T08:48:40.258Z        DEBUG   badger  [email protected]/logger.go:62      Value log discard stats empty
    
    real    0m20.547s
    user    0m35.432s
    sys     0m2.250s
    

    expected behavior: ipfs config should only modify the config file.
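
    Until this is fixed, a workaround is to edit the config file directly while the daemon is stopped (a sketch; assumes the default repo path and that jq is installed):

    $ jq '.API.HTTPHeaders["Access-Control-Allow-Methods"] = ["*"]' ~/.ipfs/config \
        > /tmp/ipfs-config.new && mv /tmp/ipfs-config.new ~/.ipfs/config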

    kind/bug need/triage 
    opened by dongheeJeong 1
  • test: port peering test from sharness to Go

    test: port peering test from sharness to Go

    This is the slowest test in the sharness test suite, because it has very long sleeps. It usually takes 2+ minutes to run.

    This new implementation runs all peering tests in about 20 seconds, since it polls for conditions instead of sleeping, and it runs the tests in parallel.

    This also has an additional test case for a peer that was never online and then connects.
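
    A minimal sketch of the polling approach (illustrative names, not the PR's actual helpers): instead of sleeping for a fixed interval, retry a condition until it holds or a deadline passes.

    package harness

    import (
        "context"
        "time"
    )

    // PollUntil calls cond every interval until it returns true or ctx expires.
    // It reports whether the condition was met before the context was done.
    func PollUntil(ctx context.Context, interval time.Duration, cond func() bool) bool {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if cond() {
                return true
            }
            select {
            case <-ctx.Done():
                return false
            case <-ticker.C:
            }
        }
    }

    A peering test can then assert, for example, that two nodes appear in each other's swarm peers output within the deadline, instead of sleeping for a fixed period.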

    opened by guseggert 0
  • test: add a test logger to CLI test harness

    test: add a test logger to CLI test harness

    This logger buffers log events and prints them only if a test fails or logging has been turned on in code. It does this regardless of the -verbose flag, so -verbose can be used without necessarily showing verbose logs; the flag is still useful for large test suites because it streams out individual test progress.

    The logger also allows you to create child loggers with additional prefixes added to each log event, for extra context. Internally, all loggers in this DAG buffer their own output, and if the test fails or logging is enabled, the log events trickle back to the root logger which sorts them all by timestamp and prints them.

    This also refactors all the code to use the logger for fatal errors, which logs the error and then cleanly fails the test, instead of panicking, which aborts the entire package.
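
    The buffering idea, sketched with hypothetical names (not the harness's actual API): hold log lines in memory and only emit them through t.Log once the test is known to have failed.

    package harness

    import (
        "fmt"
        "testing"
        "time"
    )

    // BufLogger buffers log lines for one test and flushes them on failure.
    type BufLogger struct {
        t      *testing.T
        prefix string
        lines  []string
    }

    func NewBufLogger(t *testing.T, prefix string) *BufLogger {
        l := &BufLogger{t: t, prefix: prefix}
        t.Cleanup(func() {
            if t.Failed() { // print the buffered lines only when the test failed
                for _, line := range l.lines {
                    t.Log(line)
                }
            }
        })
        return l
    }

    // Logf records a timestamped, prefixed line without printing it yet.
    func (l *BufLogger) Logf(format string, args ...any) {
        l.lines = append(l.lines,
            fmt.Sprintf("%s [%s] %s", time.Now().Format(time.RFC3339Nano), l.prefix,
                fmt.Sprintf(format, args...)))
    }

    A child logger with extra context can then be built by extending the prefix before buffering.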

    opened by guseggert 0
  • 0.18.0-rc1: panic in webtransport.(*transport).Dial

    0.18.0-rc1: panic in webtransport.(*transport).Dial

    Version

    0.18.0-rc1
    

    Config

    default
    

    Description

    Problem with kubo 0.18.0-rc1. At some point it crashes:

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x1b589c7]
    
    goroutine 284534 [running]:
    github.com/libp2p/go-libp2p/p2p/transport/webtransport.(*transport).Dial(0xc000736580, {0x2ba5508, 0xc00a8a5ec0}, {0x2bbcac0?, 0xc00a8d6390}, {0xc005fcce10, 0x26})
    	github.com/libp2p/[email protected]/p2p/transport/webtransport/transport.go:135 +0x167
    github.com/libp2p/go-libp2p/p2p/net/swarm.(*Swarm).dialAddr(0xc0019521a0, {0x2ba5508, 0xc00a8a5ec0}, {0xc005fcce10, 0x26}, {0x2bbcac0?, 0xc00a8d6390})
    	github.com/libp2p/[email protected]/p2p/net/swarm/swarm_dial.go:493 +0x1f1
    github.com/libp2p/go-libp2p/p2p/net/swarm.(*dialLimiter).executeDial(0xc00071dc70, 0xc00a8a7a80)
    	github.com/libp2p/[email protected]/p2p/net/swarm/limiter.go:219 +0xf0
    created by github.com/libp2p/go-libp2p/p2p/net/swarm.(*dialLimiter).addCheckFdLimit
    	github.com/libp2p/[email protected]/p2p/net/swarm/limiter.go:169 +0x4bb
    

    The same issue appeared 5 times in two days (running in Docker with --restart unless-stopped).

    This seems to be different from https://github.com/ipfs/kubo/issues/9515, as I always get the above trace around limiter/transport, so I am filing this as a separate issue.

    cc https://github.com/libp2p/go-libp2p/issues/1958

    kind/bug need/triage 
    opened by lidel 1