IPFS Cluster - Automated data availability and redundancy on IPFS

Overview

IPFS Cluster


Automated data availability and redundancy on IPFS


IPFS Cluster provides data orchestration across a swarm of IPFS daemons by allocating, replicating and tracking a global pinset distributed among multiple peers.

It provides:

  • A cluster peer application: ipfs-cluster-service, to be run along with go-ipfs.
  • A client CLI application: ipfs-cluster-ctl, which makes it easy to interact with the peer's HTTP API (see the sketch after this list).
  • An additional "follower" peer application: ipfs-cluster-follow, focused on simplifying the process of configuring and running follower peers.
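
For illustration, here is a minimal sketch (not from the official documentation) of talking to a cluster peer's HTTP API with Go's standard library. It assumes the REST API is listening on its default address, 127.0.0.1:9094:

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// GET /id returns basic information about this cluster peer
    	// (peer ID, addresses, cluster version).
    	resp, err := http.Get("http://127.0.0.1:9094/id")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(string(body))
    }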

Are you using IPFS Cluster?

Please participate in the IPFS Cluster user registry.



Documentation

Please visit https://cluster.ipfs.io/documentation/ to access user documentation, guides and any other resources, including detailed download and usage instructions.

News & Roadmap

We regularly post project updates to https://cluster.ipfs.io/news/.

The most up-to-date Roadmap is available at https://cluster.ipfs.io/roadmap/.

Install

Instructions for different installation methods (including from source) are available at https://cluster.ipfs.io/download.

Usage

Extensive usage information is provided at https://cluster.ipfs.io/documentation/.

Contribute

PRs accepted. As part of the IPFS project, we have some contribution guidelines.

License

This library is dual-licensed under Apache 2.0 and MIT terms.

© 2020. Protocol Labs, Inc.

Comments
  • Issues with leader election when peers go down

    Issues with leader election when peers go down

    I have been playing around with ipfs-cluster to try to set up a dynamic cluster. I have read through the docs and whatever information I could find in other tickets. What I am noticing is that the cluster only seems to be working and healthy if the setup happens like this (assume three nodes A, B and C, with leave_on_shutdown set to true).

    1. Start up ipfs service on node A with cluster secret and no specified peers (A becomes the leader)
    2. Start up ipfs service on node B with cluster secret and --bootstrap set to node A (B joins the cluster as a new peer)
    3. Start up ipfs service on node C with cluster secret and --bootstrap set to node A (C joins the cluster as a new peer)

    So that is all fine and works great. The issue has to do with what happens when peers have problems. If I kill ipfs-cluster-service on node B, it is correctly removed, and when it starts up again (bootstrapping to A) it comes back, which is expected.

    Now, if I kill the ipfs-cluster-service on node A (the leader), there is a problem. Since peer A has been elected the leader, when it goes down there is no longer a leader present. I would expect either peer B or C to become the leader in this case, but I could not find anything specific in the documentation about what the expected behavior is here. They start logging that there is no leader present, but neither one takes over as the leader. The second issue is that as soon as A comes back (bootstrapping to either B or C), it does not become the leader again according to all peers. Once the cluster gets into this state, the only way to get things back to normal is to kill the service on peers B and C and then restart them bootstrapping to A. You can understand why this is not an ideal situation for a peer-to-peer network.

    Another issue which seems like it might be related is that I have noticed when I bootstrap a node D (running on my Mac - the other 3 are running on Linux), only nodes A and C pick up the new peer. I can manually add it using the peer add command on node B, and it will pick it up, but when it goes down it is only automatically removed from A and C, so B never picks up on the changes. I am not sure if that actually leads to any problems or not.

    It seems like it could be some kind of race condition. I have not had a chance to dig through the code and since this is kind of a complicated issue to reproduce, I don’t have code to reproduce it right now, but I am hoping you can provide some guidance or idea about why this is happening, what might be causing it, and what I might be able to do to get around it. Not sure if it is related at all, but node A is running in Amsterdam, B in New Jersey, C in Tokyo.

    Also I am curious what the expected behavior is in this situation since I couldn’t find it exactly outlined in the docs. I am happy to help debug this further. Thanks so much.

    kind/bug kind/support topic/user-story 
    opened by ccampbell 79
  • Feat/backups upgrade path  -- WIP

    Feat/backups upgrade path -- WIP

    The basic approach is to read the json backup and replace all raft state with a single snapshot of the new version, while preserving the necessary raft config metadata. I added migration as an ipfs-cluster-service command rather than an ipfs-cluster-ctl command because it reinforces to the user that the ipfs-cluster-service daemon should be stopped, it reuses some of the config functions already built into this tool, and it can't talk to an ipfs-cluster-service daemon endpoint, so including it in ipfs-cluster-ctl would break the common pattern.

    • Still testing this out
    • Needs sharness, possibly other tests
    • Still thinking about upgrading general migration framework

    --Update-- @hsanjuan The core functionality is here and tests out manually. When you have a chance your feedback would be awesome. This is still WIP as I want to add a few things (sharness, cleaner development upgrade path and UX for trying to load bad raft state after an update without a migration). Thanks!

    opened by ZenGround0 37
  • Support PinPath, UnpinPath (resolve before pinning)

    Support PinPath, UnpinPath (resolve before pinning)

    This PR adds API support for pinning using a path.

    POST /pins/<ipfs or ipns path> and DELETE /pins/<ipfs or ipns path> will resolve the path into a CID and perform pinning or unpinning.
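
    For context, a minimal sketch (not part of this PR) of exercising the new endpoint with Go's standard library; the default REST API address 127.0.0.1:9094 and the example /ipfs path are assumptions:

        package main

        import (
        	"fmt"
        	"io"
        	"net/http"
        	"strings"
        )

        func main() {
        	// POST /pins/<ipfs or ipns path>: the peer resolves the path to a CID and pins it.
        	// A DELETE request to the same URL resolves and unpins instead.
        	path := "/ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme" // example path (assumed)
        	resp, err := http.Post("http://127.0.0.1:9094/pins"+path, "application/json", strings.NewReader(""))
        	if err != nil {
        		panic(err)
        	}
        	defer resp.Body.Close()

        	body, _ := io.ReadAll(resp.Body)
        	fmt.Println(resp.Status, string(body))
        }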

    To do

    • [ ] add the same support in command
    • [ ] tests

    Fixes #450

    opened by kishansagathiya 30
  • ipfs-cluster dag/file sharding RFC WIP

    ipfs-cluster dag/file sharding RFC WIP

    @hsanjuan, this is the beginning of the rough draft RFC I have been working on. Consider it under heavy development this week. The unfilled sections are coming soon, plus there is more I would like to add to the RAID 0 section.

    I decided to go with a document + PR review process rather than an issue so that I could add, version, and receive comments with better UX. I'm not necessarily trying to add this document to the repo.

    Forgive the somewhat scattered nature of a few paragraphs, particularly the TODOs; I am at a point where I have a lot of questions and ideas and don't want to lose track of them. Let me know if you don't know what I mean by something and I'll add clarity.

    By the end of this I'm hoping for a reasonably clear path forward to implement these new features to cluster, so I will be grateful if you point out places where I am assuming something incorrect, optimizing for unmotivated use cases, overlooking simpler solutions, ignoring dangerous inefficiencies or doing something else that will get in the way of that.

    opened by ZenGround0 29
  • Authorization for RPC

    Authorization for RPC

    Creating this to discuss IPFS-Cluster's use case of https://github.com/libp2p/go-libp2p-gorpc/issues/35.

    Things like:

    • Which methods should be allowed and how; where to put the authorization map (in the current configuration service.json or a separate file), etc. I am inclined towards having a separate file for it, since this map can be big.
    • I was thinking that we could have roles, so that users can allow/disallow all methods belonging to a role. A user could also allow/disallow an individual method, which would take precedence over the permissions of the role.
    • Do we want to support any wildcards, etc.?
    opened by kishansagathiya 28
  • Read config values from env on init command

    Read config values from env on init command

    @hsanjuan this is what I had in mind for #656.

    In short, I split LoadJSON into another method, applyConfigJSON, which only applies the configJSON values to the original Config, and added an ApplyEnvVars method to Config, which reads all env vars into a configJSON and overrides them back into Config.

    All the other ComponentConfigs have empty ApplyEnvVars because they shouldn't read config from the env. At least that is what the docs say: https://cluster.ipfs.io/documentation/configuration/#using-environment-variables-to-overwrite-configuration-values

    The options in the main configuration section (cluster) and the REST API section (restapi) can be overwritten by setting environment variables.

    I now notice that LoadJSON assumes all values are set in the configJSON and thus will override config values with zero values if env vars are not set. Maybe we could have another method that allows all fields to be optional? Would it make sense to use only pointers for the configJSON fields, to be able to distinguish more easily which fields were set (unset ones would be nil)? i.e.:

    type configJSON struct {
    	ID                   *string   `json:"id"`
    	Peername             *string   `json:"peername"`
    	PrivateKey           *string   `json:"private_key"`
    	Secret               *string   `json:"secret"`
    	Peers                []string `json:"peers,omitempty"`     // DEPRECATED
    	Bootstrap            []string `json:"bootstrap,omitempty"` // DEPRECATED
    	LeaveOnShutdown      *bool     `json:"leave_on_shutdown"`
    	ListenMultiaddress   *string   `json:"listen_multiaddress"`
    	StateSyncInterval    *string   `json:"state_sync_interval"`
    	IPFSSyncInterval     *string   `json:"ipfs_sync_interval"`
    	ReplicationFactor    *int      `json:"replication_factor,omitempty"` // legacy
    	ReplicationFactorMin *int      `json:"replication_factor_min"`
    	ReplicationFactorMax *int      `json:"replication_factor_max"`
    	MonitorPingInterval  *string   `json:"monitor_ping_interval"`
    	PeerWatchInterval    *string   `json:"peer_watch_interval"`
    	DisableRepinning     *bool     `json:"disable_repinning"`
    	PeerstoreFile        *string   `json:"peerstore_file,omitempty"`
    }
    

    Have I missed anything? What do you think?
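
    For what it's worth, a minimal sketch of the pointer idea with simplified, made-up types (not the actual ipfs-cluster code): only fields that were explicitly set (non-nil) override the existing configuration, so unset env vars no longer clobber values with zeros.

        package main

        import "fmt"

        // Simplified stand-ins for the real Config / configJSON types.
        type config struct {
        	Peername        string
        	LeaveOnShutdown bool
        }

        type jsonConfig struct {
        	Peername        *string `json:"peername"`
        	LeaveOnShutdown *bool   `json:"leave_on_shutdown"`
        }

        // apply copies only the fields that were set (non-nil pointers).
        func apply(cfg *config, j *jsonConfig) {
        	if j.Peername != nil {
        		cfg.Peername = *j.Peername
        	}
        	if j.LeaveOnShutdown != nil {
        		cfg.LeaveOnShutdown = *j.LeaveOnShutdown
        	}
        }

        func main() {
        	cfg := config{Peername: "peer-1", LeaveOnShutdown: true}
        	name := "peer-2"
        	apply(&cfg, &jsonConfig{Peername: &name}) // LeaveOnShutdown stays untouched
        	fmt.Printf("%+v\n", cfg)                  // {Peername:peer-2 LeaveOnShutdown:true}
        }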

    opened by roignpar 25
  • Private Network impl

    Private Network impl

    Copied from #108 by @dgrisham :

    Currently working through dependency issues -- here's a write-up on that as requested by @hsanjuan (also tagging @Kubuxu and @whyrusleeping in case they have any immediate suggestions):

    1. I included the imports for go-libp2p-pnet and go-libp2p-interface-pnet as:
    import (
        ...
        pnet "github.com/libp2p/go-libp2p-pnet"
        ipnet "github.com/libp2p/go-libp2p-interface-pnet"
        ...
    )
    

    Then ran gx-go rewrite --fix && gx-go rewrite. go-libp2p-interface-pnet resolved to a hash, but go-libp2p-pnet did not:

    import (
        ...
        pnet "github.com/libp2p/go-libp2p-pnet"
        ipnet "gx/ipfs/QmUxRRPqCRmjgZajYGDhUt4MNZFvT8sgry7YkA4ap7qLUP/go-libp2p-interface-pnet"
        ...
    )
    
    2. I tried to go build at this point, but I got the error:
    # github.com/ipfs/ipfs-cluster
    ./cluster.go:874: cannot assign "github.com/libp2p/go-libp2p-interface-pnet".Protector to protec (type "gx/ipfs/QmUxRRPqCRmjgZajYGDhUt4MNZFvT8sgry7YkA4ap7qLUP/go-libp2p-interface-pnet".Protector) in multiple assignment:
    	"github.com/libp2p/go-libp2p-interface-pnet".Protector does not implement "gx/ipfs/QmUxRRPqCRmjgZajYGDhUt4MNZFvT8sgry7YkA4ap7qLUP/go-libp2p-interface-pnet".Protector (wrong type for Protect method)
    		have Protect("github.com/libp2p/go-libp2p-transport".Conn) ("github.com/libp2p/go-libp2p-transport".Conn, error)
    		want Protect("gx/ipfs/QmcYnysCkyGezY6k6MQ1yHHdrRiZaU9x3M9Y1tE9qZ5hD2/go-libp2p-interface-conn".Conn) ("gx/ipfs/QmcYnysCkyGezY6k6MQ1yHHdrRiZaU9x3M9Y1tE9qZ5hD2/go-libp2p-interface-conn".Conn, error)
    

    I think (based on my very incomplete understanding of go and gx) that the issue is with this import in go-libp2p-pnet, which doesn't use the gx hash to import go-libp2p-interface-pnet.

    3. I tried various fixes from here, but I now think they were quite ill-informed and probably not too helpful to write up. I think one way to fix this would be to remove all of the gx imports from this project and make them github.com/<repo>/<project> imports, as they were before. Another solution might be to change the import in the go-libp2p-pnet project that I mentioned above to be the corresponding gx import rather than the github.com/libp2p/go-libp2p-interface-pnet import (but this might be a rabbit hole into doing this for other repos as well -- maybe it's preferred to get them all changed to gx imports anyway, and this would be a good time to do it?).
    kind/enhancement 
    opened by hsanjuan 24
  • Docker : Unable to start ipfs-cluster by using docker

    Docker : Unable to start ipfs-cluster by using docker

    When I'm trying to run a docker container, something goes wrong, but I am not able to understand what it is.

    Run the container: docker run ipfs/ipfs-cluster

    Logs :

    Changing user to ipfs
    ipfs version 0.4.11
    initializing IPFS node at /data/ipfs
    generating 2048-bit RSA keypair...done
    peer identity: QmQsTifxncDb4jtKzcHA8JNo7gWzyhRQDgENLK5Nxsvno9
    to get started, enter:
    2017-10-31T10:01:17.122611563Z 
    ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme
    2017-10-31T10:01:17.122620702Z 
    Initializing daemon...
    ipfs-cluster-service version 0.2.1
    10:01:22.139  INFO     config: Saving configuration config.go:289
    ipfs-cluster-service configuration written to /data/ipfs-cluster/service.json
    Unknown subcommand. Run "ipfs-cluster-service help" for more info
    

    The container cannot start.

    kind/bug exp/novice topic/docs 
    opened by raucoule1u 22
  • very large database

    very large database

    Additional information:

    • OS: Linux
    • IPFS Cluster version: 0.13.1
    • Installation method: dist.ipfs.io

    Describe the bug:

    I think the garbage collection in the database is broken/turned off. Otherwise, this database size is quite excessive for ~50k changes on a cluster.

    $ du -hc .ipfs-cluster
    9,1G    .ipfs-cluster/badger
    9,1G    .ipfs-cluster
    9,1G    total
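
    For reference, a minimal sketch (not from this issue) of manually triggering Badger's value-log garbage collection with github.com/dgraph-io/badger/v3, assuming the cluster peer is stopped, the datastore path shown above, and that the library version matches the one the datastore was created with; whether this reclaims the space here is exactly what this issue questions:

        package main

        import (
        	"log"

        	badger "github.com/dgraph-io/badger/v3"
        )

        func main() {
        	// Open the cluster's badger datastore directly (path is an assumption).
        	db, err := badger.Open(badger.DefaultOptions(".ipfs-cluster/badger"))
        	if err != nil {
        		log.Fatal(err)
        	}
        	defer db.Close()

        	// Rewrite value-log files until badger reports nothing left to reclaim.
        	for {
        		if err := db.RunValueLogGC(0.5); err != nil {
        			log.Println("value log GC stopped:", err) // ErrNoRewrite when done
        			break
        		}
        	}
        }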
    
    kind/bug kind/enhancement exp/wizard status/ready P1 effort/days 
    opened by RubenKelevra 21
  • Not able to make an interaction between two PC's using IPFS_CLUSTER

    Not able to make an interaction between two PC's using IPFS_CLUSTER

    Pre-check

    • [x] Both devices are running the same IPFS Cluster version, 0.7.0
    • [x] One node is a Windows machine
    • [x] The second is an Ubuntu machine
    • [x] Both machines are connected to the same WiFi and hence belong to the same network
    • [x] All my peers are configured using the same cluster secret
    • [x] Both peers have been initialized with ipfs-cluster-service init and their service.json files have been created

    Description

    I have been trying to make two PCs interact using ipfs-cluster. I followed the quickstart guide (https://cluster.ipfs.io/documentation/quickstart/), but when I start both machines, 1) on Windows: ipfs-cluster-service daemon, and 2) on Ubuntu: ./ipfs-cluster-service daemon --bootstrap /ip4/192.168.0.103/tcp/9096/ipfs/Qmakp6u81YjRWDYXVQqnWecRwEp7isuF1wMSzACJ6mHmVW (note: 192.168.0.103 is the IP of the other machine and Qmakp6u81YjRWDYXVQqnWecRwEp7isuF1wMSzACJ6mHmVW is its ID), an error occurs on both the Windows and the Ubuntu machine. The service.json file of the Windows machine is:

    {
      "cluster": {
        "id": "QmRQSCCgugicTGUZihbUzuLzuGf7DEKp35ePwpcDDJ8kGB",
        "peername": "LAPTOP-6QU7RMTR",
        "private_key": "CAASpwkwggSjAgEAAoIBAQCq0TVcS95vf+VjM161VuoSFkA3Ke7X9wpdmryh5c88Z1dlX4UMMH5sGOE4XT0sar/i3WNCxMitcYeNKWSt7xtObrHB8X9YK4E8zU8opWwWt1zaXc+B1v/6NeOSPuvlNPAGb/4yGw/QZU42TBo1czTDeIiQPTNaDtAB29dAHyHVc5KH0NMi2Oh5NX32NHw71xjli+toeD1LXDtWCZkaPEU7eZXz89sVFBkjHn6LmMy9Su4p31GR7b68lX6UpLGaRYy3Ybegc07Z4k6nW1Zyp06eUjBgDW5OBGM+1UuKZqg+pvFNyCKNJO5VOBTl5Jofjvq+CwCKFvpxKFm7ikNDT6G3AgMBAAECggEBAINdy1zE2DcFtALbgc2SHwdz50TFUfLzARzFoKYdl4fLrfG/SRH7xW4aoDJ5uk8LdbDiM7Eop3CD7AxKtivxxB1IkhomQJotMwHwnx0SQxMhRx6EoM4o68mgYfiZvU8TrDg3LtWX7EyHbGPjmRBcHkrpypSrDHAJDj0vtdWRW7LMIjE9tP8up0O5spgAzaiYDobW46qmrwWmXrBMlOkwWfRwaXZUNSbncYjiZ5Psa1j48iPpklbqRWHbVMCdAV+YkcsgBLdYJd9UKIY3Ig6eUKNCuz5uX9eKMZQAEjzSkCv4YamTm0juYpmFUCV8J/A2p3CJ8UO8IDr8cc7mO/Boc/kCgYEAyzmUsliMGCp11zf9Lj5MSQpY4lC58UCt5RIytLVkCJFQ0+QpIX3t+YgM4wUmh5r0NYMv0d8peDVVo6Y4Xl8zBau3Bui1pMRANFEZhc7hlLx4TKKjWSRvKskY5MIRGU72DLC1J3Y3SE3hZNKmLfnooGJ0JiEc3EIbTtV/+NS6eP0CgYEA1y0owngHJO0dfoRLj/5Q05HY+5BdKv0/2y5NTdH/KenBO6U11YQB2acGkrDQAaoOIglSPLcwCSEoXZcyuz2pwB9w+Gji0CLMa0IEmSucPhExbZxR4Ork+Xx3vm5OJdF2I1E6rQyMSAwqFAvYzfYvqEsDgTRfyirH638woT5VLcMCgYB2Avz3R/YqflWp4chzFxgjfg+5dFlV4FJa6GNrjr4FP6VpNmAwI7mSos+g8Te7nV7cyE53mBizxnzxqC+plazCSUHikDYS9Se3ebTRgB58yakuD2+97yti9B4xkQRu5ux42BCKVtqpcRhG/RAlOK8+m42JKsdgnD7RW6eRrq6OXQKBgHWD5E7Q3pX8Ka9+8QWjDuF9NdOt9DQWO1mo3+8wUPrC/xVkFRKXFauY3K4ggnlmNnHARXmDUmiqAzGZ8crw0lRq13fTUIv7dUjetUQx3RLIsQX76Xu8zXCz2XHXLDbbPnnrUvBPeg2fFxv7nFxfp4dx8GgQAoCW/LhQrm5hbIOhAoGAPfwYRyp3MNLSiuyy1CHU794Eg+cpa4AUFxEf6fsz5Aom7qVF8n8GL1q8f5D0yZw76CRrvQk2OsJTRRMTQ/7QZEq1QNyGvtu3hf6jWeqpGFEff3Ayc6Gco+tyknKHCAeU+cDE54/8FPXBbnOXXYPRsNKF7nXhhLuBYja+sU+H3D4=",
        "secret": "304421f1e313c2997de0a6c93806146bc1377603cb6964591de88320692d6415",
        "leave_on_shutdown": false,
        "listen_multiaddress": "/ip4/0.0.0.0/tcp/9096",
        "state_sync_interval": "10m0s",
        "ipfs_sync_interval": "2m10s",
        "replication_factor_min": -1,
        "replication_factor_max": -1,
        "monitor_ping_interval": "15s",
        "peer_watch_interval": "5s",
        "disable_repinning": false
      },
      "consensus": {
        "raft": {
          "init_peerset": [],
          "wait_for_leader_timeout": "156s",
          "network_timeout": "150s",
          "commit_retries": 1,
          "commit_retry_delay": "200ms",
          "backups_rotate": 6,
          "heartbeat_timeout": "15s",
          "election_timeout": "155s",
          "commit_timeout": "500ms",
          "max_append_entries": 64,
          "trailing_logs": 10240,
          "snapshot_interval": "2m0s",
          "snapshot_threshold": 8192,
          "leader_lease_timeout": "500ms"
        }
      },
      "api": {
        "restapi": {
          "http_listen_multiaddress": "/ip4/127.0.0.1/tcp/9094",
          "read_timeout": "0s",
          "read_header_timeout": "5s",
          "write_timeout": "0s",
          "idle_timeout": "2m0s",
          "basic_auth_credentials": null,
          "headers": {
            "Access-Control-Allow-Headers": [
              "X-Requested-With",
              "Range"
            ],
            "Access-Control-Allow-Methods": [
              "GET"
            ],
            "Access-Control-Allow-Origin": [
              "*"
            ]
          }
        }
      },
      "ipfs_connector": {
        "ipfshttp": {
          "proxy_listen_multiaddress": "/ip4/127.0.0.1/tcp/10095",
          "node_multiaddress": "/ip4/127.0.0.1/tcp/4001",
          "connect_swarms_delay": "30s",
          "proxy_read_timeout": "0s",
          "proxy_read_header_timeout": "5s",
          "proxy_write_timeout": "0s",
          "proxy_idle_timeout": "1m0s",
          "pin_method": "refs",
          "ipfs_request_timeout": "5m0s",
          "pin_timeout": "24h0m0s",
          "unpin_timeout": "3h0m0s"
        }
      },
      "pin_tracker": {
        "maptracker": {
          "max_pin_queue_size": 50000,
          "concurrent_pins": 10
        },
        "stateless": {
          "max_pin_queue_size": 50000,
          "concurrent_pins": 10
        }
      },
      "monitor": {
        "monbasic": {
          "check_interval": "15s"
        },
        "pubsubmon": {
          "check_interval": "15s"
        }
      },
      "informer": {
        "disk": {
          "metric_ttl": "30s",
          "metric_type": "freespace"
        },
        "numpin": {
          "metric_ttl": "10s"
        }
      }
    }
    

    And that of the Ubuntu machine is:

    {
      "cluster": {
        "id": "QmWZ1T9c9ZU7VoWukwx7gKfQjGn1s9mehdLP5eamU2wyad",
        "peername": "tbcdev",
        "private_key": "CAASpwkwggSjAgEAAoIBAQCzbFotaAA0TqONPFZRQNcJFRMQdodRCcWUbHz/FHVYSzDPIYyrGxD5cGIlahL3hVZbDeXtS8JvtD49M0u1bPZzxZsP2DO46D10fSynIYZeApnXLYpJxgMaCaVz3Mekhy75NWY2iDeU9Hv/kEPrlmqB9agBYiBsGS3xngynqL4CyEmkvBadufPijv06xyKpx+BHBNukGrMNRdcPYlK2LzXzCr48jJk0IPpLYCQf7PKTVD9esgMsUx2HTc1TTm7mGLddcw+M4XP0oxYbduzf2not7xKqnOQ4oAtfkVBtYqVtlv97pqZf06ifjGoA+yafdgwAnWpFDRibF3GoBxQRyo1LAgMBAAECggEAS6m0uZMzCtviwqugJvG1/OGDQZ0KYVVCmd3KNHN3LL3AnoiiXoGyfc4zxV1fFDyJdyp8PL6HBz42RO69zYtevuGlC2B8J0zgpaAn1W8gz8I/B+vvdj7njfJlcF+5XRuY5oTrTrHQ5qLXK6W1zsKGtblXmQW9cHiJ1Gt3ILjbMCZ6l2nXgG2px1qR86d6pXCJowm5+5IvIcYjYYZIGCEYPdqw4qaZ15RHy+UzoUopxb/1til938olEqotVR1eHgfbnCZk03cczxXoxxToowAGMP1QjPONZUPuD8PSHtEUGGzN90YbIR4/UfR9j2I7VxmBPazbRABZPweHK4nUKpp/GQKBgQDBU1H8K25z22YQuHqfqwYwBkiiyblWm0kv/N2if61ApUuQO7w/BTZlg83Oil5k6+5bHx4b86k1L1eWNgjqMkj6w00tlHga7ZPh+QGvczDkuE9VMP5wCZhGpdyYuq3Gtc2iZwKPLyB97+jG3eP97sVc02zkDjmNsgTpnOtSMf17xQKBgQDtlz5pKRQ+pmL7Le2QUOYQOI19hphunDuTCiqQO8ILi2Ru2/Fev4v8CDCfuGMvA+QYThKG/snk0PFaLl+svg53/c7WypHcgQSW7umfR3AT46dghzmrd3arsMStvmdeJMG8j/U4kmImItgdx5Vxd1EiRFY02+oDIXVBCHWQdjAlzwKBgHv8YtU2SYU2TXQlzEcAmVxNe2Iju6DGwJ5tLvubpNKT8C1VkjpcrnFWobR322ggQ+LexyGoGHoKncKxbvA8Rb/FZ4b29DxY6AIB/8m1N8NITWDWpifWj3mnwB2XhAGv8WzZYbPQxqbeKUz5W9IswxjwY6KzWMf+RtZIlEdH1kj9AoGBAN/RfL/ALQwf3lrVF1i+fRyGyfOYWfzJPO31w8cAJHqPo9szYxxowcx5QqUUJItj1Pp4gceeOj9N/i+ARC0NFcA/3xxE1EevWs1836RmvdRev4yVluRKtAZljcJG/kWXxtKFovLaI4/df03+eG/dgRcQ3U0KZlbwq+7Js0aVsCHNAoGAGYn8YxMDr8gNuucsNnYLTIhIDbeOK9Mv2GiTBsyqCCDgCNspAod1Dw2+FJrjs6cniEMA68uZFgqPJQMdTfNXctKkloqtro13hETcNHBGK7ZQCkUH5CKnctPZTVZUOW79ds8SLp9r3slHokzpyPB6plT8JBU9bvd7CuDZpoaS23E=",
        "secret": "304421f1e313c2997de0a6c93806146bc1377603cb6964591de88320692d6415",
        "leave_on_shutdown": false,
        "listen_multiaddress": "/ip4/0.0.0.0/tcp/10096",
        "state_sync_interval": "10m0s",
        "ipfs_sync_interval": "2m10s",
        "replication_factor_min": -1,
        "replication_factor_max": -1,
        "monitor_ping_interval": "15s",
        "peer_watch_interval": "5s",
        "disable_repinning": false
      },
      "consensus": {
        "raft": {
          "init_peerset": [],
          "wait_for_leader_timeout": "156s",
          "network_timeout": "150s",
          "commit_retries": 1,
          "commit_retry_delay": "200ms",
          "backups_rotate": 6,
          "heartbeat_timeout": "15s",
          "election_timeout": "155s",
          "commit_timeout": "500ms",
          "max_append_entries": 64,
          "trailing_logs": 10240,
          "snapshot_interval": "2m0s",
          "snapshot_threshold": 8192,
          "leader_lease_timeout": "500ms"
        }
      },
      "api": {
        "restapi": {
          "http_listen_multiaddress": "/ip4/127.0.0.1/tcp/10094",
          "read_timeout": "0s",
          "read_header_timeout": "5s",
          "write_timeout": "0s",
          "idle_timeout": "2m0s",
          "basic_auth_credentials": null,
          "headers": {
            "Access-Control-Allow-Headers": [
              "X-Requested-With",
              "Range"
            ],
            "Access-Control-Allow-Methods": [
              "GET"
            ],
            "Access-Control-Allow-Origin": [
              "*"
            ]
          }
        }
      },
      "ipfs_connector": {
        "ipfshttp": {
          "proxy_listen_multiaddress": "/ip4/127.0.0.1/tcp/10095",
          "node_multiaddress": "/ip4/127.0.0.1/tcp/4001",
          "connect_swarms_delay": "30s",
          "proxy_read_timeout": "0s",
          "proxy_read_header_timeout": "5s",
          "proxy_write_timeout": "0s",
          "proxy_idle_timeout": "1m0s",
          "pin_method": "refs",
          "ipfs_request_timeout": "5m0s",
          "pin_timeout": "24h0m0s",
          "unpin_timeout": "3h0m0s"
        }
      },
      "pin_tracker": {
        "maptracker": {
          "max_pin_queue_size": 50000,
          "concurrent_pins": 10
        },
        "stateless": {
          "max_pin_queue_size": 50000,
          "concurrent_pins": 10
        }
      },
      "monitor": {
        "monbasic": {
          "check_interval": "15s"
        },
        "pubsubmon": {
          "check_interval": "15s"
        }
      },
      "informer": {
        "disk": {
          "metric_ttl": "30s",
          "metric_type": "freespace"
        },
        "numpin": {
          "metric_ttl": "10s"
        }
      }
    }
    

    Please have a look and respond. Thanks in advance.

    opened by pranavdaa 21
  • `ipfs-cluster-service -f init` doesn't seem to override config well

    `ipfs-cluster-service -f init` doesn't seem to override config well

    Pre-check

    • [x] This is not a IPFS Cluster website content issue (file those here)
    • [x] I read the troubleshooting section of the website and it did not help
    • [x] I searched for similar issues in the repo without luck
    • [x] All my peers are running the same cluster version
    • [x] All my peers are configured using the same cluster secret

    Basic information

    • [x] Version information (mark as appropriate):
      • [x] Master
      • [ ] Release candidate for next version
      • [ ] Latest stable version
      • [ ] An older version I should not be using
    • [x] Type (mark as appropriate):
      • [x] Bug
      • [ ] Feature request
      • [ ] Enhancement
    • [x] Operating system (mark as appropriate):
      • [x] Linux
      • [ ] macOS
      • [ ] Windows
      • [ ] Other: which?
    • [x] Installation method (mark as appropriate):
      • [ ] Binaries from dist.ipfs.io
      • [x] Built from sources
      • [ ] Docker
      • [ ] Snap
      • [ ] Other: which?

    Description

    Steps

    1. ipfs-cluster-service init
    2. ipfs-cluster-service daemon (everything works well here)
    3. Stop ipfs-cluster-service daemon
    4. ipfs-cluster-service -f init (to override the configuration)
    5. ipfs-cluster-service daemon (shows errors and eventually the daemon stops)

    Logs here

    [[email protected] ipfs]$ ipfs-cluster-service -f init
    18:05:27.241  INFO     config: Saving configuration config.go:327
    ipfs-cluster-service configuration written to /home/kishansagathiya/.ipfs-cluster/service.json
    [[email protected] ipfs]$ ipfs-cluster-service daemon
    18:05:31.069  INFO    service: Initializing. For verbose output run with "-l debug". Please wait... daemon.go:43
    18:05:31.248  INFO    cluster: IPFS Cluster v0.5.0-753322cd listening on:
            /ip4/127.0.0.1/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
            /ip4/10.215.99.149/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
            /ip4/192.168.42.1/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
            /ip4/192.168.122.1/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
            /ip4/172.19.0.1/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
            /ip4/172.17.0.1/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
            /ip4/172.18.0.1/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
    
     cluster.go:103
    18:05:31.249  INFO    restapi: REST API (HTTP): /ip4/127.0.0.1/tcp/9094 restapi.go:403
    18:05:31.249  INFO   ipfshttp: IPFS Proxy: /ip4/127.0.0.1/tcp/9095 -> /ip4/127.0.0.1/tcp/5001 ipfshttp.go:209
    18:05:31.249  INFO  consensus: existing Raft state found! raft.InitPeerset will be ignored raft.go:203
    18:05:31.250  INFO    restapi: REST API (libp2p-http): ENABLED. Listening on:
            /ip4/127.0.0.1/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
            /ip4/10.215.99.149/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
            /ip4/192.168.42.1/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
            /ip4/192.168.122.1/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
            /ip4/172.19.0.1/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
            /ip4/172.17.0.1/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
            /ip4/172.18.0.1/tcp/9096/ipfs/QmNfSzfyEYKZ54XVW2gPeaZjaEejWzmWwa2NCw3JnEGfbo
    
     restapi.go:420
    18:05:32.772 ERROR       raft: NOTICE: Some RAFT log messages repeat and will only be logged once logging.go:105
    18:05:32.773 ERROR       raft: Failed to make RequestVote RPC to {Voter QmVwugPFW471L1xYJYWurh2KzcYvfgXzrM74wpcXzpaLTH QmVwugPFW471L1xYJYWurh2KzcYvfgXzrM74wpcXzpaLTH}: dial attempt failed: failed to dial <peer.ID VwugPF> (default failure) logging.go:105
    18:05:33.882 ERROR       raft: Failed to make RequestVote RPC to {Voter QmVwugPFW471L1xYJYWurh2KzcYvfgXzrM74wpcXzpaLTH QmVwugPFW471L1xYJYWurh2KzcYvfgXzrM74wpcXzpaLTH}: dial backoff logging.go:105
    18:05:51.250 ERROR    cluster: ***** ipfs-cluster consensus start timed out (tips below) ***** cluster.go:363
    18:05:51.250 ERROR    cluster: 
    **************************************************
    This peer was not able to become part of the cluster.
    This might be due to one or several causes:
      - Check the logs above this message for errors
      - Check that there is connectivity to the "peers" multiaddresses
      - Check that all cluster peers are using the same "secret"
      - Check that this peer is reachable on its "listen_multiaddress" by all peers
      - Check that the current cluster is healthy (has a leader). Otherwise make
        sure to start enough peers so that a leader election can happen.
      - Check that the peer(s) you are trying to connect to is running the
        same version of IPFS-cluster.
    **************************************************
     cluster.go:364
    18:05:51.250  INFO    cluster: shutting down Cluster cluster.go:431
    18:05:51.250  INFO  consensus: stopping Consensus component consensus.go:176
    18:05:56.259 WARNI  consensus: timed out waiting for state updates before shutdown. Snapshotting may fail raft.go:414
    18:05:56.259 ERROR       raft: Failed to take snapshot: nothing new to snapshot logging.go:105
    18:05:56.259  INFO    monitor: stopping Monitor pubsubmon.go:154
    18:05:56.259  INFO    restapi: stopping Cluster API restapi.go:438
    18:05:56.260  INFO   ipfshttp: stopping IPFS Proxy ipfshttp.go:536
    18:05:56.260  INFO pintracker: stopping MapPinTracker maptracker.go:119
    [[email protected] ipfs]$ ipfs-cluster-service 
    
    status/in-progress 
    opened by kishansagathiya 20
  • Add Estuary To IPFS Cluster

    Add Estuary To IPFS Cluster

    Describe the feature you are proposing

    I am looking for a way to integrate Estuary as one of the remote nodes in my IPFS cluster, along with normal local IPFS nodes. Ideally I would be able to interact with a single cluster API, and the cluster would be able to pin data across all configured nodes, which can include the Estuary node.

    kind/enhancement exp/expert effort/weeks 
    opened by ohmpatel1997 1
  • Bump github.com/ipfs/go-ipfs-files from 0.1.1 to 0.2.0

    Bump github.com/ipfs/go-ipfs-files from 0.1.1 to 0.2.0

    Bumps github.com/ipfs/go-ipfs-files from 0.1.1 to 0.2.0.

    Release notes

    Sourced from github.com/ipfs/go-ipfs-files's releases.

    v0.2.0

    What's Changed

    New Contributors

    Full Changelog: https://github.com/ipfs/go-ipfs-files/compare/v0.1.1...v0.2.0

    Commits
    • 7a34343 Release v0.2.0
    • e8cf9a3 fix: error when TAR has files outside of root (#56)
    • 30b08ca Merge pull request #55 from ipfs/web3-bot/sync
    • 263276c fix: type of contents in serialfile
    • 84dc4b8 update .github/workflows/go-check.yml
    • 178229e update .github/workflows/go-test.yml
    • c2dbc99 stop using the deprecated io/ioutil package
    • e603bdf bump go.mod to Go 1.18 and run go fix
    • 0889edb chore(Directory): add DirIterator API restriction: iterate only once
    • 88b4692 chore: Update .github/workflows/stale.yml [skip ci]
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    topic/dependencies 
    opened by dependabot[bot] 0
  • Bump github.com/multiformats/go-multiaddr from 0.7.0 to 0.8.0

    Bump github.com/multiformats/go-multiaddr from 0.7.0 to 0.8.0

    Bumps github.com/multiformats/go-multiaddr from 0.7.0 to 0.8.0.

    Release notes

    Sourced from github.com/multiformats/go-multiaddr's releases.

    v0.8.0

    What's Changed

    Full Changelog: https://github.com/multiformats/go-multiaddr/compare/v0.7.0...v0.8.0

    Commits

    topic/dependencies 
    opened by dependabot[bot] 0
  • Bump github.com/prometheus/client_golang from 1.13.0 to 1.14.0

    Bump github.com/prometheus/client_golang from 1.13.0 to 1.14.0

    Bumps github.com/prometheus/client_golang from 1.13.0 to 1.14.0.

    Release notes

    Sourced from github.com/prometheus/client_golang's releases.

    1.14.0 / 2022-11-08

    It might look like a small release, but it's quite the opposite 😱 There were many non-user-facing changes and fixes, and enormous work from engineers from Grafana to add native histograms 💪🏾 Enjoy! 😍

    What's Changed

    • [FEATURE] Add Support for Native Histograms. #1150
    • [CHANGE] Extend prometheus.Registry to implement prometheus.Collector interface. #1103

    New Contributors

    Full Changelog: https://github.com/prometheus/client_golang/compare/v1.13.1...v1.14.0

    1.13.1 / 2022-11-02

    • [BUGFIX] Fix race condition with Exemplar in Counter. #1146
    • [BUGFIX] Fix CumulativeCount value of +Inf bucket created from exemplar. #1148
    • [BUGFIX] Fix double-counting bug in promhttp.InstrumentRoundTripperCounter. #1118

    Full Changelog: https://github.com/prometheus/client_golang/compare/v1.13.0...v1.13.1

    Changelog

    Sourced from github.com/prometheus/client_golang's changelog.

    1.14.0 / 2022-11-08

    • [FEATURE] Add Support for Native Histograms. #1150
    • [CHANGE] Extend prometheus.Registry to implement prometheus.Collector interface. #1103

    1.13.1 / 2022-11-01

    • [BUGFIX] Fix race condition with Exemplar in Counter. #1146
    • [BUGFIX] Fix CumulativeCount value of +Inf bucket created from exemplar. #1148
    • [BUGFIX] Fix double-counting bug in promhttp.InstrumentRoundTripperCounter. #1118
    Commits

    topic/dependencies 
    opened by dependabot[bot] 0
  • Bump github.com/urfave/cli/v2 from 2.16.3 to 2.23.5

    Bump github.com/urfave/cli/v2 from 2.16.3 to 2.23.5

    Bumps github.com/urfave/cli/v2 from 2.16.3 to 2.23.5.

    Release notes

    Sourced from github.com/urfave/cli/v2's releases.

    v2.23.5

    What's Changed

    New Contributors

    Full Changelog: https://github.com/urfave/cli/compare/v2.23.4...v2.23.5

    v2.23.4

    What's Changed

    Full Changelog: https://github.com/urfave/cli/compare/v2.23.3...v2.23.4

    v2.23.3

    What's Changed

    New Contributors

    Full Changelog: https://github.com/urfave/cli/compare/v2.23.2...v2.23.3

    Note: this is considered a minor release even though it has a new "feature", i.e. support for int64slice for altsrc flags. The int64slice is a verbatim copy of existing code and doesn't include any new behaviour compared to other altsrc flags.

    v2.23.2

    What's Changed

    Full Changelog: https://github.com/urfave/cli/compare/v2.23.1...v2.23.2

    v2.23.1

    What's Changed

    Full Changelog: https://github.com/urfave/cli/compare/v2.23.0...v2.23.1

    v2.23.0

    What's Changed

    ... (truncated)

    Commits
    • 600ef6e Merge pull request #1573 from urfave/v2-deps-up
    • e045d5a Merge branch 'v2-maint' into v2-deps-up
    • 107796a Merge pull request #1574 from urfave/v2-gha
    • 28a402f Update github actions events for v2-maint branch
    • 9991c45 Update dependencies in v2 series
    • 61efca6 Merge pull request #1571 from dirkmueller/main
    • 2ec39a1 Update x/text to 0.3.8
    • 46043dd Merge pull request #1553 from dearchap/altsrc_generation
    • 45dc376 Code review comment
    • 190e5b6 Merge pull request #1551 from Edelweiss-Snow/issue_1550
    • Additional commits viewable in compare view

    topic/dependencies 
    opened by dependabot[bot] 0
  • Fix #1796: Disable AutoRelay

    Fix #1796: Disable AutoRelay

    Per https://github.com/libp2p/go-libp2p/issues/1852, the AutoRelay subsystem is now panicking on users. EnableAutoRelay must now be called with options, otherwise it seems to panic for some people.

    Disabling it is best for now: given that relays are enabled and a node must be able to connect to others on bootstrap, it probably does not need to re-discover new relays (every other node should be a relay).

    In any case we should revisit relay support and related services in Cluster, since semantics have changed a lot in libp2p, relayV2 is a thing, hole-punching is a thing etc. etc.

    opened by hsanjuan 0
Releases(v1.0.4)
  • v1.0.4(Sep 27, 2022)

    IPFS Cluster v1.0.4 is a maintenance release addressing a couple of bugs and adding more "state crdt" commands.

    One of the bugs has the potential to cause a panic, while a second one can potentially deadlock pinning operations and hang new pinning requests. We recommend that all users upgrade as soon as possible.

    List of changes

    Breaking changes

    There are no breaking changes on this release.

    Features
    Bug fixes
    Other changes

    No other changes.

    Upgrading notices

    Configuration changes

    There are no configuration changes for this release.

    REST API

    No changes.

    Pinning Service API

    No changes.

    IPFS Proxy API

    No changes.

    Go APIs

    No relevant changes.

    Other

    Nothing.

    Source code(tar.gz)
    Source code(zip)
  • v1.0.3(Sep 16, 2022)

    IPFS Cluster v1.0.3 is a maintenance release addressing some bugs and bringing some improvements to error handling behavior, as well as a couple of small features.

    This release upgrades to the latest libp2p release (v0.22.0).

    List of changes

    Breaking changes

    There are no breaking changes on this release.

    Features
    Bug fixes
    Other changes

    Upgrading notices

    Configuration changes

    There are no configuration changes for this release.

    REST API

    No changes.

    Pinning Service API

    No changes.

    IPFS Proxy API

    The IPFS Proxy now intercepts /block/put and /dag/put requests (see the sketch after the list below). This happens as follows:

    • The request is first forwarded "as is" to the underlying IPFS daemon, with the ?pin query parameter always set to false.
    • If ?pin=true was set, a cluster pin is triggered for every block and dag object uploaded (reminder that these endpoints accept multipart uploads).
    • Regular IPFS response to the uploads is streamed back to the user.
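
    For illustration, a rough sketch (not from the release notes) of uploading a block through the proxy with ?pin=true using Go's standard library. The default proxy address 127.0.0.1:9095 and the multipart field name "data" are assumptions based on the regular IPFS HTTP API:

        package main

        import (
        	"bytes"
        	"fmt"
        	"io"
        	"mime/multipart"
        	"net/http"
        )

        func main() {
        	// Build a multipart body containing the raw block data.
        	var buf bytes.Buffer
        	w := multipart.NewWriter(&buf)
        	part, err := w.CreateFormFile("data", "block") // field name assumed
        	if err != nil {
        		panic(err)
        	}
        	part.Write([]byte("hello cluster"))
        	w.Close()

        	// The proxy forwards this to IPFS with pin=false and, since we asked
        	// for pin=true, triggers a cluster pin for the returned block.
        	req, _ := http.NewRequest(http.MethodPost, "http://127.0.0.1:9095/api/v0/block/put?pin=true", &buf)
        	req.Header.Set("Content-Type", w.FormDataContentType())

        	resp, err := http.DefaultClient.Do(req)
        	if err != nil {
        		panic(err)
        	}
        	defer resp.Body.Close()
        	body, _ := io.ReadAll(resp.Body)
        	fmt.Println(string(body)) // IPFS's own response, streamed back
        }
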
    Go APIs

    No relevant changes.

    Other

    Note that more than 10 failed requests to IPFS will now result in a rate limit of 1req/s for any request to IPFS. This may cause things to queue up instead of hammering the ipfs daemon with requests that fail. The rate limit is removed as soon as one request succeeds.

    Also note that Cluster peers will now not become fully operable until IPFS has been detected to be available: no metrics will be sent, no recover operations will be run, etc. Essentially, the Cluster peer will wait for IPFS to be available before starting to do things that need it, rather than doing them right away and having failures.

    Source code(tar.gz)
    Source code(zip)
  • v1.0.2(Jul 24, 2022)

    IPFS Cluster v1.0.2 is a maintenance release with bug fixes and another iteration of the experimental support for the Pinning Services API that was introduced in v1.0.0, including Bearer token authorization support for both the REST and the Pinning Service APIs.

    This release includes a security fix in the go-car library. The security issue allows an attacker to crash a cluster peer or cause excessive memory usage when uploading CAR files via the REST API (POST /add?format=car endpoint).

    This is also the first release after moving the project from the "ipfs" to the "ipfs-cluster" GitHub organization, which means the project Go modules have new paths (everything is redirected though). The Docker builds remain inside the "ipfs" namespace (i.e. docker pull ipfs/ipfs-cluster).

    IPFS Cluster is also ready to work with go-ipfs v0.13.0+. We recommend upgrading.

    List of changes

    Breaking changes

    Features
    • REST/PinSVC API: support JWT bearer token authorization | https://github.com/ipfs/ipfs-cluster/issues/1703
    • crdt: commit pending batched pins on shutdown | https://github.com/ipfs/ipfs-cluster/issues/1697 | 1719
    • Export a prometheus metric with the current disk informer value | https://github.com/ipfs/ipfs-cluster/issues/1725
    Bug fixes
    • Fix adding large directories | https://github.com/ipfs/ipfs-cluster/issues/1691 | https://github.com/ipfs/ipfs-cluster/issues/1700
    • PinSVC API: fix compliance errors and bugs | https://github.com/ipfs/ipfs-cluster/issues/1704
    • Pintracker: fix missing and wrong values in PinStatus object fields for recovered operations | https://github.com/ipfs/ipfs-cluster/issues/1705
    • ctl: fix "Exp" label showing the pin timestamp instead of the expiration date | https://github.com/ipfs/ipfs-cluster/issues/1666 | https://github.com/ipfs/ipfs-cluster/issues/1716
    • Pintracker: fix races causing wrong counts in metrics | https://github.com/ipfs/ipfs-cluster/issues/1717 | https://github.com/ipfs/ipfs-cluster/issues/1729
    • Update go-car to v0.4.0 (security fixes) | https://github.com/ipfs/ipfs-cluster/issues/1730

    Other changes

    • Improve language, fix typos to changelog | https://github.com/ipfs/ipfs-cluster/issues/1667
    • Update comment in docker-compose | https://github.com/ipfs/ipfs-cluster/issues/1689
    • Migrate from ipfs/ipfs-cluster to ipfs-cluster/ipfs-cluster | https://github.com/ipfs/ipfs-cluster/issues/1694
    • Enable spell-checking and fix spelling errors (US locale) | https://github.com/ipfs/ipfs-cluster/issues/1695
    • Enable CodeQL analysis and fix security warnings | https://github.com/ipfs/ipfs-cluster/issues/1696
    • Dependency upgrades: libp2p-0.20.1 etc. | https://github.com/ipfs/ipfs-cluster/issues/1711 | https://github.com/ipfs/ipfs-cluster/issues/1712 | https://github.com/ipfs/ipfs-cluster/issues/1724
    • API: improve debug logging during tls setup | https://github.com/ipfs/ipfs-cluster/issues/1715

    Upgrading notices

    Configuration changes

    There are no configuration changes for this release.

    REST API

    The REST API has a new POST /token endpoint, which returns a JSON object with a JWT token (when correctly authenticated).

    This token can be used to authenticate using Authorization: Bearer header on subsequent requests.

    The token is tied to and verified against a basic authentication user and password, as configured in the basic_auth_credentials field.

    At the moment we do not support revocation, expiration and other token options.
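
    For illustration, a minimal sketch (not from the release notes) of obtaining and using a token with Go's standard library. It assumes the default REST API address 127.0.0.1:9094, already-configured basic-auth credentials, and that the returned JSON object carries the JWT in a "token" field (the field name is an assumption):

        package main

        import (
        	"encoding/json"
        	"fmt"
        	"io"
        	"net/http"
        )

        func main() {
        	// 1. POST /token with basic auth to obtain a JWT.
        	req, _ := http.NewRequest(http.MethodPost, "http://127.0.0.1:9094/token", nil)
        	req.SetBasicAuth("user", "password") // values from basic_auth_credentials (assumed)

        	resp, err := http.DefaultClient.Do(req)
        	if err != nil {
        		panic(err)
        	}
        	defer resp.Body.Close()

        	var tok struct {
        		Token string `json:"token"` // field name assumed
        	}
        	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
        		panic(err)
        	}

        	// 2. Authenticate subsequent requests with an Authorization: Bearer header.
        	req2, _ := http.NewRequest(http.MethodGet, "http://127.0.0.1:9094/id", nil)
        	req2.Header.Set("Authorization", "Bearer "+tok.Token)

        	resp2, err := http.DefaultClient.Do(req2)
        	if err != nil {
        		panic(err)
        	}
        	defer resp2.Body.Close()
        	body, _ := io.ReadAll(resp2.Body)
        	fmt.Println(string(body))
        }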

    Pinning Service API

    • The Pinning Service API has a new POST /token endpoint, which returns a JSON object with a JWT token (when correctly authenticated). See the REST API section above.

    IPFS Proxy API

    No changes to IPFS Proxy API.

    Go APIs

    • All cluster modules have new paths: every instance of "ipfs/ipfs-cluster" should now be "ipfs-cluster/ipfs-cluster".

    Other

    • go-ipfs v0.13.0 introduced some changes to the Block/Put API. IPFS Cluster now uses the cid-format option when performing Block-Puts. We believe the change does not affect adding blocks and that it should still work with previous go-ipfs versions, yet we recommend upgrading to go-ipfs v0.13.1 or later.

    Prebuilt Binaries

    All prebuilt binaries are available on dist.ipfs.io

    Source code(tar.gz)
    Source code(zip)
  • v1.0.1(May 7, 2022)

    IPFS Cluster v1.0.1 is a maintenance release ironing out some issues and bringing a couple of improvements around observability of cluster performance:

    • We have fixed the ipfscluster_pins metric and added a few new ones that help determine how fast the cluster can pin and add blocks.
    • We have added a new Informer that broadcasts current pinning-queue size, which means we can take this information into account when making allocations, essentially allowing peers with big pinning queues to be relieved by peers with smaller pinning queues.

    Please read below for a list of changes and things to watch out for.

    List of changes

    Breaking changes

    Peers running IPFS Cluster v1.0.0 will not be able to read the pin's user-set metadata fields for pins submitted by peers running later versions, since metadata is now stored in a different protobuf field. If this is an issue, all peers in the cluster should upgrade.

    Features
    Bug fixes
    Other changes

    Upgrading notices

    Configuration changes

    There is a new pinqueue configuration object inside the informer section on newly initialized configurations:

      "informer": {
        ...
        "pinqueue": {
          "metric_ttl": "30s",
          "weight_bucket_size": 100000
        },
    	...
    

    This enables the Pinqueue Informer, which broadcasts metrics containing the size of the pinqueue with the metric weight divided by weight_bucket_size. The new metric is not used for allocations by default, and it needs to be manually added to the allocate_by option in the allocator, usually like:

    "allocator": {
       "balanced": {
         "allocate_by": [
           "tag:group",
           "pinqueue",
           "freespace"
         ]
       }
    
    REST API

    No changes to REST API.

    IPFS Proxy API

    No changes to IPFS Proxy API.

    Go APIs

    No relevant changes to Go APIs, other than the PinTracker interface now requiring a PinQueueSize method.

    Other

    The following metrics are now available in the Prometheus endpoint when enabled:

    ipfscluster_pins_ipfs_pins gauge
    ipfscluster_pins_pin_add counter
    ipfscluster_pins_pin_add_errors counter
    ipfscluster_blocks_put counter
    ipfscluster_blocks_added_size counter
    ipfscluster_blocks_added counter
    ipfscluster_blocks_put_error counter
    

    The following metrics were converted from counter to gauge:

    ipfscluster_pins_pin_queued
    ipfscluster_pins_pinning
    ipfscluster_pins_pin_error
    

    Peers that report freespace as 0 and which use this metric to allocate pins will no longer be available for allocations (they stop broadcasting this metric). This means that setting StorageMax on IPFS to 0 effectively prevents any pins from being explicitly allocated to a peer (that is, when replication_factor != everywhere).

    Source code(tar.gz)
    Source code(zip)
  • v1.0.0(Apr 22, 2022)

    IPFS Cluster v1.0.0 is a major release that marks the project having reached maturity: it is able to perform and scale in production environments (50+ million pins and 20 nodes).

    This is a breaking release: v1.0.0 cluster peers are not compatible with previous cluster peers, as we have bumped the RPC protocol version (which had remained unchanged since 0.12.0).

    For a full list of changes, see the CHANGELOG.

    Source code(tar.gz)
    Source code(zip)
  • v0.14.5(Feb 16, 2022)

  • v0.14.4(Jan 11, 2022)

  • v0.14.3(Jan 3, 2022)

  • v0.14.2(Jan 3, 2022)

  • v0.14.1(Aug 16, 2021)

Owner
IPFS
A peer-to-peer hypermedia protocol