Technical specifications for the IPFS protocol stack


IPFS Specifications


This repository contains the specs for the IPFS Protocol and associated subsystems.

Understanding the meaning of the spec badges and their lifecycle

We use the following label system to identify the state of each spec:

  • **wip** - A work-in-progress, possibly to describe an idea before actually committing to a full draft of the spec.
  • **draft** - A draft that is ready to review. It should be implementable.
  • **reliable** - A spec that has been adopted (implemented) and can be used as a reference point to learn how the system works.
  • **stable** - We consider this spec close to final; it might be improved, but the system it specifies should not change fundamentally.
  • **permanent** - This spec will not change.
  • **deprecated** - This spec is no longer in use.

Nothing in this spec repository is permanent or even stable yet. Most of the subsystems are still in a draft or reliable state.

Index

The specs contained in this repository are:

Contribute

Suggestions, contributions, and criticisms are welcome, though please make sure to familiarize yourself deeply with IPFS, the models it adopts, and the principles it follows. This repository falls under the IPFS Code of Conduct.

Comments
  • WIP: IPLD spec


    This PR adds a new IPLD spec.

    Some things TODO:

    • [x] paths: link to path issue in go-ipfs or go-ipld
    • [ ] paths: list path resolving restrictions
    • [x] paths: show examples of path resolving
    • [ ] examples/bitcoin: make this a real txn
    • [ ] more examples

    @mildred @diasdavid could you review?

    help wanted 
    opened by jbenet 55
  • Background on Address Scheme discussions (dweb, ipfs:// and more)


    This still needs work, but it gives a starting point for people to clarify the discussion. Please submit comments and/or changes.

    ref ipfs/ipfs#227 ipfs/in-web-browsers#4 ipfs/in-web-browsers#28 https://github.com/ipfs/specs/pull/139

    status/in-progress 
    opened by flyingzumwalt 37
  • IPFS doesn't use TLS


    protocol/network#Encryption:

    IPFS uses cyphersuites like TLS.

    NOTE: we do not use TLS directly, because we do not want the CA system baggage. Most TLS implementations are very big. Since the IPFS model begins with keys, IPFS only needs to apply ciphers. This is a minimal portion of the whole TLS standard.

    So you rolled your own transport encryption? I find this surprising, and I'm… skeptical.

    This needs to be a new section (or an entirely separate document) rather than an aside.

    (Also, I'll add that if all you want is to connect to something, exchange RSA public keys, and get an encrypted transport stream, TLS is entirely capable of providing that without involving CAs.)

    opened by willglynn 27
  • Unite the Files API 🗂


Important note: This is not an entirely new proposal; it has been discussed several times over the course of the last 9 months, in different situations, with different views and opinions across the IPFS team. This comes to formalize the process so that we can commit to it and move it forward :)

Several of our users have been misled into thinking of IPFS as a File System instead of a Graph Database (fair point, that is the name of the project after all :D). This mostly comes from the fact that the cat, get and add commands come as first order commands and are the ones mostly used by demos.

Now, with the addition of the 'Files API', another layer of complexity and indirection was added. The common reaction to it is: "Wait, what is the Files API? Weren't we adding files all this time?"

With this, we miss the chance to lead our users to understand what great Graph Database primitives IPFS offers, and we also make it really hard for users to understand what the Files API is, especially when it has such a generic name.

So, here is the proposal (this is not just me, although I'm the one writing it) that we've discussed at several points in time.

Rename the Files API to mfs. This will give us one non-generic keyword we can use with our users when referring to it, and also something that the community will be able to search for specifically, since it has very technical details.

Move the cat, get and add commands under the files umbrella.

    In practical terms, this means:

    # currently, the file manipulation commands are:
    ipfs cat
    ipfs get
    ipfs add
    ipfs files mkdir
    ipfs files flush
    ipfs files read
    ipfs files cp
    ipfs files ls
    ipfs files stat
    ipfs files rm
    ipfs files write
    

    Next:

    # Manipulation of files
    ipfs files cat
    ipfs files get
    ipfs files add
    
    # mfs - Mutable File System
    ipfs mfs mkdir
    ipfs mfs flush
    ipfs mfs read
    ipfs mfs cp
    ipfs mfs ls
    ipfs mfs stat
    ipfs mfs rm
    ipfs mfs write
    
    opened by daviddias 22
  • IPIP: Gateway _redirects File


    opened by justincjohnson 21
  • Experimental Proposal: CIDv1 -- IPLD, Multicodec-packed, and more


    READ THIS PARAGRAPH FIRST

    Hey everyone, the below is a proposal for some changes to IPFS, IPLD, and how we link to data structures. It would address a bunch of open problems that have been identified, and improve the use, tooling, and model of IPLD to allow lots of what people have been requesting for months. Please review and leave comments. We feel pretty strongly about this being a good solution, but we're not sure if we're just drinking the koolaid and are going to make things worse. Sanity check before we move further, please? Also, my apologies: I would spend more time writing up a better version, but I just don't have enough time right now, and time is of the essence on this.


    [EXPERIMENTAL PROPOSAL] CIDv1 -- Important Updates to IPFS, IPLD, Multicodec, and more.

    IPFS migration path to IPLD (CBOR) from MerkleDAG (ProtoBuf)

    Multicodec Packed Representation

    It is useful to have a compact version of multicodec, for use in small identifiers. This compact identifier will just be a single varint, looked up in a table. Different applications can use different tables. We should probably have one common table for well-known formats.

    We will establish a table for common authenticated data structure formats, for example: IPFS v0 Merkledag, CBOR IPLD, Git, Bitcoin, and more. The table is a simple varint lookup.
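The packed representation described above can be sketched as an unsigned varint (LEB128-style: 7 bits per byte, with the high bit set while more bytes follow) looked up in a table. This is a minimal sketch; the table entries below are illustrative, not the official codes.

```python
def varint_encode(n: int) -> bytes:
    """Encode a non-negative integer as an unsigned varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def varint_decode(buf: bytes) -> tuple[int, int]:
    """Return (value, bytes consumed)."""
    value = shift = 0
    for i, byte in enumerate(buf):
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return value, i + 1
        shift += 7
    raise ValueError("truncated varint")

# Hypothetical format table (codes chosen for illustration only)
FORMATS = {0x70: "protobuf-merkledag", 0x71: "cbor-ipld", 0x78: "git"}

prefix = varint_encode(0x71)        # one byte for small codes
code, consumed = varint_decode(prefix)
assert FORMATS[code] == "cbor-ipld"
```

Well-known formats with small table codes thus cost a single byte of prefix, versus the 13 bytes of a path-style header.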

    IPLD Links Updates (new format)

    Open Problems (Motivation)

    IPLD allows content to be stored in multiple different formats, and thus we need a way to understand what kind of content is being loaded in when traversing a link. A problematic issue is that old ipfs content (protobuf merkledag) does not use multicodec. It makes it difficult to distinguish between the new CBOR IPLD objects and the old Protobuf objects.

    It has been proposed earlier that we wrap protobuf objects with a multicodec. But this is a problem, because the protobuf multicodec would not be authenticated. This is further complicated because many people have been requesting the ability to address raw leaf objects directly (that is, a hash linking to raw content, without ipld nor protobuf wrapping). This is a nice thing to have, but introduces difficulty in distinguishing between a protobuf or a raw encoded object, particularly when neither has a multicodec header which is authenticated by the object's hash. This lack of authentication is an attack vector: adversaries may provide protobuf objects with a raw multicodec, and depending on how implementations handle the multicodec, may poison an implementation's object repo.

    Another important performance constraint is that multicodec headers are quite large: /ipld/cbor/v0, for example, is 13 bytes, which is way too large for many applications of small data. Instead, we would like to use a compact multicodec representation ("multicodec packed", a single varint) to distinguish the formats, so that encoded objects are wrapped with minimal overhead. Note that this still does not affect protobuf or raw objects, because these do not include headers.

    Additional complications include how bitswap sends or identifies blocks, how a DagStore can pull out the object for a multihash and know what format encoding to use for it (eg raw vs protobuf), whether to allow linking from one object type to another, support for multiple base encodings for links, among others.

    In discussions, we (@jbenet, @diasdavid, and @whyrusleeping) reviewed many different possibilities. We considered how each affected linking data, wrapping the data with multicodec, storing it that way under the many layers of abstraction (dag store, blockstore, datastore, file systems), fetching and retrieving objects, knowing what format to use when, ensuring values are authenticated and not opening up vectors for attackers to poison repos, and more.

    In the end, we came up with a few small changes to how we represent IPLD links that solve all our problems (tm) \o/. These are:

    • teach IPLD links to carry data formats (using multicodec)
    • teach IPLD links to distinguish base encodings

    It is worth crediting the many people here who have tirelessly pushed hard to get a bunch of these ideas out: @davidar @mildred @nicola to name a few, but many others too. But they haven't looked at this yet; this first post is the first they'll hear of this construction, and they may very well hate this particular combination of ideas :) Please be direct with feedback; the sooner the better.

    IPLD Links learn about Base Encoding

    We propose adding a multibase prefix to representations of IPLD links. This is particularly important where the encoding is not binary.

    At this time, we recommend not including it in direct storage, where it should be binary. However, it may be found during the course of review that it is better to always retain the multibase prefix, even when storing in binary.

    This change is a much-requested option to support multiple encodings for the hashes. Current links use base58 by default, which is perfect for URLs as it doesn't contain any unsupported characters and can be easily copy-pasted; however, for performance reasons, it is not always the best format. Some users already encode IPFS multihashes in other bases, and therefore it would be ideal to have all IPFS and IPLD tooling support these encodings through multibase, avoiding confusing failures.
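As a rough illustration of the multibase idea, the same bytes can be carried in different bases, disambiguated by a single leading prefix character. The 'f' (base16) and 'm' (base64) assignments below follow the multibase draft table, but treat the exact prefix characters as assumptions of this sketch.

```python
import base64

def multibase_encode(prefix: str, data: bytes) -> str:
    """Encode bytes under a multibase prefix (sketch: base16 and base64 only)."""
    if prefix == "f":                 # base16 (hex)
        return "f" + data.hex()
    if prefix == "m":                 # base64, no padding
        return "m" + base64.b64encode(data).decode().rstrip("=")
    raise ValueError("unknown multibase prefix")

digest = bytes([0x00, 0x01, 0x02, 0x03])
assert multibase_encode("f", digest) == "f00010203"
```

A decoder reads the first character, picks the matching base, and decodes the remainder, so tooling never has to guess which encoding a copy-pasted identifier uses.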

    IPLD Links acquire a version

    The fact that we propose changes here to the basic link structure reminds us of the basic multiformats principle:

    "Never going to change" considered harmful.

    therefore we deem it wise to ensure that henceforth we include a version, so that evolution can be simple rather than complex. The changes below suggest a way to distinguish between old and new links, but we should avoid such situations in the future, as this approach leverages knowledge about multihash distributions in the wild, which will be less feasible going forward.

    IPLD Links learn about Codecs

    The most important component of these changes introduces a multicodec-packed varint prefix to the link, to signal the encoding of the linked-to object. This enables the link to carry information about the data it points to, and ensure it is interpreted correctly. This ensures that the multicodec prefix is NOT necessary for interpretation of an IPLD object, as the link to the object carries information for its interpretation.

    All proper IPLD formats (cbor and on) should carry the multicodec header at the beginning of their serialized representation, which authenticates the header and ensures clients can interpret the object without even having a link. But, this is not possible with objects of formats created before the IPLD spec, such as the first merkledag protobuf object codec in IPFS (go-ipfs 0.4.x and below). This includes also objects from other authenticated data structure distributed systems, such as Git, Bitcoin, Ethereum, and more. Finally, raw data -- which many hope to be able to address directly in IPLD -- cannot carry an authenticated prefix either.

    The approach of adding the multicodec to the link entirely side-steps the problem of not being able to authenticate multicodec headers for protobufs, git, bitcoin, or raw data objects. And this avoids a nasty repo poisoning attack, possible in other proposed suggestions that rely on an unauthenticated multicodec header (carried along with the object) to determine the type of an object.

    This also ensures that IPLD objects can still be content-addressed nicely, without needing to also store codec metadata alongside.

    This change has been long proposed in other forms. These other forms usually suggested attaching a @multicodec key to IPLD link objects (as a property on or next to the link), which was cumbersome and introduced complexity in other ways. In particular, it was not easy to carry this info over to a URL or a copy-pasted identifier.

    This multicodec-packed prefix will be sampled from a special table, maintained along with the IPLD spec. This table is expandable over time. A global multicodec table could grow from this one, or start separately.

    Content IDs

    This document will use the words Content IDs, or CIDs. This abstraction is useful here but may not be useful beyond it. Another term -- albeit much less precise -- may be IPLD Link.

    Other options are:

    • SID - Self-describing IDentifier
    • SSDID - Secure Self Describable Identifier
    • IPLD Links -- no fancy name, less abstraction creep. less precise.

    Let the old base58 multihash links to protobuf data be called CID Version 0.

    CIDs Version 1 (new)

    Putting together the IPLD Link updates above, we can call the new handle for IPLD data CID Version 1: a multibase prefix, a version, a packed multicodec, and a multihash.

    <mbase><version><mcodec><mhash>
    

    Where:

    • <mbase> is a multibase prefix describing the base that encodes this CID. If binary, this is omitted.
    • <version> is the version number of the cid.
    • <mcodec> is a multicodec-packed identifier, from the CID multicodec table
    • <mhash> is a cryptographic multihash, including: <mhash-code><mhash-len><mhash-value>

    Note that all CIDs v1 and on should always begin with <mbase><version>, thus evolving nicely.
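A minimal sketch of assembling a binary CIDv1 (where <mbase> is omitted, since the representation is binary). 0x12 is the multihash code for sha2-256; 0x71 is used here as an assumed packed code for CBOR IPLD, not a confirmed table entry.

```python
import hashlib

def varint(n: int) -> bytes:
    # unsigned varint: 7 bits per byte, high bit set while more bytes follow
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append((b | 0x80) if n else b)
        if not n:
            return bytes(out)

def cid_v1(mcodec: int, data: bytes) -> bytes:
    """Binary CIDv1: <version><mcodec><mhash> (no <mbase>, since binary)."""
    digest = hashlib.sha256(data).digest()
    # multihash = <mhash-code><mhash-len><mhash-value>
    mhash = varint(0x12) + varint(len(digest)) + digest
    return varint(1) + varint(mcodec) + mhash

cid = cid_v1(0x71, b"hello")
assert cid[0] == 1                  # <version>
assert cid[1] == 0x71               # <mcodec>, packed
assert cid[2:4] == b"\x12\x20"      # sha2-256 multihash, 32-byte digest
```

For a string representation, the whole byte string would then be encoded in some base and prefixed with the corresponding multibase character.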

    Distinguishing v0 and v1 CIDs (old and new)

    It is a HARD CONSTRAINT that all IPFS links continue to work. This means we need to continue to support v0 CIDs. This means IPFS APIs must accept both v0 and v1 CIDs. This section defines how to distinguish v0 from v1 CIDs.

    Old v0 CIDs are strictly sha2-256 multihashes encoded in base58 -- this is because IPFS tooling only shipped with support for sha2-256. This means the binary versions are 34 bytes long (a sha2-256 multihash carrying a 256-bit digest), and the string versions are 46 characters long (base58 encoded). We can therefore recognize a v0 CID by checking that it is a sha2-256 multihash with a 256-bit digest, base58-encoded when a string. Basically:

    • <mbase> is implicitly base58.
    • <version> is implicitly 0.
    • <mcodec> is implicitly protobuf (todo: add code here)
    • <mhash> is a cryptographic multihash, explicit.

    We can re-write old v0 CIDs into v1 CIDs, by making the elements explicit. This should be done henceforth to avoid creating more v0 CIDs. But note that many references exist in the wild, and thus we must continue supporting v0 links. In the distant future, we may remove this support after sha2 breaks.

    Note we can cleanly distinguish the values, which makes it easy to support both. The code for this check is here: https://gist.github.com/jbenet/bf402718a7955bf636fb47d214bcef8a
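The shape of that check can be sketched roughly as follows. This is a simplified take on the idea, not a copy of the linked code: a binary v0 CID is a bare sha2-256 multihash, while a binary v1 CID starts with the version varint 0x01, so the two never collide.

```python
BASE58_ALPHABET = set("123456789ABCDEFGHJKLMNPQRSTUVWXYZ"
                      "abcdefghijkmnopqrstuvwxyz")

def is_cid_v0_binary(buf: bytes) -> bool:
    # v0: a bare sha2-256 multihash -- 0x12 (sha2-256), 0x20 (32-byte digest);
    # v1 binary CIDs instead begin with the version varint 0x01
    return len(buf) == 34 and buf[0] == 0x12 and buf[1] == 0x20

def is_cid_v0_string(s: str) -> bool:
    # v0 strings: exactly 46 base58 characters (the base58 encoding
    # of a 34-byte sha2-256 multihash)
    return len(s) == 46 and set(s) <= BASE58_ALPHABET
```

Anything failing these checks would then be parsed as a CIDv1: multibase prefix (if a string), version varint, packed multicodec, multihash.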

    IPLD supports non-CID hash links as implicit CIDv1s

    Note that raw hash links stored in various data structures (e.g. Protobuf, Git, Bitcoin, Ethereum, etc.) already exist. These links -- when loaded directly as one of these data structures -- can be seen as "linking within a network", whereas proper CIDv1 IPLD links can be seen as linking "across networks" (internet of data! internet of data structures!). Supporting these existing (or even new) raw hash links as a CIDv1 can be done by noting that when a data structure links with just a raw binary hash, the rest of the CIDv1 fields are implicit:

    • <mbase> is implicitly binary or whatever the format encodes.
    • <version> is implicitly 1.
    • <mcodec> is implicitly the same as the data structure.
    • <mhash> can be determined from the raw hash.

    Basically, we construct the corresponding CIDv1 out of the raw hash link because all the other information is in the context of the data structure. This is very useful because it allows:

    • more compact encoding of a CIDv1 when linking from one data struct to another
    • linking from CBOR IPLD to other CBOR IPLD objects exactly as has been spec-ed out so far, so any IPLD adopters continue working.
    • (most important) opens the door for native support of other data structures
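The promotion of a raw in-structure hash to an explicit binary CIDv1 can be sketched as below; 0x11 is the multihash code for sha1, while 0x78 is used as an assumed packed code for the git format (an illustration, not a confirmed table entry).

```python
def varint(n: int) -> bytes:
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append((b | 0x80) if n else b)
        if not n:
            return bytes(out)

def implicit_cid_v1(context_mcodec: int, hash_fn_code: int, digest: bytes) -> bytes:
    """Promote a raw hash link found inside a data structure to a binary CIDv1.

    <version> is implicitly 1 and <mcodec> comes from the enclosing data
    structure; only the raw digest is present in the original link."""
    mhash = varint(hash_fn_code) + varint(len(digest)) + digest
    return varint(1) + varint(context_mcodec) + mhash

# e.g. a raw sha1 link inside a git object
cid = implicit_cid_v1(0x78, 0x11, bytes(20))
assert cid[:2] == b"\x01\x78"
```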

    IPLD native support for Git, Bitcoin, Ethereum, and other authenticated data structures

    Given the above addressing changes, it is now possible to directly address and implement native support for Git, Bitcoin, Ethereum, and other authenticated data structure formats. Such native support would allow resolving through such objects, and treat them as true IPLD objects, instead of needing to wrap them in CBOR or another format. This is the proper merkle-forest. \o/

    IPLD addresses raw data

    Given the above addressing changes, it is now possible to address raw data directly, as an IPLD node. This node is of course taken to be just a byte buffer, and devoid of links (i.e. a leaf node).

    The utility of this is the ability to directly address any object via hashing external to IPLD datastructures, which is a much-requested feature.

    Support for multiple binary packed formats

    Contrary to existing Merkle objects (e.g. IPFS protobuf legacy, git, bitcoin, dat and others), new IPLD objects are authenticated AND self-described data blobs: each IPLD object is serialized and prefixed by a multicodec identifying its format.

    Some candidate formats:

    • /ipld/cbor
    • /ipld/ion/1.0.0
    • /ipld/protobuf/3.0.0
    • /ipld/protobuf/2.0.0

    There is one strong requirement for these formats to work: a format MUST have a 1:1 mapping to the canonical IPLD serialization format. Today (July 29, 2016), that format is CBOR.

    Changes to Interfaces / Specs

    Need changes to:

    • IPFS specs (addressing in particular) need to support CIDv1
    • IPFS interfaces need to support CIDv1
    • Add a new, small CIDv1 or "IPLD Links" spec
    • IPLD spec is compatible. Can improve in wording. CBOR data format does not change. Pathing does not change.

    Support for CID v0 and v1

    It is a HARD CONSTRAINT that all IPFS links continue to work. In order to support both CID v0 paths (/ipfs/<mhash>) and the new CID v1 paths (/ipfs/<mbase><version><mcodec><mhash>), IPFS and other IPLD tooling will detect the version of the CID through a matching function. (See "Distinguishing v0 and v1 CIDs (old and new)" above.)

    The following interfaces must support both types:

    • The IPFS API, which takes CIDs and Paths
      • This includes subprotocols, such as Bitswap
    • HTTP-to-IPFS Gateway, for all existing https://ipfs.io/ipfs/... links
    opened by jbenet 19
  • Adding additional file metadata to UnixFSv1


    Current UnixFSv1 importers do not encode most of the standard file metadata from most file systems.

    This has been a particular challenge for package managers since they already rely on some of this metadata.

    The goal of this issue is to surface all the necessary discussion points in order to drive a new PR against the unixfs spec.

    Potential metadata

    • Permissions
      • Executable bit
      • Ownership (user and group)
    • Filename in file object
    • mtime
    • ctime
    • atime

    Additional considerations

    For timestamps (mtime, ctime, atime) we need to decide whether we're going to use high-precision times or not. Most systems expect a 32-bit integer (low precision), while other use cases may need a 64-bit integer (high precision).
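For concreteness, a signed 32-bit seconds-since-epoch value runs out in January 2038, which is one argument in favor of the 64-bit option:

```python
from datetime import datetime, timezone

# A signed 32-bit seconds-since-epoch timestamp (the "low precision"
# option) overflows on 2038-01-19; a 64-bit integer does not.
MAX_INT32 = 2**31 - 1
limit = datetime.fromtimestamp(MAX_INT32, tz=timezone.utc)
assert (limit.year, limit.month, limit.day) == (2038, 1, 19)
```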

    Do we want to store additional metadata of the directory? How do we handle updating this when someone updates only a single file in the directory?

    Where do we store this metadata?

    In terms of the data format, should these properties be added to the File message or the Data message?

    History

    The history of this feature as well as meeting notes where this feature was prioritized are available here.

    opened by mikeal 16
  • docs: change mode and mtime to strings


    As currently defined, the closest we can come to removing a mode/mtime previously set on a file is to set its value to 0, which works in that the CID returns to what it would have been pre-metadata, but when combined with default file/dir modes it also has the side effect of making it impossible to actually set the file mode to 0.

    Worse, if you set the file mode to 0 you actually get the default file mode of 0644 back, which is probably not what you intended.

    This is because protocol buffers assign default values to all fields so we cannot differentiate between the situation where a user has set a thing to 0 vs when they haven't set anything at all.

    The proposed solution is to store the mode/mtime as a string, which can then be interpreted as a base-8 number (base 10 for mtime). This is a little icky but does solve the problem:

    | Value | Interpretation |
    | -------- | -------- |
    | `''` | No mode has been set |
    | `'0'` | Mode has been set to zero |
    | `'0755'` | Mode has been set to 0755 |
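The mapping above could be read back as follows; `parse_mode` is a hypothetical helper for illustration, not part of the spec:

```python
def parse_mode(value: str):
    """Hypothetical helper: interpret the proposed string-encoded mode field.

    Returns None when no mode has ever been set, else the mode as an int,
    so "unset" and "explicitly zero" are finally distinguishable."""
    if value == "":
        return None
    return int(value, 8)  # base-8: '0755' -> 0o755

assert parse_mode("") is None       # no mode has been set
assert parse_mode("0") == 0         # mode explicitly set to zero
assert parse_mode("0755") == 0o755
```

This sidesteps the protocol buffers limitation: a string field's empty default is distinguishable from `'0'`, whereas an integer field's 0 default is not.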

    mtime is changed too because if I add some metadata to a file, I should be able to remove it - it shouldn't be a one-way thing. Unless we are happy with the mtime of all entries defaulting to 1970.

    It also removes the default mode values as otherwise it's not an opt-in upgrade.

    opened by achingbrain 15
  • Proposed IPLD changes


    This patch fixes some of the inconsistencies in the IPLD spec, as well as resolving some existing issues, namely:

    • ipfs/go-ipfs#2053 - allow raw data in leaf nodes
    • ipfs/go-ipfs#1582 - use EJSON for encoding (non-UTF8) binary data
    • mandate CBOR Strict Mode to avoid having to deal with duplicated keys

    Also some possible changes to the examples, but these are more to do with suggested convention rather than IPLD itself:

    • change link to content (although we could also use data or something similar), as file content needn't be merkle-links, and can in fact be directly embedded into small files
    • removed the extraneous subfiles wrapper, as file content can be either (a merkle-link to) a bytestring, a unicode string, or a list thereof

    I can split these into separate PRs if that would be more helpful.

    opened by davidar 14
  • IPLD merkle-path improvements


    Improves PR #37 and replaces PR #60. The idea is that there is only one kind of merkle-paths that are not to be confused with unixfs paths. These paths are powerful enough to be able to access multiple properties in IPLD objects and resolve merkle links.

    opened by mildred 14
  • Relationship with Protocol Buffers legacy IPFS node format


    In PR #37, we left an important part of the spec aside: the relationship with protocol buffer serialization. This ought to be described, as it has effects that may be far-reaching.

    TODO items:

    • [X] Decide if we choose a format that requires path component escaping: no escaping needed
    • [X] Decide which special key to use to avoid conflict with path component (@attrs in one proposition, with @ escaping, . in the other proposition): not needed
    opened by mildred 13
  • Mention record owner optimisation


    Related: https://github.com/ipfs/specs/issues/274

    Changes IPNS.md to mention that the owner of a record is implicitly trusted to hold the most recent record, and implementations MAY skip the sequence value comparison in the case where the provider of the record is the record owner.

    opened by Winterhuman 0
  • Create Routing Specs


    We have various routing types, yet no high-level spec; specific routing types also lack basic specs.

    Routing specs we should have:

    • [ ] High-level overview of the types of routing, pointing at sub-specs for specific types / implementations / standards
    • [ ] DHT (peer / content / ipns records + put/get strategies and defaults - when to expire, how many copies to put/get etc)
    • [ ] Pubsub (IPNS over Pubsub example, but fleshed out as a generic protocol)
    • [ ] content routing over Bitswap: https://research.protocol.ai/publications/accelerating-content-routing-with-bitswap-a-multi-path-file-transfer-protocol-in-ipfs-and-filecoin/delarocha2021.pdf
      • [ ] sub-quest: improve bitswap specs themselves based on the above paper.
    • [ ] IPNI router (tbd, depends on https://github.com/ipfs/specs/pull/342)
    dif/expert P1 
    opened by lidel 0
  • IPIP: Add "delete provider" to Content Routing HTTP API

    The idea here is to let the publisher manage the lifecycle of provider records. This may be simple TTL eviction or something more sophisticated like deduplication of records. Idempotency may become more important here since we are manually managing resource lifecycle. This is likely a use case for network indexers, but we haven't fleshed out the details yet, likely @ischasny will add once we know more.

    P2 status/blocked kind/enhancement need/analysis IPIP 
    opened by guseggert 1
  • IPFS DHT Specification is missing


    This is a placeholder for missing specs.

    Problem

    • No specs for IPFS DHT exist.
    • The Kad DHT specs live in https://github.com/libp2p/specs/tree/master/kad-dht and are currently focused on peer routing only (scope limited to libp2p's use of the DHT).
    • IPFS specifics, such as how provider records and IPNS records are published / resolved, were removed from the mentioned specs, but should still be in the git history of that repo.

    TODO

    • [ ] see if we can salvage any old content from https://github.com/libp2p/specs/tree/master/kad-dht
    • [ ] decide if our spec sits on top of the libp2p one, or if we write our own end-to-end specs
    • [ ] WRITE THE SPECS and get sign-off from GO, JS and Rust impl.

    If anyone has good content we should include, link below.

    P0 need/triage 
    opened by lidel 0
  • IPIP: Streaming Delegated Content Routing


    To support DHT use cases, the Delegated Content Routing HTTP API needs to support a streaming response that streams providers as they are found, probably using an application/x-ndjson response format.
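As a rough sketch of what an application/x-ndjson response body could look like, one JSON object per line (the field names below are illustrative, not a finalized schema):

```python
import json

# Hypothetical provider records, streamed one JSON object per line so a
# client can act on each record before the response is complete.
providers = [
    {"Protocol": "transport-bitswap", "ID": "peer-1"},
    {"Protocol": "transport-bitswap", "ID": "peer-2"},
]

def to_ndjson(records):
    # application/x-ndjson: newline-delimited JSON, flushable per record
    return "".join(json.dumps(r, separators=(",", ":")) + "\n" for r in records)

body = to_ndjson(providers)
for line in body.splitlines():      # clients parse line by line
    record = json.loads(line)
    assert "ID" in record
```

A server would write and flush each line as a provider is found, rather than buffering a single JSON array until the lookup finishes.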

    P1 kind/enhancement IPIP 
    opened by guseggert 0