IPFS Project && Working Group Roadmaps Repo

Overview

IPFS Project Roadmap v0.6.0

Table of Contents

IPFS Mission Statement

The mission of IPFS is to create a resilient, upgradable, open network to preserve and grow humanity’s knowledge.

This looks different! Want to participate in helping define our "Mission Statement 2.0"? Add your thoughts here!

2020 Priority

Scoping in to 2020 H1

Instead of a 2020 year-long plan, we decided to focus on a 2020 H1 plan (covering Q1 & Q2) so as to:

  • Enable our team to truly focus on one thing, complete it, and then move on to other challenges instead of doing many things at once
  • Better understand the components of each goal and plan our time accordingly to hit them by not trying to nail down plans too far into the future
  • Be adaptable and prepared for surprises, re-prioritizations, or market shifts that require us to refocus energy or change our plan in the course of the year

2020 H1 Priority Selection Criteria

Before selecting a 2020 H1 priority, we did an open call for Theme Proposals to surface areas the community felt were of high importance and urgency. We combined these great proposals with an analysis of the project, team, and ecosystem state - and the biggest risks to IPFS Project success. Out of that analysis, we identified there were two main aspects our 2020 H1 plan MUST address:

  1. Mitigate current IPFS pain points around network performance and end user experience that are hindering wider adoption and scale
  2. Increase velocity, alignment, and capacity for IPFS devs and contributors to ensure our time and efforts are highly leveraged (because if we can make fast, sustained, high-quality progress by leveling-up our focus and healthy habits, we can achieve our goals faster and ensure contributing to IPFS is fun and productive!)

📞 Content Routing

Given the selection criteria, our main priority for the first half of 2020 - the next 6 months - is improving the performance and reliability of content routing in the IPFS network. 'Content routing' is the process of finding a node hosting the content you're looking for, such that you can fetch the desired data and quickly load your website/dapp/video/etc. As the IPFS network scaled this past year (over 30x!), it ran into new problems in our distributed routing algorithms - struggling to find content spread across many unreliable nodes. This was especially painful for IPNS, which relied on multiple of these slow/unreliable queries to find the latest version of a file. These performance gaps caused IPFS to lag and stall while searching for the needed content, hurting the end user experience and making IPFS feel broken. Searching the network to find desired content (aka, using IPFS as a decentralized CDN) is one of the most common actions for new IPFS users and is required by most ipfs-powered dapp use cases - therefore, it's the number 1 pain point we need to mitigate in order to unlock increased adoption and scalability of the network!
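
To make "content routing" concrete, the sketch below shows a provider lookup in Go against the go-libp2p content routing interface (the interface the go-libp2p Kademlia DHT implements). It is a minimal illustration, not the project's internal code: the `router` value and the CID are assumed to be supplied by the caller.

```go
package example

import (
	"context"
	"fmt"
	"time"

	"github.com/ipfs/go-cid"
	"github.com/libp2p/go-libp2p-core/routing"
)

// findContent asks a content router (for example the go-libp2p Kademlia DHT)
// for peers that have announced they hold the block identified by c, and
// prints the provider records as they stream in.
func findContent(ctx context.Context, router routing.ContentRouting, c cid.Cid) {
	// Bound the lookup so a slow or sparse DHT query cannot stall the caller -
	// exactly the tail latency the 2020 H1 work targets.
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()

	// Ask for up to 5 providers; results arrive asynchronously on a channel.
	for provider := range router.FindProvidersAsync(ctx, c, 5) {
		fmt.Printf("found provider %s at %v\n", provider.ID, provider.Addrs)
	}
}
```

From the command line, the equivalent lookup today is `ipfs dht findprovs <cid>` against the public network.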

We considered a number of other potential goals - especially all the great 2020 Theme Proposals - before selecting this priority. However, we decided it was more important to focus core working group dev time on the main blockers and pain points to enable the entire ecosystem to grow and succeed. Many of these proposals are actually very well suited for community ownership via DevGrants and collaborations - and some of them, like "IPFS in Rust" and "Examples and Tutorials", already have grants or bounties associated with them!

2020 Working Groups

The IPFS project includes the collective work of several focused teams, called Working Groups (WGs). Each group defines its own roadmap with tasks and priorities derived from the main IPFS Project Priority. To better orient around our core focus for 2020 H1, we created a few new working groups (notably "Content Routing"), and spun others down (notably our "Package Managers" working group). For 2020 H1, we have 5 main working groups - with our "Ecosystem" working group divided into 3 sub-groups.

Each WG’s top-line focus:

  • Content Routing: Ensure all IPFS users can find and access content they care about in a distributed network of nodes
  • Testground: Provide robust feedback loops for content routing development, debugging, and benchmarking at scale
  • Bifrost (IPFS Infra): Make sure our gateway and infra scale to support access to the IPFS network
  • Ecosystem: Ensure community health and growth through collaborations, developer experience and platform availability
    • Browsers / Connectivity: Maximize the availability and connectivity of IPFS on the web
    • Collabs / Community: Support IPFS users and grow new opportunities through research, collaborations and community engagement
    • Dev Ex: Support the IPFS technical community through documentation, contributor experience, API ergonomics and tooling
  • Project: Support team functioning, prioritization, and day-to-day operations

Looking for more specifics? Check out the docs on our team roles and structure!

2020 Epics

We've expanded our 2020 Priority into a list of Epic Endeavours that give an overview of the primary targets IPFS has for 2020 H1. If you are pumped about these Epics and want to help, you can get involved! See the call to action (CTA) for each section below.

1. Build IPFS dev capacity and velocity

In order to achieve our content routing goal for 2020 H1, we need to level up our own leverage, coordination, velocity, and planning as a project to ensure all contributors spend their time and energy effectively. This includes a few different dimensions:

  • Integrate research via the ResNetLab into our design practice to ensure our work builds on the knowledge and experience of leading researchers in our fields
  • Empower new contributors in the IPFS ecosystem through DevGrants and collaborations to upgrade and extend IPFS to solve new problems
  • Invest in developer tooling, automation, and fast feedback loops to accelerate experimentation and iteration
  • Upgrade project planning and management within and between working groups to ensure we define, estimate, track and unblock our work efficiently
  • Focus our attention on fewer things to improve completion rate and reduce churn, saying "not now" or finding other champions for nice-to-have projects in order to allocate energy and attention to the most important work

You can get involved with ResNetLab RFPs or by proposing/funding projects in the DevGrants repo!

2. Improve content routing performance such that 95th percentile content routing speed is <5s

Improving content routing performance requires making improvements and bugfixes to the go-libp2p DHT at scale, and changing how we form, query, and resolve content in the IPFS network to be faster and more scalable. This involves a combination of research, design, implementation, and testing. Making changes to the configuration of the entire network is non-trivial - that's why we've been investing in the InterPlanetary Testground, a new set of tools for testing next generation P2P applications, to help us diagnose issues and evaluate improvements prior to rolling out upgrades to the entire public network. You can track the work in these milestones on ZenHub.
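
As a rough, hedged illustration of how the "95th percentile < 5s" target could be probed from a client's point of view (this is not the Testground benchmarking methodology, just a sketch), the snippet below times repeated provider lookups and reports the 95th-percentile latency; the probe CIDs and the `router` construction are assumptions left to the caller.

```go
package example

import (
	"context"
	"sort"
	"time"

	"github.com/ipfs/go-cid"
	"github.com/libp2p/go-libp2p-core/routing"
)

// measureP95 times one provider lookup per probe CID and returns the
// 95th-percentile latency. It is a rough client-side probe, not the
// Testground benchmark methodology.
func measureP95(ctx context.Context, router routing.ContentRouting, probes []cid.Cid) time.Duration {
	if len(probes) == 0 {
		return 0
	}
	latencies := make([]time.Duration, 0, len(probes))
	for _, c := range probes {
		start := time.Now()
		lookupCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
		// Treat the first provider record (or the query finishing empty)
		// as the end of the content routing step.
		select {
		case <-lookupCtx.Done():
		case <-router.FindProvidersAsync(lookupCtx, c, 1):
		}
		cancel()
		latencies = append(latencies, time.Since(start))
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	return latencies[len(latencies)*95/100]
}
```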

If you want to help refine the detailed milestones, or take on some of the improvements required to hit this goal, see the Content Routing Work Plan to dive deeper!

3. Invest in IPFS community enablement and support

Supporting community health and growth continues to be a core focus for IPFS as we scale to more users, applications, and use cases. Refining our adoption pathways, continuing to grow platform availability, and supporting our collaborators to bring IPFS to new users and use cases helps us maximize the impact and value we create in the world.

  • Scale the number of users and applications supported by IPFS through talks, guides, and how-tos
  • Refine our APIs to simplify end-user adoption and maximize ease of use
  • Bring IPFS to browsers to maximize default availability and connectivity on the web
  • Continue improving our new IPFS Docs Site, to ensure developer & user questions are clearly answered and actionable
  • Invest in explicit community stewardship responsibilities to ensure there are answers, tools, and fast feedback loops to support new IPFS users and contributors

Great ways to start helping enable the IPFS community include: suggesting or building new tools to support IPFS users, reviewing open PRs, answering questions on http://discuss.ipfs.io and on our IRC channels on freenode/matrix, or writing your own how-tos and guides to use IPFS for your use case!

2019 Priority

Our core goal for 2019 was to make large-scale improvements to the IPFS network around scalability, performance, and usability. By focusing on the 📦 Package Managers use case, we hoped to identify, prioritize, and demonstrably resolve performance/usability issues, while driving adoption via a common and compelling use case that all developers experience daily. We hoped this focus would help us hone IPFS to be production ready (in functionality and practice), help scale network usage to millions of nodes, and accelerate our project and community growth/velocity.

Graded 2019 Epics

  1. The reference implementations of the IPFS Protocol (Go & JS) become Production Ready 🔁
  2. Support Software Package Managers in entering the Distributed Web ❗️
  3. Scale the IPFS Network 🔁
  4. The IPFS Community of Builders gets together for the 1st IPFS Conf
  5. IPFS testing, benchmarks, and performance optimizations 🔁
  6. Support the growing IPFS Community

You can see the details of the work we took on in each milestone, and what we achieved, in the archived 2019 IPFS Project Roadmap.

Sorting Function

D = Difficulty (or "Delta" or "Distance"), E = Ecosystem Growth, I = Importance

To identify our top focus for 2019 and rank the future goals in our upgrade path, we used a sorting function to prioritize potential focus areas. Each goal was given a score from 1 (low) - 5 (high) on each axis. We sorted first in terms of low difficulty or "delta" (i.e. minimal additional requirements and fewer dependencies from the capabilities IPFS has now), then high ecosystem growth (growing our community and resources to help us gravity assist and accelerate our progress), and finally high importance (we want IPFS to have a strong, positive impact on the world). Future goals below are listed in priority order using this sorting function.
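
As a small worked example, this Go sketch applies the sorting function exactly as described (low D first, then high E, then high I), using a few of the (D, E, I) scores listed below; it is purely illustrative.

```go
package main

import (
	"fmt"
	"sort"
)

// goal scores follow the roadmap's axes: D = Difficulty/Delta (lower is better),
// E = Ecosystem Growth (higher is better), I = Importance (higher is better).
// Each axis is scored from 1 (low) to 5 (high).
type goal struct {
	name    string
	d, e, i int
}

func main() {
	goals := []goal{
		{"📦 Package Managers", 1, 5, 3},
		{"🗂 Large Files", 1, 4, 3},
		{"🔄 Decentralized Web", 2, 4, 3},
		{"🔒 Encrypted Web", 2, 3, 4},
	}

	// Sort by low D first, then high E, then high I - the priority order
	// used for the "Future Goals" list.
	sort.Slice(goals, func(a, b int) bool {
		if goals[a].d != goals[b].d {
			return goals[a].d < goals[b].d
		}
		if goals[a].e != goals[b].e {
			return goals[a].e > goals[b].e
		}
		return goals[a].i > goals[b].i
	})

	for rank, g := range goals {
		fmt.Printf("%d. %s (D%d E%d I%d)\n", rank+1, g.name, g.d, g.e, g.i)
	}
}
```

Running it prints the goals in the same priority order used for the Future Goals list below.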

📦 Package Managers (D1 E5 I3)

The most used code and binary Package Managers are powered by IPFS.

Package Managers collect and curate sizable datasets. Top package managers distribute code libraries (eg npm, pypi, cargo, ...), binaries and program source code (eg apt, pacman, brew ...), full applications (app stores), datasets, and more. They are critical components of the programming and computing experience, and are the perfect use case for IPFS.

Most package managers can benefit tremendously from the content-addressing, peer-to-peer, decentralized, and offline capabilities of IPFS. Existing Package Managers should switch over to using IPFS as a default, or at least an optional way of distributing their assets, and their own updates. New Package Managers should be built entirely on IPFS. --- Code libraries, programs, and datasets should become permanent, resilient, partition tolerant, and authenticated, and IPFS can get them there. Registries become curators and seeds, but do not have to bear the costs of the entire bandwidth. Registries could become properly decentralized. --- We have a number of challenges ahead to make this a reality, but we are already beyond half-way. We should be able to get top package managers to (a) use IPFS as an optional distribution mechanism, then (b) use that to test all kinds of edge cases in the wild and to drive performance improvements, then (c) get package managers to switch over the defaults.

Future Goals

🗂 Large Files (D1 E4 I3)

By 2020, IPFS becomes the default way to distribute files or collections of files above 1GB

HTTP is not good for distributing large files or large collections of small files. Anything above 1GB starts running into problems (resuming, duplication, centralized bandwidth limitations, etc). BitTorrent works well for single archives that won't change, or won't duplicate, but fails in a number of places. IPFS has solved most of the hard problems but hasn't yet made the experience so easy that people default to IPFS to distribute large files. IPFS should solve this problem so well, it should be so easy and delightful to use, and it should be so high performance that it becomes the default way to move anything above 1GB world-wide. This is a massive hole right now that IPFS is well-poised to fill -- we just need to solve some performance and usability problems.

🔄 Decentralized Web (D2 E4 I3)

IPFS supports decentralized web apps built on p2p connections with power and capabilities at the edges.

In web 2.0, control of the web is centralized - its location-addressing model and client-server architecture encourage reliance on and trust of centralized operators to host services, data, and intermediate connections. Walled gardens are common, and our data is locked into centralized systems that increase the risk of privacy breaches, state control, or a single party shutting down valued services. The decentralized web is all about peer-to-peer connections and putting users in control of their tools and data. It does this by connecting users directly to each other and using verifiable tools like hash-links and encryption to ensure the power and control in the network is retained by the participants themselves. The decentralized web (as distinguished from the Distributed Web) is NOT about partition tolerance, or making the web work equally well in local-area networks/mobile/offline - the focus here is on the control and ownership of services.

IPFS has solved most of the hard underlying design problems for decentralized web, but hasn't yet made the experience easy enough for end-users to experience it in the applications, tools, and services they use. This requires tooling and solutions for developers to sustainably run their business without any centralized intermediary facilitating the network (though centralized providers may still exist to augment and improve the experience for services that already work decentralized by design). Designing Federation for interop with current systems is key for the Migration Path.

🔒 Encrypted Web (D2 E3 I4)

Apps and Data are fully end-to-end encrypted at rest. Users have reader, writer, and usage privacy.

Apps and user data on IPFS are completely end-to-end encrypted, at rest, with only users having access. Users get reader and writer privacy by default. Any nodes providing services usually do so over encrypted data and never get access to the plain data. The apps themselves are distributed encrypted, decrypted and loaded in a safe sandbox in the users' control. Attackers (including ISPs) lose the ability to spy on users' data, and even which applications users are using. This works with all top use case apps -- email, chat, forums, collaboration tools, etc.
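
One way to read "encrypted at rest with only users having access" is that applications encrypt data client-side before it is ever added to IPFS, so hosting nodes only ever see ciphertext. The sketch below is a minimal, hypothetical example of that pattern using AES-256-GCM from Go's standard library; it is not the project's actual encryption design.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealForIPFS encrypts plaintext with AES-256-GCM. Only the resulting
// ciphertext would be added to IPFS; the key stays with the user, so any
// node hosting the block sees opaque bytes (data encrypted "at rest").
func sealForIPFS(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 32 bytes for AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so the holder of the key can decrypt later.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	sealed, err := sealForIPFS(key, []byte("private note"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("ciphertext to add to IPFS: %d bytes\n", len(sealed))
}
```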

♻️ Distributed Web (D2 E2 I4)

Info and apps function equally well in local area networks and offline. The Web is a partitionable fabric, like the internet.

The web and mobile -- the most important application platforms on the planet -- are capable of working entirely in sub-networks. The norm for networked apps is to use the available data and connections, to sync asynchronously, and to leverage local connectivity protocols. The main apps for every top use case work equally well in offline or local network settings. It means IPFS and apps on top work excellently on desktops, the browser, and mobile. Users can use webapps like email, chat, forums, social networks, collaboration tools, games, and so on without having to be connected to the global internet. Oh, and getting files from one computer to another right next to it finally becomes an easy thing to do (airdrop level of easy).

👩🏽‍💻 Personal Web (D3 E4 I2)

Personal Data and programs are under user control.

The memex becomes reality. The web becomes a drastically more personal thing. Users' data and exploration is under the users' control -- similar to how a "personal computer" is under the user's control, and "the cloud" is not. Users decide which apps and other people get access to their data. Explorations can be recorded for the user in memex fashion. The user gets to keep copies of all the data they have observed through the web. A self-archiving personal record forms, which the user can always go back to, explore, and use -- whether or not those applications are still in development by their authors.

👟 Sneaker Web (D3 E2 I4)

The web functions over disconnected sneaker networks, spreading information, app data, apps, and more.

The web is capable of working fully distributed, and can even hop across disconnected components of the internet. Apps and their data can flow across high latency, intermittent, asynchronous links across them. People in disconnected networks get the same applications, the same quality of experience, and the same ability to distribute their contributions as anybody in the strongest connected component ("the backbone of the internet"). The Web is totally resistant to large scale partitions. Information can flow so easily across disconnected components that there is no use in trying to block or control information at the borders.

🚀 Interplanetary Web - Mars 2024. (D3 E3 I4)

Mars. Let's live the interplanetary dream!

SpaceX plans to land on Mars in 2022, and send humans in 2024. By then, IPFS should be the default/best choice for SpaceX networking. The first humans on Mars should use IPFS to run the top 10 networked apps. That means truly excellent and well-known IPFS apps addressing the top 10 networked use cases must exist. For that to happen, the entire system needs to be rock solid, audited, performant, powerful, easy-to-use, well known, and so on. It means IPFS must work on a range of platforms (desktop, servers, web, mobile), and work with both special purpose local area networks and across interplanetary distances. If we achieve this while solving for general use and general users (not specifically for the Mars use case), then IPFS will be in tremendous standing.

💾 Packet Switched Web (D3 E2 I3)

IPFS protocols use packet switching, and the network can relay all kinds of traffic easily, tolerating switch failures.

The infrastructure protocols (libp2p, IPFS, etc.) and the end-user app protocols (the logic of the app) can work entirely over a packet switching layer. Protocols like BitSwap, DHT, PubSub become drastically higher performance, and unconstrained by packets sent before theirs. Web applications can form their own isolated virtual networks, allowing their users to distribute the packets. Users can form their own groups and their own virtual networks, allowing users to only operate in a subnet they trust, and ensure all of their traffic is moving between trusted switches. The big public network uses packet switching by default.

📑 Data Web (D4 E3 I3)

Large Datasets are open, easy to access, easy to replicate, version controlled, secure, permanent.

We constantly lose access to important information, either because it ceases to exist or simply due to virtual barriers (i.e. censorship, lack of connectivity, and so on). Information also often fails to find its way to the peers that need it most, and there aren't good ways to signal that a dataset hasn't yet been accounted for or referenced. We want to improve this dramatically, making the data that is produced easier to access by making it versioned, secure, and easier to replicate and locate.

✉️ Package Switched Web (D4 E2 I2)

Data in the web can be moved around over a package switching network. Shipping TB or PB hard drives of data becomes normal.

Beyond circuit switching and packet switching, the web works over package switching! It is possible to send apps, app assets, app user generated data, and so on by shipping hard drives. This means that the network stack and the IPLD graph sync layers are natively capable of using data in external, removable media. It is easy for a user Alice to save a lot of data to a removable drive, for Alice to mail the drive to another user Bob, and for Bob to plug in the drive to see his application finish loading what Alice wanted to show Bob. Instead of having to fumble with file exports, file systems, OS primitives, and worse -- IPFS, libp2p, and the apps just work -- there is a standard way to say "I want this data in this drive", and "I want to use the data from this drive". Once that happens, it can enable a proper sneakernet web.

Self-Archiving Web (D4 E4 I4)

The Web becomes permanent, no more broken Links. Increase the lifespan of a Webpage from 6 months to ∞ (as good as a book).

The Internet Archives (plural) content-address their snapshots to maximize deduplication and hit rate. IPFS becomes the platform that enables the multiple Internet Archives to store, replicate, and share responsibility over who possesses what. It becomes simple for any institution (from a large organization to a small local library) to become an Internet Archive node. Users can search through these Internet Archive nodes, fully compliant with data protection laws.
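
Content addressing is what lets multiple archives deduplicate and verify each other's snapshots: hashing the same bytes always yields the same identifier. A small illustrative sketch using go-cid and go-multihash (not the archives' actual ingestion pipeline):

```go
package main

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

// cidOf derives a CIDv1 (raw codec, SHA2-256) for a blob of bytes. Any two
// archives that hash the same snapshot bytes get the same CID, so they can
// deduplicate storage and verify each other's copies.
func cidOf(data []byte) (cid.Cid, error) {
	sum, err := mh.Sum(data, mh.SHA2_256, -1)
	if err != nil {
		return cid.Undef, err
	}
	return cid.NewCidV1(cid.Raw, sum), nil
}

func main() {
	a, _ := cidOf([]byte("snapshot of example.org, 2020-01-01"))
	b, _ := cidOf([]byte("snapshot of example.org, 2020-01-01"))
	fmt.Println(a, a.Equals(b)) // identical bytes -> identical CID
}
```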

🏷 Versioning Datasets (D4 E3 I3)

IPFS becomes the default way to version datasets, and unlocks a dataset distribution and utility explosion similar to what VCS did for code.

IPFS emerged from dataset versioning, package management, and distribution concerns. There are huge gaping holes in this space because large datasets are very unwieldy and defy most systems that make small files easy to version, package, and distribute. IPFS was designed with this kind of problem in mind and has the primitives in place to solve many of these problems. There are many things missing: (a) most importantly, a toolchain for version history management that works with these large graphs (most of what git does). (b) Better deduplication and representation techniques. (c) Virtual filesystem support -- to plug under existing architectures. (d) Ways to easily wrap existing data layouts (filestore) -- to plug on top existing architectures. (e) An unrelenting focus on extremely high performance. (f) primitives to retrieve and query relevant pieces of versioned datasets (IPLD Selectors and Queries). --- But luckily, all of these things can be added incrementally to enhance the tooling and win over more user segments.

🗃 Interplanetary DevOps (D4 E2 I2)

Versioning, packaging, distribution, and loading of Programs, Containers, OSes, VMs, defaults to IPFS.

IPFS is great for versioning, deduping, packaging, distributing assets, through a variety of mediums. IPFS can revolutionize computing infrastructure systems. It has the potential to become the default way for datacenter and server infrastructure users to set up their infrastructure. This can happen at several different layers. (a) In the simplest sense, IPFS can help distribute programs to servers, by sitting within the OS, and plugging in as the downloading mechanism (replace wget, rsync, etc.). (b) IPFS can also distribute containers -- it can sit alongside docker, kubernetes, and similar systems to help version, dedup, and distribute containerized services. (c) IPFS can also distribute OSes themselves, by plugging in at the OS package manager level, and by distributing OS installation media. (d) IPFS can also version, dedup, and distribute VMs, first by sitting alongside host OSes and hypervisors moving around VM snapshots, and then by modeling VMs themselves on top of IPFS/IPLD. --- To get there, we will need to solve many of the same problems as package managers, and more. We will need the IPLD importers to model and version the media super-effectively.

📖 The World's Knowledge becomes accessible through the DWeb (D5 E2 I5)

Humanity deserves equal access to knowledge. Platforms such as Wikipedia, Coursera, edX, Khan Academy, and others need to be available independently of location and connectivity. The content of these services needs to exist everywhere. These replicas should be part of the whole world's dataset and not a disjoint dataset. Anyone should be able to access them through the protocol, without having to deploy new services per area.

🌐 WebOS (D5 E2 I3)

The Web Platform and the OSes merge.

The rift between the web and the OS is finally healed. The OS and local programs and WebApps merge. They are not just indistinguishable, they are the same thing. "Installing" becomes pinning applications to the local computer. "Saving" things locally is also just pinning. The browser and the OS are no longer distinguishable. The entire OS data itself is modelled on top of IPLD, and the internal FileSystem is IPFS (or something on top, like unixfs). The OS and programs can manipulate IPLD data structures natively. The entire state of the OS itself can be represented and stored as IPLD. The OS can be frozen or snapshotted, and then resumed. A computer boots from the same OS hash, drastically reducing attack surface.

Issues
  • Asks for libp2p team 2019 Roadmap

    We need to have concrete actionable asks for the libp2p team to be surfaced at their meetup next week and tentatively incorporated into their 2019 roadmap planning.

    Rough ideas from various 2019 planning discussions:

    • DHT crawler and debugging tool customization -- one command tell us what is going on
    • Fast (<5 sec) mutable name resolution for any IPNS record
      • ipns-pubsub stable and enabled by default (package manager need from https://github.com/ipfs/notes/issues/366)
    • "There is a set of runnable benchmarks which can measure real world data transfer speed of the go-IPFS system as a whole against traditional file exchange tools" (a shared item with go-ipfs - https://github.com/ipfs/team-mgmt/pull/794)
    • "Total wall-clock time for finding via the DHT and fetching data doesn’t exceed 3s (on average) for first byte across various node configurations (ex geographical distance)."
    • p2p transport (aka bluetooth or equivalent)
      • support for a variety of device types (desktop/mobile/IoT)
      • support for nearby node discovery and fully p2p (offline) discovery
    • Ability to add a 1m sharded index without disabling content routing

    @ipfs/wg-captains @ipfs/go-team @ipfs/javascript-team - can you think of additional requests we should be surfacing to the libp2p team?

    status/ready 
    opened by momack2 15
  • Tackle tracking protection

    As the conventional web became more popular, advertisers realized it could be a perfect vehicle for their business. Today we're tracked across the web to drive that business. Some browser vendors are putting a lot of effort into fighting trackers. As I understand it, the use of DHTs makes tracking even easier, and as IPFS gets more popular it will attract the same actors. Furthermore, it creates a huge risk for people living under censorship, as tracking would give prosecutors the ability to discover people accessing censored content.

    Maybe evaluating papers that attempt to solve this, e.g. Octopus: A Secure and Anonymous DHT Lookup, would be a worthy goal for the roadmap.

    opened by Gozala 8
  • How can ProtoSchool best support the IPFS project?

    As we build the roadmap for ProtoSchool, we'd like to take into account the priorities of the IPFS team and plan for some tutorial content that best highlights your most common or most prioritized use cases or features.

    For a sense of what ProtoSchool is capable of, please take a look at the existing tutorials, which run in-browser and (with the exception of the first tutorial on the list) offer coding challenges following the introduction of various content step-by-step. Beginner-friendliness is a major priority for this project, and as we consider requests from project teams we will also focus on ensuring that appropriate scaffolding exists to get users to the point where they can successfully approach those proposed topics.

    Could you please take a look at your project roadmap and help me understand what ProtoSchool tutorial content might most help you achieve your goals for 2019 and 2020?

    Do you have upcoming events where you hope to offer workshops? If you could envision that content fitting with the ProtoSchool tutorial format, please be sure to include these ideas and share the relevant event dates.

    cc @mikeal

    opened by terichadbourne 7
  • [2020 Theme Proposal] IPFS Cluster Applications

    Note, this is part of the 2020 Theme Proposals Process - feel free to create additional/alternate proposals, or discuss this one in the comments!

    Theme description

    Many people come to IPFS believing that simply the act of adding/pinning a file enables instant distributed, redundant, permanent storage of arbitrary data among (presumably) peer nodes. This is sadly far from the truth, and can lead to people leaving the community feeling let down once they realize it - and not likely to return. But with the right applications, incentives, and defaults, peer groups could easily self-organize and provide this idealized dream of IPFS for themselves (at least).

    This is, IMHO, the first step to true general purpose decentralized applications.

    Core needs & gaps

    As an end user and/or dapp creator, I want the default behavior to be hosting, and requesting that others host, a set of common, collectively valued data among a peer group. At present, this takes a lot of research and configuration to achieve, if it is possible at all.

    Why focus this year

    • IPFS Cluster has recently gotten to the point that it can deliver this solution.
    • Great work on permissioned and private networks on IPFS (#44, Textile, Peergos, etc.) can enable configurable clusters sharing private and secure files among a sub-network.

    Milestones & rough roadmap

    • A minimal forkable set of examples akin to, or incorporated with, those found in JS IPFS to build off of.
    • A full-fledged application incubated by our PL/IPFS community that uses Cluster to highlight this use case. Examples:
      • An auto-replicating "who is online" guestbook webapp. It allows someone to join the peer group, add their peer ID, and sign it (cryptographically). To show up as online you must host all the data for the app on your node so others can get the app from you, and you must be online to remain on the log. A cute way to illustrate a true serverless dapp.
      • A community shared database. It would include assets that all participants store and relay redundantly. This could include things like a group website, photo album, chat app, and wiki/docs - so no server is needed. Only at least one member of the community needs to have their node running for the resources to be accessible. (Something I personally would love to get involved in and gather community support to build - see here - could fit nicely into community engagement goals (#42).)
      • A group password/keystore backup where sharded anonymous data is spread randomly across a small permissioned network such that only the owner of the keystore could know and privately collect the chunks needed to reconstruct their data. No one else on this network could (trivially) discover any keystore file, despite holding fragments of many of them.

    Desired / expected impact

    How will we measure success? What will working on this problem statement unlock for future years?

    • Increased grass-roots use of IPFS
    • Increased IPFS clients/nodes providing a useful service while online, so high uptime and availability on the network can be expected
    • Decreased reliance on central gateways, increased community hosted gateways
    • Decreased reliance on central servers/resources for dapps in general, with clusters of dapp users instead: true dapps!
    2020 Theme Proposal 
    opened by NukeManDan 6
  • The IPFS Project Roadmap

    The IPFS Project Roadmap is here \o/

    It is with great pleasure that I announce, on behalf of all the extraordinary humans that are part of the IPFS Org, that we now have a full IPFS Project Roadmap!

    This Roadmap will stay in the review stage until the first week of January 2019, when we tick the version to v1.0.0 and do a broadcast introducing it to the whole world. Until then, all feedback is very much appreciated - feel welcome to post comments on this thread directly! Thank you!

    opened by daviddias 3
  • [2021 Theme Proposal] Increase max block size / default to blake2b-256

    Note, this is part of the 2021 IPFS project planning process - feel free to add other potential 2021 themes for the IPFS project by opening a new issue or discuss this proposed theme in the comments, especially other example workstreams that could fit under this theme for 2021. Please also review others’ proposed themes and leave feedback here!

    Theme description

    One thing that limits IPFS usage for large datasets is its very small block size. 256 KiB is too small, and the largest size (at least for files) is 1 MiB. For better disk performance a minimum of 4 MiB would be needed. This would also drastically improve performance when using cloud backings for data storage.

    Hypothesis

    Large datasets for repeatable research are a pain to move around; IPFS can fix this.

    Vision statement

    Allow Bitswap to handle 4 MiB blocks.

    Why focus this year

    Usage is growing, and moving to a 4 MiB default block size would give a 16x reduction in per-block overhead (4 MiB / 256 KiB = 16). It also makes it performant to use HDDs as a backing store.

    Example workstreams

    Please list relevant workstreams, development milestones, and a high-level timeline for these efforts.

    Other content

    Please include links to other relevant content, notes, etc.

    2021 Theme Proposal 
    opened by lizelive 2
  • [2021 Theme Proposal] Permissionless Front-End

    Note, this is part of the 2021 IPFS project planning process - feel free to add other potential 2021 themes for the IPFS project by opening a new issue or discuss this proposed theme in the comments, especially other example workstreams that could fit under this theme for 2021. Please also review others’ proposed themes and leave feedback here!

    Theme description

    Please describe the objective of your proposed theme, what problem it solves, and what executing on it would mean for the IPFS Project.

    Hypothesis

    Please describe the core hypotheses that you would need to believe for this theme to make sense as a 2021 IPFS project theme.

    Vision statement

    Please describe what the state of the IPFS project would look like if execution of this theme is massively successful.

    Why focus this year

    Please discuss why 2021 is the right year for this theme.

    Example workstreams

    Please list relevant workstreams, development milestones, and a high-level timeline for these efforts.

    Other content

    Please include links to other relevant content, notes, etc.

    At the current moment, DDToken.crypto is hosted on IPFS, but we would like a permissionless front-end to match a permissionless back-end for DDToken.io, and we understand this is one of the best applications of Filecoin for Decentralized Finance (DeFi). I believe that IPFS is working with Filecoin, and we would definitely like a permissionless front-end for DDToken.io.

    https://twitter.com/DDGaddis/status/1305633199242969088?s=20

    2021 Theme Proposal 
    opened by DDTGE2018 2
  • Descope Go-IPFS 2019 roadmap for Package Managers priority

    This is just a first pass at reformatting our 2019 roadmap to narrow in on package manager support. I think there are more of these that we should proactively drop - and more that we know now that we should add. @Stebalien @eingenito @ipfs/go-team for thoughts and improvements!

    status/in-progress 
    opened by momack2 2
  • Add a note to the roadmap about the February update

    I added a note about the February 3 update to the roadmap to more clearly describe our current state - where the project-level roadmap describes 1 main top-level priority, while the working group roadmaps all focus on 5.

    Proposal: I think we should merge the CURRENT 2019 working group roadmaps prior to any descoping changes. This helps us:

    1. reflect our current incremental status (project goals updated, WGs working to rescope), and the working group priorities that drove our Q1 OKRs,
    2. better support our current community members who want to easily reference what the WGs are focusing on tactically, and
    3. document the history of what we planned to take on and then descoped - along with the rationale for why.

    We already want to do descoping passes in a separate PR - why not commit the first batch now so that WG roadmaps are easy to find and reference?

    opened by momack2 1
  • Exercise: Allocating WG Roadmap Milestones by Quarter

    Hello IPFS WGs - @ipfs/wg-captains, @ipfs/contributors! We’re trying to solidify Q1 OKRs by end of next week (1/18)! Now that we have defined our goals for 2019 in this repo, we are excited to start using our 2019 Working Group Roadmaps to gain perspective and confidence that our quarterly efforts put us on track to meet our yearly objectives. To help us do this, the Project Working Group prototyped and refined a short ~15-25 minute exercise that we encourage all working groups to do (either sync or async) as a quick feedback mechanism for our Q1 OKRs. As a benefit - it helps us tighten up our 2019 Working Group Roadmaps and make them more actionable and informative for the wider community. 🎉

    The exercise:

    • Step 1 (1 min): Divide milestones amongst team members - either with small groups owning sets of milestones (ex “Package Managers”, “Large Files”, “Production Ready”), or individuals taking ownership of specific milestones.
    • Step 2 (5-10 mins): Within Github, a Google Doc, or a Cryptpad - team members assign each milestone a quarter, or, if the milestone will span multiple quarters of work, break the milestone down into sequential “Parts 1, 2, 3” of work that are each assigned a quarter. In addition, team members give a rationale for choosing that quarter (either with a quick verbal explanation, or by writing a comment - ex, “I think we need to do benchmarking this quarter so we can prioritize among improvements to X the next quarter”).
    • Step 3 (1-5 min): Rearrange milestones and parts into the “Timeline” section of each WG Roadmap by quarter and look at the distribution of work. Modify and iterate as needed to make sure one quarter in particular isn’t overloaded with too many large commitments for specific resources.
    • Step 4 (5-10 mins): Look back at your Q1 OKRs - do your key results align with the work described in the Q1 section of your 2019 roadmap? If not, reflect on what work is important to prioritize this quarter to most efficiently reach our 2019 goals (ex through accomplishing prerequisites, accelerating development, simplifying maintenance, or utilizing ecosystem effects) and add/remove KRs accordingly.

    What we did in our Project WG meeting:

    • We allocated 1 project wg member per 2019 priority area and spent 5 minutes silently doing steps 2 & 3 (could have easily been done async).
    • We then took an additional ~15 minutes of our meeting to walk through each milestone and let each individual explain their rationale (ex past/future dependencies, ecosystem effects, etc). As a group, we moved a few milestones around based on load distribution and other comments.
    • (WIP) Async, we compared our Q1 milestones and Q1 OKRs to note discrepancies and ensure our goals put us on track for future quarters. Want to see it in action? Watch the recording of our meeting! Feel free to zoom through our 5 mins of silence where we did steps 2&3 async or just watch us talk through an example. ;)

    Goals and benefits:

    1. Waiting to chart the quarter until we're doing that quarter's OKRs can easily cause us to run out of time to handle important tasks by not starting early enough. If we take a stab at allocating our quarters out right now, we can try and rebalance (and proactively readjust quarters that seem overloaded) to make sure we get the most important work accomplished.
    2. Assigning milestones a quarter estimate helps community members understand when to interface with us on a particular effort on our 2019 roadmap.
    3. Our dependencies (chunks of work that build on each other) get more clearly defined by charting incremental work across quarters - and it helps us ensure we’ll have time to complete dependencies before the efforts that depend on them.
    4. In addition to “priority”, thinking about allocating milestones in time helps us proactively prioritize items that we expect to accelerate our development or create ecosystem effects within our community.

    Excited to see how this helps other WGs get clarity and excitement for what we plan to accomplish this quarter! And pumped to get all these awesome PRs merged. =]

    opened by momack2 1
  • Update Roadmap for 2020 Priority

    This PR aims to update our IPFS Roadmap with our focus for 2020 H1. The diff is a little wonky, ignore that - you can see the past year's roadmap in https://github.com/ipfs/roadmap/blob/master/2019-IPFS-Project-Roadmap.md

    opened by momack2 0
  • FUSE mount Mutable File System (MFS)

    It could be neat if ipfs mount would FUSE-mount the Mutable File System (MFS), e.g. on /ipmfs (in addition to /ipfs & /ipns), as an alternative to having to use the ipfs files CLI.
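
    For context, today the MFS is reachable programmatically through the daemon's HTTP RPC API as well as the ipfs files CLI. The hedged Go sketch below lists the MFS root via /api/v0/files/ls on the default local API address; the response handling is simplified for illustration.

    ```go
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    func main() {
    	// Ask the local go-ipfs daemon (default RPC address) to list the MFS root;
    	// this is the programmatic equivalent of `ipfs files ls /`.
    	resp, err := http.Post("http://127.0.0.1:5001/api/v0/files/ls?arg=/", "", nil)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	// We only decode the entry names here; the real response carries more fields.
    	var out struct {
    		Entries []struct{ Name string }
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
    		panic(err)
    	}
    	for _, e := range out.Entries {
    		fmt.Println(e.Name)
    	}
    }
    ```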

    E.g. https://github.com/piedar/js-ipfs-mount or https://github.com/jfmherokiller/ipfs-mfs-fuse appear to have dabbled in this space in the past from what I could tell from a bit of quick research. But I couldn't find any existing issue specifically about this goal in core, so hope you don't mind that I'm adding this to the roadmap here, for high-level tracking. I'm sure it needs to be further broken down into finer grained requirements; I'll tag a number of issues that seem related which I've found while searching.

    This could subsequently perhaps be a foundation, e.g. for a Kubernetes Container Storage Interface (CSI) implementation backed by IPFS.

    I thought about this while dreaming on https://github.com/vorburger/LearningLinux/blob/develop/docs/roadmap/readme.md

    opened by vorburger 5
  • [2021 Theme Proposal] IPFS ❤️ mobile phones

    Note, this is part of the 2021 IPFS project planning process - feel free to add other potential 2021 themes for the IPFS project by opening a new issue or discuss this proposed theme in the comments, especially other example workstreams that could fit under this theme for 2021. Please also review others’ proposed themes and leave feedback here!

    Theme description

    This theme proposal was part of https://github.com/ipfs/roadmap/issues/78, but I think it needs more attention than just being buried somewhere in the feature list there. So I split that proposal.

    IPFS works great on mobile phones but requires a lot of processing power and thus burns through the battery quite quickly. While we might be able to lower this consumption through optimizations, I doubt we could compete with something like Dropbox or Google Drive in terms of efficiency.

    The bootstrap process takes too long, lossy connections make it hard to keep connections alive to efficiently search the DHT, and the more mobile nodes use IPFS, the more clogged nodes get with very short-lived connections that go stale after a short while.

    Hypothesis

    To circumvent this issue it makes sense to introduce thin clients.

    Thin clients are fully featured IPFS clients which can hold data, share data with other nodes, and download data from the network. But instead of connecting directly to the whole network, a thin client only opens one connection to a known node.

    There needs to be a trust relationship between the two nodes to allow the thin client to connect.

    A thin client can order a specific file to be downloaded and get notified about the progress. When the file has been downloaded to the connected node, the thin client can request it to be pushed. This allows the thin client to keep the mobile modem on for a very short time and get the transfer at high speed in one stream of data over one connection, thus reducing the processing costs and network activity necessary to get the data.

    The same goes for data stored on the thin client.

    The thin client can order the other node to publish the data to the network on its behalf, so only a batch of CIDs gets pushed to the other node, without the need to establish a lot of connections to the DHT.

    The other node would announce the data with a special tag indicating that there might be a significant delay before the data is available, so only a request to make the data available can be received.

    If a node receives such a request for one of its thin clients, it will request the data from the thin client to be cached in its storage.

    Depending on the thin client's power settings, the data might be transferred the next time the thin client is on WiFi and charging.

    The progress of the transfer of that data from the thin client to the node will be sent as informational updates to the requesting node, to allow it to start its download requests again and fetch the data.

    This method could also be applied to background uploads, like pictures and videos from a device to IPFS, as soon as WiFi/charger is connected - while the CIDs could already be sent to the network to allow sharing right away. :)
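
    To make the proposed flow more concrete, here is a purely hypothetical Go sketch of the message types a thin client and its trusted node might exchange. None of these types exist in go-ipfs or libp2p today; the sketch only restates the steps above as code.

    ```go
    package thinclient

    // Hypothetical wire protocol sketch for this proposal; the message names,
    // fields, and semantics are illustrative only.

    // MsgType enumerates the requests a thin client and its trusted node
    // exchange over their single long-lived connection.
    type MsgType int

    const (
    	// Thin client -> node: fetch this CID on my behalf and notify me of progress.
    	MsgFetchForMe MsgType = iota
    	// Thin client -> node: push the now-cached CID to me in one stream.
    	MsgPushToMe
    	// Thin client -> node: publish this batch of CIDs to the DHT on my behalf,
    	// tagged as "may be delayed" because the bytes still live on the thin client.
    	MsgProvideBatch
    	// Node -> thin client: someone requested a CID you hold; upload it when
    	// power/WiFi conditions allow (or switch to interactive mode now).
    	MsgCacheRequest
    	// Thin client -> node: I am going offline (optionally until a given time).
    	MsgGoingOffline
    )

    // Message is the envelope exchanged between a thin client and its node.
    type Message struct {
    	Type MsgType
    	CIDs []string // CIDs affected by this request, as strings for simplicity
    	// DelayedAvailability marks provider records whose data may take a long
    	// time to surface because it is stored on a battery-constrained device.
    	DelayedAvailability bool
    }
    ```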

    Vision statement

    I think IPFS should not only be the solution for storing data for the web, but also for sharing videos and pictures quickly and fluently from mobile devices, like Google Photos, Dropbox, and Google Drive allow us to do right now, but without any server infrastructure. Connecting a mobile phone to nodes you trust to do the heavy lifting, while conserving processing time, background activity, and battery life, is definitely key for success here.

    The nodes a thin client connects to don't have to be the user's own nodes, just trusted parties like friends.

    So it fits in nicely with the Web of Trust proposal to get the trust levels for such operations integrated into IPFS.

    Why focus this year

    Mobile devices are on the rise and it gets more complicated than ever to keep your data in "one place" while allowing access from everywhere, without having to send a random corporation all your data and metadata from all your devices.

    A NAS might solve this, but you still have the data in just one place, replication is hard, and so is local caching.

    I think IPFS could be the right project to run on all devices and keep the user's data accessible to them in a safe manner.

    Example workstreams

    • Implement Web of Trust with Quotas and ACLs for the data
    • Create a special mode where an IPFS client won't connect to the whole network, but just to known nodes, and learned nodes.
    • Create a special command on the handshake which requests thin-client access to the remote node. When the thin client's public key has the right to do that, unlock the thin client commands.
    • Add a flag on the DHT which is set to inform other clients that there might be an extended delay before this data can be accessed (because it's stored on a thin client)

    Implement thin client commands

    • Allow pushing content as a batch of CIDs for publishing to the DHT.
    • Allow fetching a CID to the node and asking for notifications about the progress.
    • Allow pulling a certain CID from the node, which fails when the CID isn't fully stored on the node.
    • Allow pulling a certain CID from the node interactively, which lets the node fetch the CID and channel the data through as it arrives.
    • Allow running DHT queries through the other node.
    • Allow pushing data to the node, which gets pinned on the node (when it doesn't exceed the allowed quota).
    • Allow reading and writing the ACL for CIDs on the node (if the data was uploaded by the thin client).
    • A message which cuts the connection and informs the node that the thin client will be unavailable, either with a timeframe or without one - like when the battery drops below a certain level.

    Implement node to the thin client commands

    • Implement a message which requests the thin client to switch to interactive mode. This would usually be denied if the client has no charger attached and no WiFi. This would be the first request sent to the thin client if the node receives a request for data stored on the thin client.
    • If interactive mode isn't available, the node informs the client that there's an open data request and waits for the thin client to be able to send it.
    • When the thin client is able to transfer, it will get a list of the CIDs requested until now. When the transfer is completed, and there have been new requests, the client is asked again if it can go into interactive mode - which might be denied at this point again, due to power limitations.

    Other content

    2021 Theme Proposal 
    opened by RubenKelevra 0
  • [2021 Theme Proposal Summary] Improved developer experience

    Note, this is part of the 2021 IPFS project planning process - feel free to add other potential 2021 themes for the IPFS project by opening a new issue or discuss this proposed theme in the comments, especially other example workstreams that could fit under this theme for 2021. Please also review others’ proposed themes and leave feedback here!

    Theme description

    A lot of current proposals (while very valid in their own right) focus on improving the experience of IPFS developers. This proposal is intended to summarize their commonalities and provide actionable ideas.

    Hypothesis

    Improving the overall IPFS developer experience will provide more and better results faster.

    Vision statement

    A bigger interest in IPFS and its core ideas will bring new developers to the community who build apps on IPFS and help with software in the stack, as well as more financial investment. Features can be implemented faster, and a diverse set of developers represented in the community will lead to mature features.

    Why focus this year

    This was a focus last year and it should stay a focus in 2021.

    Example workstreams

    DX proposals and issues over the last year boil down to four main areas of improvement:

    1. APIs #62,#61 capture this idea pretty well, improving the APIs and architecture of IPFS implementations helps to onboard new developers and increase adoption.

    2. Tooling #87 fits into this, as well as #77 and especially #63. Refer to each proposal for specifics, but in general this is awesome! Mature tooling would further adoption across the industry since no one likes working with arcane tech for a big project right?

    3. Specifications This is another big one, we should work out specifications across the stack. This would make it easier for people to develop compliant implementations and also get the IPFS community aligned. This is especially pressing since we don't want to end up with a developer aristocracy where the stack has become so complicated and arcane that only a select few can work on the project. (Looking at you graphsync spec :new_moon_with_face:)

    4. Talk about IPFS! Talk to your coworkers, friends and family about this! IPFS is awesome, let them know! Talking about IPFS also includes a bigger media presence (blog articles etc.), as well as conventions (at some point :|) and encouraging people to become IPFS developer advocates in their communities. #86 fits in here quite nicely. In short, just keep the Ecosystem WG around.

    Other content

    Refer to all proposals mentioned above for their specific ideas

    2021 Theme Proposal 
    opened by JonasKruckenberg 1
  • [2021 Theme Proposal] Inter Planetary Playground

    Note, this is part of the 2021 IPFS project planning process - feel free to add other potential 2021 themes for the IPFS project by opening a new issue or discuss this proposed theme in the comments, especially other example workstreams that could fit under this theme for 2021. Please also review others’ proposed themes and leave feedback here!

    Theme description

    Many tech projects feature a form of dev playground (e.g. https://play.rust-lang.org/, https://play.golang.org/, https://www.typescriptlang.org/, https://webassembly.studio/) that provide a quick and easy way to explore it, which also often doubles as a way to create and share reproducible problems.

    Inter planetary playground would:

    1. Provide the easiest way (0-install & 0-config) to explore the inter planetary stack.
    2. Provide a great way to attach reproducible test cases to bug reports.
    3. Dogfood Inter Planetary stack for storing & sharing code snippets.
    4. Allow documentation to link to live examples.

    The project would entail building a web application using the inter planetary stack that allows:

    1. Entering code snippet.
    2. Evaluating code snippet in an environment that has access to (in-browser) IPFS node.
    3. Sharing executable example with others by publishing them on IPFS.

    Hypothesis

    Please describe the core hypotheses that you would need to believe for this theme to make sense as a 2021 IPFS project theme.

    Building the inter planetary playground would provide an excellent way to dogfood the stack, identify gaps in the system, address them, and integrate the resulting tool into day-to-day work. This will improve the way bugs are reported and reduce the time spent trying to reproduce them.

    Vision statement

    The playground becomes the best place to explore & share inter planetary ideas. All new bug reports come with links to reproducible test cases, which gives the dev team a great way to reproduce them and even share back fixes.

    Not only is the playground a great way to share inter planetary code snippets, it is also an excellent example of how to build a distributed web application with the inter planetary stack. In fact, each shared code snippet is a self-contained distributed web application as well, which can be turned into a full-fledged product.

    Why focus this year

    Please discuss why 2021 is the right year for this theme.

    Many gaps and limitations have been addressed over the last couple of years, making the inter planetary stack a good fit for this kind of dogfooding. By building such a playground, not only is the stack put to work, but it also creates a launchpad for (self-contained) distributed apps that are as easy to start and share as typing little snippets of code in the browser.

    Example workstreams

    Please list relevant work streams, development milestones, and a high-level timeline for these efforts.

    • Develop a web application e.g. https://try.ipfs.io:
      • Embeds a code editor (CodeMirror or Monaco).
      • Embeds a (shared) JS-IPFS node.
      • Provides the ability to execute a code snippet in a context with the embedded IPFS node.
      • Provides the ability to share a snippet by publishing it to IPFS.
    • Integrate remote pinning services for improved snippet availability
      • Add an interface for adding a remote pinning service.
      • Add the ability to pin a snippet to an added pinning service during the share interaction.
    • Self-contained snippets
      • Turn shared snippets into self-contained web apps. Bundles stored in IPFS contain (link) all the resources the snippet depends on:
        • HTML page with code editor
        • All the resources JS, CSS, Images, etc...
        • Code snippet itself
      • Each snippet bundle can be loaded from arbitrary gateway or client that supports IPFS natively
        • Each snippet bundle is origin separated from the other
        • IPFS (shared) node upholds origin separation (MFS and pin sets are per origin)
      • Code snippets are just ES modules stored in IPFS and can be imported
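
    As a rough illustration of the "pin during share" step, the sketch below assumes the snippet's CID is handed to a service implementing the vendor-neutral IPFS Pinning Service API spec (https://ipfs.github.io/pinning-services-api-spec/). The endpoint URL and access token are placeholders the user would supply when adding a remote pinning service in the playground UI.

    ```ts
    // Hypothetical configuration entered by the user when adding a service.
    const PINNING_ENDPOINT = 'https://pinning.example.com/psa' // placeholder
    const PINNING_TOKEN = '<access token>'                     // placeholder

    // Ask the remote pinning service to pin a shared snippet by CID,
    // following the Pinning Service API's POST /pins operation.
    async function pinSnippet(cid: string, name: string): Promise<void> {
      const res = await fetch(`${PINNING_ENDPOINT}/pins`, {
        method: 'POST',
        headers: {
          Authorization: `Bearer ${PINNING_TOKEN}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ cid, name })
      })
      if (!res.ok) {
        throw new Error(`pinning request failed: ${res.status}`)
      }
      // The service responds with a pin status object (queued/pinning/pinned).
      const status = await res.json()
      console.log(`snippet ${name} pin status:`, status.status)
    }
    ```
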
    2021 Theme Proposal, opened by Gozala
  • [2021 Theme Proposal] IPFS Africa Community

    [2021 Theme Proposal] IPFS Africa Community

    Note, this is part of the 2021 IPFS project planning process - feel free to add other potential 2021 themes for the IPFS project by opening a new issue or discuss this proposed theme in the comments, especially other example workstreams that could fit under this theme for 2021. Please also review others’ proposed themes and leave feedback here!

    Theme description

    The objective of IPFS Africa Community as a theme is to increase adoption of IPFS by focusing on the development of Africa-centric solutions on the network. We will highlight and focus on application areas we’ve identified as having higher adoption potential, i.e. using IPFS to build solutions around public/community data, democracy, internet freedom, and mobile-specific use cases.

    These application areas reflect some of the biggest challenges linked to internet freedom for users in certain countries. In Uganda, for instance, the implementation of the OTT (Over The Top) tax on the use of social media platforms presents an opportunity to rally an IPFS Uganda community that can be inspired to build an IPFS-based solution to lower the cost of file sharing for internet users. This will also inspire the development of mobile-specific solutions, since it’s widely known that smartphone penetration stands at 95% on the continent.

    We see similar adoption opportunities in other countries, e.g. Nigeria, Zimbabwe, and Kenya, where we are already introducing more developers to IPFS as a viable alternative to traditional web services.

    Hypothesis

    During an in-depth review of my favorite IPFS case studies so far, I’ve found different types of solutions designed and built on IPFS that uniquely address issues such as data privacy and autonomy, consensus systems, data immutability, distributed storage, decentralized identity, data marketplaces, and more. For all of these core software goals, the IPFS network has demonstrated an overall low readiness gap, and with the right training and support, community members can be encouraged to start experimenting with IPFS to build similar solutions. Training and support can take the form of various community activities, including ProtoSchool workshops, hack nights, and more.

    Vision statement & why focus this year

    With this approach to our objective of increasing IPFS adoption on the continent, we anticipate building an active community of users, developers, and founders who will also actively collaborate within the wider IPFS community. In numbers, success for this theme means growing the number of applications on the network by 20%, growing the number of end users by 5%, and growing the number of nodes by 30%.

    This will also increase community visibility. As we continue to reach out to web2 communities, we will encourage them to make the switch to web3, bringing the web one step closer to decentralization in storage, security, file sharing, node distribution, and privacy.

    Example workstreams

    Our workstreams will focus on community activities that provide training and support for community members, for example:

    • Evaluate the IPFS network by exploring how IPFS addresses censorship, disaster resilience, offline data accessibility, cost efficiency in high-bandwidth areas, and more. This series of workshops, titled ‘Explore IPFS’, can happen bi-weekly for 8-12 weeks.

    • Organize technical sessions where community members will work with IPFS implementations, tools, and applications. These can be meetups (physical or virtual) covering sessions such as:

        - Installing, upgrading, and downgrading IPFS installations
        - Hosting simple static websites using IPFS
        - In-depth reviews of the Go and JS implementations of IPFS
        - IPFS case study review sessions, and more

    • Support project building by identifying local partners to work with and lead project-building activities.

    2021 Theme Proposal, opened by realChainLife
  • [2021 Theme Proposal] Integration of the cluster follower into the WebUI

    [2021 Theme Proposal] Integration of the cluster follower into the WebUI

    Note, this is part of the 2021 IPFS project planning process - feel free to add other potential 2021 themes for the IPFS project by opening a new issue or discuss this proposed theme in the comments, especially other example workstreams that could fit under this theme for 2021. Please also review others’ proposed themes and leave feedback here!

    Theme description

    Currently it's hard for new users to find data to store on their nodes. I have often recommended the websites, and new users were surprised to hear that there is a cluster implementation based on IPFS.

    Hypothesis

    I feel like we should make the collaborative clusters much more visible by integrating join/leave functionality into the WebUI.

    It would be nice to hide the technicalities, like the fact that there's a separate toolkit and the specific addresses, behind a link.

    It's more interesting to show users a link to the cluster pinset, so they can see what they will share with the network, and to group the pins of each cluster the user takes part in within the WebUI, so they can see how much storage each cluster uses.

    Vision statement

    Joining should start a progress bar showing the number of pins processed/remaining and how much storage in total the cluster occupies on the node.

    This allows users to take part right after firing up their node for the first time and shows what interesting kinds of data are stored on the filesystem, encouraging them to browse it for a while.

    Additionally, stats like how many nodes take part in a cluster would be interesting to show there, encouraging the user to select a cluster with fewer participants over larger ones.

    Why focus this year

    Last year the IPFS team put out collaborative clusters, and we found some exciting ways to use them. Now it's time to make them more visible and easier for new users to handle, to bring them to a broader audience and enhance the 'new user' experience.

    Example realization

    We could replace the Explore tab, which currently lists only 3 or 4 static CIDs that are interesting but not really dynamic.

    Instead, we could load the latest collaborative cluster list (via IPNS), with more new-user-friendly explanations and descriptions provided by the cluster administrators in something like a simple YAML file.

    This gets parsed and rendered in the tab. The cluster node which allows us to join is then asked for the number of nodes in the cluster, the current size of the cluster pinset, and a CID for the cluster pinset. (Not sure if that's currently available via the API, but it should be a minor addition.)

    Then we present this to the user, who can press join, and the node starts running a cluster follower in the background for as long as it is running. A hedged sketch of this flow follows below.
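
    For illustration only, here is a hedged sketch of how the Explore tab might load such a cluster list over IPNS and render it. The IPNS path, the YAML layout, and the join hook are all hypothetical and would need to be agreed with the cluster administrators; only `fetch` and the js-yaml `load` function are assumed as real building blocks.

    ```ts
    import { load } from 'js-yaml'

    // Hypothetical shape of the descriptors cluster administrators would publish.
    interface ClusterDescriptor {
      name: string          // human-readable cluster name
      description: string   // new-user-friendly description
      template: string      // address the follower needs to join, e.g. a DNSLink
    }

    // Placeholder IPNS path for the collaborative cluster list; the real list
    // location and file layout would be defined elsewhere.
    const CLUSTER_LIST_URL = 'https://ipfs.io/ipns/clusters.example.org/clusters.yaml'

    async function loadClusterList(): Promise<ClusterDescriptor[]> {
      const res = await fetch(CLUSTER_LIST_URL)
      if (!res.ok) throw new Error(`could not load cluster list: ${res.status}`)
      return load(await res.text()) as ClusterDescriptor[]
    }

    // The WebUI would render these descriptors in the Explore tab; pressing
    // "join" would then need a (not yet existing) daemon API to start a
    // cluster follower for the selected template in the background.
    const clusters = await loadClusterList()
    for (const c of clusters) {
      console.log(`${c.name}: ${c.description} (join via ${c.template})`)
    }
    ```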

    2021 Theme Proposal, opened by RubenKelevra