Sedna

What is Sedna?

Sedna is an edge-cloud synergy AI project incubated in KubeEdge SIG AI. Benefiting from the edge-cloud synergy capabilities provided by KubeEdge, Sedna can implement cross edge-cloud collaborative training and collaborative inference capabilities, such as joint inference, incremental learning, and federated learning. Sedna supports popular AI frameworks such as TensorFlow, PyTorch, PaddlePaddle, and MindSpore.

Sedna can add edge-cloud synergy capabilities to existing training and inference scripts with minimal changes, reducing costs, improving model performance, and protecting data privacy.

Features

Sedna has the following features:

  • Provides an edge-cloud synergy AI framework.

    • Provides dataset and model management across edge and cloud, helping developers quickly implement synergy AI applications.
  • Provides edge-cloud synergy training and inference frameworks.

    • Joint inference: under the condition of limited resources on the edge, difficult inference tasks are offloaded to the cloud to improve overall performance while keeping the throughput.
    • Incremental learning: for small samples and non-iid data on the edge, models can be adaptively optimized on the cloud or edge. The more the models are used, the smarter they become.
    • Federated learning: for scenarios where the data is too large, users are unwilling to migrate raw data to the cloud, or privacy protection requirements are high, models are trained at the edge and parameters are aggregated on the cloud, effectively resolving data silos.
    • etc.
  • Compatibility

    • Compatible with mainstream AI frameworks such as TensorFlow, PyTorch, PaddlePaddle, and MindSpore.
    • Provides extension interfaces so developers can quickly integrate third-party algorithms; some algorithms necessary for edge-cloud synergy are preset, such as hard example mining and aggregation algorithms.
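
The joint inference flow above, which uses hard example mining to decide what to offload, can be sketched as follows. This is a minimal illustration with assumed function names and a confidence threshold; it is not Sedna's actual API:

```python
# Sketch: threshold-based hard example mining for joint inference.
# If the edge model's confidence is below a threshold, the sample is
# "hard" and offloaded to the cloud model; otherwise the edge result
# is used directly. Names and thresholds are illustrative only.

def is_hard_example(edge_confidence, threshold=0.9):
    """Treat low-confidence edge predictions as hard examples."""
    return edge_confidence < threshold

def joint_infer(sample, edge_model, cloud_model, threshold=0.9):
    label, confidence = edge_model(sample)
    if is_hard_example(confidence, threshold):
        # Offload the difficult sample to the more capable cloud model.
        label, confidence = cloud_model(sample)
    return label, confidence

# Toy models standing in for real edge/cloud inference services.
edge_model = lambda x: ("helmet", 0.95) if x == "easy" else ("helmet", 0.4)
cloud_model = lambda x: ("no_helmet", 0.99)

print(joint_infer("easy", edge_model, cloud_model))    # edge result kept
print(joint_infer("blurry", edge_model, cloud_model))  # offloaded to cloud
```

In a real deployment the edge and cloud models would be inference services reached over the network; the toy lambdas here only illustrate the routing decision.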

Architecture

Sedna's edge-cloud synergy is implemented based on the following capabilities provided by KubeEdge:

  • Unified orchestration of applications across edge and cloud.
  • Router: cross edge-cloud message channel in the management plane.
  • EdgeMesh: cross edge-cloud microservice discovery and traffic governance in the data plane.

Components

Sedna consists of the following components:

GlobalManager

  • Unified edge-cloud synergy AI task management
  • Cross edge-cloud synergy management and collaboration
  • Central Configuration Management

LocalController

  • Local process control of edge-cloud synergy AI tasks
  • Local general management: model, dataset, and status synchronization

Worker

  • Performs inference or training, based on an existing ML framework.
  • Launched on demand; think of them as Docker containers.
  • Different workers serve different features.
  • Can run on the edge or in the cloud.

Lib

  • Expose the Edge AI features to applications, i.e. training or inference programs.

Guides

Installation

Follow the Sedna installation document to install Sedna.

Examples

Example 1: Joint Inference Service in Helmet Detection Scenario.

Roadmap

Meeting

Regular Community Meeting:

Resources:

Contact

If you have questions, feel free to reach out to us in the following ways:

License

Sedna is under the Apache 2.0 license. See the LICENSE file for details.

Issues
  • Joint Inference Service is failed on the agent


    I am deploying Example 1: Using Joint Inference Service in Helmet Detection Scenario.

    In the last step, Create joint inference service, the master is okay, but joint-inference-helmet-detection-little can't work on the edge: there is no pod on the edge (I already have the big and little images locally, and the version is v0.4.3).

    kubectl describe ji

      Edge Worker:
        Hard Example Mining:
          Name:  IBT
          Parameters:
            Key:    threshold_img
            Value:  0.9
            Key:    threshold_box
            Value:  0.9
        Model:
          Name:  helmet-detection-inference-little-model
        Template:
          Spec:
            Containers:
              Env:
                Name:             input_shape
                Value:            416,736
                Name:             video_url
                Value:            rtsp://localhost/video
                Name:             all_examples_inference_output
                Value:            /data/output
                Name:             hard_example_cloud_inference_output
                Value:            /data/hard_example_cloud_inference_output
                Name:             hard_example_edge_inference_output
                Value:            /data/hard_example_edge_inference_output
              Image:              kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.4.3
              Image Pull Policy:  IfNotPresent
              Name:               little-model
              Resources:
                Limits:
                  Memory:  2Gi
                Requests:
                  Cpu:     100m
                  Memory:  64M
              Volume Mounts:
                Mount Path:  /data/
                Name:        outputdir
            Node Name:       wspn2
            Volumes:
              Host Path:
                Path:  /joint_inference/output
                Type:  Directory
              Name:    outputdir
    Status:
      Active:  1
      Conditions:
        Last Heartbeat Time:   2022-03-23T13:34:59Z
        Last Transition Time:  2022-03-23T13:34:59Z
        Status:                True
        Type:                  Running
        Last Heartbeat Time:   2022-03-23T13:34:59Z
        Last Transition Time:  2022-03-23T13:34:59Z
        Message:               the worker of service failed
        Reason:                workerFailed
        Status:                True
        Type:                  Failed
      Failed:                  1
      Start Time:              2022-03-23T13:34:59Z
    Events:                    <none>
    
    

    On the edge, edgemesh and sedna are okay, but there is no joint-inference-helmet-detection-little image.

    docker logs k8s_lc_lc-zlrzg_sedna
    


    I have already tried to uninstall Sedna and reinstall it, but the problem still exists.

    kind/bug 
    opened by zz952332446 15
  • [Enhancement Request] Integrate Plato into Sedna as a backend for supporting federated learning - Phase one


    This is a PR for integrating Plato into Sedna to support federated learning (#50). @li-ch @baochunli @jaypume

    • We currently support Fedavg and Mistnet in Sedna via Plato.
    • We add one example for each federated learning job.

    Finished

    • [x] A tool to automatically generate Plato configuration file based on the CRD.
    • [x] ~~Replace the communication libs with Sedna build-in libs.~~ S3 and asyncio are used.
    • [x] Examples and demo presentation
      • [x] CV: Yolo-v5 demo
    • [x] Trainer and Estimator
    • [x] datasource is from sedna dataset
    lgtm approved size/XXL 
    opened by XinYao1994 14
  • Join_inference cannot be connected to the cloud for analysis


    What happened: When I use the joint_inference/helmet_detection_inference example, there are some errors in the helmet-detection-inference-example-edge-5pg66 logs:

    [2021-04-16 02:32:48,383][sedna.joint_inference.joint_inference][ERROR][124]: send request to http://192.168.2.211:30691 failed, error is HTTPConnectionPool(host='192.168.2.211', port=30691): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efe5409dcf8>: Failed to establish a new connection: [Errno 111] Connection refused',)), retry times: 5
    [2021-04-16 02:32:48,384][sedna.joint_inference.joint_inference][WARNING][365]: retrieve cloud infer service failed, use edge result

    What you expected to happen: The above error does not exist

    How to reproduce it (as minimally and precisely as possible): Just follow the example: https://github.com/kubeedge/sedna/blob/main/examples/joint_inference/helmet_detection_inference/README.md

    Anything else we need to know?:

    Environment:

    Sedna Version
    $ kubectl get -n sedna deploy gm -o jsonpath='{.spec.template.spec.containers[0].image}'
    kubeedge/sedna-gm:v0.1.0
    $ kubectl get -n sedna ds lc -o jsonpath='{.spec.template.spec.containers[0].image}'
    kubeedge/sedna-lc:v0.1.0

    Kubernetes Version
    $ kubectl version
    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

    KubeEdge Version
    $ cloudcore --version
    1.6
    $ edgecore --version
    1.6

    CloudSide Environment:

    Hardware configuration
    $ lscpu
    # paste output here
    
    OS
    $ cat /etc/os-release
    # paste output here
    
    Kernel
    $ uname -a
    # paste output here
    
    Others

    EdgeSide Environment:

    Hardware configuration
    $ lscpu
    # paste output here
    
    OS
    $ cat /etc/os-release
    # paste output here
    
    Kernel
    $ uname -a
    # paste output here
    
    Others
    kind/bug 
    opened by Jw-Jm 14
  • Add e2e code framework


    try to fix #14, with other test cases to be added

    1. Add e2e framework code stolen from k8s.io/test/e2e/framework and remove unnecessary code for simplicity.
    2. Add run-e2e.sh and a github action job to run e2e tests.
    3. Add a simple dataset testcase, TODO for other CRs.
    4. Update local-up.sh to support run-e2e.sh.
    lgtm approved size/XXL 
    opened by llhuii 14
  • Add shared storage support for dataset/model


    What would you like to be added:

    Add shared storage support for dataset/model, such as s3/http protocols.

    Why is this needed:

    Currently only dataset/model URIs with a host local path are supported, which limits cross-node model training/serving.

    kind/feature 
    opened by llhuii 13
  • buildx: speed the language having builtin build


    We use docker-buildx to build our components images for different platforms.

    But some languages, such as Go, have good built-in support for multi-platform builds, and buildx supports that.

    In Sedna, GM/LC are written in Go. This commit adds support for this.

    Signed-off-by: llhuii [email protected]

    lgtm approved size/L 
    opened by llhuii 11
  • Improve build script of examples


    This new version of the example build script allows selectively building one type of example rather than all the Docker images. Moreover, it automatically pushes the images to a target repository.

    lgtm approved size/M 
    opened by vcozzolino 11
  • joint_inference: bug fix and interface reconstruction


    1. fix example bug: save the result fetched from the cloud if the sample is a hard example
    2. Add docs and code comment
    3. rename: TSBigModelService -> BigModelService
    4. fix #133
    lgtm approved size/L 
    opened by JoeyHwong-gk 9
  • [Incremental Learning] Add the support with different `nodeName` of train/eval workers


    What would you like to be added/modified: Currently, in the incremental learning feature, the nodeNames of the dataset and the train/eval workers must be the same.

    When the dataset is in shared storage, support for different nodeNames could be added.

    Why is this needed:

    1. In principle we can't require the user to train and eval the model on the same node.
    2. Training requires far more resources than the eval worker; they may not fit on the same node.
    3. Sometimes the user may need to do evaluation on the same/similar node as the infer worker, for example both at the edge.
    kind/feature 
    opened by llhuii 9
  • Move algorithm-specific dependencies into class definition (optical flow)


    Signed-off-by: Vittorio Cozzolino [email protected]

    What type of PR is this?

    /kind bug

    What this PR does / why we need it:

    Move algorithm-specific dependencies into the class definition for the optical flow module. This should lift the requirement to have extra dependencies in the KB image.

    Which issue(s) this PR fixes:

    Fixes #320

    kind/bug lgtm approved size/M 
    opened by vcozzolino 8
  • Asynchronous Service Communication with Apache Kafka

    Currently, communication between services deployed on top of Sedna happens through EdgeMesh, which is a crucial component enabling edge-cloud synergy and service discovery. However, it introduces a few limitations:

    • Tight service coupling. The different pods deployed on the cluster are strongly interdependent, as they need to communicate directly with each other. Applications running on top of this kind of network layout are fairly prone to failures due to the need to maintain a "link" with the other peers at any point in time. Basically, if one service goes down, it blocks the whole pipeline. Additionally, in terms of implementation effort, it requires semi-manual port/IP assignment in the Go controller, which makes the system design very stiff.

    • Resiliency. Applications and services designed on top of EdgeMesh need to cope with network failures autonomously. This forces the developer to put failure-management code directly in the application layer. For example, if I can't send data to the next pod, I buffer it (if I can buffer). It also forces assumptions about the capabilities of the host node (how much can I buffer?).

    • Complexity. In complex applications spanning dozens of nodes, thinking about how they should be connected or interact with each other becomes a time-consuming task. It would be much easier to just have pods read and write from a shared location where all the data is deposited/collected. In this way, there is no need to know who is fetching/writing the data; the application is just interested in the data itself.

    To solve the aforementioned problems, in this PR I advocate integrating Sedna with Apache Kafka. Kafka can help with:

    • Decoupling services by using a broker system where applications can deposit/retrieve data.
    • Increased resiliency: failures in distributed edge-cloud applications are managed simply because the pods are no longer tightly interconnected. In essence, we go from "no connection -> hard failure" to "no data -> wait for data".
    • Reduced deployment complexity. The pods just need to know a set of brokers to talk to. No need for service ports, IPs, web services, or REST APIs.
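
The shift from "no connection -> hard failure" to "no data -> wait for data" can be sketched with Python's standard-library queue standing in for a Kafka topic. This is a conceptual sketch of the decoupling idea, not the Kafka client API:

```python
import queue

# A topic-like buffer: producers deposit data, consumers fetch it.
# Neither side needs to know the other's address or liveness.
topic = queue.Queue()

def producer(records):
    for r in records:
        topic.put(r)  # deposit data; no direct peer connection needed

def consumer(n, timeout=1.0):
    results = []
    for _ in range(n):
        try:
            results.append(topic.get(timeout=timeout))  # wait for data
        except queue.Empty:
            break  # no data yet: wait/retry later, not a hard failure
    return results

producer(["frame-1", "frame-2"])
print(consumer(2))  # ['frame-1', 'frame-2']
```

With a real broker the producer and consumer would run in separate pods and the topic would be durable, but the decoupling property is the same: a slow or absent peer means an empty topic, not a broken connection.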

    Currently, I have implemented and deployed a complete, fully functional example in Sedna running with Kafka. In this PR, I'm introducing the idea of adding Apache Kafka to Sedna. If it's accepted, my plan is to present the other components in subsequent PRs. The plan is the following:

    • Milestone 1 (current PR). Add Kafka wrappers to lib/sedna. These are application-agnostic and can be used in any service running on Sedna.
    • Milestone 2. Add GO functions to enable controller to inject directly into the pods the parameters required to access the Kafka brokers. This is done automatically through service discovery.
    • Milestone 3. Add the new services using Kafka to lib/core.
    • Milestone 4. Add new GO controller and types.
    • Milestone 5. Add complete example to example folder, together with Dockerfiles and other resources.
    needs-rebase size/L 
    opened by vcozzolino 8
  • The proposal about building a high frequency Sedna based end to end u…


    The proposal about building a high frequency Sedna based end to end use case in ModelBox format

    What type of PR is this?

    What this PR does / why we need it:

    Which issue(s) this PR fixes:

    Fixes #

    size/M 
    opened by Ymh13383894400 2
  • [OSPP]The proposal about end-to-end example for Sedna Re-ID feature


    What type of PR is this? /kind documentation

    What this PR does / why we need it: To build an end-to-end example for Sedna Re-ID feature, which can provide references for other users

    Which issue(s) this PR fixes:

    Fixes #302

    size/M kind/documentation 
    opened by DevinWain 2
  • update storage-initializer version and s3 docs


    Signed-off-by: hey-kong [email protected]

    What type of PR is this? /kind documentation

    What this PR does / why we need it: Sync the latest version of storage-initializer, and update the S3 docs so that CRs can be created correctly.

    size/S kind/documentation 
    opened by hey-kong 1
  • [OSPP]The proposal for TinyMS support in Python SDK


    What type of PR is this? /kind feature

    What this PR does: This PR is a proposal that analyses the Python SDK architecture of Sedna and discusses how to implement Python SDK support for TinyMS, with demos of MindSpore and TinyMS.

    kind/feature size/L 
    opened by Lj1ang 2
  • [OSPP] The proposal for Observability management


    What type of PR is this?

    /kind design

    why we need it: After creating edge-cloud synergy AI tasks with Sedna, users can only check the status, parameters, and metrics of tasks via the command line. This design is intended to provide observability management for displaying Sedna's logs and metrics in real time.

    size/M kind/design 
    opened by Kanakami 3
Releases (v0.5.0)
  • v0.5.0(Jun 2, 2022)

    What's New

    Add the Multi-Edge Inference Paradigm

    The new Multi-Edge Inference feature introduces a new mode of collaboration for managing distributed AI applications, combining the computing power of edge nodes and fully utilizing their resources.

    • Provides feature-extraction-based collaborative inference to protect the privacy of data on the edge.
    • Filters data to reduce the amount transmitted from the edge.
    • Introduces message-oriented middleware to support asynchronous communication between multi-edge AI application components.

    For details, see https://github.com/kubeedge/sedna/tree/main/examples/multiedgeinference/pedestrian_tracking

    By @vcozzolino @soumajm @jaypume.

    Incremental learning supports heterogeneous chips

    The training, evaluation, and inference worker nodes in incremental learning may use different chips, so AI models of the same version cannot be used in a unified manner and must be converted for the specific chip. With this feature in Incremental Learning, users no longer need to convert models manually. Instead, when creating an application, users can configure the chip version corresponding to the model, and models are adaptively converted on different nodes.

    By @JimmyYang20 in https://github.com/kubeedge/sedna/pull/315

    Lifelong learning supports multi-node deployment

    Sedna 0.4 supports a single-node version of the lifelong learning application. It requires that the dataset and the node name of the training, evaluation, and inference workers be the same. However, this method has certain limitations in the following scenarios:

    • Training workers require more resources than evaluation and inference workers. In some cases, computing resources cannot meet the requirements, and the workers need to be scheduled on different nodes.
    • In some scenarios, users need to manually specify that training, evaluation, and inference run on specific nodes, for example all on an edge node.

    Therefore, the new feature enables the training, evaluation, and inference workers of lifelong learning to support the configuration of different nodeNames and nodeSelectors, allowing users to flexibly specify running nodes when creating workers.

    By @JimmyYang20 in https://github.com/kubeedge/sedna/pull/287

    Sedna supports Helm deployment

    Sedna Helm charts are introduced, including charts for sedna-gm, sedna-lc, and sedna-kb. Users can install required components on demand. They can also customize or modify the Sedna Helm chart application templates based on Helm rules. In addition, users can upload the Sedna Helm charts to various cloud-native app markets to deploy the entire Sedna environment more simply and conveniently. For details, see https://github.com/kubeedge/sedna/tree/main/build/helm/sedna

    By @Poorunga in https://github.com/kubeedge/sedna/pull/297

    Other Notable Changes

    • Upgrade K8S to v1.21.4 by @llhuii in https://github.com/kubeedge/sedna/pull/232
    • Upgrade golang from 1.14 to 1.16 by @llhuii in https://github.com/kubeedge/sedna/pull/255
    • Update docs: add lib development guide by @JoeyHwong-gk in https://github.com/kubeedge/sedna/pull/178
    • Update DirectoryorCreate to DirectoryOrCreate by @hey-kong in https://github.com/kubeedge/sedna/pull/313
    • Add edgemesh installation by @llhuii in https://github.com/kubeedge/sedna/pull/253
    • Add pr template. by @haiker2011 in https://github.com/kubeedge/sedna/pull/271
    • buildx: speed the language having builtin build by @llhuii in https://github.com/kubeedge/sedna/pull/230
    • installation: allow *_VERSION passing without v by @llhuii in https://github.com/kubeedge/sedna/pull/247
    • Make min resync period configurable by @haiker2011 in https://github.com/kubeedge/sedna/pull/270
    • IL: decouple eval task and deploy task by @JimmyYang20 in https://github.com/kubeedge/sedna/pull/242

    Bug Fixes

    • Fix IL bug: TrainTriggerStatus used in eval task by @JimmyYang20 in https://github.com/kubeedge/sedna/pull/239
    • Fix IL bug: job misses first data when reads data. by @JimmyYang20 in https://github.com/kubeedge/sedna/pull/237
    • Fix grammar error in api module by @xujingjing-cmss in https://github.com/kubeedge/sedna/pull/248
    • Fix grammar error in globalmanager module by @xujingjing-cmss in https://github.com/kubeedge/sedna/pull/249
    • Fix docs: index and quickstart by @jaypume in https://github.com/kubeedge/sedna/pull/233
    • Fix the wrong uvicorn version in sedna lib by @jaypume in https://github.com/kubeedge/sedna/pull/240
    • Fixbug: cloud node cannot connect k8s apiservice in allinone by @JimmyYang20 in https://github.com/kubeedge/sedna/pull/291
    • Fixbug: db path has missed the mount volume prefix in LC by @JimmyYang20 in https://github.com/kubeedge/sedna/pull/282
    • Fix downstream bug in IL by @JimmyYang20 in https://github.com/kubeedge/sedna/pull/283
    • Fix all-in-one doc by @JimmyYang20 in https://github.com/kubeedge/sedna/pull/281
    • Examples: fix the image name prefix by @llhuii in https://github.com/kubeedge/sedna/pull/243

    New Contributors

    • @xujingjing-cmss made their first contribution in https://github.com/kubeedge/sedna/pull/248
    • @haiker2011 made their first contribution in https://github.com/kubeedge/sedna/pull/270
    • @hey-kong made their first contribution in https://github.com/kubeedge/sedna/pull/313

    Full Changelog: https://github.com/kubeedge/sedna/compare/v0.4.3...v0.5.0

  • v0.4.3(Nov 3, 2021)

    What's Changed

    • Fix the display problem in the home page by @jaypume in https://github.com/kubeedge/sedna/pull/216
    • Fix kb image version in the install script by @JimmyYang20 in https://github.com/kubeedge/sedna/pull/209
    • all-in-one: fix master node unschedule taint bug by @llhuii in https://github.com/kubeedge/sedna/pull/210
    • Improve update-codegen.sh to copy generated files into SEDNA_ROOT. by @vcozzolino in https://github.com/kubeedge/sedna/pull/218
    • Improve build script of examples by @vcozzolino in https://github.com/kubeedge/sedna/pull/214
    • Add missing object search/tracking CRD and RBAC yamls by @llhuii in https://github.com/kubeedge/sedna/pull/223
    • Unify all services exposed as ClusterIP by integrating EdgeMesh by @llhuii in https://github.com/kubeedge/sedna/pull/220
    • example: fix fl_model.train in surface_defect_detection_v2 by @XinYao1994 in https://github.com/kubeedge/sedna/pull/226
    • storage initializer: keep dir name in s3 download by @JimmyYang20 in https://github.com/kubeedge/sedna/pull/221
    • all-in-one: add NO_INSTALL_SEDNA variable flag by @llhuii in https://github.com/kubeedge/sedna/pull/227
    • Fix building wrong GM/LC arm64 docker images by @JimmyYang20 in https://github.com/kubeedge/sedna/pull/215

    New Contributors

    • @vcozzolino made their first contribution in https://github.com/kubeedge/sedna/pull/218

    Full Changelog: https://github.com/kubeedge/sedna/compare/v0.4.2...v0.4.3

  • v0.4.2(Oct 14, 2021)

    What's Changed

    • Add object search controller by @EnfangCui in https://github.com/kubeedge/sedna/pull/190
    • Sedna control plane supports amd64/arm64 by @JimmyYang20 in https://github.com/kubeedge/sedna/pull/196
    • Fix install issues about lib by @JoeyHwong-gk in https://github.com/kubeedge/sedna/pull/184

    Full Changelog: https://github.com/kubeedge/sedna/compare/v0.4.1...v0.4.2

  • v0.4.1(Sep 18, 2021)

    New Features

    Bug Fixes

    • Optimize installation script (#195)
    • Fixes some bugs in the PR of integrating Plato into Sedna to support federated learning (#197)
    • Fixes GM OOM killer by increasing GM memory limit from 128Mi to 256Mi (#200)
    • Remove externalIPs when creating k8s service in GM (#201)
  • v0.4.0(Sep 8, 2021)

    Federated Learning

    • MistNet integrated, a representation-extraction aggregation algorithm, which:
      • further reduces the resource requirements of the edge by extracting representation data on the edge side and performing aggregation training on the cloud side.
      • further protects data privacy through quantization and noise.
      • provides sample code based on the YOLOv5 network.
    • Supports extending the edge-cloud transmission method, including S3 and WebSocket protocols, by abstracting a Transmitter interface.
    • Supports extending the client-choose algorithm in federated learning by abstracting a ClientChoose interface.
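
The Transmitter abstraction can be pictured as an abstract base class that concrete backends (such as S3 or WebSocket) implement. The class and method names below are illustrative assumptions, not Sedna's actual interface:

```python
from abc import ABC, abstractmethod

class Transmitter(ABC):
    """Abstract edge-cloud transmission method (illustrative sketch)."""

    @abstractmethod
    def send(self, payload: bytes) -> None: ...

    @abstractmethod
    def receive(self) -> bytes: ...

class InMemoryTransmitter(Transmitter):
    """Stand-in backend; a real one would speak S3 or WebSocket."""

    def __init__(self):
        self._buffer = []

    def send(self, payload: bytes) -> None:
        self._buffer.append(payload)

    def receive(self) -> bytes:
        return self._buffer.pop(0)

t = InMemoryTransmitter()
t.send(b"model-weights")
print(t.receive())  # b'model-weights'
```

The value of the abstraction is that federated learning code depends only on `send`/`receive`, so a new transport can be plugged in without touching the training logic.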
  • v0.3.1(Aug 16, 2021)

    Notable Changes:

    • In Incremental Learning, Inference/Train/Eval workers can now be deployed by nodeName and nodeSelector on multiple nodes.

    Bug Fixes

    • Fixed callback function args for train/eval/infer not being supported by the sklearn backend.
    • Fixed a model-saving problem in the Lifelong Learning Estimator.
    • Fixed the aggregation algorithm being incorrectly executed on the edge in Federated Learning.
    • Fixed jobs not being resumable after the LC restarts.

    Improvement

    • Decoupled all features into independent packages of GM and LC.
  • v0.3.0(Jun 8, 2021)

    Lifelong learning

    • Supports the edge-cloud synergy lifelong learning feature, which:
      • leverages the cloud knowledge base, which empowers the scheme with memory ability, helping to continuously learn and accumulate historical knowledge and overcome the catastrophic forgetting challenge.
      • is essentially the combination of two other learning schemes, i.e., multi-task learning and incremental learning, so it can learn unseen tasks with shared knowledge among various scenarios over time.
    • Adds a lifelong learning example.

    Lib refactor

    • By using registration of class-factory functions to emulate virtual constructors, developers can invoke different components by changing variables in the Config file.
    • Cleaned up and redesigned the base Config class; each feature maintains its specific variables, and developers can update the variables manually.
    • Decoupled the ML framework from Sedna's features, allowing developers to choose their favorite framework.
    • Added common file operations and a unified log format in the common module; an abstract base class standardizes the feature modules' interface, and features are invoked by inheriting from the base class.
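
The class-factory registration pattern described above can be sketched like this; the registry, decorator, and backend names are illustrative, not Sedna's actual implementation:

```python
# Sketch: class-factory registration emulating virtual constructors.
# A config value selects which registered component gets constructed,
# so callers never reference concrete classes directly.

REGISTRY = {}

def register(name):
    """Decorator that records a class under a config-selectable name."""
    def decorator(cls):
        REGISTRY[name] = cls
        return cls
    return decorator

def build(config):
    # Instantiate whichever component the config names.
    return REGISTRY[config["backend"]]()

@register("tensorflow")
class TensorFlowBackend:
    def name(self):
        return "tensorflow"

@register("pytorch")
class PyTorchBackend:
    def name(self):
        return "pytorch"

backend = build({"backend": "pytorch"})
print(backend.name())  # pytorch
```

Swapping frameworks then becomes a one-line config change rather than a code change, which is the decoupling the refactor aims for.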

    Published images

    The published images can be found under docker.io/kubeedge:

    • kubeedge/sedna-gm:v0.3.0
    • kubeedge/sedna-lc:v0.3.0
    • kubeedge/sedna-kb:v0.3.0
    • kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0
    • kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0
    • kubeedge/sedna-example-incremental-learning-helmet-detection:v0.3.0
    • kubeedge/sedna-example-federated-learning-surface-defect-detection-train:v0.3.0
    • kubeedge/sedna-example-federated-learning-surface-defect-detection-aggregation:v0.3.0
    • kubeedge/sedna-example-lifelong-learning-atcii-classifier:v0.3.0

  • v0.2.0(Apr 30, 2021)

  • v0.1.0(Apr 1, 2021)

    Incremental Learning

    • Supports automatically retraining, evaluating, and updating models based on the data generated at the edge.
    • Supports time triggers, sample-size triggers, and precision-based triggers.
    • Supports hard example discovering on unlabeled data, reducing the manual labeling workload.
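
The three retrain triggers can be sketched as simple predicates; the names and thresholds below are illustrative assumptions, not Sedna's actual API:

```python
import time

def time_trigger(last_train, interval_s):
    """Retrain after a fixed time interval has elapsed."""
    return time.time() - last_train >= interval_s

def sample_trigger(num_new_samples, threshold):
    """Retrain once enough new edge samples have accumulated."""
    return num_new_samples >= threshold

def precision_trigger(current_precision, min_precision):
    """Retrain when model precision drops below a floor."""
    return current_precision < min_precision

print(sample_trigger(120, 100))      # True: 120 new samples >= 100
print(precision_trigger(0.93, 0.9))  # False: precision still above floor
```

A controller would evaluate these predicates periodically and kick off a training worker when any configured trigger fires.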

    Federated Learning

    • Supports automatic deployment of federated learning training scripts to the edge.
    • Supports user-defined aggregation algorithms.
    • Integrates the FedAvg algorithm.
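
FedAvg aggregates per-client model parameters as a sample-size-weighted average; a minimal sketch of the aggregation step:

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors (FedAvg).

    client_weights: list of parameter lists, one per client
    client_sizes:   number of training samples each client holds
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    aggregated = [0.0] * dim
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            # Each client contributes in proportion to its data volume.
            aggregated[i] += w * size / total
    return aggregated

# Two clients: one with 100 samples, one with 300.
print(fedavg([[1.0, 2.0], [3.0, 4.0]], [100, 300]))  # [2.5, 3.5]
```

In the actual federated learning flow, the edge workers train locally and send only these parameters to the cloud aggregator, so raw data never leaves the edge.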

    Joint Inference

    • Supports automatic deployment of big model and little model to cloud and edge.
    • Supports discovering hard examples and sending them to the cloud to improve the overall inference accuracy.

    Published images

    The published images can be found under docker.io/kubeedge:

    • kubeedge/sedna-gm:v0.1.0
    • kubeedge/sedna-lc:v0.1.0
    • kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.1.0
    • kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.1.0
    • kubeedge/sedna-example-incremental-learning-helmet-detection:v0.1.0
    • kubeedge/sedna-example-federated-learning-surface-defect-detection-train:v0.1.0
    • kubeedge/sedna-example-federated-learning-surface-defect-detection-aggregation:v0.1.0


Microshift is a research project that is exploring how OpenShift1 Kubernetes can be optimized for small form factor and edge computing.

Oleg Silkin 0 Nov 1, 2021
OpenYurt - Extending your native Kubernetes to edge(project under CNCF)

openyurtio/openyurt English | 简体中文 What is NEW! Latest Release: September 26th, 2021. OpenYurt v0.5.0. Please check the CHANGELOG for details. First R

OpenYurt 1.3k Aug 11, 2022
Edge Orchestration project is to implement distributed computing between Docker Container enabled devices.

Edge Orchestration Introduction The main purpose of Edge Orchestration project is to implement distributed computing between Docker Container enabled

null 1 Dec 17, 2021
stratus is a cross-cloud identity broker that allows workloads with an identity issued by one cloud provider to exchange this identity for a workload identity issued by another cloud provider.

stratus stratus is a cross-cloud identity broker that allows workloads with an identity issued by one cloud provider to exchange this identity for a w

robert lestak 1 Dec 26, 2021
Cloud-Z gathers information and perform benchmarks on cloud instances in multiple cloud providers.

Cloud-Z Cloud-Z gathers information and perform benchmarks on cloud instances in multiple cloud providers. Cloud type, instance id, and type CPU infor

CloudSnorkel 16 Jun 8, 2022
The GCP Enterprise Cloud Cost Optimiser, or gecco for short, helps teams optimise their cloud project costs.

gecco helps teams optimise their cloud resource costs. Locate abandoned, idle, and inefficiently configured resources quickly. gecco helps teams build

aeihr. 2 Jan 9, 2022
This Go based project of Aadhyarupam Innovators demonstrate the code examples for building microservices, integration with cloud services (Google Cloud Firestore), application configuration management (Viper) etc.

This Go based project of Aadhyarupam Innovators demonstrate the code examples for building microservices, integration with cloud services (Google Cloud Firestore), application configuration management (Viper) etc.

Aadhyarupam 0 Jan 31, 2022
Tiny cross-platform webview library for C/C++/Golang. Uses WebKit (Gtk/Cocoa) and Edge (Windows)

webview A tiny cross-platform webview library for C/C++/Golang to build modern cross-platform GUIs. Also, there are Rust bindings, Python bindings, Ni

webview 10.3k Aug 6, 2022
🦖 Streaming-Serverless Framework for Low-latency Edge Computing applications, running atop QUIC protocol, engaging 5G technology.

YoMo YoMo is an open-source Streaming Serverless Framework for building Low-latency Edge Computing applications. Built atop QUIC Transport Protocol an

YoMo 1.2k Aug 6, 2022
Simple edge server / reverse proxy

reproxy Reproxy is simple edge HTTP(s) sever / reverse proxy supporting various providers (docker, static, file). One or more providers supply informa

Umputun 1k Aug 11, 2022
Simplified network and services for edge applications

English | 简体中文 EdgeMesh Introduction EdgeMesh is a part of KubeEdge, and provides a simple network solution for the inter-communications between servi

KubeEdge 130 Aug 7, 2022
Managing your Kubernetes clusters (including public, private, edge, etc) as easily as visiting the Internet

Clusternet Managing Your Clusters (including public, private, hybrid, edge, etc) as easily as Visiting the Internet. Clusternet (Cluster Internet) is

Clusternet 933 Aug 9, 2022
a small form factor OpenShift/Kubernetes optimized for edge computing

Microshift Microshift is OpenShift1 Kubernetes in a small form factor and optimized for edge computing. Edge devices deployed out in the field pose ve

Red Hat Emerging Technologies 380 Aug 6, 2022
A cutting edge (haha), prototype, object-oriented and highly modular slash command handler for Discordgo.

ken ⚠️ Disclaimer This package is still in a very early state of development and future updates might include breaking changes to the API until the fi

Ringo Hoffmann 16 Jul 15, 2022
Shows your recent browser history in tree style. 树状展示浏览器历史 (For Edge / Chromium / Chrome)

Tree Style Histyle This extension shows your recent browser history in tree style. When you browser pages from internet, you always jump from one page

null 125 Aug 11, 2022
🌐 (Web 3.0) Pastebin built on IPFS, securely served by Distributed Web and Edge Network.

pastebin-ipfs 简体中文 (IPFS Archivists) Still in development, Pull Requests are welcomed. Pastebin built on IPFS, securely served by Distributed Web and

Mayo/IO 158 Jul 11, 2022
Tiny cross-platform webview library for C/C++/Golang. Uses WebKit (Gtk/Cocoa) and Edge (Windows)

webview A tiny cross-platform webview library for C/C++/Golang to build modern cross-platform GUIs. Also, there are Rust bindings, Python bindings, Ni

webview 10.3k Aug 5, 2022
The Bhojpur PEE is a software-as-a-service product used as a Provider's Edge Equipment based on Bhojpur.NET Platform for application delivery.

Bhojpur PEE - Provider's Edge Equipment The Bhojpur PEE is a software-as-a-service product used as a Provider's Edge Equipment based on Bhojpur.NET Pl

Bhojpur Consulting 0 Dec 31, 2021
The backend for @yomo/presencejs ⚡️ made realtime web applications edge-aware by YoMo

yomo-presence-backend The backend for @yomo/presencejs ?? Dev on local 0. Prerequisites Install Go 1. Install YoMo CLI $ go install github.com/yomorun

YoMo 2 Jun 25, 2022
TurtleDex is a decentralized cloud storage platform that radically alters the landscape of cloud storage

TurtleDex is a decentralized cloud storage platform that radically alters the landscape of cloud storage. By leveraging smart contracts, client-side encryption, and sophisticated redundancy (via Reed-Solomon codes), TurtleDex allows users to safely store their data with hosts that they do not know or trust.

TurtleDev 531 May 29, 2021