A batch scheduler for Kubernetes, designed for high-performance workloads, e.g. AI/ML, big data, and HPC

Overview

kube-batch


kube-batch is a batch scheduler for Kubernetes, providing mechanisms for applications that want to run batch jobs on Kubernetes. It builds upon a decade and a half of experience running batch workloads at scale using several systems, combined with best-of-breed ideas and practices from the open source community.

Refer to the tutorial on how to use kube-batch to run batch jobs in Kubernetes.

Overall Architecture

The following figure describes the overall architecture and scope of kube-batch; the out-of-scope parts are handled by other projects, e.g. Volcano.

[Figure: kube-batch overall architecture]

Who uses kube-batch?

As the kube-batch community grows, we'd like to keep track of our users. Please send a PR with your organization name.

Currently officially using kube-batch:

  1. Kubeflow
  2. Volcano
  3. Baidu Inc
  4. TuSimple
  5. MOGU Inc
  6. Vivo

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

Comments
  • Support Equivalence failure cache by template-uuid

    Support Equivalence failure cache by template-uuid

    Is this a BUG REPORT or FEATURE REQUEST?:

    /kind feature

    Description:

    For batch workloads, there are several "similar" tasks in one job; if one task fails to fit on a node, the other tasks will get the same result (except for pod affinity/anti-affinity). In kube-batch, we don't know which tasks are similar, but we can support a customized index, e.g. an annotation carrying a task template uuid.
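
    A minimal sketch of such a failure cache, assuming a hypothetical task-template-uuid annotation; the annotation key, type names, and methods below are illustrative, not an existing kube-batch API:

    package sketch

    import "sync"

    // templateUUIDAnnotation is an assumed annotation key; the issue proposes
    // introducing something like it.
    const templateUUIDAnnotation = "scheduling.x-k8s.io/task-template-uuid"

    // EquivalenceCache remembers, per template uuid, the nodes a task from that
    // template already failed to fit on, so sibling tasks can skip re-running
    // predicates there. Pod affinity/anti-affinity results must not be cached,
    // since they can differ between otherwise similar tasks.
    type EquivalenceCache struct {
    	mu     sync.RWMutex
    	failed map[string]map[string]bool // template uuid -> node name -> failed
    }

    func NewEquivalenceCache() *EquivalenceCache {
    	return &EquivalenceCache{failed: map[string]map[string]bool{}}
    }

    // KnownFailure reports whether a task with this template uuid already
    // failed predicates on the node.
    func (c *EquivalenceCache) KnownFailure(uuid, node string) bool {
    	c.mu.RLock()
    	defer c.mu.RUnlock()
    	return c.failed[uuid][node]
    }

    // RecordFailure marks the node as a known predicate failure for the uuid.
    func (c *EquivalenceCache) RecordFailure(uuid, node string) {
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	if c.failed[uuid] == nil {
    		c.failed[uuid] = map[string]bool{}
    	}
    	c.failed[uuid][node] = true
    }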

    kind/feature priority/important-soon sig/scheduling sig/apps lifecycle/rotten 
    opened by k82cn 25
  • [feature] Add support for podGroup number limits for one queue

    [feature] Add support for podGroup number limits for one queue

    Is this a BUG REPORT or FEATURE REQUEST?:

    /kind feature

    What happened: For BigData and ML scenarios, batch jobs are often submitted into queues; we need the capability to control the number of running pod groups per queue when hardware resources are limited. What you expected to happen: Add a parameter to the Queue API:

    type QueueSpec struct {
    	Weight int32 `json:"weight,omitempty" protobuf:"bytes,1,opt,name=weight"`

    	// PodGroupNumber defines the max number of running podGroups in one queue; defaults to infinite.
    	PodGroupNumber int32 `json:"podGroupNumber,omitempty" protobuf:"varint,2,opt,name=podGroupNumber"`
    }
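
    To illustrate the intended semantics, a minimal admission check against the proposed field; this helper is hypothetical and not part of the API change itself:

    // canAdmit reports whether another pod group may start running in the
    // queue. A zero PodGroupNumber is treated as unset, i.e. infinite, per
    // the proposed default.
    func canAdmit(q QueueSpec, runningPodGroups int32) bool {
    	if q.PodGroupNumber == 0 {
    		return true
    	}
    	return runningPodGroups < q.PodGroupNumber
    }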
    

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    Environment:

    • Kubernetes version (use kubectl version):
    • Cloud provider or hardware configuration:
    • OS (e.g. from /etc/os-release):
    • Kernel (e.g. uname -a):
    • Install tools:
    • Others:
    kind/feature sig/scheduling sig/apps lifecycle/rotten 
    opened by jiaxuanzhou 22
  • Added tutorials for features

    Added tutorials for features


    For now, kube-batch supports the several features listed below, but we only have a basic tutorial and design doc for gang-scheduling. It would be better to have related docs for all features.

    • [ ] Gang-scheduling/Coscheduling (https://github.com/kubernetes/community/pull/2337)
    • [ ] Preemption/Reclaim
    • [ ] Task Priority within Job
    • [ ] Queue
    • [ ] DRF for Job sharing within Queue
    kind/documentation priority/important-soon sig/scheduling lifecycle/rotten 
    opened by k82cn 22
  • Xqueuejob contrib build fix

    Xqueuejob contrib build fix

    What this PR does / why we need it: Fix the separate build for the extended queuejob in contrib. Which issue(s) this PR fixes: N/A. Special notes for your reviewer:

    Release note:

    NONE
    
    approved cncf-cla: yes lgtm size/XXL 
    opened by dmatch01 20
  • Pass conformance test

    Pass conformance test

    Is this a BUG REPORT or FEATURE REQUEST?:

    /kind feature

    Description:

    Test kube-batch against conformance test after https://github.com/kubernetes-sigs/kube-batch/issues/588 and https://github.com/kubernetes-sigs/kube-batch/issues/520

    kind/feature sig/scheduling 
    opened by k82cn 18
  • [Question] How does kube-batch ensure cache consistency when another pod is scheduled by the default kube-scheduler?

    [Question] How does kube-batch ensure cache consistency when another pod is scheduled by the default kube-scheduler?

    /kind feature

    Since a kube-batch session opens and, as far as I can tell, updates its cache snapshot every 1 second (please correct me if I misunderstood), what if some other normal pod is scheduled by the k8s default scheduler during that 1-second window? Will the kube-batch cache still be consistent with the cluster status?

    Or will kube-batch and kube-scheduler schedule pods at the same time?

    Thanks in advance if there is some explanation; a code reference would work as well.
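
    For context, a minimal sketch of the session loop the question refers to, with illustrative names rather than the exact kube-batch API; the comments describe one reading of the model, not a maintainer-confirmed answer:

    package sketch

    // Cache is a stand-in for the scheduler cache, which mirrors cluster
    // state via informers and posts bind decisions back to the apiserver.
    type Cache interface {
    	Snapshot() *Snapshot          // frozen view of pods/nodes at session open
    	Bind(task, node string) error // can race with other schedulers
    }

    type Snapshot struct {
    	FreeSlots map[string]int // node -> free capacity, stand-in for resources
    }

    type Action interface{ Execute(snap *Snapshot, c Cache) }

    // RunSession sketches one scheduling period: open a session (take a
    // snapshot), run the actions against that frozen snapshot, then close the
    // session. If the default scheduler places a pod in the meantime, the
    // conflict surfaces as a failed bind or as a later informer update, and
    // is reconciled in the next session rather than corrupting the cache.
    func RunSession(c Cache, actions []Action) {
    	snap := c.Snapshot()
    	for _, a := range actions {
    		a.Execute(snap, c)
    	}
    }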

    kind/feature lifecycle/rotten 
    opened by qw2208 16
  • [Question] Is multi-tenancy implemented by Namespace + ResourceQuota?

    [Question] Is multi-tenancy implemented by Namespace + ResourceQuota?

    Hi! I've been interested in multi-tenancy, so I was wondering if you could share how multi-tenancy and resource sharing between tenants are implemented. Thanks.

    sig/scheduling sig/apps lifecycle/rotten 
    opened by zoux86 16
  • Added backfill

    Added backfill

    Added a backfill action that allows lower-priority jobs to run when higher-priority jobs cannot run but still hold resources.

    Tested with added unit tests, e2e tests, and a local manual test.
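
    A minimal sketch of the idea, assuming simplified types; the real action works on the session snapshot and respects gang constraints:

    package sketch

    import "sort"

    // Job is a simplified stand-in for a job in the session snapshot.
    type Job struct {
    	Priority int // higher value means higher priority
    	Request  int // resources needed to start the whole job
    }

    // Backfill starts pending lower-priority jobs that fit into the resources
    // left idle by a blocked higher-priority job, trying higher-priority
    // candidates first and never touching what the blocked job already holds.
    func Backfill(pending []Job, idle int) []Job {
    	sort.Slice(pending, func(i, j int) bool {
    		return pending[i].Priority > pending[j].Priority
    	})
    	var started []Job
    	for _, j := range pending {
    		if j.Request <= idle {
    			idle -= j.Request
    			started = append(started, j)
    		}
    	}
    	return started
    }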

    cncf-cla: yes needs-rebase size/XXL lifecycle/rotten ok-to-test 
    opened by pdgetrf 15
  • Add phase/conditions into PodGroup.Status

    Add phase/conditions into PodGroup.Status

    Is this a BUG REPORT or FEATURE REQUEST?:

    /kind feature

    Description:

    Currently, kube-batch only raises an Unschedulable event for its status; it would be better to include phase and conditions in PodGroup.Status to give more detail. A sketch of such a status follows the task list below.

    Tasks

    • [x] Design Doc (https://github.com/kubernetes-sigs/kube-batch/pull/533)
    • [x] Add phase/condition into PodGroup.Status CRD (https://github.com/kubernetes-sigs/kube-batch/pull/525)
    • [x] Updated phase/condition by kube-batch (https://github.com/kubernetes-sigs/kube-batch/pull/560)
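
    A sketch of the status shape this asks for; field and constant names below are illustrative, and the merged API may differ (see the linked design doc and PRs):

    package sketch

    import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    type PodGroupPhase string

    const (
    	PodGroupPending PodGroupPhase = "Pending" // accepted, not yet scheduled
    	PodGroupRunning PodGroupPhase = "Running" // at least minMember tasks run
    	PodGroupUnknown PodGroupPhase = "Unknown" // part of the group is lost
    )

    type PodGroupCondition struct {
    	Type               string
    	Status             string // "True", "False" or "Unknown"
    	LastTransitionTime metav1.Time
    	Reason             string
    	Message            string
    }

    type PodGroupStatus struct {
    	Phase      PodGroupPhase
    	Conditions []PodGroupCondition
    	Running    int32
    	Succeeded  int32
    	Failed     int32
    }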
    help wanted kind/feature priority/important-soon sig/scheduling 
    opened by k82cn 15
  • Observed a panic: "invalid memory address or nil pointer dereference"

    Observed a panic: "invalid memory address or nil pointer dereference"

    Is this a BUG REPORT or FEATURE REQUEST?:

    /kind bug

    What happened:

    I0516 04:37:06.887104       1 scheduler_helper.go:38] Considering Task <default/batch-1-p-w-2-cbtvn> on node <ip-10-0-7-61.ec2.internal>: <cpu 1500.00, memory 3670016000.00, GPU 0.00> vs. <cpu 1890.00, memory 3782279168.00, GPU 0.00>
    I0516 04:37:06.887142       1 scheduler_helper.go:38] Considering Task <default/batch-1-p-w-2-cbtvn> on node <ip-10-0-19-145.ec2.internal>: <cpu 1500.00, memory 3670016000.00, GPU 0.00> vs. <cpu 1890.00, memory 3782279168.00, GPU 0.00>
    E0516 04:37:06.887314       1 scheduler_helper.go:43] Predicates failed for task <default/batch-1-p-w-2-cbtvn> on node <ip-10-0-19-145.ec2.internal>: task <default/batch-1-p-w-2-cbtvn> does not tolerate node <ip-10-0-19-145.ec2.internal> taints
    I0516 04:37:06.887162       1 scheduler_helper.go:38] Considering Task <default/batch-1-p-w-2-cbtvn> on node <ip-10-0-2-157.ec2.internal>: <cpu 1500.00, memory 3670016000.00, GPU 0.00> vs. <cpu 1900.00, memory 3782279168.00, GPU 0.00>
    E0516 04:37:06.887334       1 scheduler_helper.go:43] Predicates failed for task <default/batch-1-p-w-2-cbtvn> on node <ip-10-0-2-157.ec2.internal>: task <default/batch-1-p-w-2-cbtvn> does not tolerate node <ip-10-0-2-157.ec2.internal> taints
    I0516 04:37:06.887195       1 scheduler_helper.go:38] Considering Task <default/batch-1-p-w-2-cbtvn> on node <ip-10-0-21-200.ec2.internal>: <cpu 1500.00, memory 3670016000.00, GPU 0.00> vs. <cpu 1890.00, memory 3782279168.00, GPU 0.00>
    E0516 04:37:06.887385       1 scheduler_helper.go:43] Predicates failed for task <default/batch-1-p-w-2-cbtvn> on node <ip-10-0-21-200.ec2.internal>: task <default/batch-1-p-w-2-cbtvn> does not tolerate node <ip-10-0-21-200.ec2.internal> taints
    E0516 04:37:06.887553       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
    /root/asif/go/src/github.com/kubernetes-sigs/volcano/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
    /root/asif/go/src/github.com/kubernetes-sigs/volcano/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
    /root/asif/go/src/github.com/kubernetes-sigs/volcano/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
    /usr/local/go/src/runtime/asm_amd64.s:522
    /usr/local/go/src/runtime/panic.go:513
    /usr/local/go/src/runtime/panic.go:82
    /usr/local/go/src/runtime/signal_unix.go:390
    /root/asif/go/src/github.com/kubernetes-sigs/volcano/vendor/k8s.io/kubernetes/pkg/scheduler/cache/node_info.go:611
    /root/asif/go/src/github.com/kubernetes-sigs/volcano/pkg/scheduler/plugins/nodeorder/nodeorder.go:66
    /root/asif/go/src/github.com/kubernetes-sigs/volcano/pkg/scheduler/plugins/nodeorder/nodeorder.go:243
    /root/asif/go/src/github.com/kubernetes-sigs/volcano/pkg/scheduler/framework/session_plugins.go:364
    /root/asif/go/src/github.com/kubernetes-sigs/volcano/pkg/scheduler/actions/allocate/allocate.go:148
    /root/asif/go/src/github.com/kubernetes-sigs/volcano/pkg/scheduler/util/scheduler_helper.go:64
    /root/asif/go/src/github.com/kubernetes-sigs/volcano/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:65
    /usr/local/go/src/runtime/asm_amd64.s:1333
    I0516 04:37:06.887857       1 asm_amd64.s:523] Leaving Allocate ...
    panic: runtime error: invalid memory address or nil pointer dereference [recovered]
            panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x168 pc=0x10ad54d]
    
    goroutine 61624 [running]:
    github.com/kubernetes-sigs/volcano/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
            /root/asif/go/src/github.com/kubernetes-sigs/volcano/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x108
    panic(0x126a4a0, 0x21a4e50)
            /usr/local/go/src/runtime/panic.go:513 +0x1b9
    github.com/kubernetes-sigs/volcano/vendor/k8s.io/kubernetes/pkg/scheduler/cache.(*NodeInfo).SetNode(0xc000511790, 0x0, 0x0, 0xc000511790)
            /root/asif/go/src/github.com/kubernetes-sigs/volcano/vendor/k8s.io/kubernetes/pkg/scheduler/cache/node_info.go:611 +0x3d
    github.com/kubernetes-sigs/volcano/pkg/scheduler/plugins/nodeorder.generateNodeMapAndSlice(0xc0005cc4b0, 0xc000486180, 0x1, 0x1, 0x1)
            /root/asif/go/src/github.com/kubernetes-sigs/volcano/pkg/scheduler/plugins/nodeorder/nodeorder.go:66 +0x12b
    github.com/kubernetes-sigs/volcano/pkg/scheduler/plugins/nodeorder.(*nodeOrderPlugin).OnSessionOpen.func1(0xc000429700, 0xc0008f40a0, 0xc0006061c0, 0x9, 0xc0006f9338)
            /root/asif/go/src/github.com/kubernetes-sigs/volcano/pkg/scheduler/plugins/nodeorder/nodeorder.go:243 +0x12e
    github.com/kubernetes-sigs/volcano/pkg/scheduler/framework.(*Session).NodeOrderFn(0xc0003c60e0, 0xc000429700, 0xc0008f40a0, 0xc000826738, 0x4058f1, 0xc00090a058)
            /root/asif/go/src/github.com/kubernetes-sigs/volcano/pkg/scheduler/framework/session_plugins.go:364 +0x1bd
    github.com/kubernetes-sigs/volcano/pkg/scheduler/framework.(*Session).NodeOrderFn-fm(0xc000429700, 0xc0008f40a0, 0x0, 0x0, 0xc000077b00)
            /root/asif/go/src/github.com/kubernetes-sigs/volcano/pkg/scheduler/actions/allocate/allocate.go:148 +0x3e
    github.com/kubernetes-sigs/volcano/pkg/scheduler/util.PrioritizeNodes.func1(0x0)
            /root/asif/go/src/github.com/kubernetes-sigs/volcano/pkg/scheduler/util/scheduler_helper.go:64 +0x7f
    github.com/kubernetes-sigs/volcano/vendor/k8s.io/client-go/util/workqueue.ParallelizeUntil.func1(0xc0006c0720, 0xc00090a000, 0xc000486168, 0xc000780840)
            /root/asif/go/src/github.com/kubernetes-sigs/volcano/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:65 +0x91
    created by github.com/kubernetes-sigs/volcano/vendor/k8s.io/client-go/util/workqueue.ParallelizeUntil
            /root/asif/go/src/github.com/kubernetes-sigs/volcano/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:57 +0x148
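
    The trace points at a nil *v1.Node reaching NodeInfo.SetNode (node_info.go:611) from generateNodeMapAndSlice in the nodeorder plugin. Below is a sketch of the defensive shape of a fix, using local stand-in types; it is illustrative only, not necessarily the fix that was merged:

    package sketch

    // Node and NodeInfo are local stand-ins for the scheduler-cache types
    // appearing in the trace.
    type Node struct{ Name string }

    type NodeInfo struct{ node *Node }

    // SetNode mirrors the frame that panicked: it touches the node without a
    // nil check, so a nil *Node crashes exactly as in the log above.
    func (ni *NodeInfo) SetNode(n *Node) {
    	ni.node = n
    	_ = n.Name // nil pointer dereference when n == nil
    }

    // buildNodeMap shows the guard: skip cache entries whose node object has
    // not been populated yet instead of passing nil into SetNode.
    func buildNodeMap(nodes map[string]*Node) map[string]*NodeInfo {
    	out := make(map[string]*NodeInfo, len(nodes))
    	for name, n := range nodes {
    		if n == nil {
    			continue
    		}
    		ni := &NodeInfo{}
    		ni.SetNode(n)
    		out[name] = ni
    	}
    	return out
    }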
    

    What you expected to happen: no error.

    How to reproduce it (as minimally and precisely as possible): the job reaches the podGroup minMember.

    Anything else we need to know?:

    Environment:

    • Kubernetes version (use kubectl version): Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-eks-d69f1b", GitCommit:"d69f1bf3669bf00b7f4a758e978e0e7a1e3a68f7", GitTreeState:"clean", BuildDate:"2019-02-28T20:26:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

    • Cloud provider or hardware configuration: aws eks

    • OS (e.g. from /etc/os-release): ubuntu

    • Kernel (e.g. uname -a):

    • Install tools: eks

    • Others:

    kind/bug priority/important-soon lifecycle/rotten 
    opened by zhuhesheng 14
  • Fix e2e test: gang scheduling.

    Fix e2e test: gang scheduling.

    Is this a BUG REPORT or FEATURE REQUEST?:

    /kind bug

    What happened: After the revert of the batch job patches in #806, the gang scheduling testcase is failing; it needs investigation and a fix.

    Environment:

    • Kubernetes version (use kubectl version):
    • Cloud provider or hardware configuration:
    • OS (e.g. from /etc/os-release):
    • Kernel (e.g. uname -a):
    • Install tools:
    • Others:
    kind/bug 
    opened by TommyLike 14
  • Update group name to scheduling.x-k8s.io

    Update group name to scheduling.x-k8s.io

    Is this a BUG REPORT or FEATURE REQUEST?:

    /kind feature

    What happened:

    It's better to follow the k8s-sigs practice for group names, e.g. x-k8s.io.

    good first issue help wanted kind/feature 
    opened by k82cn 3
  • Replace cmd/deepcopy-gen with binaries

    Replace cmd/deepcopy-gen with binaries

    Is this a BUG REPORT or FEATURE REQUEST?:

    /kind feature

    What happened:

    Currently, we use cmd/deepcopy-gen to generate deepcopy code; it would be better to replace this vendored source with released binaries of the code generators.

    good first issue help wanted kind/feature lifecycle/stale 
    opened by k82cn 1
  • Update k8s version

    Update k8s version

    Is this a BUG REPORT or FEATURE REQUEST?:

    /kind feature

    What happened:

    Kubernetes 1.12 has reached end of life; it's better to upgrade the supported k8s version to the latest one :)

    good first issue help wanted kind/feature priority/important-soon lifecycle/stale 
    opened by k82cn 9
Releases(v0.5.0)
  • v0.5.0(Aug 5, 2019)

    • master and worker start in order in a job #872
    • use kubernetes default schedule policies #864
    • v0.4 runtime panic #861
    • Helm install fails #857
    • "NewResource/Convert2K8sResource" behavior mismatch #851
    • Phase of podgroup status looks incorrect #846
    • Update API GroupName #815
    • Migrate nodeorder and predicates plugins #814
    • Add Configuration for predicate plugin to enable/disable predicates algorithm #802
    • Consider support multi-containers pod error code handling #776
    • Deserved attr is not correctly calculated in proportion plugin #729
    • Keep backward compatibility for priority class #724
    • Change return value of NodeOrderFn from int to float #708
    • Add type Argument with some common parse function #704
    • Replace NodeOrder with BestNode #699
    • Support set default value to the configuration #695
    • Add resource predicates for tasks #694
    • Pass conformance test #589
    • big PodGroup blocks scheduling issue #514
    Source code(tar.gz)
    Source code(zip)
  • v0.4.2(Mar 31, 2019)

    v0.4.2

    Notes:

    • Added Balanced Resource Priority
    • Support Plugin Arguments
    • Support Weight for Node order plugin

    Issues:

    Source code(tar.gz)
    Source code(zip)
  • v0.4.1(Mar 11, 2019)

    v0.4.1

    Notes:

    • Added NodeOrder Plugin
    • Added Conformance Plugin
    • Removed namespaceAsQueue feature
    • Supported Pod without PodGroup
    • Added performance metrics for scheduling

    Issues:

    Source code(tar.gz)
    Source code(zip)
  • v0.4(Jan 26, 2019)

    v0.4

    Notes:

    • Gang-scheduling/Coscheduling by PDB is deprecated.
    • The scheduler configuration format and related start-up parameters were updated; refer to the example for details.

    Issues:

    Source code(tar.gz)
    Source code(zip)
  • v0.3(Jan 1, 2019)

    Release Notes:

    Docker Image:

    docker pull kubesigs/kube-batch:v0.3
    
    Source code(tar.gz)
    Source code(zip)
  • 0.3-rc1(Dec 17, 2018)

    Features

    | Name | Version | Notes |
    | -- | -- | -- |
    | Gang-scheduling/Coscheduling | Alpha | Doc |
    | Preemption/Reclaim | Experimental | |
    | Task Priority within Job | Experimental | |
    | Queue | Experimental | |
    | DRF for Job sharing within Queue | Experimental | |

    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Oct 16, 2018)

    Features

    | Name | Version | Notes |
    | -- | -- | -- |
    | Gang-scheduling/Coscheduling | Experimental | Doc |
    | Preemption/Reclaim | Experimental | |
    | Task Priority within Job | Experimental | |
    | Queue | Experimental | |
    | DRF for Job sharing within Queue | Experimental | |

    Docker Images

    docker pull kubesigs/kube-batch:v0.2
    
    Source code(tar.gz)
    Source code(zip)
  • v0.2-rc1(Oct 11, 2018)

    Features

    | Name | Version | Notes |
    | -- | -- | -- |
    | Gang-scheduling/Coscheduling | Experimental | Doc |
    | Preemption/Reclaim | Experimental | |
    | Task Priority within Job | Experimental | |
    | Queue | Experimental | |
    | DRF for Job sharing within Queue | Experimental | |

    Docker Images

    docker pull kubesigs/kube-batch:v0.2
    
    Source code(tar.gz)
    Source code(zip)
Owner
Kubernetes SIGs
Org for Kubernetes SIG-related work