A serverless cluster computing system for the Go programming language

Overview

Bigslice

Bigslice is a serverless cluster data processing system for Go. Bigslice exposes a composable API that lets the user express data processing tasks as a series of data transformations that invoke user code. The Bigslice runtime then transparently parallelizes and distributes the work, using the Bigmachine library to create an ad hoc cluster on a cloud provider.

Developing Bigslice

Bigslice uses Go modules to capture its dependencies; no tooling other than the base Go install is required.

$ git clone https://github.com/grailbio/bigslice
$ cd bigslice
$ GO111MODULE=on go test

If tests fail with "socket: too many open files" errors, try increasing the maximum number of open files.

$ ulimit -n 2000
Issues
  • exec: add some form of log.Flush.Sync mechanism

    exec: add some form of log.Flush.Sync mechanism

    As called out in PR #21, it appears that the last few lines of log output can be lost when an error is reported to the master. The exact reason is not clear, but adding a sleep in worker.Init helps. Investigate this and provide a more robust mechanism to ensure that log lines are not lost.

    opened by cosnicolaou 15
  • Filter and counts, maybe consider mapreduce counters

    Filter and counts, maybe consider mapreduce counters

    I've attempted to use Filter in two real-world examples now; in one I had to compromise on my stats reporting, and in the other I had to back out of using it. I need to report the number of original inputs and the number remaining post-filtering, which doesn't seem to be easily possible. I can use Scan or some other side-effecting operation, but then I need to make the side effect work across multiple machines, which is annoying and expensive for something as simple as a count. Google's mapreduce offered counters that could be used for a task like this; that mechanism, though, was overly general and extensively abused, so I'm not necessarily advocating for it. Given bigslice's ability to carry results through the graph via the slices themselves, maybe it makes sense to add stats to the core slice structure to report on progress through the graph - the simplest being the number of invocations per operation, with more detailed/extensive ones also possible.

    opened by cosnicolaou 12
  • Checksum mismatch on github.com/grailbio/testutil@v0.0.1

    Checksum mismatch on github.com/grailbio/testutil@v0.0.1

    Steps to reproduce:

    $ git clone https://github.com/grailbio/bigslice
    $ cd bigslice
    $ go test

    Output of go test:

    go: downloading github.com/grailbio/testutil v0.0.1
    verifying github.com/grailbio/testutil@v0.0.1: checksum mismatch
    	downloaded: h1:s6IeIZsZHQZXcUnmEKqz22cSn05QsTH5AwHnrxMRKEs=
    	go.sum:     h1:RzGxJO5krJooQGu7pOOgA7RdrwF9L+PTGEIuO3O/M0g=
    
    SECURITY ERROR
    This download does NOT match an earlier download recorded in go.sum.
    The bits may have been replaced on the origin server, or an attacker may
    have intercepted the download attempt.
    
    For more information, see 'go help module-auth'.
    

    Go version:

    go version go1.13 linux/amd64
    

    go env:

    GO111MODULE=""
    GOARCH="amd64"
    GOBIN=""
    GOCACHE="/home/ps/.cache/go-build"
    GOENV="/home/ps/.config/go/env"
    GOEXE=""
    GOFLAGS=""
    GOHOSTARCH="amd64"
    GOHOSTOS="linux"
    GONOPROXY=""
    GONOSUMDB=""
    GOOS="linux"
    GOPATH="/home/ps/go"
    GOPRIVATE=""
    GOPROXY="https://proxy.golang.org,direct"
    GOROOT="/usr/local/go"
    GOSUMDB="sum.golang.org"
    GOTMPDIR=""
    GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
    GCCGO="gccgo"
    AR="ar"
    CC="gcc"
    CXX="g++"
    CGO_ENABLED="1"
    GOMOD="/home/ps/Code/bigslice/go.mod"
    CGO_CFLAGS="-g -O2"
    CGO_CPPFLAGS=""
    CGO_CXXFLAGS="-g -O2"
    CGO_FFLAGS="-g -O2"
    CGO_LDFLAGS="-g -O2"
    PKG_CONFIG="pkg-config"
    GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build504681712=/tmp/go-build -gno-record-gcc-switches"
    
    opened by psampaz 12
  • Track dependencies correctly given task loss

    Track dependencies correctly given task loss

    Track dependencies correctly given task loss. Fixes the following scenario:

    1. Task A depends on B and C. Nothing has yet been evaluated. A's dependency count is 2.
    2. B is successfully evaluated. A's dependency count goes to 1.
    3. C is successfully evaluated. B is concurrently lost. A's dependency count goes to 0.
    4. A is enqueued, sees that B is lost. Because it is still in the dependency set, A's count is not updated.
    5. B completes. A's count goes to -1, so it is not enqueued.
    6. There is nothing to do, and nothing pending. Evaluation returns without A, the root, ever having been evaluated successfully.

    Fix this by clearing dependency information before reconstructing it in Enqueue.

    Add a stress test that abuses evaluation with lost tasks. The test reliably produces the problem scenario.

    opened by jcharum 10
  • a question on sharding..

    a question on sharding..

    I have a large input set in a dynamodb table that I'd like to dump to a sharded set of output files that support lookups (grailbio/recordio is the underlying format). I'd like to mod-shard the original keys so that, given a key, I know which file to look it up in. I have writer funcs set up to match the number of input shards, and ideally I'd like bigslice to invoke the write func for shard n with all and only the mod-sharded values of my input keys. Is there any way to do this? I don't see any guaranteed way of doing so. My alternative, of course, is to have each writer func able to write all of the sharded output files, but that seems contrary to the writer-func model.

    I should say that Reshuffle doesn't appear to work as I would expect it to, which would be to drive all keys with the same value to the same shard.

    Thanks!

    opened by cosnicolaou 8
  • cmd/bigslice: extract build and run into a package to allow for reuse.

    cmd/bigslice: extract build and run into a package to allow for reuse.

    This is a prototype for one means of more easily integrating bigslice into user-specific environments.

    This PR extracts the bigslice command's 'build' and 'run' functions into a separate package to allow them to be reused in other environments. The intent is to allow user sites to customize their setup by providing their own bigslice command, with a site-specific setup-ec2 command, while still reusing the rest of the bigslice command's functionality.

    opened by cosnicolaou 8
  • support for user-defined metrics

    support for user-defined metrics

    This change implements user-defined metrics, following the design proposed in #18.

    It introduces a new package, github.com/grailbio/bigslice/metrics, with which users can declare metrics. These metrics can then be used during evaluation, and inspected upon completion.

    The code anticipates other modes of aggregation, e.g., live sampling, aggregating by task/phase, etc.

    Example:

    var filtered = metrics.NewCounter()
    
    var myBigsliceFunc = bigslice.Func(func(...) bigslice.Slice {
        slice = ...
        return bigslice.Filter(slice, func(ctx context.Context, data myDataType) bool {
            if keep(data) {
                return true
            }
            scope := metrics.ContextScope(ctx)
            filtered.Incr(scope, 1)
            return false
        })
    })
    
    func main() {
        sess = ...
    res, err := sess.Run(ctx, myBigsliceFunc, ...)
        ...
        log.Print("filtered items: ", filtered.Value(res.Scope()))
    }
    
    opened by mariusae 6
  • ReaderFunc for multiple files..

    ReaderFunc for multiple files..

    I would expect a ReaderFunc created with 2 shards, one for each of two files, to read the two files concurrently. However, it seems that they are read sequentially, at least when run in local mode.

    slice = bigslice.ReaderFunc(len(files), NewReaderFunc(files))
    ....
    

    where NewReaderFunc looks like:

    func NewReaderFunc(filenames []string) interface{} {
    	type state struct {}
    	return func(shard int, state *state, entities []wikientities.Entity) (n int, err error) {
                    ....
    		if state.Scanner == nil {
    		....
    			fmt.Printf("processing: shard: %v file: %v\n", shard, filenames[shard])
    
    		}
    .....
    

    Is this expected?

    opened by cosnicolaou 6
  • Fix go.sum

    Fix go.sum

    fixes #2

    What I did was the following:

    git clone https://github.com/grailbio/bigslice.git
    cd bigslice
    rm go.sum
    go mod tidy
    

    using

    go version

    go version go1.13 linux/amd64

    and

    GOPROXY="https://proxy.golang.org,direct" GOSUMDB="sum.golang.org"

    opened by psampaz 6
  • Provide user-controlled sharding

    Provide user-controlled sharding

    As outlined in #16, it's often useful to extend fine-grained control of sharding to the user. This can be worked around by wrapping integers with an identity hash function, but that seems less than ideal. It might be useful to provide this functionality as part of bigslice.Reshuffle.

    opened by mariusae 5
  • Free ephemeral resources used for scanning

    Free ephemeral resources used for scanning

    When we scan results, we get a slice of *openerAtReaders, one for each result task. We read from each reader sequentially. When we are done with a given reader, we retain some of the resources used when reading from it, most notably gob decode buffers. As we scan, we accumulate these defunct buffers, and our memory footprint grows.

    This happens for two reasons:

    1. We pop off the slice of readers to iterate, i.e. q = q[1:]. However, we do not clear the backing array reference to the reader.
    2. When we close the reader, we don't clear the sliceioReader, which in turn holds the gob decoder.

    Fixing either would eliminate the specific scan leak. Fix both, as I think it's the correct behavior.

    opened by jcharum 3
  • build(deps): bump nokogiri from 1.11.2 to 1.13.6 in /docs

    build(deps): bump nokogiri from 1.11.2 to 1.13.6 in /docs

    Bumps nokogiri from 1.11.2 to 1.13.6.

    Release notes

    Sourced from nokogiri's releases.

    1.13.6 / 2022-05-08

    Security

    • [CRuby] Address CVE-2022-29181, improper handling of unexpected data types, related to untrusted inputs to the SAX parsers. See GHSA-xh29-r2w5-wx8m for more information.

    Improvements

    • {HTML4,XML}::SAX::{Parser,ParserContext} constructor methods now raise TypeError instead of segfaulting when an incorrect type is passed.

    sha256:

    58417c7c10f78cd1c0e1984f81538300d4ea98962cfd3f46f725efee48f9757a  nokogiri-1.13.6-aarch64-linux.gem
    a2b04ec3b1b73ecc6fac619b41e9fdc70808b7a653b96ec97d04b7a23f158dbc  nokogiri-1.13.6-arm64-darwin.gem
    4437f2d03bc7da8854f4aaae89e24a98cf5c8b0212ae2bc003af7e65c7ee8e27  nokogiri-1.13.6-java.gem
    99d3e212bbd5e80aa602a1f52d583e4f6e917ec594e6aa580f6aacc253eff984  nokogiri-1.13.6-x64-mingw-ucrt.gem
    a04f6154a75b6ed4fe2d0d0ff3ac02f094b54e150b50330448f834fa5726fbba  nokogiri-1.13.6-x64-mingw32.gem
    a13f30c2863ef9e5e11240dd6d69ef114229d471018b44f2ff60bab28327de4d  nokogiri-1.13.6-x86-linux.gem
    63a2ca2f7a4f6bd9126e1695037f66c8eb72ed1e1740ef162b4480c57cc17dc6  nokogiri-1.13.6-x86-mingw32.gem
    2b266e0eb18030763277b30dc3d64337f440191e2bd157027441ac56a59d9dfe  nokogiri-1.13.6-x86_64-darwin.gem
    3fa37b0c3b5744af45f9da3e4ae9cbd89480b35e12ae36b5e87a0452e0b38335  nokogiri-1.13.6-x86_64-linux.gem
    b1512fdc0aba446e1ee30de3e0671518eb363e75fab53486e99e8891d44b8587  nokogiri-1.13.6.gem
    

    1.13.5 / 2022-05-04

    Security

    Dependencies

    • [CRuby] Vendored libxml2 is updated from v2.9.13 to v2.9.14.

    Improvements

    • [CRuby] The libxml2 HTML4 parser no longer exhibits quadratic behavior when recovering some broken markup related to start-of-tag and bare < characters.

    Changed

    • [CRuby] The libxml2 HTML4 parser in v2.9.14 recovers from some broken markup differently. Notably, the XML CDATA escape sequence <![CDATA[ and incorrectly-opened comments will result in HTML text nodes starting with &lt;! instead of skipping the invalid tag. This behavior is a direct result of the quadratic-behavior fix noted above. The behavior of downstream sanitizers relying on this behavior will also change. Some tests describing the changed behavior are in test/html4/test_comments.rb.

    ... (truncated)

    Changelog

    Sourced from nokogiri's changelog.

    1.13.6 / 2022-05-08

    Security

    • [CRuby] Address CVE-2022-29181, improper handling of unexpected data types, related to untrusted inputs to the SAX parsers. See GHSA-xh29-r2w5-wx8m for more information.

    Improvements

    • {HTML4,XML}::SAX::{Parser,ParserContext} constructor methods now raise TypeError instead of segfaulting when an incorrect type is passed.

    1.13.5 / 2022-05-04

    Security

    Dependencies

    • [CRuby] Vendored libxml2 is updated from v2.9.13 to v2.9.14.

    Improvements

    • [CRuby] The libxml2 HTML parser no longer exhibits quadratic behavior when recovering some broken markup related to start-of-tag and bare < characters.

    Changed

    • [CRuby] The libxml2 HTML parser in v2.9.14 recovers from some broken markup differently. Notably, the XML CDATA escape sequence <![CDATA[ and incorrectly-opened comments will result in HTML text nodes starting with &lt;! instead of skipping the invalid tag. This behavior is a direct result of the quadratic-behavior fix noted above. The behavior of downstream sanitizers relying on this behavior will also change. Some tests describing the changed behavior are in test/html4/test_comments.rb.

    1.13.4 / 2022-04-11

    Security

    Dependencies

    • [CRuby] Vendored zlib is updated from 1.2.11 to 1.2.12. (See LICENSE-DEPENDENCIES.md for details on which packages redistribute this library.)
    • [JRuby] Vendored Xerces-J (xerces:xercesImpl) is updated from 2.12.0 to 2.12.2.
    • [JRuby] Vendored nekohtml (org.cyberneko.html) is updated from a fork of 1.9.21 to 1.9.22.noko2. This fork is now publicly developed at https://github.com/sparklemotion/nekohtml

    ... (truncated)

    Commits
    • b7817b6 version bump to v1.13.6
    • 61b1a39 Merge pull request #2530 from sparklemotion/flavorjones-check-parse-memory-ty...
    • 83cc451 fix: {HTML4,XML}::SAX::{Parser,ParserContext} check arg types
    • 22c9e5b version bump to v1.13.5
    • 6155881 doc: update CHANGELOG for v1.13.5
    • c519a47 Merge pull request #2527 from sparklemotion/2525-update-libxml-2_9_14-v1_13_x
    • 66c2886 dep: update libxml2 to v2.9.14
    • b7c4cc3 test: unpend the LIBXML_LOADED_VERSION test on freebsd
    • eac7934 dev: require yaml
    • f3521ba style(rubocop): pend Style/FetchEnvVar for now
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies ruby 
    opened by dependabot[bot] 0
  • build(deps): bump addressable from 2.7.0 to 2.8.0 in /docs

    build(deps): bump addressable from 2.7.0 to 2.8.0 in /docs

    Bumps addressable from 2.7.0 to 2.8.0.

    Changelog

    Sourced from addressable's changelog.

    Addressable 2.8.0

    • fixes ReDoS vulnerability in Addressable::Template#match
    • no longer replaces + with spaces in queries for non-http(s) schemes
    • fixed encoding ipv6 literals
    • the :compacted flag for normalized_query now dedupes parameters
    • fix broken escape_component alias
    • dropping support for Ruby 2.0 and 2.1
    • adding Ruby 3.0 compatibility for development tasks
    • drop support for rack-mount and remove Addressable::Template#generate
    • performance improvements
    • switch CI/CD to GitHub Actions
    Commits
    • 6469a23 Updating gemspec again
    • 2433638 Merge branch 'main' of github.com:sporkmonger/addressable into main
    • e9c76b8 Merge pull request #378 from ashmaroli/flat-map
    • 56c5cf7 Update the gemspec
    • c1fed1c Require a non-vulnerable rake
    • 0d8a312 Adding note about ReDoS vulnerability
    • 89c7613 Merge branch 'template-regexp' into main
    • cf8884f Note about alias fix
    • bb03f71 Merge pull request #371 from charleystran/add_missing_encode_component_doc_entry
    • 6d1d809 Adding note about :compacted normalization
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Fail typechecking for functions passed to `bigslice.Func` that take `func` and channel arguments

    Fail typechecking for functions passed to `bigslice.Func` that take `func` and channel arguments

    Right now, these aren't caught until we try to gob-encode. Consider failing faster, at type-checking time, to avoid confusion: such functions appear to work under local execution and only fail once the arguments must be encoded for distribution.

    good first issue 
    opened by jcharum 0
  • Proposal (WIP): PushReader, a simpler reader API

    Proposal (WIP): PushReader, a simpler reader API

    This is a concept for making data reading simpler than ReaderFunc by allowing "normal" Go state (in a closure).

    slice := bigslice.PushReader(Nshard, func(shard int, push func(string, int)) error {
    	fuzzer := fuzz.NewWithSeed(1)
    	var row struct {
    		string
    		int
    	}
    	for i := 0; i < N; i++ {
    		fuzzer.Fuzz(&row)
    		push(row.string, row.int)
    	}
    	return nil
    })
    

    The performance cost of this may be significant; I haven't measured yet. I wanted to start by having a concrete example of what user code will look like.

    opened by josh-newman 8
  • Support different instance types per computation

    Support different instance types per computation

    Support different instance types per computation. It's sometimes the case that users want different instance types for different invocations. For example, one computation may require instances with GPUs. It would be useful to be able to specify instance type per computation.

    We could potentially do this by allowing a customization of Func with a different configuration, plumbed through Bigmachine.

    enhancement 
    opened by jcharum 0
Owner
GRAIL
Source code created or maintained by GRAIL, Inc.