Cog: Standard machine learning models

Define your models in a standard format, store them in a central place, run them anywhere.

  • Standard interface for a model. Define all your models with Cog, in a standard format. It's not just the graph – it also includes code, pre-/post-processing, data types, Python dependencies, system dependencies – everything.
  • Store models in a central place. No more hunting for the right model file on S3. Cog models are in one place with a content-addressable ID.
  • Run models anywhere. Cog models run anywhere Docker runs: your laptop, Kubernetes, cloud platforms, batch processing pipelines, etc. And, you can use adapters to convert the models to on-device formats.

Cog does a few things to make your life easier:

  • Automatic Docker image. Define your environment with a simple format, and Cog will generate CPU and GPU Docker images using best practices and efficient base images.
  • Automatic HTTP service. Cog will generate an HTTP service from the definition of your model, so you don't need to write a Flask server and get all the details right.
  • No more CUDA hell. Cog knows which CUDA/cuDNN/PyTorch/TensorFlow/Python combos are compatible and will pick the right versions for you.
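To make the last point concrete, the version-picking can be thought of as a lookup in a compatibility matrix. The sketch below is purely illustrative — the table data and function are hypothetical, not Cog's actual implementation:

```python
# Hypothetical, abbreviated compatibility matrix: maps a torch version
# to the CUDA and cuDNN versions its official wheels are built against.
TORCH_COMPATIBILITY = {
    "1.8.0": {"cuda": "11.1", "cudnn": "8"},
    "1.7.1": {"cuda": "11.0", "cudnn": "8"},
    "1.5.1": {"cuda": "10.2", "cudnn": "7"},
}

def pick_cuda(torch_version: str) -> str:
    """Return the CUDA version to install for a requested torch version."""
    try:
        return TORCH_COMPATIBILITY[torch_version]["cuda"]
    except KeyError:
        raise ValueError(f"No known CUDA version for torch=={torch_version}")

print(pick_cuda("1.8.0"))  # 11.1
```

A real matrix also has to account for the Python version and the base image, but the principle is the same: the user asks for a framework version, and the tool resolves everything beneath it.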

How does it work?

  1. Define how inferences are run on your model:
import cog
import torch
from pathlib import Path

class ColorizationModel(cog.Model):
    def setup(self):
        self.model = torch.load("./weights.pth")

    @cog.input("input", type=Path, help="Grayscale input image")
    def run(self, input):
        # ... pre-processing produces processed_input ...
        output = self.model(processed_input)
        # ... post-processing produces processed_output ...
        return processed_output
  2. Define the environment it runs in with cog.yaml:
model: ""
python_version: "3.8"
python_requirements: "requirements.txt"
system_packages:
  - libgl1-mesa-glx
  - libglib2.0-0
  3. Push it to a repository and build it:
$ cog build
--> Uploading '.' to repository done
--> Building CPU Docker image... done
--> Building GPU Docker image... done
--> Built model b6a2f8a2d2ff

This has:

  • Created a ZIP file containing your code + weights + environment definition, and assigned it a content-addressable SHA256 ID.
  • Pushed this ZIP file up to a central repository so it never gets lost and can be run by anyone.
  • Built two Docker images (one for CPU and one for GPU) that contain the model in a reproducible environment, with the correct versions of Python, your dependencies, CUDA, etc.

Now, anyone who has access to this repository can run inferences on this model:

$ cog infer b6a2f8a2d2ff -i @input.png -o @output.png
--> Pulling GPU Docker image for b6a2f8a2d2ff... done
--> Running inference... done
--> Written output to output.png

It is also just a Docker image, so you can run it as an HTTP service wherever Docker runs:

$ cog show b6a2f8a2d2ff 
Docker image (GPU):
Docker image (CPU):

$ docker run -d -p 8000:8000 --gpus all

$ curl http://localhost:8000/infer -F input=@input.png
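Since it is just an HTTP service, it can be called from any language. Below is a minimal Python client sketch using only the standard library — the /infer route and the "input" field name come from the examples above, and this is an illustration, not an official client:

```python
import urllib.request
import uuid

def encode_multipart(field, filename, data, boundary=None):
    """Build a multipart/form-data body, like curl -F field=@filename."""
    boundary = boundary or uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

def infer(url, field, path):
    """POST a file to the model's HTTP service and return the raw response."""
    with open(path, "rb") as f:
        body, ctype = encode_multipart(field, path, f.read())
    req = urllib.request.Request(url, data=body, headers={"Content-Type": ctype})
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # e.g. the output image bytes

# e.g. infer("http://localhost:8000/infer", "input", "input.png")
```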

Why are we building this?

It's really hard for researchers to ship machine learning models to production. Dockerfiles, pre-/post-processing, API servers, CUDA versions. More often than not the researcher has to sit down with an engineer to get the damn thing deployed.

By defining a standard model, all that complexity is wrapped up behind a standard interface. Other systems in your machine learning stack just need to support Cog models and they'll be able to run anything a researcher dreams up.

At Spotify, we built a system like this for deploying audio deep learning models. We realized this was a repeating pattern: Uber, Coinbase, and others have built similar systems. So, we're making an open source version.

The hard part is defining a model interface that works for everyone. We're releasing this early so we can get feedback on the design and find collaborators. Hit us up if you're interested in using it or want to collaborate with us. We're on Discord or email us at [email protected].


Install

No binaries yet! You'll need Go 1.16, then run:

make install

This installs the cog binary to $GOPATH/bin/cog.

Open issues & pull requests

  • refactor: re-usable config loading, and basic error definitions

    What does this PR do?

    • Standardises config-related functionality in utils/config.go
    • Eliminates some repetitious config loading code
    • Adds a func to walk the parent directories in search of a cog.yaml
    • Adds some basic error creation code in pkg/errors/errors.go

    Next Steps/Questions

    • Is this a reasonable approach?
    • ~~How do we want cog run to work - relative to cwd or relative to the root (see #102)?~~
    • ~~What else needs to change to allow cog usage in subdirectories?~~
    • What's the best way to test this? Unit tests (where?), some addition to the e2e tests?

    (If you like the config-loading (utils/config.go) refactor, we could merge that as a separate PR first to keep PRs small and lean)

    opened by synek 5
  • gnutls_handshake() failed: The TLS connection was non-properly terminated.

    cog build
    ═══╡ Uploading /Users/tekumara/code3/cog-examples/inst-colorization to localhost:8080/examples/inst-colorization
    ⠋ uploading (925 MB, 269.985 MB/s) ═══╡ Building model...
    ═══╡ Received model
    ═══╡ Building cpu image
    ═══╡   * Installing Python prerequisites
    ═══╡   * Installing Python 3.8
    ═══╡   * Installing system packages
    ═══╡   * Installing Python packages
    ═══╡   * Installing Cog
    ═══╡   * Copying code
    ═══╡ Successfully built 507cf5936fd9
    ═══╡ Pushing localhost:5000/inst-colorization:507cf5936fd9 to registry
    ═══╡ Building gpu image
    ═══╡   * Installing Python prerequisites
    ═══╡   * Installing Python 3.8
    ═══╡  ---> Using cache
    ═══╡  ---> 68aac6e4699f
    ═══╡ Step 8/20 : RUN curl | bash && 	git clone "$(pyenv root)"/plugins/pyenv-install-latest && 	pyenv
       │ install-latest "3.8" && 	pyenv global $(pyenv install-latest --print "3.8")
    ═══╡  ---> Running in ae5b74d815ca
    ═══╡   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
    ═══╡                                  Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    100   285  100   285    0     0    198      0  0:00:01  0:00:01 --:--:--   198  0
    ═══╡ Cloning into '/root/.pyenv'...
    ═══╡ Cloning into '/root/.pyenv/plugins/pyenv-doctor'...
    ═══╡ Cloning into '/root/.pyenv/plugins/pyenv-installer'...
    ═══╡ Cloning into '/root/.pyenv/plugins/pyenv-update'...
    ═══╡ fatal: unable to access '': gnutls_handshake() failed: The TLS connection was non-properly terminated.
    ═══╡ Failed to git clone
    ═══╡ Error: Failed to build Docker image: exit status 255

    High CPU usage during the build.

    opened by tekumara 5
  • Only keep :latest tag for local images

    Images are built, tagged with registry and content-addressable tag, pushed, re-tagged as :latest, and the original tag is removed.

    This prevents Cog from using a ton of disk space, but builds are still fast since the cached layers from the last build are still kept around thanks to the :latest tag.

    ref #80 #18

    Signed-off-by: andreasjansson [email protected]

    opened by andreasjansson 4
  • Handle symlinks in uploaded models

    opened by andreasjansson 4
  • Throw error if user tries to upload the same version twice

    opened by andreasjansson 4
  • Update web hooks

    • Run "post-build" hook before "post-build-primary" hook
    • Include image URI and arch

    Signed-off-by: andreasjansson [email protected]

    opened by andreasjansson 3
  • Rename "infer" and "run" to "predict"

    Opened up a fun versioning problem #94. Until that is resolved, cog predict actually calls /infer.

    This will break pushing old models -- there is no backwards compatibility for run(). Predictions on existing models that have been built will keep on working, because we are still calling /infer.

    opened by bfirsh 2
  • Remove container after run

    opened by andreasjansson 2
  • Cog should work in subdirectories

    Cog should search up the file tree for cog.yaml, like Keepsake does.

    For example, if /home/ben/hotdog-detector/cog.yaml exists, then I should be able to run cog predict in /home/ben/hotdog-detector/subdir/ and it should do what I expect.

    There is some nuance here with cog run. Should the working directory be the relative current directory inside the container?

    good first issue help wanted 
    opened by bfirsh 2
  • Tail image build logs on push with -l param

    opened by andreasjansson 2
  • Decide on and define nouns

    They're a bit fuzzy currently. E.g. "image"

    opened by bfirsh 0
  • Add examples to help texts

    opened by bfirsh 0
  • Tidy up `cog predict` output

    It's much more messy than it should be. This is a follow-on from #155.

    opened by bfirsh 0
  • `cog run` working directory should be in the relative directory from cog.yaml

    touch cog.yaml
    mkdir foo
    cd foo
    cog run

    I would expect this to work, but it actually runs a directory up, where cog.yaml is. cog run should be relative to the directory you're in.

    opened by bfirsh 0
  • `cog run` should work even without `cog.yaml`

    cog run python anywhere!

    good first issue help wanted 
    opened by bfirsh 0
  • Support GPU images on CPU

    Cog should be smart enough to see if you don't have a GPU available and not pass the --gpus option to Docker. That way you can run images where gpu: true, but without a GPU attached.

    @bfirsh has a half-finished branch for this.

    opened by bfirsh 0
  • Use Docker Remote API

    In #155, we switched to entirely using the Docker CLI to interact with Docker, instead of using the Golang API. It's remarkably hard to do even basic things (docker run, build an image with buildkit, etc).

    However, this is brittle because it is an unversioned API and we need to concatenate strings to pass input.

    At some point, we should switch to using the Golang API. Buildx might make it easier to do builds.

    opened by bfirsh 0
  • Support importing model from subdirectories

    This works:

    model: ""

    But this doesn't:

    model: "mymodel/"

    It should.

    opened by bfirsh 0
  • Add log message to prediction server when running `setup()`

    So the user doesn't have to do stuff like this

    good first issue help wanted 
    opened by bfirsh 0
  • Do a review of option names before release

    These are an unversioned API, hence hard to change!

    • [ ] Remove any short options that aren't used often
    • [ ] Check for option collisions (e.g. does -i mean both image and input in different commands?)
    opened by bfirsh 0
  • v0.0.4(Jun 25, 2021)


    5757ec6 Add dependabot config 34ab62f Bump a15ec57 Bump from 1.10.0 to 1.12.0 8004c84 Bump from 1.40.0 to 1.40.1 c79643c Bump from 0.0.12 to 0.0.13 0836166 Bump from 0.6.5 to 0.6.6 5bc91cf Bump from 1.5.0 to 1.6.0 8ce9def Bump from 3.7.6 to 3.8.1 ffb5ede Bump from 3.8.1 to 3.8.2 16c0252 Make inline terminal taller dfcd61b Update description 73105c2 Update readme with local stuff 6abeea8 chore: add projections and venv/ to gitignore 7653c4d chore: add tests for loading config from parent dir 429ff9f chore: e2e test for loading config 81560d6 chore: test for loading Config from YAML 0815054 feat: allow config loading from subdirectories b724c51 feat: basic errors.go with ConfigNotFound d9a350c feat: config parent search has max depth (height?) 4b3f94a feat: shell out to docker to get bridge IP d985a6f fix: incorrect binary comparison 092d608 fix: linting errors 0c570e6 fix: use bridge network to connect to redis 164ba6c fix: use server dir for loading config 7253361 wip: simplify the algorithm to find the project root dir

    Source code(tar.gz)
    Source code(zip)
    checksums.txt(250 bytes)
    cog_Darwin_arm64(14.96 MB)
    cog_Darwin_x86_64(15.28 MB)
    cog_Linux_x86_64(14.11 MB)
  • v0.0.3(Jun 8, 2021)

  • v0.0.2(Jun 7, 2021)


    e236101 Add TODO for run archs 5a0b845 Add install instructions 9c12e65 Bring back cog-examples link in readme eddca4a Document architectures in config YAML 408e1c4 Document returning Path and input constraints in Python 10fbd3d Fix example f26ebdc Fix no model set error 9052fe1 Improve cog run usage 27185a9 Make build output more verbose 962260c Put pre-install before installing Cog d0538dd Update docs and add docs for setting up own model 6cc7c66 Use a logwriter to print build logs

    Source code(tar.gz)
    Source code(zip)
    checksums.txt(250 bytes)
    cog_Darwin_arm64(14.96 MB)
    cog_Darwin_x86_64(15.27 MB)
    cog_Linux_x86_64(14.11 MB)
  • v0.0.1-alpha4(May 31, 2021)


    240a262 Add 'cog run' to run commands in environment ba2ce89 Add VSCode settings 38750f3 Add created datetime to images e7f40a7 Add docs c28f9f6 Add images to version API response 38e03cc Add license bca2cd8 Add main-trigger 6b7a73c Add note about collaboration d82fb50 Add some extra error handling on push 0382282 Add start of contributing guidelines 5ea2ce1 Add support for local predictions fccca80 Add todo 6241231 Allow torch versions less or equal to system CUDA version aa11eda Authentication via auth delegate da22dce Background builds cd0ae02 Benchmark on GPU 3697f45 Better server logging b3c1678 Build with Buildx on M1 Macs 0772f16 Caching zip reader/writer for faster builds 17eaf37 Catch error when server dies to avoid spewing logs 6dbccff Clean up push output 3093d85 Close opened files in database c4cd66b Convenience function to make temp dirs in cf66d5f Create docs categories 320ebef Create pkg/util to clean up top-level package d4f1caa Debug hooks 3298693 Default Cog server host set in COG_SERVER env var 2d9076f Disable windows f98e6d2 Document pathlib.Paths for complex inputs/outputs 837a189 Don't use buildkit since it doesn't support CUDA 0f4014c Download individual files from model ed7f389 Expand home dirs on infer input/output 153e982 Explicit formatting of console logs 907cac5 Extract base dockerfile in dockerfile generator 4228569 Extract code for loading config 3112c79 Fix CPU stats e98a9d1 Fix Dockerfile cleanup 9dc8737 Fix concurrency issues e0218e9 Fix concurrent map read issue in queue 4ab8a92 Fix end to end tests 0f40432 Fix inference in readme 7c8056c Fix main trigger b5b5bde Fix pip github install bug 4a37f42 Fix port numbers in readme 19b6ec8 Fix queue test 35d02a5 Fix server looking like it's hanging 8882980 Fix small files unzip bug 00af09a Fix typo in show max f85d471 Fix writing plain text to stdout e0c3e8c GPU build 16d7a6f HTTP profiling 91f3569 Hack in torch 1.7.1 for cuda 10.0 089b440 Handle 404s from server 481801f Handle @files 
in example output be3fe15 Handle no repo being set 5b12810 Handle symlinks in uploaded models 0f8711d Handle todos (#85) 3ee75c8 Hidden flag to enable profiling 71d2c51 Ignore @ on infer output paths 69c7ad9 Input options 73b9194 Install Python before apt packages for better cacheability b687604 Install library file a copied file 7d2a2e1 Less noisy build logging 36812af Lint everything (#84) ecea1a2 Local test command 359801b Lock stream logger to be safe bcec76f Lowercase docker tag e6e97ae Make 'model' in cog.yaml optional 0c2a90f Make cog build logs follow by default a97db92 Make content-addressable ID ignore timestamps 6b2f07c Make examples relative to project root rather than workdir 151ce92 Make server debug messages 59d7fef Merge pull request #10 from replicate/andreas/lock-logger b91d7ec Merge pull request #11 from replicate/andreas/benchmark 5ebe108 Merge pull request #12 from replicate/andreas/delete a8f19c7 Merge pull request #13 from replicate/andreas/install-python-before-apt 962ee4a Merge pull request #14 from replicate/andreas/rename-package-to-model 3f28089 Merge pull request #23 from replicate/andreas/make-styleclip-work 6196272 Merge pull request #24 from replicate/andreas/webhooks 9ba400c Merge pull request #25 from replicate/andreas/default-cog-server 5f90ca7 Merge pull request #27 from replicate/andreas/caching-zip 1c01cca Merge pull request #30 from replicate/andreas/fix-ids-caching bb0cbfe Merge pull request #31 from replicate/andreas/bad-request-message f722feb Merge pull request #33 from replicate/andreas/example-file-output 26a60eb Merge pull request #35 from replicate/andreas/redis-queue-worker 1e4e667 Merge pull request #37 from replicate/andreas/test-command 800a70d Merge pull request #38 from replicate/andreas/fix-workdir 9885c1e Merge pull request #39 from replicate/andreas/cog-ignore 78e03e5 Merge pull request #40 from replicate/andreas/test-stats 58fa778 Merge pull request #41 from replicate/expand-home-dirs 5047e69 Merge pull 
request #42 from replicate/andreas/better-server-logging 9260c06 Merge pull request #43 from replicate/andreas/lowercase-docker-tag 5a7ff25 Merge pull request #44 from replicate/andreas/reliable-stream-queues e09cd7c Merge pull request #45 from replicate/andreas/queue-consumer-id 0d06ae0 Merge pull request #46 from replicate/andreas/auth 92a36c8 Merge pull request #47 from replicate/andreas/worker-record-timing 841a2e2 Merge pull request #48 from replicate/andreas/profiling b0cec3e Merge pull request #49 from replicate/andreas/reduce-build-memory 2aa4789 Merge pull request #50 from replicate/andreas/http-profiling b7b4d96 Merge pull request #51 from replicate/andreas/optimize-build-order bae2c3b Merge pull request #52 from replicate/andreas/save-test-output 50ef087 Merge pull request #53 from replicate/andreas/download-files f63c7e1 Merge pull request #54 from replicate/andreas/input-options 360af65 Merge pull request #55 from replicate/andreas/cleanup-container c765894 Merge pull request #56 from replicate/andreas/clean-test-logs 56d8a71 Merge pull request #57 from replicate/andreas/show-build-auth-errors 84f2586 Merge pull request #59 from replicate/andreas/better-mime-types f4b54b4 Merge pull request #60 from replicate/andreas/examples-relative-to-project-root 29cf14c Merge pull request #61 from replicate/andreas/catch-dead-server 92feb5d Merge pull request #62 from replicate/andreas/fix-cpu-stats 2a5e0e5 Merge pull request #63 from replicate/andreas/buildx 398ae58 Merge pull request #8 from replicate/andreas/denoise-logging 06adc2d Merge pull request #9 from replicate/andreas/console-log 7eff4ab Migrate server to terminal package d7561a7 More robust end-to-end push tests b7c7f0a Move generate_compatibility_matrices into tools/ d93844e Move helper scripts and fix workdir 90e06c4 Move workdir to top level of config f22d4a2 No data case for list e85f8eb Only keep :latest tag for local images bdf01a9 Our modifications to terminal package 3150f6b Pass consumer ID to 
queue worker 2c07ba0 Pin dependencies cc87449 Post-build web hooks 05b54bb Put in hidden temporary subdirectory 8048cc1 Python doc fixes a9c943f Python docs 46d205b Queue test (#75) bf14d3f Record memory usage and run time in model object 159c54a Record timing information in worker e1218c7 Redis queue worker model runner c0e7ef4 Reduce build memory usage 61e9cbb Rejig async logic to fix deadlock bug f7832ee Release binaries instead of tarballs 3c24dac Reliable queues using Redis streams 58b0976 Remove benchmark command 77a37e5 Remove container after run cc41a99 Remove dead code 2414a8e Remove extraneous auth header from download b09ccb2 Remove post_install 3c44200 Rename 'infer' to 'predict' f92d40f Rename 'package' to 'model' 5d69e16 Rename to Model.predict() 81a78b3 Rename dockerTag to imageId 886b846 Rename envvar 1802533 Rename model to version 308a5d0 Rename repo to model ce43029 Retry intermittent test 1ef8ff1 Run tests on save 298ccee Save test output if not defined in config examples 5585edf Show auth errors on cog build f95220d Show error message on bad request 562927f Simplify release filename 9992671 Slightly more real example 3bd1748 Strip multiplexing prefix from test container logs 853a172 Support .cogignore file 58fe4d8 Tail image build logs on push with -l param (#76) ae76fe9 Test before computing ID and uploading package to storage c7f0438 Test on GPU image if no CPU image exists 888d873 Throw error if user tries to upload the same version twice b3f41b6 Tweak readme ed630c1 Unit tests for model testing b2621d8 Update 981c550 Update documentation to reality d86f8b7 Update server docs to reality 08373b9 Update web hooks (#74) 84fc24d Use SSL 7ef7014 Use correct CUDA base image ba6e514 Use curated list of common mime types to extensions d35ef71 Use keepsake's console package for logging cdd59a1 Vendor Waypoint terminal package 049da61 Version tools used in Makefile with go mod aaddeeb Wait to delete dir until all builds are completed 19c1314 Workdir, 
pre- and post install scripts 71ef029 benchmark command 654e3cc delete command a8585ee generate gpu docker image with cog debug 09f05c7 include boot time in benchmark results dc0e9dd silence errors and silence usage in pre run hook f075549 support min and max values

    Source code(tar.gz)
    Source code(zip)
    checksums.txt(250 bytes)
    cog_Darwin_arm64(14.96 MB)
    cog_Darwin_x86_64(15.27 MB)
    cog_Linux_x86_64(14.11 MB)
  • v0.0.1-alpha3(Mar 26, 2021)


    6605ec3 Add validation for model name 1f4bdb9 Merge pull request #7 from replicate/andreas/redesign-repos b2045ea Validate model in subdirectory dfdaee7 Write json output to stdout 7c1160c bring back goreleaser 30b9446 bring back list cb877d4 jsonify result by default 60cc8b0 new float and bool types b14e56b redesign repos

    Source code(tar.gz)
    Source code(zip)
  • v0.0.1-alpha2(Mar 23, 2021)
