Gorgonia is a library that helps facilitate machine learning in Go.

Overview

Gorgonia is a library that helps facilitate machine learning in Go. Write and evaluate mathematical equations involving multidimensional arrays easily. If this sounds like Theano or TensorFlow, it's because the idea is quite similar. Specifically, the library is pretty low-level, like Theano, but has higher goals like Tensorflow.

Gorgonia:

  • Can perform automatic differentiation
  • Can perform symbolic differentiation
  • Can perform gradient descent optimizations
  • Can perform numerical stabilization
  • Provides a number of convenience functions to help create neural networks
  • Is fairly quick (comparable to Theano and Tensorflow's speed)
  • Supports CUDA/GPGPU computation (OpenCL not yet supported, send a pull request)
  • Will support distributed computing

Goals

The primary goal for Gorgonia is to be a highly performant machine learning/graph computation-based library that can scale across multiple machines. It should bring the appeal of Go (simple compilation and deployment process) to the ML world. It's a long way from that goal currently; however, the first steps are already in place.

The secondary goal for Gorgonia is to provide a platform for exploration of non-standard deep-learning and neural network related things. This includes things like neo-Hebbian learning, corner-cutting algorithms, evolutionary algorithms and the like.

Why Use Gorgonia?

The main reason to use Gorgonia is developer comfort. If you're already using a Go stack extensively, you now have the ability to create production-ready machine learning systems in an environment that you are already familiar and comfortable with.

ML/AI at large is usually split into two stages: the experimental stage, where one builds various models, tests and retests; and the deployment stage, where a tested model is deployed into production. This necessitates different roles, like data scientist and data engineer.

Typically the two phases have different tools: Python (PyTorch, etc.) is commonly used for the experimental stage, and then the model is rewritten in a more performant language like C++ (using dlib, mlpack, etc.). Of course, nowadays the gap is closing, and people frequently share tools between the two phases. Tensorflow is one such tool that bridges the gap.

Gorgonia aims to do the same, but for the Go environment. Gorgonia is currently fairly performant - its speeds are comparable to PyTorch's and Tensorflow's CPU implementations. GPU implementations are a bit finicky to compare due to the heavy cgo tax, but rest assured that this is an area of active improvement.

Getting started

Installation

The package is go-gettable: go get -u gorgonia.org/gorgonia.

Gorgonia is compatible with go modules.

Documentation

Up-to-date documentation, references and tutorials are present on the official Gorgonia website at https://gorgonia.org.

Keeping Updated

Gorgonia's project has a Slack channel on gopherslack, as well as a Twitter account. Official updates and announcements will be posted to those two sites.

Usage

Gorgonia works by creating a computation graph and then executing it. Think of it as a programming language, but one limited to mathematical functions, with no branching capability (no if/then or loops). In fact, this is the dominant paradigm that the user should get used to thinking in. The computation graph is an AST.

Microsoft's CNTK, with its BrainScript, is perhaps the best at exemplifying the idea that building a computation graph and running it are different things, and that the user should be in different modes of thought when going about them.

Whilst Gorgonia's implementation doesn't enforce the separation of thought as far as CNTK's BrainScript does, the syntax does help a little bit.

Here's an example - say you want to define a math expression z = x + y. Here's how you'd do it:

package gorgonia_test

import (
	"fmt"
	"log"

	. "gorgonia.org/gorgonia"
)

// Basic example of representing mathematical equations as graphs.
//
// In this example, we want to represent the following equation
//		z = x + y
func Example_basic() {
	g := NewGraph()

	var x, y, z *Node
	var err error

	// define the expression
	x = NewScalar(g, Float64, WithName("x"))
	y = NewScalar(g, Float64, WithName("y"))
	if z, err = Add(x, y); err != nil {
		log.Fatal(err)
	}

	// create a VM to run the program on
	machine := NewTapeMachine(g)
	defer machine.Close()

	// set initial values then run
	Let(x, 2.0)
	Let(y, 2.5)
	if err = machine.RunAll(); err != nil {
		log.Fatal(err)
	}

	fmt.Printf("%v", z.Value())
	// Output: 4.5
}

You might note that it's a little more verbose than other packages of a similar nature. For example, instead of compiling to a callable function, Gorgonia specifically compiles into a *program which requires a *TapeMachine to run. It also requires a manual Let(...) call.

The author would like to contend that this is a Good Thing - to shift one's thinking to machine-based thinking. It helps a lot in figuring out where things might go wrong.
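
One practical upside of this model is that the graph is compiled once and can then be evaluated repeatedly with different inputs. The sketch below is an assumption built from the example above, using only the calls shown there plus the VM's Reset method; it re-binds values with Let and reruns the same program:

package main

import (
	"fmt"
	"log"

	. "gorgonia.org/gorgonia"
)

func main() {
	g := NewGraph()

	// define z = x + y once
	x := NewScalar(g, Float64, WithName("x"))
	y := NewScalar(g, Float64, WithName("y"))
	z, err := Add(x, y)
	if err != nil {
		log.Fatal(err)
	}

	machine := NewTapeMachine(g)
	defer machine.Close()

	// evaluate the same compiled program for several inputs
	for _, in := range [][2]float64{{2, 2.5}, {3, 4}} {
		Let(x, in[0])
		Let(y, in[1])
		if err := machine.RunAll(); err != nil {
			log.Fatal(err)
		}
		fmt.Println(z.Value()) // 4.5, then 7
		machine.Reset()        // rewind the tape so the program can be run again
	}
}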

Additionally, there is no support for branching - that is to say, there are no conditionals (if/else) or loops. The aim is not to build a Turing-complete computer.


More examples are present in the example subfolder of the project, and step-by-step tutorials are present on the main website.

Using CUDA

Gorgonia comes with CUDA support out of the box. Please see the reference documentation about how CUDA works on the gorgonia.org website, or jump to the tutorial.

About Gorgonia's development process

Versioning

We use semver 2.0.0 for our versioning. Before 1.0, Gorgonia's APIs are expected to change quite a bit. API is defined by the exported functions, variables and methods. For the developers' sanity, there are minor differences to semver that we will apply prior to version 1.0. They are enumerated below:

  • The MINOR number will be incremented every time there is a deleterious break in API. This means any deletion, or any change in function signature or interface methods will lead to a change in MINOR number.
  • Additive changes will NOT change the MINOR version number prior to version 1.0. This means that if new functionality were added that does not break the way you use Gorgonia, there will not be an increment in the MINOR version. There will be an increment in the PATCH version.

API Stability

Gorgonia's API is, as of right now, not considered stable. It will be stable from version 1.0 onwards.

Go Version Support

Gorgonia supports 2 versions below the current master branch of Go. This means Gorgonia will support the current released version of Go and up to 4 previous versions - provided nothing breaks. Where possible, a shim will be provided (for things like new sort APIs or math/bits, which came out in Go 1.9).

The current version of Go is 1.13.1. The earliest version Gorgonia supports is Go 1.11.x, but Gonum supports only 1.12+. Therefore, the minimum Go version to run the master branch is Go 1.12+.
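
For illustration only - the file and function below are hypothetical, not Gorgonia's actual shims - such a shim is typically a build-tag-gated fallback that older toolchains compile in place of the newer standard-library API:

// +build !go1.9

package gorgonia

// trailingZeros64 is a hypothetical pure-Go stand-in for what
// math/bits.TrailingZeros64 provides on Go 1.9 and later.
func trailingZeros64(x uint64) (n int) {
	if x == 0 {
		return 64
	}
	for x&1 == 0 {
		x >>= 1
		n++
	}
	return n
}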

Hardware and OS supported

Gorgonia runs on:

  • linux/AMD64
  • linux/ARM7
  • linux/ARM64
  • win32/AMD64
  • darwin/AMD64
  • freeBSD/AMD64

If you have tested Gorgonia on another platform, please update this list.

Hardware acceleration

Gorgonia uses some pure assembler instructions to accelerate some mathematical operations. Unfortunately, only amd64 is supported.

Contributing

Obviously, since you are most probably reading this on GitHub, GitHub will form the major part of the workflow for contributing to this package.

See also: CONTRIBUTING.md

Contributors and Significant Contributors

All contributions are welcome. However, there is a new class of contributor, called Significant Contributors.

A Significant Contributor is one who has shown deep understanding of how the library works and/or its environs. Here are examples of what constitutes a Significant Contribution:

  • Wrote significant amounts of documentation pertaining to why/the mechanics of particular functions/methods and how the different parts affect one another
  • Wrote code, and tests around the more intricately connected parts of Gorgonia
  • Wrote code and tests, and have at least 5 pull requests accepted
  • Provided expert analysis on parts of the package (for example, you may be a floating point operations expert who optimized one function)
  • Answered at least 10 support questions.

Significant Contributors list will be updated once a month (if anyone even uses Gorgonia that is).

How To Get Support

The best way to get support right now is to open a ticket on GitHub.

Frequently Asked Questions

Why are there seemingly random runtime.GC() calls in the tests?

The answer to this is simple - the design of the package uses CUDA in a particular way: specifically, a CUDA device and context is tied to a VM, instead of at the package level. This means for every VM created, a different CUDA context is created per device per VM. This way all the operations will play nicely with other applications that may be using CUDA (this needs to be stress-tested, however).

The CUDA contexts are only destroyed when the VM gets garbage collected (with the help of a finalizer function). In the tests, about 100 VMs get created, and garbage collection for the most part can be considered random. This leads to cases where the GPU runs out of memory as there are too many contexts being used.

Therefore, at the end of any test that may use the GPU, a runtime.GC() call is made to force garbage collection and free GPU memory.
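
A minimal sketch of that pattern (the test body itself is hypothetical; the point is the deferred runtime.GC() at the end of a GPU-using test):

package gorgonia_test

import (
	"runtime"
	"testing"

	. "gorgonia.org/gorgonia"
)

// TestSomethingOnGPU is a hypothetical test illustrating the pattern:
// force a garbage collection at the end so that CUDA contexts held by
// dead VMs are released before the next test runs.
func TestSomethingOnGPU(t *testing.T) {
	defer runtime.GC()

	g := NewGraph()
	x := NewScalar(g, Float64, WithName("x"))
	y := NewScalar(g, Float64, WithName("y"))
	z := Must(Add(x, y))

	m := NewTapeMachine(g)
	defer m.Close()

	Let(x, 1.0)
	Let(y, 2.0)
	if err := m.RunAll(); err != nil {
		t.Fatal(err)
	}
	t.Log(z.Value())
}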

In production, one is unlikely to start that many VMs, so it's not really a problem. If it is, open a ticket on GitHub, and we'll look into adding a Finish() method for the VMs.

Licence

Gorgonia is licenced under a variant of Apache 2.0. It's for all intents and purposes the same as the Apache 2.0 Licence, with the exception that you may not commercially profit directly from the package unless you're a Significant Contributor (for example, by providing commercial support for the package). It's perfectly fine to profit directly from a derivative of Gorgonia (for example, if you use Gorgonia as a library in your product).

Everyone is still allowed to use Gorgonia for commercial purposes (example: using it in a software for your business).

Dependencies

There are very few dependencies that Gorgonia uses - and they're all pretty stable, so as of now there isn't a need for vendoring tools. This is the list of external packages that Gorgonia calls, ranked in order of this package's reliance on them (subpackages are omitted):

Package | Used For | Vitality | Notes | Licence
gonum/graph | Sorting *ExprGraph | Vital. Removal means Gorgonia will not work | Development of Gorgonia is committed to keeping up with the most updated version | gonum license (MIT/BSD-like)
gonum/blas | Tensor subpackage linear algebra operations | Vital. Removal means Gorgonia will not work | Development of Gorgonia is committed to keeping up with the most updated version | gonum license (MIT/BSD-like)
cu | CUDA drivers | Needed for CUDA operations | Same maintainer as Gorgonia | MIT/BSD-like
math32 | float32 operations | Can be replaced by float32(math.XXX(float64(x))) | Same maintainer as Gorgonia, same API as the built-in math package | MIT/BSD-like
hm | Type system for Gorgonia | Gorgonia's graphs are pretty tightly coupled with the type system | Same maintainer as Gorgonia | MIT/BSD-like
vecf64 | optimized []float64 operations | Can be generated in the tensor/genlib package. However, plenty of optimizations have been made/will be made | Same maintainer as Gorgonia | MIT/BSD-like
vecf32 | optimized []float32 operations | Can be generated in the tensor/genlib package. However, plenty of optimizations have been made/will be made | Same maintainer as Gorgonia | MIT/BSD-like
set | Various set operations | Can be easily replaced | Stable API for the past 1 year | set licence (MIT/BSD-like)
gographviz | Used for printing graphs | Graph printing is only vital to debugging. Gorgonia can survive without, but with a major (but arguably nonvital) feature loss | Last update 12th April 2017 | gographviz licence (Apache 2.0)
rng | Used to implement helper functions to generate initial weights | Can be replaced fairly easily. Gorgonia can do without the convenience functions too | | rng licence (Apache 2.0)
errors | Error wrapping | Gorgonia won't die without it. In fact Gorgonia has also used goerrors/errors in the past | Stable API for the past 6 months | errors licence (MIT/BSD-like)
gonum/mat | Compatibility between Tensor and Gonum's Matrix | | Development of Gorgonia is committed to keeping up with the most updated version | gonum license (MIT/BSD-like)
testify/assert | Testing | Can do without but will be a massive pain in the ass to test | | testify licence (MIT/BSD-like)

Various Other Copyright Notices

These are the packages and libraries which inspired Gorgonia, and from which code was adapted, in the process of writing it (the Go packages that were used were already declared above):

Source | How it's Used | Licence
Numpy | Inspired large portions. Directly adapted algorithms for a few methods (explicitly labelled in the docs) | MIT/BSD-like. Numpy Licence
Theano | Inspired large portions. (Unsure: number of directly adapted algorithms) | MIT/BSD-like. Theano's licence
Caffe | im2col and col2im directly taken from Caffe. Convolution algorithms inspired by the original Caffe methods | Caffe Licence
Issues
  • Masked tensor

    As promised, set about trying to implement basic masked array functionality.

    To begin with, I created a new iterator type 'MultIterator', which is designed to iterate over multiple arrays simultaneously, with the same syntax as 'FlatIterator', to allow switching between them (it uses an array of FlatIterators internally). For single non-masked arrays, MultIterator is about 20% slower than FlatIterator. However, it only calculates offsets for unique shape/stride combinations, and so when indexing arrays of the same shape, a single FlatIterator is shared between them all, allowing significant compute savings.

    func BenchmarkFlatIteratorMulti6(b *testing.B) {
    	ap := make([]*AP, 6)
    	for j := 0; j < 6; j++ {
    		ap[j] = NewAP(Shape{30, 60, 10}, []int{1000000, 15000, 50})
    	}
    	it := NewMultIterator(ap...)
    	for n := 0; n < b.N; n++ {
    		for _, err := it.Next(); err == nil; _, err = it.Next() {
    		}
    		it.Reset()
    	}
    	DestroyMultIterator(it)
    }
    

    You can also create a MultIterator from tensors directly:

    T1 := New(Of(Float64), WithShape(3, 20), WithMaskStrides([]bool{true, true}))
    T2 := New(Of(Float64), WithShape(3, 20), WithMaskStrides([]int{20,1}))
    T3 := New(Of(Float64), FromScalar(7))
    it := MultIteratorFromDense(T1, T2, T3)
    

    It also means that you don't have to worry when creating functions of multiple arguments in which the same array could be repeated as different arguments - in which case naive use of FlatIterator could cause that array to be iterated multiple times in a single for loop iteration - with MultIterator that can not happen.

    As for the mask, for the time being I opted to simply add a []bool to the Dense struct, and an additional stride int to AP. MultIterator supports masked operations, such as NextValid() or NextInvalid(), in addition to Next(). Examples of usage can be seen in dense_maskmethods_test.go and iterator_test.go.

    func TestMaskedIteration(t *testing.T) {
    	assert := assert.New(t)
    	T := New(Of(Float64), WithShape(2, 3, 4, 5))
    	assert.True(len(T.mask) < 1)
    	dataF64 := T.Data().([]float64)
    	for i := range dataF64 {
    		dataF64[i] = float64(i)
    	}
    	for i := 0; i < 5; i++ {
    		T.MaskedEqual(float64(i) * 10.0)
    	}
    
    	it := MultIteratorFromDense(T)
    
    	j := 0
    	for _, err := it.Next(); err == nil; _, err = it.Next() {
    		j++
    	}
    	it.Reset()
    	assert.True(j == 120)
    
    	j = 0
    	for _, err := it.NextValid(); err == nil; _, err = it.NextValid() {
    		j++
    	}
    	it.Reset()
    	assert.True(j == 115)
    
    	j = 0
    	for _, err := it.NextInvalid(); err == nil; _, err = it.NextInvalid() {
    		j++
    	}
    	it.Reset()
    	assert.True(j == 5)
    }
    

    I did not want to spend too much time going further before agreeing on the basics. While I show some basic mask setting operations in dense_maskmethods.go, I only do this for float64 tensors as a demonstration - the functionality would have to be implemented in genlib at some point, which would take me some time to do properly as this is my first time using text/template.

    I also did not optimize masked iteration; there are smarter ways to find the next valid/invalid element, e.g. by processing >=8 bytes at once, but I figure that it's best to leave that until the structure is agreed upon.

    opened by kabaka0 37
  • [WIP] work on getting the gorgonia to use the errors package

    This is the work in progress for the integration of the errors package into gorgonia which addressed #46

    As this work touches a large surface area, I wanted to open this branch early to mark the progress, so that I can get some feedback (if any) about the approach.

    once all the work is done, I will be squashing all the commits into one with a meaningful commit message to keep everything in master clean, so please note that I may be doing a force push at some point.

    opened by NDari 36
  • [fix] Using the iterator of the new Gonum API

    This change allows the v0.9.2-working2 branch to compile and work with the latest evolution of the Gonum API. The implementation relies on the OrderedNode implementation of the iterator package.

    opened by owulveryck 31
  • Iterator.Chan() considered harmful

    sketch space for describing how to create a chan int of negative length, and how to reproduce it

    Background/Context of the Issue

    Gorgonia is a library for representing and executing mathematical equations, and performing automatic differentiation. It's like Tensorflow and PyTorch for Go. It's currently undergoing some major internal refactor (that will not affect the public APIs much)

    I was improving the backend tensor package by splitting up the data structure into a data structure + pluggable execution engine, instead of having built in methods (see also #128). The reasons are so that it's easier to change out execution backends (CPU, GPU... even a network CPU (actual experiment I did was to run a small neural network on a Raspberry Pi and all computation is offshored to my workstation, and vice versa, which turned out to be a supremely bad idea)).

    Another reason was due to the fact that I wanted to do some experiments at my work which use algorithms that involve sparse tensors (see also #127) for matrix factorization tasks.

    Lastly, I wanted to clean up the generics support of the tensor package. The current master branch of the tensor package had a lot of code to support arbitrary tensor types. With the split of execution engines and data structure, more of this support could be offloaded to the execution engine instead. This package provides a default execution engine (type StdEng struct{}: https://github.com/chewxy/gorgonia/blob/debugrace/tensor/defaultengine.go), which could be extended (example: https://github.com/chewxy/gorgonia/blob/debugrace/tensor/example_extension_test.go) . The idea was to have an internal/execution package which held all the code for the default execution engine.

    Data Structures

    The most fundamental data structure is storage.Header, which is an analogue of a Go slice: it's a three-word structure. It was chosen because it is a ridiculously simple structure that can store Go-allocated memory, C-allocated memory and device-allocated memory (like CUDA).
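
    A sketch of such a header, going by the description above (the exact field names here are assumptions, not necessarily those in the package):

    package storage

    import "unsafe"

    // Header is a sketch of the three-word, slice-like header described
    // above; the field names are assumptions.
    type Header struct {
    	Ptr unsafe.Pointer // first element; may be Go-, C- or device-allocated memory
    	L   int            // length
    	C   int            // capacity
    }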

    On top of storage.Header is tensor.array. It's essentially a storage.Header with an additional field for the type. The v field will eventually be phased out once the refactor is complete.

    On top of tensor.array are the various implementations of tensor.Tensor. Chief amongst these is the tensor.Dense struct. Essentially it's a tensor.array coupled with some access patterns and meta information.

    Access to the data in the tensor.Tensor can be achieved by use of Iterators. The Iterator basically assumes that the data is held in a flat slice, and returns the next index on the slice. There are auxiliary methods like NextValidity to handle special case tensors like masked tensors, where some elements are masked from operations.
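
    For illustration, a sketch of the usual iteration pattern, written as if inside the tensor package and mirroring the loops in the issues above (NewFlatIterator/NewAP are assumed to be the relevant constructors):

    // sumAll walks the flat backing slice of a tensor in the order
    // dictated by its shape and strides.
    func sumAll(data []float64, shape Shape, strides []int) (total float64) {
    	it := NewFlatIterator(NewAP(shape, strides))
    	for i, err := it.Next(); err == nil; i, err = it.Next() {
    		total += data[i] // i is the next index into the flat slice
    	}
    	return total
    }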

    The bug happens in the Chan method of the FlatIterator type.

    How to reproduce

    The branch where the bug is known to exist is the debugrace branch, which can be found here: 1dee6d2 .

    1. git checkout debugrace
    2. Run tests with various GOMAXPROCS like so: GOMAXPROCS=1 go test -run=. . Try it with various GOMAXPROCS, one of them is bound to trigger an issue.
    3. The test won't panic, because I have added a recover statement here https://github.com/chewxy/gorgonia/blob/debugrace/tensor/dense_viewstack_specializations.go#L636. Removing the deferred function causes an index out of bounds panic.
    4. All the tests must be run to trigger the issue.
    5. The issue is found in the test for the Stack function: https://github.com/chewxy/gorgonia/blob/debugrace/tensor/dense_matop_test.go#L768 . If only the stack test is run (for example GOMAXPROCS=1 go test -run=Stack), it is unlikely the problem will show up (I wrote a tiny python script to run it as many times as possible with many GOMAXPROCS configurations and none of them caused an error).

    You should get something like this:

    (screenshot of the resulting panic omitted)

    Environments

    I've managed to reproduce the issue on OS X, with Go 1.8 and on Ubuntu 16.10 with Go 1.8.2 and Go tip (whatever gvm thinks is Go tip). I've no access to Go on a windows box so I can't test it on Windows.

    Magic and Unsafe Use

    As part of the refactoring, there are a few magic bits being used. Here I attempt to list them all (may not be exhaustive):

    • The Go slice structure is re-implemented in https://github.com/chewxy/gorgonia/blob/debugrace/tensor/internal/storage/header.go. Note that here an unsafe.Pointer is used instead of the standard one like reflect.SliceHeader which stores a uintptr. This is due to the fact that I want Go to keep a reference to the actual slice. This may affect the runtime and memory allocation.. I'm not too sure.
    • //go:linkname is used in some internal packages (specific example here: https://github.com/chewxy/gorgonia/blob/debugrace/tensor/internal/execution/generic_arith_vv.go). It's basically just a rename of functions in github.com/chewxy/vecf32 and github.com/chewxy/vecf64. Those packages contain optional AVX/SSE related vector operations like arithmetics. However, those have to be manually invoked via a build tag. By default it uses go algorithms, not SSE/AVX operations.
    • //go:linkname is used in unsafe.go: https://github.com/chewxy/gorgonia/blob/debugrace/tensor/unsafe.go#L105. However, it should be noted that memmove is never called, as after some tests I decided it would be too unsafe to use (this also explains why there are comments that say TODO: implement memmove).
    • There are several naughty pointer arithmetics at play:

    What I suspect

    I suspect that there may be some naughty things happening in memory (because it only happens when all the tests are run). The problem is I don't know exactly where to start looking.

    bug 
    opened by chewxy 17
  • The Broadcast function is exported but not usable outside of the package

    I need to implement an "add" operator for two tensors with a broadcasting mechanism as described here.

    The Broadcast function seems to be a perfect fit for this. Moreover, the test partially implements what I am trying to do. But neither the ʘBinaryOperatorType nor any other binOp implementations are exported.

    Therefore the Broadcast function can only be used within the Gorgonia package.

    Maybe we should make it private to avoid confusion in the documentation and expose "Broadcasted version" of some operators instead? What do you think?

    question ux 
    opened by owulveryck 17
  • Adopt "dep" as the official installation mechanism

    With the possible integration of the dep package manager into the Go toolchain, it may be worth adopting it as the official installation method for Gorgonia. There are a fair number of packages to install to fully utilize Gorgonia, which may scare people new to Go and/or programming in general.

    • [x] provide a Gopkg.toml with explicit versions of the libraries whenever possible
    • [x] provide a Gopkg.lock
    • [x] Add the vendor directory to the .gitignore file so that it is not checked in.
    • [ ] Provide an installation section in the readme showing how to use dep to install Gorgonia and how to test if your installation was successful.
    documentation enhancement 
    opened by NDari 17
  • V0.8.0 working

    opened by chewxy 15
  • 1.15.3 "Import Cycle Not Allowed" on convnet example w/ Cuda

    Hello,

    I get the following error when trying to run the convnet cuda example.

    package command-line-arguments
    	imports gorgonia.org/gorgonia
    	imports gorgonia.org/gorgonia: import cycle not allowed
    

    My file structure is as below;

    ├── project
    │ ├── convnet.go
    │ └── cudamodules.go
    

    Note that if I roll back Go to 1.13.9 (Which is fine so by no means urgent), this error does not present itself.

    However... Without piggybacking too much of this issue for another, when I run the following command in the project directory, on version 1.13.9, I get the following output which indefinitely hangs.

    >>>/usr/local/go-1.13/bin/go run -tags='cuda' .
    2020/10/21 10:03:07 Using CUDA build
    2020/10/21 10:03:07 gorgonia. true
    2020/10/21 10:03:08 p0 (100, 32, 14, 14)
    2020/10/21 10:03:08 p2 shape (100, 128, 3, 3)
    2020/10/21 10:03:08 r2 shape (100, 1152)
    2020/10/21 10:03:08 l2 shape (100, 1152) | (1152, 625)
    2020/10/21 10:03:08 l3 name Dropout 0.55(%15) :: Matrix float64 | a3 name ReLU(%14) :: Matrix float64
    2020/10/21 10:03:08 DONE
    2020/10/21 10:03:08 m.out.Shape (100, 10), y.Shape (100, 10)
    2020/10/21 10:03:08 Batches 600
    Epoch 0 0 / 600 [------------------------------------------------------]   0.00%
    

    Running nvidia-smi I can see it has allocated memory to it, but seems to just hang

    0      6477      C   /tmp/go-build071137485/b001/exe/goHide       470MiB
    

    It could be that I'm doing something wrong, but I built the cudamodules using the cudagen tool. The only strange thing is that I have to remove //+build cuda from the top of convnet.go prior to running the cudagen tool, otherwise I get the following:

    2020/10/21 10:07:50 failed to get name of package in working directory. Error: exit status 1. go list error: package .: build constraints exclude all Go files in /path/to/project.

    I then add it back in afterwards and it runs, but hangs as mentioned.

    Apologies if the latter is just me being a Go novice!

    opened by phillips96 13
  • Test broadcast add

    See #301

    Note that this branch currently fails, but I believe that it is due to an implementation error in the broadcasting. Specifically, I was unable to perform the following operation:

    given a tensor a with shape (2,) and a tensor b with shape (2,2,2), broadcast-add them, such that the result c has shape (2,2,2). This stems from the fact that when I try to broadcast a into shape (2,2,2) (using left=[]byte{1, 2}) in the broadcastAdd, it panics.

    This situation is demonstrated on the second commit of this MR.

    This behavior is not consistent with when we do the same operation where b has shape (2,2) and a is broadcasted using left=[]byte{1}, which is valid (as per test named "vec-mat").

    Note that I am assuming that we are following the same rules as the broadcasting rules of numpy

    opened by jorgecarleitao 13
  • go get-u gorgonia assembly failed error

    Hey guys, whenever I try to use go get gorgonia, I keep hitting the following error:

    gorgonia.org/gorgonia

    asm: asmins: illegal 64: 00000 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:7) MOVQ a+4(FP), SI
    asm: asmins: illegal in mode 32: 00000 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:7) MOVQ a+4(FP), SI (24 18)
    asm: asmins: illegal 64: 00005 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:8) MOVQ b+12(FP), CX
    asm: asmins: illegal in mode 32: 00005 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:8) MOVQ b+12(FP), CX (24 15)
    asm: asmins: illegal 64: 00010 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:9) MOVQ SI, AX
    asm: asmins: illegal in mode 32: 00010 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:9) MOVQ SI, AX (18 14)
    asm: asmins: illegal 64: 00013 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:10) CMPQ CX, $-1
    asm: asmins: illegal in mode 32: 00013 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:10) CMPQ CX, $-1 (15 5)
    asm: asmins: illegal 64: 00019 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:13) CQO
    asm: asmins: illegal in mode 32: 00019 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:13) CQO (1 1)
    asm: asmins: illegal 64: 00021 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:14) IDIVQ CX
    asm: asmins: illegal in mode 32: 00021 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:14) IDIVQ CX (15 1)
    asm: asmins: illegal 64: 00024 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:15) MOVQ AX, q+20(FP)
    asm: asmins: illegal in mode 32: 00024 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:15) MOVQ AX, q+20(FP) (14 24)
    asm: asmins: illegal 64: 00029 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:16) MOVQ DX, r+28(FP)
    asm: asmins: illegal in mode 32: 00029 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:16) MOVQ DX, r+28(FP) (21 24)
    asm: asmins: illegal 64: 00035 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:20) NEGQ AX
    asm: asmins: illegal in mode 32: 00035 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:20) NEGQ AX (1 14)
    asm: asmins: illegal 64: 00038 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:21) MOVQ AX, q+20(FP)
    asm: asmins: illegal in mode 32: 00038 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:21) MOVQ AX, q+20(FP) (14 24)
    asm: asmins: illegal 64: 00043 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:22) MOVQ $0, r+28(FP)
    asm: asmins: illegal in mode 32: 00043 (C:\Users\manikandank\go\src\gorgonia.org\gorgonia\mathutils.s:22) MOVQ $0, r+28(FP) (2 24)
    asm: assembly failed

    bug 
    opened by casijoe5231 12
  • Cannot Concat() on Matrices with Shape (1, 1)

    Concatenation of multiple (1, 1) matrices seems to result in a panic. Error message:

    panic: runtime error: index out of range [1] with length 0
    
    goroutine 17 [running]:
    gorgonia.org/tensor.assignArray(0xccd568, 0xc000168400, 0xccd568, 0xc000214200, 0x0, 0x0)
    	/home/samuel/go/pkg/mod/gorgonia.org/[email protected]/dense_assign.go:53 +0xb3e
    gorgonia.org/tensor.StdEng.denseConcat(0xccd568, 0xc000214200, 0x0, 0xc0001f4010, 0x1, 0x1, 0x0, 0x0, 0x10, 0xc0001f4000)
    	/home/samuel/go/pkg/mod/gorgonia.org/[email protected]/defaultengine_matop_misc.go:337 +0x678
    gorgonia.org/tensor.StdEng.Concat(0xcccf58, 0xc000214200, 0x0, 0xc0001f4000, 0x1, 0x1, 0xbf6740, 0x1, 0x1, 0xc0001f4000)
    	/home/samuel/go/pkg/mod/gorgonia.org/[email protected]/defaultengine_matop_misc.go:238 +0x185
    gorgonia.org/tensor.(*Dense).Concat(0xc000214200, 0x0, 0xc00000e8b0, 0x1, 0x1, 0x40dbbb, 0xc00006e8e0, 0x20)
    	/home/samuel/go/pkg/mod/gorgonia.org/[email protected]/dense_matop_memmove.go:74 +0xfd
    gorgonia.org/tensor.Concat(0x0, 0xcccf58, 0xc000214200, 0xc00006e910, 0x1, 0x1, 0x0, 0x0, 0x203000, 0xc000020800)
    	/home/samuel/go/pkg/mod/gorgonia.org/[email protected]/api_matop.go:64 +0xec
    gorgonia.org/gorgonia.concatOp.Do(0x0, 0x2, 0x2, 0xc00006e8e0, 0x2, 0x2, 0xcc5638, 0x2, 0xc00021e1e0, 0xc0002240e0)
    	/home/samuel/go/pkg/mod/gorgonia.org/[email protected]/op_tensor.go:1036 +0xd1
    gorgonia.org/gorgonia.(*execOp).exec(0xc00021e1e0, 0xc000232000, 0x0, 0x0)
    	/home/samuel/go/pkg/mod/gorgonia.org/[email protected]/vm_tape_nocuda.go:82 +0x1002
    gorgonia.org/gorgonia.(*tapeMachine).runall(0xc000232000, 0xc000238060, 0xc0002380c0)
    	/home/samuel/go/pkg/mod/gorgonia.org/[email protected]/vm_tape.go:239 +0x10f
    created by gorgonia.org/gorgonia.(*tapeMachine).RunAll
    	/home/samuel/go/pkg/mod/gorgonia.org/[email protected]/vm_tape.go:212 +0x12c
    
    

    To reproduce the error:

    package main
    
    import (
    	"log"
    
    	G "gorgonia.org/gorgonia"
    	"gorgonia.org/tensor"
    )
    
    func main() {
    	g := G.NewGraph()
    	t1Back := tensor.New(tensor.WithBacking([]float64{1.0}), tensor.WithShape(1, 1))
    	t1 := G.NewMatrix(g, tensor.Float64, G.WithShape(1, 1), G.WithValue(t1Back))
    
    	t2Back := tensor.New(tensor.WithBacking([]float64{2.0}), tensor.WithShape(1, 1))
    	t2 := G.NewMatrix(g, tensor.Float64, G.WithShape(1, 1), G.WithValue(t2Back))
    
    	c, err := G.Concat(0, t1, t2)
    	if err != nil {
    		log.Fatalf("could not concat: %v", err)
    	}
    
    	vm := G.NewTapeMachine(g)
    	vm.RunAll()
    	vm.Reset()
    }
    
    
    opened by samuelfneumann 0
  • Nodes gets confused

    https://play.golang.org/p/L1-ZE_IC267

    This is a simple example where not passing WithName in the argument list causes Gorgonia to confuse nodes - HadamardProd in line 49 computes a product of vecs2 with itself, as opposed to with vecs1.

    I have a bigger piece of code that behaves differently than expected, and which I cannot post. However, I am wondering whether this bug is the cause. Is there a way to check it?

    package common
    
    import (
    	"fmt"
    	"github.com/stretchr/testify/assert"
    	"gorgonia.org/gorgonia"
    	"gorgonia.org/tensor"
    	"testing"
    )
    
    func TestSomething(t *testing.T) {
    
    	vectors1 := []float64{
    		1, 2,
    		0, 0,
    		0, 0,
    		0, 3,
    		-1, -1,
    	}
    
    	vectors2 := []float64{
    		2, 1,
    		1, 1,
    		0, 0,
    		3, 3,
    		-1, -1,
    	}
    
    	vectors1AsTensor := tensor.New(
    		tensor.WithBacking(vectors1),
    		tensor.WithShape(5, 2))
    
    	vectors2AsTensor := tensor.New(
    		tensor.WithBacking(vectors2),
    		tensor.WithShape(5, 2))
    
    	g := gorgonia.NewGraph()
    
    	//vecs1 := gorgonia.NewMatrix(g, gorgonia.Float64, gorgonia.WithShape(5, 2), gorgonia.WithName("vecs1"))
    	//vecs2 := gorgonia.NewMatrix(g, gorgonia.Float64, gorgonia.WithShape(5, 2), gorgonia.WithName("vecs2"))
    	vecs1 := gorgonia.NewMatrix(g, gorgonia.Float64, gorgonia.WithShape(5, 2))
    	gorgonia.WithName("vecs1")(vecs1)
    	vecs2 := gorgonia.NewMatrix(g, gorgonia.Float64, gorgonia.WithShape(5, 2))
    	gorgonia.WithName("vecs2")(vecs2)
    
    	assert.NoError(t, gorgonia.Let(vecs1, vectors1AsTensor))
    	assert.NoError(t, gorgonia.Let(vecs2, vectors2AsTensor))
    
    	hp := gorgonia.Must(gorgonia.HadamardProd(vecs1, vecs2))
    	gorgonia.WithName("hp")(hp)
    
    	machine := gorgonia.NewTapeMachine(g)
    	defer machine.Close()
    
    	assert.NoError(t, machine.RunAll())
    
    	assert.Equal(t, []float64{
    		2, 2,
    		0, 0,
    		0, 0,
    		0, 9,
    		1, 1,
    	}, hp.Value().Data().([]float64))
    
    	fmt.Printf("1 : %+v\n", vecs1.Value())
    	fmt.Printf("2 : %+v\n", vecs2.Value())
    	fmt.Printf("h : %+v\n", hp.Value())
    }
    
    opened by njskalski 1
  • Adding the concept of training/eval mode to the VM

    I'm submitting this to start the discussion and understand if this is feasible. Without this, I think the only option to disable the Dropout op in eval mode, for example, is recreating the graph, which is hard when you want to create a higher-level framework and still let people use Gorgonia directly.

    opened by dcu 6
  • Failed to run on colab

    I tried to compile convnet_cuda in Colab (GPU) and got this error:

    /root/go/src/gorgonia.org/gorgonia/cuda.go:141:23: v.Pointer undefined (type Value has no field or method Pointer)
    /root/go/src/gorgonia.org/gorgonia/cuda.go:186:48: v.Pointer undefined (type Value has no field or method Pointer)
    /root/go/src/gorgonia.org/gorgonia/cuda.go:199:24: retVal.Pointer undefined (type Value has no field or method Pointer)
    /root/go/src/gorgonia.org/gorgonia/vm_tape_cuda.go:190:45: dv.d.Pointer undefined (type Value has no field or method Pointer)
    /root/go/src/gorgonia.org/gorgonia/vm_tape_cuda.go:197:25: dv.d.Pointer undefined (type Value has no field or method Pointer)
    /root/go/src/gorgonia.org/gorgonia/vm_tape_cuda.go:227:50: from.Pointer undefined (type Value has no field or method Pointer)
    /root/go/src/gorgonia.org/gorgonia/vm_tape_cuda.go:232:20: to.Pointer undefined (type Value has no field or method Pointer)
    

    cudatest:

    CUDA version: 11020
    CUDA devices: 1
    
    Device 0
    ========
    Name      :	"Tesla T4"
    Clock Rate:	1590000 kHz
    Memory    :	15843721216 bytes
    Compute   : 	7.5
    

    go version: 1.16.4

    https://colab.research.google.com/drive/1TM2mjQvvO0gS5JFpU_NJPyR6p473LCLr?usp=sharing

    opened by wailovet 4
  •  cudagen failed to compile with nvcc

    Hello, I encountered a problem: when I run cudagen, the following error appears:

    [email protected] MSYS /d/GOPATH/src/suvvm.work/ToadOCREngine
    $ export GO111MODULE=off
    
    [email protected] MSYS /d/GOPATH/src/suvvm.work/ToadOCREngine
    $ export CGO_CFLAGS="-I/d/NVIDIAGPUComputingToolkit/CUDA/v10.1/include/"
    
    [email protected] MSYS /d/GOPATH/src/suvvm.work/ToadOCREngine
    $ export PATH="$PATH:/c/cuda/bin/"
    
    [email protected] MSYS /d/GOPATH/src/suvvm.work/ToadOCREngine
    $ go get gorgonia.org/gorgonia
    
    [email protected] MSYS /d/GOPATH/src/suvvm.work/ToadOCREngine
    $ go get gorgonia.org/cu
    
    [email protected] MSYS /d/GOPATH/src/suvvm.work/ToadOCREngine
    $ export  PATH="$PATH:/d/NVIDIAGPUComputingToolkit/CUDA/v10.1/lib/x64"
    export  PATH="$PATH:/d/NVIDIAGPUComputingToolkit/CUDA/v10.1/bin"
    export  PATH="$PATH:/d/NVIDIAGPUComputingToolkit/CUDA/v10.1/libnvvp"
    export LIBRARY_PATH="/d/NVIDIAGPUComputingToolkit/CUDA/v10.1/lib/x64"
    export  LD_LIBRARY_PATH="/d/NVIDIAGPUComputingToolkit/CUDA/v10.1/lib/x64"
    
    [email protected] MSYS /d/GOPATH/src/suvvm.work/ToadOCREngine
    $ go install gorgonia.org/gorgonia/cmd/cudagen
    
    [email protected] MSYS /d/GOPATH/src/suvvm.work/ToadOCREngine
    $ cudagen
    2021/04/30 22:31:51 failed to compile with nvcc. Error: exit status 1. nvcc error:
    

    stderr.String() does not output anything

    this is my go env

    go env
    set GO111MODULE=off
    set GOARCH=amd64
    set GOBIN=D:\Go\bin
    set GOCACHE=C:\Users\LENOVO\AppData\Local\go-build
    set GOENV=C:\Users\LENOVO\AppData\Roaming\go\env
    set GOEXE=.exe
    set GOFLAGS=
    set GOHOSTARCH=amd64
    set GOHOSTOS=windows
    set GOINSECURE=
    set GOMODCACHE=D:\GOPATH\pkg\mod
    set GONOPROXY=
    set GONOSUMDB=
    set GOOS=windows
    set GOPATH=D:\GOPATH
    set GOPRIVATE=
    set GOPROXY=https://proxy.golang.org,direct
    set GOROOT=D:\Go
    set GOSUMDB=sum.golang.org
    set GOTMPDIR=
    set GOTOOLDIR=D:\Go\pkg\tool\windows_amd64
    set GOVCS=
    set GOVERSION=go1.16.3
    set GCCGO=gccgo
    set AR=ar
    set CC=gcc
    set CXX=g++
    set CGO_ENABLED=1
    set GOMOD=
    set CGO_CFLAGS=-ID:/NVIDIAGPUComputingToolkit/CUDA/v10.1/include/
    set CGO_CPPFLAGS=
    set CGO_CXXFLAGS=-g -O2
    set CGO_FFLAGS=-g -O2
    set CGO_LDFLAGS=-g -O2
    set PKG_CONFIG=pkg-config
    set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\msys64\tmp\go-build916827538=/tmp/go-build -gno-record-gcc-switches
    

    win10 gcc 8.1.0 cuda v10.1

    opened by suvvm 0
  • Batchnormimpl2

    opened by chewxy 4
  • Added ARCHITECTURE.md

    opened by chewxy 4
  • Proposal: use Vulkan for GPU computing instead of CUDA.

    There are great bindings for Vulkan in Go: https://github.com/vulkan-go/vulkan/

    Why this proposal? Although CUDA may provide the best performance for neural networks because of cudnn, using it involves getting your hands dirty with all the proprietary NVIDIA ecosystem. Creating a Vulkan backend would allow gorgonia to be unique in its kind AND VENDOR INDEPENDENT. It may be a slight sacrifice for performance, but the tradeoff would be motivated and balanced: The main advantage is that it would be painless for developers to build and run gorgonia programs, taking advantage of the GPUs in their systems regardless of the actual GPU vendor. For example, as an AMD GPU user I am forced to either train models and infer on the CPU or use the tensorflow ROCm build.

    The second advantage would be giving more traction to the Golang Vulkan bindings; not many projects have been using them. There is still a mess going on in GPGPU programming for Go, and the only decent solution seems to be your CUDA package, but it's limited to NVIDIA only!

    Another very interesting project is vulkan compute: kompute.cc. I may start to develop bindings or a similar solution for go, for running arbitrary tensor oriented code in asynchronous GPU tasks. Such a package may be used by gorgonia. Would anybody be interested in contributing?

    opened by 0x0f0f0f 0
  • MaxPool1D: Impossible height/kernel/pad combination

    MaxPool1D(x *Node, 2, 0, 1) with x being a vector (node from Dense with single dim []int{n}) returns the error: errors.New("Impossible height/kernel/pad combination")

    My guess is that pad/size checks are done with 2D tensor in mind.

    I'm proposing a fix

    opened by strigi-form 1
  • Can tiny-yolo-v3-coco train on its own data set?

    opened by ystyle 5
Releases(v0.9.17)
  • v0.9.17(Mar 14, 2021)

    CI

    CI (GitHub Actions) has a new template system that will ease upgrading to new Go releases. On top of that, it now has a custom runner for ARM64. This led to discovering and fixing a couple of issues in the tests on ARM64.

    Fixes

    • Support flat weights for the BatchNorm op (#465)
    • fix the reset method of the tape machine (#467)
    • fix clipping in Adam solver (#469)
    • fix panic message in GlorotEtAlN64 (#470)
    • fix concurrent example (#472)

    API change

    • functions to create primitive Value types (NewF64, NewF32, ...) (#481)
    • Breaking change: the BatchNorm1d function has been removed; BatchNorm function supports 1d and 2d operations (#482)
    Source code(tar.gz)
    Source code(zip)
  • v0.9.16(Dec 31, 2020)

    This version incorporates the semantics clarification of the tensor package - the unsafe pointer things are cleaned up as well.

    Small bugfixes to SoftMax were also made - SoftMax no longer causes a race condition.

    Source code(tar.gz)
    Source code(zip)
  • v0.9.15(Sep 27, 2020)

    When vectors were broadcast with a repeat of 1, one of the values was accidentally zeroed. This left very strange artifacts in neural networks.

    This has now been fixed.

    Source code(tar.gz)
    Source code(zip)
  • v0.9.14(Sep 10, 2020)

  • v0.9.13(Aug 6, 2020)

  • v0.9.12(Jun 18, 2020)

    The Upsample2D operator has been added by @cpllbstr . It is similar to the operator in PyTorch: https://pytorch.org/docs/master/generated/torch.nn.Upsample.html

    Source code(tar.gz)
    Source code(zip)
  • v0.9.11(Jun 15, 2020)

    Due to the great work by @wzzhu, shape inference is now a bit more robust. It goes back to the original Gorgonia understanding of shapes - where reductions do not aggressively squeeze the dimensions.

    Source code(tar.gz)
    Source code(zip)
  • v0.9.10(Apr 10, 2020)

    In the previous version, the repeatOp was a compound operation. It had this function signature effectively: func repeat(a, nTimes *Node, axes ...int). So you could do something like repeat(a, 300, 1, 2, 3) in which a gets repeated 300 times across axes 1, 2 and 3.

    This has been deoptimized such that it's effectively func repeat(a, repeat *Node, axis int). The reason for this deoptimization is that, upon further analysis of what the function actually does, it simply calls tensor.Repeat many times. This causes many new tensors to be allocated. But the whole point of symbolic operations is that we may preallocate ahead of time.

    This deoptimization allows the repeatOp to call tensor.RepeatReuse, which allows a repeat operation to reuse preallocated values, leading to fewer allocations and improving performance.

    Source code(tar.gz)
    Source code(zip)
  • v0.9.9(Mar 25, 2020)

  • v0.9.8(Feb 10, 2020)

    Bugfixes in this release:

    • An off-by-one bug in which the axes of softmax were affected.
    • TrimSpace being used in the iris example.
    • Return values of scalar values are fixed.
    Source code(tar.gz)
    Source code(zip)
  • v0.9.7(Jan 19, 2020)

    Previously, when an expression such as -(x+y) was given and x and y were scalar values, the neg op would fail to correctly pass the derivative to its constituents. This was due to a misuse of UnsafeDo. This has now been rectified.

    Source code(tar.gz)
    Source code(zip)
  • v0.9.5(Dec 7, 2019)

    A number of new features were added, mainly to support golgi - gorgonia.org/golgi. Here is an incomplete enumeration:

    • KeepDims is introduced as a function to decorate another function
    • A bunch of BroadcastXXX operations were added (autogenerated)
    • Unconcat which is the opposite of Concat
    • BatchedMatMul supports more than 3D tensors
    • SoftMax supports multiple axes now
    • Monadish handling of *Nodes
    • Consistent axis operations thanks to @bdleitner
    • GAP operator
    Source code(tar.gz)
    Source code(zip)
  • v0.9.4(Nov 7, 2019)

  • v0.9.3(Sep 6, 2019)

  • v0.9.2(Aug 29, 2019)

  • v0.9.0-beta(Aug 18, 2018)

    Ongoing notes:

    • CUDA: Better CUDA support (IN PROGRESS)
      • ~ColMajor used by default if engine is CUDA.~ (ColMajor is supported, but defaults to using RowMajor for all the major cuBLAS versions. Careful reasoning of the parameters obviates the need for ColMajor by default, which causes more headaches. It is still supported)
      • Transposition will be automatically done when performing transports back to CPU.
      • cudnn operations supported (IN PROGRESS) (note: these are the ones I use more often hence gets bigger attention):
        • [x] Conv2d
        • [x] Dropout
        • [x] Maxpool2d
        • [x] BatchNorm
        • [x] Rectify
      • Other CUDA related optimizations
        • [x] full cuBLAS support
    • New Ops:
      • BatchNorm
      • InvSqrt
      • CUDA enabled ops in ops/nn (preview for how things will start to look in v0.10.0)
    • New Features:
      • Limited shape inference. Working towards a calculus for shapes (first raised in #96 and #97).
    • Optimizations:
      • Optimizations of basic ops to use engine functions if available, otherwise, fall back to using Apply, which adds a penalty from repeatedly calling functions.
      • Faster VMs (1 of 2 VMs): ~greedy goroutines grabs gigs from a priority queue. This causes faster execution of code in general.~ (this is moved to a future version of 0.9.xx):
    benchmark                           old ns/op      new ns/op      delta
    BenchmarkTapeMachineExecution-8     3129074510     2695304022     -13.86%
    
    benchmark                           old allocs     new allocs     delta
    BenchmarkTapeMachineExecution-8     25745          25122          -2.42%
    
    benchmark                           old bytes      new bytes      delta
    BenchmarkTapeMachineExecution-8     4804578705     4803784111     -0.02%
    
    • Code generation: some exported API is now auto generated
    • New Solver : @ynqa added the Momentum solver.
    • Breaking API: Solvers now take a slice of ValueGrad instead of Nodes. ValueGrad is an interface, which a *Node fulfils. An additional utility function NodesToValueGrads has been added to aid with refactoring. This was done for two reasons:
      • ~The support for BatchNorm operation, which is a verily impure and highly stateful function. The BatchNorm Op has internal states that need to have their gradients updated as well. But the internal state of BatchNorm isn't really part of the expression graph, and really it shouldn't be.~ Turns out there was a better API for BatchNorm.
      • In the next version, v0.10.0, we aim to do better package organization for manageability. With this API-breaking change, the solver is now less dependent on the other parts of Gorgonia and can be easily separated.
    • Breaking Semantics: A gorgonia.VM now implements io.Closer. It should be treated as a resource as well as a computation device - the VM must be Close()d in order for the resources acquired by the VM to actually be released. Turns out, automatic resource management is too difficult. Who'd thunk that?
    Source code(tar.gz)
    Source code(zip)
  • v0.8.4(May 11, 2018)

  • v0.8.3(May 5, 2018)

    Bugfixes for UnbindAllNonInput were done.

    Furthermore, because Gonum no longer supports versions of Go older than 1.8, Gorgonia will not support them either.

    Source code(tar.gz)
    Source code(zip)
  • v0.8.2(Jan 27, 2018)

  • v0.8.1(Jan 21, 2018)

  • v0.8.0(Dec 17, 2017)

    In v0.8.0, the Gorgonia packages have officially been split:

    • "gorgonia.org/gorgonia" - handles graph creation and execution
    • "gorgonia.org/tensor" - underlying data structures
    • "gorgonia.org/cu" - CUDA interface

    Furthermore, dep is now officially used. TensorDot support for automatic and symbolic differentiation was added by @siquus.

    Source code(tar.gz)
    Source code(zip)
  • v0.7.5(Nov 14, 2017)

    There has been a bug fix for MatMul. Previously this would panic:

    // Case 1
    a := New(WithShape(2, 1), WithBacking(Range(Float64, 0, 2)))
    b := New(WithShape(1, 3), WithBacking(Range(Float64, 0, 3)))
    c, err := MatMul(a, b)
    
    // Case 2
    a = New(WithShape(1, 2), WithBacking(Range(Float64, 0, 2)))
    b = New(WithShape(2, 3), WithBacking(Range(Float64, 0, 6)))
    c, err = MatMul(a, b)
    

    @siquus discovered the bug and fixed it for Case 1. Additional test cases were generated for Case 2 and fixed.

    Source code(tar.gz)
    Source code(zip)
  • v0.7.4(Oct 3, 2017)

    Added convolution operations and their differential functions. The convolution function is heavily based on IM2Col and Col2IM.

    Much of the code was cribbed from Caffe's implementation of im2col and col2im. Licences will be added in the upcoming commit.

    Source code(tar.gz)
    Source code(zip)
  • v0.7.3(Sep 23, 2017)

    A bug was fixed in the example.

    Additional smarts for subgraphing were added so that nodes with infidel ops will also be automatically added when subgraphing

    Source code(tar.gz)
    Source code(zip)
  • v0.7.2(Sep 20, 2017)

    @docmerlin submitted a patch for a bug in OneHotVector. Also tests were added.

    Additionally, new README notes were also added, informing users of upcoming changes to future versions.

    Source code(tar.gz)
    Source code(zip)
  • v0.7.1(Sep 9, 2017)

    Some issues with versioning Gorgonia have now been resolved. No more compiler magic is being used. Additional tests and documentation were added.

    Source code(tar.gz)
    Source code(zip)