onnx-go gives the ability to import a pre-trained neural network within Go without being linked to a framework or library.

This is a Go Interface to Open Neural Network Exchange (ONNX).

Overview

onnx-go contains primitives to decode an ONNX binary model into a computation backend and to use it like any other library in your Go code. For more information about ONNX, please visit onnx.ai.

The implementation of the ONNX specification is partial for the import and non-existent for the export.

Vision statement

For Go developers who need to add a machine-learning capability to their code, onnx-go is a package that facilitates the use of neural network models (software 2.0); unlike other computation libraries, this package does not require special skills in data science.

Warning: the API is experimental and may change.

Disclaimer

This is a new version of the API.
The tweaked version of Gorgonia has been removed; onnx-go is now compatible with the master branch of Gorgonia.
Some operators are not yet available, though.

A utility has been added to run models from the zoo.
Check the `examples` subdirectory.

Install

Install it via go get

go get github.com/owulveryck/onnx-go

onnx-go is compatible with Go modules.

Example

These examples assume that you have a pre-trained model.onnx file available. You can download pre-trained models from the ONNX model zoo.

Very simple example

This example does nothing but decode the graph into a simple backend. You can then do whatever you want with the generated graph.

// Create a backend receiver
backend := simple.NewSimpleGraph()
// Create a model and set the execution backend
model := onnx.NewModel(backend)

// Read the ONNX model
b, _ := ioutil.ReadFile("model.onnx")
// Decode it into the model
err := model.UnmarshalBinary(b)
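
As a minimal sketch (the exact import path of the simple backend and the model.onnx location are assumptions to adapt to your setup), the snippet above can be wrapped in a small standalone program:

package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"github.com/owulveryck/onnx-go"
	// assumed import path of the simple backend
	"github.com/owulveryck/onnx-go/backend/simple"
)

func main() {
	// Create a backend receiver
	backend := simple.NewSimpleGraph()
	// Create a model and set the execution backend
	model := onnx.NewModel(backend)

	// Read the ONNX model from disk (the file name is an example)
	b, err := ioutil.ReadFile("model.onnx")
	if err != nil {
		log.Fatal(err)
	}
	// Decode the binary model into the backend graph
	if err := model.UnmarshalBinary(b); err != nil {
		log.Fatal(err)
	}
	fmt.Println("model decoded into the simple backend")
}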

Simple example to run a pre-trained model

This example uses Gorgonia as a backend.

import "github.com/owulveryck/onnx-go/backend/x/gorgonnx"

At the present time, Gorgonia does not implement all of the ONNX operators. Therefore, most of the models from the model zoo will not work yet. Support will improve little by little as more operators are added to the backend.

You can find a list of tested examples and a coverage report here.

func Example_gorgonia() {
	// Create a backend receiver
	backend := gorgonnx.NewGraph()
	// Create a model and set the execution backend
	model := onnx.NewModel(backend)

	// read the onnx model
	b, _ := ioutil.ReadFile("model.onnx")
	// Decode it into the model
	err := model.UnmarshalBinary(b)
	if err != nil {
		log.Fatal(err)
	}
	// Set the first input; the number of inputs depends on the model
	model.SetInput(0, input)
	err = backend.Run()
	if err != nil {
		log.Fatal(err)
	}
	// Get the output tensors (the error is ignored for brevity)
	output, _ := model.GetOutputTensors()
	// write the first output to stdout
	fmt.Println(output[0])
}
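
The input variable above is a tensor.Tensor whose shape must match the model's expected input. As an illustration only (the 1x1x28x28 shape assumes an MNIST-like model), it can be created with the gorgonia.org/tensor package:

import "gorgonia.org/tensor"

// Build a float32 tensor shaped (batch, channel, height, width).
// The 1x1x28x28 shape is an assumption; adapt it to the model you are running.
input := tensor.New(
	tensor.WithShape(1, 1, 28, 28),
	tensor.Of(tensor.Float32),
)

If your data is already available as a []float32 slice, tensor.WithBacking(data) can be passed to tensor.New instead of tensor.Of to wrap it directly.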

Model zoo

In the examples subdirectory, you will find a utility to run a model from the zoo, as well as a sample utility to analyze a picture with Tiny YOLO v2.

Internal

ONNX protobuf definition

The protobuf definition of ONNX has been compiled into Go with the classic protoc tool. The definition can be found in the internal directory. It is not exposed, to avoid external dependencies on this repo. Indeed, the pb code may change to use a more efficient compiler such as gogo protobuf, and this change should be transparent to the users of this package.

Execution backend

In order to execute the neural network, you need a backend able to execute a computation graph (for more information on computation graphs, please read this blog post).

This picture represents the mechanism:

Schema

onnx-go does not provide any executable backend, but as a reference, a simple backend that builds an information graph is provided as an example (see the simple subpackage). Gorgonia is the main target backend of onnx-go.

Backend implementation

A backend is basically a weighted directed graph that can apply an Operation to its nodes. It should fulfill this interface:

type Backend interface {
	OperationCarrier
	graph.DirectedWeightedBuilder
}
type OperationCarrier interface {
	// ApplyOperation applies an operation on the graph nodes.
	// graph.Node is variadic because it allows handling multiple outputs;
	// for example, a split operation returns n nodes...
	ApplyOperation(Operation, ...graph.Node) error
}

An Operation is represented by its name and a map of attributes. For example, the Convolution operator, as described in the ONNX spec, is represented like this:

convOperator := Operation{
		Name: "Conv",
		Attributes: map[string]interface{}{
			"auto_pad":  "NOTSET",
			"dilations": []int64{1, 1},
			"group":     1,
			"pads":      []int64{1, 1},
			"strides":   []int64{1, 1},
		},
	}

Besides operators, a node can carry a value. Values are described as tensor.Tensor. To carry data, a node of the graph should fulfill this interface:

type DataCarrier interface {
	SetTensor(t tensor.Tensor) error
	GetTensor() tensor.Tensor
}
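
To make these interfaces concrete, here is a hypothetical, minimal backend built on top of gonum's simple.WeightedDirectedGraph. It only records operations and carries tensors; the package and type names (mybackend, graphBackend, node) are illustrative, and the real gorgonnx backend is considerably more involved.

package mybackend

import (
	"github.com/owulveryck/onnx-go"
	"gonum.org/v1/gonum/graph"
	"gonum.org/v1/gonum/graph/simple"
	"gorgonia.org/tensor"
)

// node wraps a gonum node and can carry a tensor (it fulfills DataCarrier).
type node struct {
	graph.Node
	t tensor.Tensor
}

func (n *node) SetTensor(t tensor.Tensor) error { n.t = t; return nil }
func (n *node) GetTensor() tensor.Tensor        { return n.t }

// graphBackend embeds a weighted directed graph (which provides the
// graph.DirectedWeightedBuilder methods) and records the operations
// applied to its nodes.
type graphBackend struct {
	*simple.WeightedDirectedGraph
	ops map[int64]onnx.Operation
}

// compile-time check that graphBackend satisfies the Backend interface above
var _ onnx.Backend = &graphBackend{}

func newGraphBackend() *graphBackend {
	return &graphBackend{
		WeightedDirectedGraph: simple.NewWeightedDirectedGraph(0, 0),
		ops:                   make(map[int64]onnx.Operation),
	}
}

// NewNode returns a node that can also carry a tensor.
func (g *graphBackend) NewNode() graph.Node {
	return &node{Node: g.WeightedDirectedGraph.NewNode()}
}

// ApplyOperation records the operation against the target nodes; a real
// backend would translate it into its own computation primitives here.
func (g *graphBackend) ApplyOperation(o onnx.Operation, ns ...graph.Node) error {
	for _, n := range ns {
		g.ops[n.ID()] = o
	}
	return nil
}

Such a backend could then be passed to onnx.NewModel exactly like the simple and gorgonnx backends shown earlier.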

Backend testing

onnx-go provides some utilities to test a backend. Visit the testbackend package for more info.

Contributing

Contributions are welcome. A contribution guide will eventually be written. Meanwhile, you can raise an issue or send a PR. You can also contact me via Twitter or on the gophers' Slack (I am @owulveryck on both).

This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.

Author

Olivier Wulveryck

License

MIT.

Issues
  • Fix build at Raspberry PI

    Fix build at Raspberry PI

    Is your feature request related to a problem? Please describe. I'm having some errors during the build of the YOLO example with a Raspberry Pi.

    Describe the solution you'd like The solution is quite easy and most of the problems are related to gorgonia. I've already opened an issue. https://github.com/gorgonia/gorgonia/issues/311

    The only thing that I had to do in onnx-go is to update this dependency: replace github.com/chewxy/math32 => github.com/chewxy/math32 v1.0.1.

    Performance: I didn't do any profiling yet, but the performance is really slow on the Pi. Is it possible to use the Pi GPU to speed things up? On my Mac, the YOLO example takes something like 2s, and on the Pi 50s.

    I did some tests with some other projects like https://github.com/shizukachan/darknet-nnpack and got something like 500ms, 100x faster.

    opened by diegobernardes 20
  • Gorgonia's evaluation of the MNIST model does not give expected result

    Gorgonia's evaluation of the MNIST model does not give expected result

    This is related to the issue #2 I have with gorgonnx (the previous test implementation of onnx-to-gorgonia).

    The problem is exactly the same with the new version of the unmarshaler (from the directed-graph branch).

    To investigate, I will check every operator to see where the bug is hidden.

    To do so, I have created a test file here. This file contains the evaluated input and output of all the nodes that compose the MNIST model (from the ONNX model zoo).

    The next task is to evaluate all the tests to see if the results are ok.

    To help me, each test function generates a "numpy" compatible tensor for input and output. For simple operators, that should be enough to run them within python and to compute the expected result.

    Any help welcome.

    HOWTO:

    • go-get this repository
    • checkout the directed-graph branch
    • cd into examples/gorgonia
    • go run mnist.go runs the (unsuccessful) test (gorgonia has been vendored in this directory)
    • go test generates a numpy subdirectory with the test files.
    • find which operator is not ok

    Remark: I did not export the attributes of the Convolution operator yet, but you can find their values in the internal/examples/mnist directory

    opened by owulveryck 16
  • Analyse (and enhance) performances on multiple predictions

    Analyse (and enhance) performances on multiple predictions

    I have a demo model with 39 inputs. It takes 0.5s to predict 10000 samples using Keras; with onnx-go it takes 5s.

    	for i := 0; i < 10000; i++ {
    		model.SetInput(0, input)
    		err = backend.Run()
    		model.GetOutputTensors()
    	}
    

    Am I making some mistake here?

    Gorgonia / Gorgonnx 
    opened by bitnick10 13
  • refactor(internal/onnx/ir): rename internal/pb-onnx to internal/onnx/ir

    refactor(internal/onnx/ir): rename internal/pb-onnx to internal/onnx/ir

    In internal/pb-onnx, the package name is pb but the path ends with pb-onnx.

    In order to improve maintainability, I propose

    • to change the path to internal/onnx/<packagename>,
    • to rename it ir instead of pb to be compliant with the content (see @owulveryck comment)
    opened by rlespinasse 10
  • Is it possible to make onnx-go run distributedly?

    Is it possible to make onnx-go run distributedly?

    Is your feature request related to a problem? Please describe. I am a graduate student who needs a graduation project. I am thinking about whether it is necessary and feasible to make the program run distributedly.

    Describe the solution you'd like Possibly with the help of github.com/chrislusf/gleam ?

    opened by BruceWangNo1 10
  • Installation error

    Installation error

    When trying to install this package in a Go-modules-enabled project (golang:latest image), I got the following error:

    go: extracting github.com/tensorflow/tensorflow v1.13.1
    # github.com/owulveryck/onnx-go/internal/pb-onnx
    /go/pkg/mod/github.com/owulveryck/[email protected]/internal/pb-onnx/onnx.proto3.pb.go:22:11: undefined: proto.ProtoPackageIsVersion3
    # gorgonia.org/gorgonia
    /go/pkg/mod/gorgonia.org/[email protected]/graph.go:569:2: cannot use e (type edge) as type graph.Edge in return argument:
            edge does not implement graph.Edge (missing ReversedEdge method)
    /go/pkg/mod/gorgonia.org/[email protected]/node.go:437:16: n.shape.CalcStrides undefined (type tensor.Shape has no field or method CalcStrides)
    /go/pkg/mod/gorgonia.org/[email protected]/node.go:756:15: cannot use e (type edge) as type graph.Edge in argument to n.g.SetEdge:
            edge does not implement graph.Edge (missing ReversedEdge method)
    /go/pkg/mod/gorgonia.org/[email protected]/utils.go:147:28: undefined: tensor.InfChecker
    /go/pkg/mod/gorgonia.org/[email protected]/utils.go:190:28: undefined: tensor.NaNChecker
    
    bug 
    opened by mauri870 10
  • [TinyYolo v2] Bug in maxpool?

    [TinyYolo v2] Bug in maxpool?

    This commit allows the Tiny YOLO v2 model to be compiled and executed with Gorgonia.

    Sadly the execution does not give the expected result:

    ➜  model_zoo_executor git:(tiny-yolov2) ✗  export MODELDIR=~/Documents/tiny_yolov2
    ➜  model_zoo_executor git:(tiny-yolov2) ✗ go run main.go -model $MODELDIR/model.onnx -input $MODELDIR/test_data_set_0/input_0.pb -output $MODELDIR/test_data_set_0/output_0.pb
    
            Error Trace:    main.go:72
                                                    proc.go:200
                                                    asm_amd64.s:1337
            Error:          Max difference between -0.17929432 and 0.056231752 allowed is 0.005, but difference was -0.23552606999874115
            Messages:       the two tensors should be equal.
    exit status 1
    

    According to this blog post the architecture should be:

    Layer         kernel  stride  output shape
    ---------------------------------------------
    Input                          (416, 416, 3)
    Convolution    3×3      1      (416, 416, 16)
    MaxPooling     2×2      2      (208, 208, 16)
    Convolution    3×3      1      (208, 208, 32)
    MaxPooling     2×2      2      (104, 104, 32)
    Convolution    3×3      1      (104, 104, 64)
    MaxPooling     2×2      2      (52, 52, 64)
    Convolution    3×3      1      (52, 52, 128)
    MaxPooling     2×2      2      (26, 26, 128)
    Convolution    3×3      1      (26, 26, 256)
    MaxPooling     2×2      2      (13, 13, 256)
    Convolution    3×3      1      (13, 13, 512)
    MaxPooling     2×2      1      (13, 13, 512)
    Convolution    3×3      1      (13, 13, 1024)
    Convolution    3×3      1      (13, 13, 1024)
    Convolution    1×1      1      (13, 13, 125)
    ---------------------------------------------
    

    After setting some logs, the architecture of the decoded network is:

    +Convolution             (3, 3)          [1 1]           (1, 16, 416, 416)
    +MaxPooling              (2, 2)          [2 2]           (1, 16, 208, 208)
    +Convolution             (3, 3)          [1 1]           (1, 32, 208, 208)
    +MaxPooling              (2, 2)          [2 2]           (1, 32, 104, 104)
    +Convolution             (3, 3)          [1 1]           (1, 64, 104, 104)
    +MaxPooling              (2, 2)          [2 2]           (1, 64, 52, 52)
    +Convolution             (3, 3)          [1 1]           (1, 128, 52, 52)
    +MaxPooling              (2, 2)          [2 2]           (1, 128, 26, 26)
    +Convolution             (3, 3)          [1 1]           (1, 256, 26, 26)
    +MaxPooling              (2, 2)          [2 2]           (1, 256, 13, 13)
    +Convolution             (3, 3)          [1 1]           (1, 512, 13, 13)
    -MaxPooling              (2, 2)          [1 1]           (1, 512, 14, 14)
    -Convolution             (3, 3)          [1 1]           (1, 1024, 14, 14)
    -Convolution             (3, 3)          [1 1]           (1, 1024, 14, 14)
    -Convolution             (1, 1)          [1 1]           (1, 125, 14, 14)
    

    The last layer using the Maxpool operator does not give the correct output size. The padding used is computed from the auto_pad argument but seems ok (padding is [1,1]).

    It requires more investigation; maybe a bug in Gorgonia.

    Note: the computation is slow, but make it work, then make it fast.

    cc @chewxy

    bug 
    opened by owulveryck 9
  • Broadcasting is consuming a lot of memory in Gorgonnx/Gorgonia

    Broadcasting is consuming a lot of memory in Gorgonnx/Gorgonia

    Bench

    I've created this simple benchmark with the MNIST model to analyze the behavior of the code:

    package onnx_test
    
    import (
            "testing"
    
            "github.com/owulveryck/onnx-go"
            "github.com/owulveryck/onnx-go/backend/x/gorgonnx"
            "github.com/owulveryck/onnx-go/internal/examples/mnist"
            "gorgonia.org/tensor"
    )
    
    func BenchmarkUnmarshalBinary(b *testing.B) {
            input := tensor.New(tensor.WithShape(1, 1, 28, 28), tensor.Of(tensor.Float32))
            for n := 0; n < b.N; n++ {
                    // Create a backend receiver
                    backend := gorgonnx.NewGraph()
                    // Create a model and set the execution backend
                    model := onnx.NewModel(backend)
    
                    // Decode it into the model
                    err := model.UnmarshalBinary(mnist.GetMnist())
                    if err != nil {
                            b.Fatal(err)
                    }
                    // Set the first input, the number depends of the model
                    model.SetInput(0, input)
                    err = backend.Run()
                    if err != nil {
                            b.Fatal(err)
                    }
            }
    }
    

    Running this with go test -bench=. -benchmem -memprofile memprofile.out -cpuprofile profile.out -benchtime=10s generates two files to decode with the go profiler.

    CPU

    The result for the CPU is displayed here: mnist cpu flamegraph

    There are possible enhancements, but nothing obvious.

    Memory

    The result for the Memory usage is more interesting. It shows that the repeatOp of Gorgonia is using a lot of memory. The repeatOp is the foundation of the broadcasting.

    Screenshot 2019-05-28 at 11 15 46

    This op seems to copy a lot of data.

    gorgonia.Tensor

    The analysis points out that this function from the tensor package is involved in the extra memory consumption:

    https://github.com/gorgonia/tensor/blob/8eeece33868236224d51e7362e36a68642870bd2/array.go#L34-L51

    Especially this call to val.Interface()

    	return array{
    		Header: hdr,
    		t:      t,
    		v:      val.Interface(),
    	}
    

    According to the comment, this field is not even required by the array.

    // array is the underlying generic array.
    type array struct {
    	storage.Header             // the header - the Go representation (a slice)
    	t              Dtype       // the element type
    	v              interface{} // an additional reference to the underlying slice. This is not strictly necessary, but does improve upon anything that calls .Data()
    }
    

    On top of that, the reflect package from the stdlib has a TODO about something to enhance in the packEface function (packEface converts v to the empty interface):

    		if v.flag&flagAddr != 0 {
    			// TODO: pass safe boolean from valueInterface so
    			// we don't need to copy if safe==true?
    			c := unsafe_New(t)
    			typedmemmove(t, c, ptr)
    			ptr = c
    		}
    

    The safe flag is true when calling the Interface() function:

    // Interface returns v's current value as an interface{}.
    // It is equivalent to:
    //	var i interface{} = (v's underlying value)
    // It panics if the Value was obtained by accessing
    // unexported struct fields.
    func (v Value) Interface() (i interface{}) {
    	return valueInterface(v, true)
    }
    

    This suggests that avoiding the copy would significantly improve performance.

    cc @chewxy

    Gorgonia / Gorgonnx 
    opened by owulveryck 7
  • Added Squeeze operator

    Added Squeeze operator

    Hi,

    I've added the Squeeze operator, following a previous PR on Unsqueeze. Specifications are here: https://github.com/onnx/onnx/blob/master/docs/Operators.md#squeeze

    I'll try to add more tests, but do you have any tips to generate the model and put it in binary in the test?

    Thanks!

    opened by bezineb5 6
  • Implement operator Softmax for backend Gorgonia/Gorgonnx

    Implement operator Softmax for backend Gorgonia/Gorgonnx

    Why is this operator needed?

    This operator is needed at least to run the Inception v1 model.

    Implementation

    Link to existing material on the backend

    Expected problems?

    • Two versions of the operator exist in Gorgonia; we should decide whether we need the stable or the non-stable version.
    • The Softmax operator of ONNX carries one attribute (the axis for the softmax); this attribute does not exist in Gorgonia, so the full implementation of the operator may require tweaking Gorgonia.

    Tests

    go test -run=ONNX/TestSoftmax

    backend Gorgonia / Gorgonnx Operator Feature request 
    opened by owulveryck 6
  • Work in progress

    Work in progress

    About

    This is just the interface between the ONNX structures/files and the Go ecosystem. There is also an ongoing effort to implement a backend in Gorgonia, which is a computation library for Go.

    So far, the TensorProto import is partially implemented in the tensor lib, but I am waiting for more progress before I ask for a PR and a merge to the master branch.

    Regarding the Model and the Graph structures, I have started a POC which is quick'n'dirty for now (if you are interested, the code is here). My goal is to be able to run the MNIST example. So far I have generated an ExprGraph and I can run it, but the result is wrong. I am doing some bug hunting.

    Next

    Once the POC is working, I will do some PR, and start a complete integration process of ONNX into Gorgonia (it may need some tooling and enhanced testing). Meanwhile, if you have any idea for enhancing the onnx-go repo, please feel free to open an issue or a PR.

    cc @jspisak @prasanthpul @lupesko @bddppq

    opened by owulveryck 5
  • ../../go/src/gorgonia.org/tensor/dense_compat.go:442:23: undefined: array.Interface

    ../../go/src/gorgonia.org/tensor/dense_compat.go:442:23: undefined: array.Interface

    When I try to run or install the gorgonia/tensor package, I get the above-mentioned error.

    I know this is a version issue, but I have uninstalled and reinstalled several times and the issue is still the same.

    Could you please help me solve it?

    opened by manjunathsc123 0
  • panic: negative dimension size does not make sense

    panic: negative dimension size does not make sense

    • The model file is downloaded from https://bj.bcebos.com/paddle2onnx/model_zoo/ch_ppocr_mobile_v2.0_cls_infer.onnx
    package main
    
    import (
    	_ "embed"
    	"fmt"
    	"github.com/owulveryck/onnx-go"
    	"github.com/owulveryck/onnx-go/backend/x/gorgonnx"
    )
    
    //go:embed model/ch_ppocr_mobile_v2.0_cls_infer.onnx
    var gModelData []byte
    
    func main() {
    	// Create a backend receiver
    	backend := gorgonnx.NewGraph()
    	//backend := simple.NewSimpleGraph()
    	// Create a model and set the execution backend
    	model := onnx.NewModel(backend)
    
    	// read the onnx model
    	// Decode it into the model
    	err := model.UnmarshalBinary(gModelData)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(model.Input, model.Output)
    }
    
    • output:
    panic: negative dimension size does not make sense
    
    goroutine 1 [running]:
    gorgonia.org/tensor.Shape.CalcStrides({0xc0003f3660, 0x4, 0xc0003f3640})
    	C:/local/gopath/pkg/mod/gorgonia.org/[email protected]/shape.go:45 +0xa7
    gorgonia.org/tensor.(*AP).calcStrides(0x4)
    	C:/local/gopath/pkg/mod/gorgonia.org/[email protected]/ap.go:348 +0x4e
    gorgonia.org/tensor.(*AP).SetShape(0xc000489b00, {0xc0003f3640, 0x4, 0x120})
    	C:/local/gopath/pkg/mod/gorgonia.org/[email protected]/ap.go:83 +0x1b4
    gorgonia.org/tensor.(*Dense).setShape(0xc000489b00, {0xc0003f3640, 0xc000111c78, 0x3eda07})
    	C:/local/gopath/pkg/mod/gorgonia.org/[email protected]/dense.go:264 +0x27
    gorgonia.org/tensor.WithShape.func1({0xb3ec10, 0xc000489b00})
    	C:/local/gopath/pkg/mod/gorgonia.org/[email protected]/consopt.go:83 +0xbe
    gorgonia.org/tensor.New({0xc000475ac0, 0x2, 0x4})
    	C:/local/gopath/pkg/mod/gorgonia.org/[email protected]/tensor.go:91 +0x4d
    github.com/owulveryck/onnx-go.(*Model).processValue(0xc000111f28, 0xc0000a7040)
    	C:/local/gopath/pkg/mod/github.com/owulveryck/[email protected]/decoder.go:81 +0x345
    github.com/owulveryck/onnx-go.(*Model).applyModelProtoGraph(0xc000111f28, 0xc0000a8000)
    	C:/local/gopath/pkg/mod/github.com/owulveryck/[email protected]/decoder.go:126 +0x145
    github.com/owulveryck/onnx-go.(*Model).decodeProto(0xebd200, 0x8e8d3)
    	C:/local/gopath/pkg/mod/github.com/owulveryck/[email protected]/decoder.go:112 +0x18a
    github.com/owulveryck/onnx-go.(*Model).UnmarshalBinary(0x2b51e3ab7d0, {0xebd200, 0x8e8d3, 0x8e8d3})
    	C:/local/gopath/pkg/mod/github.com/owulveryck/[email protected]/decoder.go:38 +0x75
    main.main()
    	D:/go-projects/test2/main.go:22 +0x211
    
    opened by orestonce 0
  • "Asymmetric padding" error

    I'm trying to get an existing model to run but I'm getting the following error message:

    onnx: operator conv. Implementation error for attribute pads (1): Asymetric padding
    

    From what I can see, it fails here: https://github.com/owulveryck/onnx-go/blob/12c4d2ede603d7d87a26ad4be469a37a6c38e94b/backend/x/gorgonnx/conv.go#L101

    Unfortunately, my familiarity with ONNX and Tensorflow (and I guess ML in general) is very limited so I don't really know what this means. I saw issue #74 which appears to be similar and was resolved but for maxpool, not for conv.

    What are my options for making this work?

    opened by rivo 0
  • chore(deps): bump github.com/gogo/protobuf from 1.3.1 to 1.3.2

    chore(deps): bump github.com/gogo/protobuf from 1.3.1 to 1.3.2

    Bumps github.com/gogo/protobuf from 1.3.1 to 1.3.2.

    Release notes

    Sourced from github.com/gogo/protobuf's releases.

    Release v.1.3.2

    Tested versions:

    go 1.15.6 protoc 3.14.0

    Bug fixes:

    skippy peanut butter

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Implement operator `LinearRegressor` for backend `gorgonia`

    Implement operator `LinearRegressor` for backend `gorgonia`

    Why is this operator needed?

    I get the error [ERROR] onnx: operator LinearRegressor not implemented () with the backend gorgonia

    Implementation

    Link to existing material on the backend

    N/A

    Expected problems?

    N/A

    Tests

    go test -run=ONNX/TestOperator

    backend Operator Feature request 
    opened by Sauci 0