Spice.ai is an open source, portable runtime for training and using deep learning on time series data.

Overview

Spice.ai



⚠️ DEVELOPER PREVIEW ONLY Spice.ai is in active alpha-stage development and is not intended for production use until its 1.0-stable release.


The vision for Spice.ai is to make creating intelligent applications as easy as building a modern website. Spice.ai brings AI development to your editor, in any language or framework, with a fast, iterative inner development loop and continuous-integration (CI) and continuous-deployment (CD) workflows.

Spice.ai is written in Go and Python and runs as a container or microservice, with applications calling a simple HTTP API. It's deployable to any public cloud, on-premises, or the edge.

📢 Read the Spice.ai announcement blog post at blog.spiceai.org.

📺 View a 60 second demo of Spice.ai in action here.

Community-driven data components

The Spice.ai runtime also includes a library of community-driven data components for streaming and processing time series data, enabling developers to quickly and easily combine data with learning to create intelligent models.

Spice.ai pod registry

Modern developers also build with the community by leveraging registries such as npm, NuGet, and pip. The registry for sharing and using Spice.ai packages is spicerack.org. As the community shares more and more AI building blocks, developers can quickly build intelligence into their applications, initially with definitions of AI projects and eventually by sharing and reusing fully-trained models.

Pre-release software

⚠️ The vision to bring intelligent application development to the maturity of modern web development is a vast undertaking. We haven't figured it all out or solved all the problems yet, and we're looking for feedback on the direction. Spice.ai is not finished; in fact, we only just started in June, and we invite you on the journey.

Spice.ai and spicerack.org are both pre-release, early alpha software. Spice.ai v0.1-alpha has many gaps, including limited deep learning algorithms and training scale, and missing streaming data, simulated environments, and offline learning modes. Packages aren't searchable or even listed on spicerack.org yet.

Our intention with this preview is to work with developers early to co-define and co-develop the developer experience, aligned with the goal of making AI easy for developers. 🚀 Due to the current stage of development and our focus, there are several limitations, listed on the general Roadmap to v1.0-stable.

Join us!

We greatly appreciate and value your feedback. Please feel free to file an issue and get in touch with the team through Discord or by sending us mail at [email protected].

Thank you for sharing this journey with us! 🙏

Getting started with Spice.ai

First, ⭐️ star this repo! Thank you for your support! 🙏

Then, follow this guide to get started quickly with Spice.ai. For a more comprehensive getting started guide, see the full online documentation.

Current hosting limitations

  • Docker is required. We are targeting self-host support in v0.3.0-alpha.
  • Only macOS and Linux are natively supported. WSL 2 is required for Windows.
  • arm64 is not yet supported (e.g., Apple's M1 Macs). We use M1s ourselves, so we hope to support this very soon :-)

⭐️ We highly recommend using GitHub Codespaces to get started. Codespaces enables you to run Spice.ai in a virtual environment in the cloud. If you use Codespaces, the install is not required and you may skip to the Getting Started with Codespaces section.

Installation (local machine)

  1. Install Docker
  2. Install the Spice CLI

Step 1. Install Docker: While self-hosting on bare-metal hardware will be supported, the Developer Preview currently requires Docker. To install Docker, please follow these instructions.

Step 2. Install the Spice CLI: Run the following curl command in your terminal.

curl https://install.spiceai.org | /bin/bash

You may need to restart your terminal for the spice command to be added to your PATH.

Getting started with Codespaces

The recommended way to get started with Spice.ai is to use GitHub Codespaces.

Create a new GitHub Codespace in the spiceai/quickstarts repo at github.com/spiceai/quickstarts/codespaces.

Once you open the Codespace, Spice.ai and everything you need to get started will already be installed. Continue on to train your first pod.

Create your first Spice.ai Pod and train it

A Spice.ai Pod is simply a collection of configuration and data that is used to train and deploy your own AI.
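
For illustration, a pod manifest is just YAML. A minimal sketch might look like the following (field names are borrowed from examples that appear elsewhere in this document; the actual serverops manifest may differ):

```yaml
name: serverops
params:
  granularity: 10s   # how finely time-series data is bucketed
  interval: 1h
  period: 720h
dataspaces:
  - from: metrics    # hypothetical data source for this sketch
    name: cpu
    measurements:
      # measurement definitions omitted for brevity
```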

We will add intelligence to a sample application, ServerOps, by creating and training a Spice.ai pod that offers recommendations to the application for different server operations, such as performing server maintenance.

If you are using GitHub Codespaces, skip Step 1. and continue with Step 2., as the repository will already be cloned.

Step 1. Clone the Spice.ai quickstarts repository:

cd $HOME
git clone https://github.com/spiceai/quickstarts
cd quickstarts/serverops

Step 2. Start the Spice runtime with spice run:

cd $HOME/quickstarts/serverops
spice run

Step 3. In a new terminal, add the ServerOps quickstart pod:

So that we can leave Spice.ai running, add the quickstart pod in a new terminal tab or window. If you are running in GitHub Codespaces, you can open a new terminal by clicking the split-terminal button in VS Code.

spice add quickstarts/serverops

The Spice.ai CLI will download the ServerOps quickstart pod and add the pod manifest to your project at spicepods/serverops.yaml.

The Spice runtime will then automatically detect the pod and start your first training run!

Note: automatic training relies on your system's filewatcher. In some cases, this might be disabled or not work as expected. If training does not start, retrain the pod manually using the command below.

Observe the pod training

Navigate to http://localhost:8000 in your favorite browser. You will see an overview of your pods. From here, you can click on the serverops pod to see a chart of the pod's training progress.

Retrain the pod

In addition to automatic training upon manifest changes, training can be started by using the Spice CLI from within your app directory.

spice train serverops

Get a recommendation

After training the pod, you can now get a recommendation for an action from it!

curl http://localhost:8000/api/v0.1/pods/serverops/recommendation

Run the ServerOps application

To see how Spice.ai makes creating intelligent applications easy, try running and reviewing the sample ServerOps Node and PowerShell apps, serverops.js and serverops.ps1.

Node:

npm install
node serverops.js

PowerShell:

./serverops.ps1

Next steps

Congratulations! In just a few minutes you downloaded and installed the Spice.ai CLI and runtime, created your first Spice.ai Pod, trained it, and got a recommendation from it.

This is just the start of the journey with Spice.ai. Next, try one of the quickstart tutorials or in-depth samples for creating intelligent applications.

Try:

  • ServerOps sample - a more in-depth version of the quickstart you just completed, using CPU metrics from your own machine
  • Gardener - Intelligently water a simulated garden
  • Trader - a basic Bitcoin trading bot

Community

Spice.ai started with the vision to make AI easy for developers. We are building Spice.ai in the open and with the community. Reach out on Discord or by email to get involved. We will be starting a community call series soon!

Contributing to Spice.ai

See CONTRIBUTING.md.

Issues
  • Dataspace Interpolation/Sparse data

    Some data isn't meant to be interpolated, so interpolation should be an option. If that option is chosen, categories shouldn't be interpolated. A warning stating that the given names will be ignored while data interpolation is enabled would prevent future misunderstanding.

    enhancement 
    opened by Corentin-pro 5
  • Remove table_lock during a training run


    Part of #392

    To improve the performance (speed) of training, it was important to remove acquiring the lock table_lock on every step of the training run. This lock was used to control when data was added to the data_manager for the pod to prevent conflicts where the table is being updated at the same time we are querying it during training.

    The solution proposed here will remove the need for the table_lock during training by copying the data to be used during a training run. Data that is added to the data_manager will continue to be made available immediately for inferencing.

    A secondary change that speeds up the processing of the AddData request is to not take a lock when adding data. Incoming data will be resampled to match the indexes of the existing table. Merging of the table has been improved to use pandas.concat followed by a resample which will not touch existing indexes.
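
The concat-then-resample merge described above can be sketched as follows (illustrative data only, assuming a 10-second granularity; this is not the actual data_manager code):

```python
import pandas as pd

# Existing table with a 10s time index, plus newly arrived data.
existing = pd.DataFrame(
    {"price": [1.0, 2.0]},
    index=pd.to_datetime(["2021-01-01 00:00:00", "2021-01-01 00:00:10"]),
)
incoming = pd.DataFrame(
    {"price": [2.5]},
    index=pd.to_datetime(["2021-01-01 00:00:12"]),
)

# Concatenate, then resample back to the table's granularity; existing
# index labels are preserved while new rows fold into their time bins.
merged = pd.concat([existing, incoming]).resample("10s").last()

# A training run would operate on an independent copy, so no lock is
# needed while new data continues to arrive for inferencing.
training_snapshot = merged.copy()
```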

    enhancement 
    opened by phillipleblanc 4
  • Add algorithm selection for training/importing

    I haven't checked the tests yet, but I think this is a good base to see the direction of this feature and discuss. Everything is working from my manual tests, but I might have missed some things.

    Resolves #327

    Summary of the commit :

    • Add 'algorithm' key in aiengine protos (StartTrainingRequest, ImportModelRequest)
    • Add 'algorithm' key in runtime protos (TrainModel, ImportModel)
    • Clean code (PEP8, flake) for ai scripts : main.py and train.py
    • Reduce TensorFlow verbosity before import to avoid unnecessary info logs (main.py l.318)
    • Add 'algorithm' flag for import and train command (CLI)
    enhancement 
    opened by Corentin-pro 4
  • Support ingestion of transaction/correlation ids


    Enable the engine to ingest transaction or correlation ids, like order_id or trace_id.

    To ingest unique ids today, they would need to be set as categories, which is not scalable beyond a few ids. The initial version of this feature would simply enable them to be ingested and round-tripped through the data APIs. In the second iteration, they would be available for reference in reward functions, and in the third, multiple rows of data could be automatically correlated and flattened on the id.

    Proposed manifest

    dataspaces:
      - from: coinbase
        name: btcusd
        identifiers:
          - transaction_id
          - tick_id
        measurements:
          ...
    

    alternative proposal:

    dataspaces:
      - from: coinbase
        name: btcusd
        transactions:
          identifiers:
            - transaction_id
            - tick_id
        measurements:
          ...
    

    alternative proposal:

    dataspaces:
      - from: coinbase
        name: btcusd
        transactions:
          identifiers:
            - transaction_id
            - tick_id
          operator: and | or
        measurements:
          ...
    

    alternative proposal:

    dataspaces:
      - from: coinbase
        name: btcusd
        correlation_ids:
          - transaction_id
          - tick_id
        measurements:
          ...
    
    enhancement 
    opened by lukekim 3
  • Wrong algorithm name silently fails at training

    When using the --learning-algorithm CLI parameter, if the name is not either vpg or dql, the training will say it is starting but will silently fail.

    Reference: https://docs.spiceai.org/deep-learning-ai/

    Validation should be done server-side (in spiced), with the error being passed back to the CLI over the HTTP call.
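
The suggested server-side check could be sketched like this (function name and error text are hypothetical; the real fix would live in spiced and return the error over the HTTP response):

```python
# Algorithm names from the issue above; the supported set may grow.
SUPPORTED_ALGORITHMS = {"vpg", "dql"}

def validate_learning_algorithm(name: str) -> str:
    """Reject unknown algorithm names up front instead of failing silently."""
    if name not in SUPPORTED_ALGORITHMS:
        raise ValueError(
            f"unknown learning algorithm '{name}'; "
            f"expected one of {sorted(SUPPORTED_ALGORITHMS)}"
        )
    return name
```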

    bug 
    opened by Corentin-pro 3
  • spice pod train can return 'Not found' error if pod not loaded in runtime


    Hey,

    Just encountered this error: I had run spice run in one terminal and went to spice train gardener in another, but got:

    failed to start training: 404 Not Found
    

    The source of the problem is: https://github.com/spiceai/spiceai/blob/45a7979a5741b58514076042b10c34dee5c2c0f7/pkg/cli/cmd/train.go#L78

    If the pod is not loaded by the runtime (which it wasn't, because on WSL 2 it seems I need to restart the runtime after adding a pod via the CLI), I got this 404 error, which was a bit confusing.

    Perhaps another error (pod xxx may not be added?) would be helpful. Cheers.

    bug 
    opened by Plasma 3
  • Error on getting a recommendation on v0.3 compatible quickstarts/tweet-recommendation


    Debug this. The issue is passing a negative value to np.exp() in the soft_max function. The input array is the result of model.predict().

    Console output:

    /workspaces/spiceai/ai/src/algorithms/dql/agent.py:34: RuntimeWarning: overflow encountered in exp
      exp_q_values = np.exp(q_values)
    /workspaces/spiceai/ai/src/algorithms/dql/agent.py:35: RuntimeWarning: invalid value encountered in true_divide
      return exp_q_values / np.sum(exp_q_values)
    

    Callstack, overflow line, and input/output screenshots are attached to the issue.
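
A standard fix for this class of overflow is to subtract the maximum Q-value before exponentiating; a sketch is below (illustrative only, not the actual agent.py patch):

```python
import numpy as np

def stable_soft_max(q_values):
    """Numerically stable softmax: shifting by the max leaves the result
    unchanged mathematically but keeps np.exp from overflowing."""
    q = np.asarray(q_values, dtype=np.float64)
    shifted = q - np.max(q)      # largest exponent becomes 0
    exp_q = np.exp(shifted)
    return exp_q / np.sum(exp_q)
```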

    bug 
    opened by phillipleblanc 2
  • Update to TensorFlow 2.6.1 and freeze Keras to 2.6.0


    Also, tweak the waitForTrainingComplete timeout for ImportExport to fix flaky e2e test.

    See Issue thread for details on 2.6.1 + Keras: https://github.com/tensorflow/tensorflow/issues/52922

    dependencies 
    opened by lukekim 2
  • Name of algorithm used in the dashboard training run


    When a training run is in-progress or has been completed, the name of the learning algorithm used is not displayed. If training with different algorithms, it would be useful to compare the results of one algorithm with the other, and for that, we need to know which algorithm was used.

    good first issue
    opened by phillipleblanc 2
  • Can't get 'spice run'

    Hi, I just followed the instructions, but after running spice run it downloads the Docker images and then fails with this error: Error: Failed to verify health of localhost:8004 after 121 attempts (exit status 1)

    I'm running on

    • Ubuntu 18
    • Docker 20.10.6, build 370c289
    opened by NeverVane 2
  • Explicit development and production runtime modes

    In production, there is no file-watcher for changes in the pod manifest or configuration; production deployments are expected to use specific, pre-provided manifests.

    Development mode will be set on spiced with -development, defaulting to "production" if not provided. The spice CLI will always set the -development flag when it starts spiced.

    enhancement 
    opened by lukekim 2
  • [AI Engine] Migrate from Pandas to pyarrow


    What

    Migrate the internal data structures of the AI engine Python code to use pyarrow. Final tensor input may still use Pandas format using the appropriate pyarrow function.

    Why?

    This is step one in unifying the data format across the runtime and AI engine and improving performance across the entire platform.

    Corentin to break down into issues to work on.

    Includes

    • Actual work
    • Functional Tests (E.g. we haven't regressed from Pandas)
    • Performance Tests (we show performance has improved)
    • Documentation
    enhancement 
    opened by lukekim 0
  • When specifying the `--number-episodes` command-line parameter, the dashboard doesn't use that value

    Describe the bug: The dashboard uses the default number of episodes instead of the number of episodes requested for a training run.

    To Reproduce

    1. Clone the spiceai/quickstarts repo.
    2. Navigate to the serverops directory in the quickstarts repo.
    3. Run spice add quickstarts/serverops
    4. Run spice run
    5. (In a separate console window) Run spice train serverops --number-episodes 20
    6. Open a browser to http://localhost:8000/pods/serverops and observe the denominator for the episode progress is not correct

    Expected behavior: The dashboard would show the correct denominator for the episode progress.


    bug good first issue 
    opened by phillipleblanc 0
  • Python packages for different version of python

    Using spiceai outside Docker can cause Python package problems if the system has a Python version other than 3.8.

    One option is pinning the package versions only for 3.8 and letting the virtual environment use the system packages for other versions (with a warning message).

    Another approach would be to have different requirements files for various Python versions.

    enhancement 
    opened by Corentin-pro 0
  • Proposal: Learning Delay


    WHY

    For many types of problems, it isn't possible to give a reward for an action until more time has passed. Imagine a Spicepod with a time granularity of 10s: during training, the reward function will be scored based on data from 10s after the action was taken. For a trading pod, that isn't enough time to judge whether the trade was good or not.

    A possible solution in the existing system would be to crank up the granularity higher, say to something like 4 hours. Then in the reward function the next state would be the price from 4 hours later, which might give you a better idea on whether the trade was good or not.

    But cranking up the granularity means that more data points are collapsed into a single measurement, which means you need a lot more data to train something useful. It also means that the real-time recommendations will only change every 4 hours, which might not be acceptable for your application.

    For an application operating on real-time data, the smaller the granularity the more useful the recommendations become. But a smaller granularity makes it harder to write a reward function - as it may not be possible to judge how good an action is until later.

    Introducing a "learning delay" would decouple the granularity from when the reward function is run on the data. This allows more flexibility in the training and makes it easier to write a good reward function.

    WHAT

    A "learning delay" is used to delay rewards until a specified time has passed. During the initial "learning delay" interval, no rewards are given because the reward function is not invoked. After the initial "learning delay" the reward function is invoked with the action taken at the beginning of the "learning delay" interval, along with the state at that time and the state after the "learning delay" interval has passed.

    Spicepod manifest

    name: defly_slippage
    params:
      epoch_time: 1612557000
      granularity: 15s
      learning_delay: 2m
      interval: 1h
      period: 720h
    

    Reward function definition

    current_state = the state at the beginning of the learning delay interval
    next_state = the state at the end of the learning delay interval

    HOW

    We would need to keep a list of actions made during a training run, along with the index of the state at that time in the dataframe and the time. During the training loop, if the run hasn't progressed as far as the learning delay, it uses a reward of 0. (Open question: do we want to discard these experiences instead of incorporating them with a reward of 0?)

    Once it advances past the learning delay, it will call the reward function with the action taken "learning delay" time ago, along with the state at that time as the current_state. It also returns the later state as the next_state.
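
The bookkeeping described in HOW could be sketched as a small buffer (illustrative; the class name and structure are assumptions, not the actual implementation):

```python
from collections import deque

class DelayedRewardBuffer:
    """Holds (state, action) pairs until the learning delay has elapsed.

    delay_steps = learning_delay / granularity; e.g. with granularity 15s
    and learning_delay 2m, delay_steps would be 8.
    """

    def __init__(self, delay_steps: int):
        self.delay_steps = delay_steps
        self.pending = deque()

    def step(self, state, action, new_state):
        """Record the current transition. Returns None while still inside
        the learning delay (reward 0); afterwards, returns the delayed
        (action, current_state, next_state) tuple for the reward function."""
        self.pending.append((state, action))
        if len(self.pending) <= self.delay_steps:
            return None
        past_state, past_action = self.pending.popleft()
        return past_action, past_state, new_state
```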

    opened by phillipleblanc 0
Releases
  • v0.6.1-alpha (Apr 21, 2022)

    Announcing the release of Spice.ai v0.6.1-alpha! 🌶

    Building upon the Apache Arrow support in v0.6-alpha, Spice.ai now includes new Apache Arrow data processor and Apache Arrow Flight data connector components! Together, these create a high-performance bulk-data transport directly into the Spice.ai ML engine. Coupled with big data systems from the Apache Arrow ecosystem like Hive, Drill, Spark, Snowflake, and BigQuery, it's now easier than ever to combine big data with Spice.ai.

    And we're also excited to announce the release of Spice.xyz! 🎉

    Spice.xyz is data and AI infrastructure for web3. It's web3 data made easy: insanely fast and purpose-designed for applications and ML.

    Spice.xyz delivers data in Apache Arrow format, over high-performance Apache Arrow Flight APIs to your application, notebook, ML pipeline, and of course through these new data components, to the Spice.ai runtime.

    Read the announcement post at blog.spice.ai.

    New in this release

    Now built with Go 1.18.

    Dependency updates

    • Updates to React 18
    • Updates to CRA 5
    • Updates to Glide DataGrid 4
    • Updates to SWR 1.2
    • Updates to TypeScript 4.6
  • v0.6-alpha (Feb 8, 2022)

    Announcing the release of Spice.ai v0.6-alpha! 🏹

    Spice.ai now scales to datasets 10-100x larger, enabling new classes of use cases and applications! 🚀 We've completely rebuilt Spice.ai's data processing and transport upon Apache Arrow, a high-performance platform that uses an in-memory columnar format. Spice.ai joins other major projects, including Apache Spark, pandas, and InfluxDB, in being powered by Apache Arrow. This also paves the way for high-performance data connections to the Spice.ai runtime using Apache Arrow Flight and import/export of data using Apache Parquet. We're incredibly excited about the potential this architecture has for building intelligent applications on top of a high-performance transport between application data sources and the Spice.ai AI engine.

    Highlights in v0.6-alpha

    Massive improvement in data loading performance and dataset scale

    From data connectors, to REST API, to AI engine, we've now rebuilt Spice.ai's data processing and transport on the Apache Arrow project. Specifically, using the Apache Arrow for Go implementation. Many thanks to Matt Topol for his contributions to the project and guidance on using it.

    This release includes a change to the Spice.ai runtime-to-AI-engine transport from sending text CSV over gRPC to Apache Arrow Records over IPC (Unix sockets).

    This is a breaking change to the Data Processor interface, as it now uses arrow.Record instead of Observation.

    Benchmarking v0.6

    Before v0.6, Spice.ai would not scale into the hundreds of thousands of rows.

    | Format | Row Number | Data Size | Process Time | Load Time | Transport Time | Memory Usage |
    | ------ | ---------- | --------- | ------------ | --------- | -------------- | ------------ |
    | csv    | 2,000      | 163.15KiB | 3.0005s      | 0.0000s   | 0.0100s        | 423.754MiB   |
    | csv    | 20,000     | 1.61MiB   | 2.9765s      | 0.0000s   | 0.0938s        | 479.644MiB   |
    | csv    | 200,000    | 16.31MiB  | 0.2778s      | 0.0000s   | NA (error)     | 0.000MiB     |
    | csv    | 2,000,000  | 164.97MiB | 0.2573s      | 0.0050s   | NA (error)     | 0.000MiB     |
    | json   | 2,000      | 301.79KiB | 3.0261s      | 0.0000s   | 0.0282s        | 422.135MiB   |
    | json   | 20,000     | 2.97MiB   | 2.9020s      | 0.0000s   | 0.2541s        | 459.138MiB   |
    | json   | 200,000    | 29.85MiB  | 0.2782s      | 0.0010s   | NA (error)     | 0.000MiB     |
    | json   | 2,000,000  | 300.39MiB | 0.3353s      | 0.0080s   | NA (error)     | 0.000MiB     |

    After building on Arrow, Spice.ai now easily scales beyond millions of rows.

    | Format | Row Number | Data Size | Process Time | Load Time | Transport Time | Memory Usage |
    | ------ | ---------- | --------- | ------------ | --------- | -------------- | ------------ |
    | csv    | 2,000      | 163.14KiB | 2.8281s      | 0.0000s   | 0.0194s        | 439.580MiB   |
    | csv    | 20,000     | 1.61MiB   | 2.7297s      | 0.0000s   | 0.0658s        | 461.836MiB   |
    | csv    | 200,000    | 16.30MiB  | 2.8072s      | 0.0020s   | 0.4830s        | 639.763MiB   |
    | csv    | 2,000,000  | 164.97MiB | 2.8707s      | 0.0400s   | 4.2680s        | 1897.738MiB  |
    | json   | 2,000      | 301.80KiB | 2.7275s      | 0.0000s   | 0.0367s        | 436.238MiB   |
    | json   | 20,000     | 2.97MiB   | 2.8284s      | 0.0000s   | 0.2334s        | 473.550MiB   |
    | json   | 200,000    | 29.85MiB  | 2.8862s      | 0.0100s   | 1.7725s        | 824.089MiB   |
    | json   | 2,000,000  | 300.39MiB | 2.7437s      | 0.0920s   | 16.5743s       | 4044.118MiB  |

    New in this release

    Dependency updates

    • Updates to numpy 1.21.0
    • Updates to marked 3.0.8
    • Updates to follow-redirects 1.14.7
    • Updates nanoid to 3.2.0
  • v0.5.1-alpha (Dec 28, 2021)

    Announcing the release of Spice.ai v0.5.1-alpha! 📈

    This minor release builds upon v0.5-alpha adding the ability to start training from the dashboard plus support for monitoring training runs with TensorBoard.

    Highlights in v0.5.1-alpha

    Start training from dashboard

    A "Start Training" button has been added to the pod page on the dashboard so that you can easily start training runs from that context.

    Training runs can now be started by:

    • Modifications to the Spicepod YAML file.
    • The spice train command.
    • The "Start Training" dashboard button.
    • POST API calls to /api/v0.1/pods/{pod name}/train

    Video: https://user-images.githubusercontent.com/80174/146122241-f8073266-ead6-4628-8563-93e98d74e9f0.mov

    TensorBoard monitoring

    TensorBoard monitoring is now supported when using DQL (default) or the new SACD learning algorithm that was announced in v0.5-alpha.

    When enabled, TensorBoard logs will automatically be collected, and an "Open TensorBoard" button will be shown on the pod page in the dashboard.

    Logging can be enabled at the pod level with the training_loggers pod param or per training run with the CLI --training-loggers argument.

    Video: https://user-images.githubusercontent.com/80174/146382503-2bb2570b-5111-4de0-9b80-a1dc4a5dcc35.mov

    Support for VPG will be added in v0.6-alpha. The design allows for additional loggers to be added in the future. Let us know what you'd like to see!

    New in this release

    • Adds a start training button on the dashboard pod page.
    • Adds TensorBoard logging and monitoring when using DQL and SACD learning algorithms.

    Dependency updates

    • Updates to Tailwind 3.0.6
    • Updates to Glide Data Grid 3.2.1
  • v0.5-alpha (Dec 8, 2021)

    We are excited to announce the release of Spice.ai v0.5-alpha! 🥇

    Highlights include a new learning algorithm called "Soft Actor-Critic" (SAC), fixes to the behavior of spice upgrade, and a more consistent authoring experience for reward functions.

    If you are new to Spice.ai, check out the getting started guide and star spiceai/spiceai on GitHub.

    Highlights in v0.5-alpha

    Soft Actor-Critic (Discrete) (SAC) Learning Algorithm

    The addition of the Soft Actor-Critic (Discrete) (SAC) learning algorithm is a significant improvement to the power of the AI engine. It is not set as the default algorithm yet, so to start using it, pass the --learning-algorithm sacd parameter to spice train. We'd love to get your feedback on how it's working!

    Consistent reward authoring experience

    With the addition of the reward function files that allow you to edit your reward function in a Python file, the behavior of starting a new training session by editing the reward function code was lost. With this release, that behavior is restored.

    In addition, there is a breaking change to the variables used to access the observation state and interpretations. This change was made to better reflect the purpose of the variables and to make them easier to work with in Python.

    | Previous (Type)                   | New (Type)                           |
    | --------------------------------- | ------------------------------------ |
    | prev_state (SimpleNamespace)      | current_state (dict)                 |
    | prev_state.interpretations (list) | current_state_interpretations (list) |
    | new_state (SimpleNamespace)       | next_state (dict)                    |
    | new_state.interpretations (list)  | next_state_interpretations (list)    |
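
As an illustration of the renamed variables (the "price" measurement and the reward expression below are hypothetical, not taken from an actual quickstart):

```python
# current_state / next_state are now plain dicts keyed by measurement name.
# "price" is an assumed measurement name for illustration only.
current_state = {"price": 100.0}
next_state = {"price": 103.0}

# Reward the action by the observed price change across the step.
reward = next_state["price"] - current_state["price"]
```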

    Improved spice upgrade behavior

    The Spice.ai CLI will no longer recommend "upgrading" to an older version. An issue was also fixed where trying to upgrade the Spice.ai CLI using spice upgrade on Linux would return an error.

    New in this release

    • Adds a new learning algorithm called "Soft-Actor Critic (Discrete)" (SAC).
    • Updates the reward function parameters for the YAML code blocks from prev_state and new_state to current_state and next_state to be consistent with the reward function files.
    • Fixes an issue where editing a reward functions file would not automatically trigger training.
    • Fixes the normalization of values for the Deep-Q Learning algorithm to handle larger values.
    • Fixes an issue where the Spice.ai CLI would not upgrade on Linux with the spice upgrade command.
    • Fixes an issue where the Spice.ai CLI would recommend an "upgrade" to an older version.
  • v0.4.1-alpha (Nov 23, 2021)

    Announcing the release of Spice.ai v0.4.1-alpha! ✅

    This point release focuses on fixes and improvements to v0.4-alpha. Highlights include AI engine performance improvements, updates to the dashboard observations data grid, notification of new CLI versions, and several bug fixes.

    A special acknowledgment to @Adm28, who added the CLI upgrade detection and prompt, which notifies users of new CLI versions and prompts to upgrade.

    Highlights in v0.4.1-alpha

    AI engine performance improvements

    Overall training performance has been improved by up to 13% by removing a lock in the AI engine.

    In versions before v0.4.1-alpha, performance was especially impacted when streaming new data during a training run.

    Dashboard Observations Datagrid

    The dashboard observations datagrid now automatically resizes to the window width, and headers are easier to read, with automatic grouping into dataspaces. In addition, column widths are also resizable.

    CLI version detection and upgrade prompt

    When run, the Spice.ai CLI will now automatically check for new CLI versions at most once a day.

    If it detects a new version, it will print a notification to the console on the spice version, spice run, or spice add commands, prompting the user to upgrade using the new spice upgrade command.

    New in this release

    • Adds automatic resizing of the observations datagrid.
    • Adds header group by dataspace to the observations datagrid.
    • Adds CLI version detection and prompt for upgrade on version, run, and add commands.
    • Adds support for parsing hex-encoded times and measurements. Use the time_format of hex or prefix values with 0x.
    • Updates AI engine with improved training performance.
    • Updates Go and NPM dependencies.
    • Fixes detection of Spicepods in the Spicepods directory, and a resulting error when loading a non-Spicepod file.
    • Fixes a potential "zip slip" security issue.
    • Fixes an issue where the AI engine may not gracefully shutdown.
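The hex-encoded value support above amounts to base-16 decoding of the incoming value. A sketch, assuming the decoding behaves like Python's int(value, 16), which accepts values with or without a 0x prefix:

```python
def decode_hex_value(raw: str) -> int:
    """Decode a hex-encoded time or measurement, with or without 0x."""
    return int(raw, 16)


print(decode_hex_value("0x10"))  # 16
print(decode_hex_value("61a0b2c0"))
```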
    Source code(tar.gz)
    Source code(zip)
    spiced_darwin_amd64.tar.gz(37.17 MB)
    spiced_darwin_arm64.tar.gz(37.63 MB)
    spiced_linux_amd64.tar.gz(37.38 MB)
    spice_darwin_amd64.tar.gz(23.33 MB)
    spice_darwin_arm64.tar.gz(23.71 MB)
    spice_linux_amd64.tar.gz(23.51 MB)
  • v0.4-alpha(Nov 16, 2021)

    We are excited to announce the release of Spice.ai v0.4-alpha! 🏄‍♂️

    Highlights include support for authoring reward functions in a code file, the ability to specify the time of recommendation, and ingestion support for transaction/correlation ids. Authoring reward functions in a code file is a significant improvement to the developer experience over specifying functions inline in the YAML manifest, and we look forward to your feedback on it!

    If you are new to Spice.ai, check out the getting started guide and star spiceai/spiceai on GitHub.

    Highlights in v0.4-alpha

    Upgrade using spice upgrade

    The spice upgrade command was added in the v0.3.1-alpha release, so you can now upgrade from v0.3.1 to v0.4 by simply running spice upgrade in your terminal. Special thanks to community member @Adm28 for contributing this feature!

    Reward Function Files

    In addition to defining reward code inline, it is now possible to author reward code in functions in a separate Python file.

    The reward function file path is defined by the reward_funcs property.

    A function defined in the code file is mapped to an action by authoring its name in the with property of the relevant reward.

    Example:

    training:
      reward_funcs: my_reward.py
      rewards:
        - reward: buy
          with: buy_reward
        - reward: sell
          with: sell_reward
        - reward: hold
          with: hold_reward
    

    Learn more in the documentation: docs.spiceai.org/concepts/rewards/external
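    A my_reward.py matching the manifest above might look like the following sketch. The two-argument signature and the "close" state field are assumptions for illustration; the actual contract is defined in the rewards documentation.

```python
# my_reward.py -- illustrative sketch; the real function signature and
# state fields are defined by the Spice.ai rewards documentation.

def buy_reward(prev_state, new_state) -> float:
    # Reward buying before a price rise (hypothetical "close" field).
    return new_state.close - prev_state.close


def sell_reward(prev_state, new_state) -> float:
    # Reward selling before a price drop.
    return prev_state.close - new_state.close


def hold_reward(prev_state, new_state) -> float:
    # Holding is neutral in this sketch.
    return 0.0
```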

    Time Categories

    Spice.ai can now learn from cyclical patterns, such as daily, weekly, or monthly cycles.

    To enable automatic cyclical field generation from the observation time, specify one or more time categories in the pod manifest, such as a month or weekday in the time section.

    For example, by specifying month, the Spice.ai engine automatically creates a field in the AI engine data stream called time_month_{month}, with the value calculated from the month to which that timestamp belongs.

    Example:

    time:
      categories:
        - month
        - dayofweek
    

    Supported category values are: month dayofmonth dayofweek hour

    Learn more in the documentation: docs.spiceai.org/reference/pod/#time
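    The cyclical field generation described above can be sketched as one 0/1 field per category value. The field names follow the time_month_{month} pattern described, but the exact encoding is an assumption:

```python
from datetime import datetime, timezone


def time_category_fields(unix_time: int) -> dict:
    """Generate illustrative time_month_* and time_dayofweek_* fields."""
    t = datetime.fromtimestamp(unix_time, tz=timezone.utc)
    fields = {}
    for month in range(1, 13):
        fields[f"time_month_{month}"] = 1 if t.month == month else 0
    for day in range(7):  # 0 = Monday in Python's weekday()
        fields[f"time_dayofweek_{day}"] = 1 if t.weekday() == day else 0
    return fields
```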

    Get recommendation for a specific time

    It is now possible to specify the time of recommendations fetched from the /recommendation API.

    Valid times are from pod epoch_time to epoch_time + period.

    Previously the API only supported recommendations based on the time of the last ingested observation.

    Requests are made in the following format: GET http://localhost:8000/api/v0.1/pods/{pod}/recommendation?time={unix_timestamp}

    An example for quickstarts/trader:

    GET http://localhost:8000/api/v0.1/pods/trader/recommendation?time=1605729600

    Specifying {unix_timestamp} as 0 will return a recommendation based on the latest data. An invalid {unix_timestamp} will return a result that has the valid time range in the error message:

    {
      "response": {
        "result": "invalid_recommendation_time",
        "message": "The time specified (1610060201) is outside of the allowed range: (1610057600, 1610060200)",
        "error": true
      }
    }
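    The valid-time rule above can be sketched as follows. The function name is hypothetical; the error wording mirrors the example response:

```python
def validate_recommendation_time(unix_time: int, epoch_time: int, period: int):
    """Return "latest" for 0, the time itself when in range, else raise."""
    if unix_time == 0:
        return "latest"
    if not (epoch_time <= unix_time <= epoch_time + period):
        raise ValueError(
            f"The time specified ({unix_time}) is outside of the allowed "
            f"range: ({epoch_time}, {epoch_time + period})"
        )
    return unix_time
```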
    

    New in this release

    • Adds time categories configuration to the pod manifest to enable learning from cyclical patterns in data - e.g. hour, day of week, day of month, and month
    • Adds support for defining reward functions in a rewards functions code file.
    • Adds the ability to specify recommendation time making it possible to now see which action Spice.ai recommends at any time during the pod period.
    • Adds support for ingestion of transaction/correlation identifiers (e.g. order_id, trace_id) in the pod manifest.
    • Adds validation for invalid dataspace names in the pod manifest.
    • Adds the ability to resize columns to the dashboard observation data grid.
    • Updates to TensorFlow 2.7 and Keras 2.7
    • Fixes a bug where data processors were using data connector params
    • Fixes a dashboard issue in the pod observations data grid where a column might not be shown.
    • Fixes a crash on pod load if the training section is not included in the manifest.
    • Fixes an issue where data manager stats errors were incorrectly being printed to console.
    • Fixes an issue where selectors may not match due to surrounding whitespace.
    Source code(tar.gz)
    Source code(zip)
    spiced_darwin_amd64.tar.gz(37.11 MB)
    spiced_darwin_arm64.tar.gz(37.56 MB)
    spiced_linux_amd64.tar.gz(37.31 MB)
    spice_darwin_amd64.tar.gz(23.10 MB)
    spice_darwin_arm64.tar.gz(23.45 MB)
    spice_linux_amd64.tar.gz(23.28 MB)
  • v0.3.1-alpha(Nov 2, 2021)

    We are excited to announce the release of Spice.ai v0.3.1-alpha! 🎃

    This point release focuses on fixes and improvements to v0.3-alpha. Highlights include the ability to specify both seed and runtime data, to select custom named fields for time and tags, a new spice upgrade command and several bug fixes.

    A special acknowledgment to @Adm28, who added the new spice upgrade command, which enables the CLI to self-update and, in turn, auto-update the runtime.

    Highlights in v0.3.1-alpha

    Upgrade command

    The CLI can now be updated using the new spice upgrade command. This command will check for, download, and install the latest Spice.ai CLI release, which will become active on its next run.

    When run, the CLI will check for the matching version of the Spice.ai runtime, and will automatically download and install it as necessary.

    The version of both the Spice.ai CLI and runtime can be checked with the spice version CLI command.

    Seed data

    When working with streaming data sources, like market prices, it's often also useful to seed the dataspace with historical data. Spice.ai enables this with the new seed_data node in the dataspace configuration. The syntax is exactly the same as the data syntax. For example:

    dataspaces:
      - from: coinbase
        name: btcusd
        seed_data:
          connector:
            name: file
            params:
              path: path/to/seed/data.csv
          processor:
            name: csv
        data:
          connector:
            name: coinbase
            params:
              product_ids: BTC-USD
          processor:
            name: json
    

    The seed data is fetched first, before the runtime data is initialized. Both sets of connectors and processors use the dataspace-scoped measurements, categories, and tags for processing, and both data sources are merged into the pod-scoped observation timeline.
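    The merge of seed and runtime observations into one timeline can be sketched as a time-ordered union (the observation shape here is an assumption):

```python
def merge_observations(seed, runtime):
    """Combine seed and runtime observations into one time-ordered list."""
    return sorted(seed + runtime, key=lambda obs: obs["time"])
```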

    Time field selectors

    Before v0.3.1-alpha, data was required to include a specific time field. In v0.3.1-alpha, the JSON and CSV data processors now support the ability to select a specific field to populate the time field. An example selector to use the created_at column for time is:

    data:
       processor:
          name: csv
          params:
            time_selector: created_at
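    The time_selector behavior can be sketched for CSV as copying the selected column into the observation's time field. This is a simplification, not the real processor's code:

```python
import csv
import io


def process_csv(text: str, time_selector: str = "time"):
    """Parse CSV rows, mapping the selected column to the time field."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        row["time"] = row.pop(time_selector)
        rows.append(row)
    return rows
```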
    

    Tag field selectors

    Before v0.3.1-alpha, tags were required to be placed in a _tags field. In v0.3.1-alpha, any field can now be selected to populate tags. Tags are pod-unique string values, and the union of all selected fields will make up the resulting tag list. For example:

    dataspace:
      from: twitter
      name: tweets
      tags:
        selectors:
          - tags
          - author_id
        values:
          - spiceaihq
          - spicy
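    A sketch of the tag-union rule above: the resulting tag set is the union of values found in every selected field (the field shapes are assumptions):

```python
def collect_tags(observation: dict, selectors) -> set:
    """Union the string values of all selected fields into one tag set."""
    tags = set()
    for field in selectors:
        value = observation.get(field)
        if value is None:
            continue
        if isinstance(value, str):
            tags.add(value)
        else:
            tags.update(value)  # assume an iterable of strings
    return tags
```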
    

    New in this release

    • Adds a new spice upgrade command for self-upgrade of the Spice.ai CLI.
    • Adds a new seed_data node to the dataspace configuration, enabling the dataspace to be seeded with an alternative source of data.
    • Adds the ability to select a custom time field in JSON and CSV data processors with the time_selector parameter.
    • Adds the ability to select custom tag fields in the dataspace configuration with selectors list.
    • Adds error reporting for AI engine crashes, where previously it would fail silently.
    • Fixes the dashboard pods list from "jumping" around due to being unsorted.
    • Fixes rare cases where categorical data might be sent to the AI engine in the wrong format.
    Source code(tar.gz)
    Source code(zip)
    spiced_darwin_amd64.tar.gz(37.29 MB)
    spiced_darwin_arm64.tar.gz(37.76 MB)
    spiced_linux_amd64.tar.gz(37.50 MB)
    spice_darwin_amd64.tar.gz(23.07 MB)
    spice_darwin_arm64.tar.gz(23.45 MB)
    spice_linux_amd64.tar.gz(23.25 MB)
  • v0.3-alpha(Oct 26, 2021)

    Spice.ai v0.3-alpha

    We are excited to announce the release of Spice.ai v0.3-alpha! 🎉

    This release adds support for ingestion, automatic encoding, and training of categorical data, enabling more use-cases and datasets beyond just numerical measurements. For example, perhaps you want to learn from data that includes a category of t-shirt sizes, with discrete values, such as small, medium, and large. The v0.3 engine now supports this and automatically encodes the categorical string values into numerical values that the AI engine can use. Also included is a preview of data visualizations in the dashboard, which is helpful for developers as they author Spicepods and dataspaces.

    A special acknowledgment to @sboorlagadda, who submitted the first Spice.ai feature contribution from the community ever! He added the ability to list pods from the CLI with the new spice pods list command. Thank you, @sboorlagadda!!!

    If you are new to Spice.ai, check out the getting started guide and star spiceai/spiceai on GitHub.

    Highlights in v0.3-alpha

    Categorical data

    In v0.1, the runtime and AI engine only supported ingesting numerical data. In v0.2, tagged data was accepted and automatically encoded into fields available for learning. In this release, v0.3, categorical data can now also be ingested and automatically encoded into fields available for learning. This is a breaking change: the manifest format now separates numerical measurements from categorical data.

    Pre-v0.3, the manifest author specified numerical data using the fields node.

    In v0.3, numerical data is now specified under measurements and categorical data under categories. E.g.

    dataspaces:
      - from: event
        name: stream
        measurements:
          - name: duration
            selector: length_of_time
            fill: none
          - name: guest_count
            selector: num_guests
            fill: none
        categories:
          - name: event_type
            values:
              - dinner
              - party
          - name: target_audience
            values:
              - employees
              - investors
        tags:
          - tagA
          - tagB
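    Since v0.3 uses one-hot encoding for categories, the transformation of the event_type category above can be sketched as follows (the generated field naming is an assumption):

```python
def one_hot(name: str, values, observed: str) -> dict:
    """One-hot encode a categorical value into 0/1 fields."""
    return {f"{name}_{v}": (1 if v == observed else 0) for v in values}


print(one_hot("event_type", ["dinner", "party"], "party"))
# {'event_type_dinner': 0, 'event_type_party': 1}
```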
    

    Data visualizations preview

    A top piece of community feedback was the ability to visualize data. After first running Spice.ai, we'd often hear from developers, "how do I see the data?". A preview of data visualizations is now included in the dashboard on the pod page.

    Listing pods

    Once the Spice.ai runtime has started, you can view the loaded pods on the dashboard and fetch them via the API at localhost:8000/api/v0.1/pods. To make it even easier, we've added the ability to list them via the CLI with the new spice pods list command, which shows the list of pods and their manifest paths.

    Coinbase data connector

    A new Coinbase data connector is included in v0.3, enabling the streaming of live market ticker prices from Coinbase Pro. Enable it by specifying the coinbase data connector and providing a list of Coinbase Pro product ids, e.g. "BTC-USD". A new sample demonstrating this is also available, with its associated Spicepod available from the spicerack.org registry. Get it with spice add samples/trader.

    Tweet Recommendation Quickstart

    A new Tweet Recommendation Quickstart has been added. Given past tweet activity and metrics of a given account, this app can recommend when to tweet, comment, or retweet to maximize like count, interaction rates, and outreach of the given Twitter account.

    Trader Sample

    A new Trader Sample has been added in addition to the Trader Quickstart. The sample uses the new Coinbase data connector to stream live Coinbase Pro ticker data for learning.

    New in this release

    • Adds support for ingesting, encoding, and training on categorical data. v0.3 uses one-hot-encoding.
    • Changes Spicepod manifest fields node to measurements and add the categories node.
    • Adds the ability to select a field from the source data and map it to a different field name in the dataspace. See an example for measurements in docs.
    • Adds support for JSON content type when fetching from the /observations API. Previously, only CSV was supported.
    • Adds a preview version of data visualizations to the dashboard. The grid has several limitations, one of which is it currently cannot be resized.
    • Adds the ability to select which learning algorithm to use via the CLI, the API, and the Spicepod manifest. Possible choices are currently "vpg" (Vanilla Policy Gradient) and "dql" (Deep Q-Learning). Shout out to @corentin-pro, who added this feature on his second day on the team!
    • Adds the ability to list loaded pods with the CLI command spice pods list.
    • Adds a new coinbase data connector for Coinbase Pro market prices.
    • Adds a new Tweet Recommendation Quickstart.
    • Adds a new Trader Sample.
    • Fixes bug where the /observations endpoint was not providing fully qualified field names.
    • Fixes issue where debugging messages were printed when using spice add.
    Source code(tar.gz)
    Source code(zip)
    spiced_darwin_amd64.tar.gz(37.27 MB)
    spiced_darwin_arm64.tar.gz(37.73 MB)
    spiced_linux_amd64.tar.gz(37.48 MB)
    spice_darwin_amd64.tar.gz(23.06 MB)
    spice_darwin_arm64.tar.gz(23.40 MB)
    spice_linux_amd64.tar.gz(23.24 MB)
  • v0.2.1-alpha(Oct 12, 2021)

    Spice.ai v0.2.1-alpha

    Announcing the release of Spice.ai v0.2.1-alpha! 🚚

    This point release focuses on fixes and improvements to v0.2-alpha. Highlights include the ability to specify how missing data should be treated and a new production mode for spiced.

    This release supports the ability to specify how the runtime should treat missing data. Previous releases filled missing data with the last value (or initial value) in the series. While this makes sense for some data, e.g., market prices of a stock or cryptocurrency, it does not make sense for discrete data, e.g., ratings. In v0.2.1, developers can now add the fill parameter on a dataspace field to specify the behavior. This release supports fill types previous and none. The default is previous.

    Example in a manifest:

    dataspaces:
      - from: twitter
        name: tweets
        fields:
          - name: likes
            fill: none # The new fill parameter
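    The two fill behaviors can be sketched over a series with gaps, where None stands in for a missing value. This is an illustration, not the runtime's code:

```python
def fill_series(values, fill="previous"):
    """Apply the fill behavior: "previous" carries the last value forward,
    "none" leaves gaps untouched."""
    if fill == "none":
        return list(values)
    out, last = [], None
    for v in values:
        if v is None:
            v = last  # previous: repeat the last seen value
        out.append(v)
        last = v
    return out


print(fill_series([10, None, 12]))          # [10, 10, 12]
print(fill_series([10, None, 12], "none"))  # [10, None, 12]
```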
    

    spiced now defaults to a new production mode when run standalone (not via the CLI), with development mode now explicitly set with the --development flag. Production mode does not activate development time features, such as the Spicepod file watcher. The CLI always runs spiced in development mode as it is not expected to be used in production deployments.

    New in this release

    • Adds a fill parameter to dataspace fields to specify how missing values should be treated.
    • Adds the ability to specify the fill behavior of empty values in a dataspace.
    • Simplifies releases with a single spiceai release instead of separate spice and spiced releases.
    • Adds an explicit development mode to spiced. Production mode does not activate the file watcher.
    • Fixes a bug when the pod parameter epoch_time was not set which would cause data not to be sent to the AI engine.
    • Fixes a bug where the User-Agent was not set correctly from CLI calls to api.spicerack.org
    Source code(tar.gz)
    Source code(zip)
    spiced_darwin_amd64.tar.gz(36.50 MB)
    spiced_darwin_arm64.tar.gz(36.93 MB)
    spiced_linux_amd64.tar.gz(36.69 MB)
    spice_darwin_amd64.tar.gz(23.53 MB)
    spice_darwin_arm64.tar.gz(23.91 MB)
    spice_linux_amd64.tar.gz(23.71 MB)
  • v0.2-alpha-spiced(Oct 4, 2021)

    Spice.ai v0.2-alpha

    We are excited to announce the release of Spice.ai v0.2-alpha! 🎉

    This release is the first major version since the initial v0.1 announcement and includes significant improvements based upon community and early customer feedback. If you are new to Spice.ai, check out the getting started guide and star spiceai/spiceai on GitHub.

    Highlights in v0.2-alpha

    Tagged data

    In the first release, the runtime and AI engine could only ingest numerical data. In v0.2, tagged data is accepted and automatically encoded into fields available for learning. For example, it's now possible to include a "liked" tag when using tweet data, automatically encoded to a 0/1 field for training. Both CSV and the new JSON observation formats support tags. The v0.3 release will add additional support for sets of categorical data.

    Streaming data

    Previously, the runtime would trigger each data connector to fetch on a 15-second interval. In v0.2, we upgraded the interface for data connectors to a push/streaming model, which enables continuously streaming data into the environment and AI engine.

    Interpreted data

    Spice.ai works together with your application code and works best when it's provided continuous feedback. This feedback could come from the application itself, for example, ratings, likes, thumbs-up/down, or profit from trades, or from external expertise. The interpretations API was introduced in v0.1.1, and v0.2 adds AI engine support, providing a way to give meaning, or an interpretation, to ranges of time-series data, which are then available within reward functions. For example, a time range of stock prices could be a "good time to buy," or perhaps Tuesday mornings are a "good time to tweet." An application or expert can teach the AI engine this through interpretations, providing a shortcut to its learning.

    New in this release

    • Adds core runtime and AI engine tagged data support
    • Adds tagged data support to the CSV processor
    • Adds streaming data support to the engine and data connectors
    • Adds a new JSON data processor for ingesting JSON data
    • Adds a new Twitter data connector with JSON processor support
    • Adds a new /pods/<pod>/dataspaces API
    • Adds support for using interpretations in reward functions. Learn more.
    • Adds support for downloading zipped pods from the spicerack.org registry
    • Adds support for adding data along with the pod manifest when adding a pod from the spicerack.org registry
    • Adds a basic /pods/<pod>/diagnostics API
    • Fixes pod period, interval, and granularity not being correctly set when trying to use a "d" format
    • Fixes the color scheme of action counts in the dashboard to improve readability
    Source code(tar.gz)
    Source code(zip)
    spiced_darwin_amd64.tar.gz(36.09 MB)
    spiced_darwin_arm64.tar.gz(36.51 MB)
    spiced_linux_amd64.tar.gz(36.29 MB)
  • v0.1.1-alpha-spiced(Sep 21, 2021)

    Spice.ai v0.1.1-alpha

    Announcing the release of Spice.ai v0.1.1-alpha! 🙌

    This is the first point release following the public launch of v0.1-alpha and is focused on fixes and improvements to v0.1-alpha before the bigger v0.2-alpha release.

    Highlights include initial support for interpretations and the addition of a new JSON Data Processor, which enables observations to be posted as JSON to a new Dataspaces API. The ability to post observations directly to the dataspace also now makes Data Connectors optional.

    Interpretations will enable end-users and external systems to participate in training by providing expert interpretation of the data, ultimately creating smarter pods. v0.1.1-alpha includes the ability to add and get interpretations by API and through import/export of Spicepods. Reward function authors will be able to use interpretations in reward functions from the v0.2-alpha release.

    Previously, observations could only be added in CSV format. JSON is now supported by calling the new dataspace observations API, which leverages the new JSON processor in the data-components-contrib repository. The JSON processor defaults to parsing the Spice.ai observation format and is extensible to other schemas.

    The dashboard has also been improved to show action counts during a training run, making it easier to visualize the learning process.

    New in this release

    • Adds visualization of action counts during a training run in the dashboard.
    • Adds a new interpretations API, along with support for importing and exporting interpretations to pods. Learn more.
    • Adds a new API for ingesting dataspace observations. Learn more.
    • Adds an official DockerHub repository for spiceai/spiceai.
    • Fixes bug where the dashboard would not load on browser refresh.
    Source code(tar.gz)
    Source code(zip)
    spiced_darwin_amd64.tar.gz(35.75 MB)
    spiced_linux_amd64.tar.gz(35.94 MB)
  • v0.1.1-alpha-rc-spiced(Sep 9, 2021)

  • v0.1.1-alpha-rc-spice(Sep 9, 2021)

  • v0.1.0-alpha.5-spiced(Sep 7, 2021)

    Spice.ai v0.1.0-alpha.5

    Announcing the release of Spice.ai v0.1.0-alpha.5! 🎉

    This release focused on preparation for the public launch of the project, including more comprehensive and easier-to-understand documentation, quickstarts and samples.

    Data Connectors and Data Processors have now been moved to their own repository, spiceai/data-components-contrib.

    To improve the developer experience, the following breaking changes have been made:

    • The pods directory .spice/pods (and thus manifests) and the config file .spice/config.yaml have been moved from the ./spice directory to the app root ./. This allows for the .spice directory to be added to the .gitignore and for the manifest changes to be easily tracked in the project.
    • Flights have been renamed to more understandable Training Runs in user interfaces.

    New in this release

    • Adds Open source acknowledgements to the dashboard
    • Adds improved error messages for several scenarios
    • Updates all Quickstarts and Samples to be clearer, easier to understand and better show the value of Spice.ai. The LogPruner sample has also been renamed ServerOps
    • Updates the dashboard to show a message when no pods have been trained
    • Updates all documentation links to docs.spiceai.org
    • Updates to use Python 3.8.12
    • Fixes bug where the dashboards showed undefined episode number
    • Fixes issue where the manifest.json was not being served to the React app
    • Fixes the config.yaml being written when not required
    • Removes the ability to load a custom dashboard - this may come back in a future release

    Breaking changes

    • Changes .spice/pods is now located at ./spicepods
    • Changes .spice/config.yaml is now located at .spice.config.yaml
    Source code(tar.gz)
    Source code(zip)
    spiced_darwin_amd64.tar.gz(34.91 MB)
    spiced_darwin_arm64.tar.gz(35.29 MB)
    spiced_linux_amd64.tar.gz(35.09 MB)
  • v0.1.0-alpha.5-spice(Sep 7, 2021)

    Spice.ai v0.1.0-alpha.5

    Announcing the release of Spice.ai v0.1.0-alpha.5! 🎉

    This release focused on preparation for the public launch of the project, including more comprehensive and easier-to-understand documentation, quickstarts and samples.

    Data Connectors and Data Processors have now been moved to their own repository, spiceai/data-components-contrib.

    To improve the developer experience, the following breaking changes have been made:

    • The pods directory .spice/pods (and thus manifests) and the config file .spice/config.yaml have been moved from the ./spice directory to the app root ./. This allows for the .spice directory to be added to the .gitignore and for the manifest changes to be easily tracked in the project.
    • Flights have been renamed to more understandable Training Runs in user interfaces.

    New in this release

    • Adds Open source acknowledgements to the dashboard
    • Adds improved error messages for several scenarios
    • Updates all Quickstarts and Samples to be clearer, easier to understand and better show the value of Spice.ai. The LogPruner sample has also been renamed ServerOps
    • Updates the dashboard to show a message when no pods have been trained
    • Updates all documentation links to docs.spiceai.org
    • Updates to use Python 3.8.12
    • Fixes bug where the dashboards showed undefined episode number
    • Fixes issue where the manifest.json was not being served to the React app
    • Fixes the config.yaml being written when not required
    • Removes the ability to load a custom dashboard - this may come back in a future release

    Breaking changes

    • Changes .spice/pods is now located at ./spicepods
    • Changes .spice/config.yaml is now located at .spice.config.yaml
    Source code(tar.gz)
    Source code(zip)
    spice_darwin_amd64.tar.gz(20.88 MB)
    spice_darwin_arm64.tar.gz(21.24 MB)
    spice_linux_amd64.tar.gz(21.06 MB)
  • v0.1.0-alpha-spiced(Sep 7, 2021)

  • v0.1.0-alpha-spice(Sep 7, 2021)

  • v0.1.0-alpha.4-spiced(Aug 31, 2021)

    Spice.ai v0.1.0-alpha.4

    Announcing the release of Spice.ai v0.1.0-alpha.4! 🎉

    We have a project name update. The project will now be referred to as "Spice.ai" instead of "Spice AI" and the project website will be located at spiceai.org.

    This release now uses the new spicerack.org AI package registry instead of fetching packages directly from GitHub.

    Added support for importing and exporting Spice.ai pods with spice import and spice export commands.

    The CLI has been streamlined, removing the pod command:

    • pod add changes from spice pod add <pod path> to just spice add <pod path>
    • pod train changes from spice pod train <pod name> to just spice train <pod name>

    We've also updated the names of some concepts:

    • "DataSources" are now "Dataspaces"
    • "Inference" is now "Recommendation"

    New in this release

    • Adds a new Gardener to intelligently decide on the best time to water a simulated garden
    • Adds support for importing and exporting Spice.ai pods with spice import and spice export commands
    • Adds a complete end-to-end test suite
    • Adds installing by friendly URL curl https://install.spiceai.org | /bin/bash
    • Adds the spice binary to PATH automatically by shell config (E.g. .bashrc .zshrc)
    • Adds support for targeting hosting contexts (docker or metal) specifically with a --context command line flag
    • Removes the model downloader. This will return with better support in a later version
    • Updates Trader quickstart with demo Node.js application to better demonstrate its use
    • Updates LogPruner quickstart with demo PowerShell Core script to better demonstrate its use
    • Updates Tensorflow from 2.5.0 to 2.5.1
    • Fixes potential mismatch of CLI and runtime by only automatically upgrading to the same version
    • Fixes issue with .spice/config.yml creation in Docker due to incorrect permissions
    • Fixes dashboard title from React App to Spice.ai

    Breaking changes

    • Changes datasources section in the pod manifest to dataspaces
    • Changes /api/v0.1/pods/<pod>/inference API to /api/v0.1/pods/<pod>/recommendation
    Source code(tar.gz)
    Source code(zip)
    spiced_darwin_amd64.tar.gz(35.16 MB)
    spiced_darwin_arm64.tar.gz(34.87 MB)
    spiced_linux_amd64.tar.gz(35.31 MB)
  • v0.1.0-alpha.4-spice(Aug 31, 2021)

    Spice.ai v0.1.0-alpha.4

    Announcing the release of Spice.ai v0.1.0-alpha.4! 🎉

    We have a project name update. The project will now be referred to as "Spice.ai" instead of "Spice AI" and the project website will be located at spiceai.org.

    This release now uses the new spicerack.org AI package registry instead of fetching packages directly from GitHub.

    The CLI has been streamlined, removing the pod command:

    • pod add changes from spice pod add <pod path> to just spice add <pod path>
    • pod train changes from spice pod train <pod name> to just spice train <pod name>

    We've also updated the names of some concepts:

    • "DataSources" are now "Dataspaces"
    • "Inference" is now "Recommendation"

    New in this release

    • Adds a new Gardener to intelligently decide on the best time to water a simulated garden
    • Adds a complete end-to-end test suite
    • Adds installing by friendly URL curl https://install.spiceai.org | /bin/bash
    • Adds the spice binary to PATH automatically by shell config (E.g. .bashrc .zshrc)
    • Adds support for targeting hosting contexts (docker or metal) specifically with a --context command line flag
    • Removes the model downloader. This will return with better support in a later version
    • Updates Trader quickstart with demo Node.js application to better demonstrate its use
    • Updates LogPruner quickstart with demo PowerShell Core script to better demonstrate its use
    • Updates Tensorflow from 2.5.0 to 2.5.1
    • Fixes potential mismatch of CLI and runtime by only automatically upgrading to the same version
    • Fixes issue with .spice/config.yml creation in Docker due to incorrect permissions
    • Fixes dashboard title from React App to Spice.ai

    Breaking changes

    • Changes datasources section in the pod manifest to dataspaces
    • Changes /api/v0.1/pods/<pod>/inference API to /api/v0.1/pods/<pod>/recommendation
    Source code(tar.gz)
    Source code(zip)
    spice_darwin_amd64.tar.gz(21.20 MB)
    spice_darwin_arm64.tar.gz(20.92 MB)
    spice_linux_amd64.tar.gz(21.34 MB)
  • v0.1.0-alpha.3-spiced(Aug 23, 2021)

    Spice.ai v0.1.0-alpha.3

    Announcing the release of Spice.ai v0.1.0-alpha.3! 🎉

    New in this release

    • Adds a new Log Pruner sample and quickstart to intelligently decide on the best time to prune logs or run maintenance on a server
    • Adds a new Kubernetes sample that shows how to get Spice.ai running in your Kubernetes cluster
    • Adds significant improvements to data handling, including the ability to compose data "connectors" and "processors". This changes the way data is specified in the pod manifest and is a breaking change (see below)
    • Adds a completely rewritten dashboard based on React, with improved styling and useability, which will serve as a foundation for future improvements
    • Adds a rewrite of the communication backend using gRPC instead of HTTP for improved performance and as a foundation for future improvements
    • Adds the ability to fallback to $GITHUB_TOKEN where $SPICE_GH_TOKEN was not specified during Private Preview
    • Fixes an issue where deleting a manifest file was not handled gracefully by the Spice.ai runtime
    • Fixes an issue where debug logs were printed to the console
    • Fixes an issue when running in Docker where the http_port config value was ignored
    • Fixes an issue where the backend engine process would not terminate if the main spiced process crashes

    Notes

    New in this release is the ability to compose data "connectors" and data "processors", decoupling the fetching of data from its processing. This makes it possible to use a single data processor, such as the "csv" processor, with different input data connectors, such as the "file", "blob", or "database" connectors.

    This introduces a breaking change to the way data connectors are specified in the pod manifest, splitting the definition into a connector and a processor. Previously defined as:

    - datasource:
      from: coinbase
      name: btcusd
      # Old format
      connector:
        type: csv
        params:
          path: data/btcusd.csv
    

    Has been changed to:

    - datasource:
      from: coinbase
      name: btcusd
      # New format
      data:
        connector:
          name: file
          params:
            path: ../../test/assets/data/csv/COINBASE_BTCUSD, 30.csv
        processor:
          name: csv
    
    Source code(tar.gz)
    Source code(zip)
    spiced_darwin_amd64.tar.gz(34.97 MB)
    spiced_darwin_arm64.tar.gz(34.87 MB)
    spiced_linux_amd64.tar.gz(35.12 MB)
  • v0.1.0-alpha.3-spice(Aug 23, 2021)

    Spice.ai v0.1.0-alpha.3

    Announcing the release of Spice.ai v0.1.0-alpha.3! 🎉

    New in this release

    • Adds a new Log Pruner sample and quickstart to intelligently decide on the best time to prune logs or run maintenance on a server
    • Adds a new Kubernetes sample that shows how to get Spice.ai running in your Kubernetes cluster
    • Adds significant improvements to data handling, including the ability to compose data "connectors" and "processors". This changes the way data is specified in the pod manifest and is a breaking change (see below)
    • Adds a completely rewritten dashboard based on React, with improved styling and usability, which will serve as a foundation for future improvements
    • Adds a rewritten communication backend using gRPC instead of HTTP for improved performance and as a foundation for future improvements
    • Adds the ability to fall back to $GITHUB_TOKEN when $SPICE_GH_TOKEN is not specified during Private Preview
    • Fixes an issue where deleting a manifest file was not handled gracefully by the Spice.ai runtime
    • Fixes an issue where debug logs were printed to the console
    • Fixes an issue when running in Docker where the http_port config value was ignored
    • Fixes an issue where the backend engine process would not terminate if the main spiced process crashes

    Notes

    New in this release is the ability to compose data "connectors" and data "processors", decoupling the fetching of data from its processing. This makes it possible to use a single data processor, such as the "csv" processor, with different input data connectors, such as the "file", "blob", or "database" connectors.

    This introduces a breaking change to the way data connectors are specified in the pod manifest, splitting the definition into a connector and a processor. Previously defined as:

    - datasource:
      from: coinbase
      name: btcusd
      # Old format
      connector:
        type: csv
        params:
          path: data/btcusd.csv
    

    Has been changed to:

    - datasource:
      from: coinbase
      name: btcusd
      # New format
      data:
        connector:
          name: file
          params:
            path: ../../test/assets/data/csv/COINBASE_BTCUSD, 30.csv
        processor:
          name: csv
    
    Source code(tar.gz)
    Source code(zip)
    spice_darwin_amd64.tar.gz(21.10 MB)
    spice_darwin_arm64.tar.gz(20.92 MB)
    spice_linux_amd64.tar.gz(21.24 MB)
  • v0.1.0-alpha.2-spiced(Aug 13, 2021)
