Golang bigcache with clustering as a library.

Overview

clusteredBigCache


This is a library based on bigcache with some modifications to support

  • clustering and
  • individual item expiration

Bigcache is an excellent piece of software, but the fact that items could only expire based on a single predefined value was not very appealing. Bigcache had to be modified to support individual expiration of items using a single timer: you specify a time-to-live value as you add each item to the cache. Separately, running two or more instances of an application that requires some level of caching would normally mean defaulting to memcached or redis, external applications that add to the mix of services required for your application to run.

With clusteredBigCache there is no need to run an external application to provide caching for multiple instances of your application. The library handles caching as well as clustering the caches between multiple instances of your application, giving you simple library APIs (plain function calls) to store and get your values.

With clusteredBigCache, when you store a value in one instance of your application, every other instance (or any other application, for that matter, that you configure to form/join your "cluster") will see that exact same value.

Installing

Using go get

$ go get github.com/oaStuff/clusteredBigCache

Sample 1

This is the application responsible for storing data into the cache.

package main

import (
    "fmt"
    "bufio"
    "os"
    "github.com/oaStuff/clusteredBigCache/Cluster"
    "strings"
    "time"
)

//
//main function
//
func main() {
    fmt.Println("starting...")
    cache := clusteredBigCache.New(clusteredBigCache.DefaultClusterConfig(), nil)
    count := 1
    cache.Start()

    reader := bufio.NewReader(os.Stdin)
    var data string
    for strings.ToLower(data) != "exit" {
        fmt.Print("enter data : ")
        data, _ = reader.ReadString('\n')
        data = strings.TrimSpace(data)
        err := cache.Put(fmt.Sprintf("key_%d", count), []byte(data), time.Minute*60)
        if err != nil {
            panic(err)
        }
        fmt.Printf("'%s' stored under key 'key_%d'\n", data, count)
        count++
    }
}
Explanation:

The above application captures data from the keyboard and stores it in clusteredBigCache under keys 'key_1', 'key_2'...'key_n'. As the user types and presses the enter key, the data is stored in the cache.

cache := clusteredBigCache.New(clusteredBigCache.DefaultClusterConfig(), nil). This statement creates the cache using the default configuration, which has default values of LocalPort = 9911 and Join = false, amongst others. If you intend to use this library in applications that will run on the same machine, you will have to give each instance a unique value for LocalPort.

cache.Start() This must be called before using any other method on this cache.

err := cache.Put(fmt.Sprintf("key_%d", count), []byte(data), time.Minute * 60). You set values in the cache by giving it a key, the data as a byte slice ([]byte) and the expiration or time to live (ttl) for that key/value within the cache. When the key/value pair reaches its expiration time, it is removed automatically.

Sample 2

This is the application responsible for reading data out of the cache. It can be run on the same machine or on a different machine on the network.

package main

import (
    "github.com/oaStuff/clusteredBigCache/Cluster"
    "bufio"
    "os"
    "strings"
    "fmt"
    "time"
)

//
//main function
//
func main() {
    config := clusteredBigCache.DefaultClusterConfig()
    config.LocalPort = 8888
    config.Join = true
    config.JoinIp = "127.0.0.1:9911"
    cache := clusteredBigCache.New(config, nil)
    err := cache.Start()
    if err != nil {
        panic(err)
    }
    
    reader := bufio.NewReader(os.Stdin)
    var data string
    for strings.ToLower(data) != "exit" {
        fmt.Print("enter key : ")
        data, _ = reader.ReadString('\n')
        data = strings.TrimSpace(data)
        value, err := cache.Get(data, time.Millisecond * 160)
        if err != nil {
            fmt.Println(err)
            continue
        }
        fmt.Printf("you got '%s' from the cache\n", value)
    }
}
Explanation:

The above application reads a string from the keyboard, which should be a key for a value in clusteredBigCache. If the user enters one of the keys shown in Sample 1 above ('key_1', 'key_2'...'key_n'), the corresponding value will be returned.

    config := clusteredBigCache.DefaultClusterConfig()
    config.LocalPort = 8888
    config.Join = true
    config.JoinIp = "127.0.0.1:9911"
    cache := clusteredBigCache.New(config, nil)
    err := cache.Start()

The above uses the default configuration and modifies only what it needs. config.LocalPort = 8888 has to be changed since this application will run on the same machine as the Sample 1 application; this avoids 'port already in use' errors.

config.Join = true. For an application to join another application (or applications) using clusteredBigCache, it must set config.Join to true and set config.JoinIp to the IP address and port of one of the systems using clusteredBigCache, e.g. config.JoinIp = "127.0.0.1:9911". This example says that this application wants to join another application using clusteredBigCache at IP address 127.0.0.1 and port number 9911.

cache := clusteredBigCache.New(config, nil) creates the cache and cache.Start() must be called to start everything running.

NB

After cache.Start() is called, the library tries to connect to the specified IP address on the specified port. When successfully connected, it creates a cluster of applications using clusteredBigCache as a single cache, i.e. every connected application will see every value any application sets in the cache.

Sample way to parse config in an app

    join := flag.String("join", "", "ipAddr:port number of remote server")
    localPort := flag.Int("port", 6060, "local server port to bind to")

    
    flag.Parse()
    
    config := clusteredBigCache.DefaultClusterConfig()
    if *join != "" {
        config.JoinIp = *join
        config.Join = true
    }
    config.LocalPort = *localPort

Your application can take parameters in any form and use them to configure clusteredBigCache. The above sample caters only for join and port. If you want network connections between machines to be re-established in the event of a disconnection, you will have to set config.ReconnectOnDisconnect = true.

Logging within the library

clusteredBigCache takes a second parameter in its New() function for logging. This parameter expects an implementation of the interface

type AppLogger interface {
    Info(msg string)
    Warn(msg string)
    Critical(msg string)
    Error(msg string)
}

You could easily wrap any logger in a struct, implement these interface methods on that struct and simply delegate calls to the underlying logger, or better still wrap a logger function to provide the interface, as in the example below

type myLogger func(...interface{})

func (log myLogger) Info(msg string) {
	log(msg)
}

func (log myLogger) Warn(msg string) {
	log(msg)
}

func (log myLogger) Error(msg string) {
	log(msg)
}

func (log myLogger) Critical(msg string) {
	log(msg)
}


cache := clusteredBigCache.New(config, myLogger(log.Println))

Using Passive client

Passive clients are nodes in the clusteredBigCache network that do not store any data locally but otherwise function like every other node. To create a passive client you simply call clusteredBigCache.NewPassiveClient("linux_box_100", "localhost:9090", 8885, 0, 0, 0, nil). This will connect to an existing cluster at address localhost:9090 and join the cluster. linux_box_100 is the node's id; it can be an empty string if you want an auto-generated id. Every other function can be performed on the returned object.

Credits

Core cache system from bigcache

Data structures from emirpasic

LICENSE

MIT.

Issues
  • Dependency between items


    This is one feature I found missing in all types of caches. The cached items can't be atomically made dependent on one another. The simple case would be paging:

        GET /somepage/paging-1 --> cache.set("/something/paging-1", content)
        GET /somepage/paging-2 --> cache.set("/something/paging-2", content)
        GET /somepage/paging-3 --> cache.set("/something/paging-3", content)
        POST /somepage         --> cache.invalidate("/something/paging-*")

    The iterator can be used, but if a lot of entries are cached, it really isn't efficient. Some additional logic would be fine, to specify that "/something/paging-1" is dependent on "/something/" and invalidate/call a hook if "/something/" has changed.

    opened by roker 0
  • Data Consistency


    If you write a key at the same time on multiple nodes, the data isn't consistent across the nodes.

    (screenshot showing inconsistent values across nodes omitted)

    edit: the above screenshot was from merging together your examples

    package main
    
    import (
    	"bufio"
    	"flag"
    	"fmt"
    	"os"
    	"strings"
    	"time"
    
    	clusteredBigCache "github.com/oaStuff/clusteredBigCache/Cluster"
    )
    
    type logger struct {
    }
    
    func (l logger) Info(msg string) {
    	//color.Green(msg)
    }
    
    func (l logger) Warn(msg string) {
    	//color.Yellow(msg)
    }
    
    func (l logger) Critical(msg string) {
    	//color.Red(msg)
    }
    
    func (l logger) Error(msg string) {
    	//color.HiRed(msg)
    }
    
    func main() {
    	listen := flag.Int("listen", 9911, "listen on")
    	join := flag.String("join", "", "join who?")
    
    	flag.Parse()
    
    	fmt.Println("starting...")
    
    	c := clusteredBigCache.DefaultClusterConfig()
    	c.PingInterval = 2
    	c.PingTimeout = 1
    	c.PingFailureThreshHold = 3
    	if *join != "" {
    		c.Join = true
    		c.JoinIp = *join
    	}
    	c.LocalPort = *listen
    
    	cache := clusteredBigCache.New(c, &logger{})
    	count := 1
    	cache.Start()
    
    	reader := bufio.NewReader(os.Stdin)
    	var data string
    	for strings.ToLower(data) != "exit" {
    		fmt.Print("enter data : ")
    		data, _ = reader.ReadString('\n')
    		data = strings.TrimSpace(data)
    		if data[0] == '>' {
    			err := cache.Put(fmt.Sprintf("key_%d", count), []byte(data[1:]), time.Minute*60)
    			if err != nil {
    				panic(err)
    			}
    			fmt.Printf("'%s' stored under key 'key_%d'\n", data[1:], count)
    			count++
    		} else {
    			value, err := cache.Get(data, time.Millisecond*160)
    			if err != nil {
    				fmt.Println(err)
    				continue
    			}
    			fmt.Printf("you got '%s' from the cache\n", value)
    		}
    	}
    }
    
    opened by freman 0
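The "additional logic" suggested in the first issue above could be layered on top of any cache as a side index from a dependency prefix to the keys stored under it, so a whole group can be invalidated without iterating the entire cache. A minimal sketch under that assumption; depIndex and its methods are hypothetical illustration, not part of clusteredBigCache:

```go
package main

import (
	"fmt"
	"sync"
)

// depIndex tracks which cached keys depend on which prefix.
type depIndex struct {
	mu   sync.Mutex
	keys map[string]map[string]struct{} // prefix -> set of dependent keys
}

func newDepIndex() *depIndex {
	return &depIndex{keys: make(map[string]map[string]struct{})}
}

// Track records, at Put time, that key depends on prefix.
func (d *depIndex) Track(prefix, key string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.keys[prefix] == nil {
		d.keys[prefix] = make(map[string]struct{})
	}
	d.keys[prefix][key] = struct{}{}
}

// Invalidate returns (and forgets) every key registered under prefix;
// the caller would then delete each one from the real cache.
func (d *depIndex) Invalidate(prefix string) []string {
	d.mu.Lock()
	defer d.mu.Unlock()
	var out []string
	for k := range d.keys[prefix] {
		out = append(out, k)
	}
	delete(d.keys, prefix)
	return out
}

func main() {
	idx := newDepIndex()
	for i := 1; i <= 3; i++ {
		// Register the dependency whenever a page is cached.
		idx.Track("/something/", fmt.Sprintf("/something/paging-%d", i))
	}
	// On POST /somepage, collect the dependent keys for deletion.
	stale := idx.Invalidate("/something/")
	fmt.Println(len(stale), "keys to invalidate") // 3 keys to invalidate
}
```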
Releases

v0.4