Golang LRU cache

Overview

golang-lru

This provides the lru package, which implements a fixed-size, thread-safe LRU cache. It is based on the cache in Groupcache.

Documentation

Full docs are available on Godoc

Example

Using the LRU is very simple:

l, _ := lru.New(128)
for i := 0; i < 256; i++ {
    l.Add(i, nil)
}
if l.Len() != 128 {
    panic(fmt.Sprintf("bad len: %v", l.Len()))
}
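
Since the cache holds at most 128 entries, the oldest half of the inserted keys has been evicted by the end of the loop. A quick check continuing the example above (a sketch; Get returns the value plus a boolean hit flag):

if _, ok := l.Get(0); ok {
    panic("key 0 should have been evicted")
}
if _, ok := l.Get(255); !ok {
    panic("key 255 should still be cached")
}
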
Issues
  • Add expirable LRU implementation

    My attempt at an expirable cache implementation, similar to what was done in #41. I've tried to keep the new cache as close as possible to the existing one in terms of code style and behavior.

    The one difference I did leave in place is that size=0 means unlimited for this type of cache, which makes sense for an expirable cache.

    The original work was done in lcw, but it turned out I didn't need it there, and I thought it would be nice to contribute my changes upstream (to this repo).
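
    For context, here is a sketch of how such an expirable cache might be used. The API shown is an assumption modeled on the expirable package that later shipped in golang-lru/v2, not a quote of this PR:

    // Assumed API: entries are evicted by LRU pressure or once the TTL
    // elapses; per this PR, size=0 would mean an unlimited cache.
    cache := expirable.NewLRU[string, string](128, nil, 10*time.Second)
    cache.Add("key", "value")
    if v, ok := cache.Get("key"); ok {
    	fmt.Println(v) // present until ~10s after insertion
    }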

    opened by paskal 8
  • Does not build go 1.11

    Hi,

    Your go.mod file prevents golang-lru from building on go 1.11. Is 1.12 a strict requirement of the library? If not, can the go.mod file be modified to allow other Go versions, at least back to go 1.11?

    Thanks!

    opened by cep21 7
  • Add an 'onEvict' function called when an element is removed.

    I wanted to keep an LRU cache of open files, so I need a hook to close the files as they're evicted from the cache.

    A side effect is that the Purge operation is now probably slower, since we have to remove each element individually to ensure OnEvicted gets called for each one. If that is important, I could check whether the OnEvicted function is unset and, if so, purge the original way.
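
    A sketch of that pattern using the eviction callback (NewWithEvict is the constructor this PR introduces; the file-cache details are illustrative):

    // Close file handles as they fall out of the cache.
    cache, _ := lru.NewWithEvict(64, func(key, value interface{}) {
    	if f, ok := value.(*os.File); ok {
    		f.Close()
    	}
    })
    f, _ := os.Open("/tmp/example") // hypothetical path
    cache.Add("/tmp/example", f)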

    opened by mreid-moz 6
  • Adds LRU cache resize

    Adds a Resize method to LRU cache.


    We use golang-lru in production at Crunchyroll for embedded HTTP response caching in microservices through microcache.

    We'd like to wrap the LRU cache with our own type that allows the cache size to be specified as total data size rather than element count. This would make the cache safer and more memory-efficient: right now we have to make an educated guess at the average element size (compressed HTTP responses) and reserve significant memory overhead for potential variation. I have a good idea of how to implement this wrapper by calculating the average element size in real time and adjusting the cache size dynamically to strictly control memory usage. However, it requires the cache to be resizable, so that it can be expanded or contracted as the average size of added and evicted elements changes.

    This is an interface change adding one method.

    This PR only affects LRU, not ARC or 2Q, since those implementations would be more complex.

    The performance of a cache resize depends on the delta. In general, downsizing is more expensive, since it may cause many evictions, while upsizing is relatively cheap.
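
    A sketch of the new method in use; per this PR, Resize returns the number of entries evicted by a downsize:

    cache, _ := lru.New(1000)
    // ... later, adjust capacity based on the observed average entry size
    evicted := cache.Resize(500) // shrinking may evict up to 500 entries
    fmt.Printf("evicted %d entries\n", evicted)
    cache.Resize(2000) // growing is cheap and evicts nothing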

    opened by kevburnsjr 5
  • Unused code in simplelru.LRU.Get()

    Hi there!

    I was looking closely at the code and found a piece added in #56 that doesn't seem to be used and isn't tested:

    https://github.com/hashicorp/golang-lru/blob/881666a9a6fd9b1a7ceecdb853b12c106791809b/simplelru/lru.go#L76-L78

    I tried but was unable to trigger this condition. I think it should either be covered with tests and added to the other affected functions (e.g. at least LRU.Peek), or removed altogether if the condition is indeed impossible, as I believe it is.

    opened by paskal 4
  • Documentation - ARCCache Patent

    In the documentation for ARCCache, it says the algorithm is patented by IBM.

    It looks like IBM sold (or traded, or gave) the patent to Intel in 2013: https://assignment.uspto.gov/patent/index.html#/patent/search/resultAbstract?id=6996676&type=patNum

    http://legacy-assignments.uspto.gov/assignments/assignment-pat-30228-415.pdf

    opened by hasKeef 4
  • [lru_ttl] add TTL functionality to our LRU cache

    We've been using this addition for a number of months with great success.

    Basically, we have a stream of events we want to de-duplicate and merge, but we also want to guarantee that popular events still get flushed to storage periodically. This is straightforward to do by adding a ~5 second TTL and calling a save method in the on_evict hook. This means we do one update per event group every N seconds rather than on every request, saving our DB significant load.

    I can imagine other use cases too, such as ensuring your local cache is never too far out of date when used as a look-aside cache. Edit: our other use case is caching authentication information for the lifetime of the token by building a token -> user map. Naturally, we want this information to expire in a timely fashion :)

    For more details on our particular use case: https://blogs.unity3d.com/2015/12/02/the-state-of-game-performance-reporting/
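
    A sketch of the merge-and-flush pattern described above. The TTL-capable constructor is an assumption modeled on golang-lru/v2's expirable package, and saveToDB is a hypothetical persistence helper:

    // Merge events per key; the ~5s TTL guarantees that even hot keys
    // are flushed to storage periodically via the eviction hook.
    var cache = expirable.NewLRU[string, int](10000, func(key string, n int) {
    	saveToDB(key, n) // hypothetical persistence call
    }, 5*time.Second)

    func handleEvent(key string) {
    	n, _ := cache.Get(key) // zero value on a miss
    	cache.Add(key, n+1)    // one DB write per key per ~5s, not per event
    }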

    opened by ChrisLundquist 4
  • Adds an Adaptive Replacement Cache

    Adds an ARCCache implementation, which adaptively trades off between storing frequently used and recently used entries. This is nice, as a small burst of accesses to new records does not cause the frequently used records to be evicted. In almost all cases an ARC outperforms a standard LRU, but with some additional overhead.

    This PR moves the pure LRU logic into an internal package that is not thread-safe. The existing LRUCache provides thread safety on top of it. The ARCCache implementation uses the same internal LRU and is also thread-safe.
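
    Usage mirrors the plain LRU; a sketch against the API this PR adds:

    // ARC adaptively balances recency against frequency, so a burst of
    // new keys does not immediately push out frequently used entries.
    arc, _ := lru.NewARC(128)
    arc.Add("key", "value")
    if v, ok := arc.Get("key"); ok {
    	fmt.Println(v)
    }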

    opened by armon 4
  • Fixed potential memory leak from calling Purge very often

    We're using the LRU in a situation where we purge the cache fairly often. The original code created a new list and map on every purge; this pull request resets the existing list and map instead, so it doesn't create any additional garbage.
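
    A sketch of the idea (not the exact committed code): reuse the allocations the cache already holds instead of building new ones on every purge:

    // Before: Purge allocated a fresh map and list on each call.
    // After (sketch): clear the map in place and reset the list.
    func (c *LRU) Purge() {
    	for k := range c.items {
    		delete(c.items, k)
    	}
    	c.evictList.Init() // container/list: empties the list in place
    }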

    opened by jtsylve 4
  • question about promotion policy in 2Q-LRU

    Hi, it seems the implementation of 2Q-LRU in golang-lru differs slightly from the original paper in its promotion policy.

    In the paper, accessing an entry in the recent list does nothing, since the access may simply be a correlated reference (from the "2Q, Full Version" section of the paper).

    But in golang-lru, when an entry in the recent list is accessed again, it is promoted to the MRU end of the frequent list:

    func (c *TwoQueueCache) Get(key interface{}) (value interface{}, ok bool) {
    	// ...

    	// If the value is contained in recent, then we
    	// promote it to frequent
    	if val, ok := c.recent.Peek(key); ok {
    		c.recent.Remove(key)
    		c.frequent.Add(key, val)
    		return val, ok
    	}
    	// ...
    }
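
    For comparison, the paper's policy would leave the entry in the recent queue (a sketch, not the library's code):

    // 2Q "full version": a hit in the recent queue (A1in) is treated as a
    // possibly-correlated reference, so return the value without promotion.
    if val, ok := c.recent.Peek(key); ok {
    	return val, ok
    }
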
    Could you share your reasoning behind this promotion policy? @armon

    I think the answer is going to be "when an entry is accessed again, it is truly in most-recently-used status", but I figured I would ask just in case I'm wrong.

    Thanks in advance.

    opened by lorneli 3
  • LRU introduces "considerable" overhead

    I am in the process of introducing a cache layer to one of my libraries. The goal is to cache compiled regular expressions. At a high level:

    c, _ := regexp.Compile(someregex)

    if c.MatchString(somehaystack) {
      // ..
    }
    

    Because the regexps are user-defined, I want to cache them so I don't have to compile them on every request. First, I started out with a very simple approach: storing them in a map (no locks, nothing, a very naive approach):

    var cache = map[string]*regexp.Regexp{}

    // ...
    cache[pattern], _ = regexp.Compile(pattern)

    // ..
    if r, ok := cache[p]; ok {
      r.MatchString(something)
    }
    

    This reduced runtime considerably:

    # without cache
    BenchmarkLadon/policies=10-8                2000            617938 ns/op
    BenchmarkLadon/policies=100-8                300           5677979 ns/op
    BenchmarkLadon/policies=1000-8                20          59775053 ns/op
    
    # with primitive cache
    BenchmarkLadon/policies=10-8              200000              6022 ns/op
    BenchmarkLadon/policies=100-8              30000             43606 ns/op
    BenchmarkLadon/policies=1000-8              3000            511438 ns/op
    

    Now, I thought it might be a good idea to use golang-lru for caching, so memory won't fill up. Unfortunately, this didn't go as expected:

    # without cache
    BenchmarkLadon/policies=10-8                2000            617938 ns/op
    BenchmarkLadon/policies=100-8                300           5677979 ns/op
    BenchmarkLadon/policies=1000-8                20          59775053 ns/op
    
    # with cache (map)
    BenchmarkLadon/policies=10-8              200000              6022 ns/op
    BenchmarkLadon/policies=100-8              30000             43606 ns/op
    BenchmarkLadon/policies=1000-8              3000            511438 ns/op
    
    # with cache (replaced map with LRU)
    BenchmarkLadon/policies=10-8                2000            637065 ns/op
    BenchmarkLadon/policies=100-8                200           6383768 ns/op
    BenchmarkLadon/policies=1000-8                20          68409968 ns/op
    

    This really baffled me, because we're now using even more time than without the cache. First I checked whether I had made a mistake in my code, without luck. Then I started writing benchmarks against LRU itself:

    package lru
    
    import (
    	"github.com/hashicorp/golang-lru"
    	"regexp"
    	"sync"
    	"testing"
    )
    
    func BenchmarkLadonLRU(b *testing.B) {
    	b.StopTimer()
    	p, _ := regexp.Compile("^(foo|bar)$")
    	c, _ := lru.New(4096)
    	b.StartTimer()
    
    	for z := 0; z < b.N; z++ {
    		c.Add(z, p)
    		o, _ := c.Get(z)
    		if r, ok := o.(*regexp.Regexp); ok {
    			r.MatchString("foo")
    		}
    	}
    }
    
    func BenchmarkLadonSimple(b *testing.B) {
    	var lock sync.RWMutex
    	p, _ := regexp.Compile("^(foo|bar)$")
    	cache := map[int]interface{}{}
    	b.ResetTimer()
    
    	for z := 0; z < b.N; z++ {
    		lock.Lock()
    		cache[z] = p
    		lock.Unlock()
    		lock.RLock()
    		o := cache[z]
    		lock.RUnlock()
    		if r, ok := o.(*regexp.Regexp); ok {
    			r.MatchString("foo")
    		}
    	}
    }
    
    func BenchmarkLadonNoTypeCast(b *testing.B) {
    	var lock sync.RWMutex
    	p, _ := regexp.Compile("^(foo|bar)$")
    	cache := map[int]*regexp.Regexp{}
    	b.ResetTimer()
    
    	for z := 0; z < b.N; z++ {
    		lock.Lock()
    		cache[z] = p
    		lock.Unlock()
    		lock.RLock()
    		r := cache[z]
    		lock.RUnlock()
    		r.MatchString("foo")
    	}
    }
    
    func BenchmarkLadonNoTypeCastNoLockJustYolo(b *testing.B) {
    	p, _ := regexp.Compile("^(foo|bar)$")
    	cache := map[int]*regexp.Regexp{}
    	b.ResetTimer()
    
    	for z := 0; z < b.N; z++ {
    		cache[z] = p
    		r := cache[z]
    		r.MatchString("foo")
    	}
    }
    

    Which resulted in:

    BenchmarkLadonLRU-8                              2000000               890 ns/op
    BenchmarkLadonSimple-8                           3000000               531 ns/op
    BenchmarkLadonNoTypeCast-8                       3000000               497 ns/op
    BenchmarkLadonNoTypeCastNoLockJustYolo-8         3000000               446 ns/op
    

    While locking and type casting introduce some overhead, LRU needs about twice the time per op as the most primitive cache. I also noticed that increasing the cache's size increases the time spent per operation.

    I still believe something else is going on in my code where I replaced the map with LRU, though it might be due to some LRU or Go-runtime internals when dealing with the interface value (maybe some private fields are eliminated and require re-processing when calling Match?).

    I don't know if there is an easy way to improve LRU's runtime, but maybe you'll find this helpful. I'll update you if I find something new.

    opened by aeneasr 3
  • Suggestion to handle thread safety for the LRU cache

    This LRU cache (simplelru) is not thread-safe, so it would be better to add a thread-safe layer on top of it.

    Code to reproduce the concurrent access:

    // github.com/hashicorp/golang-lru/simplelru
    package main

    import (
    	"log"
    	"sync"

    	"github.com/hashicorp/golang-lru/simplelru"
    )

    var wg sync.WaitGroup

    func main() {
    	LRU, err := simplelru.NewLRU(100, nil)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for i := 0; i < 100; i++ {
    		LRU.Add(100, 100)
    	}
    	wg.Add(2)
    	go FreqAccess(LRU)
    	go FreqWrite(LRU)
    	wg.Wait()
    }

    func FreqAccess(lru *simplelru.LRU) {
    	defer wg.Done()
    	for i := 0; i < 100; i++ {
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			lru.Get(i)
    		}()
    	}
    }

    func FreqWrite(lru *simplelru.LRU) {
    	defer wg.Done()
    	for i := 0; i < 100; i++ {
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			lru.Add(i, i)
    		}()
    	}
    }

    Error:

    fatal error: concurrent map read and map write
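
    A minimal wrapper that would make this safe, essentially what the top-level lru.Cache already does (SafeLRU is an illustrative name, a sketch):

    type SafeLRU struct {
    	mu  sync.Mutex
    	lru *simplelru.LRU
    }

    func (c *SafeLRU) Add(key, value interface{}) bool {
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	return c.lru.Add(key, value)
    }

    func (c *SafeLRU) Get(key interface{}) (interface{}, bool) {
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	return c.lru.Get(key)
    }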

    opened by nagnagendra 0
  • Added AddOrUpdate method to lru

    It behaves like the standard Add method, with the difference that it also takes an update function that is invoked if the key already exists.

    For example:

    value1 := []string{"some_value"}
    value2 := []string{"some_value_2"}
    
    l.Add(1, value1)
    l.AddOrUpdate(1, value2, func(v1, v2 interface{}) interface{} {
    	slice1 := v1.([]string)
    	slice2 := v2.([]string)
    	return append(slice1, slice2...)
    })
    
    

    Then l.Get(1) will return []string{"some_value", "some_value_2"}.

    opened by gkoutsou 1
  • LRU generics

    Added generics at all levels of the LRU implementation. For the internal list, moved from Go's built-in container/list to a generic implementation: github.com/bahlo/generic-list-go. Updated all tests to accommodate generics. Changed the Go version in go.mod to 1.18.
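
    With generics, usage becomes type-safe; a sketch matching the v2 API this work fed into:

    // Keys and values are typed, so no interface{} assertions are needed.
    l, _ := lru.New[int, string](128)
    l.Add(1, "one")
    if v, ok := l.Get(1); ok {
    	fmt.Println(v) // v is a string
    }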

    opened by aviadl 1
  • removeElement panics with nil interface conversion

    Hello, I'm facing intermittent issues using simplelru under high load. Here is the error:

    panic: interface conversion: interface {} is nil, not *simplelru.entry

    goroutine 355 [running]:
    github.com/hashicorp/golang-lru/simplelru.(*LRU).removeElement(0x16ec740, 0xc0009cb608)
    	github.com/hashicorp/[email protected]/simplelru/lru.go:172 +0x107
    github.com/hashicorp/golang-lru/simplelru.(*LRU).removeOldest(0x15d7080)
    	github.com/hashicorp/[email protected]/simplelru/lru.go:165 +0x34
    github.com/hashicorp/golang-lru/simplelru.(*LRU).Add(0xc0036d5b60, {0x1567300, 0xc01e20ee10}, {0x15ad680, 0x371f460})
    	github.com/hashicorp/[email protected]/simplelru/lru.go:67 +0x394
    github.com/ledgerwatch/erigon-lib/txpool.(*TxPool).discardLocked(0xc000200180, 0xc012f77ef0, 0x4)
    	github.com/ledgerwatch/[email protected]/txpool/pool.go:1063 +0x15a
    github.com/ledgerwatch/erigon-lib/txpool.(*TxPool).addLocked(0xc000200180, 0xc01c355a90)
    	github.com/ledgerwatch/[email protected]/txpool/pool.go:1038 +0x2c5
    github.com/ledgerwatch/erigon-lib/txpool.addTxs(0x105cc63, {0x28a5fe8, 0xc01e015938}, 0xc001c65a70, {{0xc010ff5800, 0x42, 0x80}, {0xc0153eec00, 0x528, 0x600}, ...}, ...)
    	github.com/ledgerwatch/[email protected]/txpool/pool.go:905 +0x4a4
    github.com/ledgerwatch/erigon-lib/txpool.(*TxPool).processRemoteTxs(0xc000200180, {0x28c2d58, 0xc0010eb480})
    	github.com/ledgerwatch/[email protected]/txpool/pool.go:507 +0x4bc
    github.com/ledgerwatch/erigon-lib/txpool.MainLoop({0x28c2d58, 0xc0010eb480}, {0x28cc1a8, 0xc001d92280}, {0x1ad4ccb667b585ab, 0x13748a941b4512d3}, 0xc000200180, 0xc0006ae780, 0xc0003571c0, 0xc0005ba150, ...)
    	github.com/ledgerwatch/[email protected]/txpool/pool.go:1327 +0x565
    created by github.com/ledgerwatch/erigon/eth.New
    	github.com/ledgerwatch/erigon/eth/backend.go:475 +0x3368

    The panic happens at this line, I believe: kv := e.Value.(*entry)

    Am I misusing the library in some way?
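
    One likely explanation (an assumption, not confirmed in the thread): simplelru has no internal locking, so concurrent Add/Get calls from multiple goroutines can corrupt the internal map and list and produce exactly this kind of panic. Running the workload under Go's race detector should confirm it:

    go test -race ./...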

    opened by limhyesook 1