Golang LRU cache

Overview

golang-lru

This provides the lru package, which implements a fixed-size, thread-safe LRU cache. It is based on the cache in Groupcache.

Documentation

Full docs are available on Godoc

Example

Using the LRU is very simple:

l, _ := New(128)
for i := 0; i < 256; i++ {
    l.Add(i, nil)
}
if l.Len() != 128 {
    panic(fmt.Sprintf("bad len: %v", l.Len()))
}
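
For reference, a self-contained version of the snippet above might look like this (assuming the import path github.com/hashicorp/golang-lru):

package main

import (
    "fmt"

    lru "github.com/hashicorp/golang-lru"
)

func main() {
    // New returns a thread-safe LRU cache that holds at most 128 entries.
    l, _ := lru.New(128)
    for i := 0; i < 256; i++ {
        l.Add(i, nil)
    }
    // Only the 128 most recently added keys remain.
    if l.Len() != 128 {
        panic(fmt.Sprintf("bad len: %v", l.Len()))
    }
}
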
Issues
  • Add expirable LRU implementation

    My attempt at an expirable cache implementation, similar to what was done in #41. I've tried to make the new cache as close as possible to the existing one in terms of code style and behavior.

    The only difference I left in place is that size=0 on this type of cache means unlimited, which makes sense for an expirable cache.

    The original work was done in lcw, but it turned out I didn't need it there, and I thought it would be nice to contribute my changes upstream (to this repo).

    opened by paskal 8
  • Does not build go 1.11

    Hi,

    Your go.mod file prevents golang-lru from building on Go 1.11. Is 1.12 a strict requirement of the library? If not, can the go.mod file be modified to allow earlier Go versions, at least back to 1.11?

    Thanks!

    opened by cep21 7
  • Add an 'onEvict' function called when an element is removed.

    I wanted to keep an LRU cache of open files, so I needed a hook for closing the files as they're evicted from the cache (a short sketch of this pattern follows this item).

    A side effect is that the Purge operation is now probably slower, since we have to remove each element individually to ensure that OnEvicted gets called for each one. If that matters, I could check whether an OnEvicted function is set and, if not, purge the original way.

    opened by mreid-moz 6
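
    A minimal sketch of the eviction hook described above, using the NewWithEvict constructor the library exposes; the file-cache wrapper itself is hypothetical:

        // filecache is a hypothetical wrapper: an LRU of open files that
        // closes each file when its entry is evicted to make room.
        package filecache

        import (
            "os"

            lru "github.com/hashicorp/golang-lru"
        )

        func New(size int) (*lru.Cache, error) {
            return lru.NewWithEvict(size, func(key, value interface{}) {
                if f, ok := value.(*os.File); ok {
                    f.Close() // release the descriptor as the entry leaves the cache
                }
            })
        }
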
  • fix gofmt issue

    Noticed there were some diffs when running gofmt.

    opened by thaJeztah 5
  • Adds LRU cache resize

    Adds a Resize method to the LRU cache.

    We use golang-lru in production at Crunchyroll for embedded HTTP response caching in microservices through microcache.

    We'd like to wrap the LRU cache with our own type, which would allow the cache size to be specified in total data size rather than element count. This would make the cache safer and more memory efficient, since right now we have to make an educated guess at the average element size (compressed HTTP responses) and reserve significant memory overhead for potential variation.

    I have a good idea of how to implement this wrapper by calculating the average element size in real time and adjusting the cache size dynamically to strictly control memory usage. However, this requires the cache to be resizable so that it can be expanded or contracted depending on the average size of the elements added and evicted.

    This is an interface change adding one method.

    This PR only affects LRU and not ARC or 2Q since those implementations would be more complex.

    The performance of a cache resize depends on the delta. In general, downsizing is more expensive since it may result in many evictions, while upsizing is relatively cheap. (A short usage sketch of Resize follows this item.)

    opened by kevburnsjr 5
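
    A brief usage sketch of the Resize method this PR adds; the sizes below are purely illustrative:

        package main

        import (
            "fmt"

            lru "github.com/hashicorp/golang-lru"
        )

        func main() {
            c, _ := lru.New(1024)
            for i := 0; i < 1024; i++ {
                c.Add(i, i)
            }

            // Shrinking may evict entries; Resize reports how many were dropped.
            evicted := c.Resize(256)
            fmt.Printf("evicted %d entries while downsizing\n", evicted)

            // Growing evicts nothing and simply raises the capacity.
            c.Resize(2048)
        }
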
  • Adds an Adaptive Replacement Cache

    Adds an ARCCache implementation, which adaptively trades off between storing frequently used and recently used entries. This is nice because a small burst of accesses to new records does not cause the frequently used records to be evicted. In almost all cases an ARC outperforms a standard LRU, but at some additional overhead. (A usage sketch follows this item.)

    This PR moves the pure LRU logic into an internal package which is not thread safe. The existing LRUCache provides thread safety on top of that. The ARCCache implementation uses the same internal LRU, and is also thread safe.

    opened by armon 4
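
    A minimal usage sketch of the ARCCache this PR adds; the key and value here are illustrative:

        package main

        import (
            "fmt"

            lru "github.com/hashicorp/golang-lru"
        )

        func main() {
            // NewARC builds an adaptive replacement cache; its Add/Get/Len API
            // mirrors the plain LRU cache, so it can be swapped in directly.
            arc, _ := lru.NewARC(128)

            arc.Add("config", "value")
            if v, ok := arc.Get("config"); ok {
                fmt.Println("cached:", v)
            }
            fmt.Println("entries:", arc.Len())
        }
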
  • Documentation - ARCCache Patent

    The documentation for ARCCache says that the algorithm is patented by IBM.

    It looks like IBM sold (or traded or gave) the patent to Intel in 2013 https://assignment.uspto.gov/patent/index.html#/patent/search/resultAbstract?id=6996676&type=patNum

    http://legacy-assignments.uspto.gov/assignments/assignment-pat-30228-415.pdf

    opened by hasKeef 4
  • [lru_ttl] add TTL functionality to our LRU cache

    We've been using this addition for a number of months with great success.

    Basically, we have a stream of events we want to de-duplicate and merge, but we also want to guarantee that popular events eventually get flushed to storage periodically. This is pretty straightforward to do by adding a ~5 second TTL and calling a save method in the on_evict hook. Naturally, this means we do one update per event group every N seconds rather than on every request, saving our DB significant load.

    I can imagine other use cases too, such as ensuring your local cache is never too far out of date when used as a look-aside cache. Edit: Our other use case is caching authentication information for the lifetime of the token by building a token -> user map. Naturally, we want this information to expire in a timely fashion :)

    For more details on our particular use case: https://blogs.unity3d.com/2015/12/02/the-state-of-game-performance-reporting/

    opened by ChrisLundquist 4
  • Unused code in simplelru.LRU.Get()

    Hi there!

    I was looking closely at the code and found a piece added in #56 which doesn't seem to be used and is not tested:

    https://github.com/hashicorp/golang-lru/blob/881666a9a6fd9b1a7ceecdb853b12c106791809b/simplelru/lru.go#L76-L78

    I tried but was unable to trigger this condition. I think it should either be covered with tests and added to all other affected functions (e.g. at least LRU.Peek) or removed altogether if that condition is indeed impossible, as I believe it is.

    opened by paskal 4
  • tslru: a thread-safe lru

    The concurrent Get() performance of tslru exceeds that of simplelru wrapped in a sync.RWMutex.

    opened by laotoutou 2
  • would you consider a pr for lock

    // Get looks up a key's value from the cache.
    func (c *Cache) Get(key interface{}) (value interface{}, ok bool) {
    	c.lock.Lock()
    	value, ok = c.lru.Get(key)
    	c.lock.Unlock()
    	return value, ok
    }
    

    When I get data from the LRU, I need to lock the entire LRU. Would you consider locking only a few related nodes?

    opened by laotoutou 0
  • fix onEvictedCB bug

    Fixes the bug reported in issue 83 (https://github.com/hashicorp/golang-lru/issues/83).

    opened by yigongliu-concur 2
  • Remove method is calling the wrong callback

    This line in the Remove method, https://github.com/hashicorp/golang-lru/blob/80c98217689d6df152309d574ccc682b21dc802c/lru.go#L170, should instead be:

    	c.onEvictedCB(k, v)
    
    opened by rtksrubie 2
  • Add method GetAndAdd to LRU Cache

    Hi! Sometimes it's useful to get a value by key and set a new one atomically (like GETSET in Redis).

    opened by vasayxtx 5
  • Support expiring cache use callback

    • support simple expiring caches backed by the eviction logic of the existing 2Q, ARC, and simple LRU:

      NewExpiring2Q(size int, expirationTime time.Duration)
      NewExpiringARC(size int, expirationTime time.Duration)
      NewExpiringLRU(size int, expirationTime time.Duration)

    • no changes to the existing API; no client code changes if the expiring cache is not used.

    • client code can register a callback to receive evicted/expired keys/values.

    • there are two expiration policies, similar to Guava's CacheBuilder (https://guava.dev/releases/19.0/api/docs/com/google/common/cache/CacheBuilder.html): ExpireAfterWrite and ExpireAfterAccess.

    • the default cleanup of expired entries is lazy, happening only when space is needed for new entries (inside Add) or when accurate keys (inside Keys) or size (inside Len) are needed; background cleanup can be added with a goroutine that periodically calls RemoveAllExpired().

    • following the same pattern as the simple LRU, the 2Q/ARC Cache is separated from the LRU, so that XXXCache is just a thread-safe wrapper around XXXLRU; this avoids double locking when ExpiringCache wraps 2Q or ARC.

    opened by yigongliu-concur 0
  • Add Features: add MAdd and MGet

    Updates lru.go. Simple functions for multi-key operations with less mutex contention.

    opened by bjkxt 4
  • support simple expiring cache

    Please review changes to support simple expiring cache:

    • reuse the existing 2Q, ARC, and simple LRU for eviction and add the expiring logic as a wrapper; this requires a change to the current Add() API method to return the evicted key/value.

    • since there are 3000+ packages importing golang-lru, there should be no changes required in user client code. To avoid API changes that force user code changes, the change to the Add() method (the single change to the LRUCache interface) is introduced as an optional argument:

      Add(key, val interface{}, evictedKeyVal ...*interface{}) (evicted bool)

      this is a "pull"-style API: the caller passes receiving arguments to collect results from the callee, similar to Reader{ Read(recvBuf []byte) int }:

      Add(k,v) //current code, no change

      Add(k,v,&evictedKey) //interested in evicted key

      Add(k,v,&evictedKey,&evictedVal) //interested in both evicted key and val

    • following the same pattern as the simple LRU, the 2Q/ARC Cache is separated from the LRU, so that XXXCache is just a thread-safe wrapper around XXXLRU; this avoids double locking when ExpiringCache wraps 2Q or ARC.

    • there are two expiration policies, similar to Guava's CacheBuilder (https://guava.dev/releases/19.0/api/docs/com/google/common/cache/CacheBuilder.html): ExpireAfterWrite and ExpireAfterAccess.

    • the default cleanup of expired entries is lazy, happening only when space is needed for new entries (inside Add) or when accurate keys (inside Keys) or size (inside Len) are needed; background cleanup can be added with a goroutine that periodically calls RemoveAllExpired().

    opened by yigongliu-concur 5
  • Request: move patent-protected code to another repo

    Could you move the patent-protected code (https://github.com/hashicorp/golang-lru/issues/31) to another repo? I don't want to import a patent-protected module, even if the functions in question are never actually called.

    opened by AkihiroSuda 1
  • Add a copyright / notice file

    This project doesn't appear to include any copyright information or a NOTICE file. Could you add one? This is needed to comply with the open-source license conditions.

    opened by davidmateos 1
Owner
HashiCorp
Consistent workflows to provision, secure, connect, and run any infrastructure for any application.

Related projects

A fast little LRU cache for Go

tinylru A fast little LRU cache. Getting Started Installing To start using tinylru, install Go and run go get: $ go get -u github.com/tidwall/tinylru

Josh Baker 88 Jul 4, 2021
An in-memory string-interface{} map with various expiration options for golang

TTLCache - an in-memory cache with expiration TTLCache is a simple key/value cache in golang with the following functions: Expiration of items based o

Rene Kroon 254 Jul 24, 2021
Cache Slow Database Queries

Cache Slow Database Queries This package is used to cache the results of slow database queries in memory or Redis. It can be used to cache any form of

null 100 Jul 23, 2021
101+ coding interview problems in Go

116+ Coding Interview Problems with Detailed Solutions The Ultimate Go Study Guide eBook version → Join my mailing list to get the latest updates here

Hoanh An 3.1k Jul 21, 2021
Fast in-memory key:value store/cache with TTL

MCache library go-mcache - this is a fast key:value storage. Its major advantage is that, being essentially a thread-safe . map[string]interface{} wit

O.J 68 Jul 12, 2021
Go Library [DEPRECATED]

Tideland Go Library Description The Tideland Go Library contains a larger set of useful Google Go packages for different purposes. ATTENTION: The cell

Tideland 194 Feb 1, 2021
Probabilistic data structures for processing continuous, unbounded streams.

Boom Filters Boom Filters are probabilistic data structures for processing continuous, unbounded streams. This includes Stable Bloom Filters, Scalable

Tyler Treat 1.4k Jul 17, 2021
A collection of useful, performant, and threadsafe Go datastructures.

go-datastructures Go-datastructures is a collection of useful, performant, and threadsafe Go datastructures. NOTE: only tested with Go 1.3+. Augmented

Workiva 6.1k Jul 21, 2021
DataFrames for Go: For statistics, machine-learning, and data manipulation/exploration

Dataframes are used for statistics, machine-learning, and data manipulation/exploration. You can think of a Dataframe as an excel spreadsheet. This pa

null 573 Jul 20, 2021
Dynamic object-oriented programming support for the Go language

Goop Description The Goop (Go Object-Oriented Programming) package provides support for dynamic object-oriented programming constructs in Go, much lik

Los Alamos National Laboratory 101 Jul 15, 2021
go/golang: fast bit set Bloom filter

package implements a fast bloom filter with real 'bitset' and JSONMarshal/JSONUnmarshal to store/reload the Bloom filter.

Andreas Briese 114 Jul 25, 2021
Golang implementation of Radix trees

go-radix Provides the radix package that implements a radix tree. The package only provides a single Tree implementation, optimized for sparse nodes.

Armon Dadgar 665 Jul 7, 2021