fastcache - fast thread-safe in-memory cache for a big number of entries in Go

Overview


fastcache is a fast, thread-safe in-memory cache designed to hold a big number of entries in Go.

Features

  • Fast. Performance scales on multi-core CPUs. See benchmark results below.
  • Thread-safe. Concurrent goroutines may read and write to a single cache instance.
  • Designed for storing a big number of entries without GC overhead.
  • Automatically evicts old entries when the maximum cache size set at creation is reached.
  • Simple API.
  • Simple source code.
  • The cache may be saved to a file and loaded from a file.
  • Works on Google AppEngine.

Benchmarks

Fastcache performance is compared with BigCache, standard Go map and sync.Map.

GOMAXPROCS=4 go test github.com/VictoriaMetrics/fastcache -bench='Set|Get' -benchtime=10s
goos: linux
goarch: amd64
pkg: github.com/VictoriaMetrics/fastcache
BenchmarkBigCacheSet-4      	    2000	  10566656 ns/op	   6.20 MB/s	 4660369 B/op	       6 allocs/op
BenchmarkBigCacheGet-4      	    2000	   6902694 ns/op	   9.49 MB/s	  684169 B/op	  131076 allocs/op
BenchmarkBigCacheSetGet-4   	    1000	  17579118 ns/op	   7.46 MB/s	 5046744 B/op	  131083 allocs/op
BenchmarkCacheSet-4         	    5000	   3808874 ns/op	  17.21 MB/s	    1142 B/op	       2 allocs/op
BenchmarkCacheGet-4         	    5000	   3293849 ns/op	  19.90 MB/s	    1140 B/op	       2 allocs/op
BenchmarkCacheSetGet-4      	    2000	   8456061 ns/op	  15.50 MB/s	    2857 B/op	       5 allocs/op
BenchmarkStdMapSet-4        	    2000	  10559382 ns/op	   6.21 MB/s	  268413 B/op	   65537 allocs/op
BenchmarkStdMapGet-4        	    5000	   2687404 ns/op	  24.39 MB/s	    2558 B/op	      13 allocs/op
BenchmarkStdMapSetGet-4     	     100	 154641257 ns/op	   0.85 MB/s	  387405 B/op	   65558 allocs/op
BenchmarkSyncMapSet-4       	     500	  24703219 ns/op	   2.65 MB/s	 3426543 B/op	  262411 allocs/op
BenchmarkSyncMapGet-4       	    5000	   2265892 ns/op	  28.92 MB/s	    2545 B/op	      79 allocs/op
BenchmarkSyncMapSetGet-4    	    1000	  14595535 ns/op	   8.98 MB/s	 3417190 B/op	  262277 allocs/op

The MB/s column here actually means millions of operations per second. As the results show, fastcache is faster than BigCache in all cases. fastcache is also faster than the standard Go map and sync.Map on workloads with inserts.

Limitations

  • Keys and values must be byte slices. Other types must be marshaled before storing them in the cache.
  • Big entries with sizes exceeding 64KB must be stored via the distinct SetBig/GetBig API.
  • There is no cache expiration. Entries are evicted from the cache only on cache size overflow. An entry deadline may be stored inside the value in order to implement cache expiration.

Architecture details

The cache uses ideas from BigCache:

  • The cache consists of many buckets, each with its own lock. This helps scale performance on multi-core CPUs, since multiple CPUs may concurrently access distinct buckets.
  • Each bucket consists of a hash(key) -> (key, value) position map and 64KB-sized byte slices (chunks) holding encoded (key, value) entries. Each bucket contains only O(chunksCount) pointers. For instance, a 64GB cache would contain ~1M pointers, while a similarly-sized map[string][]byte would contain ~1B pointers for short keys and values, which would lead to huge GC overhead.

64KB-sized chunks reduce memory fragmentation and total memory usage compared to a single big chunk per bucket. Chunks are allocated off-heap if possible. This reduces total memory usage, because the GC collects unused memory more frequently without the need for GOGC tweaking.
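The bucket-per-lock idea can be sketched with a stdlib-only toy. The shardedCache type below is hypothetical: it uses FNV hashing instead of fastcache's xxhash and a plain Go map per bucket instead of fastcache's chunk-based storage, but it shows how a key's hash selects a bucket so that goroutines hitting distinct buckets never contend on the same lock.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const bucketsCount = 512 // fastcache also uses 512 buckets

// bucket holds its own lock, so only keys hashing to this bucket
// contend with each other.
type bucket struct {
	mu sync.Mutex
	m  map[string][]byte
}

type shardedCache struct {
	buckets [bucketsCount]bucket
}

func newShardedCache() *shardedCache {
	c := &shardedCache{}
	for i := range c.buckets {
		c.buckets[i].m = make(map[string][]byte)
	}
	return c
}

// bucketFor maps a key to its bucket via the key's hash.
func (c *shardedCache) bucketFor(k []byte) *bucket {
	h := fnv.New64a()
	h.Write(k)
	return &c.buckets[h.Sum64()%bucketsCount]
}

func (c *shardedCache) Set(k, v []byte) {
	b := c.bucketFor(k)
	b.mu.Lock()
	b.m[string(k)] = v
	b.mu.Unlock()
}

func (c *shardedCache) Get(k []byte) []byte {
	b := c.bucketFor(k)
	b.mu.Lock()
	defer b.mu.Unlock()
	return b.m[string(k)]
}

func main() {
	c := newShardedCache()
	c.Set([]byte("a"), []byte("1"))
	fmt.Println(string(c.Get([]byte("a")))) // 1
}
```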


FAQ

What is the difference between fastcache and other similar caches like BigCache or FreeCache?

  • Fastcache is faster. See benchmark results above.
  • Fastcache uses less memory due to lower heap fragmentation. This allows saving many GBs of memory on multi-GB caches.
  • Fastcache API is simpler. The API is designed to be used in zero-allocation mode.

Why doesn't fastcache support cache expiration?

Because we don't need cache expiration in VictoriaMetrics. Cached entries inside VictoriaMetrics never expire. They are automatically evicted on cache size overflow.

It is easy to implement cache expiration on top of fastcache by caching values with marshaled deadlines and verifying deadlines after reading these values from the cache.

Why doesn't fastcache support advanced features such as thundering herd protection or callbacks on entry eviction?

Because these features would complicate the code and make it slower. The fastcache source code is simple - just copy-paste it and implement the features you need on top of it.

Comments
  • all: pass config as a parameter for init function

    The rationale behind this: in some cases, the hasher function may need to be overridden. xxhash is a very fast hash function, but sometimes there is no need to re-compute the digest of a key if the key itself is already hashed.

    For example, if the key itself is a 32-byte hashed value, a simple Hasher function may just use the first 8 bytes as the digest. Although this may increase the possibility of hash collisions, it's faster.

    opened by rjl493456442 7
  • How do I write a package that I can adjust the

    Basically, I would like to do my pre-processing of fastcache in my own package, so I can assign the size of the cache inside my program instead of directly on fastcache.

    How do I do this?

    
    import (
            "fmt"

            "github.com/VictoriaMetrics/fastcache"
    )

    // Cache wraps fastcache.Cache so size validation and other
    // pre-processing can live in our own package.
    type Cache struct {
            c *fastcache.Cache
    }

    func New(maxBytes int) *Cache {
            if maxBytes <= 0 {
                    panic(fmt.Errorf("maxBytes must be greater than 0; got %d", maxBytes))
            }
            return &Cache{c: fastcache.New(maxBytes)}
    }

    func (c *Cache) Set(k, v []byte) {
            // do something before delegating to fastcache.Set(k, v)
            c.c.Set(k, v)
    }
    
    question 
    opened by hiqsociety 7
  • Fix generation overflow

    There is an issue in the map implementation where the generation is put into the top 24 bits of the index stored in the hash->index map

    https://github.com/VictoriaMetrics/fastcache/blob/master/fastcache.go#L339

    This generation is then used to test whether the data in the cache is still valid.

    https://github.com/VictoriaMetrics/fastcache/blob/master/fastcache.go#L352

    The problem arises when the 24-bit value of the generation overflows at 16,777,216. The generation stored in the bucket is a uint64 and continues to grow, while the generation value stored in the index wraps back to 0.

    After this number of generations has been reached, the data stored in the cache becomes unreadable, as it will always fail the generation tests.

    It is unlikely in practice that systems will experience this bug, but the fix is reasonably simple and doesn't compromise performance.

    opened by fmstephe 7
  • module: update cespare/xxhash to v2

    Update the cespare/xxhash package to github.com/cespare/xxhash/v2. Also, update related modules to their current latest versions.

    Here is the benchmark result using GCP Compute instance:

    • machine_type
      • n1-standard-16
    • image-family
      • debian-9
    • boot-disk-type
      • pd-ssd
    • preemptible
      • true

    cat /proc/cpuinfo

    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 79
    model name      : Intel(R) Xeon(R) CPU @ 2.20GHz
    stepping        : 0
    microcode       : 0x1
    cpu MHz         : 2200.000
    cache size      : 56320 KB
    physical id     : 0
    siblings        : 16
    core id         : 0
    cpu cores       : 8
    apicid          : 0
    initial apicid  : 0
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 13
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp kaiser fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat arch_capabilities
    bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
    bogomips        : 4400.00
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 46 bits physical, 48 bits virtual
    power management:
    .
    .
    .
    
    gce-benchmark.bash startup-script

    #!/bin/bash
    set -x
    
    start_time="$(date -u +%s)"
    
    echo 'Install google-fluentd ...'
    curl -sSL https://dl.google.com/cloudagents/install-logging-agent.sh | sudo bash
    
    echo 'Restart google-fluentd ...'
    service google-fluentd restart
    
    # ----------------------------------------------------------------------------
    # Start tasks
    
    sudo apt-get update
    sudo apt-get install -yqq --no-install-recommends --no-install-suggests \
      ca-certificates \
      curl \
      netbase \
      wget \
      \
      bzr \
      git \
      mercurial \
      openssh-client \
      subversion \
      \
      procps \
      \
      g++ \
      gcc \
      libc6-dev \
      make \
      pkg-config
    
    GITHUB_USER="$(curl -s -H 'Metadata-Flavor:Google' http://metadata.google.internal/computeMetadata/v1/instance/attributes/github_user)"
    GIT_REMOTE="$(curl -s -H 'Metadata-Flavor:Google' http://metadata.google.internal/computeMetadata/v1/instance/attributes/git_remote)"
    PACKAGE="$(curl -s -H 'Metadata-Flavor:Google' http://metadata.google.internal/computeMetadata/v1/instance/attributes/package)"
    BRANCH="$(curl -s -H 'Metadata-Flavor:Google' http://metadata.google.internal/computeMetadata/v1/instance/attributes/branch)"
    BENCH_FUNCS="$(curl -s -H 'Metadata-Flavor:Google' http://metadata.google.internal/computeMetadata/v1/instance/attributes/bench_funcs)"
    STATE="$(curl -s -H 'Metadata-Flavor:Google' http://metadata.google.internal/computeMetadata/v1/instance/attributes/state)"
    
    export GOLANG_VERSION=1.12.4
    
    wget -qq -O ~/go.tgz "https://golang.org/dl/go${GOLANG_VERSION}.linux-amd64.tar.gz"
    sudo tar -C /usr/local -xzf ~/go.tgz
    rm ~/go.tgz
    
    export GOPATH=~/go
    export GOCACHE=~/.cache/go-build
    export PATH="$GOPATH/bin:/usr/local/go/bin:$PATH"
    mkdir -p "$GOPATH/src" "$GOPATH/bin" && chmod -R 777 "$GOPATH"
    mkdir -p "$GOCACHE" && chmod -R 777 "$GOCACHE"
    
    go version
    
    go get -u -v "$PACKAGE"
    
    export GO111MODULE=on
    
    cd "$GOPATH/src/$PACKAGE" || return
    if echo "$PACKAGE" | grep "$GITHUB_USER"; then
      git checkout "origin/$BRANCH"
    else
      git remote add "$GITHUB_USER" "$GIT_REMOTE"
      git fetch "$GITHUB_USER"
      git checkout "$GITHUB_USER/$BRANCH"
    fi
    
    go mod download
    go mod tidy -v
    go mod vendor -v
    
    REVISION="$(git rev-parse --short -q HEAD)"
    BENCH_OUT="$PWD/$(echo "$PACKAGE" | tr '/' '-')@$REVISION.$STATE.txt"
    
    GOMAXPROCS=$(nproc) go test -v -mod=vendor -run='^$' -bench="$BENCH_FUNCS" -benchmem -benchtime=10s "$PACKAGE" | tee -a "$BENCH_OUT" || true
    # touch "$BENCH_OUT"
    # for ncpu in 4 8 16; do
    #   GOMAXPROCS=$ncpu go test -v -mod=vendor -run='^$' -bench="$BENCH_FUNCS" -benchmem -benchtime=10s "$PACKAGE" | tee -a "$BENCH_OUT" || true
    # done
    
    CPUINFO_OUT="$PWD/$(echo "$PACKAGE" | tr '/' '-')@$REVISION.$STATE.cpuinfo.txt"
    
    cat /proc/cpuinfo > "$CPUINFO_OUT"
    
    gsutil cp "$BENCH_OUT" "$CPUINFO_OUT" "$(curl -s -H 'Metadata-Flavor:Google' http://metadata.google.internal/computeMetadata/v1/instance/attributes/gsutil_benchstat_bucket_name)" || true
    
    # End tasks
    # ----------------------------------------------------------------------------
    
    end_time="$(date -u +%s)"
    elapsed="$(("$end_time-$start_time"))"
    echo "Total of $elapsed seconds elapsed for tasks"
    
    echo 'Delete own GCE instance ...'
    yes | gcloud compute instances delete "$(hostname)" --zone "$(curl -s -H 'Metadata-Flavor:Google' http://metadata.google.internal/computeMetadata/v1/instance/zone | cut -d/ -f4)"
    

    gcloud command

    GOOGLE_CLOUD_PROJECT='fooproject' \
      MACHINE_TYPE='n1-standard-16' \
      GITHUB_USER='{VictoriaMetrics or zchee}' \
      GIT_REMOTE='https://github.com/VictoriaMetrics/fastcache' \
      PACKAGE='github.com/VictoriaMetrics/fastcache' \
      BRANCH='{master or xxhash/v2}' \
      BENCH_FUNCS='Set|Get' \
      STATE='{old or new}' \
      GSUTIL_BENCHSTAT_BUCKET_NAME='gs://foobucket/benchstat/' ;
      gcloud --project="$GOOGLE_CLOUD_PROJECT" alpha compute instances create --zone 'asia-northeast1-a' \
      --machine-type "$MACHINE_TYPE" --image-project='debian-cloud' --image-family='debian-9' \
      --boot-disk-type='pd-ssd' --preemptible \
      --scopes 'https://www.googleapis.com/auth/cloud-platform' \
      --metadata="github_user=${GITHUB_USER},git_remote=${GIT_REMOTE},package=${PACKAGE},branch=${BRANCH},bench_funcs=${BENCH_FUNCS},state=${STATE},gsutil_benchstat_bucket_name=${GSUTIL_BENCHSTAT_BUCKET_NAME}" \
      --metadata-from-file='startup-script=gce-benchmark.bash' \
      --verbosity='debug' \
      'benchstat'
    

    benchstat result

    name               old time/op    new time/op    delta
    SetBig-16            18.6µs ± 0%    18.0µs ± 0%   ~     (p=1.000 n=1+1)
    GetBig-16            3.91µs ± 0%    3.89µs ± 0%   ~     (p=1.000 n=1+1)
    BigCacheSet-16       5.91ms ± 0%    5.92ms ± 0%   ~     (p=1.000 n=1+1)
    BigCacheGet-16       1.53ms ± 0%    1.63ms ± 0%   ~     (p=1.000 n=1+1)
    BigCacheSetGet-16    6.69ms ± 0%    7.34ms ± 0%   ~     (p=1.000 n=1+1)
    CacheSet-16          1.89ms ± 0%    1.88ms ± 0%   ~     (p=1.000 n=1+1)
    CacheGet-16          1.18ms ± 0%    1.01ms ± 0%   ~     (p=1.000 n=1+1)
    CacheSetGet-16       6.12ms ± 0%    6.39ms ± 0%   ~     (p=1.000 n=1+1)
    StdMapSet-16         19.0ms ± 0%    18.4ms ± 0%   ~     (p=1.000 n=1+1)
    StdMapGet-16         3.00ms ± 0%    3.00ms ± 0%   ~     (p=1.000 n=1+1)
    StdMapSetGet-16      78.7ms ± 0%    78.6ms ± 0%   ~     (p=1.000 n=1+1)
    SyncMapSet-16        49.2ms ± 0%    47.5ms ± 0%   ~     (p=1.000 n=1+1)
    SyncMapGet-16         609µs ± 0%     612µs ± 0%   ~     (p=1.000 n=1+1)
    SyncMapSetGet-16     4.01ms ± 0%    4.01ms ± 0%   ~     (p=1.000 n=1+1)
    
    name               old speed      new speed      delta
    SetBig-16          14.1GB/s ± 0%  14.6GB/s ± 0%   ~     (p=1.000 n=1+1)
    GetBig-16          69.4GB/s ± 0%  69.8GB/s ± 0%   ~     (p=1.000 n=1+1)
    BigCacheSet-16     11.1MB/s ± 0%  11.1MB/s ± 0%   ~     (p=1.000 n=1+1)
    BigCacheGet-16     42.9MB/s ± 0%  40.1MB/s ± 0%   ~     (p=1.000 n=1+1)
    BigCacheSetGet-16  19.6MB/s ± 0%  17.9MB/s ± 0%   ~     (p=1.000 n=1+1)
    CacheSet-16        34.6MB/s ± 0%  34.8MB/s ± 0%   ~     (p=1.000 n=1+1)
    CacheGet-16        55.7MB/s ± 0%  65.0MB/s ± 0%   ~     (p=1.000 n=1+1)
    CacheSetGet-16     21.4MB/s ± 0%  20.5MB/s ± 0%   ~     (p=1.000 n=1+1)
    StdMapSet-16       3.45MB/s ± 0%  3.57MB/s ± 0%   ~     (p=1.000 n=1+1)
    StdMapGet-16       21.9MB/s ± 0%  21.9MB/s ± 0%   ~     (p=1.000 n=1+1)
    StdMapSetGet-16    1.67MB/s ± 0%  1.67MB/s ± 0%   ~     (all equal)
    SyncMapSet-16      1.33MB/s ± 0%  1.38MB/s ± 0%   ~     (p=1.000 n=1+1)
    SyncMapGet-16       108MB/s ± 0%   107MB/s ± 0%   ~     (p=1.000 n=1+1)
    SyncMapSetGet-16   32.7MB/s ± 0%  32.7MB/s ± 0%   ~     (p=1.000 n=1+1)
    
    name               old alloc/op   new alloc/op   delta
    SetBig-16             1.00B ± 0%     1.00B ± 0%   ~     (all equal)
    GetBig-16             1.00B ± 0%     1.00B ± 0%   ~     (all equal)
    BigCacheSet-16       6.30MB ± 0%    6.30MB ± 0%   ~     (p=1.000 n=1+1)
    BigCacheGet-16        556kB ± 0%     556kB ± 0%   ~     (p=1.000 n=1+1)
    BigCacheSetGet-16    5.18MB ± 0%    5.18MB ± 0%   ~     (p=1.000 n=1+1)
    CacheSet-16            571B ± 0%      572B ± 0%   ~     (p=1.000 n=1+1)
    CacheGet-16            572B ± 0%      572B ± 0%   ~     (all equal)
    CacheSetGet-16       1.15kB ± 0%    1.91kB ± 0%   ~     (p=1.000 n=1+1)
    StdMapSet-16          275kB ± 0%     275kB ± 0%   ~     (p=1.000 n=1+1)
    StdMapGet-16         2.57kB ± 0%    2.57kB ± 0%   ~     (p=1.000 n=1+1)
    StdMapSetGet-16       325kB ± 0%     325kB ± 0%   ~     (p=1.000 n=1+1)
    SyncMapSet-16        3.44MB ± 0%    3.44MB ± 0%   ~     (p=1.000 n=1+1)
    SyncMapGet-16          424B ± 0%      423B ± 0%   ~     (p=1.000 n=1+1)
    SyncMapSetGet-16     3.41MB ± 0%    3.41MB ± 0%   ~     (p=1.000 n=1+1)
    
    name               old allocs/op  new allocs/op  delta
    SetBig-16              0.00           0.00        ~     (all equal)
    GetBig-16              0.00           0.00        ~     (all equal)
    BigCacheSet-16         4.00 ± 0%      4.00 ± 0%   ~     (all equal)
    BigCacheGet-16         131k ± 0%      131k ± 0%   ~     (all equal)
    BigCacheSetGet-16      131k ± 0%      131k ± 0%   ~     (p=1.000 n=1+1)
    CacheSet-16            1.00 ± 0%      1.00 ± 0%   ~     (all equal)
    CacheGet-16            1.00 ± 0%      1.00 ± 0%   ~     (all equal)
    CacheSetGet-16         2.00 ± 0%      3.00 ± 0%   ~     (p=1.000 n=1+1)
    StdMapSet-16          65.5k ± 0%     65.5k ± 0%   ~     (all equal)
    StdMapGet-16           13.0 ± 0%      13.0 ± 0%   ~     (all equal)
    StdMapSetGet-16       65.5k ± 0%     65.5k ± 0%   ~     (all equal)
    SyncMapSet-16          263k ± 0%      263k ± 0%   ~     (all equal)
    SyncMapGet-16          13.0 ± 0%      13.0 ± 0%   ~     (all equal)
    SyncMapSetGet-16       262k ± 0%      262k ± 0%   ~     (all equal)
    

    benchcmp result

    benchmark                      old ns/op     new ns/op     delta
    BenchmarkSetBig-16             18645         18014         -3.38%
    BenchmarkGetBig-16             3910          3886          -0.61%
    BenchmarkBigCacheSet-16        5910041       5920862       +0.18%
    BenchmarkBigCacheGet-16        1526565       1634421       +7.07%
    BenchmarkBigCacheSetGet-16     6693952       7341088       +9.67%
    BenchmarkCacheSet-16           1892147       1883619       -0.45%
    BenchmarkCacheGet-16           1177109       1008929       -14.29%
    BenchmarkCacheSetGet-16        6121587       6393339       +4.44%
    BenchmarkStdMapSet-16          18968931      18382867      -3.09%
    BenchmarkStdMapGet-16          2999080       2995737       -0.11%
    BenchmarkStdMapSetGet-16       78671513      78627566      -0.06%
    BenchmarkSyncMapSet-16         49229967      47471593      -3.57%
    BenchmarkSyncMapGet-16         609224        612426        +0.53%
    BenchmarkSyncMapSetGet-16      4013008       4010546       -0.06%
    
    benchmark                      old MB/s     new MB/s     speedup
    BenchmarkSetBig-16             14059.39     14551.92     1.04x
    BenchmarkGetBig-16             69399.58     69813.42     1.01x
    BenchmarkBigCacheSet-16        11.09        11.07        1.00x
    BenchmarkBigCacheGet-16        42.93        40.10        0.93x
    BenchmarkBigCacheSetGet-16     19.58        17.85        0.91x
    BenchmarkCacheSet-16           34.64        34.79        1.00x
    BenchmarkCacheGet-16           55.68        64.96        1.17x
    BenchmarkCacheSetGet-16        21.41        20.50        0.96x
    BenchmarkStdMapSet-16          3.45         3.57         1.03x
    BenchmarkStdMapGet-16          21.85        21.88        1.00x
    BenchmarkStdMapSetGet-16       1.67         1.67         1.00x
    BenchmarkSyncMapSet-16         1.33         1.38         1.04x
    BenchmarkSyncMapGet-16         107.57       107.01       0.99x
    BenchmarkSyncMapSetGet-16      32.66        32.68        1.00x
    
    benchmark                      old allocs     new allocs     delta
    BenchmarkSetBig-16             0              0              +0.00%
    BenchmarkGetBig-16             0              0              +0.00%
    BenchmarkBigCacheSet-16        4              4              +0.00%
    BenchmarkBigCacheGet-16        131072         131072         +0.00%
    BenchmarkBigCacheSetGet-16     131078         131077         -0.00%
    BenchmarkCacheSet-16           1              1              +0.00%
    BenchmarkCacheGet-16           1              1              +0.00%
    BenchmarkCacheSetGet-16        2              3              +50.00%
    BenchmarkStdMapSet-16          65538          65538          +0.00%
    BenchmarkStdMapGet-16          13             13             +0.00%
    BenchmarkStdMapSetGet-16       65547          65547          +0.00%
    BenchmarkSyncMapSet-16         262589         262589         +0.00%
    BenchmarkSyncMapGet-16         13             13             +0.00%
    BenchmarkSyncMapSetGet-16      262188         262188         +0.00%
    
    benchmark                      old bytes     new bytes     delta
    BenchmarkSetBig-16             1             1             +0.00%
    BenchmarkGetBig-16             1             1             +0.00%
    BenchmarkBigCacheSet-16        6302976       6302970       -0.00%
    BenchmarkBigCacheGet-16        556265        556263        -0.00%
    BenchmarkBigCacheSetGet-16     5184662       5184654       -0.00%
    BenchmarkCacheSet-16           571           572           +0.18%
    BenchmarkCacheGet-16           572           572           +0.00%
    BenchmarkCacheSetGet-16        1145          1908          +66.64%
    BenchmarkStdMapSet-16          274693        274704        +0.00%
    BenchmarkStdMapGet-16          2566          2567          +0.04%
    BenchmarkStdMapSetGet-16       324890        324849        -0.01%
    BenchmarkSyncMapSet-16         3439036       3439029       -0.00%
    BenchmarkSyncMapGet-16         424           423           -0.24%
    BenchmarkSyncMapSetGet-16      3411022       3411018       -0.00%
    


    The performance increases a bit. Basically, this pull request updates the xxhash package to the latest version and uses it.

    Whether to merge or not is up to you.

    opened by zchee 6
  • Is it thread-safe? But my program shows that it is not thread-safe

    //result not correct
    func TestName1(t *testing.T) {
    
    	c := fastcache.New(1024 * 1024)
    
    	wg := sync.WaitGroup{}
    
    	for i := 0; i < 1000; i++ {
    
    		wg.Add(1)
    		go func(i2 int) {
    
    			result := c.Get(nil, []byte("k"))
    
    			r, _ := strconv.Atoi(string(result))
    			r++
    			c.Set([]byte(`k`), []byte(strconv.Itoa(r)))
    
    			wg.Done()
    		}(i)
    	}
    
    	wg.Wait()
    
    	result := c.Get(nil, []byte("k"))
    	log.Println(result)
    	log.Println(string(result))
    
    }
    
    //result is correct 
    func TestName2(t *testing.T) {
    
    	c := fastcache.New(1024 * 1024)
    
    	wg := sync.WaitGroup{}
    
    	mutex := sync.Mutex{}
    
    	for i := 0; i < 1000; i++ {
    
    		wg.Add(1)
    		go func(i2 int) {
    
    			mutex.Lock()
    			result := c.Get(nil, []byte("k"))
    
    			r, _ := strconv.Atoi(string(result))
    			r++
    			c.Set([]byte(`k`), []byte(strconv.Itoa(r)))
    
    			mutex.Unlock()
    
    			wg.Done()
    		}(i)
    	}
    
    	wg.Wait()
    
    	result := c.Get(nil, []byte("k"))
    	log.Println(result)
    	log.Println(string(result))
    
    }
    
    
    
    question 
    opened by zhang555 3
  • 2 Questions on fastcache

    There is no cache expiration. Entries are evicted from the cache only on cache size overflow. Entry deadline may be stored inside the value in order to implement cache expiration.

    1. does this mean FIFO, or are ALL entries evicted from the cache?

    2. Are 64KB chunks designed for all value sizes (e.g. 512B values, 1KB values, etc.) or purely for 64KB values? E.g. if my values are mostly 512 bytes in size, should I manually lower the chunk size to 512B inside the program? And if so, should I also change the values for SetBig (the distinct API)?
    question 
    opened by hiqinternational 3
  • What is the purpose of "dst" parameter in Get and HasGet?

    Hi!

    I'm having trouble wrapping up in my head, what is the purpose of having a dst parameter in the Get and HasGet methods.

    As far as I can tell, the returned value is being prepended by the dst byte slice. But what is a use case for such a logic?

    I was originally expecting this dst to be a destination buffer, where the value bytes would be written (not appended), whenever a match is found, thus saving on memory allocations, as the slice size won't change. That's where I got confused.

    I would appreciate any hint on how the dst is supposed to be used, and maybe you already have plans to introduce a Get API with a destination buffer parameter for cases when the (max) value size is known in advance.

    Thanks!

    opened by zablvit 2
  • API shortcoming on nil values

    We've been integrating fastcache into our project and @holiman found an interesting corner case that fastcache handles correctly internally, but fails to properly surface it to the user.

    In our specific use case, values associated to keys can be empty. So, what we would like to do is insert key->nil mappings and be able to retrieve them. This works half way:

    cache := fastcache.New(500 * 1024)
    
    fmt.Println(cache.Has([]byte("a")))
    cache.Set([]byte("a"), nil)
    fmt.Println(cache.Has([]byte("a")))
    

    ^ correctly returns false and then true.


    However, if I use Get, things don't look so well:

    cache := fastcache.New(500 * 1024)
    
    fmt.Println(cache.Get(nil, []byte("a")) == nil)
    cache.Set([]byte("a"), nil)
    fmt.Println(cache.Get(nil, []byte("a")) == nil)
    

    ^ returns nil in both cases. You could even set []byte{} and both would still return nil.


    Why is this an issue? Because we cannot differentiate between a value being inside-the-cache-and-empty, or a value missing-from-the-cache requiring database access. The current API only allows using Has-then-Get to cover this, but that's not thread-safe anymore.

    The fault is a combination of https://github.com/VictoriaMetrics/fastcache/blob/master/fastcache.go#L386, because Go's append will keep dst as nil if appending an empty slice; and https://github.com/VictoriaMetrics/fastcache/blob/master/fastcache.go#L161, because it discards the found flag and only returns dst.

    One solution would be to change the append line so it special cases the scenario where both dst and the found value is nil, but that won't really work if the user does supply a non-nil dst.

    Our best solution until now is to add a second Get method that returns not only the cached value, but also the flag whether it was found or not. This way we don't break the API.


    Does this seem good to you? We can open a PR if you want.

    enhancement 
    opened by karalabe 2
  • What happens when there are more entries than available on fastcache.New([maxBytes])?

    I know it's in circular byte buffer from here https://github.com/VictoriaMetrics/fastcache/issues/14

    Can anyone explain what happens when there are more entries than available on fastcache.New([maxBytes])?

    Just curious.

    question 
    opened by hiqsociety 2
  • just curious over SetBig

    Why not add SetBig here,

    if len(k) >= (1<<16) || len(v) >= (1<<16) {
    	SetBig(k, v) // what's wrong with adding the call here instead of needing a separate SetBig?
    }
    

    Because I don't know if my cache value will exceed 64kb so I'll do :

    if len(k) >= (1<<16) || len(v) >= (1<<16) {
    	fastcache.SetBig(k, v)
    } else {
    	fastcache.Set(k, v)
    }
    

    Any problems with the above? With reference to function below:

    func (b *bucket) Set(k, v []byte, h uint64) {
    	setCalls := atomic.AddUint64(&b.setCalls, 1)
    	if setCalls%(1<<14) == 0 {
    		b.Clean()
    	}

    	if len(k) >= (1<<16) || len(v) >= (1<<16) {
    		// Too big key or value - its length cannot be encoded
    		// with 2 bytes (see below). Skip the entry.
    		return
    	}
    
    question 
    opened by hiqsociety 2
  • Last question regarding entry of cache with same key, will the key be overwritten or as in FIFO, pushed backward?

    Last question regarding an entry of the cache with the same key, e.g. Set(k, v): will the key be

    1. deleted first and then added, or
    2. as in FIFO, pushed backwards?

    If it's 1, I'll leave it as it is. If it's 2, I would manually use Del(k) first, because I want my other old entries to be reachable via Get(k) and not evicted when the cache is overpopulated.

    question 
    opened by hiqinternational 2
  • Questions about how to check the cache usage and about specifying the cache size

    Hi Team,

    I've developed an application and used FastCache as an in-memory data store. So far, everything works fine. One thing I'm struggling to figure out is how to see the cache usage in megabytes.

    So,

    1. Is there a way to get the cache usage (not the allocation) and/or the free quota?
    2. If it's not possible at the moment, would such a feature be made available in the future?

    There's some confusion in the documentation as well, when creating a cache using fastcache.New(maxBytes) (https://pkg.go.dev/github.com/VictoriaMetrics/fastcache#New)

    1. If we need a cache of 64MB, should we specify maxBytes as 64 or 64000000?
    opened by para-d 1
  • Vulnerability: CVE-2022-29526

    A vulnerability has been discovered in the library.

    Go before 1.17.10 and 1.18.x before 1.18.2 has Incorrect Privilege Assignment. When called with a non-zero flags parameter, the Faccessat function could incorrectly report that a file is accessible.

    Link

    opened by adolsalamanca 0
  • Clarification: k and v contents may be modified after returning from Set.

    Several methods have a similar descriptor to the following in the docs:

    k and v contents may be modified after returning from Set.

    Upon first reading, I assumed that the Set() method would modify the k and v byte arrays and copies should be passed into the fastcache methods.

    I couldn't believe it and only after spending some time digging in the source did I realize that these sentences probably meant that the fastcache methods themselves stored copies of the k and v arrays and that you didn't have to worry about reusing the originals.

    Just a heads up, I found the particular wording confusing, perhaps consider a slightly different wording.

    opened by zcrypt0 1
  • [Feature] Notify when data evicted

    Use a channel to notify when entries are evicted.

    Hoped-for usage:

    cache := fastcache.New(1024 * 1024 * 1024)
    evictChan := make(chan []byte)
    cache.NotifyEvicted(evictChan) // proposed API; does not exist in fastcache yet

    select {
    case bt := <-evictChan:
        // do something with the evicted entry bt
    }
    
    opened by 838239178 2
  • There are some confusions in the process of reading the source code

    b.gen++
    if b.gen&((1<<genSizeBits)-1) == 0 {
        b.gen++
    }
    

    Hello, developers of fastcache. I am learning how this library works by reading the source code, and I have some confusion about the code of the Get() function in fastcache.go, line 341.

    In my opinion, the gen field in bucket represents the number of reset rounds the chunk has experienced. If gen reaches the maximum 1<<genSizeBits, shouldn't it be reset to 1 rather than simply incremented with gen++?

    Is my understanding correct?

    opened by spemed 0
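    For readers puzzling over the same snippet: only the low genSizeBits bits of gen are stored in each index entry, and a masked value of 0 is presumably reserved so that a stored generation never matches an unused slot. The extra gen++ therefore just skips 0 when the masked counter wraps; it has the same effect as "reset to 1" while keeping the full counter monotonic. A toy sketch (using a small bit width for brevity, not fastcache's actual constants):

```go
package main

import "fmt"

func main() {
	// Toy reproduction of the gen wraparound logic: gen lives in the low
	// genSizeBits bits, and the masked value 0 is reserved, so the
	// increment skips it. 2 bits are used here to make the wrap visible.
	const genSizeBits = 2
	const mask = (1 << genSizeBits) - 1 // 0b11

	gen := uint64(0)
	for i := 0; i < 8; i++ {
		gen++
		if gen&mask == 0 {
			gen++ // skip the reserved 0 value
		}
		fmt.Println(gen & mask) // prints 1 2 3 1 2 3 1 2 -- never 0
	}
}
```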
Owner: VictoriaMetrics (The Aspiring Monitoring Solution)
This provides the lru package which implements a fixed-size thread safe LRU cache

golang-lru This provides the lru package which implements a fixed-size thread sa

byx 0 Dec 22, 2021
Thread-safe LRU cache with permanency and context-based expiration

go-wlru Thread-safe LRU cache with permanency and context-based expiration Operational Complexity (Time) Operation Best Average Worst Access Θ(1) Θ(1)

Jason Crawford 1 Mar 7, 2022
Cache library for golang. It supports expirable Cache, LFU, LRU and ARC.

GCache Cache library for golang. It supports expirable Cache, LFU, LRU and ARC. Features Supports expirable Cache, LFU, LRU and ARC. Goroutine safe. S

Jun Kimura 2.2k Dec 30, 2022
cyhone 149 Dec 28, 2022
Package cache is a middleware that provides the cache management for Flamego.

cache Package cache is a middleware that provides the cache management for Flamego. Installation The minimum requirement of Go is 1.16. go get github.

Flamego 11 Nov 9, 2022
A mem cache based on other popular caches, adding the following features

memcache: a mem cache based on other popular caches, adding the following features: lazy load (use expired data and reload it asynchronously), singlefligh

zhq 1 Oct 28, 2021
Cache - A simple cache implementation

Cache A simple cache implementation LRU Cache An in memory cache implementation

Stanislav Petrashov 1 Jan 25, 2022
Gin-cache - Gin cache middleware with golang

Gin-cache - Gin cache middleware with golang

Anson 37 Nov 28, 2022
Fast key-value cache written on pure golang

GoCache Simple in-memory key-value cache with default or specific expiration time. Install go get github.com/DylanMrr/GoCache Features Key-value stor

Dmitriy Eginov 1 Nov 16, 2021
Ristretto - A fast, concurrent cache library built with a focus on performance and correctness

Ristretto Ristretto is a fast, concurrent cache library built with a focus on pe

Koichi Shiraishi 2 Aug 21, 2022
Concurrency-safe Go caching library with expiration capabilities and access counters

cache2go Concurrency-safe golang caching library with expiration capabilities. Installation Make sure you have a working Go environment (Go 1.2 or hig

Christian Muehlhaeuser 1.9k Jan 1, 2023
LevelDB style LRU cache for Go, support non GC object.

Go language QQ groups: 102319854, 1055927514. The Wa Programming Language ("凹", pronounced "Wa"): https://github.com/wa-lang/wa LRU Cache Install go get github.com/chai2010/c

chai2010 11 Jul 5, 2020
groupcache is a caching and cache-filling library, intended as a replacement for memcached in many cases.

groupcache Summary groupcache is a distributed caching and cache-filling library, intended as a replacement for a pool of memcached nodes in many case

Go 11.9k Dec 31, 2022
☔️ A complete Go cache library that brings you multiple ways of managing your caches

Gocache Guess what is Gocache? a Go cache library. This is an extendable cache library that brings you a lot of features for caching data. Overview He

Vincent Composieux 1.6k Jan 1, 2023
An in-memory cache library for golang. It supports multiple eviction policies: LRU, LFU, ARC

GCache Cache library for golang. It supports expirable Cache, LFU, LRU and ARC. Features Supports expirable Cache, LFU, LRU and ARC. Goroutine safe. S

Jun Kimura 318 May 31, 2021
Efficient cache for gigabytes of data written in Go.

BigCache Fast, concurrent, evicting in-memory cache written to keep big number of entries without impact on performance. BigCache keeps entries on hea

Allegro Tech 6.2k Dec 30, 2022
An in-memory key:value store/cache (similar to Memcached) library for Go, suitable for single-machine applications.

go-cache go-cache is an in-memory key:value store/cache similar to memcached that is suitable for applications running on a single machine. Its major

Patrick Mylund Nielsen 6.8k Dec 29, 2022
First community OSS project about an in-memory cache.

GoKey Project concept: a distributed key-value database system, in the form of an in-memory cache. Specifications: To bring together inf

Gophers LATAM 19 Aug 6, 2022