
Overview

gin-cache



A high-performance Gin middleware for caching HTTP responses. Compared to gin-contrib/cache, it offers a huge performance improvement.

Features

  • Significantly faster than gin-contrib/cache.
  • Supports caching responses in local memory or in Redis.
  • Offers a way to customize the cache key of a request.
  • Uses sync.Pool to reuse frequently allocated objects.
  • Uses singleflight to prevent cache breakdown when a hotspot key expires.

How To Use

Install

go get github.com/chenyahui/gin-cache

Example

Cache In Local Memory

package main

import (
	"time"

	"github.com/chenyahui/gin-cache"
	"github.com/chenyahui/gin-cache/persist"
	"github.com/gin-gonic/gin"
)

func main() {
	app := gin.New()

	app.GET("/hello",
		cache.CacheByPath(cache.Options{
			CacheDuration:       5 * time.Second,
			CacheStore:          persist.NewMemoryStore(1 * time.Minute),
			DisableSingleFlight: true,
		}),
		func(c *gin.Context) {
			time.Sleep(200 * time.Millisecond)
			c.String(200, "hello world")
		},
	)
	if err := app.Run(":8080"); err != nil {
		panic(err)
	}
}

Cache In Redis

package main

import (
	"time"

	"github.com/chenyahui/gin-cache"
	"github.com/chenyahui/gin-cache/persist"
	"github.com/gin-gonic/gin"
	"github.com/go-redis/redis/v8"
)

func main() {
	app := gin.New()

	redisStore := persist.NewRedisStore(redis.NewClient(&redis.Options{
		Network: "tcp",
		Addr:    "127.0.0.1:6379",
	}))

	app.GET("/hello",
		cache.CacheByPath(cache.Options{
			CacheDuration: 5 * time.Second,
			CacheStore:    redisStore,
		}),
		func(c *gin.Context) {
			c.String(200, "hello world")
		},
	)
	if err := app.Run(":8080"); err != nil {
		panic(err)
	}
}

Benchmark

wrk -c 500 -d 1m -t 5 http://127.0.0.1:8080/hello

MemoryStore

MemoryStore QPS (benchmark chart)

RedisStore

RedisStore QPS (benchmark chart)

Comments
  • [MemoryStore] Allow specifying max cache size to prevent OOM

    In a production environment, cached data may keep growing quickly. MemoryStore should allow specifying a maximum cache size, or at least a maximum cache length (number of cache keys), and prune entries when the limit is reached.

    opened by huantt 6
  • Cache strategy based on some header

    I would like to have a different cache key based on a header that users set. Is WithCacheStrategyByRequest() intended for this? I am having trouble implementing this function; it looks like it is being overwritten in this library.

    This is my code

    var cacheStrategy cache.Option = cache.WithCacheStrategyByRequest(func(c *gin.Context) (bool, cache.Strategy) {
    	var key string = "yes"

    	if 1 < 2 {
    		key = "no"
    	}

    	return true, cache.Strategy{
    		CacheKey: key,
    	}
    })

    And I use it in CacheByRequestURI() but it never gets executed.

    help wanted 
    opened by allesan 5
  • Question about in-memory store choice

    What was behind the decision in favour of jellydator/ttlcache over other libraries with high concurrency and unlimited capacity such as go-cache or bigcache for the in-memory store?

    Also, are you planning to move to ttlcache v3?

    opened by akolybelnikov 3
  • Problem with value types in Set

    Currently, Set runs the value through a serialization step and stores it as binary. If the requirement is to store a plain string, integer, or float, that does not work: for example, storing the string asd currently stores \x0C\x00\x06asd. I changed the original function to a SetByte variant, so that Set does no type conversion on the value, which meets my need. Is there another good way to implement this?

    func (store *RedisStore) Set(key string, value interface{}, expire time.Duration) error {
    	ctx := context.TODO()
    	return store.RedisClient.Set(ctx, key, value, expire).Err()
    }
    
    question 
    opened by ResistanceTo 2
  • WriteHeader should only be called after Header().Write()

    In this part of the code: https://github.com/chenyahui/gin-cache/blob/main/cache.go#L179-L185, c.Writer.WriteHeader(respCache.Status) is called before c.Writer.Header().Set(key, val).

    However, according to the net/http documentation, Writer.Header().Set() only takes effect if it is called before Writer.WriteHeader().

    type ResponseWriter interface {
    	// Header returns the header map that will be sent by
    	// WriteHeader. The Header map also is the mechanism with which
    	// Handlers can set HTTP trailers.
    	//
    	// Changing the header map after a call to WriteHeader (or
    	// Write) has no effect unless the modified headers are
    	// trailers.
    	//
    	// There are two ways to set Trailers. The preferred way is to
    	// predeclare in the headers which trailers you will later
    	// send by setting the "Trailer" header to the names of the
    	// trailer keys which will come later. In this case, those
    	// keys of the Header map are treated as if they were
    	// trailers. See the example. The second way, for trailer
    	// keys not known to the Handler until after the first Write,
    	// is to prefix the Header map keys with the TrailerPrefix
    	// constant value. See TrailerPrefix.
    	//
    	// To suppress automatic response headers (such as "Date"), set
    	// their value to nil.
    	Header() Header
    
    	// Write writes the data to the connection as part of an HTTP reply.
    	//
    	// If WriteHeader has not yet been called, Write calls
    	// WriteHeader(http.StatusOK) before writing the data. If the Header
    	// does not contain a Content-Type line, Write adds a Content-Type set
    	// to the result of passing the initial 512 bytes of written data to
    	// DetectContentType. Additionally, if the total size of all written
    	// data is under a few KB and there are no Flush calls, the
    	// Content-Length header is added automatically.
    	//
    	// Depending on the HTTP protocol version and the client, calling
    	// Write or WriteHeader may prevent future reads on the
    	// Request.Body. For HTTP/1.x requests, handlers should read any
    	// needed request body data before writing the response. Once the
    	// headers have been flushed (due to either an explicit Flusher.Flush
    	// call or writing enough data to trigger a flush), the request body
    	// may be unavailable. For HTTP/2 requests, the Go HTTP server permits
    	// handlers to continue to read the request body while concurrently
    	// writing the response. However, such behavior may not be supported
    	// by all HTTP/2 clients. Handlers should read before writing if
    	// possible to maximize compatibility.
    	Write([]byte) (int, error)
    
    	// WriteHeader sends an HTTP response header with the provided
    	// status code.
    	//
    	// If WriteHeader is not called explicitly, the first call to Write
    	// will trigger an implicit WriteHeader(http.StatusOK).
    	// Thus explicit calls to WriteHeader are mainly used to
    	// send error codes.
    	//
    	// The provided code must be a valid HTTP 1xx-5xx status code.
    	// Only one header may be written. Go does not currently
    	// support sending user-defined 1xx informational headers,
    	// with the exception of 100-continue response header that the
    	// Server sends automatically when the Request.Body is read.
    	WriteHeader(statusCode int)
    }
    
    question 
    opened by turfaa 2
  • Support go-redis/redis/v9 client

    Cannot use 'global.App.Redis' (type *"github.com/go-redis/redis/v9".Client) as the type *"github.com/go-redis/redis/v8".Client

    I am using the v9 version of the package locally, but this library has no corresponding store.

    feature 
    opened by TeslaLyon 1
  • Exception using both gin-cache and nanmu42/gzip middleware

    package main
    
    import (
    	"strings"
    	"time"
    
    	cache "github.com/chenyahui/gin-cache"
    	"github.com/chenyahui/gin-cache/persist"
    	"github.com/gin-gonic/gin"
    	"github.com/nanmu42/gzip"
    )
    
    func main() {
    	app := gin.New()
    	app.Use(gzip.DefaultHandler().Gin)
    	memoryStore := persist.NewMemoryStore(1 * time.Minute)
    
    	body := strings.Repeat("hello world", 100)
    	app.GET("/hello",
    		cache.CacheByRequestURI(memoryStore, 2*time.Second),
    		func(c *gin.Context) {
    			c.String(200, body)
    		},
    	)
    
    	if err := app.Run(":8080"); err != nil {
    		panic(err)
    	}
    }
    

    Run multiple requests:

    curl --compressed 127.0.0.1:8080/hello -v
    curl --compressed 127.0.0.1:8080/hello -v
    curl --compressed 127.0.0.1:8080/hello -v
    

    You should get the following error:

    # curl --compressed 127.0.0.1:8080/hello -v
    *   Trying 127.0.0.1:8080...
    * TCP_NODELAY set
    * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
    > GET /hello HTTP/1.1
    > Host: 127.0.0.1:8080
    > User-Agent: curl/7.68.0
    > Accept: */*
    > Accept-Encoding: deflate, gzip, br
    >
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 200 OK
    < Content-Encoding: gzip
    < Content-Type: text/plain; charset=utf-8
    < Vary: Accept-Encoding
    < Date: Sun, 04 Sep 2022 03:18:33 GMT
    < Content-Length: 1100
    <
    * Error while processing content unencoding: incorrect header check
    * Closing connection 0
    curl: (61) Error while processing content unencoding: incorrect header check
    
    opened by fufuok 1
  • feat: option to cache until next hour

    It would be great to have an option for a dynamic caching time.

    E.g. I would like to cache until the next hour: if a request comes at 9:01 AM, I'd like the cache to expire at 10 AM.

    good first issue question 
    opened by l3uddz 1
  • fix: module name declares

    I get:

    go get: github.com/ReneKroon/ttlcache/[email protected]: parsing go.mod:
            module declares its path as: github.com/jellydator/ttlcache/v2
                    but was required as: github.com/ReneKroon/ttlcache/v2
    

    The ReneKroon repo now declares its module path as github.com/jellydator/ttlcache/v2.

    Please check this.

    good first issue 
    opened by hare85 1
  • Is there a way to invalidate cache?

    For example, I put the cache middleware on a GET endpoint, but there is also a POST endpoint that is supposed to update the results fetched from the GET endpoint. Whenever I call the POST endpoint, I would like any existing cache for the GET endpoint to be invalidated immediately, so I can return the most up-to-date results.

    Is there a way to accommodate this mechanism into gin-cache?

    question 
    opened by habibrosyad 1
  • When using singleflight, the cache should be set inside the singleflight call

    When using singleflight, the cache write should happen inside the singleflight call. Under high concurrency, when concurrent requests share the same key, the store gets set repeatedly, and I am not sure your Redis can withstand that. Also, only successful responses should be cached: you cannot know how downstream business logic fails, and error responses should not be cached. In addition, as a library it would be best to expose logging through an interface, with an option so users can choose a suitable logging component. Combining your library and the official one, I made a correspondingly simplified and adapted version; it happened to be useful for a project, so I also published a library, gin-cache.

    bug 
    opened by thinkgos 1
Releases(v1.7.1)