entcache - An experimental cache driver for ent with a variety of storage options

entcache

An experimental cache driver for ent with a variety of storage options, such as:

  1. A context.Context-based cache. Usually attached to an HTTP request.

  2. A driver-level cache embedded in the ent.Client. Used to share cache entries on the process level.

  3. A remote cache. For example, a Redis database that provides a persistence layer for storing and sharing cache entries between multiple processes.

  4. A cache hierarchy, or multi-level cache, that allows structuring the cache in a hierarchical way. For example, a 2-level cache composed of an LRU cache in the application memory and a remote-level cache backed by a Redis database.

Quick Introduction

First, go get the package using the following command.

go get ariga.io/entcache

After installing entcache, you can easily add it to your project with the snippet below:

// Open the database connection.
db, err := sql.Open(dialect.SQLite, "file:ent?mode=memory&cache=shared&_fk=1")
if err != nil {
	log.Fatal("opening database", err)
}
// Decorates the sql.Driver with entcache.Driver.
drv := entcache.NewDriver(db)
// Create an ent.Client.
client := ent.NewClient(ent.Driver(drv))

// Tell the entcache.Driver to skip the caching layer
// when running the schema migration.
if err := client.Schema.Create(entcache.Skip(ctx)); err != nil {
	log.Fatal("running schema migration", err)
}

// Run queries.
if _, err := client.User.Get(ctx, id); err != nil {
	log.Fatal("querying user", err)
}
// The query below is cached.
if _, err := client.User.Get(ctx, id); err != nil {
	log.Fatal("querying user", err)
}

However, you need to choose the cache storage carefully before adding entcache to your project. The section below covers the different approaches provided by this package.

High Level Design

On a high level, entcache.Driver decorates the Query method of the given driver, and for each call, generates a cache key (i.e. a hash) from its arguments (i.e. statement and parameters). After the query is executed, the driver records the raw values of the returned rows (sql.Rows) and stores them in the cache store under the generated key. This means that the recorded rows will be returned the next time the query is executed, unless the entry was evicted by the cache store.
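
For illustration, here is a minimal sketch of such a key derivation. It uses a simple FNV-based hash and a hypothetical cacheKey helper; the actual driver derives its keys with its own (configurable) hash function.

package main

import (
	"fmt"
	"hash/fnv"
)

// cacheKey derives a key from a statement and its parameters. Identical
// statements with identical parameters produce identical keys.
func cacheKey(query string, args ...interface{}) uint64 {
	h := fnv.New64a()
	fmt.Fprint(h, query)
	for _, a := range args {
		fmt.Fprintf(h, "/%v", a)
	}
	return h.Sum64()
}

func main() {
	k1 := cacheKey("SELECT * FROM `users` WHERE `id` = ?", 1)
	k2 := cacheKey("SELECT * FROM `users` WHERE `id` = ?", 1)
	fmt.Println(k1 == k2) // true - the second execution can be served from the cache.
}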

The package provides a variety of options to configure the TTL of the cache entries, control the hash function, provide custom and multi-level cache stores, and evict or skip cache entries. See the full documentation at go.dev/entcache.

Caching Levels

entcache provides several builtin cache levels:

  1. A context.Context-based cache. Usually attached to a request; it does not work with other cache levels. It is used to eliminate duplicate queries that are executed by the same request.

  2. A driver-level cache used by the ent.Client. An application usually creates a driver per database, and therefore, we treat it as a process-level cache.

  3. A remote cache. For example, a Redis database that provides a persistence layer for storing and sharing cache entries between multiple processes. A remote cache layer is resistant to application deployment changes or failures, and allows reducing the number of identical queries executed on the database by different processes.

  4. A cache hierarchy, or multi-level cache, that allows structuring the cache in a hierarchical way. The hierarchy of cache stores is mostly based on access speeds and cache sizes. For example, a 2-level cache composed of an LRU cache in the application memory and a remote-level cache backed by a Redis database.

Context Level Cache

The ContextLevel option configures the driver to work with a context.Context-level cache. The context is usually attached to a request (e.g. *http.Request) and is not available in multi-level mode. When this option is used as a cache store, the attached context.Context carries an LRU cache (which can be configured differently), and the driver stores and searches entries in the LRU cache when queries are executed.

This option is ideal for applications that require strong consistency, but still want to avoid executing duplicate database queries on the same request. For example, given the following GraphQL query:

query($ids: [ID!]!) {
    nodes(ids: $ids) {
        ... on User {
            id
            name
            todos {
                id
                owner {
                    id
                    name
                }
            }
        }
    }
}

A naive solution for resolving the above query will execute 1 query for getting the N users, another N queries for getting the todos of each user, and a query for each todo item for getting its owner (read more about the N+1 Problem).

However, Ent provides a unique approach for resolving such queries (read more on the Ent website), and therefore only 3 queries will be executed in this case: 1 for getting the N users, 1 for getting the todo items of all users, and 1 for getting the owners of all todo items.

With entcache, the number of queries may be reduced to 2, as the first and last queries are identical (see code example).

[diagram: context-level-cache]
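
To make the effect concrete, below is a minimal sketch assuming a driver created with entcache.ContextLevel() (as configured in the next section); ownerID stands in for any entity ID, and client is the ent.Client from the earlier snippets. Two identical queries issued on the same request context hit the database only once.

ctx := entcache.NewContext(context.Background())
// The first call executes the query against the database and records the rows.
owner, err := client.User.Get(ctx, ownerID)
if err != nil {
	log.Fatal("querying owner", err)
}
// The identical query below is served from the request-level cache.
cached, err := client.User.Get(ctx, ownerID)
if err != nil {
	log.Fatal("querying owner", err)
}
_, _ = owner, cached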

Usage In GraphQL

In order to instantiate an entcache.Driver in ContextLevel mode and use it in the generated ent.Client, use the following configuration.

db, err := sql.Open(dialect.SQLite, "file:ent?mode=memory&cache=shared&_fk=1")
if err != nil {
	log.Fatal("opening database", err)
}
drv := entcache.NewDriver(db, entcache.ContextLevel())
client := ent.NewClient(ent.Driver(drv))

Then, when a GraphQL query hits the server, we wrap the request context.Context with an entcache.NewContext.

srv.AroundResponses(func(ctx context.Context, next graphql.ResponseHandler) *graphql.Response {
	if op := graphql.GetOperationContext(ctx).Operation; op != nil && op.Operation == ast.Query {
		ctx = entcache.NewContext(ctx)
	}
	return next(ctx)
})

That's it! Your server is ready to use entcache with GraphQL, and a full server example exists in examples/ctxlevel.

Middleware Example

An example of using the common middleware pattern in Go for wrapping the request context.Context with entcache.NewContext for GET requests.

srv.Use(func(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodGet {
			r = r.WithContext(entcache.NewContext(r.Context()))
		}
		next.ServeHTTP(w, r)
	})
})

Driver Level Cache

A driver-level cache stores the cache entries on the ent.Client. An application usually creates a driver per database (i.e. sql.DB), and therefore we treat it as a process-level cache. The default cache storage for this option is an LRU cache with no limit and no TTL for its entries, but it can be configured differently.

[diagram: driver-level-cache]

Create a default cache driver, with no limit and no TTL:

db, err := sql.Open(dialect.SQLite, "file:ent?mode=memory&cache=shared&_fk=1")
if err != nil {
	log.Fatal("opening database", err)
}
drv := entcache.NewDriver(db)
client := ent.NewClient(ent.Driver(drv))

Set the TTL to 1s:

drv := entcache.NewDriver(db, entcache.TTL(time.Second))
client := ent.NewClient(ent.Driver(drv))

Limit the cache to 128 entries and set the TTL to 1s:

drv := entcache.NewDriver(
    db,
    entcache.TTL(time.Second),
    entcache.Levels(entcache.NewLRU(128)),
)
client := ent.NewClient(ent.Driver(drv))

Remote Level Cache

A remote-based level cache is used to share cached entries between multiple processes. For example, a Redis database. A remote cache layer is resistant to application deployment changes or failures, and allows reducing the number of identical queries executed on the database by different processes. This option plays nicely with the multi-level option below.
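
Below is a minimal sketch of a remote-only configuration. It assumes the go-redis client used in the multi-level example below, and that db and ctx are set up as in the earlier snippets:

rdb := redis.NewClient(&redis.Options{
    Addr: ":6379",
})
if err := rdb.Ping(ctx).Err(); err != nil {
    log.Fatal(err)
}
drv := entcache.NewDriver(
    db,
    // A relatively short TTL keeps shared entries from going stale.
    entcache.TTL(time.Second),
    entcache.Levels(entcache.NewRedis(rdb)),
)
client := ent.NewClient(ent.Driver(drv))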

Multi Level Cache

A cache hierarchy, or multi-level cache, allows structuring the cache in a hierarchical way. The hierarchy of cache stores is mostly based on access speeds and cache sizes. For example, a 2-level cache composed of an LRU cache in the application memory and a remote-level cache backed by a Redis database.

[diagram: multi-level cache]

rdb := redis.NewClient(&redis.Options{
    Addr: ":6379",
})
if err := rdb.Ping(ctx).Err(); err != nil {
    log.Fatal(err)
}
drv := entcache.NewDriver(
    db,
    entcache.TTL(time.Second),
    entcache.Levels(
        entcache.NewLRU(256),
        entcache.NewRedis(rdb),
    ),
)
client := ent.NewClient(ent.Driver(drv))

Future Work

There are a few features we are working on, and wish to work on, but need help from the community to design them properly. If you are interested in one of the tasks or features below, do not hesitate to open an issue, or start a discussion on GitHub or in the Ent Slack channel.

  1. Add a Memcache implementation for a remote-level cache.
  2. Support for smart eviction mechanism based on SQL parsing.
Comments
  • NATS Jetstream kv Remote Cache

    I am looking to change from Redis to NATS KV.

    NATS KV is a globally distributed, fault-tolerant KV store that is new and built into NATS.

    I have been using it instead of Redis. So far it's been awesome. It supports TTL and purging.

    It might be interesting to explore it as a “driver” also.

    https://nats.io/blog/kv-cli/ has an explanation and demo video.

    The NATS CLI works out of the box with it: https://github.com/nats-io/natscli

    The Go NATS client also supports it: https://github.com/nats-io/nats.go

    The NATS server needs no specific configuration to work with KV. It just works: https://github.com/nats-io/nats-server

    The security model is based on JWT: https://github.com/nats-io/nsc

    I can get something set up to explore integration if there is any interest?

    opened by gedw99 5
  • cached data is changed by other query

    A developer told me there seemed to be a bug in entcache, and I tried to reproduce it:

    https://github.com/tsingsun/entbug-entcache

    I don't use the docker compose file, and this issue occurs with MySQL 7 and 8 in my case.

    Please run TestBugMySQL in bug_test.go.

    test code

    func test(t *testing.T, client *ent.Client) {
        .........
        for i := 0; i < 2; i++ {
            datas, err := client.SecurityPosition.Query().Where(securityposition.AccountIDIn(3)).
                Order(ent.Asc(securityposition.FieldProjectID, securityposition.FieldProductID)).
                All(context.Background())
            if err != nil {
                t.FailNow()
            }
            // On the first query, the cached data is correct.
            if datas[0].ID == 8470 && *datas[0].MaterialNo != "688280.SH" {
                t.Error(i, *datas[0].MaterialNo)
            }
            if datas[1].ID == 8471 && *datas[1].MaterialNo != "300472.SZ" {
                t.Error(i, *datas[1].MaterialNo)
            }
            // When the following query is executed, datas[0].MaterialNo in the cache is changed.
            client.SecurityJournal.Query().
                Where(securityjournal.AccountIDIn(3),
                    securityjournal.ChangeType("2"), securityjournal.IsDayBooking("Y"),
                ).Order(ent.Desc(securityjournal.FieldID)).
                AllX(context.Background())
        }
    }

    test result

    === RUN   TestBugMySQL
    === RUN   TestBugMySQL/8
        bug_test.go:95: 1 efx_rate
        bug_test.go:98: 1 tl_margin
    --- FAIL: TestBugMySQL (0.03s)
        --- FAIL: TestBugMySQL/8 (0.03s)
    

    I don't know if I have a usage problem, but another developer ran into the same problem as well.

    Your Environment 🌎

    | Tech     | Version |
    | -------- | ------- |
    | Go       | 1.18.3  |
    | Ent      | 0.11.1  |
    | Database | MySQL   |
    | Driver   | https://github.com/go-sql-driver/mysql 1.6.0 |

    opened by tsingsun 4
  • bug: concurrent map writes causes panic

    Looks like this package uses golang/groupcache/lru, specifically: https://github.com/golang/groupcache/blob/41bb18bfe9da5321badc438f91158cd790a33aa3/lru/lru.go#L22

    Looks like it mentions that it is not concurrency-safe, and this library doesn't add any locking or synchronization on top of it, which is problematic for any kind of HTTP service. I'm running into an issue where reads are happening at the same time as writes, as shown below:

    fatal error: concurrent map writes
    
    goroutine 386 [running]:
    runtime.throw({0xd1f06d?, 0x404c4c?})
            /usr/local/go/src/runtime/panic.go:992 +0x71 fp=0xc0002d2d08 sp=0xc0002d2cd8 pc=0x4378d1
    runtime.mapdelete(0x410770?, 0x9faf31?, 0xc3b360?)
            /usr/local/go/src/runtime/map.go:715 +0x3c9 fp=0xc0002d2d70 sp=0xc0002d2d08 pc=0x411309
    github.com/golang/groupcache/lru.(*Cache).removeElement(0xc000032960, 0xc0001d96e0?)
            /home/user/go/pkg/mod/github.com/golang/[email protected]/lru/lru.go:109 +0xdb fp=0xc0002d2db8 sp=0xc0002d2d70 pc=0x9fb29b
    github.com/golang/groupcache/lru.(*Cache).Remove(...)
            /home/user/go/pkg/mod/github.com/golang/[email protected]/lru/lru.go:91
    ariga.io/entcache.(*LRU).Get(0xc000010620, {0xe63360?, 0xc000336090?}, {0xc064c0, 0xc000685380})
            /home/user/go/pkg/mod/ariga.io/[email protected]/level.go:117 +0x13b fp=0xc0002d2e28 sp=0xc0002d2db8 pc=0x9fd49b
    ariga.io/entcache.(*Driver).Query(0xc0001d97a0, {0xe63360?, 0xc000336090}, {0xc00010ca00, 0xf5}, {0xbfa880?, 0xc00000ef18}, {0xca7d60?, 0xc0001ddac0?})
            /home/user/go/pkg/mod/ariga.io/[email protected]/driver.go:149 +0x23b fp=0xc0002d2ee8 sp=0xc0002d2e28 pc=0x9fba5b
    entgo.io/ent/dialect/sql/sqlgraph.(*query).nodes(0xc0002d3008, {0xe63360, 0xc000336090}, {0xe63c20, 0xc0001d97a0})
            /home/user/go/pkg/mod/entgo.io/[email protected]/dialect/sql/sqlgraph/graph.go:561 +0x11d fp=0xc0002d2fc0 sp=0xc0002d2ee8 pc=0x9c407d
    entgo.io/ent/dialect/sql/sqlgraph.QueryNodes({0xe63360, 0xc000336090}, {0xe63c20?, 0xc0001d97a0?}, 0xc000116780)
            /home/user/go/pkg/mod/entgo.io/[email protected]/dialect/sql/sqlgraph/graph.go:497 +0xa5 fp=0xc0002d3038 sp=0xc0002d2fc0 pc=0x9c3e05
    mypkg/internal/ent.(*UserQuery).sqlAll(0xc000453540, {0xe63360, 0xc000336090}, {0x0, 0x0, 0xc05740?})
            /home/user/go/src/mypkg/internal/ent/user_query.go:343 +0x149 fp=0xc0002d3088 sp=0xc0002d3038 pc=0x9ec809
    mypkg/internal/ent.(*UserQuery).All(0x9bebe5?, {0xe63360, 0xc000336090})
            /home/user/go/src/mypkg/internal/ent/user_query.go:172 +0x5f fp=0xc0002d30d0 sp=0xc0002d3088 pc=0x9eb47f
    mypkg/internal/ent.(*UserQuery).Only(0xc000453540, {0xe63360, 0xc000336090})
            /home/user/go/src/mypkg/internal/ent/user_query.go:116 +0x6b fp=0xc0002d30f8 sp=0xc0002d30d0 pc=0x9eb10b
    mypkg/internal/ent.(*UserClient).Get(0xc0001d9800, {0xe63360, 0xc000336090}, 0xc0006850f0?)
            /home/user/go/src/mypkg/internal/ent/client.go:202 +0x174 fp=0xc0002d3158 sp=0xc0002d30f8 pc=0x9d26f4
    mypkg/internal/database.(*authService).Get(0xe5ebe0?, {0xe63360?, 0xc000336090?}, 0x4f33c5?)
            /home/user/go/src/mypkg/internal/database/auth_service.go:33 +0x2a fp=0xc0002d3188 sp=0xc0002d3158 pc=0xb18baa
    mymiddleware.(*AuthHandler[...]).AddToContext.func1(0xc000214c00)
            /home/user/go/pkg/mod/[email protected]/auth.go:234 +0x15e fp=0xc0002d3260 sp=0xc0002d3188 pc=0xba3c1e
    net/http.HandlerFunc.ServeHTTP(0x0?, {0xe610b0?, 0xc0002927d0?}, 0x0?)
            /usr/local/go/src/net/http/server.go:2084 +0x2f fp=0xc0002d3288 sp=0xc0002d3260 pc=0x6bcfef
    mymiddleware.UseSecurityTxt.func1.1({0xe610b0?, 0xc0002927d0?}, 0xc000214c00?)
            /home/user/go/pkg/mod/[email protected]/security.go:73 +0x127 fp=0xc0002d32d0 sp=0xc0002d3288 pc=0x7967a7
    net/http.HandlerFunc.ServeHTTP(0xc000684f50?, {0xe610b0?, 0xc0002927d0?}, 0x15?)
            /usr/local/go/src/net/http/server.go:2084 +0x2f fp=0xc0002d32f8 sp=0xc0002d32d0 pc=0x6bcfef
    mymiddleware.UseRobotsTxt.func1.1({0xe610b0, 0xc0002927d0}, 0xc000214c00?)
            /home/user/go/pkg/mod/[email protected]/security.go:40 +0x22f fp=0xc0002d3368 sp=0xc0002d32f8 pc=0x7964cf
    net/http.HandlerFunc.ServeHTTP(0xc000285a70?, {0xe610b0?, 0xc0002927d0?}, 0x15?)
            /usr/local/go/src/net/http/server.go:2084 +0x2f fp=0xc0002d3390 sp=0xc0002d3368 pc=0x6bcfef
    github.com/go-chi/httprate.(*rateLimiter).Handler.func1({0xe610b0, 0xc0002927d0}, 0xc000286300?)
            /home/user/go/pkg/mod/github.com/go-chi/[email protected]/limiter.go:129 +0x9ad fp=0xc0002d3510 sp=0xc0002d3390 pc=0x75988d
    net/http.HandlerFunc.ServeHTTP(0xc00028a540?, {0xe610b0?, 0xc0002927d0?}, 0xc00069ae80?)
            /usr/local/go/src/net/http/server.go:2084 +0x2f fp=0xc0002d3538 sp=0xc0002d3510 pc=0x6bcfef
    github.com/go-chi/chi/v5/middleware.(*Compressor).Handler.func1({0x7f331ce3ee38?, 0xc00069ae80}, 0xc000214c00)
            /home/user/go/pkg/mod/github.com/go-chi/chi/[email protected]/middleware/compress.go:213 +0x25e fp=0xc0002d35e8 sp=0xc0002d3538 pc=0x7508de
    net/http.HandlerFunc.ServeHTTP(0x40ff45?, {0x7f331ce3ee38?, 0xc00069ae80?}, 0xc000095668?)
            /usr/local/go/src/net/http/server.go:2084 +0x2f fp=0xc0002d3610 sp=0xc0002d35e8 pc=0x6bcfef
    github.com/go-chi/chi/v5/middleware.StripSlashes.func1({0x7f331ce3ee38, 0xc00069ae80}, 0xc000214c00)
            /home/user/go/pkg/mod/github.com/go-chi/chi/[email protected]/middleware/strip.go:30 +0x139 fp=0xc0002d3650 sp=0xc0002d3610 pc=0x755a59
    net/http.HandlerFunc.ServeHTTP(0x203000?, {0x7f331ce3ee38?, 0xc00069ae80?}, 0x4?)
            /usr/local/go/src/net/http/server.go:2084 +0x2f fp=0xc0002d3678 sp=0xc0002d3650 pc=0x6bcfef
    mymiddleware.UseNextURL.func1({0x7f331ce3ee38, 0xc00069ae80}, 0xc000214c00)
            /home/user/go/pkg/mod/[email protected]/redirect.go:37 +0x1dc fp=0xc0002d3780 sp=0xc0002d3678 pc=0x7957dc
    net/http.HandlerFunc.ServeHTTP(0xe63360?, {0x7f331ce3ee38?, 0xc00069ae80?}, 0x1351520?)
            /usr/local/go/src/net/http/server.go:2084 +0x2f fp=0xc0002d37a8 sp=0xc0002d3780 pc=0x6bcfef
    mymiddleware.UseStructuredLogger.func1.1({0xe62970, 0xc0001da1c0}, 0xc000214a00)
            /home/user/go/pkg/mod/[email protected]/logger.go:84 +0x485 fp=0xc0002d38a0 sp=0xc0002d37a8 pc=0x793fc5
    net/http.HandlerFunc.ServeHTTP(0x0?, {0xe62970?, 0xc0001da1c0?}, 0x88?)
            /usr/local/go/src/net/http/server.go:2084 +0x2f fp=0xc0002d38c8 sp=0xc0002d38a0 pc=0x6bcfef
    github.com/go-chi/chi/v5/middleware.Recoverer.func1({0xe62970?, 0xc0001da1c0?}, 0xe58401?)
            /home/user/go/pkg/mod/github.com/go-chi/chi/[email protected]/middleware/recoverer.go:38 +0x83 fp=0xc0002d3928 sp=0xc0002d38c8 pc=0x753ae3
    net/http.HandlerFunc.ServeHTTP(0xe63360?, {0xe62970?, 0xc0001da1c0?}, 0xe58408?)
            /usr/local/go/src/net/http/server.go:2084 +0x2f fp=0xc0002d3950 sp=0xc0002d3928 pc=0x6bcfef
    github.com/go-chi/chi/v5/middleware.RequestID.func1({0xe62970, 0xc0001da1c0}, 0xc000214900)
            /home/user/go/pkg/mod/github.com/go-chi/chi/[email protected]/middleware/request_id.go:76 +0x354 fp=0xc0002d3a00 sp=0xc0002d3950 pc=0x755774
    net/http.HandlerFunc.ServeHTTP(0xe632b8?, {0xe62970?, 0xc0001da1c0?}, 0x130c500?)
            /usr/local/go/src/net/http/server.go:2084 +0x2f fp=0xc0002d3a28 sp=0xc0002d3a00 pc=0x6bcfef
    github.com/go-chi/chi/v5.(*Mux).ServeHTTP(0xc00028e660, {0xe62970, 0xc0001da1c0}, 0xc000214800)
            /home/user/go/pkg/mod/github.com/go-chi/chi/[email protected]/mux.go:88 +0x442 fp=0xc0002d3a98 sp=0xc0002d3a28 pc=0x7041a2
    net/http.serverHandler.ServeHTTP({0xc0000c77a0?}, {0xe62970, 0xc0001da1c0}, 0xc000214800)
            /usr/local/go/src/net/http/server.go:2916 +0x43b fp=0xc0002d3b58 sp=0xc0002d3a98 pc=0x6c0a9b
    net/http.(*conn).serve(0xc0002b20a0, {0xe63360, 0xc0005585d0})
            /usr/local/go/src/net/http/server.go:1966 +0x5d7 fp=0xc0002d3fb8 sp=0xc0002d3b58 pc=0x6bba97
    net/http.(*Server).Serve.func3()
            /usr/local/go/src/net/http/server.go:3071 +0x2e fp=0xc0002d3fe0 sp=0xc0002d3fb8 pc=0x6c13ee
    runtime.goexit()
            /usr/local/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0002d3fe8 sp=0xc0002d3fe0 pc=0x46a321
    created by net/http.(*Server).Serve
            /usr/local/go/src/net/http/server.go:3071 +0x4db
    
    opened by lrstanley 2
  • entcache: upgrade "github.com/mitchellh/hashstructure" to v2

    From the package's page: Note on v2: It is highly recommended you use the "v2" release since this fixes some significant hash collision issues from v1.

    opened by tsingsun 1
  • fix: protect adds/gets/deletes during concurrent actions

    Resolves #17. Looks as though #18 was only protecting reads, whereas this protects all actions, and it utilizes a read-write mutex to improve performance.
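
    For reference, a minimal sketch (not the actual patch) of the approach described above: guarding the non-concurrency-safe github.com/golang/groupcache/lru cache with a mutex. The safeLRU wrapper here is hypothetical; the real fix guards entcache's own LRU level.

    import (
    	"sync"

    	"github.com/golang/groupcache/lru"
    )

    // safeLRU serializes access to the underlying lru.Cache.
    type safeLRU struct {
    	mu  sync.Mutex
    	lru *lru.Cache
    }

    func (c *safeLRU) Add(key, value interface{}) {
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	c.lru.Add(key, value)
    }

    func (c *safeLRU) Get(key interface{}) (interface{}, bool) {
    	// groupcache's Get moves the entry to the front of its internal list
    	// (and entcache's Get may also remove expired entries), so even lookups
    	// mutate state and need exclusive access.
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	return c.lru.Get(key)
    }

    func (c *safeLRU) Remove(key interface{}) {
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	c.lru.Remove(key)
    }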

    opened by lrstanley 1
  • internal/examples: add basic todo app and a ctxlevel example

    The todo app was taken from a8m/ent-graphql-example. The ctxlevel example shows how to use entcache with a context.Context-level (or request-level) cache.

    Next steps:

    • Add a section in the README that explains the context-level cache and link it to the example.
    • Add a multilevel cache example + docs.
    opened by a8m 1
  • add key holder

    Hi @a8m

    Inspired by cachops, I added key holders, which are created in the onClose function. Each holder corresponds to one or more records. This makes it easier to evict keys that are related to those records.

    Example:

    expectQuery(evictCtx, t, drv, "SELECT name FROM users LIMIT 1 OFFSET 0", []interface{}{"a8m"}) // evicted
    expectQuery(ctx, t, drv, "SELECT name FROM users", []interface{}{"a8m", "a9m"}) // evicted too
    expectQuery(ctx, t, drv, "SELECT name FROM users LIMIT 1 OFFSET 1", []interface{}{"a9m"}) // still in cache
    

    It's also related to issue #13. When updating a record with UpdateOne, ent will fetch the record after updating.

    u := client.T.Get(ctx, id)
    u, _ = u.Update().SetName("nottest").Save(entcache.Evict(ctx))
    

    graph.go

    	if !update.Empty() {
    		var res sql.Result
    		query, args := update.Query()
    		if err := tx.Exec(ctx, query, args, &res); err != nil {
    			return err
    		}
    		...
    	}
    	...
    	rows := &sql.Rows{}
    	query, args := selector.Query()
    	if err := tx.Query(ctx, query, args, rows); err != nil {
    		return err
    	}
    	return u.scan(rows)
    
    opened by TcMits 0
  • initial commit

    The next steps are:

    • Add multiple examples (usage in webservers) and provide proper analysis.
    • Internal and external documentation (GoDoc and README).
    • Memcache support.
    • Improve cache eviction (SQL parser-based), and allow overriding it.
    • Expose cache statistics (e.g. size, hits).
    opened by a8m 0
  • Enable cache selectively per query

    I am using gqlgen with ent. I want to test out the cache for only a select set of queries. One strategy to do this is to set the cache to skip by default for each incoming request but then be able to turn it on for specific GraphQL queries. The skip can be done at a middleware level. The only issue I am facing is that there is no public method to "toggle" ctxOptions.skip.

    Would this qualify as a feature to add to the public interface?

    opened by kingzbauer 0
  • paging list query  redis big keys

    Steps to reproduce:

    1. entcache is enabled.
    2. A paging list query is executed.
    3. Redis ends up with big keys.

    Example:

    offset := request.PageSize * (request.PageNum - 1)
    xxxx_filter.
        Limit(request.PageSize).
        Offset(offset).
        Order(ent.Desc("create_time")).
        All()
    
    opened by Liberxue 0
  • Bug: cache umarshalling custom JSONB type fails

    We have a number of fields which store arrays of enum string values in JSONB columns. When we get a cache hit, it instead returns this error:

    unmarshal field borders: invalid character '\\x00' looking for beginning of value
    

    Ent schema in question:

    // in ent/schema/route.go
    field.JSON("borders", customfield.Borders{}),
    
    // in customfield/borders.go
    type Border string
    type Borders []Border
    const (
    	BorderAIR     Border = "AIR"
    	BorderLAND    Border = "LAND"
    	BorderSEA     Border = "SEA"
    	BorderUnknown Border = "UNKNOWN"
    )
    
    

    Is there something we need to implement for marshalling/unmarshalling this type into the cache?

    opened by ivanvanderbyl 1
  • add nocache option?

    Example.

    1. Find data 1 (id: 1, name: test)
    2. Update data 1 (id: 1, name: nottest)
    3. Find data 1 -> the data returned still has name=test.

    So, should an option be added in this case to turn off the cache?

    opened by godcong 7