Open Source HTTP Reverse Proxy Cache and Time Series Dashboard Accelerator

Overview

Trickster is an HTTP reverse proxy/cache for HTTP applications and a dashboard query accelerator for time series databases.

Learn more below, and check out our roadmap to find out what else is in the works.

Note: Trickster v1.1 is the production release, sourced from the v1.1.x branch. The main branch sources Trickster 2.0, which is currently in beta.

HTTP Reverse Proxy Cache

Trickster is a fully-featured HTTP Reverse Proxy Cache for HTTP applications like static file servers and web APIs.

Proxy Feature Highlights

Time Series Database Accelerator

Trickster dramatically improves dashboard chart rendering times for end users by eliminating redundant computations on the TSDBs it fronts. In short, Trickster makes read-heavy Dashboard/TSDB environments, as well as those with high-cardinality datasets, significantly more performant and scalable.

Compatibility

Trickster works with virtually any Dashboard application that makes queries to any of these TSDBs:

Prometheus

ClickHouse

InfluxDB

Circonus IRONdb

See the Supported Origin Types document for full details.

How Trickster Accelerates Time Series

1. Time Series Delta Proxy Cache

Most dashboards request from a time series database the entire time range of data they wish to present, every time a user's dashboard loads, as well as on every auto-refresh. Trickster's Delta Proxy inspects the time range of a client query to determine which data points are already cached, and requests from the TSDB only the data points still needed to service the client request. This results in dramatically faster chart load times for everyone, since the TSDB is queried only for tiny incremental changes on each dashboard load, rather than several hundred data points of duplicative data.
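
To make the idea concrete, here is a minimal Go sketch of the delta computation, using hypothetical types rather than Trickster's actual internals: given the extent already in cache and the client's requested range, it derives the (usually tiny) sub-range that still must be fetched from the origin.

package main

import "fmt"

// Extent is a closed range of epoch timestamps, aligned to the query step.
type Extent struct{ Start, End int64 }

// neededDelta returns the portion of req not covered by cached, assuming the
// cache holds a single contiguous extent beginning at or before req.Start.
func neededDelta(cached, req Extent) (Extent, bool) {
    if req.End <= cached.End { // fully answerable from cache
        return Extent{}, false
    }
    if req.Start > cached.End { // no overlap; fetch the whole request
        return req, true
    }
    // Overlap: only the trailing slice after the cached extent is needed.
    return Extent{Start: cached.End, End: req.End}, true
}

func main() {
    cached := Extent{Start: 1700000000, End: 1700003600} // one hour already cached
    req := Extent{Start: 1700000000, End: 1700003660}    // dashboard auto-refresh
    if delta, ok := neededDelta(cached, req); ok {
        fmt.Printf("fetch only %d..%d from the origin\n", delta.Start, delta.End)
    }
}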

2. Step Boundary Normalization

When Trickster requests data from a TSDB, it adjusts the client's requested time range slightly to ensure that all data points returned are aligned to normalized step boundaries. For example, if the step is 300s, all data points will fall on the clock 0's and 5's. This ensures that the data is highly cacheable, is conveyed visually to users in a more familiar way, and that all dashboard users see identical data on their screens.
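
As an illustration (a sketch of the technique, not Trickster's implementation), snapping a requested range outward to step boundaries is simple integer math:

// normalizeRange widens [start, end] so both ends land on multiples of step,
// e.g. on the clock 0's and 5's when step is 300 seconds.
func normalizeRange(start, end, step int64) (int64, int64) {
    normStart := (start / step) * step // floor to the previous boundary
    normEnd := end
    if r := end % step; r != 0 {
        normEnd = end + (step - r) // ceil to the next boundary
    }
    return normStart, normEnd
}

For example, normalizeRange(1000, 1540, 300) returns (900, 1800), so every client asking for roughly that window is served the same boundary-aligned, highly cacheable series.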

3. Fast Forward

Trickster's Fast Forward feature ensures that even with step boundary normalization, real-time graphs always show the most recent data, regardless of how far away the next step boundary is. For example, if your chart step is 300s and the time is currently 1:21pm, you would normally be waiting another four minutes for a new data point at 1:25pm. Trickster will break the step interval for the most recent data point and always include it in the response to clients requesting real-time data.
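
A sketch of that logic (hypothetical helper, not Trickster's code): when "now" is not itself on a step boundary, one extra instantaneous data point is requested at "now" and appended to the normalized series.

// fastForwardPoint reports whether a fast-forward point is needed and, if so,
// the timestamp at which to query the most recent instantaneous value.
func fastForwardPoint(now, step int64) (int64, bool) {
    lastBoundary := (now / step) * step
    if now == lastBoundary {
        return 0, false // already on a boundary; the normal series covers it
    }
    return now, true
}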

Trying Out Trickster

Check out our end-to-end Docker Compose demo composition for a zero-configuration running environment.

Installing

Docker

Docker images are available on Docker Hub:

$ docker run --name trickster -d -v /path/to/trickster.conf:/etc/trickster/trickster.conf -p 0.0.0.0:8480:8480 tricksterproxy/trickster

See the 'deploy' Directory for more information about using or creating Trickster docker images.

Kubernetes

See the 'deploy' Directory for Kubernetes deployment files and examples.

Helm

Trickster Helm Charts are located at https://helm.tricksterproxy.io for installation, and maintained at https://github.com/tricksterproxy/helm-charts. We welcome chart contributions.
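
If you use the chart repository, installation is typically along these lines (the repo alias and release name below are examples, not prescribed values):

$ helm repo add trickster https://helm.tricksterproxy.io
$ helm install trickster trickster/trickster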

Building from source

To build Trickster from the source code yourself, you need to have a working Go environment with version 1.9 or greater installed.

You can directly use the go tool to download and install the trickster binary into your GOPATH:

$ go get github.com/tricksterproxy/trickster
$ trickster -origin-url http://prometheus.example.com:9090 -origin-type prometheus

You can also clone the repository yourself and build using make:

$ mkdir -p $GOPATH/src/github.com/tricksterproxy
$ cd $GOPATH/src/github.com/tricksterproxy
$ git clone https://github.com/tricksterproxy/trickster.git
$ cd trickster
$ make build
$ ./OPATH/trickster -origin-url http://prometheus.example.com:9090 -origin-type prometheus

The Makefile provides several targets, including:

  • build: build the trickster binary
  • docker: build a docker container for the current HEAD
  • clean: delete previously-built binaries and object files
  • test: run unit tests
  • bench: run benchmark tests
  • rpm: build a Trickster RPM

More information

  • Refer to the docs directory for additional info.

Contributing

Refer to CONTRIBUTING.md

Who Is Using Trickster

As the Trickster community grows, we'd like to keep track of who is using it in their stack. We invite you to submit a PR with your company name and @githubhandle to be included on the list.

  1. Comcast [@jranson]
  2. Selfnet e.V. [@ThoreKr]
  3. swarmstack [@mh720]
  4. Hostinger [@ton31337]
  5. The Remote Company (MailerLite, MailerSend, MailerCheck, YCode) [@aorfanos]
Issues
  • Trickster provokes grafana display quirks

    I noticed a few Grafana display bugs in my graphs, which at first I blamed on Thanos, but I forgot I had a Trickster in front of it.

    Expected graph: [grafana-prometheus screenshot]

    Thanos + Trickster graph: [grafana-thanos-2 screenshot]

    The data are the same; they just come from a different origin/path.

    This only happens when I set the end of the range to now in Grafana. If I fix the end of the range to a specific time, I can't reproduce.

    Taking a look at the data coming from Prometheus and from Thanos + Trickster, I got the following diff:

    [screenshot: diff of the two responses]

    As you can see, the last timestamp is not aligned with the step in the Thanos + Trickster case, and it confuses Grafana's output.

    Bypassing Trickster fixes the problem.

    bug 
    opened by sylr 20
  • Unable to handle scalar responses

    If I send a simple scalar query such as /api/v1/query?query=5, trickster now returns errors. I bisected this, and it seems the error was introduced in ec4eff34d5532b1907723eeaabe620f02dd25b32. The basic problem here is that trickster assumes the response type from Prometheus is a vector, when in reality it can be (1) scalar, (2) vector, or (3) string. The way this worked before was that the unmarshal error was ignored and the response handed back (I have a patch that puts it back to that behavior).

    From what I see, it looks like caching won't work on those scalar values -- as it wasn't able to unmarshal. Alternatively, we could change the model to use model.Value and then add some type-switching. But since that is a potentially large change, I figured I'd get others' input first.
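
    For illustration, here is a minimal sketch of the type-switching approach described above, using github.com/prometheus/common/model (the apiResponse envelope struct and decodeResult helper are hypothetical, not Trickster's actual types):

    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/prometheus/common/model"
    )

    // apiResponse mirrors the envelope of a Prometheus /api/v1/query response.
    type apiResponse struct {
        Status string `json:"status"`
        Data   struct {
            ResultType string          `json:"resultType"`
            Result     json.RawMessage `json:"result"`
        } `json:"data"`
    }

    // decodeResult switches on resultType instead of assuming a vector, so
    // scalar and string responses unmarshal cleanly too.
    func decodeResult(body []byte) (model.Value, error) {
        var r apiResponse
        if err := json.Unmarshal(body, &r); err != nil {
            return nil, err
        }
        switch r.Data.ResultType {
        case "scalar":
            var s model.Scalar
            err := json.Unmarshal(r.Data.Result, &s)
            return &s, err
        case "vector":
            var v model.Vector
            err := json.Unmarshal(r.Data.Result, &v)
            return v, err
        case "string":
            var s model.String
            err := json.Unmarshal(r.Data.Result, &s)
            return &s, err
        default:
            return nil, fmt.Errorf("unsupported result type %q", r.Data.ResultType)
        }
    }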

    prometheus object-proxy 
    opened by jacksontj 20
  • Redis configuration is not caching any request

    I configured Trickster to use AWS ElastiCache/Redis 5.0.4, deployed as Master + 2 Replicas and without cluster mode enabled.

    Using Trickster 1.0-beta8 image

    Deployed on K8s with 3 replicas, exposing Trickster through an Ingress to allow Grafana to use it as a Datasource.

    Every time Grafana runs a query, Trickster misses the cache; even after copying the request from Grafana and running it with curl multiple times, it keeps missing the cache.

        [caches]
    
            [caches.default]
            # cache_type defines what kind of cache Trickster uses
            # options are 'bbolt', 'filesystem', 'memory', 'redis' and 'redis_cluster'
            # The default is 'memory'.
            type = 'redis'
    
            # compression determines whether the cache should be compressed. default is true
            # changing the compression setting will leave orphans in your cache for the duration of timeseries_ttl_secs
            compression = true
    
            # timeseries_ttl_secs defines the relative expiration of cached timeseries. default is 6 hours (21600 seconds)
            timeseries_ttl_secs = 21600
    
            # fastforward_ttl_secs defines the relative expiration of cached fast forward data. default is 15s
            fastforward_ttl_secs = 15
    
            # object_ttl_secs defines the relative expiration of generically cached (non-timeseries) objects. default is 30s
            object_ttl_secs = 30
    
                ### Configuration options for the Cache Index
                # The Cache Index handles key management and retention for bbolt, filesystem and memory
                # Redis handles those functions natively and does not use the Trickster's Cache Index
                [caches.default.index]
    
                # reap_interval_secs defines how long the Cache Index reaper sleeps between reap cycles. Default is 3 (3s)
                reap_interval_secs = 3
    
                # flush_interval_secs sets how often the Cache Index saves its metadata to the cache from application memory. Default is 5 (5s)
                flush_interval_secs = 5
    
                # max_size_bytes indicates how large the cache can grow in bytes before the Index evicts least-recently-accessed items. default is 512MB
                max_size_bytes = 536870912
    
                # max_size_backoff_bytes indicates how far below max_size_bytes the cache size must be to complete a byte-size-based eviction exercise. default is 16MB
                max_size_backoff_bytes = 16777216
    
                # max_size_objects indicates how large the cache can grow in objects before the Index evicts least-recently-accessed items. default is 0 (infinite)
                max_size_objects = 0
    
                # max_size_backoff_objects indicates how far under max_size_objects the cache size must be to complete object-size-based eviction exercise. default is 100
                max_size_backoff_objects = 100
    
    
    
                ### Configuration options when using a Redis Cache
                [caches.default.redis]
                # protocol defines the protocol for connecting to redis ('unix' or 'tcp') 'tcp' is default
                protocol = 'tcp'
                # endpoint defines the fqdn+port or path to a unix socket file for connecting to redis
                # default is 'redis:6379'
                endpoint = 'redis-common.external-service.svc.cluster.local:6379'
                # password provides the redis password
                # default is empty
                password = ''
    
    aws redis 
    opened by carlosjgp 19
  • Ability to cache older-but-frequently-accessed data

    Setting value_retention_factor = 1536 in the conf file has no impact. Only responses that have fewer than the default 1024 samples are fully cached. The config file is definitely being used, because I can switch to filesystem caching.

    This is an issue when the Grafana Prometheus datasource uses Resolution 1/1. That results in the smallest possible range_query "step": one sample per pixel. For example, querying 24h of data sets the step to 60s --> 1441 samples are returned and only 1024 are cached, always resulting in kmiss and phit.

    Workaround: when Grafana resolution is set to 1/2, the step size increases to 120s --> 721 samples are returned and Trickster has a 100% cache hit rate.

    enhancement caching config 
    opened by jtorkkel 19
  • Panic with v0.1.3

    Hi,

    I installed v0.1.3 in order to see if it fixes https://github.com/Comcast/trickster/issues/92, but I encountered this:

    panic: runtime error: index out of range
    goroutine 493 [running]:
    main.(*PrometheusMatrixEnvelope).cropToRange(0xc0001c06f8, 0x5c06535c, 0x0)
    	/go/src/github.com/Comcast/trickster/handlers.go:1014 +0x4ee
    main.(*TricksterHandler).originRangeProxyHandler(0xc000062140, 0xc000312640, 0x41, 0xc00014c8a0)
    	/go/src/github.com/Comcast/trickster/handlers.go:840 +0x615
    created by main.(*TricksterHandler).queueRangeProxyRequest
    	/go/src/github.com/Comcast/trickster/handlers.go:660 +0x276
    
    opened by sylr 19
  • [Question] Use with Prometheus High Availability?

    I'm super excited about this project! Thanks for sharing it with the community!

    I had a question about this part of the docs:

    In a Multi-Origin placement, you have one dashboard endpoint, one Trickster endpoint, and multiple Prometheus endpoints. Trickster is aware of each Prometheus endpoint and treats them as unique databases to which it proxies and caches data independently of each other.

    Could this work for load balancing multiple Prometheus servers in an HA setup? We currently have a pair of Prometheus servers in each region, redundantly scraping the same targets. Currently our Grafana is just pinned to one Prometheus server in each region, meaning that if that one goes down, our dashboards go down until we manually change the datasource to point to the other one (and by that point we would have just restored the first server anyway). It's kind of a bummer because it means that while HA works great for alerting itself, it doesn't work for dashboards.

    Would be awesome if there was a way to achieve this with Trickster!

    enhancement prometheus 2.0 release tsmerge 
    opened by geekdave 19
  • issues with helm deployment

    I faced an issue yesterday with Chart 1.1.2: it seems the 1.0-beta tag changed to be a copy of 1.0-beta10, which caused trickster to fail to start after I updated my kubernetes cluster nodes and pulled the docker image. Then I tried to update to the latest chart.

    For 1.3.0, some values in values.yaml are duplicated, like the service section, which causes an issue with the config file having an empty listen_port.

    Also, the PVC template contains deprecated values; for example, https://github.com/Comcast/trickster/blob/6ac009f29aeeea9476da9db6311d0aa7cf39033c/deploy/helm/trickster/templates/pvc.yaml#L1 references a section called config that does not exist in the current values.yaml.

    Workaround: used tag 1.0.9-beta9.

    bug config helm 
    opened by agoloAbbas 17
  • High memory usage with redis cache backend

    We're using the redis cache backend in our instance of trickster and we're seeing surprisingly high memory usage.

    We are running trickster in kubernetes, with memory limited to 4GB. Trickster only needs to be up for half an hour before it's killed for using more than 4GB memory (OOM).

    Has anyone seen any similar behavior? Any idea what could be going on? We've tested increasing the memory limits, they were originally set to 2GB, but the problem persists.

    caching performance 
    opened by stewrutledgebonnier 16
  • 1.1 performance regression

    It seems that when we load test trickster with a prometheus backend, 1.1 has a reasonably large performance regression compared to 1.0.

    Both 1.1.2 and 1.1.3 seem to max out at around 100 requests per second when we load test them. 1.0 doesn't seem to get throttled in the same way and can go to several hundred (we haven't tried higher yet). We also see much higher cpu and memory usage for 1.1.

    We're trying about 15 different queries set to query over the last hour.

    performance investigating 
    opened by BertHartm 13
  • alignStepBoundaries edge cases

    This method is not very large, but does a few things which IMO are questionable.

    1. Reversing start/end: if the user flipped start/end, it is not the responsibility of trickster to flip it back. This is additionally confusing because things will work, but if trickster is removed from the call path, all queries this fixes will break. IMO it is not the place of the cache to "correct" user queries.

    2. time.Now() checks: I think it's fine to cut off queries from going into the future, but this doesn't handle the case where both start and end are in the future; IMO in that case this should return an error. In the remaining cases I think it's fine to leave it truncating end, as the query results will remain unaffected, although I'd rather it didn't (the time.Now() constraint is a cache issue, not a query issue).

    3. Default step param: if the user didn't define one, it is not the place of a caching proxy to fill it in. Here we should just return an error and make the user correct their query (see the sketch below).
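
    A sketch of the strict-validation alternative proposed above (a hypothetical function, not Trickster's code), returning errors instead of silently correcting malformed range queries:

    package main

    import (
        "errors"
        "time"
    )

    // validateRangeQuery rejects malformed ranges rather than "fixing" them.
    func validateRangeQuery(start, end time.Time, step time.Duration, now time.Time) error {
        if step <= 0 {
            return errors.New("step parameter is required and must be positive")
        }
        if end.Before(start) {
            return errors.New("end must not precede start; refusing to flip the range")
        }
        if start.After(now) {
            return errors.New("start must not be in the future")
        }
        return nil
    }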

    opened by jacksontj 13
  • Add warning for potential Redis misconfigurations

    Hi - I've set trickster to cache to redis; I'm using beta 8 (but have tried multiple versions). Trickster seems to connect fine and even tries to store data:

    time=2019-06-19T12:56:57.101537228Z app=trickster caller=proxy/engines/cache.go:71 level=debug event="compressing cached data" cacheKey=thanos-query:9090.0a9332e4c9046613a62ba8a6e4a2e78a.sz
    time=2019-06-19T12:56:57.101899271Z app=trickster caller=cache/redis/redis.go:82 level=debug event="redis cache store" key=thanos-query:9090.0a9332e4c9046613a62ba8a6e4a2e78a.sz

    However, I get no keys in redis and the dashboard(s) load no quicker. I have tried Trickster with the in-memory option and that works as expected.

    I am also able to write to redis both using the CLI and an external test application just to rule redis out.

    I've also tried standing up multiple Redis types (e.g. standard, cluster, and sentinel).

    Thanks!

    enhancement caching 
    opened by bradleyhession1 12
  • Add ADOPTERS file

    Create an ADOPTERS.md file in the project root and relocate the "Who is Using Trickster" section from README.md into ADOPTERS.md as its primary content.

    documentation 
    opened by jranson 0
  • Response does not get cached - Status is always kmiss

    Hello, I have the problem that one of the REST APIs I want to cache is always failing, and the Redis key is always deleted.

    Logs:

    trickster  | time=x app=trickster level=debug event="redis cache remove" key=demo.host.opc.6af458366efc65dde25ba9f9f7ffa1bc caller=cache/redis/redis.go:129
    trickster  | time=2022-05-29T23:04:22.057896626Z app=trickster level=debug event="redis cache miss" key=demo.host.opc.6af458366efc65dde25ba9f9f7ffa1bc caller=cache/redis/redis.go:117
    trickster  | time=2022-05-29T23:04:22.268980668Z app=trickster level=debug event="redis cache remove" key=demo.host.opc.6af458366efc65dde25ba9f9f7ffa1bc caller=cache/redis/redis.go:129
    trickster  | time=2022-05-29T23:04:23.177295418Z app=trickster level=debug event="redis cache miss" key=demo.host.opc.6af458366efc65dde25ba9f9f7ffa1bc caller=cache/redis/redis.go:117
    

    Headers of demo.host.com backend - not working:

    > HEAD /vdl/<LONGURL> HTTP/1.1
    > Host: 127.0.0.1:8480
    > User-Agent: curl/7.79.1
    > Accept: */*
    > 
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 200 OK
    HTTP/1.1 200 OK
    < Access-Control-Allow-Origin: *
    Access-Control-Allow-Origin: *
    < Content-Type: application/json; charset=UTF-8
    Content-Type: application/json; charset=UTF-8
    < Date: x GMT
    Date: x GMT
    < X-Trickster-Result: engine=ObjectProxyCache; status=kmiss
    X-Trickster-Result: engine=ObjectProxyCache; status=kmiss
    

    Working API:

    *   Trying 127.0.0.1:8480...
    * Connected to 127.0.0.1 (127.0.0.1) port 8480 (#0)
    > HEAD /plos/search?q=DNA HTTP/1.1
    > Host: 127.0.0.1:8480
    > User-Agent: curl/7.79.1
    > Accept: */*
    > 
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 200 OK
    HTTP/1.1 200 OK
    < Access-Control-Allow-Credentials: true
    Access-Control-Allow-Credentials: true
    < Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization
    Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization
    < Access-Control-Allow-Methods: GET, PUT, POST, DELETE, PATCH, OPTIONS
    Access-Control-Allow-Methods: GET, PUT, POST, DELETE, PATCH, OPTIONS
    < Access-Control-Allow-Origin: *
    Access-Control-Allow-Origin: *
    < Access-Control-Max-Age: 1728000
    Access-Control-Max-Age: 1728000
    < Connection: keep-alive
    Connection: keep-alive
    < Content-Type: application/json;charset=utf-8
    Content-Type: application/json;charset=utf-8
    < Etag: "XXX"
    Etag: "XXX"
    < Last-Modified: x GMT
    Last-Modified: xGMT
    < Strict-Transport-Security: max-age=15724800; includeSubDomains
    Strict-Transport-Security: max-age=15724800; includeSubDomains
    < X-Cache-Status: MISS
    X-Cache-Status: MISS
    < X-Trickster-Result: engine=ObjectProxyCache; status=hit
    X-Trickster-Result: engine=ObjectProxyCache; status=hit
    < Date: xGMT
    Date: x GMT
    

    Trickster config.yaml

    frontend:
      listen_port: 8480
    
    request_rewriters:
      remove-cache-control:
        instructions: [
          ['header', 'delete', 'Cache-Control', 'no-store'],
          ['header', 'delete', 'Cache-Control', 'no-cache'],
          ['header', 'delete', 'Pragma'],
        ]
    
    
    backends:
      vvdl:
        hosts: [ demo.host.com ]
        origin_url: https://demo.host.com
        provider: reverseproxycache
        cache_name: rds
        req_rewriter_name: remove-cache-control
    
    
    # This API is working
      plos:
        hosts: [ api.plos.org ]
        origin_url: https://api.plos.org
        provider: reverseproxycache
        cache_name: rds
        forwarded_headers: standard
       req_rewriter_name: remove-cache-control
    
    
    metrics:
     listen_port: 8481   # available for scraping at http://<trickster>:<metrics.listen_port>/metrics
    
    logging:
      log_level: DEBUG
    
    caches:
      rds:
        provider: redis
        redis:
          client_type: standard
          protocol: tcp
          endpoint: 'redis:6379'
          db: 0
          password: "XXX"
    

    Do you have any idea what's wrong here?

    opened by chfxr 0
  • MySQL Support for Trickster : Please

    We desperately need support for MySQL DB. Is MySQL support planned? Even if there is bleeding-edge support available, we are happy to test and provide feedback.

    opened by ksingh7 1
  • "invalid character '\\x1f' looking for beginning of value" error while using 'tsm' ALB mode

    Hey! 👋

    Got a lot of errors in the logs while using 'tsm' ALB mode 🙁

    I assume that I misconfigured something, but I've spent a couple of days on debugging already and found nothing special in my config.

    @jranson could you please assist? 🙏

    Thanks!

    Log:

    ...
    time=2022-03-17T17:27:54.525807351Z app=trickster level=debug event="memorycache cache store" cacheName=default cacheKey=prometheus-1.example.org:9090.opc.fcfefe8641c333d7cf85566801eec0e9 length=0 ttl=30s is_direct=true caller=cache/memory/memory.go:99
    time=2022-03-17T17:27:54.525818951Z app=trickster level=debug event="memorycache cache store" cacheKey=prometheus-2.example.org:9090.opc.fcfefe8641c333d7cf85566801eec0e9 length=0 ttl=30s is_direct=true caller=cache/memory/memory.go:99 cacheName=default
    time=2022-03-17T17:27:54.526315004Z app=trickster level=error event="vector unmarshaling error" provider=prometheus detail="json: cannot unmarshal number into Go struct field WFMatrixData.data.result of type model.WFResult" caller=backends/prometheus/transformations.go:51
    time=2022-03-17T17:27:54.526342486Z app=trickster level=error event="vector unmarshaling error" caller=backends/prometheus/transformations.go:51 provider=prometheus detail="json: cannot unmarshal number into Go struct field WFMatrixData.data.result of type model.WFResult"
    time=2022-03-17T17:27:54.527278692Z app=trickster level=error event="vector unmarshaling error" provider=prometheus detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/model/vector.go:68
    time=2022-03-17T17:27:54.527391793Z app=trickster level=error event="vector unmarshaling error" provider=prometheus detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/model/vector.go:68
    time=2022-03-17T17:27:54.531197161Z app=trickster level=debug event="memorycache cache store" cacheName=default cacheKey=prometheus-2.example.org:9090.opc.b95724c13c5dd4d5322c552ffa4be3bb length=0 ttl=30s is_direct=true caller=cache/memory/memory.go:99
    time=2022-03-17T17:27:54.531886319Z app=trickster level=debug event="memorycache cache store" ttl=30s is_direct=true caller=cache/memory/memory.go:99 cacheName=default cacheKey=prometheus-1.example.org:9090.opc.b95724c13c5dd4d5322c552ffa4be3bb length=0
    time=2022-03-17T17:27:54.532798924Z app=trickster level=error event="labels unmarshaling error" provider=prometheus detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/model/labels.go:81
    time=2022-03-17T17:27:54.532833629Z app=trickster level=error event="labels unmarshaling error" provider=prometheus detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/model/labels.go:81
    time=2022-03-17T17:27:55.208069154Z app=trickster level=error event="vector unmarshaling error" provider=prometheus detail="json: cannot unmarshal number into Go struct field WFMatrixData.data.result of type model.WFResult" caller=backends/prometheus/transformations.go:51
    time=2022-03-17T17:27:55.208160555Z app=trickster level=debug event="memory cache retrieve" caller=cache/memory/memory.go:148 cacheKey=prometheus-2.example.org:9090.opc.fcfefe8641c333d7cf85566801eec0e9
    time=2022-03-17T17:27:55.2085418Z app=trickster level=debug event="memory cache retrieve" cacheKey=prometheus-1.example.org:9090.opc.fcfefe8641c333d7cf85566801eec0e9 caller=cache/memory/memory.go:148
    time=2022-03-17T17:27:55.208790904Z app=trickster level=error event="vector unmarshaling error" detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/model/vector.go:68 provider=prometheus
    time=2022-03-17T17:27:55.208845925Z app=trickster level=error event="vector unmarshaling error" provider=prometheus detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/model/vector.go:68
    time=2022-03-17T17:27:55.209217154Z app=trickster level=error event="vector unmarshaling error" provider=prometheus detail="json: cannot unmarshal number into Go struct field WFMatrixData.data.result of type model.WFResult" caller=backends/prometheus/transformations.go:51
    time=2022-03-17T17:27:55.210003957Z app=trickster level=debug event="memory cache retrieve" cacheKey=prometheus-2.example.org:9090.opc.b95724c13c5dd4d5322c552ffa4be3bb caller=cache/memory/memory.go:148
    time=2022-03-17T17:27:55.210048805Z app=trickster level=debug event="memory cache retrieve" cacheKey=prometheus-1.example.org:9090.opc.b95724c13c5dd4d5322c552ffa4be3bb caller=cache/memory/memory.go:148
    time=2022-03-17T17:27:55.213145618Z app=trickster level=error event="labels unmarshaling error" detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/model/labels.go:81 provider=prometheus
    time=2022-03-17T17:27:55.21315784Z app=trickster level=error event="labels unmarshaling error" provider=prometheus detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/model/labels.go:81
    time=2022-03-17T17:27:57.723877684Z app=trickster level=error event="vector unmarshaling error" provider=prometheus detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/transformations.go:51
    time=2022-03-17T17:27:57.724431688Z app=trickster level=error event="vector unmarshaling error" provider=prometheus detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/transformations.go:51
    time=2022-03-17T17:27:57.724522004Z app=trickster level=error event="vector unmarshaling error" detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/model/vector.go:68 provider=prometheus
    time=2022-03-17T17:27:57.724524341Z app=trickster level=error event="vector unmarshaling error" provider=prometheus detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/model/vector.go:68
    time=2022-03-17T17:27:57.727053397Z app=trickster level=error event="labels unmarshaling error" provider=prometheus detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/model/labels.go:81
    time=2022-03-17T17:27:57.727072358Z app=trickster level=error event="labels unmarshaling error" provider=prometheus detail="invalid character '\\x1f' looking for beginning of value" caller=backends/prometheus/model/labels.go:81
    

    Config:

    frontend:
      listen_port: 9090
    
    backends:
      zone-01:
        provider: prometheus
        origin_url: prometheus-1.example.org:9090
        prometheus:
          labels:
            region: zone-01
    
      zone-02:
        provider: prometheus
        origin_url: prometheus-2.example.org:9090
        prometheus:
          labels:
            region: zone-02
    
      prom-alb-all:
        provider: alb
        alb:
          mechanism: tsm
          pool:
            - zone-01
            - zone-02
        is_default: true
    
    logging:
      log_level: 'debug'
    

    Full config:

    main:
      config_handler_path: /trickster/config
      ping_handler_path: /trickster/ping
      reload_handler_path: /trickster/config/reload
      health_handler_path: /trickster/health
      pprof_server: both
      server_name: p-trickster-02
    backends:
      prom-alb-all:
        provider: alb
        timeout_ms: 180000
        keep_alive_timeout_ms: 300000
        max_idle_conns: 20
        cache_name: default
        healthcheck:
          verb: GET
          scheme: http
          path: /
          expected_codes:
          - 200
        timeseries_retention_factor: 1024
        timeseries_eviction_method: oldest
        paths:
          /-0000000011:
            path: /
            match_type: prefix
            handler: alb
            methods:
            - GET
            - HEAD
            no_metrics: false
            reqrewriter: []
        negative_cache_name: default
        timeseries_ttl_ms: 21600000
        fastforward_ttl_ms: 15000
        max_ttl_ms: 86400000
        revalidation_factor: 2
        max_object_size_bytes: 524288
        tracing_name: default
        alb:
          mechanism: tsm
          pool:
          - zone-01
          - zone-02
          output_format: prometheus
        tls: {}
        forwarded_headers: standard
        is_default: true
        reqrewriter: []
      zone-01:
        provider: prometheus
        origin_url: http://prometheus-1.example.org:9090
        timeout_ms: 180000
        keep_alive_timeout_ms: 300000
        max_idle_conns: 20
        cache_name: default
        cache_key_prefix: prometheus-1.example.org:9090
        healthcheck:
          verb: GET
          scheme: http
          host: prometheus-1.example.org:9090
          path: /api/v1/query
          query: query=up
          expected_codes:
          - 200
        timeseries_retention_factor: 1024
        timeseries_eviction_method: oldest
        paths:
          /-0000000101:
            path: /
            match_type: prefix
            handler: proxy
            methods:
            - GET
            - POST
            no_metrics: false
            reqrewriter: []
          /api/v1/-0000000101:
            path: /api/v1/
            match_type: prefix
            handler: proxy
            methods:
            - GET
            - POST
            no_metrics: false
            reqrewriter: []
          /api/v1/admin-1111111111:
            path: /api/v1/admin
            match_type: prefix
            handler: admin
            methods:
            - GET
            - HEAD
            - POST
            - PUT
            - DELETE
            - CONNECT
            - OPTIONS
            - TRACE
            - PATCH
            - PURGE
            no_metrics: false
            reqrewriter: []
          /api/v1/alertmanagers-0000000001:
            path: /api/v1/alertmanagers
            match_type: exact
            handler: proxycache
            methods:
            - GET
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/alerts-0000000001:
            path: /api/v1/alerts
            match_type: exact
            handler: alerts
            methods:
            - GET
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/label/-0000000001:
            path: /api/v1/label/
            match_type: prefix
            handler: labels
            methods:
            - GET
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/labels-0000000101:
            path: /api/v1/labels
            match_type: exact
            handler: labels
            methods:
            - GET
            - POST
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/query-0000000101:
            path: /api/v1/query
            match_type: exact
            handler: query
            methods:
            - GET
            - POST
            cache_key_params:
            - query
            - time
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/query_range-0000000101:
            path: /api/v1/query_range
            match_type: exact
            handler: query_range
            methods:
            - GET
            - POST
            cache_key_params:
            - query
            - step
            response_headers:
              Cache-Control: s-maxage=21600
            no_metrics: false
            reqrewriter: []
          /api/v1/rules-0000000001:
            path: /api/v1/rules
            match_type: exact
            handler: proxycache
            methods:
            - GET
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/series-0000000101:
            path: /api/v1/series
            match_type: exact
            handler: series
            methods:
            - GET
            - POST
            cache_key_params:
            - match[]
            - start
            - end
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/status-0000000001:
            path: /api/v1/status
            match_type: prefix
            handler: proxycache
            methods:
            - GET
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/targets-0000000001:
            path: /api/v1/targets
            match_type: exact
            handler: proxycache
            methods:
            - GET
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/targets/metadata-0000000001:
            path: /api/v1/targets/metadata
            match_type: exact
            handler: proxycache
            methods:
            - GET
            cache_key_params:
            - match_target
            - metric
            - limit
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
        negative_cache_name: default
        timeseries_ttl_ms: 21600000
        fastforward_ttl_ms: 15000
        max_ttl_ms: 86400000
        revalidation_factor: 2
        max_object_size_bytes: 524288
        tracing_name: default
        prometheus:
          labels:
            region: zone-01
        tls: {}
        forwarded_headers: standard
        reqrewriter: []
      zone-02:
        provider: prometheus
        origin_url: http://prometheus-2.example.org:9090
        timeout_ms: 180000
        keep_alive_timeout_ms: 300000
        max_idle_conns: 20
        cache_name: default
        cache_key_prefix: prometheus-2.example.org:9090
        healthcheck:
          verb: GET
          scheme: http
          host: prometheus-2.example.org:9090
          path: /api/v1/query
          query: query=up
          expected_codes:
          - 200
        timeseries_retention_factor: 1024
        timeseries_eviction_method: oldest
        paths:
          /-0000000101:
            path: /
            match_type: prefix
            handler: proxy
            methods:
            - GET
            - POST
            no_metrics: false
            reqrewriter: []
          /api/v1/-0000000101:
            path: /api/v1/
            match_type: prefix
            handler: proxy
            methods:
            - GET
            - POST
            no_metrics: false
            reqrewriter: []
          /api/v1/admin-1111111111:
            path: /api/v1/admin
            match_type: prefix
            handler: admin
            methods:
            - GET
            - HEAD
            - POST
            - PUT
            - DELETE
            - CONNECT
            - OPTIONS
            - TRACE
            - PATCH
            - PURGE
            no_metrics: false
            reqrewriter: []
          /api/v1/alertmanagers-0000000001:
            path: /api/v1/alertmanagers
            match_type: exact
            handler: proxycache
            methods:
            - GET
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/alerts-0000000001:
            path: /api/v1/alerts
            match_type: exact
            handler: alerts
            methods:
            - GET
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/label/-0000000001:
            path: /api/v1/label/
            match_type: prefix
            handler: labels
            methods:
            - GET
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/labels-0000000101:
            path: /api/v1/labels
            match_type: exact
            handler: labels
            methods:
            - GET
            - POST
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/query-0000000101:
            path: /api/v1/query
            match_type: exact
            handler: query
            methods:
            - GET
            - POST
            cache_key_params:
            - query
            - time
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/query_range-0000000101:
            path: /api/v1/query_range
            match_type: exact
            handler: query_range
            methods:
            - GET
            - POST
            cache_key_params:
            - query
            - step
            response_headers:
              Cache-Control: s-maxage=21600
            no_metrics: false
            reqrewriter: []
          /api/v1/rules-0000000001:
            path: /api/v1/rules
            match_type: exact
            handler: proxycache
            methods:
            - GET
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/series-0000000101:
            path: /api/v1/series
            match_type: exact
            handler: series
            methods:
            - GET
            - POST
            cache_key_params:
            - match[]
            - start
            - end
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/status-0000000001:
            path: /api/v1/status
            match_type: prefix
            handler: proxycache
            methods:
            - GET
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/targets-0000000001:
            path: /api/v1/targets
            match_type: exact
            handler: proxycache
            methods:
            - GET
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
          /api/v1/targets/metadata-0000000001:
            path: /api/v1/targets/metadata
            match_type: exact
            handler: proxycache
            methods:
            - GET
            cache_key_params:
            - match_target
            - metric
            - limit
            response_headers:
              Cache-Control: s-maxage=30
            no_metrics: false
            reqrewriter: []
        negative_cache_name: default
        timeseries_ttl_ms: 21600000
        fastforward_ttl_ms: 15000
        max_ttl_ms: 86400000
        revalidation_factor: 2
        max_object_size_bytes: 524288
        tracing_name: default
        prometheus:
          labels:
            region: zone-02
        tls: {}
        forwarded_headers: standard
        reqrewriter: []
    caches:
      default:
        provider: memory
        index:
          reap_interval_ms: 3000
          flush_interval_ms: 5000
          max_size_bytes: 536870912
          max_size_backoff_bytes: 16777216
          max_size_backoff_objects: 100
        redis:
          client_type: standard
          protocol: tcp
          endpoint: redis:6379
          endpoints:
          - redis:6379
        filesystem:
          cache_path: /tmp/trickster
        bbolt:
          filename: trickster.db
          bucket: trickster
        badger:
          directory: /tmp/trickster
          value_directory: /tmp/trickster
    frontend:
      listen_port: 9090
      tls_listen_port: 8483
    logging:
      log_level: debug
    metrics:
      listen_port: 8481
    tracing:
      default:
        provider: none
        service_name: trickster
        sample_rate: 1
        stdout: {}
        jaeger: {}
    negative_caches:
      default: {}
    reloading:
      listen_address: 127.0.0.1
      listen_port: 8484
      handler_path: /trickster/config/reload
      drain_timeout_ms: 30000
      rate_limit_ms: 3000
    
    opened by altokarev 3
  • Request Rewrite Example

    For something like:

    rules:
      example-user-router:
        # default route is reader cluster
        next_route: example-reader-cluster
    
        input_source: path
        input_type: string
        operation: rmatch      # perform a regex match against the path to see if it matches 'writer'
        operation_arg: '^.*%7Bmylabel%3D%22([a-z0-9]{3})%22.*$'
        cases:
          rmatch-true:
            matches: [ 'true' ] # rmatch returns true when the input matches the regex; update next_route
            next_route: example-writer-cluster
    

    I'm trying to take the capture group above and add it into the hostname as a prefix, basically like a relabel_config in Prometheus. I didn't see any examples of how to do it. Most of the examples for rewrites involve fixed values, like here: https://github.com/trickstercache/trickster/blob/main/docs/request_rewriters.md#header-set

    Looking through the source: https://github.com/trickstercache/trickster/blob/0cf4524824c6916aade49dbd99ef360e61426289/pkg/proxy/request/rewriter/rewrite_instructions_test.go#L71

    Looks like that uses a ${trickster} var; I don't see where it originates.

    opened by base698 0
  • POST requests don't cache. (v2)

    Hi

    I'm running trickster v2.0-beta2 and I am trying to cache POST requests with JSON data. I can't seem to get trickster to cache the POSTs. The response gets this header:

    X-Trickster-Result: engine=HTTPProxy; status=proxy-only
    

    My request goes to http://trickstercache/api/query

    This is my config

    main:
      config_handler_path: /trickster/config
      ping_handler_path: /trickster/ping
      health_handler_path: /trickster/health
    frontend:
      listen_port: 8480
    backends:
      default:
        provider: reverseproxycache
        origin_url: http://opentsdb-ro.opentsdb.svc.cluster.local:4242
        timeout_ms: 40000
        cache_name: default
        compressable_types:
          - text/javascript, text/css, text/plain, text/xml, text/json, application/json, application/javascript, application/xml
        healthcheck:
          verb: GET
          path: /api/version
        paths:
          query:
            path: /api/query
            methods: [ GET, POST ]
            match_type: prefix
            handler: proxycache
    caches:
      default:
        provider: memory
        index:
          max_size_bytes: 536870912
    negative_caches:
      default:
        "400": 3000
        "404": 3000
        "500": 3000
        "502": 3000
    logging:
      log_level: info
    metrics:
      listen_port: 8481
    

    What am I doing wrong?

    opened by Raboo 0