ClickHouse HTTP proxy and load balancer

Overview


Chproxy is an HTTP proxy and load balancer for the ClickHouse database. It provides the following features:

  • May proxy requests to multiple distinct ClickHouse clusters depending on the input user. For instance, requests from appserver user may go to stats-raw cluster, while requests from reportserver user may go to stats-aggregate cluster.
  • May map input users to per-cluster users. This prevents exposing the real usernames and passwords used in ClickHouse clusters. Additionally, this allows mapping multiple distinct input users to a single ClickHouse user.
  • May accept incoming requests via HTTP and HTTPS.
  • May limit HTTP and HTTPS access by IP/IP-mask lists.
  • May limit per-user access by IP/IP-mask lists.
  • May limit per-user query duration. Timed out or canceled queries are forcibly killed via KILL QUERY.
  • May limit per-user requests rate.
  • May limit per-user number of concurrent requests.
  • All the limits may be independently set for each input user and for each per-cluster user.
  • May delay request execution until it fits per-user limits.
  • Per-user response caching may be configured.
  • Response caches have built-in protection against the thundering herd problem, a.k.a. dogpile effect.
  • Evenly spreads requests among replicas and nodes using least loaded + round robin technique.
  • Monitors node health and prevents sending requests to unhealthy nodes.
  • Supports automatic HTTPS certificate issuing and renewal via Let’s Encrypt.
  • May proxy requests to each configured cluster via either HTTP or HTTPS.
  • Prepends User-Agent request header with remote/local address and in/out usernames before proxying it to ClickHouse, so this info may be queried from system.query_log.http_user_agent.
  • Exposes various useful metrics in prometheus text format.
  • Configuration may be updated without restart - just send SIGHUP signal to chproxy process.
  • Easy to manage and run - just pass config file path to a single chproxy binary.
  • Easy to configure:
server:
  http:
    listen_addr: ":9090"
    allowed_networks: ["127.0.0.0/24"]

users:
  - name: "default"
    to_cluster: "default"
    to_user: "default"

# by default each cluster has `default` user which can be overridden by section `users`
clusters:
  - name: "default"
    nodes: ["127.0.0.1:8123"]

How to install

Precompiled binaries

Precompiled chproxy binaries are available here. Just download the latest stable binary, unpack and run it with the desired config:

./chproxy -config=/path/to/config.yml

Building from source

Chproxy is written in Go. The easiest way to install it from sources is:

go get -u github.com/Vertamedia/chproxy

If you don't have Go installed on your system, follow this guide.

Why it was created

ClickHouse may exceed max_execution_time and max_concurrent_queries limits due to various reasons:

  • max_execution_time may be exceeded due to the current implementation deficiencies.
  • max_concurrent_queries works only on a per-node basis. There is no way to limit the number of concurrent queries on a cluster if queries are spread across cluster nodes.

Such "leaky" limits may lead to high resource usage on all the cluster nodes. After facing this problem we had to maintain two distinct HTTP proxies in front of our ClickHouse cluster - one for spreading INSERTs among cluster nodes and another one for sending SELECTs to a dedicated node where limits could be enforced somehow. This was fragile and inconvenient to manage, so chproxy was created :)

Use cases

Spread INSERTs among cluster shards

Usually INSERTs are sent from app servers located in a limited number of subnetworks. INSERTs from other subnetworks must be denied.

All the INSERTs may be routed to a distributed table on a single node. But this increases resource usage (CPU and network) on that node compared to the other nodes, since it must parse each row to be inserted and route it to the corresponding node (shard).

It would be better to spread INSERTs among available shards and to route them directly to per-shard tables instead of distributed tables. The routing logic may be embedded either directly into the applications generating INSERTs or moved to a proxy. The proxy approach is better, since it allows reconfiguring the ClickHouse cluster without modifying application configs and without application downtime. Multiple identical proxies may be started on distinct servers for scalability and availability purposes.

The following minimal chproxy config may be used for this use case:

server:
  http:
      listen_addr: ":9090"

      # Networks with application servers.
      allowed_networks: ["10.10.1.0/24"]

users:
  - name: "insert"
    to_cluster: "stats-raw"
    to_user: "default"

clusters:
  - name: "stats-raw"

    # Requests are spread in `round-robin` + `least-loaded` fashion among nodes.
    # Unreachable and unhealthy nodes are skipped.
    nodes: [
      "10.10.10.1:8123",
      "10.10.10.2:8123",
      "10.10.10.3:8123",
      "10.10.10.4:8123"
    ]

Spread SELECTs from reporting apps among cluster nodes

Reporting apps usually generate various customer reports from SELECT query results. The load generated by such SELECTs on ClickHouse cluster may vary depending on the number of online customers and on the generated report types. It is obvious that the load must be limited in order to prevent cluster overload.

All the SELECTs may be routed to a distributed table on a single node. But this increases resource usage (RAM, CPU and network) on that node compared to the other nodes, since it must do the final aggregation, sorting and filtering of the data obtained from cluster nodes (shards).

It would be better to create identical distributed tables on each shard and spread SELECTs among all the available shards.

The following minimal chproxy config may be used for this use case:

server:
  http:
      listen_addr: ":9090"

      # Networks with reporting servers.
      allowed_networks: ["10.10.2.0/24"]

users:
  - name: "report"
    to_cluster: "stats-aggregate"
    to_user: "readonly"
    max_concurrent_queries: 6
    max_execution_time: 1m

clusters:
  - name: "stats-aggregate"
    nodes: [
      "10.10.20.1:8123",
      "10.10.20.2:8123"
    ]
    users:
      - name: "readonly"
        password: "****"

Authorize users by passwords via HTTPS

Suppose you need to access a ClickHouse cluster from anywhere by username/password. This may be used for building graphs from ClickHouse-grafana or tabix. It is a bad idea to transfer unencrypted passwords and data over untrusted networks, so HTTPS must be used for accessing the cluster in such cases. The following chproxy config may be used for this use case:

server:
  https:
    listen_addr: ":443"
    autocert:
      cache_dir: "certs_dir"

users:
  - name: "web"
    password: "****"
    to_cluster: "stats-raw"
    to_user: "web"
    max_concurrent_queries: 2
    max_execution_time: 30s
    requests_per_minute: 10
    deny_http: true

    # Allow `CORS` requests for `tabix`.
    allow_cors: true

    # Enable requests queueing - `chproxy` will queue up to `max_queue_size`
    # of incoming requests for up to `max_queue_time` until they stop exceeding
    # the current limits.
    # This allows gracefully handling request bursts when more than
    # `max_concurrent_queries` concurrent requests arrive.
    max_queue_size: 40
    max_queue_time: 25s

    # Enable response caching. See cache config below.
    cache: "shortterm"

clusters:
  - name: "stats-raw"
    nodes: [
     "10.10.10.1:8123",
     "10.10.10.2:8123",
     "10.10.10.3:8123",
     "10.10.10.4:8123"
    ]
    users:
      - name: "web"
        password: "****"

caches:
  - name: "shortterm"
    dir: "/path/to/cache/dir"
    max_size: 150Mb

    # Cached responses will expire in 130s.
    expire: 130s

All the above configs combined

All the above cases may be combined in a single chproxy config:

server:
  http:
      listen_addr: ":9090"
      allowed_networks: ["10.10.1.0/24","10.10.2.0/24"]
  https:
    listen_addr: ":443"
    autocert:
      cache_dir: "certs_dir"

users:
  - name: "insert"
    allowed_networks: ["10.10.1.0/24"]
    to_cluster: "stats-raw"
    to_user: "default"

  - name: "report"
    allowed_networks: ["10.10.2.0/24"]
    to_cluster: "stats-aggregate"
    to_user: "readonly"
    max_concurrent_queries: 6
    max_execution_time: 1m

  - name: "web"
    password: "****"
    to_cluster: "stats-raw"
    to_user: "web"
    max_concurrent_queries: 2
    max_execution_time: 30s
    requests_per_minute: 10
    deny_http: true
    allow_cors: true
    max_queue_size: 40
    max_queue_time: 25s
    cache: "shortterm"

clusters:
  - name: "stats-aggregate"
    nodes: [
      "10.10.20.1:8123",
      "10.10.20.2:8123"
    ]
    users:
    - name: "readonly"
      password: "****"

  - name: "stats-raw"
    nodes: [
     "10.10.10.1:8123",
     "10.10.10.2:8123",
     "10.10.10.3:8123",
     "10.10.10.4:8123"
    ]
    users:
      - name: "default"

      - name: "web"
        password: "****"

caches:
  - name: "shortterm"
    dir: "/path/to/cache/dir"
    max_size: 150Mb
    expire: 130s

Configuration

Server

Chproxy may accept requests over HTTP and HTTPS protocols. HTTPS must be configured with a custom certificate or with automated Let's Encrypt certificates.

Access to chproxy can be limited by a list of IPs or IP masks. This option can be applied to HTTP, HTTPS, metrics, user or cluster-user.

Users

There are two types of users: in-users (in the global section) and out-users (in the cluster section). Each request is first matched to an in-user; if all checks pass, it is then mapped to an out-user, whose credentials replace the input ones.

Suppose we have one ClickHouse user web with read-only permissions and a max_concurrent_queries: 4 limit. There are two distinct applications reading from ClickHouse. We may create two distinct in-users with to_user: "web" and max_concurrent_queries: 2 each in order to avoid a situation where a single application exhausts the whole 4-query limit on the web user.
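A minimal sketch of that setup (the user, cluster and node names here are illustrative):

```yaml
users:
  - name: "app1"
    to_cluster: "stats"
    to_user: "web"
    max_concurrent_queries: 2

  - name: "app2"
    to_cluster: "stats"
    to_user: "web"
    max_concurrent_queries: 2

clusters:
  - name: "stats"
    nodes: ["127.0.0.1:8123"]
    users:
      - name: "web"
        password: "****"
        max_concurrent_queries: 4
```

Each application authenticates as its own in-user, while ClickHouse only ever sees the shared web user.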

Requests to chproxy must be authorized with credentials from user_config. Credentials can be passed via BasicAuth or via user and password query string args.
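As an illustration, both forms of passing credentials can be constructed with Go's standard library (the address, username and password below are placeholders):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// basicAuthHeader shows the Authorization header chproxy receives when
// credentials are sent via BasicAuth.
func basicAuthHeader(user, password string) string {
	req, _ := http.NewRequest("GET", "http://127.0.0.1:9090/", nil)
	req.SetBasicAuth(user, password)
	return req.Header.Get("Authorization")
}

// queryStringURL builds the equivalent request URL passing credentials
// via the `user` and `password` query string args.
func queryStringURL(user, password, query string) string {
	q := url.Values{}
	q.Set("user", user)
	q.Set("password", password)
	q.Set("query", query)
	return "http://127.0.0.1:9090/?" + q.Encode()
}

func main() {
	fmt.Println(basicAuthHeader("web", "****"))
	fmt.Println(queryStringURL("web", "****", "SELECT 1"))
}
```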

Limits for in-users and out-users are independent.

Clusters

Chproxy can be configured with multiple clusters. Each cluster must have a name and either a list of nodes or a list of replicas with nodes. See cluster-config for details. Requests to each cluster are balanced among replicas and nodes using a round-robin + least-loaded approach. A node's priority is automatically decreased for a short interval if recent requests to it were unsuccessful. This means that for every new request chproxy chooses the least loaded healthy node from the least loaded replica.
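The actual balancing logic lives in the chproxy source; as a rough sketch of the idea (node addresses and counters are made up), the selection can be thought of as:

```go
package main

import "fmt"

// node is a simplified stand-in for a cluster node: an address plus
// the number of requests currently in flight.
type node struct {
	addr     string
	inFlight int
}

// pickNode sketches the `least loaded + round robin` technique: scan the
// nodes starting from a rotating offset and return the one with the fewest
// in-flight requests. The rotating start breaks ties in round-robin order.
func pickNode(nodes []node, start int) *node {
	var best *node
	for i := 0; i < len(nodes); i++ {
		n := &nodes[(start+i)%len(nodes)]
		if best == nil || n.inFlight < best.inFlight {
			best = n
		}
	}
	return best
}

func main() {
	nodes := []node{
		{"10.10.10.1:8123", 2},
		{"10.10.10.2:8123", 0},
		{"10.10.10.3:8123", 1},
	}
	fmt.Println(pickNode(nodes, 0).addr)
}
```

Chproxy additionally skips unhealthy and penalized nodes, which this sketch omits.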

Additionally each node is periodically checked for availability. Unavailable nodes are automatically excluded from the cluster until they become available again. This allows performing node maintenance without removing unavailable nodes from the cluster config.

Chproxy automatically kills queries exceeding the max_execution_time limit. By default chproxy tries to kill such queries under the default user. The user may be overridden with kill_query_user.

If the cluster's users section isn't specified, then the default user is used with no limits.

Caching

Chproxy may be configured to cache responses. It is possible to create multiple cache configs with various settings. Response caching is enabled by assigning a cache name to a user. Multiple users may share the same cache. Currently only SELECT responses are cached. Caching is disabled for requests with no_cache=1 in the query string. An optional cache namespace may be passed in the query string as cache_namespace=aaaa. This allows caching distinct responses for the identical query under distinct cache namespaces. Additionally, an instant cache flush may be built on top of cache namespaces - just switch to a new namespace in order to flush the cache.
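Both switches are plain query-string parameters. A sketch of the resulting URLs, which could be passed to curl or any HTTP client (endpoint, credentials and namespace value are illustrative):

```shell
# Illustrative chproxy endpoint and credentials.
CHPROXY='http://127.0.0.1:9090'
AUTH='user=web&password=****'

# Bypass the cache for this request only.
url_nocache="${CHPROXY}/?${AUTH}&no_cache=1&query=SELECT%201"

# Cache the response under a dedicated namespace; switching to a new
# namespace value effectively flushes previously cached responses.
url_ns="${CHPROXY}/?${AUTH}&cache_namespace=reports-v2&query=SELECT%201"

echo "$url_nocache"
echo "$url_ns"
```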

Security

Chproxy removes all the query params from input requests (except the user's params and those listed here) before proxying them to ClickHouse nodes. This prevents unsafe overriding of various ClickHouse settings.

Be careful when configuring limits, allowed networks, passwords etc. By default chproxy tries to detect the most obvious configuration errors, such as allowed_networks: ["0.0.0.0/0"] or sending passwords via unencrypted HTTP.

Special option hack_me_please: true may be used for disabling all the security-related checks during config validation (if you are feeling lucky :) ).

Example of full configuration:

# Whether to print debug logs.
#
# By default debug logs are disabled.
log_debug: true

# Whether to ignore security checks during config parsing.
#
# By default security checks are enabled.
hack_me_please: true

# Optional response cache configs.
#
# Multiple distinct caches with different settings may be configured.
caches:
    # Cache name, which may be passed into `cache` option on the `user` level.
    #
    # Multiple users may share the same cache.
  - name: "longterm"

    # Path to directory where cached responses will be stored.
    dir: "/path/to/longterm/cachedir"

    # Maximum cache size.
    # `Kb`, `Mb`, `Gb` and `Tb` suffixes may be used.
    max_size: 100Gb

    # Expiration time for cached responses.
    expire: 1h

    # When multiple requests with identical query simultaneously hit `chproxy`
    # and there is no cached response for the query, then only a single
    # request will be proxied to clickhouse. Other requests will wait
    # for the cached response during this grace duration.
    # This is known as protection from `thundering herd` problem.
    #
    # By default `grace_time` is 5s. Negative value disables the protection
    # from `thundering herd` problem.
    grace_time: 20s

  - name: "shortterm"
    dir: "/path/to/shortterm/cachedir"
    max_size: 100Mb
    expire: 10s

# Optional network lists, might be used as values for `allowed_networks`.
network_groups:
  - name: "office"
    # Each item may contain either IP or IP subnet mask.
    networks: ["127.0.0.0/24", "10.10.0.1"]

  - name: "reporting-apps"
    networks: ["10.10.10.0/24"]

# Optional lists of query params to send with each proxied request to ClickHouse.
# These lists may be used for overriding ClickHouse settings on a per-user basis.
param_groups:
    # Group name, which may be passed into `params` option on the `user` level.
  - name: "cron-job"
    # List of key-value params to send
    params:
      - key: "max_memory_usage"
        value: "40000000000"

      - key: "max_bytes_before_external_group_by"
        value: "20000000000"

  - name: "web"
    params:
      - key: "max_memory_usage"
        value: "5000000000"

      - key: "max_columns_to_read"
        value: "30"

      - key: "max_execution_time"
        value: "30"

# Settings for `chproxy` input interfaces.
server:
  # Configs for input http interface.
  # The interface works only if this section is present.
  http:
    # TCP address to listen to for http.
    # May be in the form IP:port . IP part is optional.
    listen_addr: ":9090"

    # List of allowed networks or network_groups.
    # Each item may contain IP address, IP subnet mask or a name
    # from `network_groups`.
    # By default requests are accepted from all the IPs.
    allowed_networks: ["office", "reporting-apps", "1.2.3.4"]

    # ReadTimeout is the maximum duration for the proxy to read the entire
    # request, including the body.
    # Default value is 1m.
    read_timeout: 5m

    # WriteTimeout is the maximum duration before the proxy times out writes of the response.
    # Default is the largest MaxExecutionTime + MaxQueueTime value from Users or Clusters.
    write_timeout: 10m

    # IdleTimeout is the maximum amount of time for proxy to wait for the next request.
    # Default is 10m
    idle_timeout: 20m

  # Configs for input https interface.
  # The interface works only if this section is present.
  https:
    # TCP address to listen to for https.
    listen_addr: ":443"

    # Paths to TLS cert and key files.
    # cert_file: "cert_file"
    # key_file: "key_file"

    # Letsencrypt config.
    # Certificates are automatically issued and renewed if this section
    # is present.
    # There is no need for cert_file and key_file if this section is present.
    # Autocert requires the application to listen on port :80 for certificate generation.
    autocert:
      # Path to the directory where autocert certs are cached.
      cache_dir: "certs_dir"

      # The list of host names proxy is allowed to respond to.
      # See https://godoc.org/golang.org/x/crypto/acme/autocert#HostPolicy
      allowed_hosts: ["example.com"]

  # Metrics in prometheus format are exposed on the `/metrics` path.
  # Access to `/metrics` endpoint may be restricted in this section.
  # By default access to `/metrics` is unrestricted.
  metrics:
    allowed_networks: ["office"]

# Configs for input users.
users:
    # Name and password are used to authorize access via BasicAuth or
    # via `user`/`password` query params.
    # Password is optional. By default empty password is used.
  - name: "web"
    password: "****"

    # Requests from the user are routed to this cluster.
    to_cluster: "first cluster"

    # Input user is substituted by the given output user from `to_cluster`
    # before proxying the request.
    to_user: "web"

    # Whether to deny input requests over HTTP.
    deny_http: true

    # Whether to allow `CORS` requests like `tabix` does.
    # By default `CORS` requests are denied for security reasons.
    allow_cors: true

    # Requests per minute limit for the given input user.
    #
    # By default there is no per-minute limit.
    requests_per_minute: 4

    # Response cache config name to use.
    #
    # By default responses aren't cached.
    cache: "longterm"

    # An optional group of params to send to ClickHouse with each proxied request.
    # These params may be set in param_groups block.
    #
    # By default no additional params are sent to ClickHouse.
    params: "web"

    # The maximum number of requests that may wait for their chance
    # to be executed because they cannot run now due to the current limits.
    #
    # This option may be useful for handling request bursts from `tabix`
    # or `clickhouse-grafana`.
    #
    # By default all the requests are immediately executed without
    # waiting in the queue.
    max_queue_size: 100

    # The maximum duration the queued requests may wait for their chance
    # to be executed.
    # This option makes sense only if max_queue_size is set.
    # By default requests wait for up to 10 seconds in the queue.
    max_queue_time: 35s

  - name: "default"
    to_cluster: "second cluster"
    to_user: "default"
    allowed_networks: ["office", "1.2.3.0/24"]

    # The maximum number of concurrently running queries for the user.
    #
    # By default there is no limit on the number of concurrently
    # running queries.
    max_concurrent_queries: 4

    # The maximum query duration for the user.
    # Timed out queries are forcibly killed via `KILL QUERY`.
    #
    # By default there is no limit on the query duration.
    max_execution_time: 1m

    # Whether to deny input requests over HTTPS.
    deny_https: true

# Configs for ClickHouse clusters.
clusters:
    # The cluster name is used in `to_cluster`.
  - name: "first cluster"

    # Protocol to use for communicating with cluster nodes.
    # Currently supported values are `http` or `https`.
    # By default `http` is used.
    scheme: "http"

    # Cluster node addresses.
    # Requests are evenly distributed among them.
    nodes: ["127.0.0.1:8123", "shard2:8123"]

    # DEPRECATED: use `heartbeat.interval` instead.
    # Each cluster node is checked for availability using this interval.
    # By default each node is checked every 5 seconds.
    heartbeat_interval: 1m

    # User configuration for heartbeat requests.
    # Credentials of the first user in `clusters.users` are used
    # for heartbeat requests to ClickHouse.
    heartbeat:
      # Interval for checking all cluster nodes for availability.
      # By default each node is checked every 5 seconds.
      interval: 1m

      # Timeout for waiting for a response from cluster nodes.
      # By default 3s.
      timeout: 10s

      # The URI to request in a health check.
      # By default "/?query=SELECT%201".
      request: "/?query=SELECT%201%2B1"

      # Expected response from ClickHouse to the health-check request.
      # By default "1\n".
      response: "2\n"

    # Timed out queries are killed using this user.
    # By default `default` user is used.
    kill_query_user:
      name: "default"
      password: "***"

    # Configuration for cluster users.
    users:
        # The user name is used in `to_user`.
      - name: "web"
        password: "password"
        max_concurrent_queries: 4
        max_execution_time: 1m

  - name: "second cluster"
    scheme: "https"

    # The cluster may contain multiple replicas instead of flat nodes.
    #
    # Chproxy selects the least loaded node among the least loaded replicas.
    replicas:
      - name: "replica1"
        nodes: ["127.0.1.1:8443", "127.0.1.2:8443"]
      - name: "replica2"
        nodes: ["127.0.2.1:8443", "127.0.2.2:8443"]

    users:
      - name: "default"
        max_concurrent_queries: 4
        max_execution_time: 1m

      - name: "web"
        max_concurrent_queries: 4
        max_execution_time: 10s
        requests_per_minute: 10
        max_queue_size: 50
        max_queue_time: 70s
        allowed_networks: ["office"]

Full specification is located here

Metrics

Metrics are exposed in Prometheus text format at the /metrics path.

Name | Type | Description | Labels
bad_requests_total | Counter | The number of unsupported requests |
cache_hits_total | Counter | The amount of cache hits | cache, user, cluster, cluster_user
cache_items | Gauge | The number of items in each cache | cache
cache_miss_total | Counter | The amount of cache misses | cache, user, cluster, cluster_user
cache_size | Gauge | Size of each cache | cache
cached_response_duration_seconds | Summary | Duration for cached responses. Includes the duration for sending response to client | cache, user, cluster, cluster_user
canceled_request_total | Counter | The number of requests canceled by remote client | user, cluster, cluster_user, replica, cluster_node
cluster_user_queue_overflow_total | Counter | The number of overflows for per-cluster_user request queues | user, cluster, cluster_user
concurrent_limit_excess_total | Counter | The number of rejected requests due to max_concurrent_queries limit | user, cluster, cluster_user, replica, cluster_node
concurrent_queries | Gauge | The number of concurrent queries at the moment | user, cluster, cluster_user, replica, cluster_node
config_last_reload_successful | Gauge | Whether the last configuration reload attempt was successful |
config_last_reload_success_timestamp_seconds | Gauge | Timestamp of the last successful configuration reload |
host_health | Gauge | Health state of hosts by clusters | cluster, replica, cluster_node
host_penalties_total | Counter | The number of given penalties by host | cluster, replica, cluster_node
killed_request_total | Counter | The number of requests killed by proxy | user, cluster, cluster_user, replica, cluster_node
proxied_response_duration_seconds | Summary | Duration for responses proxied from clickhouse | user, cluster, cluster_user, replica, cluster_node
request_body_bytes_total | Counter | The amount of bytes read from request bodies | user, cluster, cluster_user, replica, cluster_node
request_duration_seconds | Summary | Request duration. Includes possible queue wait time | user, cluster, cluster_user, replica, cluster_node
request_queue_size | Gauge | Request queue size at the moment | user, cluster, cluster_user
request_success_total | Counter | The number of successfully proxied requests | user, cluster, cluster_user, replica, cluster_node
request_sum_total | Counter | The number of processed requests | user, cluster, cluster_user, replica, cluster_node
response_body_bytes_total | Counter | The amount of bytes written to response bodies | user, cluster, cluster_user, replica, cluster_node
status_codes_total | Counter | Distribution by response status codes | user, cluster, cluster_user, replica, cluster_node, code
timeout_request_total | Counter | The number of timed out requests | user, cluster, cluster_user, replica, cluster_node
user_queue_overflow_total | Counter | The number of overflows for per-user request queues | user, cluster, cluster_user
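Since these metrics use the standard Prometheus exposition format, no special scraping setup is needed; a minimal Prometheus scrape config might look like this (the job name and target address are illustrative):

```yaml
scrape_configs:
  - job_name: "chproxy"
    static_configs:
      - targets: ["chproxy.host:9090"]
```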

An example of Grafana's dashboard for chproxy metrics is available here


FAQ

  • Is chproxy production ready?

    Yes, we successfully use it in production for both INSERT and SELECT requests.

  • What about chproxy performance?

    A single chproxy instance easily proxies 1Gbps of compressed INSERT data while using less than 20% of a single CPU core in our production setup.

  • Does chproxy support native interface for ClickHouse?

    No, because currently all our services work with ClickHouse only via HTTP. Support for the native interface may be added in the future.

Issues
  • Support compression transfer data

    I use clickhouse-jdbc to write data to ClickHouse (1.1.54318), with the properties compress and decompress set to true. The JDBC driver uses LZ4 to compress and decompress the data, and this works when talking to ClickHouse (1.1.54318) directly. When I connect through chproxy, it doesn't work properly.

    error log

    Caused by: java.io.IOException: Magic is not correct: 103
    	at ru.yandex.clickhouse.response.ClickHouseLZ4Stream.readNextBlock(ClickHouseLZ4Stream.java:93) ~[clickhouse-jdbc-0.1.34.jar:na]
    	at ru.yandex.clickhouse.response.ClickHouseLZ4Stream.checkNext(ClickHouseLZ4Stream.java:74) ~[clickhouse-jdbc-0.1.34.jar:na]
    	at ru.yandex.clickhouse.response.ClickHouseLZ4Stream.read(ClickHouseLZ4Stream.java:50) ~[clickhouse-jdbc-0.1.34.jar:na]
    	at ru.yandex.clickhouse.response.StreamSplitter.readFromStream(StreamSplitter.java:85) ~[clickhouse-jdbc-0.1.34.jar:na]
    	at ru.yandex.clickhouse.response.StreamSplitter.next(StreamSplitter.java:47) ~[clickhouse-jdbc-0.1.34.jar:na]
    	at ru.yandex.clickhouse.response.ClickHouseResultSet.<init>(ClickHouseResultSet.java:65) ~[clickhouse-jdbc-0.1.34.jar:na]
    	at ru.yandex.clickhouse.ClickHouseStatementImpl.executeQuery(ClickHouseStatementImpl.java:117) ~[clickhouse-jdbc-0.1.34.jar:na]
    

    config.yml

    hack_me_please: false
    
    server:
      http:
          listen_addr: "0.0.0.0:9090"
          allowed_networks: ["192.168.1.0/24"]
      metrics:
          allowed_networks: ["192.168.1.0/24"]
    
    users:
      - name: "default"
        to_cluster: "default"
        to_user: "default"
        allow_cors: true
    
    enhancement 
    opened by tacyuuhon 8
  • fix: fix the bytes encode/decode for redis cache

    Using string(data) to convert the byte array to a string introduces errors in JSON marshal/unmarshal, and hence causes errors when returning a cached response from redis.

    The reason is that the Unmarshal function in encoding/json replaces invalid UTF-8 or invalid UTF-16 pairs with U+FFFD. Since the byte array may contain invalid UTF-8/UTF-16 bytes, the payload string in redisCachePayload changes after a JSON marshal/unmarshal round trip, and so does its length. As a result the HTTP server finds that the length declared in the Content-Length header mismatches the actual length of the payload.

    The fix is to base64-encode/decode the byte array to/from a string, which eliminates invalid UTF-8/UTF-16 bytes.

    size/M 
    opened by wangxinalex 6
  • Long and Boring Query Permanent Cache Miss

    I have a long and boring query which is never cached by chproxy, and I don't understand why. Enabling debugging doesn't provide any clarity either; the issue reproduces manually by running the same query again and again via chproxy. Thanks for the help.

    The query has a WITH statement inside; an example looks as below:

    WITH analyse AS ( SELECT * from (
        SELECT day, cc.id[1] as e1, cc.id[2] as e2, sum(score) as pscore
        from (
             select toStartOfHour(publish_date) as day, cc.id, sum(cc.score) as score
             from (
                      SELECT internal_id,
                             publish_date,
                             f.cc.score,
                             f.cc.id,
                             f.cc.label,
                             f.cc.ss,
                            groupArray(CASE WHEN ((id IN (19864582))) THEN 1 ELSE 0 END) as c1_1, 
                            groupArray(CASE WHEN((id IN (19864582))) THEN kpi.ss ELSE 0.0 END) as c1_1ss                  
                            FROM articles_ext_data f
                            array join kpis as kpi
                            JOIN entities e ON kpi.id = e.id  WHERE  (f.is_near_duplicate = 0)
                            AND ((publish_date >= toDateTime('2020-10-27 00:00:00')) AND (publish_date < toDateTime('2021-01-25 00:00:00'))) AND ((id IN (19864582)))
                      GROUP BY internal_id, publish_date, f.cc.score, f.cc.id, f.cc.label,f.cc.ss
                      )
                      array join cc
             where (((has(c1_1, 1))))
               AND (cc.label = 'label')
             group by day, cc.id
             order by score desc, cc.id desc, day desc
                )
        GROUP BY day, e1, e2 ) t
        join entities en1 on (t.e1 = en1.id)
        join entities en2 on (t.e2 = en2.id)
        WHERE  ((en1.type = 'CC' AND (en2.id IN (19864582))) OR (en2.type = 'CC' AND (en1.id IN (19864582))))
    ) SELECT * FROM (
    select tupleElement(result,2) as id, tupleElement(result,3) as name, tupleElement(result,4) as entityDescription, sum(tupleElement(result,5)) as proximityScore, group from (
        select multiIf( ((en1.id IN (19864582))) AND NOT ((en2.id IN (19864582))), tuple (t.day, en2.id, en2.name, en2.description, t.pscore),
        ((en2.id IN (19864582))) AND NOT ((en1.id IN (19864582))), tuple (t.day, en1.id, en1.name, en1.description, t.pscore),
        tuple (NULL, NULL, NULL , NULL, NULL)
        ) as result, 'R1' as group from analyse where (en1.year_death is NULL and en2.year_death is NULL)
    
        )     where tupleElement(result,1) is not NULL group by id, name, entityDescription, group order by proximityScore desc limit 200 by group
    );
    
    bug help wanted 
    opened by pavelnemirovsky 6
  • invalid username or password for user "default"

    Hello, I am trying to use chproxy with our four-node ClickHouse cluster. Following are my config and the error I am getting.

    I installed chproxy by running the command as specified

    go get -u github.com/Vertamedia/chproxy
    

    config.yml

    $cat config.yml  
    server:
      http:
        listen_addr: :9090
        allowed_networks:
        - 127.0.0.1/32
        forceautocerthandler: false
        read_timeout: 1m
        write_timeout: 2m
        idle_timeout: 10m
    clusters:
    - name: admin
      scheme: http
      nodes:
      - ip1:8123
      - ip2:8123
      - ip3:8123
      - ip4:8123
      users:
      - name: admin
        password: myadmin_pass
      heartbeat:
        interval: 5s
        timeout: 3s
        request: /?query=SELECT%201
        response: |
          1
    users:
    - name: admin
      password: myadmin_pass
      to_cluster: admin
      to_user: admin
      max_concurrent_queries: 6
      max_execution_time: 1m
    log_debug: true
    

    When I run this, I get the following error:

     ./chproxy -config config.yml                                                                            
    INFO: 2020/11/09 17:39:08 main.go:44: chproxy ver. unknown, rev. unknown, built at unknown
    INFO: 2020/11/09 17:39:08 main.go:45: Loading config: config.yml
    INFO: 2020/11/09 17:39:08 main.go:272: Loaded config:
    server:
      http:
        listen_addr: :9090
        allowed_networks:
        - 127.0.0.1/32
        forceautocerthandler: false
        read_timeout: 1m
        write_timeout: 2m
        idle_timeout: 10m
    clusters:
    - name: admin
      scheme: http
      nodes:
      - ip1:8123
      - ip2:8123
      - ip3:8123
      - ip4:8123
      users:
      - name: admin
        password: XXX
      heartbeat:
        interval: 5s
        timeout: 3s
        request: /?query=SELECT%201
        response: |
          1
    users:
    - name: admin
      password: XXX
      to_cluster: admin
      to_user: admin
      max_concurrent_queries: 6
      max_execution_time: 1m
    log_debug: true
    INFO: 2020/11/09 17:39:08 main.go:53: Loading config "config.yml": successful
    INFO: 2020/11/09 17:39:08 main.go:152: Serving http on ":9090"
    ERROR: 2020/11/09 17:39:14 proxy.go:59: "127.0.0.1:62293": invalid username or password for user "default"; query: ""
    

    Please let me know how I can fix this issue.

    question 
    opened by shashankkoppar 6
  • Connection exception


    I'm using a JDBC client written in Java. When I run a long-running query I get this exception:

    Caused by: java.sql.SQLException: org.apache.http.TruncatedChunkException: Truncated chunk ( expected size: 4096; actual size: 4088)
    	at ru.yandex.clickhouse.response.ClickHouseResultSet.hasNext(ClickHouseResultSet.java:129)
    	at ru.yandex.clickhouse.response.ClickHouseResultSet.next(ClickHouseResultSet.java:143)
    	at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:93)
    	at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:61)
    	at org.springframework.jdbc.core.JdbcTemplate$1.doInPreparedStatement(JdbcTemplate.java:679)
    	at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:617)
    	... 45 more
    Caused by: org.apache.http.TruncatedChunkException: Truncated chunk ( expected size: 4096; actual size: 4088)
    	at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:198)
    	at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
    	at java.io.FilterInputStream.read(FilterInputStream.java:133)
    	at com.google.common.io.ByteStreams.read(ByteStreams.java:859)
    	at com.google.common.io.ByteStreams.readFully(ByteStreams.java:738)
    	at com.google.common.io.ByteStreams.readFully(ByteStreams.java:722)
    	at com.google.common.io.LittleEndianDataInputStream.readFully(LittleEndianDataInputStream.java:65)
    	at ru.yandex.clickhouse.response.ClickHouseLZ4Stream.readNextBlock(ClickHouseLZ4Stream.java:101)
    	at ru.yandex.clickhouse.response.ClickHouseLZ4Stream.checkNext(ClickHouseLZ4Stream.java:74)
    	at ru.yandex.clickhouse.response.ClickHouseLZ4Stream.read(ClickHouseLZ4Stream.java:60)
    	at ru.yandex.clickhouse.response.StreamSplitter.readFromStream(StreamSplitter.java:92)
    	at ru.yandex.clickhouse.response.StreamSplitter.next(StreamSplitter.java:54)
    	at ru.yandex.clickhouse.response.ClickHouseResultSet.hasNext(ClickHouseResultSet.java:116)
    

    Here is my config:

    
    hack_me_please: true
    server:
      http:
        listen_addr: ":9090"
        read_timeout: 50m
    
    users:
      - name: ""
        password: ""
        to_cluster: ""
        to_user: ""
    
    clusters:
      - name: ""
        nodes: []
    

    I also tried the same query directly against the database and it works fine.
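One thing that may be worth checking (an assumption, not a confirmed diagnosis): chproxy's `write_timeout` also bounds how long the proxy keeps streaming the response back to the client, and a truncated chunk is consistent with the proxy closing the connection mid-stream. Setting an explicit `write_timeout` at least as long as the query plus transfer time might help:

```yaml
server:
  http:
    listen_addr: ":9090"
    read_timeout: 50m
    # write_timeout covers streaming the response to the client; if it is
    # shorter than query time + transfer time, the response can be cut off
    # mid-chunk.
    write_timeout: 60m
```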

    opened by AntonKuzValid 6
  • Cache not working between browsers?


    Is chproxy hashing the User-Agent string, or something similar, as part of the cache key? Even though I can force-refresh the same page many times and verify the result comes from the cache, if I switch to a different browser (or use cURL) the query gets re-run.

    opened by jpiper 6
  • Can't connect to Managed ClickHouse cluster in the Yandex.Cloud


    The only way to connect to the cluster is via HTTPS with the Yandex certificate. I am trying to connect via HTTPS to the rc1b-z5qstya9********.mdb.yandexcloud.net:8443 node, but I receive this error:

    ERROR: 2019/10/11 13:19:13 scope.go:638: error while health-checking "rc1b-z5qstya9********.mdb.yandexcloud.net:8443" host: cannot send request in 82.730564ms: Get https://rc1b-z5qstya9********.mdb.yandexcloud.net:8443: x509: certificate signed by unknown authority

    Scheme: "https" is set. But how can I add a certificate to the "cluster" section?

    Another solution would be to allow insecure HTTPS connections. Is that possible?

    question 
    opened by m-ves 4
  • Add new allowed parameters


    Using chproxy makes it uncomfortable to work with database tools like DBeaver or DbVisualizer, because they use query parameters to limit the amount of data shown.

    Example:

    POST /?compress=1&password=123&result_overflow_mode=break&extremes=0&max_result_rows=100&user=default&database=default HTTP/1.1
    Content-Length: 83
    Content-Type: text/plain; charset=UTF-8
    Host: localhost
    Connection: Keep-Alive
    User-Agent: Apache-HttpClient/4.5.2 (Java/1.8.0_181)
    
    SELECT * FROM "default"."table" FORMAT TabSeparatedWithNamesAndTypes;
    

    chproxy strips the parameters result_overflow_mode, extremes, and max_result_rows, so when I open a table in a database tool, ClickHouse tries to load ALL the data from the table instead of only 100 rows.

    Please add these parameters to the allowed list.

    opened by komex 4
  • Provide docker images


    Hi guys,

    I run chproxy with Docker in our production environment, and it is very convenient. Could you provide an official Docker image? I think it would be a good idea.

    Of course, I have also built a Docker image of chproxy myself. If you think it's good, you can use it directly: Dockerfile, Docker Hub.

    Merry Christmas :christmas_tree::christmas_tree::christmas_tree::tada::tada:

    enhancement 
    opened by tacyuuhon 4
  • [Feature Request] Round Time from Grafana


    Grafana generates queries with the time in seconds, which makes them very hard to cache (if I use a time range from some date to now, "now" changes every second).

    It would be very cool if you could round the date in the SQL with some options: by seconds, minutes, hours, or days. I think this feature could be implemented with a regex.

    What do you think?

    opened by iShift 4
  • Questions about chproxy setting https


    When I configure the HTTP listen_addr as ":9090":

    server:
      http:
        listen_addr: ":9090"
        allowed_networks: ["192.168.0.0/16","10.10.10.0/24"]
      https:
        listen_addr: ":443"
        autocert:
          cache_dir: "certs_dir"

    the following startup error is reported:

    FATAL: 2021/06/04 08:25:54 main.go:147: letsencrypt specification requires http server to listen on :80 port to satisfy http-01 challenge. Otherwise, certificates will be impossible to generate

    When the HTTP listen_addr is changed to ":80", startup is normal:

    server:
      http:
        listen_addr: ":80"
        allowed_networks: ["192.168.0.0/16","10.10.10.0/24"]
      https:
        listen_addr: ":443"
        autocert:
          cache_dir: "certs_dir"

    Two questions:
    (1) Why must the HTTP address listen on port 80?
    (2) When HTTPS is configured, how do I use JDBC to access ClickHouse?

    Thank You.

    question question answered 
    opened by zcw5116 3
  • High concurrency set session_id invalid


    When I start multiple threads sending SQL queries, queries with the same session_id are forwarded to different servers, but I want them to go to the same server. The log looks like:

    DEBUG: 2022/04/27 10:11:34 proxy.go:119: [ Id: 16E90FC31005B38F; User "aaa"(4) proxying as "default"(4) to "host1:8123"(3); *** Duration: 5324577 μs]: request success; query: "CREATE TABLE IF NOT EXISTS default.a ENGINE MergeTree() ORDER BY (tuple()) SELECT 1 AS id"; Method: POST; URL: "http://host1:8123/?&query=CREATE TABLE IF NOT EXISTS default.a ENGINE MergeTree() ORDER BY (tuple()) SELECT 1 AS id&query_id=16E90FC31005B38F&session_id=d86d6cfc-3052-43e1-a2a8-70da47f52364&session_timeout=60"
    DEBUG: 2022/04/27 10:11:41 proxy.go:121: [ Id: 16E90FC31005B399; User "aaa"(5) proxying as "default"(5) to "host2:8123"(2); *** Duration: 685558 μs]: request failure: non-200 status code 404; query: "SELECT * FROM default.a"; Method: POST; URL: "http://host2:8123/?session_id=d86d6cfc-3052-43e1-a2a8-70da47f52364&query=SELECT * FROM default.a&query_id=16E90FC31005B399&session_id=d86d6cfc-3052-43e1-a2a8-70da47f52364&session_timeout=60"
    

    And config is:

    log_debug: false
    hack_me_please: false
    caches:
      - name: "longterm"
        mode: "file_system"
        file_system:
          dir: "/var/lib/chproxy"
          max_size: 10Gb
        expire: 2h
        grace_time: 100s
    
    network_groups:
      - name: "test"
        networks: ["host"]
    
    param_groups:
      ***
    server:
      http:
        listen_addr: ":9090"
        allowed_networks: ["host"]
        read_timeout: 5m
        write_timeout: 10m
        idle_timeout: 15m
    
    users:
      - name: "aaa"
        password: "aaa"
        to_cluster: "cluster"
        to_user: "default"
        deny_http: false
        allow_cors: true
        max_queue_size: 100
        max_queue_time: 60s
        max_concurrent_queries: 5
        max_execution_time: 60s
    clusters:
      - name: "cluster"
        scheme: "http"
    
        nodes: ["host1:8123", "host2:8123"]
        heartbeat:
          interval: 15s
          timeout: 5s
          request: "/?query=SELECT%201%2B1"
          response: "2\n"
    
        kill_query_user:
          name: "default"
          password: ""
          
        users:
          - name: "default"
            password: ""
            max_queue_size: 100
            max_queue_time: 60s
            max_concurrent_queries: 5
            max_execution_time: 60s
    
    bug 
    opened by ddddddcf 3
  • Bump async from 2.6.3 to 2.6.4 in /docs


    Bumps async from 2.6.3 to 2.6.4.

    Changelog

    Sourced from async's changelog.

    v2.6.4

    • Fix potential prototype pollution exploit (#1828)
    Commits
    Maintainer changes

    This version was pushed to npm by hargasinski, a new releaser for async since your current version.


    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies size/XS javascript 
    opened by dependabot[bot] 1
  • Bump minimist from 1.2.5 to 1.2.6 in /docs


    Bumps minimist from 1.2.5 to 1.2.6.

    Commits


    dependencies size/XS javascript 
    opened by dependabot[bot] 1
  • (Redis cache) Add the possibility to have a specific cache per user


    For security reasons, some companies need each SQL user to have its own cache. Indeed, if ClickHouse is configured with ROW policies or other kinds of user rights, a user could gain access to forbidden data if the query was previously executed by another user with the right access and the result is already cached in chproxy.

    good first issue feature request 
    opened by mga-chka 0
  • Bump prismjs from 1.26.0 to 1.27.0 in /docs


    Bumps prismjs from 1.26.0 to 1.27.0.

    Release notes

    Sourced from prismjs's releases.

    v1.27.0

    Release 1.27.0

    Changelog

    Sourced from prismjs's changelog.

    1.27.0 (2022-02-17)

    New components

    Updated components

    Updated plugins

    Other

    • Core
      • Added better error message for missing grammars (#3311) 2cc4660b
    Commits


    dependencies size/XS javascript 
    opened by dependabot[bot] 1
Releases: v1.15.0
Owner: Vertamedia (Advertising Technology Company)