gobetween - modern & minimalistic load balancer and reverse-proxy for the ☁️ Cloud era.

Overview


Current status: Maintenance mode, accepting PRs. Currently in use in several highly loaded production environments.

Features

  • Fast L4 Load Balancing

  • Clear & Flexible Configuration with TOML or JSON

    • File - read configuration from the file
    • URL - query URL by HTTP and get configuration from the response body
    • Consul - query Consul key-value storage API for configuration
  • Management REST API

    • System Information - general server info
    • Configuration - dump current config
    • Servers - list, create & delete
    • Stats & Metrics - for servers and backends, including rx/tx, status, active connections, etc.
  • Discovery

    • Static - hardcode the backends list in the config file
    • Docker - query backends from the Docker / Swarm API, filtered by label (see the sample config after this list)
    • Exec - execute an arbitrary program and get backends from its stdout
    • JSON - query an arbitrary HTTP URL and pick backends from the response JSON (of any structure)
    • Plaintext - query an arbitrary HTTP URL and parse backends from the response text with a custom regexp
    • SRV - query a DNS server and get backends from SRV records
    • Consul - query the Consul Services API for backends
    • LXD - query backends from LXD
  • Healthchecks

    • Ping - simple TCP ping healthcheck
    • Exec - execute an arbitrary program, passing host & port as options, and read the healthcheck status from its stdout
    • Probe - send specific bytes to the backend (udp, tcp or tls) and expect a correct answer (bytes or regexp)
  • Balancing Strategies (with SNI support)

    • Weight - select a backend from the pool based on the relative weights of backends
    • Roundrobin - select backends from the pool in simple circular order
    • Iphash - route a client to the same backend based on the client IP hash
    • Iphash1 - same as iphash, but consistent under backend removal (clients keep connecting to the same backend even if some other backends go down)
    • Leastconn - select the backend with the least active connections
    • Leastbandwidth - select the backend with the least bandwidth
  • Integrates seamlessly with Docker and with any custom system (thanks to Exec discovery and healthchecks)

  • Single binary distribution
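
As a concrete example, here is a minimal sketch of a gobetween.toml that wires together the REST API, Docker discovery, a ping healthcheck and leastconn balancing (the key names follow the sample configs quoted later on this page; treat it as a starting point rather than a complete reference):

[api]
enabled = true   # REST API: system info, config dump, server management, stats & metrics
bind = ":8888"

[servers.sample]
bind = "0.0.0.0:3000"
protocol = "tcp"
balance = "leastconn"

  [servers.sample.discovery]
  kind = "docker"
  interval = "10s"
  timeout = "2s"
  docker_endpoint = "unix://var/run/docker.sock"  # Docker / Swarm API
  docker_container_label = "scale.app=true"       # label to filter containers
  docker_container_private_port = 8080            # public port is resolved from this private port

  [servers.sample.healthcheck]
  kind = "ping"
  interval = "10s"
  ping_timeout_duration = "2s"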

Architecture

(architecture diagram)

Usage

Hacking

Debug and Test

Run several web servers for tests in different terminals:

  • $ python -m SimpleHTTPServer 8000
  • $ python -m SimpleHTTPServer 8001

(With Python 3, use python3 -m http.server 8000 and python3 -m http.server 8001 instead.)

Instead of Python's built-in HTTP module, you can also use a single-binary (Go-based) webserver such as https://github.com/udhos/gowebhello

gowebhello supports SSL certificates as well (HTTPS mode), in case you want to do quick demos of the TLS+SNI capabilities of gobetween.

Put localhost:8000 and localhost:8001 into the static_list of the static discovery in the config file (see the snippet below), then try it:

  • $ gobetween -c gobetween.toml

  • $ curl http://localhost:3000
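
For this test, the relevant part of gobetween.toml might look like the following sketch (static discovery pointing at the two local web servers, with the proxy listening on :3000 as in the curl above):

[servers.sample]
bind = "0.0.0.0:3000"
protocol = "tcp"

  [servers.sample.discovery]
  kind = "static"
  static_list = [
      "localhost:8000 weight=1",
      "localhost:8001 weight=1"
  ]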

Enable the profiler to debug issues you encounter:

[profiler]
enabled = true     # false | true
bind    = ":6060"  # "host:port"
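
With the profiler enabled, the usual Go pprof endpoints should become reachable on the configured bind address (this assumes gobetween exposes the standard net/http/pprof handlers), e.g. http://localhost:6060/debug/pprof/ in a browser, or via go tool pprof.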

Performance

It's Fast! See Performance Testing

The Name

It's a play on words: gobetween ("go between").

Also, it's written in Go, and it's a proxy so it's something that stays between 2 parties 😄

License

MIT. See LICENSE file for more details.

Authors & Maintainers

All Contributors

Community

  • Join the gobetween Telegram group here.

Logo

Logo by Max Demchenko

Comments
  • LXD backend discovery support

    Hello,

    I think gobetween would be a great fit for LXD containers, so I made a proof-of-concept discovery. The idea is to configure the LXD discovery like any other discovery mechanism, and then launch an LXD container like so:

    $ lxc launch ubuntu foo --config user.gobetween.label="foo" --config user.gobetween.private_port=80
    

    LXD reserves the user.* config key for user-specified metadata. The full config reference is here
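
    On the gobetween side, the discovery section could then be configured roughly like this (a hypothetical sketch: the two lxd_* key names below are made up to mirror the lxc launch example above and are not necessarily the ones the patch finally uses):

    [servers.example.discovery]
    kind = "lxd"
    interval = "10s"
    timeout = "5s"
    lxd_container_label = "foo"       # hypothetical key: matches user.gobetween.label
    lxd_container_private_port = 80   # hypothetical key: matches user.gobetween.private_port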

    Right now, discovery is only done on the local server, so only local containers are discovered. However, it could be extended to support remote LXD server(s).

    I'm happy to answer any questions about this feature. In addition, I wasn't able to find a doc that lists the requirements for contributing a patch, so please let me know if I need to do anything else.

    What I would really like to do is use gobetween to dynamically generate server entries based on containers. This is because LXD does not provide a built-in mechanism to automatically handle external-to-container traffic (similar to publishing a docker port). In effect, gobetween would supply that support. I understand this might be too much of a niche case for gobetween, though. Perhaps a better solution is to write a glue utility that bridges the LXD server and gobetween by using the gobetween REST API rather than have gobetween directly support this.

    feature 
    opened by jtopjian 62
  • gobetween becomes "stuck" (sometimes)

    I have GB configured as an SNI router for multiple backend services.

    EDIT: OS: CentOS 7.5 + updates. Docker: version 18.06.1-ce, build e68fc7a (installed as per the instructions on docker.com).

    What happens is that after a bit of run time (say a few hours), GB stops responding; i.e. it accepts the connection but doesn't forward it to the backend. I test using openssl s_client -connect ...

    Restarting the GB docker brings things back to life.

    I have been using GB v0.5.0 (and now 0.6.0).

    My load balancer VM is typically a 2-CPU, 1 GB machine.

    This problem hit again today, so I have bumped the load balancer VM up to 4 CPUs and 4 GB (I know this is way too high for the usage). I am now in wait-and-watch mode to see if it happens again.

    What debug info could I capture to help isolate the problem the next time this occurs?

    FWIW, I found an earlier issue which could be related (not sure though) https://github.com/yyyar/gobetween/issues/74

    Relevant section of the config file:

    [api]
    enabled = true
    bind = ":444"
    
    [logging]
    level = "debug"
    output = "stdout"
    
    [defaults]
    protocol = "tcp"
    balance = "leastconn"
    max_connections = 0
    client_idle_timeout = "0"
    backend_idle_timeout = "0"
    backend_connection_timeout = "0"
    ##############################################################################
    
    [servers]
    
    [servers.in443]
    protocol = "tcp"
    bind = ":443"
    sni_enabled = true
    
    [servers.in443.sni]
    enabled = true
    read_timeout = "10s"
    hostname_matching_strategy = "regexp"
    unexpected_hostname_strategy = "reject"
    
    [servers.in443.discovery]
    kind = "consul"
    failpolicy = "setempty"
    consul_host = "myconsulserver:8500"
    # all services should register themselves with this service name
    consul_service_name = "in443service"
    interval = "60s"
    timeout = "10s"
    
    [servers.in443.healthcheck]
    kind = "ping"
    interval = "60s"
    ping_timeout_duration = "5s"
    

    Regards, Shantanu

    bug :fire: 
    opened by shantanugadgil 36
  • Error: use of closed network connection

    Hi,

    I have used the gobetween proxy for quite a long time, but I have now run into a situation where I don't know what to do or how to fix the issue. Full error message:

    2018-07-19 05:12:57 [ERROR] (udp/server): Error sending data to backend write udp 10.24.87.230:35226->10.24.87.231:4729: use of closed network connection
    

    Running CentOS 7.5, SELinux disabled, go version go1.10.3 linux/amd64.

    Config:

    #
    # gobetween.toml - sample config file
    #
    # Website: http://gobetween.io
    # Documentation: https://github.com/yyyar/gobetween/wiki/Configuration
    #
    # Logging configuration
    #
    [logging]
    level = "info"   # "debug" | "info" | "warn" | "error"
    #output = "stdout" # "stdout" | "stderr" | "/path/to/gobetween.log"
    output = "/var/log/gobetween/gobetween.log" # "stdout" | "stderr" | "/path/to/gobetween.log"
    
    # REST API server configuration
    #
    [api]
    enabled = true  # true | false
    bind = ":8888"  # "host:port"
    cors = false    # cross-origin resource sharing
    
    #
    # Default values for server configuration, may be overridden in [servers] sections.
    # All "duration" fields (for example, fields postfixed with '_timeout') have the following format:
    # <int><duration> where duration can be one of 'ms', 's', 'm', 'h'.
    # Examples: "5s", "1m", "500ms", etc. A "0" value means no limit.
    #
    [defaults]
    max_connections = 0              # Maximum simultaneous connections to the server
    client_idle_timeout = "0"        # Client inactivity duration before forced connection drop
    backend_idle_timeout = "0"       # Backend inactivity duration before forced connection drop
    backend_connection_timeout = "0" # Backend connection timeout (ignored in udp)
    
    [servers.vflow]
    bind = "0.0.0.0:4729"
    protocol = "udp"
    
      [servers.vflow.udp]
      max_responses = 0
      max_requests = 1
    
      [servers.vflow.discovery]
      kind = "static"
      static_list = [
          "10.24.87.231:4729 weight=1",
          "10.24.87.232:4729 weight=1"
      ]
    
      [servers.vflow.healthcheck]
      kind = "exec"
      interval = "30s"
      timeout = "30s" 
    
      exec_command = "/usr/share/exec_healthcheck.sh"
      exec_expected_positive_output = "1"
      exec_expected_negative_output = "0"
    

    I have tried commenting out the healthcheck part, but it didn't help.

    opened by gedOHub 29
  • Improve UDP performance

    I ran a UDP test with iperf and the result is not so good. The testbed is a VM with a 4-core CPU and 4 GB of memory, and the measured upload bandwidth is about 50 Mbps. On the same testbed, ipvs does about 150 Mbps and haproxy (TCP) about 450 Mbps. FYI.

    opened by zhanyonm 16
  • Do you have a plan to support TCP+TLS for connecting to backends?

    Hi all, I have one question if you don't mind: I see that the API supports TLS connections, but what about the TCP reverse proxy? Will you support TLS there in the future?

    question 
    opened by didip 15
  • [Add] Support for Prometheus Metrics Endpoint

    This adds a Prometheus metrics endpoint at port 9284.

    Currently the service would need to be restarted to clear out old metrics. I'm still not sure how I want to handle that cleanup, but this does what we want for our use case.

    Always up for Feedback.

    opened by Techcadia 14
  • Track backends list in udp session

    Hi! This patchset implements tracking of the backends list (using exec discovery) during UDP proxying, for the case when the client does not close its UDP socket.

    In my environment I need to proxy UDP, and I use exec discovery to find backends (the backend software also provides a TCP ping API, which lets me discover backends).

    So when the backends list changes (i.e. exec discovery runs again), the client session's backend list does not change, and gobetween keeps sending UDP datagrams to "dead" backends.

    It looks like this: gobetween.toml

    [servers]
    
    [servers.brubeck]
    bind = "127.0.0.1:8125"
    protocol = "udp"
    
      [servers.brubeck.udp]
      max_responses = 0
    
      [servers.brubeck.discovery]
      kind = "exec"
      interval = "2s"
      timeout = "2s"
      exec_command = ["/home/operator/bin/exec.sh"] #simple curl-based HTTP-check
    

    Client app, which sends UDP packets:

    import socket
    from time import sleep
    
    UDP_IP = "127.0.0.1"
    UDP_PORT = 8125
    MESSAGE = "complex.delete_me.tttt:1|c"
    
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) #don't close socket while app works
    
    while True:
        print("send")
        sock.sendto(MESSAGE.encode(), (UDP_IP, UDP_PORT))
        sleep(1)
    

    Starting gobetween and the app:

    bin/gobetween --config /home/operator/gobetween.toml &
    /home/operator/sendudp.py
    

    See traffic:

    11:35:02.513432 IP 10.0.2.9.58559 > 10.9.6.81.8126: UDP, length 26 # send to backend01
    ...
    

    When backend01 fails, the client continues sending to it:

    11:35:02.513432 IP 10.0.2.9.58559 > 10.9.6.81.8126: UDP, length 26 # backend01 fails, but gobetween keeps proxying to it...
    

    This patchset fixes this behaviour: after a new discovery, the client will send UDP datagrams to the new backend.

    (Maybe it would be better to add a corresponding configuration option? I can do that.)

    In all other cases backward compatibility is preserved.

    Thanks in advance!

    opened by ghost 13
  • the max_requests option does not work & xxx_connections stats are wrong for UDP in the new 0.6 release

    Hi all: for the latest release, the UDP max_requests doesn't work as expected. This is the config I used:

    [servers.11020b75d191432ba6beb04d51d7b400]
    bind = "10.212.1.103:2345"
    protocol = "udp"
    balance = "roundrobin"
    #max_connections = 1
    client_idle_timeout = "60s"
    backend_idle_timeout = "60s"
    backend_connection_timeout = "60s"

    [servers.11020b75d191432ba6beb04d51d7b400.udp]
    max_requests = 1  # (optional) if > 0, accept no more requests than max_requests and close the session (since 0.5.0)
    max_responses = 0 # (required) if > 0, accept no more responses than max_responses from the backend and close the session (will be optional since 0.5.0)

    [servers.11020b75d191432ba6beb04d51d7b400.discovery]
    kind = "static"
    failpolicy = "keeplast"
    static_list = [
    
      "192.168.48.235:2345",
    
    ]
    
    [servers.11020b75d191432ba6beb04d51d7b400.healthcheck]
    fails = 2
    passes = 2
    interval = "5s"
    timeout="5s"
    kind = "exec"
    exec_command = "/usr/share/healthcheck.sh"  # (required) command to execute
    exec_expected_positive_output = "success"           # (required) expected output of command in case of success
    exec_expected_negative_output = "fail"
    

    I ran two test cases.

    First case: max_requests = 1. Test result: no backend response reaches the client. Stats output:

    {
      "active_connections": 0,
      "rx_total": 0,
      "tx_total": 4,
      "rx_second": 0,
      "tx_second": 0,
      "backends": [ {
        "host": "192.168.48.235", "port": "2345", "priority": 1, "weight": 1,
        "stats": {
          "live": true, "discovered": true,
          "total_connections": 0, "active_connections": 0, "refused_connections": 0,
          "rx": 0, "tx": 4, "rx_second": 0, "tx_second": 0
        }
      } ]
    }

    Second case: max_requests = 2. Test result: more than two clients work at the same time. Stats output:

    {
      "active_connections": 0,
      "rx_total": 8642,
      "tx_total": 891,
      "rx_second": 145,
      "tx_second": 15,
      "backends": [ {
        "host": "192.168.48.235", "port": "2345", "priority": 1, "weight": 1,
        "stats": {
          "live": true, "discovered": true,
          "total_connections": 103, "active_connections": 11, "refused_connections": 0,
          "rx": 8352, "tx": 861, "rx_second": 145, "tx_second": 15
        }
      } ]
    }

    Actually all the clients are working all the time, but the stats total_connections increases quickly. I'm sure the clients (ip+port) are not idle, and the connections should not be expiring.

    Could you confirm this issue, or tell me where I went wrong? Thanks.

    opened by zhanyonm 12
  • SNI support for proxying

    Hi, is there a plan to implement SNI-based routing in the near future?

    Regards, Shantanu

    BTW: Awesome software which I discovered only by accident!!! The Consul discovery backend is truly great!!! 👍 👍

    feature 
    opened by shantanugadgil 10
  • Error: dial tcp :0: getsockopt: connection refused

    Hello, I'm trying gobetween on my dev machine. I'm using docker-machine & boot2docker (1.12.3).

    This is my simple docker compose:

    version: '2'
    
    networks:
      backend:
    
    services:
      app:
        image: php:7.0-alpine
        expose:
          - 8080
        volumes:
          - .:/data
        command: php -S 0.0.0.0:8080 /data/index.php
        networks:
          - backend
        labels:
          - "scale.app=true"
    
      gobetween:
        image: yyyar/gobetween
        depends_on:
          - app
        ports:
          - "80:80"
        volumes:
          - "./gobetween/conf:/etc/gobetween/conf/:rw"
          - "/var/run/docker.sock:/var/run/docker.sock"
        networks:
          - backend
    

    Gobetween conf looks like this:

    [logging]
    level = "debug"
    
    [servers.app]
    bind = "0.0.0.0:80"
    protocol = "tcp"
    balance = "roundrobin"
    
      [servers.app.discovery]
        interval = "10s"
        timeout = "2s"
        kind = "docker"
        docker_endpoint = "unix://var/run/docker.sock"  # Docker / Swarm API
        docker_container_label = "scale.app=true"  # label to filter containers
        docker_container_private_port = 8080   # gobetween will take public container port for this private port
        docker_container_host_env_var = "HOSTNAME"
    

    Gobetween is able to discover new backends when I scale up the app, but proxying is not working. See the log output:

    app_1        | PHP 7.0.13 Development Server started at Fri Nov 25 20:08:30 2016
    gobetween_1  | 2016-11-25 20:08:31 [INFO ] (manager): Initializing...
    gobetween_1  | 2016-11-25 20:08:31 [INFO ] (server): Creating 'app': 0.0.0.0:80 roundrobin docker none
    gobetween_1  | 2016-11-25 20:08:31 [INFO ] (scheduler): Starting scheduler
    gobetween_1  | 2016-11-25 20:08:31 [INFO ] (manager): Initialized
    gobetween_1  | 2016-11-25 20:08:31 [INFO ] (api): API disabled
    gobetween_1  | 2016-11-25 20:08:31 [INFO ] (dockerFetch): Fetching unix://var/run/docker.sock scale.app=true 8080
    gobetween_1  | 2016-11-25 20:08:40 [DEBUG] (server.handle): Accepted 10.211.55.2:52649 -> [::]:80
    gobetween_1  | 2016-11-25 20:08:40 [DEBUG] (server.handle): Accepted 10.211.55.2:52648 -> [::]:80
    gobetween_1  | 2016-11-25 20:08:40 [ERROR] (server.handle): dial tcp :0: getsockopt: connection refused
    gobetween_1  | 2016-11-25 20:08:40 [ERROR] (server.handle): dial tcp :0: getsockopt: connection refused
    

    Am I missing something?

    opened by honzatrtik 10
  • Building for Windows

    I tried to build for Windows AMD64

    1. I tried building on Windows for Windows
    2. I tried to build on Linux for Windows

    On both platforms I get the same issue :-(

    ../pkg/mod/github.com/eric-lindau/[email protected]/udp.go:29:47: undefined: syscall.IPPROTO_RAW
    ../pkg/mod/github.com/eric-lindau/[email protected]/udp.go:38:37: cannot use int(fd) (type int) as type syscall.Handle in argument to syscall.SetsockoptInt
    ../pkg/mod/github.com/eric-lindau/[email protected]/udp.go:38:63: undefined: syscall.IP_HDRINCL

    I know it must be possible, because I downloaded the Windows binary and that works fine... :-)

    bug help wanted question 
    opened by usernametaken 9
  • Ubuntu | AppArmor enforcing

    Hello,

    Has anyone had issues with AppArmor restricting gobetween?

    Fresh install; AppArmor is enforcing on all gobetween profiles and processes, causing all configured ports to be shut.

    The profile can then be manually removed to bypass this, but from reading Ubuntu forums this shouldn't be needed, and the software in question could have been restricted due to potential security issues or exploits.

    Has anyone had similar issues?

    Kr,

    opened by PiperMp3 1
  • Skip RRSIG records in response

    Fixes #327.

    Without this change, a domain that has DNSSEC enabled resulted in the following output:

    {"level":"info","msg":"Fetching 1.1.1.1:53 _api._tcp.k8s.example.net.","name":"srvFetch","time":"2022-02-27T12:30:28Z"}
    {"level":"debug","msg":"Fetching 1.1.1.1:53 A/AAAA node1.example.net.","name":"srvFetch","time":"2022-02-27T12:30:28Z"}
    {"level":"warning","msg":"No IP found for node1.example.net., skipping...","name":"srvFetch","time":"2022-02-27T12:30:28Z"}
    {"level":"error","msg":"srv error Non-SRV record in SRV answer retrying in 2s","name":"discovery","time":"2022-02-27T12:30:28Z"}
    {"level":"info","msg":"Applying failpolicy keeplast","name":"discovery","time":"2022-02-27T12:30:28Z"}
    

    With this change the result is:

    {"level":"debug","msg":"Fetching 1.1.1.1:53 A/AAAA node1.example.net.","name":"srvFetch","time":"2022-02-27T07:27:35-05:00"}
    {"level":"debug","msg":"Initial check ping for {X.X.X.X 6443}","name":"healthcheck/worker","time":"2022-02-27T07:27:42-05:00"}
    {"level":"debug","msg":"Got check result ping: {{X.X.X.X 6443} 2}","name":"healthcheck/worker","time":"2022-02-27T07:27:42-05:00"}
    {"level":"info","msg":"Sending to scheduler: {{X.X.X.X 6443} 2}","name":"healthcheck/worker","time":"2022-02-27T07:27:42-05:00"}
    
    
    opened by blakerouse 0
  • DNSSEC breaks SRV discovery

    I am trying to use SRV discovery in gobetween (Docker :latest) and it is failing because the domain I am using has DNSSEC enabled. This results in a dns.RRSIG record being included in the answers from the DNS server.
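
    For reference, the discovery section of my config looks roughly like this (a sketch; the srv_* key names follow the gobetween wiki and should be double-checked there):

    [servers.example.discovery]
    kind = "srv"
    srv_lookup_server = "1.1.1.1:53"
    srv_lookup_pattern = "_api._tcp.k8s.example.net."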

    Output from gobetween:

    {"level":"info","msg":"Fetching 1.1.1.1:53 _api._tcp.k8s.example.net.","name":"srvFetch","time":"2022-02-27T12:30:28Z"}
    {"level":"debug","msg":"Fetching 1.1.1.1:53 A/AAAA node1.example.net.","name":"srvFetch","time":"2022-02-27T12:30:28Z"}
    {"level":"warning","msg":"No IP found for node1.example.net., skipping...","name":"srvFetch","time":"2022-02-27T12:30:28Z"}
    {"level":"error","msg":"srv error Non-SRV record in SRV answer retrying in 2s","name":"discovery","time":"2022-02-27T12:30:28Z"}
    {"level":"info","msg":"Applying failpolicy keeplast","name":"discovery","time":"2022-02-27T12:30:28Z"}
    
    opened by blakerouse 0
  • Idea: Client side load balancing using metrics

    I am planning to implement client-side load balancing.

    The way it works is very simple: each application server exports telemetry to a central telemetry server (or cluster). The telemetry server aggregates the data and pushes it in real time to all connected clients. Each client can use whatever logic it wants to choose which server to connect to, and when.

    The advantages of this approach are:

    • the load balancer is no longer in the data plane for every request, so the system can scale out.
    • by using Netdata you get automated metrics and alerts for everything you deploy anyway, and they are real-time metrics :)

    Topology / Approach to do it

    1. Set up Netdata, or use their cloud offering (which is free and unlimited)

    https://www.netdata.cloud/pricing/

    https://github.com/netdata/netdata

    GUI DEMO: https://london.my-netdata.io/#after=-540;before=0;=undefined;theme=slate;help=true;utc=Europe%2FBerlin

    2. Add the Netdata plugin to the server so that it can export metrics to the Netdata server

    https://github.com/netdata/netdata/blob/master/packaging/docker/README.md

    3. Netdata exports the data in real time to a PostgreSQL timeseries DB.

    This doc and code show a simple example. Doc: https://www.timescale.com/blog/writing-it-metrics-from-netdata-to-timescaledb/amp/ Code: https://github.com/mahlonsmith/netdata-timescale-relay

    https://github.com/timescale/timescaledb

    4. The PostgreSQL timeseries DB exports an aggregation to the client over WS, SSE, WebTransport or other transports.

    https://github.com/timescale/promscale gives you aggregation as a prometheus feed...

    5. The client then implements whatever Load balancing logic it wants

    Scale out / HA / Global load balancing of the metrics system itself.

    You can scale out the PostgreSQL timeseries DB using the standard approaches for any PostgreSQL DB. https://fly.io/blog/globally-distributed-postgres/ https://fly.io/docs/flyctl/postgres-db/

    I am not saying that Fly.io has to be used; it's just an example. With Fly you get global load balancing for free, so this proposed client-side load balancing would be globally load balanced. Clients can then get their metrics feed from Fly and connect to wherever the gobetween servers are located.

    The other thing is that it is better to host your metrics on a different cloud or network than the one your own servers or cloud run on. Otherwise, when things go down, you can't see that they have gone down.

    opened by gedw99 0