High-performance PHP application server, load-balancer and process manager written in Golang

Overview

RoadRunner is an open-source (MIT licensed) high-performance PHP application server, load balancer, and process manager. It supports running as a service with the ability to extend its functionality on a per-project basis.

RoadRunner includes a PSR-7/PSR-17 compatible HTTP and HTTP/2 server and can be used to replace a classic Nginx+FPM setup with much greater performance and flexibility.

Official Website | Documentation

Features:

  • Production-ready
  • PCI DSS compliant
  • PSR-7 HTTP server (file uploads, error handling, static files, hot reload, middlewares, event listeners)
  • HTTPS and HTTP/2 support (including HTTP/2 Push, H2C)
  • Fully customizable server, FastCGI support
  • Flexible environment configuration
  • No external PHP dependencies (64-bit PHP required), drop-in (based on Goridge)
  • Load balancer, process manager and task pipeline
  • Integrated metrics (Prometheus)
  • Workflow engine by Temporal.io
  • Works over TCP, UNIX sockets and standard pipes
  • Automatic worker replacement and safe PHP process destruction
  • Worker create/allocate/destroy timeouts
  • Max jobs per worker
  • Worker lifecycle management (controller; see the config sketch after this list)
    • maxMemory (graceful stop)
    • TTL (graceful stop)
    • idleTTL (graceful stop)
    • execTTL (brute, max_execution_time)
  • Payload context and body
  • Protocol, worker and job level error management (including PHP errors)
  • Development Mode
  • Integrations with Symfony, Laravel, Slim, CakePHP, Zend Expressive
  • Application server for Spiral
  • Included in Laravel Octane
  • Automatic reloading on file changes
  • Works on Windows (Unix sockets (AF_UNIX) supported on Windows 10)
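
The lifecycle limits above correspond to the supervisor block of the http pool configuration. A minimal sketch using the key names that appear in the configs quoted in the issues below (the values are only examples; tune them per workload):

http:
  pool:
    num_workers: 4
    max_jobs: 64              # recycle a worker after this many handled requests
    supervisor:
      watch_tick: 1s          # how often worker state is checked
      ttl: 0s                 # max worker lifetime, 0 disables the limit (graceful stop)
      idle_ttl: 10s           # max idle time after first use (graceful stop)
      exec_ttl: 60s           # hard per-request limit, the max_execution_time analogue
      max_worker_memory: 128  # soft per-worker memory limit in MB (graceful stop)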

Installation:

$ composer require spiral/roadrunner:v2.0 nyholm/psr7
$ ./vendor/bin/rr get-binary

To get the RoadRunner binary you can also use our Docker image: spiralscout/roadrunner:X.X.X (more information about the image and its tags can be found here).
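
If you only need the rr binary (for example in a CI job), one option is to copy it out of that image with plain Docker commands. A sketch, assuming the binary is shipped at /usr/bin/rr inside the image and using an example tag (check the image documentation if either differs):

$ docker create --name rr-tmp spiralscout/roadrunner:2.10.5
$ docker cp rr-tmp:/usr/bin/rr ./rr
$ docker rm rr-tmp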

Configuration is located in the .rr.yaml file (full sample):

rpc:
  listen: tcp://127.0.0.1:6001

server:
  command: "php worker.php"

http:
  address: "0.0.0.0:8080"

logs:
  level: error

Read more in Documentation.
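
The rpc block above exposes a control socket that external tooling can call into. A minimal sketch of invoking a plugin method over it with the Goridge RPC client from spiral/goridge (the same resetter.Reset call appears in one of the issues below):

<?php

use Spiral\Goridge\RPC\RPC;

require "vendor/autoload.php";

// Connect to the address configured in the rpc section of .rr.yaml.
$rpc = RPC::create('tcp://127.0.0.1:6001');

// Ask the resetter plugin to gracefully recreate the HTTP worker pool.
$rpc->call('resetter.Reset', 'http');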

Example Worker:

<?php

use Spiral\RoadRunner;
use Nyholm\Psr7;

include "vendor/autoload.php";

// Low-level worker connected to the RoadRunner server (over pipes by default).
$worker = RoadRunner\Worker::create();

// Nyholm's PSR-17 factory builds the PSR-7 request, stream and uploaded-file objects.
$psrFactory = new Psr7\Factory\Psr17Factory();

// Wrap the worker so it exchanges PSR-7 requests and responses.
$worker = new RoadRunner\Http\PSR7Worker($worker, $psrFactory, $psrFactory, $psrFactory);

// Each iteration handles exactly one HTTP request.
while ($req = $worker->waitRequest()) {
    try {
        $rsp = new Psr7\Response();
        $rsp->getBody()->write('Hello world!');

        $worker->respond($rsp);
    } catch (\Throwable $e) {
        // Report the failure to the server instead of letting the worker crash.
        $worker->getWorker()->error((string)$e);
    }
}

Run:

To run the application server:

$ ./rr serve
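
By default rr reads .rr.yaml from the current working directory; the global -c flag (used the same way with the workers command in the issues below) points it at an explicit file, and workers lists the current pool:

$ ./rr -c /etc/roadrunner/.rr.yaml serve
$ ./rr -c /etc/roadrunner/.rr.yaml workers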

License:

The MIT License (MIT). Please see LICENSE for more information. Maintained by Spiral Scout.

Issues
  • Symfony: Default SESSION does not work (no cookie is set in Response)

    Hi!

    I have this code:

    <?php
    /**
     * Created by PhpStorm.
     * User: richard
     * Date: 22.06.18
     * Time: 11:59
     */
    
    require __DIR__ . '/vendor/autoload.php';
    
    use App\Kernel;
    use Symfony\Bridge\PsrHttpMessage\Factory\DiactorosFactory;
    use Symfony\Bridge\PsrHttpMessage\Factory\HttpFoundationFactory;
    use Symfony\Component\Debug\Debug;
    use Symfony\Component\Dotenv\Dotenv;
    use Symfony\Component\HttpFoundation\Request;
    
    if (getenv('APP_ENV') === false) {
        (new Dotenv())->load(__DIR__.'/.env');
    }
    $env = getenv('APP_ENV') ?: 'dev';
    $debug = getenv('APP_DEBUG') ? ((bool) getenv('APP_DEBUG')) : !in_array($env, ['prod', 'k8s']);
    
    if ($debug) {
        umask(0000);
        Debug::enable();
    }
    if ($trustedProxies = $_SERVER['TRUSTED_PROXIES'] ?? false) {
        Request::setTrustedProxies(explode(',', $trustedProxies), Request::HEADER_X_FORWARDED_ALL ^ Request::HEADER_X_FORWARDED_HOST);
    }
    if ($trustedHosts = $_SERVER['TRUSTED_HOSTS'] ?? false) {
        Request::setTrustedHosts(explode(',', $trustedHosts));
    }
    $kernel = new Kernel($env, $debug);
    $httpFoundationFactory = new HttpFoundationFactory();
    
    
    
    
    $relay = new Spiral\Goridge\StreamRelay(STDIN, STDOUT);
    $psr7 = new Spiral\RoadRunner\PSR7Client(new Spiral\RoadRunner\Worker($relay));
    
    
    while ($req = $psr7->acceptRequest()) {
        try {
            $request = $httpFoundationFactory->createRequest($req);
            $response = $kernel->handle($request);
    
            $psr7factory = new DiactorosFactory();
            $psr7response = $psr7factory->createResponse($response);
            $psr7->respond($psr7response);
    
            $kernel->terminate($request, $response);
        } catch (\Throwable $e) {
            $psr7->getWorker()->error((string)$e);
        }
    }
    

    It's slightly modified Symfony code to handle env variables :) (Also, it's stunningly fast!!!!)

    But I don't get any cookies returned: I can see my login is accepted and I'm redirected to the dashboard, but there I get a permission denied and a redirect back to login (because no cookies have been set)...

    Any tips on how to troubleshoot, or what might be wrong?

    E-medium A-docs 
    opened by Richard87 108
  • [🐛 BUG]: RR [`v2.5.7`] doesn't construct new workers after calling the reset command

    No duplicates 🥲.

    • [X] I have searched for a similar issue in our bug tracker and didn't find any solutions.

    What happened?

    Sometimes I run into an issue where rr doesn't start new workers after resetting.

    I use this command to reload the application after deploying a new version:

    php -r '
    require_once "/var/www/..../vendor/autoload.php";
    $rpc = \Spiral\Goridge\RPC\RPC::create("tcp://127.0.0.1:6001");
    $rpc->call("resetter.Reset", "http");
    ' 2> /dev/null && echo "roadrunner restarted"
    
    

    And this works fine. In the RoadRunner logs I can find entries like these:

    {"level":"info","ts":1654876751.0767014,"logger":"http","msg":"HTTP plugin got restart request. Restarting..."}
    {"level":"debug","ts":1654876751.482103,"logger":"server","msg":"worker constructed","pid":18535}
    {"level":"debug","ts":1654876751.7076323,"logger":"server","msg":"worker constructed","pid":18539}
    {"level":"debug","ts":1654876751.92825,"logger":"server","msg":"worker constructed","pid":18543}
    {"level":"debug","ts":1654876752.1424391,"logger":"server","msg":"worker constructed","pid":18547}
    {"level":"debug","ts":1654876752.3567128,"logger":"server","msg":"worker constructed","pid":18551}
    {"level":"debug","ts":1654876752.5836997,"logger":"server","msg":"worker constructed","pid":18555}
    {"level":"debug","ts":1654876752.7970486,"logger":"server","msg":"worker constructed","pid":18569}
    {"level":"debug","ts":1654876753.012394,"logger":"server","msg":"worker constructed","pid":18573}
    {"level":"debug","ts":1654876753.2264977,"logger":"server","msg":"worker constructed","pid":18579}
    {"level":"debug","ts":1654876753.439048,"logger":"server","msg":"worker constructed","pid":18583}
    {"level":"debug","ts":1654876753.6546476,"logger":"server","msg":"worker constructed","pid":18587}
    {"level":"debug","ts":1654876753.8703601,"logger":"server","msg":"worker constructed","pid":18591}
    {"level":"info","ts":1654876753.8704066,"logger":"http","msg":"HTTP workers Pool successfully restarted"}
    {"level":"info","ts":1654876753.8704116,"logger":"http","msg":"HTTP handler listeners successfully re-added"}
    {"level":"info","ts":1654876753.8704147,"logger":"http","msg":"HTTP plugin successfully restarted"}
    

    But sometimes when I try to reload the workers I run into unexpected behavior where rr doesn't construct new workers:

    {"level":"info","ts":1654865797.447119,"logger":"http","msg":"HTTP plugin got restart request. Restarting..."}
    {"level":"debug","ts":1654865947.0437615,"logger":"rpc","msg":"Started RPC service","address":"tcp://127.0.0.1:6001","plugins":["informer","resetter"]}
    {"level":"debug","ts":1654865947.3511188,"logger":"server","msg":"worker constructed","pid":26076}
    {"level":"debug","ts":1654865947.6003292,"logger":"server","msg":"worker constructed","pid":26090}
    {"level":"debug","ts":1654865947.8625834,"logger":"server","msg":"worker constructed","pid":26094}
    {"level":"debug","ts":1654865948.083429,"logger":"server","msg":"worker constructed","pid":26098}
    {"level":"debug","ts":1654865948.2959368,"logger":"server","msg":"worker constructed","pid":26102}
    {"level":"debug","ts":1654865948.5081518,"logger":"server","msg":"worker constructed","pid":26106}
    {"level":"debug","ts":1654865948.7213166,"logger":"server","msg":"worker constructed","pid":26110}
    {"level":"debug","ts":1654865948.9349105,"logger":"server","msg":"worker constructed","pid":26114}
    {"level":"debug","ts":1654865949.1486824,"logger":"server","msg":"worker constructed","pid":26118}
    {"level":"debug","ts":1654865949.3617551,"logger":"server","msg":"worker constructed","pid":26122}
    {"level":"debug","ts":1654865949.5777884,"logger":"server","msg":"worker constructed","pid":26127}
    {"level":"debug","ts":1654865949.7925146,"logger":"server","msg":"worker constructed","pid":26131}
    

    The new workers shown above were only constructed after restarting the service.

    And that's all. It seems like the reload plugin was stopped by something. After that, I need to restart the service completely.

    Also, I have these entries in syslog that appeared after restarting the service, but I don't know whether they relate to the issue or not:

    rr[24396]: panic: runtime error: invalid memory address or nil pointer dereference
    rr[24396]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0xd77dbd]
    rr[24396]: goroutine 55954476 [running]:
    rr[24396]: github.com/spiral/roadrunner-plugins/v2/http.(*Plugin).workers(...)
    rr[24396]: #011github.com/spiral/roadrunner-plugins/[email protected]/http/plugin.go:358
    rr[24396]: github.com/spiral/roadrunner-plugins/v2/http.(*Plugin).Workers(0x1529d40)
    rr[24396]: #011github.com/spiral/roadrunner-plugins/[email protected]/http/plugin.go:342 +0xbd
    rr[24396]: github.com/spiral/roadrunner-plugins/v2/informer.(*Plugin).Workers(...)
    rr[24396]: #011github.com/spiral/roadrunner-plugins/[email protected]/informer/plugin.go:38
    rr[24396]: github.com/spiral/roadrunner-plugins/v2/informer.(*rpc).Workers(0x2, {0xc000d5f490, 0x1}, 0xc000da9b78)
    rr[24396]: #011github.com/spiral/roadrunner-plugins/[email protected]/informer/rpc.go:31 +0x5e
    rr[24396]: reflect.Value.call({0xc0007b7500, 0xc00000f1b0, 0x13}, {0x17222dd, 0x4}, {0xc000befef8, 0x3, 0x3})
    rr[24396]: #011reflect/value.go:543 +0x814
    rr[24396]: reflect.Value.Call({0xc0007b7500, 0xc00000f1b0, 0x0}, {0xc000db46f8, 0x3, 0x3})
    rr[24396]: #011reflect/value.go:339 +0xc5
    rr[24396]: net/rpc.(*service).call(0xc0007a3700, 0xc000db47b0, 0xd79dd9, 0xc000d5ede0, 0xc0007ba300, 0xc000774360, {0x14c2860, 0xc000f168e0, 0x15ef4e0}, {0x156fa00, ...}, ...)
    rr[24396]: #011net/rpc/server.go:377 +0x239
    rr[24396]: created by net/rpc.(*Server).ServeCodec
    rr[24396]: #011net/rpc/server.go:474 +0x405
    systemd[1]: rr.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
    systemd[1]: rr.service: Failed with result 'exit-code'.
    systemd[1]: Stopped High-performance PHP application server.
    systemd[1]: Started High-performance PHP application server.
    

    PHP 7.4 (opcache.enable_cli on, without file cache), Ubuntu 18.04 x86_64, rr version 2.5.7 (build time: 2021-11-13T16:43:25+0000, go1.17.2)

    server:
      command: 'php /var/www/...../worker.php'
      relay: 'pipes'
    
    rpc:
      listen: tcp://127.0.0.1:6001
    
    http:
      address: 10.0.6.50:80
      pool:
        max_jobs: 50000
        num_workers: 12
    
    logs:
      mode: production
      level: debug
      file_logger_options:
        log_output: /var/log/rr/access.log
        max_size: 100
        max_age: 48
        compress: true
    
    

    Maybe I'm doing something wrong when I try to reload the workers? Thank you for any help.

    Version

    2.5.7

    Relevant log output

    No response

    B-bug F-need-verification 
    opened by obrazkowv 60
  • [🐛 BUG]: Omit the bugfix version for the config version check

    No duplicates 🥲.

    • [X] I have searched for a similar issue in our bug tracker and didn't find any solutions.

    What happened?

    config_plugin_init: RR version is older than configuration version, RR version: %!s(func() string=0x972b20), configuration version: 2.7.3 
    
    version: "2.7.3"
    
    rpc:
      listen: tcp://:6001
    
    server:
      command: "php /app/index.php"
      relay: pipes
    
    http:
      address: "0.0.0.0:3001"
    #  middleware: []
      pool:
        num_workers: 1
        debug: true
        max_jobs: 1
    

    Version

    2.7.3

    Relevant log output

    No response

    B-bug 
    opened by sergey-telpuk 53
  • Laravel - sessions don't work (cookies problem)

    Steps to reproduce

    • composer create-project --prefer-dist laravel/laravel laravel57
    • cd laravel57
    • edit env file - database connection
    • php artisan make:auth
    • php artisan migrate
    • follow these steps: https://github.com/spiral/roadrunner/wiki/Laravel-Framework
    • rr -v -d
    • Open a browser, go to http://0.0.0.0:8080/register, fill out the form, submit, and see the 419 ERROR

    Environment

    • PHP 7.3
    • Laravel 5.7

    How can I fix this? Thanks.

    F-need-verification 
    opened by osbre 35
  • Too many opened tcp connections on low load

    Recently I did some stress testing of my application (which runs on RR) using artillery (https://artillery.io). When I first ran the tests for 60 seconds at 10 requests/sec everything was fine, but many TCP connections were opened by the RR process. On longer runs (3 minutes) errors started to happen.

    2020/10/15 11:43:45 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 10ms
    2020/10/15 11:43:45 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 20ms
    2020/10/15 11:43:45 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 40ms
    2020/10/15 11:43:45 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 80ms
    2020/10/15 11:43:45 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 160ms
    2020/10/15 11:43:46 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 320ms
    2020/10/15 11:43:46 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 640ms
    2020/10/15 11:43:47 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:48 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:49 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:50 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:51 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:52 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:53 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:54 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:55 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:56 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:57 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    2020/10/15 11:43:58 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
    

    lsof -a -p $(pidof rr) shows 30 worker pipes, some other connections and lots of TCP connections like ubuntu2004.localdomain:http->_gateway:58032. The only temporary solution I've found is to increase the open files limit, but I don't think that is the root of the problem. Maybe these connections are not closed immediately after the response, and I don't know how to fix that.
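
    One way to apply that temporary workaround when rr runs as a systemd service (a sketch; the rr.service unit name is an assumption, adjust it to your setup):

    # current open-files limit of the running rr process
    $ cat /proc/$(pidof rr)/limits | grep 'open files'

    # raise the limit for the unit via a drop-in, then restart it
    $ sudo systemctl edit rr.service    # add: [Service] and LimitNOFILE=65536
    $ sudo systemctl restart rr.service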

    F-need-verification A-network 
    opened by evonicgu 30
  • Symfony worker (for inclusion in the wiki)

    Hi, thanks for this very interesting project!

    Here is a worker to serve apps based on the default Symfony skeleton:

    <?php
    // worker.php
    // Install the following mandatory packages:
    // composer req spiral/roadrunner symfony/psr-http-message-bridge
    
    ini_set('display_errors', 'stderr');
    
    use App\Kernel;
    use Spiral\Goridge\StreamRelay;
    use Spiral\RoadRunner\PSR7Client;
    use Spiral\RoadRunner\Worker;
    use Symfony\Bridge\PsrHttpMessage\Factory\DiactorosFactory;
    use Symfony\Bridge\PsrHttpMessage\Factory\HttpFoundationFactory;
    use Symfony\Component\Debug\Debug;
    use Symfony\Component\Dotenv\Dotenv;
    use Symfony\Component\HttpFoundation\Request;
    
    require 'vendor/autoload.php';
    
    // The check is to ensure we don't use .env in production
    if (!isset($_SERVER['APP_ENV']) && !isset($_ENV['APP_ENV'])) {
        if (!class_exists(Dotenv::class)) {
            throw new \RuntimeException('APP_ENV environment variable is not defined. You need to define environment variables for configuration or add "symfony/dotenv" as a Composer dependency to load variables from a .env file.');
        }
        (new Dotenv())->load(__DIR__.'/.env');
    }
    
    $env = $_SERVER['APP_ENV'] ?? $_ENV['APP_ENV'] ?? 'dev';
    $debug = (bool) ($_SERVER['APP_DEBUG'] ?? $_ENV['APP_DEBUG'] ?? ('prod' !== $env));
    
    if ($debug) {
        umask(0000);
    
        Debug::enable();
    }
    
    if ($trustedProxies = $_SERVER['TRUSTED_PROXIES'] ?? $_ENV['TRUSTED_PROXIES'] ?? false) {
        Request::setTrustedProxies(explode(',', $trustedProxies), Request::HEADER_X_FORWARDED_ALL ^ Request::HEADER_X_FORWARDED_HOST);
    }
    
    if ($trustedHosts = $_SERVER['TRUSTED_HOSTS'] ?? $_ENV['TRUSTED_HOSTS'] ?? false) {
        Request::setTrustedHosts(explode(',', $trustedHosts));
    }
    
    $kernel = new Kernel($env, $debug);
    $relay = new StreamRelay(STDIN, STDOUT);
    $psr7 = new PSR7Client(new Worker($relay));
    $httpFoundationFactory = new HttpFoundationFactory();
    $diactorosFactory = new DiactorosFactory();
    
    while ($req = $psr7->acceptRequest()) {
        try {
            $request = $httpFoundationFactory->createRequest($req);
            $response = $kernel->handle($request);
            $psr7->respond($diactorosFactory->createResponse($response));
            $kernel->terminate($request, $response);
            $kernel->reboot(null);
        } catch (\Throwable $e) {
            $psr7->getWorker()->error((string)$e);
        }
    }
    

    And the corresponding .rr.yaml file:

    http:
      address: 0.0.0.0:8080
      workers:
        command: "php test-sf/worker.php"
        pool:
          numWorkers: 4
    
    static:
      dir:   "test-sf/public"
      forbid: [".php", ".htaccess"]
    

    Would you mind including it in the wiki?

    opened by dunglas 28
  • [BUG] Workers not restarting after stop

    We have a local rr build to run our services with the following config file:

    rpc:
      listen: tcp://127.0.0.1:6001
    
    server:
      # Worker starting command, with any required arguments.
      #
      # This option is required.
      command: "php ./vendor/bin/rr-worker start --refresh-app --relay-dsn tcp://127.0.0.1:6001"
    
      # User name (not UID) for the worker processes. An empty value means to use the RR process user.
      #
      # Default: ""
      user: ""
    
      # Group name (not GID) for the worker processes. An empty value means to use the RR process user.
      #
      # Default: ""
      group: ""
    
      # Worker relay can be: "pipes", TCP (eg.: tcp://127.0.0.1:6001), or socket (eg.: unix:///var/run/rr.sock).
      #
      # Default: "pipes"
      relay: tcp://127.0.0.1:6001
    
      # Timeout for relay connection establishing (only for socket and TCP port relay).
      #
      # Default: 60s
      relay_timeout: 60s
    
    logs:
      # default
      mode: development
      level: debug
      encoding: console
      output: stdout
      err_output: stdout
      channels:
        http:
          mode: development
          level: debug
          encoding: console
          output: stdout
        server:
          mode: development
          level: debug
          encoding: console
          output: stdout
        rpc:
          mode: development
          level: debug
          encoding: console
          output: stdout
    
    http:
      address: "0.0.0.0:80"
      # middlewares for the http plugin, order matters
      middleware: ["static", "gzip", "headers"]
      # uploads
      uploads:
        forbid: [".php", ".exe", ".bat"]
      trusted_subnets:
        [
            "10.0.0.0/8",
            "127.0.0.0/8",
            "172.16.0.0/12",
            "192.168.0.0/16",
            "::1/128",
            "fc00::/7",
            "fe80::/10",
        ]
      # headers (middleware)
      headers:
        cors:
          allowed_origin: "*"
          allowed_headers: "*"
          allowed_methods: "GET,POST,PUT,PATCH,DELETE"
          allow_credentials: true
          exposed_headers: "Cache-Control,Content-Language,Content-Type,Expires,Last-Modified,Pragma"
          max_age: 600
      # http static (middleware)
      static:
        dir: "public"
        forbid: [".php"]
      pool:
        # default - num of logical CPUs
        num_workers: 4
        # default 0 - no limit
        max_jobs: 1
        # default 1 minute
        allocate_timeout: 60s
        # default 1 minute
        destroy_timeout: 60s
        # supervisor used to control http workers
        supervisor:
          # watch_tick defines how often to check the state of the workers (seconds)
          watch_tick: 1s
          # ttl defines maximum time worker is allowed to live (seconds)
          ttl: 0
          # idle_ttl defines maximum duration worker can spend in idle mode after first use. Disabled when 0 (seconds)
          idle_ttl: 10s
          # exec_ttl defines maximum lifetime per job (seconds)
          exec_ttl: 10s
          # max_worker_memory limits memory usage per worker (MB)
          max_worker_memory: 100
      ssl:
        # host and port separated by semicolon (default :443)
        address: :443
        redirect: false
        # ssl cert
        cert: /ssl-cert/self-signed.crt
        # ssl private key
        key: /ssl-cert/self-signed.key
      # HTTP service provides HTTP2 transport
      http2:
        h2c: false
        max_concurrent_streams: 128
      # Automatically detect PHP file changes and reload connected services (docs:
      # https://roadrunner.dev/docs/beep-beep-reload). Drop this section for this feature disabling.
      reload:
        # Sync interval.
        #
        # Default: "1s"
        interval: 1s
    
        # Global patterns to sync.
        #
        # Default: [".php"]
        patterns: [ ".php" ]
    
        # List of included for sync services (this is a map, where key name is a plugin name).
        #
        # Default: <empty map>
        services:
          server:
            # Directories to sync. If recursive is set to true, recursive sync will be applied only to the directories in
            # "dirs" section. Dot (.) means "current working directory".
            #
            # Default: []
            dirs: [ "." ]
    
            # Recursive search for file patterns to add.
            #
            # Default: false
            recursive: true
    
            # Ignored folders.
            #
            # Default: []
            ignore: [ "vendor" ]
    
            # Service specific file pattens to sync.
            #
            # Default: []
            patterns: [ ".php", ".go", ".md" ]
          http:
            # Directories to sync. If recursive is set to true, recursive sync will be applied only to the directories in
            # "dirs" section. Dot (.) means "current working directory".
            #
            # Default: []
            dirs: [ "." ]
    
            # Recursive search for file patterns to add.
            #
            # Default: false
            recursive: true
    
            # Ignored folders.
            #
            # Default: []
            ignore: [ "vendor" ]
    
            # Service specific file pattens to sync.
            #
            # Default: []
            patterns: [ ".php", ".go", ".md", ".js", ".css", ".json" ]
    

    Actually, it runs Laravel-bridged workers. Because Laravel Nova is not compatible with the async model (it spawns a lot of DB connections and stores some settings in static vars outside the app container), we decided to set max_jobs: 1 until the problem is solved. I expected to see this happen: explanation

    For now, I receive this error in the browser:

    1 error occurred:
    	* supervised_exec_with_context: Timeout: context deadline exceeded
    

    Errortrace, Backtrace or Panictrace

    gb_admin | 2021-04-01T10:25:27.127Z     WARN    server  server/plugin.go:208    no free workers in pool {"error": "static_pool_exec_with_context: NoFreeWorkers:\n\tworker_watcher_get_free_worker: no free workers in the container, timeout exceed"}
    gb_admin | github.com/spiral/roadrunner/v2/plugins/server.(*Plugin).collectEvents
    gb_admin |      github.com/spiral/roadrunner/[email protected]/plugins/server/plugin.go:208
    gb_admin | github.com/spiral/roadrunner/v2/pkg/events.(*HandlerImpl).Push
    gb_admin |      github.com/spiral/roadrunner/[email protected]/pkg/events/general.go:37
    gb_admin | github.com/spiral/roadrunner/v2/pkg/pool.(*StaticPool).getWorker
    gb_admin |      github.com/spiral/roadrunner/[email protected]/pkg/pool/static_pool.go:230
    gb_admin | github.com/spiral/roadrunner/v2/pkg/pool.(*StaticPool).execWithTTL
    gb_admin |      github.com/spiral/roadrunner/[email protected]/pkg/pool/static_pool.go:175
    gb_admin | github.com/spiral/roadrunner/v2/pkg/pool.(*supervised).Exec.func1
    gb_admin |      github.com/spiral/roadrunner/[email protected]/pkg/pool/supervisor_pool.go:99
    
    B-bug F-need-verification 
    opened by vpatrikeev 27
  • [RR2, QUESTION] Does RoadRunner respect X-Forwarded-Proto and X-Forwarded-Port?

    I created this issue, but maybe the problem is deeper. I use nginx-proxy, which allows running apps locally in Docker over https, and it works without any issue. Then I tried to use RoadRunner and the relevant Symfony bundle. The thing is that Spiral\RoadRunner\Http\Request::$uri contains the URL with the http scheme instead of https, so Symfony generates links with the http scheme instead of https, which leads to further problems.

    Is there something I'm missing? How can I configure RoadRunner to honor the forwarded https connection while serving http itself?

    R-question 
    opened by denisvmedia 26
  • [BUG] RR2 stop passing requests to workers

    We are upgrading from RR1 to RR2.

    After deployment everything works, but after some time (a few hours) RoadRunner stops responding.

    I expected to see this happen: RR keeps processing requests indefinitely.

    Instead, this happened: RR stops passing requests to workers.

    The port is still open and listening for requests:

    # curl --max-time 10 -vvv 127.0.0.1:8080
    *   Trying 127.0.0.1:8080...
    * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
    > GET / HTTP/1.1
    > Host: 127.0.0.1:8080
    > User-Agent: curl/7.74.0
    > Accept: */*
    > 
    * Operation timed out after 10000 milliseconds with 0 bytes received
    * Closing connection 0
    curl: (28) Operation timed out after 10000 milliseconds with 0 bytes received
    

    The workers are sitting idle with no execs, and I can list them:

    # /usr/local/bin/rr -c '/etc/roadrunner/.rr.yaml' workers
    Workers of [http]:
    +---------+-----------+---------+---------+---------+--------------------+
    |   PID   |  STATUS   |  EXECS  | MEMORY  |  CPU%   |      CREATED       |
    +---------+-----------+---------+---------+---------+--------------------+
    |    2379 | ready     |       0 | 28 MB   |    0.01 | 4 minutes ago      |
    |    2380 | ready     |       0 | 28 MB   |    0.01 | 4 minutes ago      |
    +---------+-----------+---------+---------+---------+--------------------+
    

    They are rotated as they reach their TTL but live past the idle TTL, and calling reset just hangs indefinitely; I can only see the rotating dots:

    Resetting plugin: [http]  ●∙∙
    

    Also, while resetting I am not even getting the "HTTP plugin got restart request. Restarting..." info log.

    The version of RR used: 2.5.3. I used the binary from the Docker image spiralscout/roadrunner:2.5.3 in php:8.0.12-cli.

    My .rr.yaml configuration is:

    rpc:
      enable: true
      listen: unix:///etc/roadrunner/rr.sock
    
    server:
      command: "php -d opcache.enable_cli=1 /var/www/html/app/worker.php"
      relay: "pipes"
    
    http:
      address: :8080
    
      pool:
        num_workers: 2
    
        supervisor:
          watch_tick: 1s # check every 1s
    
          # Maximal worker memory usage in megabytes (soft limit). Zero means no limit.
          max_worker_memory: 150 # 150MB
          ttl: 300s # maximum time to live for the worker (soft)
          idle_ttl: 60s # maximum allowed amount of time worker can spend in idle before being removed (for weak db connections, soft)
          exec_ttl: 360s # max_execution_time (brutal)
    
    endure:
      log_level: info
    
    logs:
      mode: production
      level: info
    
    metrics:
      address: :8081
    

    Errortrace, Backtrace or Panictrace: there are no errors in the logs.

    B-regression 
    opened by dstrop 21
  • [BUG] All workers ready, exec 0

    Hi! First of all, thanks for RoadRunner! :heart: It's a great tool

    We have some stability issues on one of our big websites. This has occurred many times in the past and we can't identify a clear cause for what's happening. RoadRunner suddenly stops responding to requests completely after several hours or days. rr workers reveals that all of the workers are ready with an EXECS of 0. The issue never resolves on its own until we do an rr reset.

    I'm not sure if maybe sometimes the workers start incompletely, unable to process requests, without rr reloading them because they never reach any of the soft or hard limits. After some time they might accumulate until there are no healthy workers left to process any requests. This is just a theory. The workers are displayed as ready.

    The version of RR used:

    rr version 2.3.0 (build time: 2021-06-11T14:54:08+0000, go1.16.4)

    My .rr.yaml configuration is:

    rpc:
      listen:     tcp://127.0.0.1:6000
    
    server:
      command: "/usr/bin/php7.4 psr-worker.php"
    
    http:
      address: 0.0.0.0:8082
      
      pool:
        num_workers: 48
        
        supervisor:
          watch_tick: 60s
          max_worker_memory: 200
          ttl: 84600s
          exec_ttl: 30s
    

    We've just added error logging to our config, so unfortunately I can't provide any logs yet; I will update this report as soon as we have anything.

    Do you have any suspicions what might be causing this? Are there any things other than logs that we can provide that might be helpful?

    Thanks for your help!

    B-bug F-need-verification Y-Release blocker 
    opened by iluuu1994 20
  • [Feature Request] Worker fatal error handling

    Currently, if a worker crashes with a fatal error, the stack trace is printed to the browser.

    I have several suggestions:

    • Prevent printing stack traces to browser
    • Add options to configure static html (or url request) for custom error page
    • Print stack trace to stderr
    • Implement raven-go and push errors to Sentry
    C-enhancement 
    opened by grachevko 20
  • [🐛 BUG]: RR hangs on the `wait4` syscall when `rr reset` is used at the same time

    No duplicates 🥲.

    • [X] I have searched for a similar issue in our bug tracker and didn't find any solutions.

    What happened?

    RR uses the wait4 syscall to be notified of the PHP process exit. Suppose the user stops the worker for any reason (a syscall or just an internal issue) and runs an rr reset at the same time. That will lead to a deadlock in the worker_watcher, because one part of the ww will be trying to reallocate the worker while the second part holds the mutex.

    Version

    all versions are affected

    Relevant log output

    No response

    B-bug 
    opened by rustatian 0
  • [💡 FEATURE REQUEST]: RPC response error code classification

    Plugin

    No response

    I have an idea!

    I have an idea, listen to me!!

    The latest direction of RR, giving control of the server via RPC commands with protobuf messages, is absolutely amazing. In the same spirit, it would be nice to have a way to distinguish response errors. E.g., consider Jobs pipeline creation:

    $pipeline = $this->jobs->create(
        new AMQPCreateInfo(
            name: 'pipeline_1',
            prefetch: 1,
            queue: 'queue_1',
            exchange: 'amqp.direct',
            routingKey: 'queue_1',
            multipleAck: false,
            requeueOnFail: false,
        )
    );
    

    If the pipeline already exists, JobsInterface will throw JobsException, but on the PHP side we have no way of knowing that the issue is an already existing pipeline and not, e.g., an RPC failure or something else.

    We'd have to look into the exception message, which is great for debugging, but not so much for building user-specific workflows around it.

    It would help if JobsException also carried an error code given by the RPC. We could also split up exceptions like Jobs\PipelineAlreadyExistsException, but that would still require passing an error code from the RPC.
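
    A sketch of the kind of handling this would enable. The ERROR_PIPELINE_EXISTS constant is hypothetical, purely to illustrate the proposal, and the exception namespace follows the spiral/roadrunner-jobs package layout (adjust if it differs):

    <?php

    use Spiral\RoadRunner\Jobs\Exception\JobsException;

    try {
        $pipeline = $jobs->create($createInfo);
    } catch (JobsException $e) {
        // Hypothetical: branch on a machine-readable code instead of parsing the message.
        if ($e->getCode() === JobsException::ERROR_PIPELINE_EXISTS) {
            // The pipeline already exists: reuse it instead of failing.
        } else {
            throw $e;
        }
    }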

    C-feature-request 
    opened by rauanmayemir 0
  • [💡 FEATURE REQUEST]: Kafka driver

    Plugin

    JOBS

    I have an idea!

    I have an idea, listen to me!! I propose adding support for a Kafka driver in jobs, similar to the AMQP driver. Reason: an event-driven architecture based on events in different Kafka topics often requires many consumers, each with its own logic for processing events in those topics.

    D-driver P-jobs P-new-plugin 
    opened by Smolevich 1
  • [💡 FEATURE REQUEST]: RR as a Caddy module

    Plugin

    No response

    I have an idea!

    Implement a RR as a Caddy module.

    Draft notes: implement a RoundTripper to have RR as a Caddy transport (http). While this works for HTTP, plugins that don't use HTTP as a transport, such as rr-temporal or the whole queues bundle (jobs, sqs, beanstalk, etc.), can't be easily integrated. So, basically, we should find a way to use the RR plugins inside Caddy. OR: just use the classic approach of a reverse_proxy to the RR endpoint.


    config:

    {
    	# the roadrunner app
    	roadrunner {
    		rpc tcp://127.0.0.1:6001 # remove ??
    		server {
                            on_init: # Do we need that inside the Caddy ???
                            pool: # we need a pool config inside the server
                                debug: false
                                command php psr-worker.php
                                num_workers: 0
                                max_jobs: 64
                                allocate_timeout: 60s
                                destroy_timeout: 60s
                                    supervisor:
                                        watch_tick: 1s
                                        ttl: 0s
                                        idle_ttl: 10s
                                        max_worker_memory: 128
                                        exec_ttl: 60s
    		}
    		...
    	}
    }
    
    app.example.com {
    	root * /srv
    	encode gzip
    	mercure {
    		...
    	}
    	php_fastcgi roadrunner {
    		transport roadrunner
    	}
    	file_server
    }
    
    1. We can't use RR plugins (at least in the initial implementation) until we properly cover all of the plugins' functionality with tests.
    2. We should use a pool configuration inside the server section (this is not the same as the RR configuration's server section).
    3. We don't need to use the roadrunner repository. I guess we only need to use the RR SDK, specifically: link
    4. I guess we also don't need the RPC plugin, since we don't use RPC.

    Part of the discussion is here: https://github.com/roadrunner-server/roadrunner/issues/917

    S-RFC C-feature-request Y-Low 
    opened by rustatian 1
  • [💡 FEATURE REQUEST]: Bye-bye JSON 👋🏻

    I have an idea!

    Eliminate ALL internal communication via JSON and move it to Protobuf. Todo [WIP]:

    • [ ] HTTP Plugin
      • [ ] Request
      • [ ] Response
    • [ ] JOBS drivers (Item's context)
    • [ ] ?
    C-enhancement 
    opened by rustatian 0