Pomerium is an identity-aware access proxy.

Overview

Pomerium is an identity-aware proxy that enables secure access to internal applications. Pomerium provides a standardized interface to add access control to applications regardless of whether the application itself has authorization or authentication baked-in. Pomerium gateways both internal and external requests, and can be used in situations where you'd typically reach for a VPN.

Pomerium can be used to:

  • provide a single-sign-on gateway to internal applications.
  • enforce dynamic access policy based on context, identity, and device state.
  • aggregate access logs and telemetry data.
  • serve as a VPN alternative (see the config sketch below).
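
As a rough illustration, a minimal all-in-one config.yaml might look like the sketch below. The hostnames, IdP values, and upstream address are placeholders rather than values from any deployment described on this page; see the documentation for the full settings reference.

    # Minimal sketch of an all-in-one Pomerium configuration (placeholder values).
    authenticate_service_url: https://authenticate.example.com

    idp_provider: google
    idp_client_id: <client-id>
    idp_client_secret: <client-secret>

    cookie_secret: <32-byte base64 secret>
    shared_secret: <32-byte base64 secret>

    policy:
      # Single-sign-on gateway in front of an internal application.
      - from: https://app.example.com
        to: http://app.internal:8080
        allowed_domains:
          - example.com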

Docs

For comprehensive docs and tutorials, see our documentation.

Issues
  • G Suite Service Account Group Membership Validation Fails

    What happened?

    • Running on Kubernetes with the Helm chart along with the nginx ingress
    • We use a G Suite service account for group membership validation.
    • We have two users, A and B
    • User A is part of roughly 20-something groups
    • User B is part of roughly 60-something groups (including being an owner of some groups)

    What did you expect to happen?

    • Group membership validation works for user A, and user A can access everything downstream fine
    • User B gets an ERR_CONNECTION_CLOSED error in Chrome and a blank page in Firefox
    • When tried on Safari, user B got kCFErrorDomainCFNetwork error 303
    • Also, when user A, who is a Pomerium admin, tried to log in, they got a 403 (creating a separate issue for that)

    How'd it happen?

    1. User B tried to log in

    What's your environment like?

    • Pomerium version (retrieve with pomerium --version or /ping endpoint):
    pomerium/v0.6.3 (+github.com/pomerium/pomerium; 1c7d30b; go1.14)
    
    • Server Operating System/Architecture/Cloud:

    What's your config.yaml?

    - from: https://EXT
      to: http://INT
      allowed_groups:
        - [email protected]
    
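    For reference, a group-based policy like the one above depends on the Google IdP being configured with a service account so Pomerium can look up group membership. A hedged sketch of the surrounding settings (option names taken from the other configs on this page; values are placeholders) might look like:

    idp_provider: google
    idp_client_id: <client-id>
    idp_client_secret: <client-secret>
    # Base64-encoded service account credentials used for the
    # group membership lookups that fail for user B.
    idp_service_account: <base64-encoded credentials>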

    What did you see in the logs?

    [
        {
            "level": "debug",
            "X-Forwarded-For": [
                "XXXXX"
            ],
            "X-Forwarded-Host": [
                "INTERNAL"
            ],
            "X-Forwarded-Port": [
                "443"
            ],
            "X-Forwarded-Proto": [
                "https"
            ],
            "X-Real-Ip": [
                "XXXXX"
            ],
            "ip": "XXXXX",
            "user_agent": "REMOVED",
            "req_id": "f9ec84a4-14d9-ecc1-4632-aee4d192bd76",
            "error": "Forbidden: [email protected] is not authorized for INTERNAL",
            "time": "2020-03-26T19:18:33Z",
            "message": "proxy: AuthorizeSession"
        },
        {
            "level": "info",
            "X-Forwarded-For": [
                "INTERNAL"
            ],
            "X-Forwarded-Host": [
                "INTERNAL"
            ],
            "X-Forwarded-Port": [
                "443"
            ],
            "X-Forwarded-Proto": [
                "https"
            ],
            "X-Real-Ip": [
                "INTERNAL"
            ],
            "ip": "XXXXX",
            "user_agent": "REMOVED",
            "req_id": "f9ec84a4-14d9-ecc1-4632-aee4d192bd76",
            "error": "Forbidden: [email protected] is not authorized for INTERNAL",
            "time": "2020-03-26T19:18:33Z",
            "message": "httputil: ErrorResponse"
        }
    ]
    

    Additional context

    Add any other context about the problem here.

    NeedsInvestigation 
    opened by shreyaskarnik 35
  • proxy returns `stream terminated by RST_STREAM with error code: PROTOCOL_ERROR` while trying to authorize

    What happened?

    500 - Internal Server Error when trying to access a proxied service

    What did you expect to happen?

    Forwarding

    How'd it happen?

    1. Go to proxied domain
    2. Log in with Google
    3. 500

    What's your environment like?

    • Pomerium version: 0.5.1

    • Environment: Kubernetes Manifests:

    apiVersion: v1
    kind: Service
    metadata:
      name: pomerium
    spec:
      ports:
        - port: 80
          name: http
          targetPort: 443
        - name: metrics
          port: 9090
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: pomerium
    spec:
      replicas: 1
      template:
        spec:
          containers:
          - name: pomerium
            image: pomerium
            args:
            - --config=/etc/pomerium/config.yaml
            env:
            - {name: INSECURE_SERVER, value: "true"}
            - {name: POMERIUM_DEBUG, value: "true"}
            - {name: AUTHENTICATE_SERVICE_URL, value: https://pomerium-authn.$(DOMAIN)}
            - {name: FORWARD_AUTH_URL, value: https://pomerium-fwd.$(DOMAIN)}
            - {name: IDP_PROVIDER, value: google}
            - name: COOKIE_SECRET
              valueFrom:
                secretKeyRef:
                  name: pomerium
                  key: cookie-secret
            - name: SHARED_SECRET
              valueFrom:
                secretKeyRef:
                  name: pomerium
                  key: shared-secret
            - name: IDP_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: pomerium
                  key: idp-client-id
            - name: IDP_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: pomerium
                  key: idp-client-secret
            - name: IDP_SERVICE_ACCOUNT
              valueFrom:
                secretKeyRef:
                  name: pomerium
                  key: idp-service-account
            ports:
              - containerPort: 443
                name: http
              - containerPort: 9090
                name: metrics
            livenessProbe:
              httpGet:
                path: /ping
                port: 443
            readinessProbe:
              httpGet:
                path: /ping
                port: 443
            volumeMounts:
            - mountPath: /etc/pomerium/
              name: config
          volumes:
          - name: config
            configMap:
              name: pomerium
    ---
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: pomerium
      annotations:
        certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    spec:
      tls:
      - hosts:
        - pomerium-authn.$(DOMAIN)
        - pomerium-fwd.$(DOMAIN)
        secretName: pomerium-tls
      rules:
        - host: pomerium-authn.$(DOMAIN)
          http:
            paths:
              - path: /
                backend:
                  serviceName: pomerium
                  servicePort: 80
        - host: pomerium-fwd.$(DOMAIN)
          http:
            paths:
              - path: /
                backend:
                  serviceName: pomerium
                  servicePort: 80
    

    What's your config.yaml?

    policy: 
      - from: https://test-nginx.cs-eng-apps-europe-west3.gcp.infra.csol.cloud
        to: http://httpbin.pomerium/
        allowed_domains:
          - container-solutions.com
    

    What did you see in the logs?

    11:23AM ERR http-error error="rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR" X-Forwarded-For=["10.156.0.9"] X-Forwarded-Host=["test-nginx.cs-eng-ops-europe-west3.gcp.infra.csol.cloud"] X-Forwarded-Port=["443"] X-Forwarded-Proto=["https"] X-Real-Ip=["10.156.0.9"] http-code=500 http-message="rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR" ip=10.40.0.6 req_id=d7e56771-0e2c-a7de-4518-25e7267da9ed user_agent="Mozilla/5.0 (Windows NT 10.0; rv:68.0) Gecko/20100101 Firefox/68.0"
    11:23AM DBG proxy: AuthorizeSession error="rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR" X-Forwarded-For=["10.156.0.9"] X-Forwarded-Host=["test-nginx.cs-eng-ops-europe-west3.gcp.infra.csol.cloud"] X-Forwarded-Port=["443"] X-Forwarded-Proto=["https"] X-Real-Ip=["10.156.0.9"] ip=10.40.0.6 req_id=d7e56771-0e2c-a7de-4518-25e7267da9ed user_agent="Mozilla/5.0 (Windows NT 10.0; rv:68.0) Gecko/20100101 Firefox/68.0"
    11:23AM DBG http-request X-Forwarded-For=["10.156.0.9"] X-Forwarded-Host=["test-nginx.cs-eng-ops-europe-west3.gcp.infra.csol.cloud"] X-Forwarded-Port=["443"] X-Forwarded-Proto=["https"] X-Real-Ip=["10.156.0.9"] duration=5.869461 [email protected] group=<groups> host=test-nginx.cs-eng-ops-europe-west3.gcp.infra.csol.cloud ip=10.40.0.6 method=GET path=/ req_id=d7e56771-0e2c-a7de-4518-25e7267da9ed service=all size=8150 status=500 user_agent="Mozilla/5.0 (Windows NT 10.0; rv:68.0) Gecko/20100101 Firefox/68.0"
    

    Additional context

    I get this error both with all-in-one mode and by deploying the services separately. I had been messing with gRPC ports etc. before opening this issue, but I could not find what the problem is. It seems like the proxy wants to talk to authorize over gRPC with TLS, but that is not available in INSECURE_SERVER mode.
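
    If the proxy does need to reach the authorize service over plaintext gRPC in this setup, the relevant settings would presumably be along the lines of the sketch below. The option names are assumptions based on the settings reference for this era of Pomerium and should be verified against v0.5.x; the service URL is a placeholder.

    # Hedged sketch: proxy-side settings for reaching authorize over
    # plaintext gRPC when running with insecure_server / INSECURE_SERVER.
    insecure_server: true
    grpc_insecure: true
    authorize_service_url: http://pomerium-authorize.default.svc.cluster.local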

    deployment 
    opened by sim1 33
  • Azure IDP implementation broken

    What happened?

    When using the Azure IDP implementation, Pomerium either does not retrieve all user data or does not process it properly. This issue blocks access to all configured applications, because Pomerium matches the invalid user data against the configured policy.

    Your help is appreciated!

    What did you expect to happen?

    The Azure IDP implementation should provide the requested information.

    How'd it happen?

    The following data is shown after a successful Azure AD sign in on the Pomerium HTTPS endpoint:

    Name: <AD Display name>
    UserID: qm5pToiuIWNnS8o8WS_SNcbE2Q2<16 char redacted>
    User: qm5pToiuIWNnS8o8WS_SNcbE2Q2<16 char redacted>
    Group: ec9a2ef9-c01c-4525-8057-<12 char redacted>
    Expiry: 2020-07-08 22:08:10 +0000 UTC
    Issued: 2020-07-08 21:08:11 +0000 UTC
    Issuer: <Pomerium HTTPS endpoint>
    Audience: <Pomerium HTTPS endpoint>
    

    Note that the values of UserID and User are equal, but neither shows the user's email address, which is configured in Azure AD. The following API permissions are configured (see screenshot):

    Could the problem be related to these permissions? I have followed the Quickstart Guide, but noticed that the Azure AD setup section seems to show an older version of the Azure Portal and doesn't mention the 5 Azure Active Directory Graph permissions which are set in the screenshot but not granted by default.

    What's your environment like?

    • Pomerium version: Pomerium 0.9.2-1592838263+3b74053
    • Ubuntu 20.04, running with docker-compose using forward authentication with Traefik

    What's your config.yaml?

    pomerium_debug: true
    insecure_server: true
    address: :80
    forward_auth_url: http://auth
    authenticate_service_url: https://auth.redacted.domain.example.com
    
    idp_provider: azure
    idp_client_id: <secret>
    idp_client_secret: <client>
    idp_provider_url: https://login.microsoftonline.com/<endpoint id>/v2.0
    
    cookie_secret: <secret>
    shared_secret: <secret>
    
    policy:
    - from: https://traefik.redacted.domain.example.com
      to: http://traefik
      allowed_users:
        - <Azure AD user>
    - from: https://third-party.example.com
      to: http://<app dns-name>
      allowed_users:
        - <Azure AD user>
    

    What did you see in the logs?

    pomerium.log

    opened by Sidewinder53 28
  • v0.9.0 - Connection refused - no error in logs

    What happened?

    After updating Pomerium (docker compose) to the latest v0.9.0, I get a "connection refused" on all policies. I did not find any "fatal" or "error" in the logs, and new connections are not shown in the logs. Full restart log below.

    What did you expect to happen?

    Since there were no breaking changes in the update, the same configuration should still work as before.

    How'd it happen?

    1. fetched the new pomerium v0.9.0 image
    2. deployed my old docker compose file with the same config.yaml used in v0.8.3
    3. tried to connect to a service proxied by pomerium

    What's your environment like?

    • Pomerium 0.9.0
    • Docker Compose in Ubuntu 20.04

    What's your config.yaml?

    config.yaml
    policy:
    # Portainer
      - from: https://portainer.domain.com
        to: http://192.168.1.2:9000
        allowed_users:
          - [email protected] 
    
    ...more policies...
    

    What's your docker compose file?

    docker compose
    version: "2"
    services:
      pomerium:
        container_name: pomerium
        image: pomerium/pomerium:latest
        environment:
          - AUTHENTICATE_SERVICE_URL=https://authenticate.domain.com
          - AUTOCERT=true
          - AUTOCERT_USE_STAGING=false
          - AUTOCERT_DIR=/pomerium/certs/
          - IDP_PROVIDER=google
          - IDP_CLIENT_ID=XXX.apps.googleusercontent.com
          - IDP_CLIENT_SECRET=XXX
          - COOKIE_SECRET=XXX
          - ADMINISTRATORS="[email protected]"
          - HTTP_REDIRECT_ADDR=:80
        volumes:
          - pomerium_config:/pomerium/
        ports:
          - 80:80
          - 443:443
        restart: always
    

    What did you see in the logs?

    Full restart log; after the last line, nothing happens.

    2020-05-31T19:41:43.950195768Z {"level":"fatal","error":"http: Server closed","time":"2020-05-31T19:41:43Z","message":"cmd/pomerium"}
    2020-05-31T19:41:46.150121440Z 2020/05/31 19:41:46 [INFO][cache:0xc0000f8e10] Started certificate maintenance routine
    2020-05-31T19:41:46.150296820Z {"level":"info","addr":":80","time":"2020-05-31T19:41:46Z","message":"starting http redirect server"}
    2020-05-31T19:41:46.274039653Z {"level":"info","version":"0.9.0-1590940862+914b952","time":"2020-05-31T19:41:46Z","message":"cmd/pomerium"}
    2020-05-31T19:41:46.275111005Z {"level":"info","port":"45679","time":"2020-05-31T19:41:46Z","message":"gRPC server started"}
    2020-05-31T19:41:46.275162790Z {"level":"info","port":"33161","time":"2020-05-31T19:41:46Z","message":"HTTP server started"}
    2020-05-31T19:41:46.283430991Z {"level":"debug","service":"envoy","location":"/tmp/.pomerium-envoy/envoy-config.yaml","time":"2020-05-31T19:41:46Z","message":"wrote config file to location"}
    2020-05-31T19:41:46.284438038Z {"level":"info","addr":"localhost:5443","time":"2020-05-31T19:41:46Z","message":"internal/grpc: grpc with insecure"}
    2020-05-31T19:41:46.461503150Z {"level":"warn","time":"2020-05-31T19:41:46Z","message":"google: no service account, will not fetch groups"}
    2020-05-31T19:41:46.465330339Z {"level":"info","host":"authenticate.domain.cc","time":"2020-05-31T19:41:46Z","message":"enabled authenticate service"}
    2020-05-31T19:41:46.553332606Z {"level":"info","checksum":"dca1e1bd47b186af","time":"2020-05-31T19:41:46Z","message":"authorize: updating options"}
    2020-05-31T19:41:46.553385609Z {"level":"info","PublicKey":"LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFUG9YUmp3U1VoWW9RbmF0SUUrVkxQR0lrUTBIMgp3NjFoZGJDUTlkQnNqUjVRMTB3ZFhheHByTmp1azlqbVVyQVhkQ2VjZVBEdHBDbWdrbmhaVnQvb2FBPT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==","time":"2020-05-31T19:41:46Z","message":"authorize: ecdsa public key"}
    2020-05-31T19:41:46.570117954Z {"level":"info","time":"2020-05-31T19:41:46Z","message":"enabled authorize service"}
    2020-05-31T19:41:46.651502935Z {"level":"info","checksum":"dca1e1bd47b186af","time":"2020-05-31T19:41:46Z","message":"authorize: updating options"}
    2020-05-31T19:41:46.651554263Z {"level":"info","PublicKey":"LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFYUI1NUk1eSt6c1dwZFRvM09Xa2RIODhNSXpQUgowZ3ptaGx4eG1FUTdzNXAvMnkvRXE1SkZwd3hnN1JUTmR6aWVJbmdlRjEyWGpjYVBUVDg2OE5jZ0lBPT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==","time":"2020-05-31T19:41:46Z","message":"authorize: ecdsa public key"}
    2020-05-31T19:41:46.666941534Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"autocache: with options: &{PoolOptions:<nil> PoolScheme:http PoolPort:8333 PoolTransportFn:0xff8650 PoolContext:<nil> MemberlistConfig:<nil> Logger:0xc000d26dc0}"}
    2020-05-31T19:41:46.666998871Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"autocache: defaulting to lan configuration"}
    2020-05-31T19:41:46.668890953Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"autocache: self addr is: 172.28.0.2"}
    2020-05-31T19:41:46.668943733Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"autocache: groupcache self: http://172.28.0.2:8333 options: &{BasePath: Replicas:0 HashFn:<nil>}"}
    2020-05-31T19:41:46.668983078Z {"level":"info","service":"autocache","insecure":true,"addr":":8333","time":"2020-05-31T19:41:46Z","message":"internal/httputil: http server started"}
    2020-05-31T19:41:46.669581812Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"[DEBUG] memberlist: Initiating push/pull sync with:  127.0.0.1:7946"}
    2020-05-31T19:41:46.669763225Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"[DEBUG] memberlist: Stream connection from=127.0.0.1:46502"}
    2020-05-31T19:41:46.671030086Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"[DEBUG] memberlist: Failed to join ::1: dial tcp [::1]:7946: connect: cannot assign requested address"}
    2020-05-31T19:41:46.671515289Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"[DEBUG] memberlist: Stream connection from=172.28.0.2:40150"}
    2020-05-31T19:41:46.671564111Z {"service":"autocache","time":"2020-05-31T19:41:46Z","message":"[DEBUG] memberlist: Initiating push/pull sync with:  172.28.0.2:7946"}
    2020-05-31T19:41:46.672329542Z {"level":"info","time":"2020-05-31T19:41:46Z","message":"enabled cache service"}
    2020-05-31T19:41:46.674609564Z {"level":"info","addr":"localhost:5443","time":"2020-05-31T19:41:46Z","message":"internal/grpc: grpc with insecure"}
    2020-05-31T19:41:46.675453547Z {"level":"info","addr":"127.0.0.1:45679","time":"2020-05-31T19:41:46Z","message":"starting control-plane gRPC server"}
    2020-05-31T19:41:46.675501306Z {"level":"info","addr":"127.0.0.1:33161","time":"2020-05-31T19:41:46Z","message":"starting control-plane HTTP server"}
    2020-05-31T19:41:48.413980648Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"initializing epoch 0 (hot restart version=disabled)"}
    2020-05-31T19:41:48.414236986Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"statically linked extensions:"}
    2020-05-31T19:41:48.414744052Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.filters.http: envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.gzip, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.lua, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.squash, envoy.filters.http.tap, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.gzip, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash"}
    2020-05-31T19:41:48.415085345Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.dubbo_proxy.filters: envoy.filters.dubbo.router"}
    2020-05-31T19:41:48.415283131Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource"}
    2020-05-31T19:41:48.415624112Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter"}
    2020-05-31T19:41:48.415699524Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.health_checkers: envoy.health_checkers.redis"}
    2020-05-31T19:41:48.415712361Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts"}
    2020-05-31T19:41:48.415724314Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.resolvers: envoy.ip"}
    2020-05-31T19:41:48.415734991Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  http_cache_factory: envoy.extensions.http.cache.simple"}
    2020-05-31T19:41:48.415745972Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.stats_sinks: envoy.dog_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.statsd"}
    2020-05-31T19:41:48.415763154Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router"}
    2020-05-31T19:41:48.415777476Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls"}
    2020-05-31T19:41:48.415789885Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.dubbo_proxy.route_matchers: default"}
    2020-05-31T19:41:48.415849352Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata"}
    2020-05-31T19:41:48.416121836Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy"}
    2020-05-31T19:41:48.416146091Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis"}
    2020-05-31T19:41:48.416187131Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls"}
    2020-05-31T19:41:48.416352357Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy"}
    2020-05-31T19:41:48.416373386Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.thrift_proxy.transports: auto, framed, header, unframed"}
    2020-05-31T19:41:48.416398357Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.retry_priorities: envoy.retry_priorities.previous_priorities"}
    2020-05-31T19:41:48.416457641Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.dubbo_proxy.protocols: dubbo"}
    2020-05-31T19:41:48.416520463Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector"}
    2020-05-31T19:41:48.416563294Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.udp_listeners: raw_udp_listener"}
    2020-05-31T19:41:48.416614102Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin"}
    2020-05-31T19:41:48.416648092Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.dubbo_proxy.serializers: dubbo.hessian2"}
    2020-05-31T19:41:48.416659869Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"  envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.tcp_grpc, envoy.file_access_log, envoy.http_grpc_access_log, envoy.tcp_grpc_access_log"}
    2020-05-31T19:41:48.425046560Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"admin address: 127.0.0.1:9901"}
    2020-05-31T19:41:48.425825817Z {"level":"debug","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"No overload action is configured for envoy.overload_actions.shrink_heap."}
    2020-05-31T19:41:48.425938675Z {"level":"debug","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"No overload action is configured for envoy.overload_actions.stop_accepting_connections."}
    2020-05-31T19:41:48.425952821Z {"level":"debug","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"No overload action is configured for envoy.overload_actions.stop_accepting_connections."}
    2020-05-31T19:41:48.425968369Z {"level":"debug","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"No overload action is configured for envoy.overload_actions.stop_accepting_connections."}
    2020-05-31T19:41:48.425980358Z {"level":"debug","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"No overload action is configured for envoy.overload_actions.stop_accepting_connections."}
    2020-05-31T19:41:48.426293122Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"runtime: layers:\n  - name: base\n    static_layer:\n      {}\n  - name: admin\n    admin_layer:\n      {}"}
    2020-05-31T19:41:48.426620029Z {"level":"info","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"loading tracing configuration"}
    2020-05-31T19:41:48.426686792Z {"level":"info","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"loading 0 static secret(s)"}
    2020-05-31T19:41:48.426699113Z {"level":"info","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"loading 1 cluster(s)"}
    2020-05-31T19:41:48.426939128Z {"level":"debug","service":"envoy","name":"grpc","time":"2020-05-31T19:41:48Z","message":"completionThread running"}
    2020-05-31T19:41:48.428789217Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 127.0.0.1:45679"}
    2020-05-31T19:41:48.428851262Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS initial cluster pomerium-control-plane-grpc"}
    2020-05-31T19:41:48.428997296Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster pomerium-control-plane-grpc completed"}
    2020-05-31T19:41:48.429160022Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster pomerium-control-plane-grpc contains no targets"}
    2020-05-31T19:41:48.429384565Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster pomerium-control-plane-grpc initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.429483193Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster pomerium-control-plane-grpc added 1 removed 0"}
    2020-05-31T19:41:48.429528136Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=pomerium-control-plane-grpc primary=0 secondary=0"}
    2020-05-31T19:41:48.429610601Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 0"}
    2020-05-31T19:41:48.429623781Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=pomerium-control-plane-grpc primary=0 secondary=0"}
    2020-05-31T19:41:48.429675575Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 1"}
    2020-05-31T19:41:48.429773073Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize primary init clusters empty: true"}
    2020-05-31T19:41:48.429788830Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize secondary init clusters empty: true"}
    2020-05-31T19:41:48.430044032Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize cds api ready: true"}
    2020-05-31T19:41:48.430055808Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: initializing cds"}
    2020-05-31T19:41:48.430066753Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"gRPC mux addWatch for type.googleapis.com/envoy.config.cluster.v3.Cluster"}
    2020-05-31T19:41:48.430077913Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"No stream available to sendDiscoveryRequest for type.googleapis.com/envoy.config.cluster.v3.Cluster"}
    2020-05-31T19:41:48.430089471Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"[bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:47] Establishing new gRPC bidi stream for rpc StreamAggregatedResources(stream .envoy.service.discovery.v3.DiscoveryRequest) returns (stream .envoy.service.discovery.v3.DiscoveryResponse);"}
    2020-05-31T19:41:48.430101750Z {"level":"debug","service":"envoy","name":"envoy","time":"2020-05-31T19:41:48Z"}
    2020-05-31T19:41:48.430115659Z {"level":"debug","service":"envoy","name":"router","time":"2020-05-31T19:41:48Z","message":"\"[C0][S10772420691089791976] cluster \\'pomerium-control-plane-grpc\\' match for URL \\'/envoy.service.discovery.v3.AggregatedDiscoveryService/StreamAggregatedResources\\'\""}
    2020-05-31T19:41:48.430136395Z {"level":"debug","service":"envoy","name":"router","time":"2020-05-31T19:41:48Z","message":"\"[C0][S10772420691089791976] router decoding headers:\\n\\':method\\', \\'POST\\'\\n\\':path\\', \\'/envoy.service.discovery.v3.AggregatedDiscoveryService/StreamAggregatedResources\\'\\n\\':authority\\', \\'pomerium-control-plane-grpc\\'\\n\\':scheme\\', \\'http\\'\\n\\'te\\', \\'trailers\\'\\n\\'content-type\\', \\'application/grpc\\'\\n\\'x-envoy-internal\\', \\'true\\'\\n\\'x-forwarded-for\\', \\'172.28.0.2\\'\""}
    2020-05-31T19:41:48.430155830Z {"level":"debug","service":"envoy","name":"envoy","time":"2020-05-31T19:41:48Z"}
    2020-05-31T19:41:48.430182708Z {"level":"debug","service":"envoy","name":"pool","time":"2020-05-31T19:41:48Z","message":"queueing request due to no available connections"}
    2020-05-31T19:41:48.430194569Z {"level":"debug","service":"envoy","name":"pool","time":"2020-05-31T19:41:48Z","message":"creating a new connection"}
    2020-05-31T19:41:48.430205192Z {"level":"debug","service":"envoy","name":"client","time":"2020-05-31T19:41:48Z","message":"[C0] connecting"}
    2020-05-31T19:41:48.430216219Z {"level":"debug","service":"envoy","name":"connection","time":"2020-05-31T19:41:48Z","message":"[C0] connecting to 127.0.0.1:45679"}
    2020-05-31T19:41:48.430227711Z {"level":"debug","service":"envoy","name":"connection","time":"2020-05-31T19:41:48Z","message":"[C0] connection in progress"}
    2020-05-31T19:41:48.430238787Z {"level":"debug","service":"envoy","name":"http2","time":"2020-05-31T19:41:48Z","message":"[C0] updating connection-level initial window size to 268435456"}
    2020-05-31T19:41:48.430539407Z {"level":"info","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"loading 0 listener(s)"}
    2020-05-31T19:41:48.430569233Z {"level":"info","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"loading stats sink configuration"}
    2020-05-31T19:41:48.430580730Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"added target LDS to init manager Server"}
    2020-05-31T19:41:48.430826566Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"starting main dispatch loop"}
    2020-05-31T19:41:48.430847734Z {"level":"debug","service":"envoy","name":"connection","time":"2020-05-31T19:41:48Z","message":"[C0] connected"}
    2020-05-31T19:41:48.430859549Z {"level":"debug","service":"envoy","name":"client","time":"2020-05-31T19:41:48Z","message":"[C0] connected"}
    2020-05-31T19:41:48.430914975Z {"level":"debug","service":"envoy","name":"pool","time":"2020-05-31T19:41:48Z","message":"[C0] attaching to next request"}
    2020-05-31T19:41:48.430929538Z {"level":"debug","service":"envoy","name":"pool","time":"2020-05-31T19:41:48Z","message":"[C0] creating stream"}
    2020-05-31T19:41:48.431015109Z {"level":"debug","service":"envoy","name":"router","time":"2020-05-31T19:41:48Z","message":"[C0][S10772420691089791976] pool ready"}
    2020-05-31T19:41:48.436008721Z {"level":"debug","service":"envoy","name":"router","time":"2020-05-31T19:41:48Z","message":"[C0][S10772420691089791976] upstream headers complete: end_stream=false"}
    2020-05-31T19:41:48.436180941Z {"level":"debug","service":"envoy","name":"http","time":"2020-05-31T19:41:48Z","message":"\"async http request response headers (end_stream=false):\\n\\':status\\', \\'200\\'\\n\\'content-type\\', \\'application/grpc\\'\""}
    2020-05-31T19:41:48.436197677Z {"level":"debug","service":"envoy","name":"envoy","time":"2020-05-31T19:41:48Z"}
    2020-05-31T19:41:48.436209044Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Received gRPC message for type.googleapis.com/envoy.config.cluster.v3.Cluster at version 1"}
    2020-05-31T19:41:48.436524440Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Pausing discovery requests for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment"}
    2020-05-31T19:41:48.436546585Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cds: add 21 cluster(s), remove 0 cluster(s)"}
    2020-05-31T19:41:48.436629164Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'pomerium-control-plane-grpc\\' skipped\""}
    2020-05-31T19:41:48.438302788Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 127.0.0.1:33161"}
    2020-05-31T19:41:48.438367202Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster pomerium-control-plane-http during init"}
    2020-05-31T19:41:48.438440757Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster pomerium-control-plane-http"}
    2020-05-31T19:41:48.438525214Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster pomerium-control-plane-http completed"}
    2020-05-31T19:41:48.438690058Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster pomerium-control-plane-http contains no targets"}
    2020-05-31T19:41:48.438733519Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster pomerium-control-plane-http initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.438749461Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster pomerium-control-plane-http added 1 removed 0"}
    2020-05-31T19:41:48.438854131Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=pomerium-control-plane-http primary=0 secondary=0"}
    2020-05-31T19:41:48.438931488Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.439007696Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=pomerium-control-plane-http primary=0 secondary=0"}
    2020-05-31T19:41:48.439107552Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'pomerium-control-plane-http\\'\""}
    2020-05-31T19:41:48.439680374Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster pomerium-authz during init"}
    2020-05-31T19:41:48.439703949Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster pomerium-authz"}
    2020-05-31T19:41:48.439898821Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address [::1]:5443"}
    2020-05-31T19:41:48.439918202Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"DNS hosts have changed for localhost"}
    2020-05-31T19:41:48.440092493Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"DNS refresh rate reset for localhost, refresh rate 5000 ms"}
    2020-05-31T19:41:48.440109701Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster pomerium-authz completed"}
    2020-05-31T19:41:48.440121045Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster pomerium-authz contains no targets"}
    2020-05-31T19:41:48.440239835Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster pomerium-authz initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.440256193Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster pomerium-authz added 1 removed 0"}
    2020-05-31T19:41:48.440398978Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=pomerium-authz primary=0 secondary=0"}
    2020-05-31T19:41:48.440431384Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.440447025Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=pomerium-authz primary=0 secondary=0"}
    2020-05-31T19:41:48.440566722Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'pomerium-authz\\'\""}
    2020-05-31T19:41:48.441524266Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9000"}
    2020-05-31T19:41:48.441737671Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-5a1e0211ffbc5dc7 during init"}
    2020-05-31T19:41:48.441754974Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-5a1e0211ffbc5dc7"}
    2020-05-31T19:41:48.441766278Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-5a1e0211ffbc5dc7 completed"}
    2020-05-31T19:41:48.441777392Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-5a1e0211ffbc5dc7 contains no targets"}
    2020-05-31T19:41:48.441949978Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-5a1e0211ffbc5dc7 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.441983228Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-5a1e0211ffbc5dc7 added 1 removed 0"}
    2020-05-31T19:41:48.442038365Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-5a1e0211ffbc5dc7 primary=0 secondary=0"}
    2020-05-31T19:41:48.442051770Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.442103371Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-5a1e0211ffbc5dc7 primary=0 secondary=0"}
    2020-05-31T19:41:48.442180135Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-5a1e0211ffbc5dc7\\'\""}
    2020-05-31T19:41:48.443271154Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.19:5000"}
    2020-05-31T19:41:48.443359447Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-37a82db4c0ffdc3b during init"}
    2020-05-31T19:41:48.443456473Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-37a82db4c0ffdc3b"}
    2020-05-31T19:41:48.443471492Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-37a82db4c0ffdc3b completed"}
    2020-05-31T19:41:48.443513566Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-37a82db4c0ffdc3b contains no targets"}
    2020-05-31T19:41:48.443558800Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-37a82db4c0ffdc3b initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.443659717Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-37a82db4c0ffdc3b added 1 removed 0"}
    2020-05-31T19:41:48.443676117Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-37a82db4c0ffdc3b primary=0 secondary=0"}
    2020-05-31T19:41:48.443735698Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.443882334Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-37a82db4c0ffdc3b primary=0 secondary=0"}
    2020-05-31T19:41:48.443901647Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-37a82db4c0ffdc3b\\'\""}
    2020-05-31T19:41:48.444924751Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.19:8080"}
    2020-05-31T19:41:48.444943849Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-81a4c0a20675e4ee during init"}
    2020-05-31T19:41:48.445083007Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-81a4c0a20675e4ee"}
    2020-05-31T19:41:48.445114365Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-81a4c0a20675e4ee completed"}
    2020-05-31T19:41:48.445129868Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-81a4c0a20675e4ee contains no targets"}
    2020-05-31T19:41:48.445260490Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-81a4c0a20675e4ee initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.445277616Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-81a4c0a20675e4ee added 1 removed 0"}
    2020-05-31T19:41:48.445419363Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-81a4c0a20675e4ee primary=0 secondary=0"}
    2020-05-31T19:41:48.445450798Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.445507868Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-81a4c0a20675e4ee primary=0 secondary=0"}
    2020-05-31T19:41:48.445654650Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-81a4c0a20675e4ee\\'\""}
    2020-05-31T19:41:48.446421584Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9014"}
    2020-05-31T19:41:48.446568439Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-f4225a06a6156331 during init"}
    2020-05-31T19:41:48.446584877Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-f4225a06a6156331"}
    2020-05-31T19:41:48.446614293Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-f4225a06a6156331 completed"}
    2020-05-31T19:41:48.446744279Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-f4225a06a6156331 contains no targets"}
    2020-05-31T19:41:48.446777624Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-f4225a06a6156331 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.446964598Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-f4225a06a6156331 added 1 removed 0"}
    2020-05-31T19:41:48.446992453Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-f4225a06a6156331 primary=0 secondary=0"}
    2020-05-31T19:41:48.447069046Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.447083371Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-f4225a06a6156331 primary=0 secondary=0"}
    2020-05-31T19:41:48.447095257Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-f4225a06a6156331\\'\""}
    2020-05-31T19:41:48.448135909Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9017"}
    2020-05-31T19:41:48.448157254Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-9ec141595634cb8e during init"}
    2020-05-31T19:41:48.448168798Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-9ec141595634cb8e"}
    2020-05-31T19:41:48.448229136Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-9ec141595634cb8e completed"}
    2020-05-31T19:41:48.448337708Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-9ec141595634cb8e contains no targets"}
    2020-05-31T19:41:48.448438037Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-9ec141595634cb8e initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.448518378Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-9ec141595634cb8e added 1 removed 0"}
    2020-05-31T19:41:48.448694690Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-9ec141595634cb8e primary=0 secondary=0"}
    2020-05-31T19:41:48.448712266Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.448724111Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-9ec141595634cb8e primary=0 secondary=0"}
    2020-05-31T19:41:48.448735271Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-9ec141595634cb8e\\'\""}
    2020-05-31T19:41:48.449708788Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9013"}
    2020-05-31T19:41:48.449743378Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-6520db35db3cdb0c during init"}
    2020-05-31T19:41:48.449828985Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-6520db35db3cdb0c"}
    2020-05-31T19:41:48.449950633Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-6520db35db3cdb0c completed"}
    2020-05-31T19:41:48.449979792Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-6520db35db3cdb0c contains no targets"}
    2020-05-31T19:41:48.449991314Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-6520db35db3cdb0c initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.450019429Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-6520db35db3cdb0c added 1 removed 0"}
    2020-05-31T19:41:48.450063175Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-6520db35db3cdb0c primary=0 secondary=0"}
    2020-05-31T19:41:48.450136998Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.450198706Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-6520db35db3cdb0c primary=0 secondary=0"}
    2020-05-31T19:41:48.450228561Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-6520db35db3cdb0c\\'\""}
    2020-05-31T19:41:48.451439558Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9016"}
    2020-05-31T19:41:48.451466306Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-c474f5d3263b67c5 during init"}
    2020-05-31T19:41:48.451477710Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-c474f5d3263b67c5"}
    2020-05-31T19:41:48.451554718Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-c474f5d3263b67c5 completed"}
    2020-05-31T19:41:48.451616608Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-c474f5d3263b67c5 contains no targets"}
    2020-05-31T19:41:48.451664226Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-c474f5d3263b67c5 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.451770301Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-c474f5d3263b67c5 added 1 removed 0"}
    2020-05-31T19:41:48.451803519Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-c474f5d3263b67c5 primary=0 secondary=0"}
    2020-05-31T19:41:48.451920223Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.451986843Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-c474f5d3263b67c5 primary=0 secondary=0"}
    2020-05-31T19:41:48.452001595Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-c474f5d3263b67c5\\'\""}
    2020-05-31T19:41:48.452968856Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:19999"}
    2020-05-31T19:41:48.453002376Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-bd1bcd473d1c9c94 during init"}
    2020-05-31T19:41:48.453821904Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-bd1bcd473d1c9c94"}
    2020-05-31T19:41:48.453866364Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-bd1bcd473d1c9c94 completed"}
    2020-05-31T19:41:48.453878333Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-bd1bcd473d1c9c94 contains no targets"}
    2020-05-31T19:41:48.453889358Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-bd1bcd473d1c9c94 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.453904399Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-bd1bcd473d1c9c94 added 1 removed 0"}
    2020-05-31T19:41:48.453915547Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-bd1bcd473d1c9c94 primary=0 secondary=0"}
    2020-05-31T19:41:48.453926408Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.453936886Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-bd1bcd473d1c9c94 primary=0 secondary=0"}
    2020-05-31T19:41:48.453973686Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-bd1bcd473d1c9c94\\'\""}
    2020-05-31T19:41:48.454736167Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9022"}
    2020-05-31T19:41:48.456040853Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-3731acb224b4dbe1 during init"}
    2020-05-31T19:41:48.456091807Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-3731acb224b4dbe1"}
    2020-05-31T19:41:48.456103690Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-3731acb224b4dbe1 completed"}
    2020-05-31T19:41:48.456114464Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-3731acb224b4dbe1 contains no targets"}
    2020-05-31T19:41:48.456125057Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-3731acb224b4dbe1 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.456135821Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-3731acb224b4dbe1 added 1 removed 0"}
    2020-05-31T19:41:48.456146646Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-3731acb224b4dbe1 primary=0 secondary=0"}
    2020-05-31T19:41:48.456157459Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.456167837Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-3731acb224b4dbe1 primary=0 secondary=0"}
    2020-05-31T19:41:48.456178498Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-3731acb224b4dbe1\\'\""}
    2020-05-31T19:41:48.456393814Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9035"}
    2020-05-31T19:41:48.456418134Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-43037149f27f249d during init"}
    2020-05-31T19:41:48.456559230Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-43037149f27f249d"}
    2020-05-31T19:41:48.456589669Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-43037149f27f249d completed"}
    2020-05-31T19:41:48.456619558Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-43037149f27f249d contains no targets"}
    2020-05-31T19:41:48.456686466Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-43037149f27f249d initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.456733803Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-43037149f27f249d added 1 removed 0"}
    2020-05-31T19:41:48.456798970Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-43037149f27f249d primary=0 secondary=0"}
    2020-05-31T19:41:48.456877184Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.457017406Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-43037149f27f249d primary=0 secondary=0"}
    2020-05-31T19:41:48.457049945Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-43037149f27f249d\\'\""}
    2020-05-31T19:41:48.458156092Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:8888"}
    2020-05-31T19:41:48.458207167Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-b31981dd2e23d760 during init"}
    2020-05-31T19:41:48.458232685Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-b31981dd2e23d760"}
    2020-05-31T19:41:48.458246017Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-b31981dd2e23d760 completed"}
    2020-05-31T19:41:48.458295627Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-b31981dd2e23d760 contains no targets"}
    2020-05-31T19:41:48.458309630Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-b31981dd2e23d760 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.458321257Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-b31981dd2e23d760 added 1 removed 0"}
    2020-05-31T19:41:48.458374840Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-b31981dd2e23d760 primary=0 secondary=0"}
    2020-05-31T19:41:48.458468151Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.458556915Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-b31981dd2e23d760 primary=0 secondary=0"}
    2020-05-31T19:41:48.458606211Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-b31981dd2e23d760\\'\""}
    2020-05-31T19:41:48.459494493Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9002"}
    2020-05-31T19:41:48.459516071Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-595c96a54e13cb13 during init"}
    2020-05-31T19:41:48.459582915Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-595c96a54e13cb13"}
    2020-05-31T19:41:48.459641384Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-595c96a54e13cb13 completed"}
    2020-05-31T19:41:48.459785256Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-595c96a54e13cb13 contains no targets"}
    2020-05-31T19:41:48.459825515Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-595c96a54e13cb13 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.459904042Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-595c96a54e13cb13 added 1 removed 0"}
    2020-05-31T19:41:48.459918889Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-595c96a54e13cb13 primary=0 secondary=0"}
    2020-05-31T19:41:48.459978680Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.460016722Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-595c96a54e13cb13 primary=0 secondary=0"}
    2020-05-31T19:41:48.460047906Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-595c96a54e13cb13\\'\""}
    2020-05-31T19:41:48.461011492Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9002"}
    2020-05-31T19:41:48.461028213Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-6e95de0e09f537a4 during init"}
    2020-05-31T19:41:48.461089548Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-6e95de0e09f537a4"}
    2020-05-31T19:41:48.461165078Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-6e95de0e09f537a4 completed"}
    2020-05-31T19:41:48.461220947Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-6e95de0e09f537a4 contains no targets"}
    2020-05-31T19:41:48.461302860Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-6e95de0e09f537a4 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.461378614Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-6e95de0e09f537a4 added 1 removed 0"}
    2020-05-31T19:41:48.461504294Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-6e95de0e09f537a4 primary=0 secondary=0"}
    2020-05-31T19:41:48.461536079Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.461548932Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-6e95de0e09f537a4 primary=0 secondary=0"}
    2020-05-31T19:41:48.461661281Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-6e95de0e09f537a4\\'\""}
    2020-05-31T19:41:48.462523534Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9001"}
    2020-05-31T19:41:48.462557539Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-e524878422c7359f during init"}
    2020-05-31T19:41:48.462659731Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-e524878422c7359f"}
    2020-05-31T19:41:48.462692341Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-e524878422c7359f completed"}
    2020-05-31T19:41:48.462826634Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-e524878422c7359f contains no targets"}
    2020-05-31T19:41:48.462842522Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-e524878422c7359f initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.462909742Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-e524878422c7359f added 1 removed 0"}
    2020-05-31T19:41:48.462997987Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-e524878422c7359f primary=0 secondary=0"}
    2020-05-31T19:41:48.463071099Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.463163855Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-e524878422c7359f primary=0 secondary=0"}
    2020-05-31T19:41:48.463231869Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-e524878422c7359f\\'\""}
    2020-05-31T19:41:48.464170774Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9001"}
    2020-05-31T19:41:48.464189284Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-41179ead81ff71b3 during init"}
    2020-05-31T19:41:48.464200421Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-41179ead81ff71b3"}
    2020-05-31T19:41:48.464259850Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-41179ead81ff71b3 completed"}
    2020-05-31T19:41:48.464273723Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-41179ead81ff71b3 contains no targets"}
    2020-05-31T19:41:48.464355607Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-41179ead81ff71b3 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.464446218Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-41179ead81ff71b3 added 1 removed 0"}
    2020-05-31T19:41:48.464463165Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-41179ead81ff71b3 primary=0 secondary=0"}
    2020-05-31T19:41:48.464503162Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.464546704Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-41179ead81ff71b3 primary=0 secondary=0"}
    2020-05-31T19:41:48.464720623Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-41179ead81ff71b3\\'\""}
    2020-05-31T19:41:48.465847740Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:8090"}
    2020-05-31T19:41:48.465865380Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-527900684d2a08c1 during init"}
    2020-05-31T19:41:48.465876798Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-527900684d2a08c1"}
    2020-05-31T19:41:48.465900789Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-527900684d2a08c1 completed"}
    2020-05-31T19:41:48.465973216Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-527900684d2a08c1 contains no targets"}
    2020-05-31T19:41:48.466005024Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-527900684d2a08c1 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.466078794Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-527900684d2a08c1 added 1 removed 0"}
    2020-05-31T19:41:48.466153520Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-527900684d2a08c1 primary=0 secondary=0"}
    2020-05-31T19:41:48.466226341Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.466304163Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-527900684d2a08c1 primary=0 secondary=0"}
    2020-05-31T19:41:48.466368905Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-527900684d2a08c1\\'\""}
    2020-05-31T19:41:48.467409049Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9023"}
    2020-05-31T19:41:48.467434669Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-ba709490a86c11a1 during init"}
    2020-05-31T19:41:48.467446026Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-ba709490a86c11a1"}
    2020-05-31T19:41:48.467498010Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-ba709490a86c11a1 completed"}
    2020-05-31T19:41:48.467623528Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-ba709490a86c11a1 contains no targets"}
    2020-05-31T19:41:48.467656947Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-ba709490a86c11a1 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.467752649Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-ba709490a86c11a1 added 1 removed 0"}
    2020-05-31T19:41:48.467803406Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-ba709490a86c11a1 primary=0 secondary=0"}
    2020-05-31T19:41:48.467863935Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.467969207Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-ba709490a86c11a1 primary=0 secondary=0"}
    2020-05-31T19:41:48.468045414Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-ba709490a86c11a1\\'\""}
    2020-05-31T19:41:48.469034006Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"transport socket match, socket default selected for host with address 192.168.1.2:9025"}
    2020-05-31T19:41:48.469052143Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"add/update cluster policy-1a81cefd0b545e72 during init"}
    2020-05-31T19:41:48.469079098Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"adding TLS cluster policy-1a81cefd0b545e72"}
    2020-05-31T19:41:48.469091203Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"initializing Primary cluster policy-1a81cefd0b545e72 completed"}
    2020-05-31T19:41:48.469102479Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-1a81cefd0b545e72 contains no targets"}
    2020-05-31T19:41:48.469144605Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Cluster policy-1a81cefd0b545e72 initialized, notifying ClusterImplBase"}
    2020-05-31T19:41:48.469158028Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"membership update for TLS cluster policy-1a81cefd0b545e72 added 1 removed 0"}
    2020-05-31T19:41:48.469230153Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: init complete: cluster=policy-1a81cefd0b545e72 primary=0 secondary=0"}
    2020-05-31T19:41:48.469362094Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 2"}
    2020-05-31T19:41:48.469388667Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: adding: cluster=policy-1a81cefd0b545e72 primary=0 secondary=0"}
    2020-05-31T19:41:48.469400254Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"\"cds: add/update cluster \\'policy-1a81cefd0b545e72\\'\""}
    2020-05-31T19:41:48.469426732Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize state: 3"}
    2020-05-31T19:41:48.469468299Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize primary init clusters empty: true"}
    2020-05-31T19:41:48.469528249Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize secondary init clusters empty: true"}
    2020-05-31T19:41:48.469614010Z {"level":"debug","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"maybe finish initialize cds api ready: true"}
    2020-05-31T19:41:48.469683657Z {"level":"info","service":"envoy","name":"upstream","time":"2020-05-31T19:41:48Z","message":"cm init: all clusters initialized"}
    2020-05-31T19:41:48.469727635Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Pausing discovery requests for type.googleapis.com/envoy.api.v2.RouteConfiguration"}
    2020-05-31T19:41:48.469799126Z {"level":"info","service":"envoy","name":"main","time":"2020-05-31T19:41:48Z","message":"all clusters initialized. initializing init manager"}
    2020-05-31T19:41:48.469840910Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Server initializing"}
    2020-05-31T19:41:48.469908249Z {"level":"debug","service":"envoy","name":"init","time":"2020-05-31T19:41:48Z","message":"init manager Server initializing target LDS"}
    2020-05-31T19:41:48.469976025Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"gRPC mux addWatch for type.googleapis.com/envoy.config.listener.v3.Listener"}
    2020-05-31T19:41:48.470006702Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Resuming discovery requests for type.googleapis.com/envoy.api.v2.RouteConfiguration"}
    2020-05-31T19:41:48.470085386Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Resuming discovery requests for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment"}
    2020-05-31T19:41:48.470210956Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"gRPC config for type.googleapis.com/envoy.config.cluster.v3.Cluster accepted with 21 resources with version 1"}
    2020-05-31T19:41:48.644857827Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Received gRPC message for type.googleapis.com/envoy.config.listener.v3.Listener at version 1"}
    2020-05-31T19:41:48.647007609Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"Pausing discovery requests for type.googleapis.com/envoy.api.v2.RouteConfiguration"}
    2020-05-31T19:41:48.727764019Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"begin add/update listener: name=https-ingress hash=4718058485297799746"}
    2020-05-31T19:41:48.728079763Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"  filter #0:"}
    2020-05-31T19:41:48.728330770Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"    name: envoy.filters.listener.tls_inspector"}
    2020-05-31T19:41:48.728493010Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"  config: {\n \"@type\": \"type.googleapis.com/google.protobuf.Empty\"\n}"}
    2020-05-31T19:41:48.728623773Z {"level":"debug","service":"envoy","name":"envoy","time":"2020-05-31T19:41:48Z"}
    2020-05-31T19:41:48.729967113Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"  filter #0:"}
    2020-05-31T19:41:48.730052256Z {"level":"debug","service":"envoy","name":"config","time":"2020-05-31T19:41:48Z","message":"    name: envoy.filters.network.http_connection_manager"}
          
          
        
    

    Additional context

    Going back to v0.8.3 works as expected.

    WaitingForInfo 
    opened by fightforlife 28
  • Websocket error with linkerd dashboard

    Websocket error with linkerd dashboard

    What happened?

    I cannot get the linkerd dashboard live stream of calls to work. I get "websocket error: undefined".

    What did you expect to happen?

    I expect to see a stream of calls to my service. I see the live stream if I port-forward. I do not see the live stream if I proxy with pomerium.

    What's your environment like?

    • Pomerium version (retrieve with pomerium --version or /ping endpoint): v0.5.1
    • Server Operating System/Architecture/Cloud: K8s Rev: v1.14.9-eks-ba3d77

    What's your config.yaml?

    apiVersion: v1
    data:
      config.yaml: "policy: \n  - allow_websockets: true\n    allowed_groups:\n    - [email protected]\n
        \   allowed_users: []\n    from: https://glooe-monitoring.production.tidepool.org\n
        \   to: http://glooe-grafana.gloo-system.svc.cluster.local\n  - allow_websockets:
        true\n    allowed_groups:\n    - [email protected]\n    allowed_users: []\n    from:
        https://linkerd-dashboard.production.tidepool.org\n    to: http://linkerd-dashboard.linkerd.svc.cluster.local:8080\n"
    kind: ConfigMap
    metadata:
      annotations:
        flux.weave.works/antecedent: pomerium:helmrelease/pomerium
      creationTimestamp: "2019-12-12T18:50:52Z"
      labels:
        app.kubernetes.io/instance: pomerium
        app.kubernetes.io/managed-by: Tiller
        app.kubernetes.io/name: pomerium
        helm.sh/chart: pomerium-4.1.2
      name: pomerium
      namespace: pomerium
      resourceVersion: "26735217"
      selfLink: /api/v1/namespaces/pomerium/configmaps/pomerium
      uid: 525ccabf-1d10-11ea-abeb-02c077500bb6
    


    NeedsInvestigation 
    opened by derrickburns 26
  • OKTA IDP doesn't work for version 0.10.2

    OKTA IDP doesn't work for version 0.10.2

    Hello,

    What happened?

    We followed this documentation to deploy pomerium on a K8S cluster (Rancher RKE): https://github.com/pomerium/pomerium/tree/v0.10.2/examples/kubernetes.

    Our configuration works with version 0.8.x and we tried to upgrade to the latest version, v0.10.2. The OKTA authentication works, but access to the backend application defined in the policy is denied.

    We saw a change between these versions in idp_service_account: it now seems that we have to use a base64-encoded JSON value. So we did that:

    cat api_key.json
    
    {
        "api_token": "XXXX OKTA IDP TOKEN XXXX"
    }
    
    cat api_key.json | base64
    ewogICAgImFwaV90b2tlbiI6ICIiWFhYWCBPS1RBIElEUCBUT0tFTiBYWFhYCn0K
    
    

    We use this value in the pomerium config.yml file (we also tried the "api_key" format with no success; we found that kind of key in a commit).
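
    As a sanity check, here is a minimal Go sketch (not Pomerium code; the "api_token" key name is simply copied from the api_key.json above, so treat it as an assumption and verify against the Okta section of the Pomerium docs for your version) that produces a well-formed base64-encoded JSON value. Notably, the sample value above decodes to JSON with a doubled opening quote and no closing quote around the token, which would itself fail to parse.

    package main

    // Produce a base64-encoded JSON document suitable for idp_service_account.
    // The key name "api_token" is an assumption taken from the issue above.
    import (
    	"encoding/base64"
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	raw, err := json.Marshal(map[string]string{"api_token": "XXXX OKTA IDP TOKEN XXXX"})
    	if err != nil {
    		panic(err)
    	}
    	// Paste the printed value into idp_service_account.
    	fmt.Println(base64.StdEncoding.EncodeToString(raw))
    }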

    The authentication works but we get a denied message. We are pretty sure the API call to OKTA doesn't work, but we can't find a log tied to this behavior (except the log below) in any of the pomerium services (we run them as separate services).


    What did you expect to happen?

    Pomerium should accept the connection according to the defined policy (we have the same configuration on each component in 0.8.x and it works).

    What's your environment like?

    • Pomerium version -> 0.10.2
    • Server Operating System/Architecture/Cloud: Centos8 / K8S by RKE rancher

    What's your config.yaml?

    Our kubernetes-config.yaml is:

    insecure_server: true
    grpc_insecure: true
    grpc_address: ":80"
    
    pomerium_debug: true
    authenticate_service_url: https://authenticate.sso.domain.tld
    authorize_service_url: http://pomerium-authorize-service.namespace.svc.cluster.local
    cache_service_url: http://pomerium-cache-service.namespace.svc.cluster.local
    
    override_certificate_name: "*.sso.domain.tld"
    
    idp_provider: okta
    idp_client_id: <OKTA_APP_CLIENT_ID>
    idp_client_secret: <OKTA_APP_CLIENT_SECRET>
    idp_provider_url: https://ourdomain.okta.com
    idp_service_account: ewogICAgImFwaV90b2tlbiI6ICIiWFhYWCBPS1RBIElEUCBUT0tFTiBYWFhYCn0K
    
    policy:
        - from: https://hello.sso.domain.tld
          to: http://hello.namespace.svc.cluster.local
          allowed_groups:
            - <OKTA_GROUP_ID>
    

    What did you see in the logs?

    I don't know if it's linked, but we see this kind of log during authentication:

    7:14PM INF authenticate: session load error error="Bad Request: internal/sessions: session is not found" X-Forwarded-For=["10.42.32.0,10.42.32.11"] X-Forwarded-Host=["authenticate.sso.domain.tld"] X-Forwarded-Port=["443"] X-Forwarded-Proto=["http"] X-Forwarded-Server=["traefik-ingress-controller-external-68c79c4f8-p8w4x"] X-Real-Ip=["10.42.32.0"] ip=127.0.0.1 request-id=XXXXX user_agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
    7:14PM INF authenticate: session load error error="Bad Request: internal/sessions: session is not found" X-Forwarded-For=["10.42.32.0,10.42.32.11"] X-Forwarded-Host=["authenticate.sso.domain.tld"] X-Forwarded-Port=["443"] X-Forwarded-Proto=["http"] X-Forwarded-Server=["traefik-ingress-controller-external-68c79c4f8-p8w4x"] X-Real-Ip=["10.42.32.0"] ip=127.0.0.1 request-id=YYYYY user_agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
    

    Thanks for your help,

    bug WaitingForInfo 
    opened by KevDBG 24
  • `Access-Control-Allow-Origin` error on authenticate service

    `Access-Control-Allow-Origin` error on authenticate service

    What happened?

    I'm experiencing a strange and problematic situation; I suppose it started with v0.5.0 because I've never seen it before.

    At first it seemed similar to #390 but this one is about the pomerium service not answering CORS correctly.

    Basically what happens is something like this:

    • An SPA is making XHR calls without problems (with the X-Requested-With header)
    • at one point, one of the requests is considered by the proxy as needing reauthentication
    • so the proxy returns a redirect response toward the authenticate service
    • the browser sends a preflight OPTIONS request to check that the cross-origin redirect is allowed
    • we get the following error in the browser:
    Access to XMLHttpRequest at 'https://auth.example.com/.pomerium/sign_in?redirect_uri=https%3A%2F%2Fapp.example.com%2Fapi%stuff%2F10&sig=nSjPGT0tgnrsizrhZnWZZ0WvYSI_Zyy0UaMXkY-vdtg%3D&ts=1574843825' (redirected from 'https://app.example.com/api/stuff/10') from origin 'https://app.example.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
    

    What did you expect to happen?

    It seems to me that it is the authenticate service that does not answer CORS preflight requests, while I think it should.
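
    For illustration, a generic Go sketch (not Pomerium's implementation) of what the reporter expects from the sign-in endpoint: answer the browser's preflight OPTIONS request with CORS headers so the redirected XHR is allowed. The allowed origin and route are hypothetical placeholders.

    package main

    import (
    	"log"
    	"net/http"
    )

    // signIn sketches a handler that replies to CORS preflights before any
    // redirect logic runs. The origin allow-list here is purely illustrative.
    func signIn(w http.ResponseWriter, r *http.Request) {
    	if origin := r.Header.Get("Origin"); origin == "https://app.example.com" {
    		w.Header().Set("Access-Control-Allow-Origin", origin)
    		w.Header().Set("Access-Control-Allow-Headers", "X-Requested-With")
    		w.Header().Set("Access-Control-Allow-Credentials", "true")
    	}
    	if r.Method == http.MethodOptions {
    		w.WriteHeader(http.StatusNoContent) // preflight answered, nothing else to do
    		return
    	}
    	// ... the actual sign-in / redirect flow would continue here ...
    }

    func main() {
    	http.HandleFunc("/.pomerium/sign_in", signIn)
    	log.Fatal(http.ListenAndServe(":8443", nil))
    }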

    Environment

    • Pomerium version (retrieve with pomerium --version or /ping endpoint): v0.5.0
    • Server Operating System/Architecture/Cloud: AKS

    What did you see in the logs?

    The logs are not very clear about what happens:

    authenticate

    {
        "level": "info",
        "fwd_ip": [
            "86.234.73.194"
        ],
        "ip": "10.242.1.28",
        "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:70.0) Gecko/20100101 Firefox/70.0",
        "referer": "https://app.example.com/optimize/stuff/10",
        "req_id": "08388fd6-6fcb-99e4-075a-12f4c86c4189",
        "error": "internal/sessions: session is not found",
        "time": "2019-11-27T08:32:05Z",
        "message": "authenticate: verify session"
    }
    {
        "level": "debug",
        "fwd_ip": [
            "86.234.73.194"
        ],
        "ip": "10.242.1.28",
        "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:70.0) Gecko/20100101 Firefox/70.0",
        "referer": "https://app.example.com/optimize/stuff/10",
        "req_id": "08388fd6-6fcb-99e4-075a-12f4c86c4189",
        "duration": 0.227797,
        "size": 841,
        "status": 302,
        "email": "",
        "group": "",
        "method": "GET",
        "service": "authenticate",
        "host": "auth.example.com",
        "path": "/.pomerium/sign_in?redirect_uri=https%3A%2F%2Fapp.example.com%2Fapi%2Fstuffs%2F10&sig=oMLbIJX2xjMf2-YkzmZgCrdDYBJSSR5IDdxv7blDN_o%3D&ts=1574843525",
        "time": "2019-11-27T08: 32: 05Z",
        "message": "http-request"
    }
    {
        "level": "debug",
        "fwd_ip": [
            "109.220.184.108"
        ],
        "ip": "10.242.0.22",
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0",
        "referer": "https://app.example.com/optimize/stuff/10",
        "req_id": "54dd9be1-43d9-65c4-16b6-f7ab64ba348e",
        "duration": 0.308897,
        "size": 0,
        "status": 200,
        "email": "",
        "group": "",
        "method": "OPTIONS",
        "service": "authenticate",
        "host": "auth.example.com",
        "path": "/.pomerium/sign_in?redirect_uri=https%3A%2F%2Fapp.example.com%2Fapi%2Fstuffs%2F10&sig=1bga3DYmFYiNUea7g_Fk4uTkeic7G34dOeWlt9eJWAM%3D&ts=1574843638",
        "time": "2019-11-27T08: 33: 58Z",
        "message": "http-request"
    }
    
    bug 
    opened by victornoel 20
  • proxy: grpc client should retry connections to services on failure

    proxy: grpc client should retry connections to services on failure

    Describe the bug

    I restarted pomerium, tried to log in with my user, and got a 500 error. After refreshing the page, I'm correctly logged in.

    To Reproduce

    Steps to reproduce the behavior:

    1. Restart pomerium with a fresh set of secrets (to ensure user has to log again)
    2. Go to a protected service and log in
    3. Saw 500 error

    Logs of the proxy:

    {"level":"error","fwd_ip":"10.4.0.1","ip":"10.4.0.42","user_agent":"Mozilla/5.0 (X11; Linux x86_64; rv:66.0) Gecko/20100101 Firefox/66.0","referer":"https://accounts.google.com/signin/oauth/oauthchooseaccount?client_id=XXXXXXXXX&flowName=GeneralOAuthFlow","req_id":"017ee31d-aad7-5207-a989-a834895ca395","error":"rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 10.7.240.43:443: i/o timeout\"","time":"2019-03-25T10:00:16Z","message":"proxy: error redeeming authorization code"}
    

    There is no error in the authenticate service.
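
    As a sketch of the requested behavior (this is not Pomerium's actual dial code, and the target address and credentials are placeholders), a grpc-go client can be configured so that RPCs wait for the channel to leave TransientFailure instead of failing immediately:

    package main

    import (
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    )

    func main() {
    	conn, err := grpc.Dial(
    		"authenticate.internal:443", // placeholder service address
    		grpc.WithTransportCredentials(insecure.NewCredentials()), // placeholder credentials
    		// Queue RPCs until the connection is ready (bounded by each call's
    		// context deadline) rather than returning Unavailable right away.
    		grpc.WithDefaultCallOptions(grpc.WaitForReady(true)),
    	)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	// RPCs issued on conn now block while the connection is re-established.
    }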

    Expected behavior

    User should be able to log in at any time :)

    Environment:

    • Pomerium version (retrieve with pomerium --version): v0.0.2+45e6a8d
    • Server Operating System/Architecture/Cloud: GKE / GSuite
    NeedsInvestigation 
    opened by victornoel 20
  • JWT_CLAIMS_HEADERS=email won't generate X-Pomerium-Claim-Email

    JWT_CLAIMS_HEADERS=email won't generate X-Pomerium-Claim-Email

    It looks like JWT_CLAIMS_HEADERS=email no longer generates the X-Pomerium-Claim-Email header for the backend since 0.14.0.

    Reading the relevant doc, it looks like the header can be customized now but I did not expect the old config to stop working.
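
    One way to check what actually reaches the backend is a throwaway upstream (generic Go, not Pomerium code) that prints every X-Pomerium-Claim-* header it receives:

    package main

    import (
    	"fmt"
    	"log"
    	"net/http"
    	"strings"
    )

    func main() {
    	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    		// Echo back any claim headers the proxy forwarded.
    		for name, values := range r.Header {
    			if strings.HasPrefix(name, "X-Pomerium-Claim-") {
    				fmt.Fprintf(w, "%s: %s\n", name, strings.Join(values, ","))
    			}
    		}
    	})
    	log.Fatal(http.ListenAndServe(":8080", nil))
    }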

    bug 
    opened by yegle 19
  • Pomerium docker image fails to start when supplied with non-letsencrypt wildcard cert

    Pomerium docker image fails to start when supplied with non-letsencrypt wildcard cert

    I have a wildcard cert from Sectigo that is already in use on other web servers, so I wanted to reuse it for pomerium. When I try to launch pomerium it says no certificate was supplied.
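
    A quick way to rule out a problem with the files themselves is a generic Go check (not Pomerium code; the paths simply mirror the volume mounts below) that the certificate and key parse as a matching pair:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    )

    func main() {
    	// Paths match the container-side mount points used in docker-compose.
    	cert, err := tls.LoadX509KeyPair("/pomerium/cert.pem", "/pomerium/privkey.pem")
    	if err != nil {
    		log.Fatalf("certificate/key pair failed to load: %v", err)
    	}
    	fmt.Printf("loaded certificate chain with %d element(s)\n", len(cert.Certificate))
    }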

    Here is my docker-compose.yaml file:

    version: "3"
    services:
      pomerium:
        image: pomerium/pomerium:latest
        environment:
          # Generate new secret keys. e.g. `head -c32 /dev/urandom | base64`
          - COOKIE_SECRET=*******
        volumes:
          # Mount your domain's certificates : https://www.pomerium.io/docs/reference/certificates
          - /opt/pomerium/certs/<domain>.<tld>.crt:/pomerium/cert.pem:ro
          - /opt/pomerium/certs/<domain>.<tld>.key:/pomerium/privkey.pem:ro
          # Mount your config file : https://www.pomerium.io/docs/reference/reference/
          - /opt/pomerium/config.yaml:/pomerium/config.yaml:ro
        ports:
          - 443:443
    
      # https://httpbin.corp.beyondperimeter.com --> Pomerium --> http://httpbin
      httpbin:
        image: kennethreitz/httpbin:latest
        expose:
          - 80
    

    What's your environment like?

    Centos7 VM running on Xen with the latest docker-ce and docker compose installed.

    What's your config.yaml?

    # See detailed configuration settings : https://www.pomerium.io/docs/reference/reference/
    authenticate_service_url: https://sub.domain.tld
    
    # identity provider settings : https://www.pomerium.io/docs/identity-providers.html
    idp_provider: azure
    idp_provider_url: https://login.microsoftonline.com/<azure tenant>/v2.0/
    idp_client_id: <azure app id>
    idp_client_secret: <azure key>
    
    policy:
      - from: https://sub.domain.tld
        to: http://<internal-ip>
        allowed_domains:
          - domain.tld
          - domain.tld
          - domain.tld
          - domain.tld
    #  - from: https://external-httpbin.corp.beyondperimeter.com
    #    to: https://httpbin.org
    #    allow_public_unauthenticated_access: true
    
    

    What did you see in the logs?

    pomerium_1  | {"level":"fatal","error":"config: options from viper validation error config:no certificates supplied nor was insecure mode set","time":"2020-02-06T19:46:43Z","message":"cmd/pomerium"}
    httpbin_1   | [2020-02-06 19:46:44 +0000] [1] [INFO] Starting gunicorn 19.9.0
    httpbin_1   | [2020-02-06 19:46:44 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
    httpbin_1   | [2020-02-06 19:46:44 +0000] [1] [INFO] Using worker: gevent
    httpbin_1   | [2020-02-06 19:46:44 +0000] [8] [INFO] Booting worker with pid: 8
    pomerium_pomerium_1 exited with code 1
    
    opened by archness1 18
  • Impersonate doesn't seem to work

    Impersonate doesn't seem to work

    Describe the bug

    After entering a user email or a group in the impersonate input (user and/or group) and clicking on impersonate, nothing happens: it seems there is a 302 that redirects me to the same /.pomerium URL, but I'm still logged in as myself and not as the impersonated user.

    Expected behavior

    To have my session updated as if I was logged as the user (or the group).

    Environment:

    • Pomerium version (retrieve with pomerium --version): v0.0.5
    • Server Operating System/Architecture/Cloud: AKS

    Configuration file(s): See #152, as always ;)

    Logs:

    {"level":"debug","fwd_ip":"10.240.0.66","ip":"10.240.0.52","user_agent":"Mozilla/5.0 (X11; Linux x86_64; rv:67.0) Gecko/20100101 Firefox/67.0","referer":"https://app.xxx.com/.pomerium","req_id":"af0966fc-1965-10f6-3905-5994d80923ea","duration":5.587115,"size":0,"status":302,"email":"","group":"","method":"POST","service":"proxy","url":"/.pomerium/impersonate","time":"2019-06-11T15:27:19Z","message":"http-request"}
    {"level":"debug","fwd_ip":"10.240.0.66","ip":"10.240.0.52","user_agent":"Mozilla/5.0 (X11; Linux x86_64; rv:67.0) Gecko/20100101 Firefox/67.0","referer":"https://app.xxx.com/.pomerium","req_id":"8bc2ad55-c02b-8991-0aeb-27b85d4b5e26","duration":5.144314,"size":12244,"status":200,"email":"[email protected]","group":"[email protected],[email protected],[email protected]","method":"GET","service":"proxy","url":"/.pomerium","time":"2019-06-11T15:27:19Z","message":"http-request"}
    
    accepted feature 
    opened by victornoel 18
  • http response code error

    http response code error

    What happened?

    Requesting an invalid URL returns response code 421.

    Header

    Request URL: https://test.example.net:6443/testtest
    Request Method: GET
    Status Code: 421 Misdirected Request
    Remote Address: 127.0.0.1:6443
    Referrer Policy: strict-origin-when-cross-origin
    

    Response

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
            "http://www.w3.org/TR/html4/strict.dtd">
    <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html;charset=utf-8">
            <title>Error response</title>
        </head>
        <body>
            <h1>Error response</h1>
            <p>Error code: 404</p>
            <p>Message: File not found.</p>
            <p>Error code explanation: HTTPStatus.NOT_FOUND - Nothing matches the given URI.</p>
        </body>
    </html>
    
    

    What did you expect to happen?

    response code 404

    What's your config.yaml?

    address: :6443
    routes:
    - allow_public_unauthenticated_access: true
      from: https://sl.putuozt.cn:6443
      pass_identity_headers: true
      to: http://127.0.0.1:8080
    

    What's your environment like?

    • v0.18.0
    • ubuntu x64

    What did you see in the logs?

    {"level":"info","service":"envoy","upstream-cluster":"route-a5c05ab8f8100d40","method":"GET","authority":"test.example.net:6443","path":"/testtest","user-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36","referer":"","forwarded-for":"192.168.111.132","request-id":"2928cf38-e276-46a2-a3d8-c092e396c347","duration":18.541006,"size":469,"response-code":421,"response-code-details":"via_upstream","time":"2022-08-02T16:58:13+08:00","message":"http-request"}
    
    


    opened by cfanbo 0
  • chore(deps): bump github.com/golangci/golangci-lint from 1.47.2 to 1.47.3

    chore(deps): bump github.com/golangci/golangci-lint from 1.47.2 to 1.47.3

    Bumps github.com/golangci/golangci-lint from 1.47.2 to 1.47.3.

    Release notes

    Sourced from github.com/golangci/golangci-lint's releases.

    v1.47.3

    Changelog

    • 72fc41ce build(deps): bump github.com/BurntSushi/toml from 1.1.0 to 1.2.0 (#3009)
    • 57d61afb build(deps): bump github.com/GaijinEntertainment/go-exhaustruct/v2 from 2.2.0 to 2.2.2 (#3030)
    • 9cb17e4f build(deps): bump github.com/alingse/asasalint from 0.0.10 to 0.0.11 (#3003)
    • 2ab46788 build(deps): bump github.com/daixiang0/gci from 0.4.3 to 0.5.0 (#3031)
    • 03d9b113 build(deps): bump github.com/ryancurrah/gomodguard from 1.2.3 to 1.2.4 (#3029)
    • e55f22c7 build(deps): bump github.com/sirupsen/logrus from 1.8.1 to 1.9.0 (#3010)
    • c7ed8b67 build(deps): bump github.com/sivchari/nosnakecase from 1.5.0 to 1.7.0 (#3008)
    • 95d57d99 build(deps): bump gitlab.com/bosi/decorder from 0.2.2 to 0.2.3 (#3033)
    • d186efe9 build(deps): bump honnef.co/go/tools from 0.3.2 to 0.3.3 (#3032)
    • 846fab81 cgo: fix linters ignoring Cgo files (#3025)
    • d44cd49a feat: remove some go1.18 limitations (#3001)
    • 886fbd71 gci: fix panic with invalid configuration option (#3019)
    Changelog

    Sourced from github.com/golangci/golangci-lint's changelog.

    v1.47.3

    1. updated linters:
      • remove some go1.18 limitations
      • asasalint: from 0.0.10 to 0.0.11
      • decorder: from 0.2.2 to v0.2.3
      • gci: fix panic with invalid configuration option
      • gci: from 0.4.3 to v0.5.0
      • go-exhaustruct: from 2.2.0 to 2.2.2
      • gomodguard: from 1.2.3 to 1.2.4
      • nosnakecase: from 1.5.0 to 1.7.0
      • honnef.co/go/tools: from 0.3.2 to v0.3.3
    2. misc
      • cgo: fix linters ignoring CGo files
    Commits
    • d186efe build(deps): bump honnef.co/go/tools from 0.3.2 to 0.3.3 (#3032)
    • 57d61af build(deps): bump github.com/GaijinEntertainment/go-exhaustruct/v2 from 2.2.0...
    • 95d57d9 build(deps): bump gitlab.com/bosi/decorder from 0.2.2 to 0.2.3 (#3033)
    • 2ab4678 build(deps): bump github.com/daixiang0/gci from 0.4.3 to 0.5.0 (#3031)
    • 03d9b11 build(deps): bump github.com/ryancurrah/gomodguard from 1.2.3 to 1.2.4 (#3029)
    • cee90e3 docs: fix broken license link (#3028)
    • 846fab8 cgo: fix linters ignoring Cgo files (#3025)
    • 886fbd7 gci: fix panic with invalid configuration option (#3019)
    • d03608f doc: add Bytebase info the trusted-by page (#3013)
    • a9dc1ce dev: change format like function without args (#3012)
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependency 
    opened by dependabot[bot] 1
  • When the configuration file `config.yaml` is modified for many times, the content of the read configuration file is empty

    When the configuration file `config.yaml` is modified for many times, the content of the read configuration file is empty

    What happened?

    When the configuration file config.yaml is modified many times, the content read from the configuration file is occasionally empty. At that point, the reported file size is 0.

    The file content is normal when viewed in the terminal: $ cat /data/gateway/config.yaml
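
    For illustration only (an assumption about the mechanism, not a confirmed diagnosis): a writer that truncates the file and rewrites it a moment later leaves a window in which a reader triggered by a change event sees a zero-length file, which matches the Size=0 output shown further below.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	const path = "config.yaml"
    	body := []byte("address: :6443\n")

    	// Writer: truncate first, write the new content slightly later.
    	go func() {
    		for {
    			f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o644)
    			if err != nil {
    				continue
    			}
    			time.Sleep(2 * time.Millisecond)
    			f.Write(body)
    			f.Close()
    			time.Sleep(10 * time.Millisecond)
    		}
    	}()

    	// Reader: poll the file; occasionally it is observed empty.
    	for i := 0; i < 2000; i++ {
    		if b, err := os.ReadFile(path); err == nil && len(b) == 0 {
    			fmt.Println("observed empty config file")
    		}
    		time.Sleep(time.Millisecond)
    	}
    }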

    How'd it happen?

    bin/pomerium --config=/data/gateway/config.yaml

    What's your environment like?

    pomerium: v0.18.0+89a105c8 envoy: 1.21.3+4861429dfffb599f28b9399c34ea2a2c268bfb6d10aca0a53bc9b67d847a4595

    • v0.17.3 and v0.18.0
    • ubuntu x64

    What's your config.yaml?

    envoy_admin_address: "0.0.0.0:10000"
    address: :6443
    authenticate_service_url: https://auth.example.net:6443
    autocert: false
    

    What did you see in the logs?

    {"level":"info","config_file_source":"/data/gateway/config.yaml","service":"all","config":"local","checksum":"69a9fbb16159849f","time":"2022-08-01T14:56:57+08:00","message":"config: updated config"}
    {"level":"debug","watch_file":"/data/gateway/config.yaml","time":"2022-08-01T14:56:57+08:00","message":"filemgr: watching file for changes"}
    {"level":"info","config_file_source":"/data/gateway/config.yaml","bootstrap":true,"envoy_version":"1.21.3+4861429dfffb599f28b9399c34ea2a2c268bfb6d10aca0a53bc9b67d847a4595","version":"v0.18.0+89a105c8","time":"2022-08-01T14:56:57+08:00","message":"cmd/pomerium"}
    {"level":"info","address":"127.0.0.1:35373","time":"2022-08-01T14:56:57+08:00","message":"grpc: dialing"}
    {"level":"info","config_file_source":"/data/gateway/config.yaml","bootstrap":true,"service":"all","config":"databroker","checksum":"69a9fbb16159849f","time":"2022-08-01T14:56:57+08:00","message":"config: updated config"}
    {"level":"info","outbound_port":"35373","databroker_urls":["http://127.0.0.1:5443"],"time":"2022-08-01T14:56:57+08:00","message":"config: starting databroker config source syncer"}
    {"level":"info","config_file_source":"/data/gateway/config.yaml","bootstrap":true,"service":"metrics_manager","time":"2022-08-01T14:56:57+08:00","message":"metrics: http server disabled"}
    2:56PM ERR cryptutil: no TLS certificate found for domain, using self-signed certificate domain=*
    2:56PM ERR cryptutil: no TLS certificate found for domain, using self-signed certificate domain=*
    {"level":"info","config_file_source":"/data/gateway/config.yaml","bootstrap":true,"grpc-port":"43533","http-port":"43017","outbound-port":"35373","metrics-port":"38795","debug-port":"39445","time":"2022-08-01T14:56:57+08:00","message":"server started"}
    {"level":"info","config_file_source":"/data/gateway/config.yaml","bootstrap":true,"time":"2022-08-01T14:56:58+08:00","message":"envoy: starting envoy process"}
    {"level":"info","config_file_source":"/data/gateway/config.yaml","bootstrap":true,"path":"/tmp/pomerium-envoy4216174686/envoy","checksum":"4861429dfffb599f28b9399c34ea2a2c268bfb6d10aca0a53bc9b67d847a4595","time":"2022-08-01T14:56:58+08:00","message":"running envoy"}
    2:56PM INF grpc: dialing address=127.0.0.1:35373
    2:56PM INF envoy: start monitoring subprocess pid=8474
    {"level":"info","config_file_source":"/data/gateway/config.yaml","bootstrap":true,"time":"2022-08-01T14:56:58+08:00","message":"enabled authenticate service"}
    2:56PM INF authorize: signing key Algorithm=ES256 KeyID=486cb1dc5f0e247f83bddcd4bc54c06dba3bb7d2c3603b9016f7369c76a9bf90 Public Key={"alg":"ES256","crv":"P-256","kid":"486cb1dc5f0e247f83bddcd4bc54c06dba3bb7d2c3603b9016f7369c76a9bf90","kty":"EC","use":"sig","x":"QxDlVC2aX-ztVLDSst9X5Gv3OxebL57oGngaLXVQy-I","y":"h1RavSV_ivQfUpW1OvXlUqy7hZYcfYOb9F3V4jDn9d8"}
    2:56PM INF initializing epoch 0 (base id=178414421, hot restart version=11.104) name=main service=envoy
    2:56PM INF statically linked extensions: name=main service=envoy
    
    ...
    
    {"level":"error","domain":"*","time":"2022-08-01T14:57:24+08:00","message":"cryptutil: no TLS certificate found for domain, using self-signed certificate"}
    {"level":"error","config_file_source":"/data/gateway/config.yaml","bootstrap":true,"error":"rpc error: code = Unauthenticated desc = invalid JWT: go-jose/go-jose: error in cryptographic primitive","time":"2022-08-01T14:57:24+08:00","message":"controlplane: error storing configuration event"}
    {"level":"info","config_file_source":"/data/gateway/config.yaml","config_change_id":"50a81274-7f1b-4b30-be68-76638c9fa913","time":"2022-08-01T14:57:24+08:00","message":"envoy: starting envoy process"}
    {"level":"info","config_file_source":"/data/gateway/config.yaml","config_change_id":"50a81274-7f1b-4b30-be68-76638c9fa913","time":"2022-08-01T14:57:24+08:00","message":"envoy: releasing envoy process for hot-reload"}
    2:57PM INF cds: add 0 cluster(s), remove 1 cluster(s) name=upstream service=envoy
    {"level":"error","config_file_source":"/data/gateway/config.yaml","config_change_id":"50a81274-7f1b-4b30-be68-76638c9fa913","error":"authenticate: 'COOKIE_SECRET' invalid cryptutil: got 0 bytes but want 32","time":"2022-08-01T14:57:24+08:00","message":"authenticate: failed to update state"}
    {"level":"info","Algorithm":"ES256","KeyID":"58b140964c909994d7b5779f83429d2eaa33a320c4c8316db2e487c854e51feb","Public Key":{"use":"sig","kty":"EC","kid":"58b140964c909994d7b5779f83429d2eaa33a320c4c8316db2e487c854e51feb","crv":"P-256","alg":"ES256","x":"lTaTn970OQW_nYC2adlbAuAT0ujUYFCRD6q4-65jRik","y":"v105eglvbBHn3YkNUQ11I9PbVU79OmMgHtLg4cAbDfI"},"time":"2022-08-01T14:57:24+08:00","message":"authorize: signing key"}
    2:57PM INF cds: added/updated 0 cluster(s), skipped 0 unmodified cluster(s) name=upstream service=envoy
    {"level":"info","pid":8545,"time":"2022-08-01T14:57:24+08:00","message":"envoy: start monitoring subprocess"}
    {"level":"error","config_file_source":"/data/gateway/config.yaml","bootstrap":true,"error":"rpc error: code = Unauthenticated desc = invalid JWT: go-jose/go-jose: error in cryptographic primitive","time":"2022-08-01T14:57:24+08:00","message":"controlplane: error storing configuration event"}
    {"level":"error","config_file_source":"/data/gateway/config.yaml","bootstrap":true,"error":"rpc error: code = Unauthenticated desc = invalid JWT: go-jose/go-jose: error in cryptographic primitive","time":"2022-08-01T14:57:24+08:00","message":"controlplane: error storing configuration event"}
    {"level":"info","address":"127.0.0.1:35373","time":"2022-08-01T14:57:24+08:00","message":"grpc: dialing"}
    {"level":"warn","provider":"","time":"2022-08-01T14:57:24+08:00","message":"no directory provider configured"}
    {"level":"error","config_file_source":"/data/gateway/config.yaml","config_change_id":"50a81274-7f1b-4b30-be68-76638c9fa913","error":"identity: provider is not defined","time":"2022-08-01T14:57:24+08:00","message":"databroker: failed to create authenticator"}
    {"level":"info","config_file_source":"/data/gateway/config.yaml","config_change_id":"50a81274-7f1b-4b30-be68-76638c9fa913","address":"127.0.0.1:35373","time":"2022-08-01T14:57:24+08:00","message":"grpc: dialing"}
    {"level":"warn","time":"2022-08-01T14:57:24+08:00","message":"proxy: configuration has no policies"}
    {"level":"error","config_file_source":"/data/gateway/config.yaml","bootstrap":true,"service":"identity_manager","syncer_id":"identity_manager","syncer_type":"","error":"rpc error: code = Canceled desc = context canceled","time":"2022-08-01T14:57:24+08:00","message":"sync"}
    {"level":"error","error":"proxy: invalid 'COOKIE_SECRET': cryptutil: got 0 bytes but want 32","time":"2022-08-01T14:57:24+08:00","message":"proxy: failed to update proxy state from configuration settings"}
    {"level":"info","config_file_source":"/data/gateway/config.yaml","config_change_id":"50a81274-7f1b-4b30-be68-76638c9fa913","service":"all","config":"databroker","checksum":"7ae1d4b5217e0a0d","time":"2022-08-01T14:57:24+08:00","message":"config: updated config"}
    {"level":"info","config_file_source":"/data/gateway/config.yaml","config_change_id":"cc76ce4a-80d3-4702-b715-c6f217f7d1f9","time":"2022-08-01T14:57:24+08:00","message":"config: file updated, reconfiguring..."}
    {"level":"warn","config_file":"/data/gateway/config.yaml","keys":["server_name"],"time":"2022-08-01T14:57:24+08:00","message":"config contained unknown keys that were ignored"}
    {"level":"info","config_file_source":"/data/gateway/config.yaml","config_change_id":"cc76ce4a-80d3-4702-b715-c6f217f7d1f9","service":"all","config":"local","checksum":"afd089203ce697db","time":"2022-08-01T14:57:24+08:00","message":"config: updated config"}
    {"level":"info","address":"127.0.0.1:35373","time":"2022-08-01T14:57:24+08:00","message":"grpc: dialing"}
    {"level":"info","config_file_source":"/data/gateway/config.yaml","config_change_id":"cc76ce4a-80d3-4702-b715-c6f217f7d1f9","time":"2022-08-01T14:57:24+08:00","message":"metrics: http server disabled"}
    2:57PM ERR listener 'https-ingress' failed to bind or apply socket options: cannot bind '[::]:443': Permission denied name=config service=envoy
    2:57PM ERR delta config for type.googleapis.com/envoy.config.listener.v3.Listener rejected: Error adding/updating listener(s) https-ingress: cannot bind '[::]:443': Permission denied name=config service=envoy
    2:57PM INF config: starting databroker config source syncer databroker_urls=["http://127.0.0.1:5443"] outbound_port=35373
    2:57PM ERR gRPC config for type.googleapis.com/envoy.config.listener.v3.Listener rejected: Error adding/updating listener(s) https-ingress: cannot bind '[::]:443': Permission denied name=config service=envoy
    2:57PM FTL error applying configuration error="Error adding/updating listener(s) https-ingress: cannot bind '[::]:443': Permission denied\n" code=13 details=null nonce_version=0 resource_type=type.googleapis.com/envoy.config.listener.v3.Listener resources_subscribe=[] resources_unsubscribe=[]
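    The "cannot bind '[::]:443': Permission denied" failures above are a privilege issue: on Linux, binding a port below 1024 requires root or the CAP_NET_BIND_SERVICE capability, which is why falling back to the default port 443 (see the additional context below) surfaces this error. The repeated "'COOKIE_SECRET' invalid cryptutil: got 0 bytes but want 32" errors indicate the cookie secret also resolved to an empty value; Pomerium expects a base64-encoded 32-byte secret. A minimal sketch of generating such a value (illustrative only, not part of the original report):

    package main

    import (
    	"crypto/rand"
    	"encoding/base64"
    	"fmt"
    )

    func main() {
    	// Produce 32 random bytes and base64-encode them, which is the shape
    	// of value the cookie_secret / shared_secret settings expect
    	// (the error above shows the loader wants 32 bytes).
    	b := make([]byte, 32)
    	if _, err := rand.Read(b); err != nil {
    		panic(err)
    	}
    	fmt.Println(base64.StdEncoding.EncodeToString(b))
    }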
    
    

    Additional context

    The initial listening port is 6443. After several modifications, the configuration file is read back as empty, so the listener falls back to the default port 443 (a sketch of one way this can happen follows the output below).

    debug function

    package main

    import (
    	"fmt"
    	"os"
    )

    // readFileBody prints the file's name, size, mode, and modification time,
    // then returns its contents; it panics on any error so a missing file is
    // immediately visible.
    func readFileBody(f string) string {
    	fi, err := os.Stat(f)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("Name=%s\tSize=%d\tMode=%d\tModTime=%s\n", fi.Name(), fi.Size(), fi.Mode(), fi.ModTime())

    	b, err := os.ReadFile(f)
    	if err != nil {
    		panic(err)
    	}
    	return string(b)
    }
    
    

    output

    Name=config.yaml	Size=0	Mode=420	ModTime=2022-08-01 15:06:34.816186326 +0800 CST
    
    OR 
    
    panic: stat /data/gateway/config.yaml: no such file or directory
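    One plausible cause of the Size=0 reading above (an assumption, not something stated in the report) is a non-atomic save: a tool that truncates config.yaml and then writes the new contents leaves a window during which a watcher re-reading the file sees it empty, while a rename-based save can briefly yield "no such file or directory". A minimal sketch of the truncate-then-write window in Go:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// os.Create truncates an existing file, so its size is 0 until the new
    	// content is written; a reader racing with this window sees an empty
    	// config, matching the Size=0 output above. (Demo file in the cwd.)
    	f, err := os.Create("config.yaml")
    	if err != nil {
    		panic(err)
    	}
    	fi, _ := os.Stat("config.yaml")
    	fmt.Println("size right after truncate:", fi.Size()) // 0

    	if _, err := f.WriteString("address: ':6443'\n"); err != nil {
    		panic(err)
    	}
    	f.Close()

    	fi, _ = os.Stat("config.yaml")
    	fmt.Println("size after write:", fi.Size()) // > 0
    }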
    
    opened by cfanbo 0
  • remove unused file

    remove unused file

    Summary

    remove unused file SECURITY.md

    Checklist

    • [ ] reference any related issues
    • [X] updated docs
    • [ ] updated unit tests
    • [ ] updated UPGRADING.md
    • [ ] add appropriate tag (improvement / bug / etc)
    • [ ] ready for review
    opened by cfanbo 0
  • chore(deps): bump node from 16 to 18

    chore(deps): bump node from 16 to 18

    Bumps node from 16 to 18.

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies docker 
    opened by dependabot[bot] 1
  • Outlier detection and Health check aren't valid keys

    Outlier detection and Health check aren't valid keys

    What happened?

    When using outlier_detection and health_checks (in any combination), they are not applied, and I see:

    {"level":"warn","config_file":"/etc/pomerium/config.yaml","keys":["routes[13].outlier_detection"],"time":"2022-07-28T12:21:21+02:00","message":"config contained unknown keys that were ignored"}
    

    in the logs.

    This was on 0.17.3 and 0.18.

    What did you expect to happen?

    Outlier detection and health checks should take effect rather than being discarded as unknown configuration keys.

    How'd it happen?

    I added the settings to the config and observed the logs above.

    I then restarted Pomerium and saw the same behavior.

    Finally, I upgraded to 0.18 and the behavior persisted.

    All the while I have 7 hosts behind the load balancer, and with round-robin I am consistently routed to hosts that should have been ejected (they are down).

    What's your environment like?

    Pomerium 0.18 on an Ubuntu 18.04 VM, installed as a binary

    What's your config.yaml?

    ...
        - from: https://example
          prefix: /dev
          outlier_detection: {"consecutive_5xx": 3}
          health_checks:
          - timeout: 10s
            interval: 240s
            healthy_threshold: 1
            unhealthy_threshold: 2
            http_health_check:
              path: "/"
          to: 
          - http://host1/dev
          - http://host2/dev
          - ...
          tls_skip_verify: true
          allow_websockets: true
          policy:
            - allow:
                or:
                  - email:
                      is: [email protected]
    

    What did you see in the logs?

    Jul 28 12:19:44 host.com pomerium[31839]: {"level":"warn","config_file":"/etc/pomerium/config.yaml","keys":["routes[x].outlier_detection"],"time":"2022-07-28T12:19:44+02:00","message":"config contained unknown keys that were ignored"}
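
    The "config contained unknown keys that were ignored" warning above is the config loader reporting route keys it did not recognize. As a general illustration of how unknown-key detection works when decoding YAML into a typed struct (a sketch using gopkg.in/yaml.v3; the Route struct and its field set are hypothetical, and this is not Pomerium's actual loader):

    package main

    import (
    	"bytes"
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    // Route declares only the keys this sketch knows about; anything else is "unknown".
    type Route struct {
    	From string   `yaml:"from"`
    	To   []string `yaml:"to"`
    }

    func main() {
    	data := []byte("from: https://example\nto: [http://host1/dev]\noutlier_detection: {consecutive_5xx: 3}\n")

    	dec := yaml.NewDecoder(bytes.NewReader(data))
    	dec.KnownFields(true) // fail on keys the struct does not declare

    	var r Route
    	if err := dec.Decode(&r); err != nil {
    		fmt.Println("unknown key:", err) // e.g. field outlier_detection not found in type main.Route
    	}
    }

    With KnownFields(true) the decode fails outright; a loader can instead collect such fields and only warn, which matches the "ignored" behavior in the log above.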
    
    NeedsInvestigation 
    opened by ajcollett 0
Releases(v0.18.0)