
Overview





What is Pixie?


Pixie gives you instant visibility by providing access to metrics, events, traces, and logs without changing code.

Try our community beta and join our community on Slack.


Table of contents

Quick Start

Review Pixie's requirements to make sure that your Kubernetes cluster is supported.

Signup

Visit our product page and sign up with your Google account.

Install CLI

Run the command below:

bash -c "$(curl -fsSL https://withpixie.ai/install.sh)"

Or see our Installation Docs to install Pixie using Docker, Debian, RPM or with the latest binary.

(optional) Setup a sandbox

If you don't already have a K8s cluster available, you can use Minikube to set up a local environment:

  • On Linux, run minikube start --cpus=4 --memory=6000 --driver=kvm2 -p=. The default docker driver is not currently supported, so using the kvm2 driver is important.

  • On Mac, run minikube start --cpus=4 --memory=6000 -p=.

More detailed instructions are available here.

Start a demo app:

🚀 Deploy Pixie

Use the CLI to deploy the Pixie Platform in your K8s cluster by running:

px deploy

Alternatively, you can deploy with YAML or Helm.


Check out our install guides and walkthrough videos for alternate install schemes.

Get Instant Auto-Telemetry

Run scripts with px CLI



Service SLA:

px run px/service_stats


Node health:

px run px/node_stats


MySQL metrics:

px run px/mysql_stats


Explore more scripts by running:

px scripts list


Check out our pxl_scripts folder for more examples.


View machine generated dashboards with Live views


The Pixie Platform auto-generates "Live View" dashboards to visualize script results.

You can view them by clicking on the URLs printed by px or by visiting:

https://work.withpixie.ai/live


Pipe Pixie dust into any tool


You can transform and pipe your script results into any other system or workflow by consuming px results with tools like jq.

Example with http_data:

px run px/http_data -o json | jq -r .

More examples here
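Any tool or language that can read JSON lines will work. Below is a minimal Python sketch of such a consumer; the field names (resp_status, req_path) are assumptions about the http_data output, not a confirmed schema:

```python
import json

# Hypothetical consumer for `px run px/http_data -o json` output (one JSON
# record per line). Field names ("resp_status", "req_path") are assumptions;
# check the actual http_data table schema in your Pixie version.
def error_requests(lines):
    """Yield (req_path, resp_status) for 4xx/5xx responses."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        if record.get("resp_status", 0) >= 400:
            yield record.get("req_path", ""), record["resp_status"]
```

Feeding it sys.stdin from a `px run px/http_data -o json` pipe would stream error responses into any downstream workflow.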


To see more script examples and learn how to write your own, check out our docs for more guides.


Contributing

Refer to our contribution guide!

Under the Hood

Three fundamental innovations enable Pixie's magical developer experience:

Progressive Instrumentation: Pixie Edge Modules (“PEMs”) collect full-body request traces (via eBPF), system metrics & K8s events without the need for code changes and at less than 5% overhead. Custom metrics, traces & logs can be integrated into the Pixie Command Module.

In-Cluster Edge Compute: The Pixie Command Module is deployed in your K8s cluster to isolate data storage and computation within your environment for drastically better intelligence, performance & security.

Command Driven Interfaces: Programmatically access data via the Pixie CLI and Pixie UI, which are designed from the ground up to let you run analysis & debug scenarios faster than any other developer tool.

For more information on the Pixie Platform's architecture, check out our docs or overview deck.

Resources

About Us

Pixie was started by Pixie Labs Inc., a San Francisco-based startup. Our north star is to build a new generation of intelligent products that empower developers to engineer the future. We were acquired by New Relic in 2020.

New Relic, Inc. open sourced Pixie in April 2021.

License

Pixie is licensed under Apache License, Version 2.0.

Issues
  • Self-Hosted Pixie Install Script

    Self-Hosted Pixie Install Script

    Is your feature request related to a problem? Please describe. We would like to have an install experience for the self-hosted version of Pixie that is as easy to use as the one hosted on withpixie.ai.

    Additional context Our team has been busy at work this month open sourcing Pixie's source code, docs, website, and other assets. We are also actively applying to be a CNCF sandbox project!

    One of our last remaining items is to publish an install script to deploy a self-hosted version of Pixie.

    Who offers a hosted version of Pixie?

    New Relic currently offers a 100% free hosted version of Pixie Cloud. This hosting has no contingencies and will be offered indefinitely to the Pixie community. All the code used for hosting is open source, including our production manifest files.

    What will the Self-Hosted install script do?

    The Self-Hosted install script will deploy Pixie Cloud so that you can use Pixie without any external dependencies. This is the exact version of Pixie Cloud we deploy, so it'll behave exactly like the hosted version, but will require your own management and configuration.

    What is the timeline? 

    Good question. :) We had planned to open source this script by 5/4. Unfortunately, we didn’t make it. We need more time to ensure that deploying Pixie Cloud with the script is just as easy as installing the hosted version of Pixie (in < 2 minutes!).

    But I really want to run a Self-Hosted Pixie...now!

    Technically you can build and run a self-hosted Pixie using Skaffold. Check out:

    https://github.com/pixie-labs/pixie/blob/main/skaffold/skaffold_cloud.yaml https://github.com/pixie-labs/pixie/tree/main/k8s/cloud https://github.com/pixie-labs/pixie/tree/main/k8s/cloud_deps

    These directions are not fully documented and the team is choosing to focus on quickly delivering the self-hosted install script. We'll constantly be iterating on the documentation to make the project more open source friendly.

    opened by zasgar 20
  • [Doc issue] no ingress installed so dev_dns_updater did nothing

    [Doc issue] no ingress installed so dev_dns_updater did nothing

    Describe the bug I've been following the documentation to deploy Pixie Cloud; the setup-dns section is supposed to update /etc/hosts if there are any ingress rules in the K8s cluster. But there weren't any!

    ➜  pixie git:(main) ✗ kubectl get ing
    No resources found in default namespace.
    ➜  pixie git:(main) ✗ kubectl get ing -n plc
    No resources found in plc namespace.
    

    And, of course, it didn't change anything:

    ➜  pixie git:(main) ✗ ./dev_dns_updater --domain-name="dev.withpixie.dev"  --kubeconfig=$HOME/.kube/config --n=plc
    INFO[0000] DNS Entries                                   entries="dev.withpixie.dev, work.dev.withpixie.dev, segment.dev.withpixie.dev, docs.dev.withpixie.dev" service=cloud-proxy-service
    INFO[0000] DNS Entries                                   entries=cloud.dev.withpixie.dev service=vzconn-service
    

    It didn't change the /etc/hosts file!

    To Reproduce

    Expected behavior Should update /etc/hosts and we could visit dev.withpixie.dev in browser.

    Screenshots

    Logs

    App information (please complete the following information):

    • Pixie version: master branch
    • K8s cluster version: minikube on macOS 10.15.7 k8s version v1.22.2

    Additional context

    opened by Colstuwjx 12
  • Compile error, missing HTTP Tables.

    Compile error, missing HTTP Tables.

    Describe the bug Cannot run any scripts; the http_events table is not found.

    Script compilation failed: L222 : C22  Table 'http_events' not found.\n
    

    To Reproduce Steps to reproduce the behavior: Install fresh version of Pixie on Minikube Cluster

    Expected behavior Pixie scripts to execute

    Screenshots

    Logs Please attach the logs by running the following command:

    ./px collect-logs (See Zip File) 
    

    pixie_logs_20210505024739.zip App information (please complete the following information):

    • Pixie version: 0.5.3+Distribution.0ff53f6.20210503183144.1
    • K8s cluster version: v1.20.2
    opened by WarpWing 12
  • px deploy failed flatcar linux kubernetes cluster

    px deploy failed flatcar linux kubernetes cluster

    Describe the bug $ px deploy (failed)

    To Reproduce Steps to reproduce the behavior:

    1. Go to '...'
    2. Click on '....'
    3. Scroll down to '....'
    4. See error fatal failed to fetch vizier versions error=open /home/core/.pixie/auth.json: no such file or directory

    Expected behavior Pixie should be running properly.

    Screenshots

    Logs Please attach the logs by running the following command: ./px collect-logs

    
    **App information (please complete the following information):**
    - Pixie version
    - K8s cluster version v1.19.2
    
    fatal failed to fetch vizier versions error=open /home/core/.pixie/auth.json: no such file or directory
    
    Please help
    
    
    **Additional context**
    Add any other context about the problem here.
    
    opened by 4ss3g4f 9
  • perf profiler BPF program fails to install

    perf profiler BPF program fails to install

    Describe the bug Error: "Compiler error on line 324, column 22: Table 'stack_traces.beta' not found."

    Environment:

    • kubeadm-installed cluster running on bare-metal Ubuntu 20.04, kernel 5.15

    To Reproduce Steps to reproduce the behavior:

    1. Deploy: px deploy
    2. Click on script px/pod or px/node

    Expected behavior Get pod / node dashboard

    Screenshots pixie-bug

    Logs Please attach the logs by running the following command:

    ./px collect-logs
    

    pixie_logs_20220103200225.zip

    App information (please complete the following information):

    • Pixie version 0.9.16
    • K8s cluster version 1.21.5
    • Node Kernel version 5.15
    • Browser version ff 95.0.1
    opened by knfoo 8
  • Add MatchRegexRule UDF

    Add MatchRegexRule UDF

    Add MatchRegexRule UDF for use in security PxL scripts.

    • Add a UDF, MatchRegexRule, that takes an encoded JSON of regular expression rules and a string. It returns the first rule that matches the string, or an empty string if no rule matches.
    • Add unit tests for MatchRegexRule.
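    A Python sketch of the semantics described above (illustrative only, not the actual C++ UDF implementation):

```python
import json
import re

# Illustrative semantics of the proposed MatchRegexRule UDF, not the
# actual C++ implementation: rules_json maps rule names to regex
# patterns; return the name of the first rule whose pattern matches
# the input string, or "" if none match.
def match_regex_rule(rules_json: str, value: str) -> str:
    rules = json.loads(rules_json)
    for name, pattern in rules.items():
        if re.search(pattern, value):
            return name
    return ""
```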
    opened by elaguerta 7
  • Add more detailed instructions to the dev docs

    Add more detailed instructions to the dev docs

    Improve the DEVELOPMENT.md documentation.

    • Include prerequisites.
    • Add an example to Vizier section of running unit tests.
    • Link to instructions for spinning up a Minikube cluster to deploy onto.
    • Clarify the differences between Vizier and Pixie Cloud.
    • Add workaround instructions for failed px deploy.
    • Note when and where various commands in the instructions should be run and explain what they do in greater detail.
    opened by hmstepanek 7
  • Enhance openssl tracing to fallback to function pointer addrs when dlsym fails

    Enhance openssl tracing to fallback to function pointer addrs when dlsym fails

    This is the next step for #407.

    The netty-tcnative shared library does not export OpenSSL_num_version as a dynamic symbol. This means the symbol ends up in the symtab rather than dynsym. The former is private and incompatible with dlsym, while the latter is public and is supported by the existing openssl tracing. See this Slack thread for more background.

    This PR needs some work, but it's far enough along that I wanted to seek feedback on my approach.

    Testing

    • The openssl_trace_bpf_test test cases pass
      • [x] nginx openssl 1.1.1
      • [x] nginx openssl 1.1.0
      • [x] Node 12.3.1
      • [x] Node 14.18.1

    Assuming this approach is how we want to proceed, the following will need to be addressed:

    Todo

    • [x] Parameterized openssl_trace_bpf_test to test the dlsym and RawFptrManager implementations of openssl tracing
    • [x] ~Write unit tests for RawFptrManager~ -- I think our bpf test is sufficient for now and the scaffolding required to test this would be quite a bit of work so I'm holding off on this for now
    • [x] ~Clean up the code added to ProcParser (much of it is duplicated with another function)~ -- not sure this is necessary
    • [x] ~Make ProcessMap more efficient (see TODO comment about using octal numbers)~ -- wasn't sure if doing so would be more efficient. Open to any feedback you have here
    • [x] Fix remaining openssl_trace_bpf_test test cases. They currently cause seg faults. Each test case passes individually, but when more than one case is enabled it results in seg faults (pastebin)
    opened by ddelnano 6
  • Give PEMs general tolerations so that they can deploy on tainted nodes.

    Give PEMs general tolerations so that they can deploy on tainted nodes.

    Is your feature request related to a problem? Please describe. Some clusters use taints and tolerations for workload isolation or other scheduling concerns. This can prevent the vizier-pem pods from scheduling to some nodes.

    Describe the solution you'd like I'd like a flag to enable the vizier-pems to schedule everywhere. Something like a --tolerate flag on the deploy subcommand that creates the vizier-pem daemonset with these tolerations:

          - effect: NoSchedule
            operator: Exists
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoExecute
            operator: Exists
    

    Describe alternatives you've considered I've patched the daemonset after deployment to get around this. If this is deployed as a permanent fixture on the cluster, this would also be handled elsewhere.

    kind/feature 
    opened by elsonrodriguez 6
  • Auto-close the "you can now close this window" browser page

    Auto-close the "you can now close this window" browser page

    Is your feature request related to a problem? Please describe. No

    Describe the solution you'd like Add a javascript tag or something to automatically close the window after a short delay, so I don't have to actually interact with my browser during the setup. Shouldn't be complicated to add as you already control the page that's shown after the authorization :)

    Describe alternatives you've considered None

    ┆Issue is synchronized with this Jira Task by Unito

    opened by XaF 6
  • Unable to deploy on minikube with k8s v1.11

    Unable to deploy on minikube with k8s v1.11

    // filed on behalf of @XaF

    Describe the bug UI does not refresh to console view after deployment.

    To Reproduce Steps to reproduce the behavior:

    1. Download and authorize CLI in minikube
    2. Run deployment command
    3. Pods in the pl namespace start running with flakiness
    4. UI does not refresh to console view.

    Expected behavior After deployment, UI should refresh to console view to execute queries

    Screenshots Shared via zoom call

    Logs Please attach the logs by running the following command:

    pixie_logs_20200221141224 (1).zip

    App information (please complete the following information):

    • Pixie version: v0.1.16
    • K8s cluster version: v1.11

    Additional context n/a

    ┆Issue is synchronized with this Jira Task by Unito

    kind/bug 
    opened by ishanmkh 6
  • Add Memory Limiter in Kelvin

    Add Memory Limiter in Kelvin

    Is your feature request related to a problem? Please describe. Some users are applying a memory limit on Kelvin to avoid memory spikes when running large queries. Using the K8s memory limit will cause Kelvin to be OOMKilled when it runs these larger queries. This can make it appear that the Pixie deployment is unstable.

    Describe the solution you'd like Rather than trying to process all incoming data from the PEMs, Kelvin should respect the memory limit and only process results up to what it can fit in memory. This will prevent Kelvin from being OOMKilled. In the future we should also show a warning message to inform users that they are not seeing the full results.
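    A hedged Python sketch of the requested behavior (illustrative only, not Kelvin's actual implementation):

```python
# Illustrative sketch of the requested behavior, not Kelvin's actual code:
# accumulate incoming result batches only up to a byte budget, then stop
# and flag that results were truncated instead of letting the process be
# OOMKilled by the K8s memory limit.
def collect_with_limit(batches, limit_bytes):
    kept, used, truncated = [], 0, False
    for batch in batches:
        if used + len(batch) > limit_bytes:
            truncated = True  # surface a warning to the user here
            break
        kept.append(batch)
        used += len(batch)
    return kept, truncated
```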

    kind/feature priority/important-soon triage/accepted area/query 
    opened by aimichelle 0
  • Allow BPFTrace scripts to run indefinitely

    Allow BPFTrace scripts to run indefinitely

    Is your feature request related to a problem? Please describe. Currently, Pixie allows users to deploy bpftrace scripts to collect custom telemetry data that isn't offered by Pixie out of the box (e.g. oom_kill). But there isn't a way to make Pixie continuously look for those traces in the background indefinitely.

    Describe the solution you'd like Allow creating tracepoints without a TTL.

    Describe alternatives you've considered Currently, the only workaround is periodically calling pxtrace.UpsertTracepoint with the relevant data to reset the TTL. But this isn't really user-friendly and introduces a lot of friction into developer workflows.
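    The workaround above can be sketched as a refresh loop; here `upsert` stands in for the real pxtrace.UpsertTracepoint call, which runs inside a PxL script rather than plain Python:

```python
import time

# Sketch of the TTL-reset workaround described above. `upsert` stands in
# for the real pxtrace.UpsertTracepoint call; invoking it again before
# the TTL expires keeps the tracepoint alive.
def keep_tracepoint_alive(upsert, ttl_secs, iterations):
    for _ in range(iterations):
        upsert()                  # re-upserting resets the TTL
        time.sleep(ttl_secs / 2)  # refresh well before expiry
```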

    Additional context My use case is to have a custom kprobe write data into a Pixie table and extract that data using the OpenTelemetry extension for later processing.

    opened by MrSupiri 0
  • BPFTrace Uprobe Support

    BPFTrace Uprobe Support

    Is your feature request related to a problem? Please describe. The current bpftrace feature allows attaching kprobes to kernel functions, but not to any user space functions.

    Describe the solution you'd like BPFTrace support for deploying custom uprobes to user space functions. This would enable dynamic tracing of any user space applications or libraries. It should be able to deploy to all the nodes with the target UPID and tracepoint.

    opened by chengruizhe 0
  • Login with GitHub Support

    Login with GitHub Support

    Is your feature request related to a problem? Please describe. Currently we are only able to sign in using email or a Google account. It would be great to have a GitHub provider on the login page: $ px auth login

    Describe the solution you'd like Add GitHub login support at https://work.withpixie.ai/auth/callback

    Describe alternatives you've considered -

    Additional context -

    opened by Dentrax 0
  • Add thriftmux (finagle) openssl trace bpf test and get it passing by using a locally built netty, netty-tcnative and finagle

    Add thriftmux (finagle) openssl trace bpf test and get it passing by using a locally built netty, netty-tcnative and finagle

    This PR is a proof of concept for the remaining functionality needed to support netty TLS tracing (#407). Since netty and netty-tcnative have been released with the functionality (https://github.com/netty/netty/pull/12438 and netty-tcnative#731), I wanted to verify that using finagle with these new versions results in a passing thriftmux mtls test.

    Since finagle is not using the latest netty yet, I decided to verify this by building an example twitter service with the updated netty libraries and current finagle and copied all the necessary jars to the src/stirling/source_connectors/socket_tracer/testing/containers/thriftmux/libs directory. This is slightly hacky but validates that once finagle upgrades its dependencies we will have fully working mux mtls tracing 🎉

    Testing

    - New test verifies that TLS tracing is successful (output below)
    v[email protected]:/vagrant$ ./scripts/sudo_bazel_run.sh -c dbg src/stirling/source_connectors/socket_tracer:thriftmux_openssl_trace_bpf_test
    Bazel options: -c dbg
    Bazel target: src/stirling/source_connectors/socket_tracer:thriftmux_openssl_trace_bpf_test
    Run args:
    Pass through env. vars:
    WARNING: option '--remote_download_outputs' was expanded to from both option '--remote_download_minimal' (source /vagrant/.bazelrc) and option '--remote_download_toplevel' (source command line options)
    INFO: Build options --action_env, --compilation_mode, --host_action_env, and 2 more have changed, discarding analysis cache.
    INFO: Analyzed target //src/stirling/source_connectors/socket_tracer:thriftmux_openssl_trace_bpf_test (547 packages loaded, 54312 targets configured).
    INFO: Found 1 target...
    Target //src/stirling/source_connectors/socket_tracer:thriftmux_openssl_trace_bpf_test up-to-date:
      bazel-bin/src/stirling/source_connectors/socket_tracer/thriftmux_openssl_trace_bpf_test
    INFO: Elapsed time: 32.782s, Critical Path: 10.79s
    INFO: 3 processes: 1 internal, 2 linux-sandbox.
    INFO: Build completed successfully, 3 total actions
    sudo: bazel-out/k8-fastbuild/bin/src/stirling/source_connectors/socket_tracer/thriftmux_openssl_trace_bpf_test: command not found
    [email protected]:/vagrant$ sudo -E  bazel-bin/src/stirling/source_connectors/socket_tracer/thriftmux_openssl_trace_bpf_test
    I20220617 06:01:26.235352 257027 env.cc:47] Started: bazel-bin/src/stirling/source_connectors/socket_tracer/thriftmux_openssl_trace_bpf_test
    [==========] Running 1 test from 1 test suite.
    [----------] Global test environment set-up.
    [----------] 1 test from OpenSSLTraceRawFptrsTest/0, where TypeParam = px::stirling::ThriftMuxServerContainerWrapper
    [ RUN      ] OpenSSLTraceRawFptrsTest/0.mtls_thriftmux_client
    I20220617 06:01:27.769742 257027 container_runner.cc:43] Loaded image: bazel/src/stirling/source_connectors/socket_tracer/testing/containers/thriftmux:server_image
    I20220617 06:01:27.769810 257027 container_runner.cc:121] docker run --rm --pid=host --name=thriftmux_server_81183764575614 bazel/src/stirling/source_connectors/socket_tracer/testing/containers/thriftmux:server_image --use-tls true
    I20220617 06:01:28.086665 257027 container_runner.cc:160] Container thriftmux_server_81183764575614 status: running
    I20220617 06:01:28.122049 257027 container_runner.cc:191] Container thriftmux_server_81183764575614 process PID: 257101
    I20220617 06:01:28.122181 257027 container_runner.cc:193] Container thriftmux_server_81183764575614 waiting for log message: Finagle version
    I20220617 06:01:28.163548 257027 container_runner.cc:205] Container thriftmux_server_81183764575614 status: running
    I20220617 06:01:28.163583 257027 container_runner.cc:218] Container thriftmux_server_81183764575614 not in ready state, will try again (60 attempts remaining).
    I20220617 06:01:29.224248 257027 container_runner.cc:205] Container thriftmux_server_81183764575614 status: running
    I20220617 06:01:29.224375 257027 container_runner.cc:218] Container thriftmux_server_81183764575614 not in ready state, will try again (59 attempts remaining).
    I20220617 06:01:30.260349 257027 container_runner.cc:205] Container thriftmux_server_81183764575614 status: running
    I20220617 06:01:30.260419 257027 container_runner.cc:241] Container thriftmux_server_81183764575614 is ready.
    Warning: use -cacerts option to access cacerts keystore
    Certificate was added to keystore
    I20220617 06:01:30.932763 257027 thriftmux_openssl_trace_bpf_test.cc:105] keytool -importcert command output: ''
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by com.twitter.jvm.Hotspot (file:/app/px/src/stirling/source_connectors/socket_tracer/testing/containers/thriftmux/libs/util_util-jvm_src_main_scala_com_twitter_jvm_jvm.jar) to field sun.management.ManagementFactoryHelper.jvm
    WARNING: Please consider reporting this to the maintainers of com.twitter.jvm.Hotspot
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.Init$ $anonfun$once$1
    INFO: Finagle version ? (rev=?) built at ?
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.FixedInetResolver$ factory$lzycompute
    INFO: Successfully loaded a fixed inet resolver: [email protected]51
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver inetResolver$lzycompute
    INFO: Using default inet resolver
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.InetResolver$ factory$lzycompute
    INFO: Successfully loaded an inet resolver: [email protected]
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[inet] = com.twitter.finagle.InetResolver([email protected])
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[fixedinet] = com.twitter.finagle.FixedInetResolver([email protected])
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[neg] = com.twitter.finagle.NegResolver$([email protected])
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[nil] = com.twitter.finagle.NilResolver$([email protected])
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[fail] = com.twitter.finagle.FailResolver$([email protected])
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[local] = com.twitter.finagle.mdns.LocalResolver([email protected])
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[mdns] = com.twitter.finagle.mdns.MDNSResolver([email protected])
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[zk] = com.twitter.finagle.zookeeper.ZkResolver([email protected])
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[zk2] = com.twitter.finagle.serverset2.Zk2Resolver([email protected])
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[twitter] = com.twitter.server.TwitterResolver([email protected])
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[serverset] = com.twitter.server.ServerSetResolver([email protected])
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[flag] = com.twitter.server.FlagResolver([email protected])
    Jun 17, 2022 6:01:33 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[configbusdns] = com.twitter.wilyns.common.ConfigBusDnsSrvResolver([email protected])
    Jun 17, 2022 6:01:34 AM com.twitter.finagle.tracing.DefaultTracer$ $anonfun$new$1
    INFO: Tracer: com.twitter.finagle.tracing.TwitterTracer
    Jun 17, 2022 6:01:34 AM com.twitter.finagle.tracing.DefaultTracer$ $anonfun$new$1
    INFO: Tracer: com.twitter.server.tracer.ServiceNamingTracer
    Jun 17, 2022 6:01:34 AM com.twitter.finagle.tracing.DefaultTracer$ $anonfun$new$1
    INFO: Tracer: com.twitter.server.tracer.PerfTracer
    I20220617 06:01:36.604614 257027 thriftmux_openssl_trace_bpf_test.cc:112] thriftmux client command output: '257223
    06:01:33.020 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
    06:01:33.023 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
    06:01:33.023 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
    06:01:33.037 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
    06:01:33.037 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 11
    06:01:33.038 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
    06:01:33.038 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
    06:01:33.039 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.storeFence: available
    06:01:33.039 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
    06:01:33.040 [main] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: unavailable: Reflective setAccessible(true) disabled
    06:01:33.040 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
    06:01:33.041 [main] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable: class io.netty.util.internal.PlatformDependent0$7 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @1433046b
    06:01:33.042 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): unavailable
    06:01:33.042 [main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available
    06:01:33.043 [main] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 2084569088 bytes (maybe)
    06:01:33.043 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
    06:01:33.044 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
    06:01:33.045 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: -1 bytes
    06:01:33.045 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: -1
    06:01:33.046 [main] DEBUG io.netty.util.internal.CleanerJava9 - java.nio.ByteBuffer.cleaner(): available
    06:01:33.046 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
    06:01:33.048 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 8
    06:01:33.048 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 8
    06:01:33.048 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
    06:01:33.048 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 7
    06:01:33.048 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 1048576
    06:01:33.048 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
    06:01:33.048 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
    06:01:33.049 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
    06:01:33.049 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
    06:01:33.049 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimIntervalMillis: 0
    06:01:33.049 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: false
    06:01:33.049 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
    06:01:33.054 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
    06:01:33.054 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
    06:01:33.078 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.workdir: /tmp (io.netty.tmpdir)
    06:01:33.078 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.deleteLibAfterLoading: true
    06:01:33.078 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.tryPatchShadedId: true
    06:01:33.078 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.detectNativeLibraryDuplicates: true
    06:01:33.094 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - Successfully loaded the library /tmp/libnetty_transport_native_epoll_x86_644238070809447770059.so
    06:01:33.097 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
    06:01:33.097 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
    06:01:33.099 [main] DEBUG io.netty.util.NetUtilInitializations - Loopback interface: lo (lo, 127.0.0.1)
    06:01:33.100 [main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 4096
    06:01:33.807 [main] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: [email protected]
    06:01:33.817 [main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
    06:01:34.680 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 4
    06:01:34.781 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 257239 (auto-detected)
    06:01:34.789 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 02:42:ac:ff:fe:11:00:02 (auto-detected)
    06:01:34.836 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
    06:01:34.836 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
    06:01:34.836 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
    06:01:34.986 [finagle/netty4-1-1] DEBUG io.netty.util.internal.NativeLibraryLoader - Successfully loaded the library /tmp/libnetty_tcnative_linux_x86_6413765898396419730782.so
    06:01:34.993 [finagle/netty4-1-1] DEBUG io.netty.util.internal.NativeLibraryLoader - Loaded library with name 'netty_tcnative_linux_x86_64'
    06:01:34.993 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - Initialize netty-tcnative using engine: 'default'
    06:01:34.993 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - netty-tcnative using native library: BoringSSL
    06:01:35.122 [finagle/netty4-1-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
    06:01:35.122 [finagle/netty4-1-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
    06:01:35.123 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@...
    06:01:35.143 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@...
    06:01:35.147 [finagle/netty4-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: disabled
    06:01:35.147 [finagle/netty4-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: disabled
    06:01:35.147 [finagle/netty4-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.chunkSize: disabled
    06:01:35.147 [finagle/netty4-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.blocking: disabled
    06:01:35.158 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 => ECDHE-ECDSA-AES128-GCM-SHA256
    06:01:35.158 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 => ECDHE-ECDSA-AES128-GCM-SHA256
    06:01:35.159 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 => ECDHE-RSA-AES128-GCM-SHA256
    06:01:35.159 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_AES_128_GCM_SHA256 => ECDHE-RSA-AES128-GCM-SHA256
    06:01:35.159 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 => ECDHE-ECDSA-AES256-GCM-SHA384
    06:01:35.159 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 => ECDHE-ECDSA-AES256-GCM-SHA384
    06:01:35.159 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 => ECDHE-RSA-AES256-GCM-SHA384
    06:01:35.159 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_AES_256_GCM_SHA384 => ECDHE-RSA-AES256-GCM-SHA384
    06:01:35.160 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-ECDSA-CHACHA20-POLY1305
    06:01:35.160 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-ECDSA-CHACHA20-POLY1305
    06:01:35.160 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-RSA-CHACHA20-POLY1305
    06:01:35.160 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-RSA-CHACHA20-POLY1305
    06:01:35.160 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-PSK-CHACHA20-POLY1305
    06:01:35.161 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-PSK-CHACHA20-POLY1305
    06:01:35.161 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA => ECDHE-ECDSA-AES128-SHA
    06:01:35.161 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_AES_128_CBC_SHA => ECDHE-ECDSA-AES128-SHA
    06:01:35.161 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA => ECDHE-RSA-AES128-SHA
    06:01:35.161 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_AES_128_CBC_SHA => ECDHE-RSA-AES128-SHA
    06:01:35.161 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA => ECDHE-PSK-AES128-CBC-SHA
    06:01:35.162 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_PSK_WITH_AES_128_CBC_SHA => ECDHE-PSK-AES128-CBC-SHA
    06:01:35.162 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA => ECDHE-ECDSA-AES256-SHA
    06:01:35.162 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_AES_256_CBC_SHA => ECDHE-ECDSA-AES256-SHA
    06:01:35.162 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA => ECDHE-RSA-AES256-SHA
    06:01:35.162 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_AES_256_CBC_SHA => ECDHE-RSA-AES256-SHA
    06:01:35.163 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA => ECDHE-PSK-AES256-CBC-SHA
    06:01:35.163 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_PSK_WITH_AES_256_CBC_SHA => ECDHE-PSK-AES256-CBC-SHA
    06:01:35.163 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_AES_128_GCM_SHA256 => AES128-GCM-SHA256
    06:01:35.163 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_AES_128_GCM_SHA256 => AES128-GCM-SHA256
    06:01:35.163 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_AES_256_GCM_SHA384 => AES256-GCM-SHA384
    06:01:35.163 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_AES_256_GCM_SHA384 => AES256-GCM-SHA384
    06:01:35.163 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_AES_128_CBC_SHA => AES128-SHA
    06:01:35.164 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_AES_128_CBC_SHA => AES128-SHA
    06:01:35.164 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_PSK_WITH_AES_128_CBC_SHA => PSK-AES128-CBC-SHA
    06:01:35.164 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_PSK_WITH_AES_128_CBC_SHA => PSK-AES128-CBC-SHA
    06:01:35.164 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_AES_256_CBC_SHA => AES256-SHA
    06:01:35.164 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_AES_256_CBC_SHA => AES256-SHA
    06:01:35.164 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_PSK_WITH_AES_256_CBC_SHA => PSK-AES256-CBC-SHA
    06:01:35.164 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_PSK_WITH_AES_256_CBC_SHA => PSK-AES256-CBC-SHA
    06:01:35.164 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_3DES_EDE_CBC_SHA => DES-CBC3-SHA
    06:01:35.164 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_3DES_EDE_CBC_SHA => DES-CBC3-SHA
    06:01:35.165 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - Supported protocols (OpenSSL): [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2, TLSv1.3]
    06:01:35.165 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - Default cipher suites (OpenSSL): [TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256]
    06:01:35.283 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@...
    06:01:35.363 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@...
    06:01:36.007 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.SslHandler - [id: 0x7fde820f, L:/127.0.0.1:36806 - R:localhost/127.0.0.1:8080] HANDSHAKEN: protocol:TLSv1.3 cipher suite:TLS_AES_128_GCM_SHA256
    StringString
    '
    I20220617 06:01:36.607581 257027 thriftmux_openssl_trace_bpf_test.cc:121] Client PID: 257223
    I20220617 06:01:37.609087 257027 source_connector.cc:36] Initializing source connector: socket_trace_connector
    I20220617 06:01:37.609162 257027 linux_headers.cc:209] Found Linux kernel version using .note section.
    I20220617 06:01:37.609176 257027 linux_headers.cc:90] Obtained Linux version string from `uname`: 5.15.0-30-generic
    I20220617 06:01:37.609185 257027 linux_headers.cc:615] Detected kernel release (uname -r): 5.15.0-30-generic
    I20220617 06:01:37.609218 257027 bcc_wrapper.cc:120] Using linux headers found at /lib/modules/5.15.0-30-generic/build for BCC runtime.
    prog tag mismatch e91652313e6ab948 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 2887c6ef23e553c6 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch ee2da2c350e72e9a 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch a4e7e27b97ff94b5 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch ee2da2c350e72e9a 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch a4e7e27b97ff94b5 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch cf3d1d11587abb60 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 3441acd7b1a21bec 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch c118b7b91747c558 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 3d2b8ea46c3385b6 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 4da5dbaf076df4c5 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 12852d91cbb10f65 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch ee8480525c956783 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 62d6101fb5258d1f 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 8dc25966a4821580 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 9111f910d2db6506 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 4bc57be3c99e8626 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 5d19c44243e75e39 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch f5223247cdd87783 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch b0648cdc47f4be7b 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 14c7005d68f38075 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch bebc625985750281 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 5bda89e43466f4e5 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch a9ff8648b5c97d06 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch df7b9a81b75e34d9 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch f7cf68a783364cb0 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 3e3da39bde6b67d 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 46f06d74a4d7cb2a 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch b521cb51a1f8971d 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch bb099a101fe1b28f 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 2612a1664ad7bf74 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch c1a0835c8ad42386 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 5ea984cbc76e0c12 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 7c448da0541e65ce 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 720b7acebd90e415 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch d39e07fecd0a5a32 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 81a265485761afc8 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 81a265485761afc8 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    I20220617 06:01:46.777340 257027 socket_trace_connector.cc:386] Number of kprobes deployed = 40
    I20220617 06:01:46.777384 257027 socket_trace_connector.cc:387] Probes successfully deployed.
    I20220617 06:01:46.777415 257027 socket_trace_connector.cc:329] Initializing perf buffers with ncpus=2 and scaling_factor=0.9
    I20220617 06:01:46.777447 257027 socket_trace_connector.cc:318] Total perf buffer usage for kData buffers across all cpus: 75497472
    I20220617 06:01:46.777462 257027 socket_trace_connector.cc:318] Total perf buffer usage for kControl buffers across all cpus: 3963614
    I20220617 06:01:46.777473 257027 bcc_wrapper.cc:345] Opening perf buffer: socket_data_events [requested_size=18874368 num_pages=8192 size=33554432] (per cpu)
    I20220617 06:01:46.783172 257027 bcc_wrapper.cc:345] Opening perf buffer: socket_control_events [requested_size=943718 num_pages=256 size=1048576] (per cpu)
    I20220617 06:01:46.783730 257027 bcc_wrapper.cc:345] Opening perf buffer: conn_stats_events [requested_size=943718 num_pages=256 size=1048576] (per cpu)
    I20220617 06:01:46.784673 257027 bcc_wrapper.cc:345] Opening perf buffer: mmap_events [requested_size=94371 num_pages=32 size=131072] (per cpu)
    I20220617 06:01:46.785048 257027 bcc_wrapper.cc:345] Opening perf buffer: go_grpc_events [requested_size=18874368 num_pages=8192 size=33554432] (per cpu)
    I20220617 06:01:46.791344 257027 socket_trace_connector.cc:391] Number of perf buffers opened = 5
    prog tag mismatch 63a2a8f2f5eb9582 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    W20220617 06:01:46.833567 257277 uprobe_symaddrs.cc:669] Unable to find openssl symbol 'OpenSSL_version_num' using dlopen/dlsym. Attempting to find address manually for pid 257101
    W20220617 06:01:46.835098 257277 uprobe_symaddrs.cc:676] Unable to find openssl symbol 'OpenSSL_version_num' with raw function pointer: Not Found : Failed to find map entry for pid: 257027 and path: /var/lib/docker/overlay2/73d6213fd5e6aceb3ddc5118b65fbf0938f985013d27267281d3398d67cf8d81/merged/tmp/libnetty_tcnative_linux_x86.so vmem start: 7f8765c00000 and segment offset: 0
    prog tag mismatch 65c30a2ea61365f9 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 3048781613de46ad 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 887bce9e073038ea 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 73d5785c6881bdd2 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 16fce0d10c30c45a 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    I20220617 06:01:47.614797 257277 uprobe_manager.cc:743] Number of uprobes deployed = 5
    Warning: use -cacerts option to access cacerts keystore
    I20220617 06:01:48.128255 257027 thriftmux_openssl_trace_bpf_test.cc:105] keytool -importcert command output: 'keytool error: java.lang.Exception: Certificate not imported, alias <mykey> already exists
    '
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by com.twitter.jvm.Hotspot (file:/app/px/src/stirling/source_connectors/socket_tracer/testing/containers/thriftmux/libs/util_util-jvm_src_main_scala_com_twitter_jvm_jvm.jar) to field sun.management.ManagementFactoryHelper.jvm
    WARNING: Please consider reporting this to the maintainers of com.twitter.jvm.Hotspot
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    Jun 17, 2022 6:01:49 AM com.twitter.finagle.Init$ $anonfun$once$1
    INFO: Finagle version ? (rev=?) built at ?
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.FixedInetResolver$ factory$lzycompute
    INFO: Successfully loaded a fixed inet resolver: com.twitter.finagle.FixedInetResolver@...726
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver inetResolver$lzycompute
    INFO: Using default inet resolver
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.InetResolver$ factory$lzycompute
    INFO: Successfully loaded an inet resolver: com.twitter.finagle.InetResolver@...
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[inet] = com.twitter.finagle.InetResolver(com.twitter.finagle.InetResolver@...)
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[fixedinet] = com.twitter.finagle.FixedInetResolver(com.twitter.finagle.FixedInetResolver@...)
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[neg] = com.twitter.finagle.NegResolver$(com.twitter.finagle.NegResolver$@...)
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[nil] = com.twitter.finagle.NilResolver$(com.twitter.finagle.NilResolver$@...)
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[fail] = com.twitter.finagle.FailResolver$(com.twitter.finagle.FailResolver$@...)
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[local] = com.twitter.finagle.mdns.LocalResolver(com.twitter.finagle.mdns.LocalResolver@...)
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[mdns] = com.twitter.finagle.mdns.MDNSResolver(com.twitter.finagle.mdns.MDNSResolver@...)
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[zk] = com.twitter.finagle.zookeeper.ZkResolver(com.twitter.finagle.zookeeper.ZkResolver@...)
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[zk2] = com.twitter.finagle.serverset2.Zk2Resolver(com.twitter.finagle.serverset2.Zk2Resolver@...)
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[twitter] = com.twitter.server.TwitterResolver(com.twitter.server.TwitterResolver@...)
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[serverset] = com.twitter.server.ServerSetResolver(com.twitter.server.ServerSetResolver@...)
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[flag] = com.twitter.server.FlagResolver(com.twitter.server.FlagResolver@...)
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3
    INFO: Resolver[configbusdns] = com.twitter.wilyns.common.ConfigBusDnsSrvResolver(com.twitter.wilyns.common.ConfigBusDnsSrvResolver@...)
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.tracing.DefaultTracer$ $anonfun$new$1
    INFO: Tracer: com.twitter.finagle.tracing.TwitterTracer
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.tracing.DefaultTracer$ $anonfun$new$1
    INFO: Tracer: com.twitter.server.tracer.ServiceNamingTracer
    Jun 17, 2022 6:01:50 AM com.twitter.finagle.tracing.DefaultTracer$ $anonfun$new$1
    INFO: Tracer: com.twitter.server.tracer.PerfTracer
    I20220617 06:01:51.475641 257278 conn_tracker.cc:466] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=-:-1 role=kRoleUnknown protocol=kProtocolUnknown New connection tracker
    I20220617 06:01:51.475921 257278 conn_tracker.cc:476] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=127.0.0.1:36808 role=kRoleUnknown protocol=kProtocolUnknown RemoteAddr updated 127.0.0.1, reason=[Inferred from conn_open.]
    I20220617 06:01:51.476263 257278 conn_tracker.cc:496] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=127.0.0.1:36808 role=kRoleUnknown protocol=kProtocolUnknown Role updated kRoleUnknown -> kRoleServer, reason=[Inferred from conn_open.]]
    I20220617 06:01:51.476702 257278 conn_tracker.cc:110] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolUnknown conn_open: [type=kConnOpen ts=81207401091378 conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] source_fn=kSyscallAccept [addr=[family=2 addr=127.0.0.1 port=51343]]]
    I20220617 06:01:51.701884 257278 conn_tracker.cc:524] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux Protocol changed: kProtocolUnknown->kProtocolMux, reason=[inferred from data_event]
    I20220617 06:01:51.702504 257278 conn_tracker.cc:548] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux SSL state changed: false->true, reason=[inferred from data_event]
    I20220617 06:01:51.702757 257278 conn_tracker.cc:153] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux Data event: attr:[[ts=81207594508977 conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] protocol=kProtocolMux role=kRoleServer dir=kIngress ssl=true source_fn=kSSLRead pos=220 size=19 buf_size=19]] msg_size:19 msg:[\x00\x00\x00\x0F\x7F\x00\x00\x01tinit check]
    I20220617 06:01:51.703022 257278 conn_tracker.cc:153] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux Data event: attr:[[ts=81207595287012 conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] protocol=kProtocolMux role=kRoleServer dir=kEgress ssl=true source_fn=kSSLWrite pos=0 size=19 buf_size=19]] msg_size:19 msg:[\x00\x00\x00\x0F\x7F\x00\x00\x01tinit check]
    I20220617 06:01:51.703238 257278 conn_tracker.cc:153] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux Data event: attr:[[ts=81207606242826 conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] protocol=kProtocolMux role=kRoleServer dir=kIngress ssl=true source_fn=kSSLRead pos=239 size=46 buf_size=46]] msg_size:46 msg:[\x00\x00\x00*D\x00\x00\x01\x00\x01\x00\x00\x00\x0Amux-framer\x00\x00\x00\x04\x7F\xFF\xFF\xFF\x00\x00\x00\x03tls\x00\x00\x00\x03off]
    I20220617 06:01:51.703460 257278 conn_tracker.cc:153] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux Data event: attr:[[ts=81207606631398 conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] protocol=kProtocolMux role=kRoleServer dir=kEgress ssl=true source_fn=kSSLWrite pos=19 size=46 buf_size=46]] msg_size:46 msg:[\x00\x00\x00*\xBC\x00\x00\x01\x00\x01\x00\x00\x00\x0Amux-framer\x00\x00\x00\x04\x7F\xFF\xFF\xFF\x00\x00\x00\x03tls\x00\x00\x00\x03off]
    I20220617 06:01:51.703668 257278 conn_tracker.cc:153] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux Data event: attr:[[ts=81207668299521 conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] protocol=kProtocolMux role=kRoleServer dir=kIngress ssl=true source_fn=kSSLRead pos=285 size=8 buf_size=8]] msg_size:8 msg:[\x00\x00\x00\x04A\x00\x00\x01]
    I20220617 06:01:51.703876 257278 conn_tracker.cc:153] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux Data event: attr:[[ts=81207668669626 conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] protocol=kProtocolMux role=kRoleServer dir=kEgress ssl=true source_fn=kSSLWrite pos=65 size=8 buf_size=8]] msg_size:8 msg:[\x00\x00\x00\x04\xBF\x00\x00\x01]
    I20220617 06:01:51.704073 257278 conn_tracker.cc:153] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux Data event: attr:[[ts=81207673593545 conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] protocol=kProtocolMux role=kRoleServer dir=kIngress ssl=true source_fn=kSSLRead pos=293 size=204 buf_size=204]] msg_size:204 msg:[\x00\x00\x00\xC8\x02\x00\x00\x02\x00\x03\x00(com.twitter.finagle.tracing.TraceContext\x00 \x1A\xFDf1.\xFE\xFC\x0A\x1A\xFDf1.\xFE\xFC\x0A\x1A\xFDf1.\xFE\xFC\x0A\x00\x00\x00\x00\x00\x00\x00\x02\x00\x1Bcom.twitter.finagle.Retries\x00\x04\x00\x00\x00\x00\x00\x1Ccom.twitter.finagle.Deadline\x00\x10\x16\xF9S\x1A\x94\xEA5\x00\x7F\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x00\x00\x00\x00\x80\x01\x00\x01\x00\x00\x00\x05query\x00\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x06String\x00]
    I20220617 06:01:51.704331 257278 conn_tracker.cc:153] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kCollecting remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux Data event: attr:[[ts=81207675198780 conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] protocol=kProtocolMux role=kRoleServer dir=kEgress ssl=true source_fn=kSSLWrite pos=73 size=48 buf_size=48]] msg_size:48 msg:[\x00\x00\x00,\xFE\x00\x00\x02\x00\x00\x00\x80\x01\x00\x02\x00\x00\x00\x05query\x00\x00\x00\x00\x0B\x00\x00\x00\x00\x00\x0CStringString\x00]
    I20220617 06:01:51.704960 257278 conn_tracker.h:270] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux req_frames=4 resp_frames=4
    I20220617 06:01:51.705171 257278 conn_tracker.h:277] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux records=4
    I20220617 06:01:51.820039 257278 conn_tracker.h:270] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux req_frames=0 resp_frames=0
    I20220617 06:01:51.820343 257278 conn_tracker.h:277] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux records=0
    I20220617 06:01:51.939800 257278 conn_tracker.h:270] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux req_frames=0 resp_frames=0
    I20220617 06:01:51.940104 257278 conn_tracker.h:277] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux records=0
    I20220617 06:01:52.054641 257278 conn_tracker.h:270] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux req_frames=0 resp_frames=0
    I20220617 06:01:52.054699 257278 conn_tracker.h:277] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux records=0
    I20220617 06:01:52.091889 257027 thriftmux_openssl_trace_bpf_test.cc:112] thriftmux client command output: '257328
    06:01:49.573 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
    06:01:49.576 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
    06:01:49.576 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
    06:01:49.592 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
    06:01:49.593 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 11
    06:01:49.594 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
    06:01:49.594 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
    06:01:49.595 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.storeFence: available
    06:01:49.595 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
    06:01:49.596 [main] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: unavailable: Reflective setAccessible(true) disabled
    06:01:49.596 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
    06:01:49.597 [main] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable: class io.netty.util.internal.PlatformDependent0$7 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @2611b9a3
    06:01:49.598 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): unavailable
    06:01:49.598 [main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available
    06:01:49.599 [main] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 2084569088 bytes (maybe)
    06:01:49.599 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
    06:01:49.599 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
    06:01:49.601 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: -1 bytes
    06:01:49.601 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: -1
    06:01:49.602 [main] DEBUG io.netty.util.internal.CleanerJava9 - java.nio.ByteBuffer.cleaner(): available
    06:01:49.602 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
    06:01:49.604 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 8
    06:01:49.604 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 8
    06:01:49.605 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
    06:01:49.605 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 7
    06:01:49.605 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 1048576
    06:01:49.605 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
    06:01:49.605 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
    06:01:49.606 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
    06:01:49.606 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
    06:01:49.606 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimIntervalMillis: 0
    06:01:49.606 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: false
    06:01:49.607 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
    06:01:49.611 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
    06:01:49.611 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
    06:01:49.630 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.workdir: /tmp (io.netty.tmpdir)
    06:01:49.630 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.deleteLibAfterLoading: true
    06:01:49.631 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.tryPatchShadedId: true
    06:01:49.631 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.detectNativeLibraryDuplicates: true
    06:01:49.643 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - Successfully loaded the library /tmp/libnetty_transport_native_epoll_x86_649787055381890454168.so
    06:01:49.646 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
    06:01:49.647 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
    06:01:49.648 [main] DEBUG io.netty.util.NetUtilInitializations - Loopback interface: lo (lo, 127.0.0.1)
    06:01:49.649 [main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 4096
    06:01:50.322 [main] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: [email protected]
    06:01:50.330 [main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
    06:01:50.877 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 4
    06:01:50.931 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 257344 (auto-detected)
    06:01:50.933 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 02:42:ac:ff:fe:11:00:02 (auto-detected)
    06:01:50.956 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
    06:01:50.956 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
    06:01:50.956 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
    06:01:51.043 [finagle/netty4-1-1] DEBUG io.netty.util.internal.NativeLibraryLoader - Successfully loaded the library /tmp/libnetty_tcnative_linux_x86_6418059860008190110923.so
    06:01:51.044 [finagle/netty4-1-1] DEBUG io.netty.util.internal.NativeLibraryLoader - Loaded library with name 'netty_tcnative_linux_x86_64'
    06:01:51.044 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - Initialize netty-tcnative using engine: 'default'
    06:01:51.044 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - netty-tcnative using native library: BoringSSL
    06:01:51.148 [finagle/netty4-1-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
    06:01:51.148 [finagle/netty4-1-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
    06:01:51.149 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: [email protected]
    06:01:51.166 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: [email protected]
    06:01:51.170 [finagle/netty4-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: disabled
    06:01:51.170 [finagle/netty4-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: disabled
    06:01:51.170 [finagle/netty4-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.chunkSize: disabled
    06:01:51.171 [finagle/netty4-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.blocking: disabled
    06:01:51.182 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 => ECDHE-ECDSA-AES128-GCM-SHA256
    06:01:51.183 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 => ECDHE-ECDSA-AES128-GCM-SHA256
    06:01:51.183 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 => ECDHE-RSA-AES128-GCM-SHA256
    06:01:51.183 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_AES_128_GCM_SHA256 => ECDHE-RSA-AES128-GCM-SHA256
    06:01:51.184 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 => ECDHE-ECDSA-AES256-GCM-SHA384
    06:01:51.184 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 => ECDHE-ECDSA-AES256-GCM-SHA384
    06:01:51.185 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 => ECDHE-RSA-AES256-GCM-SHA384
    06:01:51.185 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_AES_256_GCM_SHA384 => ECDHE-RSA-AES256-GCM-SHA384
    06:01:51.185 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-ECDSA-CHACHA20-POLY1305
    06:01:51.185 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-ECDSA-CHACHA20-POLY1305
    06:01:51.186 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-RSA-CHACHA20-POLY1305
    06:01:51.186 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-RSA-CHACHA20-POLY1305
    06:01:51.186 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-PSK-CHACHA20-POLY1305
    06:01:51.186 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-PSK-CHACHA20-POLY1305
    06:01:51.187 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA => ECDHE-ECDSA-AES128-SHA
    06:01:51.187 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_AES_128_CBC_SHA => ECDHE-ECDSA-AES128-SHA
    06:01:51.188 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA => ECDHE-RSA-AES128-SHA
    06:01:51.188 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_AES_128_CBC_SHA => ECDHE-RSA-AES128-SHA
    06:01:51.188 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA => ECDHE-PSK-AES128-CBC-SHA
    06:01:51.188 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_PSK_WITH_AES_128_CBC_SHA => ECDHE-PSK-AES128-CBC-SHA
    06:01:51.189 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA => ECDHE-ECDSA-AES256-SHA
    06:01:51.189 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_AES_256_CBC_SHA => ECDHE-ECDSA-AES256-SHA
    06:01:51.190 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA => ECDHE-RSA-AES256-SHA
    06:01:51.190 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_AES_256_CBC_SHA => ECDHE-RSA-AES256-SHA
    06:01:51.190 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA => ECDHE-PSK-AES256-CBC-SHA
    06:01:51.190 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_PSK_WITH_AES_256_CBC_SHA => ECDHE-PSK-AES256-CBC-SHA
    06:01:51.191 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_AES_128_GCM_SHA256 => AES128-GCM-SHA256
    06:01:51.191 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_AES_128_GCM_SHA256 => AES128-GCM-SHA256
    06:01:51.191 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_AES_256_GCM_SHA384 => AES256-GCM-SHA384
    06:01:51.191 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_AES_256_GCM_SHA384 => AES256-GCM-SHA384
    06:01:51.192 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_AES_128_CBC_SHA => AES128-SHA
    06:01:51.192 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_AES_128_CBC_SHA => AES128-SHA
    06:01:51.192 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_PSK_WITH_AES_128_CBC_SHA => PSK-AES128-CBC-SHA
    06:01:51.192 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_PSK_WITH_AES_128_CBC_SHA => PSK-AES128-CBC-SHA
    06:01:51.193 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_AES_256_CBC_SHA => AES256-SHA
    06:01:51.193 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_AES_256_CBC_SHA => AES256-SHA
    06:01:51.193 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_PSK_WITH_AES_256_CBC_SHA => PSK-AES256-CBC-SHA
    06:01:51.193 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_PSK_WITH_AES_256_CBC_SHA => PSK-AES256-CBC-SHA
    06:01:51.194 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_3DES_EDE_CBC_SHA => DES-CBC3-SHA
    06:01:51.194 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_3DES_EDE_CBC_SHA => DES-CBC3-SHA
    06:01:51.195 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - Supported protocols (OpenSSL): [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2, TLSv1.3]
    06:01:51.195 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - Default cipher suites (OpenSSL): [TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256]
    06:01:51.313 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: [email protected]
    06:01:51.383 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: [email protected]
    06:01:51.580 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.SslHandler - [id: 0x0a592861, L:/127.0.0.1:36808 - R:localhost/127.0.0.1:8080] HANDSHAKEN: protocol:TLSv1.3 cipher suite:TLS_AES_128_GCM_SHA256
    StringString
    '
    I20220617 06:01:52.092134 257027 thriftmux_openssl_trace_bpf_test.cc:121] Client PID: 257328
    I20220617 06:01:52.169785 257278 conn_tracker.cc:139] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux conn_close: [type=kConnClose ts=81208079159863 conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] source_fn=kSyscallClose [wr_bytes=121 rd_bytes=497]]
    I20220617 06:01:52.169863 257278 conn_tracker.cc:598] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux Marked for death, countdown=3
    I20220617 06:01:52.169898 257278 conn_tracker.cc:198] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux ConnStats timestamp=81208079254680 wr=121 rd=497 close=2
    I20220617 06:01:52.170377 257278 conn_tracker.h:270] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux req_frames=0 resp_frames=0
    I20220617 06:01:52.170421 257278 conn_tracker.h:277] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux records=0
    I20220617 06:01:52.170449 257278 conn_tracker.cc:796] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux Death countdown=2
    I20220617 06:01:52.282838 257278 conn_tracker.h:270] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux req_frames=0 resp_frames=0
    I20220617 06:01:52.282907 257278 conn_tracker.h:277] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux records=0
    I20220617 06:01:52.282935 257278 conn_tracker.cc:796] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux Death countdown=1
    I20220617 06:01:54.176921 257027 container_runner.cc:59] docker rm -f thriftmux_server_81183764575614
    I20220617 06:01:54.383963 257027 conn_tracker.cc:78] conn_id=[upid=257101:8118390 fd=510 gen=81207401090146] state=kTransferring remote_addr=127.0.0.1:36808 role=kRoleServer protocol=kProtocolMux Being destroyed
    [       OK ] OpenSSLTraceRawFptrsTest/0.mtls_thriftmux_client (28184 ms)
    [----------] 1 test from OpenSSLTraceRawFptrsTest/0 (28184 ms total)
    
    [----------] Global test environment tear-down
    [==========] 1 test from 1 test suite ran. (28185 ms total)
    [  PASSED  ] 1 test.
    I20220617 06:01:54.420465 257027 env.cc:51] Shutting down
    

    Todo

    • [ ] Determine why RawFptrManager broke when rebasing #446 (what this branch became)
    • [ ] Update to use a real Finagle release once its netty and netty-tcnative versions are upgraded
    • [ ] Remove byteman and have Pixie search for libnetty_tcnative_x86.so when it has a unique file path (e.g. libnetty_tcnative_linux_x86_648014508084024950371.so)
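    The last Todo item amounts to matching the library name with a randomized suffix, since netty-tcnative extracts its native `.so` to a temp path like the one seen in the log above. A minimal sketch of that lookup (the directory and helper name are illustrative assumptions, not Pixie's actual implementation):

    ```go
    package main

    import (
    	"fmt"
    	"path/filepath"
    )

    // findTcnative returns shared objects whose names match the
    // netty-tcnative pattern with a randomized suffix, e.g.
    // libnetty_tcnative_linux_x86_6418059860008190110923.so.
    func findTcnative(dir string) ([]string, error) {
    	return filepath.Glob(filepath.Join(dir, "libnetty_tcnative_linux_x86_64*.so"))
    }

    func main() {
    	matches, err := findTcnative("/tmp")
    	if err != nil {
    		fmt.Println("glob error:", err)
    		return
    	}
    	for _, m := range matches {
    		fmt.Println(m)
    	}
    }
    ```

    A real implementation would also need to handle multiple matches (several JVMs extracting the library concurrently) by disambiguating on the owning process.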
    opened by ddelnano