Instant Kubernetes-Native Application Observability

Overview



What is Pixie?


Pixie gives you instant visibility into your Kubernetes applications, with access to metrics, events, traces, and logs without any code changes.

We're building Pixie for broad use by the end of 2020. If you are interested, feel free to try our community beta and join our community on Slack.



Quick Start

Review Pixie's requirements to make sure that your Kubernetes cluster is supported.

Signup

Visit our product page and sign up with your Google account.

Install CLI

Run the command below:

bash -c "$(curl -fsSL https://withpixie.ai/install.sh)"

Or see our Installation Docs to install Pixie using Docker, Debian, RPM or with the latest binary.

(optional) Set up a sandbox

If you don't already have a K8s cluster available, you can use Minikube to set-up a local environment:

  • On Linux, run minikube start --cpus=4 --memory=6000 --driver=kvm2 -p=<cluster-name>. The default docker driver is not currently supported, so be sure to use the kvm2 driver.

  • On Mac, run minikube start --cpus=4 --memory=6000 -p=<cluster-name>.

More detailed instructions are available here.

Start a demo app.

🚀 Deploy Pixie

Use the CLI to deploy the Pixie Platform in your K8s cluster by running:

px deploy

Alternatively, you can deploy with YAML or Helm.


Check out our install guides and walkthrough videos for alternate install schemes.

Get Instant Auto-Telemetry

Run scripts with px CLI



Service SLA:

px run px/service_stats


Node health:

px run px/node_stats


MySQL metrics:

px run px/mysql_stats


Explore more scripts by running:

px scripts list


Check out our pxl_scripts repo for more examples.


View machine-generated dashboards with Live Views


The Pixie Platform auto-generates "Live View" dashboards to visualize script results.

You can view them by clicking the URLs printed by px or by visiting:

https://work.withpixie.ai/live


Pipe Pixie dust into any tool


You can transform and pipe your script results into any other system or workflow by consuming px results with tools like jq.

Example with http_data:

px run px/http_data -o json | jq -r .

More examples here
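Because each record arrives as a JSON object, the same filtering jq does can be scripted in any language. As a sketch, here is how a hypothetical JSON-lines result could be filtered in Python (the `latency_ms` and `req_path` field names are invented for illustration; real columns depend on the script you run):

```python
import json

def latency_over(lines, threshold_ms):
    """Parse JSON-lines records and keep those whose latency exceeds a threshold."""
    records = [json.loads(line) for line in lines]
    return [r for r in records if r.get("latency_ms", 0) > threshold_ms]

# Made-up records shaped like `px run ... -o json` output:
sample = [
    '{"req_path": "/api/v1/items", "latency_ms": 250}',
    '{"req_path": "/healthz", "latency_ms": 3}',
]
slow = latency_over(sample, 100)  # keeps only the /api/v1/items record
```

The equivalent jq filter would be `select(.latency_ms > 100)`.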


To see more script examples and learn how to write your own, check out our docs.


Contributing

We're excited to have you contribute to Pixie. Our community has adopted the Contributor Covenant as its code of conduct, and we expect all participants to adhere to it. Please report any violations to [email protected]. All code contributions require the Contributor License Agreement. The CLA can be signed when creating your first PR.

There are many ways to contribute to Pixie:

  • Bugs: Something not working as expected? Send a bug report.
  • Features: Need new Pixie capabilities? Send a feature request.
  • Views & Scripts Requests: Need help building a Live View or PxL script? Send a live view request.
  • PxL Scripts: PxL scripts are used to extend Pixie functionality. They are an excellent way to contribute to golden debugging workflows. Look here for more information.
  • Pixienaut Community: Interested in becoming a Pixienaut and in helping shape our community? Apply here.

Open Source

Along with building Pixie as a freemium SaaS product, contributing open and accessible projects to the broader developer community is integral to our roadmap.

We plan to contribute in two ways:

  • Open Sourced Pixie Platform Primitives: We plan to open-source components of the Pixie Platform which can be independently useful to developers after our Beta. These include our Community PxL Scripts, the Pixie CLI, eBPF collectors, etc. If you are interested in contributing during our Beta, email us.
  • Unlimited Pixie Community Access: Our Pixie Community product is a completely free offering with all core features of the Pixie developer experience. We will invest in this offering for the long term to give developers across the world an easy and zero cost way to use Pixie.

Under the Hood

Three fundamental innovations enable Pixie's magical developer experience:

Progressive Instrumentation: Pixie Edge Modules (“PEMs”) collect full-body request traces (via eBPF), system metrics, and K8s events without code changes and at less than 5% overhead. Custom metrics, traces, and logs can be integrated into the Pixie Command Module.

In-Cluster Edge Compute: The Pixie Command Module is deployed in your K8s cluster to isolate data storage and computation within your environment for drastically better intelligence, performance & security.

Command Driven Interfaces: Programmatically access data via the Pixie CLI and Pixie UI which are designed ground-up to allow you to run analysis & debug scenarios faster than any other developer tool.

For more information on the Pixie Platform's architecture, check out our docs or overview deck.

Resources

About Us

Founded in late 2018, we are a San Francisco-based stealth machine intelligence startup. Our north star is to build a new generation of intelligent products which empower developers to engineer the future.

We're heads down building Pixie and excited to share it broadly with the community later this year. If you're interested in learning more about us or in our current job openings, we'd love to hear from you.

License

Pixie Community is the free offering of Pixie's proprietary SaaS product catalogue.

Our PxL Scripts are licensed under Apache License, Version 2.0.

Other Pixie Platform components such as the Pixie CLI and eBPF-based data collectors will also be licensed under the Apache License, Version 2.0. Contribution of these components is planned for Oct 2020.

Issues
  • Self-Hosted Pixie Install Script

    Self-Hosted Pixie Install Script

    Is your feature request related to a problem? Please describe. We would like to have an install experience for the self-hosted version of Pixie that is as easy to use as the one hosted on withpixie.ai.

    Additional context Our team has been busy at work this month open sourcing Pixie's source code, docs, website, and other assets. We are also actively applying to be a CNCF sandbox project!

    One of our last remaining items is to publish an install script to deploy a self-hosted version of Pixie.

    Who offers a hosted version of Pixie?

    New Relic currently offers a 100% free hosted version of Pixie Cloud. This hosting has no contingencies and will be offered indefinitely to the Pixie Community. All the code used for hosting is open source, including our production manifest files.

    What will the Self-Hosted install script do?

    The Self-Hosted install script will deploy Pixie Cloud so that you can use Pixie without any external dependencies. This is the exact version of Pixie Cloud we deploy, so it will behave exactly like the hosted version, but it will require your own management and configuration.

    What is the timeline? 

    Good question. :) We had planned to open source this script by 5/4. Unfortunately, we didn't make it. We need more time to ensure that installing Pixie Cloud with the deploy script is just as easy as installing the hosted version of Pixie (in < 2 minutes!).

    But I really want to run a Self-Hosted Pixie...now!

    Technically you can build and run a self-hosted Pixie using Skaffold. Check out:

    https://github.com/pixie-labs/pixie/blob/main/skaffold/skaffold_cloud.yaml
    https://github.com/pixie-labs/pixie/tree/main/k8s/cloud
    https://github.com/pixie-labs/pixie/tree/main/k8s/cloud_deps

    These directions are not fully documented and the team is choosing to focus on quickly delivering the self-hosted install script. We'll constantly be iterating on the documentation to make the project more open source friendly.

    opened by zasgar 22
  • google login hangs

    google login hangs

    Trying the pixie online installer. After signing up with google, login hangs forever with:

    Authenticating
    Logging in...
    

    To Reproduce Steps to reproduce the behavior:

    1. Go to signup, use google
    2. Login with google

    Expected behavior To be logged in

    kind/bug priority/backlog triage/not-reproducible 
    opened by Morriz 17
  • [Doc issue] no ingress installed so dev_dns_updater did nothing

    [Doc issue] no ingress installed so dev_dns_updater did nothing

    Describe the bug I've been following the document to deploy Pixie Cloud; the setup-dns section is supposed to update /etc/hosts if there are any ingress rules in the K8s cluster. But there weren't any!

    ➜  pixie git:(main) ✗ kubectl get ing
    No resources found in default namespace.
    ➜  pixie git:(main) ✗ kubectl get ing -n plc
    No resources found in plc namespace.
    

    And, of course, it doesn't change anything:

    ➜  pixie git:(main) ✗ ./dev_dns_updater --domain-name="dev.withpixie.dev"  --kubeconfig=$HOME/.kube/config --n=plc
    INFO[0000] DNS Entries                                   entries="dev.withpixie.dev, work.dev.withpixie.dev, segment.dev.withpixie.dev, docs.dev.withpixie.dev" service=cloud-proxy-service
    INFO[0000] DNS Entries                                   entries=cloud.dev.withpixie.dev service=vzconn-service
    

    It didn't change the /etc/hosts file!

    To Reproduce

    Expected behavior Should update /etc/hosts and we could visit dev.withpixie.dev in browser.
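    For context, the change dev_dns_updater is expected to make is mechanical: append or refresh one hosts entry per dev domain, pointing at the cluster's ingress IP. A minimal Python sketch of that merge (the IP and domain names below are illustrative):

```python
def merge_hosts(hosts_text, ip, domains):
    """Return hosts_text with an 'ip domain' line for each domain,
    replacing any existing line whose hostname matches.
    (Simplification: assumes one hostname per line.)"""
    kept = [line for line in hosts_text.splitlines()
            if not (line.split() and line.split()[-1] in domains)]
    kept += [f"{ip} {d}" for d in domains]
    return "\n".join(kept) + "\n"

updated = merge_hosts("127.0.0.1 localhost\n", "192.168.49.2",
                      ["dev.withpixie.dev", "work.dev.withpixie.dev"])
```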

    Screenshots

    Logs

    App information (please complete the following information):

    • Pixie version: master branch
    • K8s cluster version: minikube on macOS 10.15.7 k8s version v1.22.2

    Additional context

    opened by Colstuwjx 12
  • Compile error, missing HTTP Tables.

    Compile error, missing HTTP Tables.

    Describe the bug Cannot run any scripts; the compiler reports that the http_events table is not found.

    Script compilation failed: L222 : C22  Table 'http_events' not found.\n
    

    To Reproduce Steps to reproduce the behavior: Install fresh version of Pixie on Minikube Cluster

    Expected behavior Pixie scripts to execute

    Screenshots (attached)

    Logs Please attach the logs by running the following command:

    ./px collect-logs (See Zip File) 
    

    pixie_logs_20210505024739.zip

    App information (please complete the following information):

    • Pixie version: 0.5.3+Distribution.0ff53f6.20210503183144.1
    • K8s cluster version: v1.20.2
    opened by WarpWing 12
  • gRPC-c data parsing

    gRPC-c data parsing

    Stirling now registers on perf buffers where the gRPC-c eBPF module writes data to. There are 3 buffers:

    1. gRPC events
    2. gRPC headers
    3. close events

    The logic of handling gRPC sessions works for Golang events. This logic is now used for gRPC-c events as well. The data that the gRPC-c eBPF module passes to the user-space differs from the data that the Golang gRPC eBPF module passes to the user-space. This PR is basically an abstraction layer that "translates" gRPC-c eBPF events to the known format of Golang gRPC events.

    gRPC-c events are still not enabled; they will be enabled in the next PR, where the needed probes will be attached by the UProbeManager. However, the gRPC-c eBPF program is now compiled, because the code can only find the perf buffers if they exist.
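    As a picture of what such a translation layer does, here is a toy Python version: take an event in a gRPC-c-style layout and re-emit it in the layout the existing Golang-gRPC handling expects (all field names here are invented for illustration; the real structs live in Stirling's C++ sources):

```python
def translate_grpc_c_event(ev):
    """Map a hypothetical gRPC-c event onto a Golang-gRPC-style event shape."""
    return {
        "stream_id": ev["sid"],          # different field names, same meaning
        "timestamp_ns": ev["ts"],
        "payload": ev["data"],
        "end_stream": bool(ev.get("close", 0)),  # close events fold into a flag
    }

golang_style = translate_grpc_c_event(
    {"sid": 7, "ts": 1234, "data": b"\x00", "close": 1})
```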

    opened by orishuss 9
  • px deploy failed flatcar linux kubernetes cluster

    px deploy failed flatcar linux kubernetes cluster

    Describe the bug px deploy fails.

    To Reproduce Steps to reproduce the behavior:

    1. Run px deploy
    2. See error fatal failed to fetch vizier versions error=open /home/core/.pixie/auth.json: no such file or directory

    Expected behavior Pixie should be running properly.


    Logs Please attach the logs by running the following command: ./px collect-logs

    
    **App information (please complete the following information):**
    - Pixie version:
    - K8s cluster version v1.19.2
    
    fatal failed to fetch vizier versions error=open /home/core/.pixie/auth.json: no such file or directory
    
    Please help
    
    opened by 4ss3g4f 9
  • perf profiler BPF program fails to install

    perf profiler BPF program fails to install

    Describe the bug Error: "Compiler error on line 324, column 22: Table 'stack_traces.beta' not found."

    Environment:

    • kubeadm installed cluster running on baremetal ubuntu 20.04 kernel 5.15

    To Reproduce Steps to reproduce the behavior:

    1. deploy px deploy
    2. click on script px/pod or px/node

    Expected behavior Get pod / node dashboard

    Screenshots (attached)

    Logs Please attach the logs by running the following command:

    ./px collect-logs
    

    pixie_logs_20220103200225.zip

    App information (please complete the following information):

    • Pixie version 0.9.16
    • K8s cluster version 1.21.5
    • Node Kernel version 5.15
    • Browser version ff 95.0.1
    opened by knfoo 8
  • Add MatchRegexRule UDF

    Add MatchRegexRule UDF

    Add MatchRegexRule UDF for use in security PxL scripts.

    • Add a UDF, MatchRegexRule, that takes an encoded json of regular expression rules, and string. It returns the first rule that matches the string or empty string if no match.
    • Add unit tests for MatchRegexRule.
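    The described contract can be sketched in a few lines of Python: given a JSON encoding of named regex rules and an input string, return the name of the first rule that matches, or the empty string (this mirrors the stated semantics only; the actual UDF is implemented in C++):

```python
import json
import re

def match_regex_rule(rules_json, s):
    """Return the first rule (in JSON order) whose pattern matches s, else ''."""
    for name, pattern in json.loads(rules_json).items():
        if re.search(pattern, s):
            return name
    return ""

# Hypothetical security rules, encoded the way the UDF expects its argument:
rules = json.dumps({"ssh_probe": r"port 22", "sqli": r"SELECT\s+.*\s+FROM"})
```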
    opened by elaguerta 7
  • Add more detailed instructions to the dev docs

    Add more detailed instructions to the dev docs

    Improve the DEVELOPMENT.md documentation.

    • Include prerequisites.
    • Add an example to Vizier section of running unit tests.
    • Link to instructions for spinning up a Minikube cluster to deploy onto.
    • Clarify the differences between Vizier and Pixie Cloud.
    • Add workaround instructions for failed px deploy.
    • Note when and where various commands in the instructions should be run and explain what they do in greater detail.
    opened by hmstepanek 7
  • Enhance openssl tracing to fallback to function pointer addrs when dlsym fails

    Enhance openssl tracing to fallback to function pointer addrs when dlsym fails

    This is the next step for #407.

    The netty-tcnative shared library does not use a dynamic OpenSSL_version_num symbol. This means the symbol ends up in the symtab rather than the dynsym. The former is private and incompatible with dlsym, while the latter is public and is supported by the existing openssl tracing. See this slack thread for more background.
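    The symtab/dynsym distinction is easy to see with binutils: plain `nm` reads the (private) symtab, while `nm -D` reads the dynamic symbol table that dlsym searches. A quick demo, assuming gcc and binutils are available:

```shell
# Build a shared library with one exported and one hidden function.
cat > demo.c <<'EOF'
__attribute__((visibility("default"))) int exported_fn(void) { return 1; }
__attribute__((visibility("hidden")))  int hidden_fn(void)   { return 2; }
EOF
gcc -shared -fPIC -o libdemo.so demo.c

nm    libdemo.so | grep _fn   # symtab: both functions appear
nm -D libdemo.so | grep _fn   # dynsym: only exported_fn appears
```

A symbol that exists only in the symtab (like hidden_fn here, or netty-tcnative's case) cannot be resolved with dlsym, hence the fallback to raw function-pointer addresses.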

    This PR needs some work, but it's far enough along that I wanted to seek feedback on my approach.

    Testing

    • The openssl_trace_bpf_test tests cases pass
      • [x] nginx openssl 1.1.1
      • [x] nginx openssl 1.1.0
      • [x] Node 12.3.1
      • [x] Node 14.18.1

    Assuming this approach is how we want to proceed, the following will need to be addressed

    Todo

    • [x] Parameterized openssl_trace_bpf_test to test the dlsym and RawFptrManager implementations of openssl tracing
    • [x] ~Write unit tests for RawFptrManager~ -- I think our bpf test is sufficient for now and the scaffolding required to test this would be quite a bit of work so I'm holding off on this for now
    • [x] ~Clean up the code added to ProcParser (much of it is duplicated with another function)~ -- not sure this is necessary
    • [x] ~Make ProcessMap more efficient (see TODO comment about using octal numbers)~ -- wasn't sure if doing so would be more efficient. Open to any feedback you have here
    • [x] Fix remaining openssl_trace_bpf_test test cases. They currently cause seg faults. Each test case passes individually, but when more than one case is enabled it results in seg faults (pastebin)
    opened by ddelnano 6
  • Give PEMs general tolerations so that they can deploy on tainted nodes.

    Give PEMs general tolerations so that they can deploy on tainted nodes.

    Is your feature request related to a problem? Please describe. Some clusters use taints and tolerations for workload isolation or other scheduling concerns. This can prevent the vizier-pem pods from scheduling to some nodes.

    Describe the solution you'd like I'd like a flag to enable the vizier-pems to schedule everywhere. Something like a --tolerate flag on the deploy subcommand that creates the vizier-pem daemonset with these tolerations:

          - effect: NoSchedule
            operator: Exists
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoExecute
            operator: Exists
    

    Describe alternatives you've considered I've patched the daemonset after deployment to get around this. If this is deployed as a permanent fixture on the cluster, this would also be handled elsewhere.
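    For reference, a sketch of where those tolerations would land in the DaemonSet spec (namespace and resource names vary by install; this shows the shape, not an official manifest):

```yaml
# Fragment of the vizier-pem DaemonSet pod template:
spec:
  template:
    spec:
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
```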

    kind/feature 
    opened by elsonrodriguez 6
  • Add support for tracing tls traffic from finagle (netty based) services

    Add support for tracing tls traffic from finagle (netty based) services

    This is a refresh of #463 now that an official finagle release exists. This PR also includes the changes from #562 so that will need to be merged first.

    Testing

    New test verifies that TLS tracing is successful. Abbreviated output (container start-up and netty DEBUG logging omitted):

    [email protected]:/vagrant$ ./scripts/sudo_bazel_run.sh -c dbg src/stirling/source_connectors/socket_tracer:thriftmux_openssl_trace_bpf_test
    INFO: Analyzed target //src/stirling/source_connectors/socket_tracer:thriftmux_openssl_trace_bpf_test (2 packages loaded, 329 targets configured).
    INFO: Found 1 target...
    INFO: Elapsed time: 23.604s, Critical Path: 12.10s
    INFO: Build completed successfully, 3 total actions
    Executing tests from //src/stirling/source_connectors/socket_tracer:thriftmux_openssl_trace_bpf_test
    [==========] Running 1 test from 1 test suite.
    [ RUN      ] OpenSSLTraceRawFptrsTest/0.mtls_thriftmux_client
    INFO: Finagle version 22.7.0 (rev=4d10525ef4fe89732bcfec6286fac7fe426e9082) built at 20220728-183850
    ...
PSK-AES256-CBC-SHA 06:45:22.296 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_PSK_WITH_AES_256_CBC_SHA => PSK-AES256-CBC-SHA 06:45:22.296 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_3DES_EDE_CBC_SHA => DES-CBC3-SHA 06:45:22.296 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_3DES_EDE_CBC_SHA => DES-CBC3-SHA 06:45:22.297 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - Supported protocols (OpenSSL): [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2, TLSv1.3] 06:45:22.297 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - Default cipher suites (OpenSSL): [TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256] 06:45:22.419 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: [email protected] 06:45:22.466 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: [email protected] 06:45:23.158 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.SslHandler - [id: 0x1b74c3b0, L:/127.0.0.1:55396 - R:localhost/127.0.0.1:8080] HANDSHAKEN: protocol:TLSv1.3 cipher suite:TLS_AES_128_GCM_SHA256 StringString ' I20220817 06:45:23.707774 243447 thriftmux_openssl_trace_bpf_test.cc:118] Client PID: 243649 I20220817 06:45:24.708307 243447 source_connector.cc:36] Initializing source connector: socket_trace_connector I20220817 06:45:24.708359 243447 linux_headers.cc:209] Found 
Linux kernel version using .note section. I20220817 06:45:24.708366 243447 linux_headers.cc:90] Obtained Linux version string from `uname`: 5.15.0-30-generic I20220817 06:45:24.708372 243447 linux_headers.cc:599] Detected kernel release (uname -r): 5.15.0-30-generic I20220817 06:45:24.708408 243447 bcc_wrapper.cc:120] Using linux headers found at /lib/modules/5.15.0-30-generic/build for BCC runtime. warning: ./src/stirling/source_connectors/socket_tracer/bcc_bpf/grpc_c_trace.c:652:3: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering I20220817 06:45:41.949194 243447 socket_trace_connector.cc:404] Number of kprobes deployed = 40 I20220817 06:45:41.949318 243447 socket_trace_connector.cc:405] Probes successfully deployed. I20220817 06:45:41.949431 243447 socket_trace_connector.cc:340] Initializing perf buffers with ncpus=2 and scaling_factor=0.9 I20220817 06:45:41.950176 243447 socket_trace_connector.cc:329] Total perf buffer usage for kData buffers across all cpus: 188743680 I20220817 06:45:41.950258 243447 socket_trace_connector.cc:329] Total perf buffer usage for kControl buffers across all cpus: 3963614 I20220817 06:45:41.950860 243447 bcc_wrapper.cc:345] Opening perf buffer: socket_data_events [requested_size=18874368 num_pages=8192 size=33554432] (per cpu) I20220817 06:45:41.956878 243447 bcc_wrapper.cc:345] Opening perf buffer: socket_control_events [requested_size=943718 num_pages=256 size=1048576] (per cpu) I20220817 06:45:41.957451 243447 bcc_wrapper.cc:345] Opening perf buffer: conn_stats_events [requested_size=943718 num_pages=256 size=1048576] (per cpu) I20220817 06:45:41.957908 243447 bcc_wrapper.cc:345] Opening perf buffer: mmap_events [requested_size=94371 num_pages=32 size=131072] (per cpu) I20220817 06:45:41.958210 243447 bcc_wrapper.cc:345] Opening perf buffer: go_grpc_events [requested_size=18874368 num_pages=8192 
size=33554432] (per cpu) I20220817 06:45:41.962934 243447 bcc_wrapper.cc:345] Opening perf buffer: grpc_c_events [requested_size=18874368 num_pages=8192 size=33554432] (per cpu) I20220817 06:45:41.968755 243447 bcc_wrapper.cc:345] Opening perf buffer: grpc_c_header_events [requested_size=18874368 num_pages=8192 size=33554432] (per cpu) I20220817 06:45:41.974740 243447 bcc_wrapper.cc:345] Opening perf buffer: grpc_c_close_events [requested_size=18874368 num_pages=8192 size=33554432] (per cpu) I20220817 06:45:41.981499 243447 socket_trace_connector.cc:409] Number of perf buffers opened = 8 W20220817 06:45:42.051280 243702 uprobe_symaddrs.cc:620] Unable to find openssl symbol 'OpenSSL_version_num' using dlopen/dlsym. Attempting to find address manually for pid 243532 I20220817 06:45:42.916184 243702 uprobe_manager.cc:755] Number of uprobes deployed = 5 Warning: use -cacerts option to access cacerts keystore I20220817 06:45:43.465761 243447 thriftmux_openssl_trace_bpf_test.cc:102] keytool -importcert command output: 'keytool error: java.lang.Exception: Certificate not imported, alias already exists ' WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by com.twitter.jvm.Hotspot (file:/app/maven/v1/https/repo1.maven.org/maven2/com/twitter/util-jvm_2.13/22.7.0/util-jvm_2.13-22.7.0.jar) to field sun.management.ManagementFactoryHelper.jvm WARNING: Please consider reporting this to the maintainers of com.twitter.jvm.Hotspot WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release Aug 17, 2022 6:45:44 AM com.twitter.finagle.Init$ $anonfun$once$1 INFO: Finagle version 22.7.0 (rev=4d10525ef4fe89732bcfec6286fac7fe426e9082) built at 20220728-183850 Aug 17, 2022 6:45:45 AM com.twitter.finagle.FixedInetResolver$ factory$lzycompute INFO: Successfully loaded a fixed inet resolver: [email protected]3d5 Aug 17, 2022 6:45:45 
AM com.twitter.finagle.BaseResolver inetResolver$lzycompute INFO: Using default inet resolver Aug 17, 2022 6:45:45 AM com.twitter.finagle.InetResolver$ factory$lzycompute INFO: Successfully loaded an inet resolver: [email protected] Aug 17, 2022 6:45:45 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3 INFO: Resolver[inet] = com.twitter.finagle.InetResolver([email protected]) Aug 17, 2022 6:45:45 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3 INFO: Resolver[fixedinet] = com.twitter.finagle.FixedInetResolver([email protected]) Aug 17, 2022 6:45:45 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3 INFO: Resolver[neg] = com.twitter.finagle.NegResolver$([email protected]) Aug 17, 2022 6:45:45 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3 INFO: Resolver[nil] = com.twitter.finagle.NilResolver$([email protected]) Aug 17, 2022 6:45:45 AM com.twitter.finagle.BaseResolver $anonfun$resolvers$3 INFO: Resolver[fail] = com.twitter.finagle.FailResolver$([email protected]) I20220817 06:45:45.885520 243703 conn_tracker.cc:461] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kCollecting remote_addr=-:-1 role=kRoleUnknown protocol=kProtocolUnknown ssl=false New connection tracker I20220817 06:45:45.885609 243703 conn_tracker.cc:471] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kCollecting remote_addr=127.0.0.1:55398 role=kRoleUnknown protocol=kProtocolUnknown ssl=false RemoteAddr updated 127.0.0.1, reason=[Inferred from conn_open.] 
I20220817 06:45:45.885650 243703 conn_tracker.cc:491] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kCollecting remote_addr=127.0.0.1:55398 role=kRoleUnknown protocol=kProtocolUnknown ssl=false Role updated kRoleUnknown -> kRoleServer, reason=[Inferred from conn_open.]] I20220817 06:45:45.885674 243703 conn_tracker.cc:107] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kCollecting remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolUnknown ssl=false conn_open: [type=kConnOpen ts=445990939788410 conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] source_fn=kSyscallAccept [addr=[family=2 addr=127.0.0.1 port=26328]]] I20220817 06:45:46.102075 243703 conn_tracker.cc:519] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kCollecting remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=false Protocol changed: kProtocolUnknown->kProtocolMux, reason=[inferred from data_event] I20220817 06:45:46.102159 243703 conn_tracker.cc:543] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kCollecting remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true SSL state changed: false->true, reason=[inferred from data_event] I20220817 06:45:46.102195 243703 conn_tracker.cc:150] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kCollecting remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true Data event: attr:[[ts=445991139087516 conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] protocol=kProtocolMux role=kRoleServer dir=kIngress ssl=true source_fn=kSSLRead pos=220 size=19 buf_size=19]] msg_size:19 msg:[\x00\x00\x00\x0F\x7F\x00\x00\x01tinit check] I20220817 06:45:46.102247 243703 conn_tracker.cc:150] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kCollecting remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true Data event: attr:[[ts=445991139891730 conn_id=[upid=243532:44596285 fd=140 
gen=445990939787797] protocol=kProtocolMux role=kRoleServer dir=kEgress ssl=true source_fn=kSSLWrite pos=0 size=19 buf_size=19]] msg_size:19 msg:[\x00\x00\x00\x0F\x7F\x00\x00\x01tinit check] I20220817 06:45:46.102296 243703 conn_tracker.cc:150] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kCollecting remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true Data event: attr:[[ts=445991143109532 conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] protocol=kProtocolMux role=kRoleServer dir=kIngress ssl=true source_fn=kSSLRead pos=239 size=46 buf_size=46]] msg_size:46 msg:[\x00\x00\x00*D\x00\x00\x01\x00\x01\x00\x00\x00\x0Amux-framer\x00\x00\x00\x04\x7F\xFF\xFF\xFF\x00\x00\x00\x03tls\x00\x00\x00\x03off] I20220817 06:45:46.102355 243703 conn_tracker.cc:150] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kCollecting remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true Data event: attr:[[ts=445991143560078 conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] protocol=kProtocolMux role=kRoleServer dir=kEgress ssl=true source_fn=kSSLWrite pos=19 size=46 buf_size=46]] msg_size:46 msg:[\x00\x00\x00*\xBC\x00\x00\x01\x00\x01\x00\x00\x00\x0Amux-framer\x00\x00\x00\x04\x7F\xFF\xFF\xFF\x00\x00\x00\x03tls\x00\x00\x00\x03off] I20220817 06:45:46.102818 243703 conn_tracker.h:271] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true req_frames=2 resp_frames=2 I20220817 06:45:46.103344 243703 conn_tracker.h:278] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true records=2 I20220817 06:45:46.214886 243703 conn_tracker.cc:150] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true Data event: 
attr:[[ts=445991198314214 conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] protocol=kProtocolMux role=kRoleServer dir=kIngress ssl=true source_fn=kSSLRead pos=285 size=8 buf_size=8]] msg_size:8 msg:[\x00\x00\x00\x04A\x00\x00\x01] I20220817 06:45:46.215129 243703 conn_tracker.cc:150] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true Data event: attr:[[ts=445991198705762 conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] protocol=kProtocolMux role=kRoleServer dir=kEgress ssl=true source_fn=kSSLWrite pos=65 size=8 buf_size=8]] msg_size:8 msg:[\x00\x00\x00\x04\xBF\x00\x00\x01] I20220817 06:45:46.215293 243703 conn_tracker.cc:150] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true Data event: attr:[[ts=445991206331970 conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] protocol=kProtocolMux role=kRoleServer dir=kIngress ssl=true source_fn=kSSLRead pos=293 size=204 buf_size=204]] msg_size:204 msg:[\x00\x00\x00\xC8\x02\x00\x00\x02\x00\x03\x00(com.twitter.finagle.tracing.TraceContext\x00 \xA9_\xE5\x18n\xF3\x86\xF8\xA9_\xE5\x18n\xF3\x86\xF8\xA9_\xE5\x18n\xF3\x86\xF8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1Bcom.twitter.finagle.Retries\x00\x04\x00\x00\x00\x00\x00\x1Ccom.twitter.finagle.Deadline\x00\x10\x17\x0C\x0E\xE6\x962\x0D\x80\x7F\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x00\x00\x00\x00\x80\x01\x00\x01\x00\x00\x00\x05query\x00\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x06String\x00] I20220817 06:45:46.215481 243703 conn_tracker.cc:150] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true Data event: attr:[[ts=445991207463787 conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] protocol=kProtocolMux role=kRoleServer dir=kEgress ssl=true source_fn=kSSLWrite pos=73 
size=48 buf_size=48]] msg_size:48 msg:[\x00\x00\x00,\xFE\x00\x00\x02\x00\x00\x00\x80\x01\x00\x02\x00\x00\x00\x05query\x00\x00\x00\x00\x0B\x00\x00\x00\x00\x00\x0CStringString\x00] I20220817 06:45:46.216001 243703 conn_tracker.h:271] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true req_frames=2 resp_frames=2 I20220817 06:45:46.216141 243703 conn_tracker.h:278] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true records=2 I20220817 06:45:46.334108 243703 conn_tracker.h:271] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true req_frames=0 resp_frames=0 I20220817 06:45:46.334323 243703 conn_tracker.h:278] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true records=0 I20220817 06:45:46.446976 243703 conn_tracker.h:271] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true req_frames=0 resp_frames=0 I20220817 06:45:46.447182 243703 conn_tracker.h:278] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true records=0 I20220817 06:45:46.516883 243447 thriftmux_openssl_trace_bpf_test.cc:109] thriftmux client command output: '243755 06:45:44.361 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework 06:45:44.364 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple 06:45:44.364 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4 06:45:44.381 [main] 
DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false 06:45:44.382 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 11 06:45:44.383 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available 06:45:44.384 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available 06:45:44.385 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.storeFence: available 06:45:44.385 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available 06:45:44.386 [main] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: unavailable: Reflective setAccessible(true) disabled 06:45:44.386 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true 06:45:44.387 [main] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable: class io.netty.util.internal.PlatformDependent0$7 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @4f49f6af 06:45:44.388 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.(long, int): unavailable 06:45:44.388 [main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available 06:45:44.389 [main] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 2084569088 bytes (maybe) 06:45:44.389 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir) 06:45:44.389 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model) 06:45:44.390 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: -1 bytes 06:45:44.391 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: -1 06:45:44.391 [main] 
DEBUG io.netty.util.internal.CleanerJava9 - java.nio.ByteBuffer.cleaner(): available 06:45:44.392 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false 06:45:44.393 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 8 06:45:44.394 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 8 06:45:44.394 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192 06:45:44.394 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 7 06:45:44.394 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 1048576 06:45:44.394 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256 06:45:44.395 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64 06:45:44.395 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768 06:45:44.395 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192 06:45:44.395 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimIntervalMillis: 0 06:45:44.395 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: false 06:45:44.396 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023 06:45:44.399 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024 06:45:44.399 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096 06:45:44.430 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.workdir: /tmp (io.netty.tmpdir) 06:45:44.431 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - 
-Dio.netty.native.deleteLibAfterLoading: true 06:45:44.431 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.tryPatchShadedId: true 06:45:44.431 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.detectNativeLibraryDuplicates: true 06:45:44.445 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - Successfully loaded the library /tmp/libnetty_transport_native_epoll_x86_6417257840684434158261.so 06:45:44.448 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false 06:45:44.448 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false 06:45:44.450 [main] DEBUG io.netty.util.NetUtilInitializations - Loopback interface: lo (lo, 127.0.0.1) 06:45:44.451 [main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 4096 06:45:45.041 [main] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: [email protected] 06:45:45.047 [main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available 06:45:45.427 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 4 06:45:45.481 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 243772 (auto-detected) 06:45:45.483 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 02:42:ac:ff:fe:11:00:02 (auto-detected) 06:45:45.497 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled 06:45:45.497 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0 06:45:45.497 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384 06:45:45.551 [finagle/netty4-1-1] DEBUG io.netty.util.internal.NativeLibraryLoader - Successfully loaded the library /tmp/libnetty_tcnative_linux_x86_644027974589868220049.so 06:45:45.551 [finagle/netty4-1-1] DEBUG io.netty.util.internal.NativeLibraryLoader - Loaded library with name 'netty_tcnative_linux_x86_64' 
06:45:45.552 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - Initialize netty-tcnative using engine: 'default' 06:45:45.552 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - netty-tcnative using native library: BoringSSL 06:45:45.621 [finagle/netty4-1-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true 06:45:45.621 [finagle/netty4-1-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true 06:45:45.622 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: [email protected] 06:45:45.637 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: [email protected] 06:45:45.641 [finagle/netty4-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: disabled 06:45:45.641 [finagle/netty4-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: disabled 06:45:45.641 [finagle/netty4-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.chunkSize: disabled 06:45:45.641 [finagle/netty4-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.blocking: disabled 06:45:45.651 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 => ECDHE-ECDSA-AES128-GCM-SHA256 06:45:45.651 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 => ECDHE-ECDSA-AES128-GCM-SHA256 06:45:45.651 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 => ECDHE-RSA-AES128-GCM-SHA256 06:45:45.651 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_AES_128_GCM_SHA256 => ECDHE-RSA-AES128-GCM-SHA256 06:45:45.651 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: 
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 => ECDHE-ECDSA-AES256-GCM-SHA384 06:45:45.652 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 => ECDHE-ECDSA-AES256-GCM-SHA384 06:45:45.652 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 => ECDHE-RSA-AES256-GCM-SHA384 06:45:45.652 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_AES_256_GCM_SHA384 => ECDHE-RSA-AES256-GCM-SHA384 06:45:45.652 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-ECDSA-CHACHA20-POLY1305 06:45:45.652 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-ECDSA-CHACHA20-POLY1305 06:45:45.652 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-RSA-CHACHA20-POLY1305 06:45:45.652 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-RSA-CHACHA20-POLY1305 06:45:45.653 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-PSK-CHACHA20-POLY1305 06:45:45.653 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256 => ECDHE-PSK-CHACHA20-POLY1305 06:45:45.653 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA => ECDHE-ECDSA-AES128-SHA 06:45:45.653 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: 
SSL_ECDHE_ECDSA_WITH_AES_128_CBC_SHA => ECDHE-ECDSA-AES128-SHA 06:45:45.653 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA => ECDHE-RSA-AES128-SHA 06:45:45.653 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_AES_128_CBC_SHA => ECDHE-RSA-AES128-SHA 06:45:45.653 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA => ECDHE-PSK-AES128-CBC-SHA 06:45:45.654 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_PSK_WITH_AES_128_CBC_SHA => ECDHE-PSK-AES128-CBC-SHA 06:45:45.654 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA => ECDHE-ECDSA-AES256-SHA 06:45:45.654 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_ECDSA_WITH_AES_256_CBC_SHA => ECDHE-ECDSA-AES256-SHA 06:45:45.654 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA => ECDHE-RSA-AES256-SHA 06:45:45.654 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_RSA_WITH_AES_256_CBC_SHA => ECDHE-RSA-AES256-SHA 06:45:45.654 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA => ECDHE-PSK-AES256-CBC-SHA 06:45:45.654 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_ECDHE_PSK_WITH_AES_256_CBC_SHA => ECDHE-PSK-AES256-CBC-SHA 06:45:45.655 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_AES_128_GCM_SHA256 => AES128-GCM-SHA256 06:45:45.655 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite 
    mapping: SSL_RSA_WITH_AES_128_GCM_SHA256 => AES128-GCM-SHA256
    06:45:45.655 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_AES_256_GCM_SHA384 => AES256-GCM-SHA384
    06:45:45.655 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_AES_256_GCM_SHA384 => AES256-GCM-SHA384
    06:45:45.655 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_AES_128_CBC_SHA => AES128-SHA
    06:45:45.655 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_AES_128_CBC_SHA => AES128-SHA
    06:45:45.655 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_PSK_WITH_AES_128_CBC_SHA => PSK-AES128-CBC-SHA
    06:45:45.655 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_PSK_WITH_AES_128_CBC_SHA => PSK-AES128-CBC-SHA
    06:45:45.656 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_AES_256_CBC_SHA => AES256-SHA
    06:45:45.656 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_AES_256_CBC_SHA => AES256-SHA
    06:45:45.656 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_PSK_WITH_AES_256_CBC_SHA => PSK-AES256-CBC-SHA
    06:45:45.656 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_PSK_WITH_AES_256_CBC_SHA => PSK-AES256-CBC-SHA
    06:45:45.656 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: TLS_RSA_WITH_3DES_EDE_CBC_SHA => DES-CBC3-SHA
    06:45:45.656 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.CipherSuiteConverter - Cipher suite mapping: SSL_RSA_WITH_3DES_EDE_CBC_SHA => DES-CBC3-SHA
    06:45:45.657 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - Supported protocols (OpenSSL): [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2, TLSv1.3]
    06:45:45.657 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - Default cipher suites (OpenSSL): [TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256]
    06:45:45.797 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: [email protected]
    06:45:45.844 [finagle/netty4-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: [email protected]
    06:45:46.050 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.SslHandler - [id: 0xeab700d9, L:/127.0.0.1:55398 - R:localhost/127.0.0.1:8080] HANDSHAKEN: protocol:TLSv1.3 cipher suite:TLS_AES_128_GCM_SHA256
    StringString '
    I20220817 06:45:46.516991 243447 thriftmux_openssl_trace_bpf_test.cc:118] Client PID: 243755
    I20220817 06:45:46.555589 243703 conn_tracker.cc:136] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true conn_close: [type=kConnClose ts=445991588344702 conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] source_fn=kSyscallClose [wr_bytes=121 rd_bytes=497]]
    I20220817 06:45:46.555630 243703 conn_tracker.cc:593] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true Marked for death, countdown=3
    I20220817 06:45:46.555649 243703 conn_tracker.cc:195] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true ConnStats timestamp=445991588353438 wr=121 rd=497 close=2
    I20220817 06:45:46.556063 243703 conn_tracker.h:271] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true req_frames=0 resp_frames=0
    I20220817 06:45:46.556085 243703 conn_tracker.h:278] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true records=0
    I20220817 06:45:46.556097 243703 conn_tracker.cc:793] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true Death countdown=2
    I20220817 06:45:46.671743 243703 conn_tracker.h:271] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true req_frames=0 resp_frames=0
    I20220817 06:45:46.671780 243703 conn_tracker.h:278] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true records=0
    I20220817 06:45:46.671792 243703 conn_tracker.cc:793] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true Death countdown=1
    I20220817 06:45:48.469894 243447 container_runner.cc:59] docker rm -f thriftmux_server_445962533992195
    I20220817 06:45:48.696987 243447 conn_tracker.cc:75] conn_id=[upid=243532:44596285 fd=140 gen=445990939787797] state=kTransferring remote_addr=127.0.0.1:55398 role=kRoleServer protocol=kProtocolMux ssl=true Being destroyed
    [       OK ] OpenSSLTraceRawFptrsTest/0.mtls_thriftmux_client (32478 ms)
    [----------] 1 test from OpenSSLTraceRawFptrsTest/0 (32478 ms total)

    [----------] Global test environment tear-down
    [==========] 1 test from 1 test suite ran. (32478 ms total)
    [  PASSED  ] 1 test.
    I20220817 06:45:48.768321 243447 env.cc:51] Shutting down

    Todo

    • [ ] Rebase once #562 is merged
    • [ ] Determine how to best address RawFptrManager breakage (proc_parser changes)
    • [ ] Correctly identify when to use the offsets for the existing OpenSSL support versus netty's BoringSSL
    opened by ddelnano 1
  • Pixie does not work on Minikube on Mac M1 machines


    Describe the bug
    When attempting to deploy Pixie to a new cluster created using Minikube, the px deploy command times out at the step labeled "Waiting for Cloud Connector to come online".

    To Reproduce
    Steps to reproduce the behavior:

    1. minikube start --driver=qemu2 --cni=flannel --cpus=4 --memory=8000 -p=test

      $ minikube start --driver=qemu2 --cni=flannel --cpus=4 --memory=8000 -p=test
      😄  [test] minikube v1.26.1 on Darwin 13.0 (arm64)
      ✨  Using the qemu2 (experimental) driver based on user configuration
      👍  Starting control plane node test in cluster test
      🔥  Creating qemu2 VM (CPUs=4, Memory=8000MB, Disk=20000MB) ...
      🐳  Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
          ▪ Generating certificates and keys ...
          ▪ Booting up control plane ...
          ▪ Configuring RBAC rules ...
      🔗  Configuring Flannel (Container Networking Interface) ...
      🔎  Verifying Kubernetes components...
          ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
      🌟  Enabled addons: storage-provisioner, default-storageclass
      🏄  Done! kubectl is now configured to use "phil-test" cluster and "default" namespace by default
      
      • Note the qemu2 driver. The hyperkit driver is not available for M1 Macs:

        $ minikube start --driver=hyperkit --cni=flannel --cpus=4 --memory=8000 -p=test
        😄  [test] minikube v1.26.1 on Darwin 13.0 (arm64)
        ✨  Using the hyperkit driver based on user configuration
        
        ❌  Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
        
    2. minikube profile test

      $ minikube profile test
      ✅  minikube profile was successfully set to test
      
    3. px deploy

      $ px deploy
      Pixie CLI
      
      Running Cluster Checks:
       ✔    Kernel version > 4.14.0
       ✕    Cluster type is supported  ERR: Unrecognized minikube driver. Please use kvm2 or HyperKit instead.
      Check pre-check has failed. To bypass pass in --check=false. error=Unrecognized minikube driver. Please use kvm2 or HyperKit instead.
      
    4. px deploy --check=false

      $ px deploy --check=false
      Pixie CLI
      Installing Vizier version: 0.11.9
      Generating YAMLs for Pixie
      Deploying Pixie to the following cluster: test
      Is the cluster correct? (y/n) [y] : y
      Found 1 nodes
       ✔    Installing OLM CRDs
       ✔    Deploying OLM
       ✔    Deploying Pixie OLM Namespace
       ✔    Installing Vizier CRD
       ✔    Deploying OLM Catalog
       ✔    Deploying OLM Subscription
       ✔    Creating namespace
       ✔    Deploying Vizier
       ⠸    Waiting for Cloud Connector to come online
      FATA[0388] Timed out waiting for cluster ID assignment
      

    Expected behavior
    The px deploy command should succeed, with or without the --check=false option, when using the qemu2 driver on an M1 Mac.

    Screenshots
    See the output inline above.

    Logs
    Please attach the logs by running the following command:

    ./px collect-logs
    

    Logs were provided to @htroisi via Slack, and deemed unhelpful.

    App information (please complete the following information):

    • Pixie version

      Pixie CLI
      0.7.17+Distribution.2f3f26c.20220803165147.1.Homebrew
      
    • K8s cluster version

      $ kubectl version --short
      Client Version: v1.24.3
      Kustomize Version: v4.5.4
      Server Version: v1.24.3
      
    • Node Kernel version

      • Unsure
    • Browser version

      • Firefox Developer Edition 104.0b9 (64-bit)

    Additional context
    The hyperkit driver is not available for macOS machines on the ARM64 architecture (M1 Macs). Ideally, another driver should be recommended in the Minikube install guide and supported throughout the application. See kubernetes/minikube#11885 for additional context.
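    The driver gate that produces the error above can be mimicked locally as a quick sanity check. This is only a sketch: the accepted driver names (kvm2, hyperkit) are inferred from the CLI's error message in this report, not from Pixie's source, and the helper function name is hypothetical.

    ```shell
    #!/bin/sh
    # Mirrors the px pre-check error seen above: only kvm2 and hyperkit pass.
    # The driver list is taken from the CLI error message, not Pixie's code.
    check_minikube_driver() {
      case "$1" in
        kvm2|hyperkit) echo "supported" ;;
        *) echo "unsupported: $1 (px will require --check=false)" ;;
      esac
    }

    check_minikube_driver qemu2   # -> unsupported: qemu2 (px will require --check=false)
    check_minikube_driver kvm2    # -> supported
    ```

    The current driver for a profile can be confirmed with `minikube profile list` before deploying.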

    triage/needs-information 
    opened by PSalant726 1
  • Support plugin service API calls from Go client


    Is your feature request related to a problem? Please describe.
    Retention scripts cannot currently be created or updated programmatically.

    Describe the solution you'd like
    Add support for calling the plugin service API from the Go SDK.

    Describe alternatives you've considered

    Additional context

    kind/feature good first issue priority/important-longterm area/api 
    opened by tharinduwijewardane 2
  • pixie cloud install in k8s but not minikube


    Describe the bug
    I tried to install Pixie in self-hosted mode on a real Kubernetes cluster rather than Minikube. When installing Pixie Cloud, I followed the documentation to set up DNS. After running dev_dns_updater.go, the browser still cannot resolve the domain name, and the service cannot be accessed through dev.withpixie.dev. I would like to know whether the Pixie Cloud installation method shown in the documentation is only valid for Minikube.

    Expected behavior
    The browser can access the Pixie Cloud service through dev.withpixie.dev and set the Pixie Cloud password.
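    For anyone debugging the same symptom, a minimal sketch of the resolution check follows. Assumptions: dev_dns_updater.go manages hosts-file style name mappings, the domain comes from the report above, and the IP below is a placeholder for whatever address fronts Pixie Cloud on your cluster; the helper name is hypothetical.

    ```shell
    #!/bin/sh
    # Format a hosts-file entry mapping the Pixie Cloud dev domain to a given IP.
    # 192.0.2.10 is a documentation placeholder, not a real Pixie address.
    hosts_entry() {
      ip="$1"; shift
      printf '%s %s\n' "$ip" "$*"
    }

    # First check whether the name resolves at all, e.g.:
    #   getent hosts dev.withpixie.dev || echo "not resolving"
    hosts_entry 192.0.2.10 dev.withpixie.dev
    ```

    If the name does not resolve, appending the printed line to /etc/hosts on the machine running the browser is a common manual workaround while the DNS setup is debugged.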

    App information (please complete the following information):

    • Pixie version
    • K8s cluster version: 1.12.1
    • Node Kernel version: 5.4.0
    • Browser version: 104.0.5112.79 (Official Build) (64-bit)
    opened by 1006er 2
  • Upgrade finagle to version that includes tls tracing compatible netty-tcnative


    This PR upgrades finagle to a version that includes the netty changes I upstreamed (https://github.com/netty/netty/pull/12438, https://github.com/netty/netty-tcnative/pull/731) to support #407. Once this is upgraded, the next step is to refresh #463.

    Testing

    • [x] Ran the thriftmux container and verified that the logs of the client show that netty tcnative is used (improperly installed netty tcnative will silently succeed)
    [email protected]:/vagrant$ bazel run -c dbg src/stirling/source_connectors/socket_tracer/testing/containers/thriftmux:server_image
    
    [email protected]:/vagrant$ docker run -it bazel/src/stirling/source_connectors/socket_tracer/testing/containers/thriftmux:server_image --use-tls true
    
    # Run client inside the server container and see netty-tcnative is used successfully
    / # /usr/bin/java -cp @/app/px/src/stirling/source_connectors/socket_tracer/testing/containers/thriftmux/server_image.classpath Client --use-tls true 2>&1 | grep -i tcnative
    01:03:03.586 [finagle/netty4-1-1] DEBUG io.netty.util.internal.NativeLibraryLoader - Successfully loaded the library /tmp/libnetty_tcnative_linux_x86_64685160139595718633.so
    01:03:03.587 [finagle/netty4-1-1] DEBUG io.netty.util.internal.NativeLibraryLoader - Loaded library with name 'netty_tcnative_linux_x86_64'
    01:03:03.587 [finagle/netty4-1-1] DEBUG io.netty.handler.ssl.OpenSsl - Initialize netty-tcnative using engine: 'default'
    
    
    opened by ddelnano 2
  • Add pixie-operator pod logs to `px collect-logs` CLI output.


    Pixie's CLI allows you to gather Pixie logs using the px collect-logs command. The output logs should also include the pixie-operator pod logs, if applicable.
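    Until that lands, operator logs can be gathered manually. A rough sketch, with stated assumptions: pods are selected by an "operator" substring in their name (a heuristic, not Pixie's naming rule), and the px-operator namespace used in the usage comment is a guess to be verified with `kubectl get pods -A`.

    ```shell
    #!/bin/sh
    # Pick out operator-ish pod names from `kubectl get pods` style output so
    # their logs can be fetched alongside what `px collect-logs` already grabs.
    # The "operator" substring match is a heuristic, not Pixie's naming rule.
    operator_pods() {
      awk 'NR > 1 && $1 ~ /operator/ { print $1 }'
    }

    # Usage against a live cluster (namespace is a guess):
    #   kubectl -n px-operator get pods | operator_pods | \
    #     xargs -I{} kubectl -n px-operator logs {} > operator.log
    printf 'NAME READY\nvizier-operator-abc 1/1\nkelvin-xyz 1/1\n' | operator_pods
    ```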

    good first issue area/cli 
    opened by htroisi 2
Releases: release/cli/v0.7.17
Owner
Pixie Labs
Engineers use Pixie’s auto-telemetry to debug distributed environments in real-time