Kubectl plugin to ease sniffing on kubernetes pods using tcpdump and wireshark

Overview

ksniff

A kubectl plugin that utilizes tcpdump and Wireshark to start a remote capture on any pod in your Kubernetes cluster.

You get the full power of Wireshark with minimal impact on your running pods.

Intro

When working with micro-services, it is often very helpful to capture the network activity between your micro-service and its dependencies.

ksniff uses kubectl to upload a statically compiled tcpdump binary to your pod and redirects its output to your local Wireshark for a smooth network-debugging experience.

Demo

Demo!

Production Readiness

Ksniff isn't production-ready yet; running it against production workloads isn't recommended at this point.

Installation

Installation via krew (https://github.com/GoogleContainerTools/krew)

kubectl krew install sniff

For manual installation, download the latest release package, unzip it and use the attached makefile:

unzip ksniff.zip
make install

Build

Requirements:

  1. libpcap-dev: for tcpdump compilation (Ubuntu: sudo apt-get install libpcap-dev)
  2. go 1.11 or newer

Compiling:

linux:      make linux
windows:    make windows
mac:        make darwin

To compile a static tcpdump binary:

make static-tcpdump

Usage

kubectl < 1.12:
kubectl plugin sniff <POD_NAME> [-n <NAMESPACE_NAME>] [-c <CONTAINER_NAME>] [-i <INTERFACE_NAME>] [-f <CAPTURE_FILTER>] [-o OUTPUT_FILE] [-l LOCAL_TCPDUMP_FILE] [-r REMOTE_TCPDUMP_FILE]

kubectl >= 1.12:
kubectl sniff <POD_NAME> [-n <NAMESPACE_NAME>] [-c <CONTAINER_NAME>] [-i <INTERFACE_NAME>] [-f <CAPTURE_FILTER>] [-o OUTPUT_FILE] [-l LOCAL_TCPDUMP_FILE] [-r REMOTE_TCPDUMP_FILE]

POD_NAME: Required. The name of the Kubernetes pod whose traffic you want to capture.
NAMESPACE_NAME: Optional. The target namespace to operate on.
CONTAINER_NAME: Optional. If omitted, the first container in the pod will be chosen.
INTERFACE_NAME: Optional. The pod interface to capture from. If omitted, all pod interfaces will be captured.
CAPTURE_FILTER: Optional. A tcpdump capture filter. If omitted, no filter will be used.
OUTPUT_FILE: Optional. If specified, ksniff will redirect the tcpdump output to a local file instead of Wireshark. Use '-' for stdout.
LOCAL_TCPDUMP_FILE: Optional. If specified, ksniff will use this path as the local path of the static tcpdump binary.
REMOTE_TCPDUMP_FILE: Optional. If specified, ksniff will upload the static tcpdump binary to this remote path.
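For example, the flags above can be combined to capture only port-443 traffic from a specific container and save it to a local file (the pod, namespace, container, and interface names here are hypothetical):

```shell
# Capture port-443 traffic on eth0 of the 'app' container in pod 'my-pod'
# (namespace 'staging') and write the capture to a local pcap file
kubectl sniff my-pod -n staging -c app -i eth0 -f "port 443" -o capture.pcap
```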

Non-Privileged and Scratch Pods

To reduce the attack surface and keep containers small and lean, many production containers run as a non-privileged user or are even built from scratch.

To support those containers as well, ksniff now ships with a "-p" (privileged) mode. When executed with the -p flag, ksniff creates a new pod on the remote Kubernetes cluster that has access to the node's Docker daemon.

ksniff then uses that pod to run a container attached to the target container's network namespace and performs the actual network capture there.
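A minimal privileged-mode invocation might look like this (the pod and namespace names are hypothetical):

```shell
# -p spawns a privileged helper pod on the target node; tcpdump then runs
# in a container attached to the target pod's network namespace
kubectl sniff my-pod -n production -c app -p
```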

Piping output to stdout

By default, ksniff attempts to start a local instance of the Wireshark GUI. You can integrate with other tools by using the -o - flag to pipe packet capture data to stdout.

Example using tshark:

kubectl sniff pod-name -f "port 80" -o - | tshark -r -

Contribution

More than welcome! Please don't hesitate to open bugs, questions, or pull requests.

Future Work

  1. Instead of uploading a static tcpdump binary, use the upcoming "kubectl debug" feature (https://github.com/kubernetes/community/pull/649), which should be a much cleaner solution.

Known Issues

Wireshark and TShark cannot read pcap

Issues 100 and 98

Wireshark may show UNKNOWN in the Protocol column, and TShark may report the following in its output:

tshark: The standard input contains record data that TShark doesn't support.
(pcap: network type 276 unknown or unsupported)

This issue happens when an old version of Wireshark or TShark reads the pcap created by ksniff. Upgrade Wireshark or TShark to resolve it. Ubuntu LTS releases may hit this problem with stock package versions; using the Wireshark PPA will help.

Issues
  • Error when run ksniff in privileged mode

    This happens after running kubectl sniff -p <POD> -c <CONTAINER_NAME> -n <NAMESPACE>. I'm using AKS Kubernetes v1.17.11.

    ksniff version: sniff v1.5.0

    INFO[0000] waiting for pod successful startup           
    INFO[0004] pod: 'ksniff-5pbv6' created successfully on node: 'aks-d8sv3-38575711-vmss000001' 
    INFO[0004] spawning wireshark!                          
    INFO[0004] starting remote sniffing using privileged pod 
    INFO[0004] executing command: '[docker --host unix:///host/var/run/docker.sock run --rm --name=ksniff-container-fLuezHME --net=container:602167f3ac7de5f5156763f8ad765ea15e7b9ed1cfeb242392fe0330c6762aaa maintained/tcpdump -i any -U -w - ]' on container: 'ksniff-privileged', pod: 'ksniff-5pbv6', namespace: 'global' 
    INFO[0005] command: '[docker --host unix:///host/var/run/docker.sock run --rm --name=ksniff-container-fLuezHME --net=container:602167f3ac7de5f5156763f8ad765ea15e7b9ed1cfeb242392fe0330c6762aaa maintained/tcpdump -i any -U -w - ]' executing successfully exitCode: '125', stdErr :'docker: Cannot connect to the Docker daemon at unix:///host/var/run/docker.sock. Is the docker daemon running?.
    See 'docker run --help'.
    INFO[0005] remote sniffing using privileged pod completed
    
    opened by ffais 15
  • Unrecognized libpcap format or not libpcap data

    Experiment Environment

    • OS: MacOS Mojave 10.14.1
    • Cluster:
      • Client: kubectl v1.12.3, installed from brew
      • Server: Kubernetes v1.10.11

    What do I do?

    I have configured kubectl to control a remote cluster, so this operation runs on my laptop. I am trying to use sniff to dump the packet traffic going through a K8s pod.

    🚀  kc -n epc1 get pods
    NAME          READY   STATUS    RESTARTS   AGE
    cassandra-0   1/1     Running   0          2h
    hss-0         1/1     Running   0          2h
    mme-0         1/1     Running   0          2h
    
    🚀  kc sniff mme-0 -n epc1
    INFO[0000] using tcpdump path at: '/Users/aweimeow/.krew/store/sniff/afb1a2e2cd093f1c8f8fff511f48cc5a290d2c6ecd18d9f51f9c66500710297b/static-tcpdump'
    INFO[0000] no container specified, taking first container we found in pod.
    INFO[0000] selected container: 'mme'
    INFO[0000] sniffing on pod: 'mme-0' [namespace: 'epc1', container: 'mme', filter: '']
    INFO[0000] checking for static tcpdump binary on: '/tmp/static-tcpdump'
    INFO[0000] couldn't find static tcpdump binary on: '/tmp/static-tcpdump', starting to upload
    INFO[0000] tcpdump uploaded successfully
    INFO[0000] spawning wireshark!
    

    And Wireshark shows the following error message (screenshot attached in the issue).

    If you need more information for helping debug, please let me know :)

    bug 
    opened by aweimeow 15
  • Mount only the docker.sock file instead of the whole root path of the host

    When the pod is created, it mounts the whole / host path under the /host path of the pod. https://github.com/eldadru/ksniff/blob/798b1c7dec735bd2cdc925e7c840ee623b9ffde1/kube/kubernetes_api_service.go#L149

    However, when the Service Account Admission Manager came into the play, it will mount the token under /var/run/secrets/kubernetes.io/serviceaccount, and both the /var/run and /host/var/run is a symlink to /run, the /host/var/run/docker.sock is gone and docker won't be able to connect to the docker daemon.

    May I ask what's the purpose of mounting the whole / directory instead of just the .sock file? The current approach seems a bit dangerous and might have unexpected side effects.
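As a sketch of the narrower mount this issue proposes, a pod spec could mount only the socket using a Socket-typed hostPath volume instead of the whole host root (the volume and container names below are illustrative, not ksniff's actual spec):

```yaml
volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock
      type: Socket          # kubelet fails the mount if this isn't a socket
containers:
  - name: ksniff-privileged
    volumeMounts:
      - name: docker-socket
        mountPath: /var/run/docker.sock
        readOnly: true
```

With this shape, symlinked paths like /host/var/run no longer matter, because only the socket file itself is bind-mounted into the pod.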

    opened by namco1992 12
  • Wireshark/Tshark isn't reading output correctly

    What's the issue

    When I try to sniff traffic with wireshark or tshark I get an error pcap: network type 276 unknown or unsupported or I just get

    How to reproduce

    $ kubectl sniff my-pod -c my-container -p -n my-namespace -o - | tshark -r -
    INFO[0000] sniffing method: privileged pod
    INFO[0000] sniffing on pod: 'my-pod' [namespace: 'my-namespace', container: 'my-container', filter: '', interface: 'any']
    INFO[0000] creating privileged pod on node: 'my-node'
    INFO[0000] pod created: &Pod{ObjectMeta:{ksniff-qxsxk ksniff- my-namespace /api/v1/namespaces/my-namespace/pods/ksniff-qxsxk 485504a2-a9be-4328-8f86-424a2b41c2e1 56758253 0 2021-02-15 15:58:08 +0100 CET <nil> <nil> map[app:ksniff] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:host,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/,Type:*Directory,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},Volume{Name:container-socket,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/var/run/docker.sock,Type:*Socket,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},Volume{Name:default-token-8h6p9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8h6p9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:ksniff-privileged,Image:docker,Command:[sh -c sleep 
10000000],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:container-socket,ReadOnly:true,MountPath:/var/run/docker.sock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:default-token-8h6p9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Never,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:my-node,HostNetwork:false,HostPID:true,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    INFO[0000] waiting for pod successful startup
    INFO[0008] pod: 'ksniff-qxsxk' created successfully on node: 'my-node'
    INFO[0008] output file option specified, storing output in: '-'
    INFO[0008] starting remote sniffing using privileged pod
    INFO[0008] executing command: '[docker --host unix:///var/run/docker.sock run --rm --name=ksniff-container-fQJpKPcY --net=container:b696c45e35a5b9dfe0152685569fb35c6331c2d1e63648ed8987f52211ba0b5f maintained/tcpdump -i any -U -w - ]' on container: 'ksniff-privileged', pod: 'ksniff-qxsxk', namespace: 'my-namespace'
    tshark: The standard input contains record data that TShark doesn't support.
    (pcap: network type 276 unknown or unsupported)
    

    I get the same error if I save the output to a file and then try to open it with wireshark.

    However, if I run ksniff directly into Wireshark I do get the traffic, but Wireshark isn't able to decode it correctly
    (although, if you look closely, you can see some HTTP traffic in the raw data).

    $ kubectl sniff my-pod -c my-container -p -n my-namespace
    

    (screenshot attached in the issue)

    Version

    ksniff is built from current master (https://github.com/eldadru/ksniff/commit/f253ce97ae6c3884c545080d9124aceb2f3b4263)

    $ wireshark --version
    Wireshark 3.2.7 (Git v3.2.7 packaged as 3.2.7-1)
    
    Copyright 1998-2020 Gerald Combs <[email protected]> and contributors.
    License GPLv2+: GNU GPL version 2 or later <https://www.gnu.org/licenses/gpl-2.0.html>
    This is free software; see the source for copying conditions. There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    
    Compiled (64-bit) with Qt 5.14.2, with libpcap, with POSIX capabilities (Linux),
    with libnl 3, with GLib 2.66.0, with zlib 1.2.11, with SMI 0.4.8, with c-ares
    1.16.1, with Lua 5.2.4, with GnuTLS 3.6.15 and PKCS #11 support, with Gcrypt
    1.8.5, with MIT Kerberos, with MaxMind DB resolver, with nghttp2 1.41.0, with
    brotli, with LZ4, with Zstandard, with Snappy, with libxml2 2.9.10, with
    QtMultimedia, without automatic updates, with SpeexDSP (using system library),
    with SBC, with SpanDSP, without bcg729.
    
    Running on Linux 5.8.0-43-generic, with Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz
    (with SSE4.2), with 15709 MB of physical memory, with locale
    LC_CTYPE=en_US.UTF-8, LC_NUMERIC=sv_SE.UTF-8, LC_TIME=sv_SE.UTF-8,
    LC_COLLATE=en_US.UTF-8, LC_MONETARY=sv_SE.UTF-8, LC_MESSAGES=en_US.UTF-8,
    LC_PAPER=sv_SE.UTF-8, LC_NAME=sv_SE.UTF-8, LC_ADDRESS=sv_SE.UTF-8,
    LC_TELEPHONE=sv_SE.UTF-8, LC_MEASUREMENT=sv_SE.UTF-8,
    LC_IDENTIFICATION=sv_SE.UTF-8, with libpcap version 1.9.1 (with TPACKET_V3),
    with GnuTLS 3.6.15, with Gcrypt 1.8.5, with brotli 1.0.9, with zlib 1.2.11,
    binary plugins supported (0 loaded).
    
    Built using gcc 10.2.0.
    
    opened by Xartos 10
  • pod not found

    When I run k plugin sniff my-product-api-c97484d9b-bn6d6 -n my-product-api

    I get

    [+] Sniffing on pod: my-product-api-c97484d9b-bn6d6 container:  namespace: 
    [+] Verifying pod status
    Error from server (NotFound): pods "my-product-api-c97484d9b-bn6d6" not found
    [-] Pod is not existing or on different namespace
    error: exit status 1
    

    But, when I run k describe pod my-product-api-c97484d9b-bn6d6 -n my-product-api then I am getting the expected result.

    Versions

    Client Version: v1.10.4
    Server Version: v1.11.1

    bug 
    opened by magick93 9
  • Using ksniff on microk8s (containerd container runtime)

    Hi,

    I've had ksniff working with a minikube deployment with no issues. I've switched to microk8s and am now getting the following error.

    ERRO[0000] failed to create privileged pod on node: 'mudged-laptop' error="container runtime on node: 'mudged-laptop' isn't docker"

    From what I can see this is down to microk8s using containerd as the container runtime.

    Firstly am I right in my assumption? And are there any plans to support microk8s/containerd in the future?

    opened by mudged 8
  • Optional socket path argument and fallback to default socket path

    Fixed #87, #82.

    Changes:

    1. Added a default socket path for each runtime (e.g. /var/run/docker.sock for docker);
    2. A new socket argument for passing in the socket path to support the scenario that the socket path is different from the default path (e.g. the DOCKER_SOCKET is changed to another path in the docker conf);
    3. Changed DockerBridge and CrioBridge to pointer receivers. This fixed the BuildCleanupCommand in DockerBridge too.
    4. Return the overlooked error in kube/ops.go and pkg/service/sniffer/privileged_pod_sniffer_service.go so that the error is propagated back and trigger the cleanup as expected.

    Note: Some code formatting changes due to the auto gofmt on my side.

    opened by namco1992 7
  • Adding support for CRI-O and flexibility for more.

    Resolves #36 Begins addressing #65 Opens door for #74 (I think)

    This PR includes work to allow ksniff to work in an environment using CRI-O (e.g. OpenShift 4.x) but may open the door for supporting other container runtimes (by implementing ContainerRuntimeBridge).

    I know this wasn't on the roadmap as you've mentioned in a few bugs @eldadru but I still hope this is considered. If not, I'm happy to maintain a fork for CRI-O users!

    opened by bostrt 7
  • [bugfix] Runtimes must use pointer receivers to be modifiable

    Hi! We noticed that the privileged pods were leaving behind docker containers on our hosts. E.g. docker ps would show something like this:

    550b70024582        maintained/tcpdump                                                          "/usr/sbin/tcpdump -…"   4 minutes ago        Up 4 minutes                            ksniff-container-CuGlMCMT
    9f8e9578143a        maintained/tcpdump                                                          "/usr/sbin/tcpdump -…"   7 days ago           Up 7 days                               ksniff-container-tgNktuLk
    8fd1c0b61a6d        maintained/tcpdump                                                          "/usr/sbin/tcpdump -…"   2 weeks ago          Up 2 weeks                              ksniff-container-JpQDcWWx
    

    Looking more fully at the logs from the kubectl sniff invocation, we can see the problem:

    INFO[0001] waiting for pod successful startup
    INFO[0011] pod: 'ksniff-rnbws' created successfully on node: 'XXXXXXXXXX.eu-west-1.compute.internal'
    INFO[0011] spawning wireshark!
    INFO[0011] starting remote sniffing using privileged pod
    INFO[0011] executing command: '[docker --host unix:///host/var/run/docker.sock run --rm --name=ksniff-container-iqDIKJFD --net=container:4e49ced6ccf34a30b4bd3b13706268c7f434de28716f097324fe1119da137a59 maintained/tcpdump -i any -U -w - ]' on container: 'ksniff-privileged', pod: 'ksniff-rnbws', namespace: 'demos'
    INFO[0054] starting sniffer cleanup
    INFO[0054] removing privileged container: ''
    INFO[0054] executing command: '[docker rm -f ]' on container: 'ksniff-privileged', pod: 'ksniff-rnbws', namespace: 'demos'
    INFO[0054] command: '[docker rm -f ]' executing successfully exitCode: '1', stdErr :'Container name cannot be empty
    '
    INFO[0054] privileged container: '' removed successfully
    INFO[0054] removing pod: 'ksniff-rnbws'
    INFO[0054] removing privileged pod: 'ksniff-rnbws'
    INFO[0054] privileged pod: 'ksniff-rnbws' removed
    INFO[0054] pod: 'ksniff-rnbws' removed successfully
    INFO[0054] sniffer cleanup completed successfully
    

    Most relevant is this line: INFO[0054] command: '[docker rm -f ]' executing successfully exitCode: '1', stdErr :'Container name cannot be empty ' -- there is no container name being removed.

    Digging into the code a bit, we found that the receivers for the DockerBridge type are value receivers, meaning that the assignment d.tcpdumpContainerName = "ksniff-container-" + utils.GenerateRandomString(8) made when building the tcpdump command is never propagated back to the caller's object, only saved in a local copy. (see https://golang.org/doc/effective_go.html#pointers_vs_values)
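The receiver semantics described above can be reproduced in a few lines of Go; the bridge type here is a simplified stand-in for ksniff's DockerBridge, not its actual definition:

```go
package main

import "fmt"

// bridge is a simplified stand-in for ksniff's DockerBridge.
type bridge struct {
	tcpdumpContainerName string
}

// setNameByValue has a value receiver: it operates on a copy of the
// struct, so the assignment is lost when the method returns.
func (b bridge) setNameByValue(name string) {
	b.tcpdumpContainerName = name
}

// setNameByPointer has a pointer receiver: it operates on the caller's
// struct, so the assignment persists.
func (b *bridge) setNameByPointer(name string) {
	b.tcpdumpContainerName = name
}

func main() {
	b := bridge{}
	b.setNameByValue("ksniff-container-AAAA")
	fmt.Printf("after value receiver:   %q\n", b.tcpdumpContainerName) // still ""
	b.setNameByPointer("ksniff-container-AAAA")
	fmt.Printf("after pointer receiver: %q\n", b.tcpdumpContainerName)
}
```

This is exactly why the cleanup step saw an empty container name: the name was written into a throwaway copy of the bridge.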

    This PR changes the receiver type for both DockerBridge and CrioBridge to use pointer receivers (even though this is not currently needed for the CrioBridge). I've added a test for the DockerBridge that shows the problem.

    After the change, we see this:

    INFO[0011] executing command: '[docker --host unix:///host/var/run/docker.sock rm -f ksniff-container-DMSaHhmy]' on container: 'ksniff-privileged', pod: 'ksniff-fv9bz', namespace: 'demos'
    INFO[0012] command: '[docker --host unix:///host/var/run/docker.sock rm -f ksniff-container-DMSaHhmy]' executing successfully exitCode: '0', stdErr :''
    
    opened by mtl-wgtwo 6
  • Minimum required RBAC for user to successfully sniff

    I am wondering if you've done any tuning to figure out what the minimum required RBAC permissions for a user would need to be to get a successful sniff.

    opened by RevREB 6
  • Update Krew Index with v1.6.1

    Hey @eldadru, thanks for your nice work here :)

    Would it be possible to update the version of the plugin in the Krew Index so https://github.com/eldadru/ksniff/issues/114 is mitigated?

    opened by dntosas 5
  • Annotation fork ksniff

    Hi,

    I want to troubleshoot a Consul service mesh that is configured to inject a sidecar by default. The ksniff pod can't even start due to the Consul injection. Consul has the ability to ignore specific pods via annotations.

    Is it possible to define annotations on the pod created by ksniff?

    opened by andel7 0
  • for windows pods it's failing

    Events:
      Type     Reason       Age                 From     Message
      Warning  FailedMount  59s                 kubelet  Unable to attach or mount volumes: unmounted volumes=[container-socket], unattached volumes=[container-socket host default-token-zrwjq]: timed out waiting for the condition
      Warning  FailedMount  54s (x9 over 3m1s)  kubelet  MountVolume.SetUp failed for volume "container-socket" : hostPath type check failed: /var/run/docker.sock is not a socket file

    Please help to achieve this for Windows pods @eldadru.

    opened by gudipudipradeep 0
  • WTAP_ENCAP = 0

    Environment

    I'm running a 2-node k3s cluster. Both nodes run on k3os. I installed krew and then ksniff, as described here:

    • https://krew.sigs.k8s.io/docs/user-guide/setup/install/
    • https://github.com/eldadru/ksniff#installation

    Client OS is Kubuntu 18.04. WireShark version is 2.6.10 (Git v2.6.10 packaged as 2.6.10-1~ubuntu18.04.0). kubectl is installed from a snap.

    $ kubectl version
    Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-29T02:41:56Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7+k3s1", GitCommit:"aa768cbdabdb44c95c5c1d9562ea7f5ded073bc0", GitTreeState:"clean", BuildDate:"2021-05-20T01:07:13Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
    WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1
    

    Repro

    I ran the following command:

    kubectl sniff --namespace my-grafana my-grafana-6cc8b4687-tg4hk -p --socket /run/k3s/containerd/containerd.sock --container grafana
    

    (-p was necessary because the Grafana container isn't privileged, while the --socket was necessary because k3s has its containerd.sock elsewhere)

    This command correctly started up Wireshark and the packets are flowing in. However, none of them are dissected correctly; they show up as "WTAP_ENCAP = 0" in the Info column. If I cause some unencrypted traffic, I can read it in the lower pane.

    If I try to save the captured packets to a file, I get the following error:

    Frame 1 has a network type that can't be saved in a "HP-UX nettl trace" file.
    

    Wireshark doesn't allow me to save as any file type other than "HP-UX nettl trace".

    I also tried capturing to a file:

    kubectl sniff --namespace my-grafana my-grafana-6cc8b4687-tg4hk -p --socket /run/k3s/containerd/containerd.sock --container grafana -o /tmp/foo.pcap
    

    Opening that file in Wireshark yields the following error:

    The file "foo.pcap" contains record data that Wireshark doesn't support.
    (pcap: network type 276 unknown or unsupported)
    
    opened by TomyLobo 2
  • Failed to execute error code 126

    I was running ksniff on a remote headless k8s cluster, with command k sniff af1-0 -n plmn-mnc99-mcc208 -o af.pcap, however I got the following error:

    INFO[0000] using tcpdump path at: '/home/unknown/.krew/store/sniff/v1.6.1/static-tcpdump' 
    INFO[0000] no container specified, taking first container we found in pod. 
    INFO[0000] selected container: 'af1'                   
    INFO[0000] sniffing method: upload static tcpdump       
    INFO[0000] sniffing on pod: 'af1-0' [namespace: 'plmn-mnc99-mcc208', container: 'af1', filter: '', interface: 'any'] 
    INFO[0000] uploading static tcpdump binary from: '/home/unknown/.krew/store/sniff/v1.6.1/static-tcpdump' to: '/tmp/static-tcpdump' 
    INFO[0000] uploading file: '/home/unknown/.krew/store/sniff/v1.6.1/static-tcpdump' to '/tmp/static-tcpdump' on container: 'af1' 
    INFO[0000] executing command: '[/bin/sh -c test -f /tmp/static-tcpdump]' on container: 'af1', pod: 'af1-0', namespace: 'plmn-mnc99-mcc208' 
    INFO[0000] command: '[/bin/sh -c test -f /tmp/static-tcpdump]' executing successfully exitCode: '126', stdErr :'' 
    INFO[0000] file not found on: '/tmp/static-tcpdump', starting to upload 
    INFO[0000] tcpdump uploaded successfully                
    INFO[0000] output file option specified, storing output in: 'af.pcap' 
    INFO[0000] start sniffing on remote container           
    INFO[0000] executing command: '[/tmp/static-tcpdump -i any -U -w - ]' on container: 'af1', pod: 'af1-0', namespace: 'plmn-mnc99-mcc208' 
    INFO[0000] command: '[/tmp/static-tcpdump -i any -U -w - ]' executing successfully exitCode: '126', stdErr :'' 
    INFO[0000] starting sniffer cleanup                     
    INFO[0000] sniffer cleanup completed successfully       
    Error: executing sniffer failed, exit code: '126'
    

    Any idea how to fix this?

    opened by zyddnys 0
  • Failed to deploy Ksniff on AKS - Failed volume mount

    Good afternoon, how are you?

    I would like help deploying the ksniff pod.

    I performed the installation according to the doc, but the pod was not being deployed, so I found information suggesting I run the command with the parameter: --socket /run/k3s/containerd/containerd.sock

    (screenshots attached in the issue)

    This caused the deployment process to start, however, a new error occurred, related to mounting the volume.

    MountVolume.SetUp failed for volume "container-socket" : hostPath type check failed: /run/k3s/containerd/containerd.sock is not a socket file

    Unable to attach or mount volumes: unmounted volumes=[container-socket], unattached volumes=[container-socket host default-token-7tfcb]: timed out waiting for the condition

    I'm using AKS (Azure).

    Could you help me?

    Logs.txt

    opened by alancdias7 1
Releases: v1.6.2
Owner: Eldad Rudich, Director of Engineering @stackpulse