Hubble - Network, Service & Security Observability for Kubernetes using eBPF

Overview

Hubble logo

Network, Service & Security Observability for Kubernetes

What is Hubble?

Hubble is a fully distributed networking and security observability platform for cloud native workloads. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner.

Hubble can answer questions such as:

Service dependencies & communication map:

  • What services are communicating with each other? How frequently? What does the service dependency graph look like?
  • What HTTP calls are being made? What Kafka topics does a service consume from or produce to?

Operational monitoring & alerting:

  • Is any network communication failing? Why is communication failing? Is it DNS? Is it an application or network problem? Is the communication broken on layer 4 (TCP) or layer 7 (HTTP)?
  • Which services have experienced DNS resolution problems in the last 5 minutes? Which services have experienced an interrupted TCP connection recently or have seen connections timing out? What is the rate of unanswered TCP SYN requests?

Application monitoring:

  • What is the rate of 5xx or 4xx HTTP response codes for a particular service or across all clusters?
  • What is the 95th and 99th percentile latency between HTTP requests and responses in my cluster? Which services are performing the worst? What is the latency between two services?

Security observability:

  • Which services had connections blocked due to network policy? Which services have been accessed from outside the cluster? Which services have resolved a particular DNS name? (A sketch of how questions like these map onto CLI queries follows this list.)
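
As a rough illustration, several of these questions translate directly into Hubble CLI queries. The following is a minimal sketch assuming "hubble observe -o json" emits one flow object per line with the field layout used by the jq example later on this page; flag names may vary between Hubble versions:

# rough service-communication map: count flows per source -> destination pair
# (flows without pod information, e.g. world traffic, are skipped)
hubble observe --since=5m -o json \
  | jq -r 'select(.source.pod_name and .destination.pod_name)
      | .source.namespace + "/" + .source.pod_name + " -> " + .destination.namespace + "/" + .destination.pod_name' \
  | sort | uniq -c | sort -rn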

Why Hubble?

The Linux kernel technology eBPF enables visibility into systems and applications at a granularity and efficiency that was not possible before. It does so in a completely transparent way, without requiring the application to change and without giving the application a way to hide information. By building on top of Cilium, Hubble can leverage eBPF for visibility. Because eBPF makes all visibility programmable, it allows a dynamic approach that minimizes overhead while providing deep and detailed insight where required. Hubble has been created and specifically designed to make the best use of these new eBPF powers.

Releases

Version   Release Date           Supported Cilium Version   Artifacts
v0.7      2020-10-22 (v0.7.1)    Cilium 1.9, Cilium 1.8     GitHub Release
v0.6      2020-05-29 (v0.6.1)    Cilium 1.8                 GitHub Release
v0.5      2020-07-28 (v0.5.2)    Cilium 1.7                 GitHub Release

Component Stability

The Hubble project consists of several components (see the Architecture section).

While the core Hubble components have been running in production in multiple environments, new components continue to emerge as the project grows and expands in scope.

Some components, due to their relatively young age, are still considered beta and should be used with caution in critical production workloads.

Component        Area       State
Hubble CLI       Core       Stable
Hubble Server    Core       Stable
Hubble Metrics   Core       Stable
Hubble Relay     Multinode  Stable
Hubble UI        UI         Beta

Architecture

Hubble Architecture

Getting Started

Features

Service Dependency Graph

Troubleshooting microservices application connectivity is a challenging task. Simply looking at "kubectl get pods" does not reveal the dependencies between services, external APIs, or databases.

Hubble enables zero-effort automatic discovery of the service dependency graph for Kubernetes clusters at L3/L4 and even L7, allowing user-friendly visualization and filtering of those dataflows as a Service Map.

See Hubble Service Map Tutorial for more examples.
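
To explore the service map interactively, the Hubble UI can be exposed with a simple port-forward. A minimal sketch, assuming the UI is deployed as the "hubble-ui" service in the kube-system namespace and serves on port 80 (both may differ per installation):

# expose the Hubble UI locally (namespace and ports are assumptions)
kubectl port-forward -n kube-system svc/hubble-ui 12000:80
# then open http://localhost:12000 in a browser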

Service Map

Metrics & Monitoring

The metrics and monitoring functionality provides an overview of the state of your systems and allows you to recognize patterns indicating failure and other scenarios that require action. The following is a short list of example metrics; for a more detailed list of examples, see the Metrics Documentation.
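
As a sketch of how these metrics are typically enabled, the Helm invocation below mirrors the metrics.enabled values quoted verbatim in the installation reports later on this page; the chart name, namespace, and available metrics depend on the Hubble/Cilium version in use:

# render manifests with DNS, drop, TCP, flow, port-distribution, ICMP,
# and HTTP metrics enabled
helm template hubble \
    --namespace kube-system \
    --set metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
    > hubble.yaml

The enabled metrics are exposed in a Prometheus-compatible format and can be scraped like any other metrics endpoint.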

Networking Behavior

Networking

Network Policy Observation

Network Policy

HTTP Request/Response Rate & Latency

HTTP

DNS Request/Response Monitoring

DNS

Flow Visibility

Flow visibility exposes flow information at the network and application protocol level. This enables visibility into individual TCP connections, DNS queries, HTTP requests, Kafka communication, and much more.
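
A minimal way to watch these flows is the hubble observe CLI; the flags below all appear elsewhere on this page, and the namespace is illustrative:

# follow live flows for one namespace in the default compact output
hubble observe --namespace starwars --follow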

DNS Resolution

Identifying pods which have received a DNS response indicating failure:

# count pods by number of DNS responses with rcode 3 (NXDOMAIN)
hubble observe --since=1m -t l7 -j \
   | jq 'select(.l7.dns.rcode==3) | .destination.namespace + "/" + .destination.pod_name' \
   | sort | uniq -c | sort -r
  42 "starwars/jar-jar-binks-6f5847c97c-qmggv"

Successful query & response:

starwars/x-wing-bd86d75c5-njv8k            kube-system/coredns-5c98db65d4-twwdg      DNS Query deathstar.starwars.svc.cluster.local. A
kube-system/coredns-5c98db65d4-twwdg       starwars/x-wing-bd86d75c5-njv8k           DNS Answer "10.110.126.213" TTL: 3 (Query deathstar.starwars.svc.cluster.local. A)

Non-existent domain:

starwars/jar-jar-binks-789c4b695d-ltrzm    kube-system/coredns-5c98db65d4-f4m8n      DNS Query unknown-galaxy.svc.cluster.local. A
starwars/jar-jar-binks-789c4b695d-ltrzm    kube-system/coredns-5c98db65d4-f4m8n      DNS Query unknown-galaxy.svc.cluster.local. AAAA
kube-system/coredns-5c98db65d4-twwdg       starwars/jar-jar-binks-789c4b695d-ltrzm   DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Query unknown-galaxy.starwars.svc.cluster.local. A)
kube-system/coredns-5c98db65d4-twwdg       starwars/jar-jar-binks-789c4b695d-ltrzm   DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Query unknown-galaxy.starwars.svc.cluster.local. AAAA)

HTTP Protocol

Successful request & response with latency information:

starwars/x-wing-bd86d75c5-njv8k:53410      starwars/deathstar-695d8f7ddc-lvj84:80    HTTP/1.1 GET http://deathstar/
starwars/deathstar-695d8f7ddc-lvj84:80     starwars/x-wing-bd86d75c5-njv8k:53410     HTTP/1.1 200 1ms (GET http://deathstar/)
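
A hedged sketch of a query that surfaces such request/response pairs; "--since" and "-t l7" appear in the DNS example above, while "--protocol http" is an assumed filter flag that may differ by CLI version:

# show recent L7 flows, restricted to HTTP (--protocol is assumed)
hubble observe --since=1m -t l7 --protocol http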

TCP/UDP Packets

Successful TCP connection:

starwars/x-wing-bd86d75c5-njv8k:53410      starwars/deathstar-695d8f7ddc-lvj84:80    TCP Flags: SYN
deathstar.starwars.svc.cluster.local:80    starwars/x-wing-bd86d75c5-njv8k:53410     TCP Flags: SYN, ACK
starwars/x-wing-bd86d75c5-njv8k:53410      starwars/deathstar-695d8f7ddc-lvj84:80    TCP Flags: ACK, FIN
deathstar.starwars.svc.cluster.local:80    starwars/x-wing-bd86d75c5-njv8k:53410     TCP Flags: ACK, FIN

Connection timeout:

starwars/r2d2-6694d57947-xwhtz:60948   deathstar.starwars.svc.cluster.local:8080     TCP Flags: SYN
starwars/r2d2-6694d57947-xwhtz:60948   deathstar.starwars.svc.cluster.local:8080     TCP Flags: SYN
starwars/r2d2-6694d57947-xwhtz:60948   deathstar.starwars.svc.cluster.local:8080     TCP Flags: SYN
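
Repeated SYNs without a SYN-ACK reply, as above, are the signature of a connection timing out. A hedged sketch for spotting them, reusing the "--tcp-flags" filter quoted in an issue report below; the "--to-pod" selector is an assumption:

# list recent SYN packets toward one pod (--to-pod is assumed)
hubble observe --since=5m --tcp-flags SYN --to-pod starwars/deathstar-695d8f7ddc-lvj84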

Network Policy Behavior

Denied connection attempt:

starwars/enterprise-5775b56c4b-thtwl:37800   starwars/deathstar-695d8f7ddc-lvj84:80(http)   Policy denied (L3)   TCP Flags: SYN
starwars/enterprise-5775b56c4b-thtwl:37800   starwars/deathstar-695d8f7ddc-lvj84:80(http)   Policy denied (L3)   TCP Flags: SYN
starwars/enterprise-5775b56c4b-thtwl:37800   starwars/deathstar-695d8f7ddc-lvj84:80(http)   Policy denied (L3)   TCP Flags: SYN
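
A hedged sketch for isolating such policy-denied flows; "--verdict DROPPED" is an assumption based on common hubble observe usage rather than something shown elsewhere on this page:

# show only dropped flows from the last 5 minutes (--verdict is assumed)
hubble observe --since=5m --verdict DROPPED --namespace starwars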

Community

Join the Cilium Slack #hubble channel to chat with Cilium Hubble developers and other Cilium / Hubble users. This is a good place to learn about Hubble and Cilium, ask questions, and share your experiences.

Learn more about Cilium.

Authors

Hubble is an open source project licensed under the Apache License. Everybody is welcome to contribute. The project follows the Governance Rules of the Cilium project. See CONTRIBUTING for instructions on how to contribute and details of the Code of Conduct.

Issues
  • Unable to load UI. `Error: getaddrinfo EAI_AGAIN`

    When I port-forward the hubble-ui service and try to load the UI in a browser, the following happens:

    • the web page remains stuck on the "The application is loading, please wait..." page.
    • the logs of the hubble-ui pod show the following message:
    {
      "name": "frontend",
      "hostname": "hubble-ui-79b6c7c67-z4bs5",
      "pid": 19,
      "req_id": "101ee530-14a9-4580-868a-66fed7c6fd49",
      "user": "[email protected]",
      "level": 50,
      "err": {
        "message": "Can't fetch namespaces via k8s api: Error: getaddrinfo EAI_AGAIN $ENTER_AKS_CLUSTER_DOMAIN_NAME",
        "locations": [
          {
            "line": 4,
            "column": 7
          }
        ],
        "path": [
          "viewer",
          "clusters"
        ],
        "extensions": {
          "code": "INTERNAL_SERVER_ERROR"
        }
      },
      "msg": "",
      "time": "2020-03-08T18:09:56.167Z",
      "v": 0
    }
    
    🖥 area/ui 
    opened by uipo78 37
  • Flows are not being displayed

    Hi,

    Flows are not being displayed under service maps.

    (screenshot attached: Screen Shot 2019-12-18 at 9.15.51 AM)

    Environment:

    • Fedora 31
    • Kubernetes 1.15.2 (on-prem)
    • Hubble: latest
    • Cilium: latest

    Thanks

    🐛 kind/bug area/cilium ðŸ–¥ area/ui 
    opened by alexgaganashvili 17
  • Install Hubble from installation guide failing

    Hi, when trying to follow the instructions at https://github.com/cilium/hubble/blob/master/Documentation/installation.md, once you reach the Hubble step and run this command:

    helm template hubble \
        --namespace kube-system \
        --set metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
        > hubble.yaml
    

    you will fail on :

    Error: rendering template failed: runtime error: invalid memory address or nil pointer dereference
    

    I also tried to install without any metrics and it still fails; it looks like the template here is not working. Can you please update the guidelines if anything else is expected?

    opened by amitrintzler 15
  • Flows don't show up on GKE

    Flows and arrows are not visible in Hubble UI. Yet flows for "hubble" namespace are visible. Running in GKE.

    Running procedure:

    helm template cilium \
      --namespace cilium \
      --set global.nodeinit.enabled=true \
      --set nodeinit.reconfigureKubelet=true \
      --set nodeinit.removeCbrBridge=true \
      --set global.cni.binPath=/home/kubernetes/bin \
      --set global.tag=v1.7.0-rc1 \
      > cilium.yaml
    
    helm template hubble \
        --namespace hubble \
        --set metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
        --set ui.enabled=true \
        > hubble.yaml
    

    I can confirm that flows are visible in "cilium monitor", "hubble observe", and "kubectl get cep".

    🐛 kind/bug 
    opened by rubenhak 10
  • network: unable to connect to Cilium daemon

    I would like to ask how to clean up the Cilium environment.

    I follow the official documentation

    # install
    kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.7.0/install/kubernetes/quick-install.yaml
    
    # delete
    kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.7.0/install/kubernetes/quick-install.yaml
    

    After that, I found that my pods can no longer be created properly. I have already deleted the Cilium CRDs. Do I need to delete anything else?

    Error message

    # kubectl get pod  | grep httpd
    httpd-596db6fdc4-4r22k                                 0/1     ContainerCreating   0          15m
    httpd-596db6fdc4-5xldk                                 0/1     ContainerCreating   0          15m
    
    # kubectl describe pod
    Events:
      Type     Reason                  Age    From                             Message
      ----     ------                  ----   ----                             -------
      Normal   Scheduled               10m    default-scheduler                Successfully assigned default/httpd-596db6fdc4-5xldk to node001
      Warning  FailedCreatePodSandBox  9m17s  kubelet, node001  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ffcd455f1ab5483a17f87cdad35beaea980e61317dbe35b788cac7953e72c95f" network for pod "httpd-596db6fdc4-5xldk": NetworkPlugin cni failed to set up pod "httpd-596db6fdc4-5xldk_default" network: unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get http:///var/run/cilium/cilium.sock/v1/config: dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
    Is the agent running?
    
    opened by llussy 9
  • Starting in Rancher, Can't start Cilium-Agent

    I used the instructions listed in Installation.md to try to start Hubble (but got stuck on Cilium); the Cilium deployment gives the following error in the container logs:

    level=error msg="Error while initializing daemon" error="exit status 2" subsys=daemon
    level=fatal msg="Error while creating daemon" error="exit status 2" subsys=daemon
    

    There is a warning message in the logs just before that:

    level=error msg="Command execution failed" cmd="[/var/lib/cilium/bpf/init.sh /var/lib/cilium/bpf /var/run/cilium/state 10.42.0.113 <nil> vxlan    1500 false false  false false /var/run/cilium/cgroupv2 /run/cilium/bpffs ]" error="exit status 2" subsys=datapath-loader
    level=warning msg="+ set -o pipefail" subsys=datapath-loader
    level=warning msg="++ command -v cilium" subsys=datapath-loader
    level=warning msg="+ [[ ! -n /usr/bin/cilium ]]" subsys=datapath-loader
    level=warning msg="+ rm /var/run/cilium/state/encap.state" subsys=datapath-loader
    level=warning msg="+ true" subsys=datapath-loader
    ... [snipped for brevity]
    

    This is while running a Rancher 2.3.1 + Kubernetes 1.15.6 cluster with a single master. I ran all the commands (with kubectl) outside of Rancher as if it were a normal k8s cluster.

    The Rancher UI shows this message: CrashLoopBackOff: Back-off 5m0s restarting failed container=cilium-agent pod=cilium-hjw65_kube-system(e721fdfd-966f-49a7-b996-c3f3e84f275c)

    Note: I'm somewhat new to Kubernetes in general, so if I'm missing something I apologize in advance.

    📬 kind/question area/cilium 
    opened by Vacant0mens 8
  • cmd/node: Refactor & Test output methods

    This PR aims to achieve the following:

    • [x] Refactor, where applicable, to test output functions.
    • [x] Add table driven inputs for invoking certain output functionality.

    Signed-off-by: Simarpreet Singh [email protected]

    🤖 area/CI release-note/misc ready-to-merge 
    opened by simar7 8
  • Remove contrib/scripts/release.sh

    Rename the current release make target to local-release, and update the release target to generate release artifacts from inside Docker.

    Signed-off-by: Michi Mutsuzaki [email protected]

    opened by michi-covalent 8
  • Hubble UI cannot render due to Error: unable to get issuer certificate

    (screenshot attached: Screen Shot 2020-02-21 at 10.04.54 AM)

    We cannot render the hubble-ui due to this below error message:

    "message":"Can't fetch namespaces via k8s api: Error: unable to get issuer certificate","locations":[{"line":4,"column":7}],"path":["viewer","clusters"],"extensions":{"code":"INTERNAL_SERVER_ERROR"}}
    

    { name: 'inCluster', caFile: '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt', server: 'https://10.110.121.43:443', skipTLSVerify: false }

    🖥 area/ui 
    opened by CH-anhngo 8
  • OpenTelemetry Support

    Dear Hubble Community,

    We are currently migrating to Cilium as our networking solution and are very excited to use Hubble for observability.

    However, we miss one thing to be happy – OpenTelemetry (OpenTracing) support. I can see it was mentioned in the roadmap around Cilium 1.0 release:

    Under "The Roadmap Ahead": Integration with OpenTracing, Jaeger, and Zipkin: The minimal overhead of BPF makes it the ideal technology to provide tracing and telemetry functionality without imposing additional system load.

    However, I haven't found any code or issues connected to it. I thought that Cilium Go Extensions might be the right place to implement it. Then I checked Hubble, and it looks like all the required data is in place. I can potentially contribute to it if you give me some guidance on whether Hubble Relay is the right place for it.

    🌟 kind/feature 
    opened by trnl 7
  • Demo instance

    I just found out about this beautiful project and would love to fiddle with it a bit before spinning up Cilium and Hubble. A public demo instance that allows interested users to check out the UI would be a great addition to Hubble.

    opened by tuxiqae 0
  • `hubble observe --json` does not print json

    The command hubble observe --json --last 1000 --follow --namespace default does not print JSON. It does print a warning that the --json flag has been deprecated, but the behavior is not the expected one. If a flag is deprecated, its behavior should be kept until the flag is removed.

    Flag --json has been deprecated, use '--output json' instead
    Aug  7 23:29:01.675: default/client-f4dd54c78-k8bz8:53030 -> default/server-778bd884d6-5qrbj:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
    Aug  7 23:29:01.676: default/client-f4dd54c78-k8bz8:53030 <- default/server-778bd884d6-5qrbj:80 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
    
    hubble --version
    hubble v0.8.0
    
    🐛 kind/bug 
    opened by aanm 6
  • Connections made to the Kubernetes API show up in hubble-ui as 'world'

    Sorry if this has been answered elsewhere, or if it actually belongs to the hubble-ui project.

    As mentioned in the title, when running on EKS in AWS, connections to the Kubernetes API show up in hubble-ui as 'world'.

    Is there any way to change this behaviour to make it easier to identify this as Kubernetes API traffic?

    🖥 area/ui 
    opened by netjordan 1
  • hubble status reports Max Flows 0/0 and Unavailable Nodes

    Trying to enable hubble ui in a cluster where cilium was installed with helm:

    cilium hubble enable --ui --create-ca --relay-version v1.10.3
    

    (The --relay-version is a workaround for https://github.com/cilium/cilium-cli/issues/456)

    After port-forward, hubble status reports Max Flows 0/0 and all Nodes Unavailable even though running cilium status in each cilium pod shows Max Flows 4095/4095.

    No known workaround.

    Is this another case of cilium-cli being incompatible with a helm-installed Cilium? We wouldn't have to blaze that trail if cilium-cli were able to install Cilium chained to eks-vpc-cni.

    📊 kind/community-report â‰ needs/triage 
    opened by joebowbeerxealth 23
  • UI: export flows as json

    Add the ability to export fetched flows as JSON to a file and download it.

    🌟 kind/feature ðŸ–¥ area/ui 
    opened by geakstr 0
  • Hubble observe filter "--tcp-flags ACK" also logs events with "TCP Flags: ACK, FIN" or "TCP Flags: PSH,ACK"

    Dear Maintainers,

    Hubble version 0.8

    I need to log only "TCP Flags: ACK"

    I'm using: hubble observe -f --server hubble-relay:80 -o json --tcp-flags ACK

    It also produces output with "TCP Flags: ACK, PSH" or "TCP Flags: ACK, FIN", for example:

    k8s:io.cilium.k8s.policy.serviceaccount=default, k8s:io.kubernetes.pod.namespace=logging, k8s:release=elasticsearch-logging, k8s:statefulset.kubernetes.io/pod-name=elasticsearch-logging-master-0	 - 	TO_OVERLAY	cilium-test-wg-main-worker1	TCP Flags: ACK, PSH
    k8s:io.cilium.k8s.namespace.labels.field.cattle.io/projectId=p-dtt7q, k8s:io.cilium.k8s.policy.cluster=default, k8s:io.cilium.k8s.policy.serviceaccount=coredns, k8s:io.kubernetes.pod.namespace=kube-system, k8s:k8s-app=kube-dns	reserved:host	INGRESS	TO_STACK	cilium-test-wg-main-worker1	TCP Flags: ACK, FIN	true
    

    Setting the filter to hubble observe -f --server hubble-relay:80 -o json --tcp-flags ACK --not --tcp-flags SYN,PSH,FIN does not help either.

    So can you please explain:

    1. Why are multiple flags logged together?
    2. Is it possible to filter only for flows with ACK as the sole flag?

    Thanks!

    opened by voatsap 2
  • Applying http-visibility breaks OAuth requests

    Hello, after applying http-visibility

    
    apiVersion: cilium.io/v2
    kind: CiliumNetworkPolicy
    metadata:
      name: http-visibility
    spec:
      endpointSelector:
        matchLabels: {}
      ingress:
        - fromEntities:
            - all
          toPorts:
            - ports:
                - port: "80"
                  protocol: TCP
              rules:
                http:
                  - {}
        - fromEntities:
            - all
    

    as described in the docs, my services return 401 responses after authorization.

    How can I check that Envoy doesn't trim Authorization headers or cookies?

    NOTE: If I remove the policy everything works as expected.

    Any help is appreciated.

    🐛 kind/bug ðŸ“Š kind/community-report â‰ needs/triage 
    opened by pandarun 2
  • Surface the event type in `observe -o compact`

    In the following hubble observe output, it is difficult to tell the different event types apart. For example, the third and fourth lines both look like packet drops when there is actually only one packet being dropped, the first of the two being the policy-verdict for that drop.

    $ cat drops.json | ./hubble observe -o compact
    Jun 10 12:50:15.555: default/cronjob-1623329400-8mg2z:43006 -> default/deployment-85c67465d6-tfgcp:8087 L3-Only FORWARDED (TCP Flags: SYN)
    Jun 10 12:50:15.555: default/cronjob-1623329400-8mg2z:43006 -> default/deployment-85c67465d6-tfgcp:8087 to-stack FORWARDED (TCP Flags: SYN)
    Jun 10 12:50:15.556: 10.8.14.163:43006 <> default/deployment-85c67465d6-tfgcp:8087 Policy denied DROPPED (TCP Flags: SYN)
    Jun 10 12:50:15.556: 10.8.14.163:43006 <> default/deployment-85c67465d6-tfgcp:8087 Policy denied DROPPED (TCP Flags: SYN)
    Jun 10 12:50:16.580: default/cronjob-1623329400-8mg2z:43006 -> default/deployment-85c67465d6-tfgcp:8087 L3-Only FORWARDED (TCP Flags: SYN)
    Jun 10 12:50:16.580: default/cronjob-1623329400-8mg2z:43006 -> default/deployment-85c67465d6-tfgcp:8087 to-endpoint FORWARDED (TCP Flags: SYN)
    
    Full `jsonpb` input:
    {"flow":{"time":"2021-06-10T12:50:15.555441618Z","verdict":"FORWARDED","ethernet":{"source":"22:da:eb:b3:0e:cd","destination":"5a:d1:6b:58:d5:0b"},"IP":{"source":"10.8.14.163","destination":"10.8.15.253","ipVersion":"IPv4"},"l4":{"TCP":{"source_port":43006,"destination_port":8087,"flags":{"SYN":true}}},"source":{"ID":2607,"identity":124484,"namespace":"default","labels":["k8s:app.kubernetes.io/component=job","k8s:app.kubernetes.io/name=cronjob","k8s:controller-uid=ff9eb4f5-b227-4f06-8c9e-6c322240f69d","k8s:io.cilium.k8s.policy.cluster=dev","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.kubernetes.pod.namespace=default","k8s:job-name=cronjob-1623329400"],"pod_name":"cronjob-1623329400-8mg2z"},"destination":{"identity":107099,"namespace":"default","labels":["k8s:app.kubernetes.io/name=deployment","k8s:io.cilium.k8s.policy.cluster=dev","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.kubernetes.pod.namespace=default"],"pod_name":"deployment-85c67465d6-tfgcp"},"Type":"L3_L4","node_name":"gke-dev-dev-default-c32c1cdd-wd59","event_type":{"type":5},"traffic_direction":"EGRESS","policy_match_type":1,"is_reply":false,"Summary":"TCP Flags: SYN"},"node_name":"gke-dev-dev-default-c32c1cdd-wd59","time":"2021-06-10T12:50:15.555441618Z"}
    {"flow":{"time":"2021-06-10T12:50:15.555447697Z","verdict":"FORWARDED","ethernet":{"source":"22:da:eb:b3:0e:cd","destination":"5a:d1:6b:58:d5:0b"},"IP":{"source":"10.8.14.163","destination":"10.8.15.253","ipVersion":"IPv4"},"l4":{"TCP":{"source_port":43006,"destination_port":8087,"flags":{"SYN":true}}},"source":{"ID":2607,"identity":124484,"namespace":"default","labels":["k8s:app.kubernetes.io/component=job","k8s:app.kubernetes.io/name=cronjob","k8s:controller-uid=ff9eb4f5-b227-4f06-8c9e-6c322240f69d","k8s:io.cilium.k8s.policy.cluster=dev","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.kubernetes.pod.namespace=default","k8s:job-name=cronjob-1623329400"],"pod_name":"cronjob-1623329400-8mg2z"},"destination":{"identity":107099,"namespace":"default","labels":["k8s:app.kubernetes.io/name=deployment","k8s:io.cilium.k8s.policy.cluster=dev","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.kubernetes.pod.namespace=default"],"pod_name":"deployment-85c67465d6-tfgcp"},"Type":"L3_L4","node_name":"gke-dev-dev-default-c32c1cdd-wd59","event_type":{"type":4,"sub_type":3},"traffic_direction":"EGRESS","trace_observation_point":"TO_STACK","is_reply":false,"Summary":"TCP Flags: SYN"},"node_name":"gke-dev-dev-default-c32c1cdd-wd59","time":"2021-06-10T12:50:15.555447697Z"}
    {"flow":{"time":"2021-06-10T12:50:15.556273526Z","verdict":"DROPPED","drop_reason":133,"ethernet":{"source":"92:4a:b7:4b:23:89","destination":"46:59:1c:16:4c:16"},"IP":{"source":"10.8.14.163","destination":"10.8.15.253","ipVersion":"IPv4"},"l4":{"TCP":{"source_port":43006,"destination_port":8087,"flags":{"SYN":true}}},"source":{"identity":2,"labels":["reserved:world"]},"destination":{"ID":2140,"identity":107099,"namespace":"default","labels":["k8s:app.kubernetes.io/name=deployment","k8s:io.cilium.k8s.policy.cluster=dev","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.kubernetes.pod.namespace=default"],"pod_name":"deployment-85c67465d6-tfgcp"},"Type":"L3_L4","node_name":"gke-dev-dev-default-d58bf990-0cwt","event_type":{"type":5},"traffic_direction":"INGRESS","drop_reason_desc":"POLICY_DENIED","Summary":"TCP Flags: SYN"},"node_name":"gke-dev-dev-default-d58bf990-0cwt","time":"2021-06-10T12:50:15.556273526Z"}
    {"flow":{"time":"2021-06-10T12:50:15.556289810Z","verdict":"DROPPED","drop_reason":133,"ethernet":{"source":"92:4a:b7:4b:23:89","destination":"46:59:1c:16:4c:16"},"IP":{"source":"10.8.14.163","destination":"10.8.15.253","ipVersion":"IPv4"},"l4":{"TCP":{"source_port":43006,"destination_port":8087,"flags":{"SYN":true}}},"source":{"identity":2,"labels":["reserved:world"]},"destination":{"ID":2140,"identity":107099,"namespace":"default","labels":["k8s:app.kubernetes.io/name=deployment","k8s:io.cilium.k8s.policy.cluster=dev","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.kubernetes.pod.namespace=default"],"pod_name":"deployment-85c67465d6-tfgcp"},"Type":"L3_L4","node_name":"gke-dev-dev-default-d58bf990-0cwt","event_type":{"type":1,"sub_type":133},"traffic_direction":"INGRESS","drop_reason_desc":"POLICY_DENIED","Summary":"TCP Flags: SYN"},"node_name":"gke-dev-dev-default-d58bf990-0cwt","time":"2021-06-10T12:50:15.556289810Z"}
    {"flow":{"time":"2021-06-10T12:50:16.580733369Z","verdict":"FORWARDED","ethernet":{"source":"92:4a:b7:4b:23:89","destination":"46:59:1c:16:4c:16"},"IP":{"source":"10.8.14.163","destination":"10.8.15.253","ipVersion":"IPv4"},"l4":{"TCP":{"source_port":43006,"destination_port":8087,"flags":{"SYN":true}}},"source":{"identity":124484,"namespace":"default","labels":["k8s:app.kubernetes.io/component=job","k8s:app.kubernetes.io/name=cronjob","k8s:controller-uid=ff9eb4f5-b227-4f06-8c9e-6c322240f69d","k8s:io.cilium.k8s.policy.cluster=dev","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.kubernetes.pod.namespace=default","k8s:job-name=cronjob-1623329400"],"pod_name":"cronjob-1623329400-8mg2z"},"destination":{"ID":2140,"identity":107099,"namespace":"default","labels":["k8s:app.kubernetes.io/name=deployment","k8s:io.cilium.k8s.policy.cluster=dev","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.kubernetes.pod.namespace=default"],"pod_name":"deployment-85c67465d6-tfgcp"},"Type":"L3_L4","node_name":"gke-dev-dev-default-d58bf990-0cwt","event_type":{"type":5},"traffic_direction":"INGRESS","policy_match_type":1,"is_reply":false,"Summary":"TCP Flags: SYN"},"node_name":"gke-dev-dev-default-d58bf990-0cwt","time":"2021-06-10T12:50:16.580733369Z"}
    {"flow":{"time":"2021-06-10T12:50:16.580765230Z","verdict":"FORWARDED","ethernet":{"source":"92:4a:b7:4b:23:89","destination":"46:59:1c:16:4c:16"},"IP":{"source":"10.8.14.163","destination":"10.8.15.253","ipVersion":"IPv4"},"l4":{"TCP":{"source_port":43006,"destination_port":8087,"flags":{"SYN":true}}},"source":{"identity":124484,"namespace":"default","labels":["k8s:app.kubernetes.io/component=job","k8s:app.kubernetes.io/name=cronjob","k8s:controller-uid=ff9eb4f5-b227-4f06-8c9e-6c322240f69d","k8s:io.cilium.k8s.policy.cluster=dev","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.kubernetes.pod.namespace=default","k8s:job-name=cronjob-1623329400"],"pod_name":"cronjob-1623329400-8mg2z"},"destination":{"ID":2140,"identity":107099,"namespace":"default","labels":["k8s:app.kubernetes.io/name=deployment","k8s:io.cilium.k8s.policy.cluster=dev","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.kubernetes.pod.namespace=default"],"pod_name":"deployment-85c67465d6-tfgcp"},"Type":"L3_L4","node_name":"gke-dev-dev-default-d58bf990-0cwt","event_type":{"type":4},"traffic_direction":"INGRESS","trace_observation_point":"TO_ENDPOINT","is_reply":false,"Summary":"TCP Flags: SYN"},"node_name":"gke-dev-dev-default-d58bf990-0cwt","time":"2021-06-10T12:50:16.580765230Z"}
    

    This could maybe be improved by printing the event type as a string in the compact output, as below.

    Jun 10 12:50:15.555: default/cronjob-1623329400-8mg2z:43006 -> default/deployment-85c67465d6-tfgcp:8087 policy-verdict ALLOWED (TCP Flags: SYN)
    Jun 10 12:50:15.555: default/cronjob-1623329400-8mg2z:43006 -> default/deployment-85c67465d6-tfgcp:8087 to-stack FORWARDED (TCP Flags: SYN)
    Jun 10 12:50:15.556: 10.8.14.163:43006 <> default/deployment-85c67465d6-tfgcp:8087 policy-verdict DENIED (TCP Flags: SYN)
    Jun 10 12:50:15.556: 10.8.14.163:43006 <> default/deployment-85c67465d6-tfgcp:8087 Policy denied DROPPED (TCP Flags: SYN)
    Jun 10 12:50:16.580: default/cronjob-1623329400-8mg2z:43006 -> default/deployment-85c67465d6-tfgcp:8087 policy-verdict ALLOWED (TCP Flags: SYN)
    Jun 10 12:50:16.580: default/cronjob-1623329400-8mg2z:43006 -> default/deployment-85c67465d6-tfgcp:8087 to-endpoint FORWARDED (TCP Flags: SYN)
    

    Note that another change in the above example is the policy-verdict result, which goes from FORWARDED/DROPPED to ALLOWED/DENIED/AUDITED to better reflect the event type.

    kind/enhancement 👍 good-first-issue ⌨️ area/cli
    opened by pchaigno 0
  • Add support for specifying raw filters

    The current flag-based filters have limitations when it comes to expressing complex and/or queries. We should add a flag to specify raw custom filters which may not be expressible via the current interface.

    $ hubble observe \
      --include-filter '{"source_ip":"10.0.0.168"}' \
      --include-filter '{"source_port":"69"} \
      --exclude-filter '{"destination_label":["reserved:health"]}'
    

    Where the JSON is of type FlowFilter and is appended to either the include or exclude list in the GetFlowsRequest.

    https://github.com/cilium/cilium/blob/e95a201ffa54d05d313d048d9b61f043a397c566/api/v1/flow/flow.proto#L345-L347

    I've marked this as good-first-issue, but it might need a bit of mentoring for a first-time contributor. I'm happy to mentor such a contribution.

    kind/enhancement 👍 good-first-issue ⌨️ area/cli
    opened by gandro 1
  • Add `hubble observe flows` as alias for `hubble observe`

    As we start to introduce additional types of events in Hubble besides flows, we need to distinguish between the different types of events being observed. While we want to keep `hubble observe` for compatibility reasons, we can make it consistent with the newly added debug and agent events by adding a `hubble observe flows` alias:

    https://github.com/cilium/hubble/pull/537#issuecomment-830422131

    kind/enhancement ⌨️ area/cli
    opened by gandro 0