Simple Kubernetes real-time dashboard and management.


Skooner - Kubernetes Dashboard

We are changing our name from k8dash to Skooner! Please bear with us as we update our documentation and codebase to reflect this change. If you previously installed k8dash, you will need to uninstall it from your cluster and install Skooner instead. In most cases this can be done by running: kubectl delete deployment,service k8dash

Skooner is the easiest way to manage your Kubernetes cluster. Skooner is now a sandbox project of the Cloud Native Computing Foundation!

  • Full cluster management: Namespaces, Nodes, Pods, Replica Sets, Deployments, Storage, RBAC and more
  • Blazing fast and always live: no need to refresh pages to see the latest cluster status
  • Quickly visualize cluster health at a glance: real-time charts help quickly track down poorly performing resources
  • Easy CRUD and scaling: plus inline API docs to easily understand what each field does
  • 100% responsive (runs on your phone/tablet)
  • Simple OpenID integration: no special proxies required
  • Simple installation: use the provided YAML resources to have Skooner up and running in under 1 minute (no, seriously)
  • See Skooner in action: Skooner - Kubernetes Dashboard (video)

Table of Contents

Prerequisites

(Back to Table of Contents)

Getting Started

Deploy Skooner with something like the following...

NOTE: never trust a file downloaded from the internet. Make sure to review the contents of kubernetes-skooner.yaml before running the script below.

kubectl apply -f https://raw.githubusercontent.com/skooner-k8s/skooner/master/kubernetes-skooner.yaml

To access Skooner, you must make it publicly visible. If you have an ingress server set up, you can accomplish this by adding a route like the following:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: skooner
  namespace: kube-system
spec:
  rules:
  - host: skooner.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: skooner
            port:
              number: 80

(Back to Table of Contents)

kubectl proxy

Unfortunately, kubectl proxy cannot be used to access Skooner. According to this comment, it seems that kubectl proxy strips the Authorization header when it proxies requests.

this is working as expected. "proxying" through the apiserver will not get you standard proxy behavior (preserving Authorization headers end-to-end), because the API is not being used as a standard proxy

(Back to Table of Contents)

Logging in

There are multiple options for logging into the dashboard: Service Account Token, OIDC, and NodePort.

Service Account Token

The first (and easiest) option is to create a dedicated service account. In the command line:

# Create the service account in the current namespace (we assume default)
kubectl create serviceaccount skooner-sa

# Give that service account root on the cluster
kubectl create clusterrolebinding skooner-sa --clusterrole=cluster-admin --serviceaccount=default:skooner-sa

# Find the secret that was created to hold the token for the SA
kubectl get secrets

# Show the contents of the secret to extract the token
kubectl describe secret skooner-sa-token-xxxxx

Copy the token value from the secret, and enter it into the login screen to access the dashboard.
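Note that on Kubernetes 1.24 and later, creating a service account no longer automatically creates a token Secret, so kubectl get secrets may show no skooner-sa-token-xxxxx entry. A minimal sketch of both paths, reusing the skooner-sa account from the commands above (the secret name is the placeholder from above):

```shell
# Kubernetes 1.24+: request a short-lived token for the service account
kubectl create token skooner-sa

# Pre-1.24 (or with a manually created token Secret): extract the token
# field with jsonpath; Secret data is base64-encoded, so decode it
kubectl get secret skooner-sa-token-xxxxx -o jsonpath='{.data.token}' | base64 -d
```

kubectl describe secret (as above) also works, since describe prints the token of a service-account-token Secret in decoded form.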

OIDC

Skooner makes using OpenID Connect for authentication easy. Assuming your cluster is configured to use OIDC, all you need to do is create a secret containing your credentials and apply kubernetes-skooner-oidc.yaml.

To learn more about configuring a cluster for OIDC, check out these great links

You can deploy Skooner with OIDC support using something like the following script...

NOTE: never trust a file downloaded from the internet. Make sure to review the contents of kubernetes-skooner-oidc.yaml before running the script below.

OIDC_URL=<put your endpoint url here... something like https://accounts.google.com>
OIDC_ID=<put your id here... something like blah-blah-blah.apps.googleusercontent.com>
OIDC_SECRET=<put your oidc secret here>

kubectl create secret -n kube-system generic skooner \
--from-literal=url="$OIDC_URL" \
--from-literal=id="$OIDC_ID" \
--from-literal=secret="$OIDC_SECRET"

kubectl apply -f https://raw.githubusercontent.com/skooner-k8s/skooner/master/kubernetes-skooner-oidc.yaml

Additionally, you can provide other OIDC options via these environment variables:

  • OIDC_SCOPES: the default value is openid email, but additional scopes can also be added, e.g. OIDC_SCOPES="openid email groups"
  • OIDC_METADATA: Skooner uses the excellent node-openid-client module. OIDC_METADATA takes a JSON string and passes it to the Client constructor. Docs here. For example, OIDC_METADATA='{"token_endpoint_auth_method":"client_secret_post"}'
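A malformed OIDC_METADATA string only fails at runtime inside the Client constructor, so it can be worth validating the JSON before creating the secret. A small sketch (the shell variable here is just for illustration):

```shell
# Validate the OIDC_METADATA JSON before handing it to Skooner;
# python's json.tool exits non-zero on invalid JSON
OIDC_METADATA='{"token_endpoint_auth_method":"client_secret_post"}'
printf '%s' "$OIDC_METADATA" | python3 -m json.tool
```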

NodePort

If you do not have an ingress server set up, you can use a NodePort service as configured in kubernetes-skooner-nodeport.yaml. This is ideal when running a single-node cluster, or if you want to get up and running as fast as possible.

This will map Skooner port 4654 to a randomly selected port on the running node. The assigned port can be found using:

$ kubectl get svc --namespace=kube-system

NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
skooner   NodePort   10.107.107.62   <none>        4654:32565/TCP   1m
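Rather than reading the PORT(S) column by eye, the assigned port can be queried directly. A sketch, assuming the service is named skooner in the kube-system namespace as in the output above:

```shell
# Ask the API server for the assigned node port
kubectl get svc skooner -n kube-system -o jsonpath='{.spec.ports[0].nodePort}'

# Or parse the PORT(S) column, which has the form <port>:<nodePort>/<protocol>
ports="4654:32565/TCP"
nodeport="${ports#*:}"     # drop everything up to the first ':'
nodeport="${nodeport%/*}"  # drop the trailing '/TCP'
echo "$nodeport"           # 32565
```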

   

Metrics

Skooner relies heavily on metrics-server to display real time cluster metrics. It is strongly recommended to have metrics-server installed to get the best experience from Skooner.

(Back to Table of Contents)

Development

You will need:

  • A running Kubernetes cluster
    • Installing and running minikube is an easy way to get this.
    • Once minikube is installed, you can run it with the command minikube start --driver=docker
  • Once the cluster is up and running, create some login credentials as described above

(Back to Table of Contents)

Skooner Architecture

Server

To run the server, run npm i from the /server directory to install dependencies, then npm start to run the server. The server is a simple Express.js server that is primarily responsible for proxying requests to the Kubernetes API server.

During development, the server will use whatever is configured in ~/.kube/config to connect to the desired cluster. If you are using minikube, for example, you can run kubectl config use-context minikube to get ~/.kube/config set up correctly.

Client

The client is a React application (using TypeScript) with minimal other dependencies.

To run the client, open a new terminal tab, navigate to the /client directory, and run npm i followed by npm start. This will open a browser window to your local Skooner dashboard. If everything compiles correctly, the site will load and an error message will pop up: Unhandled Rejection (Error): Api request error: Forbidden.... The error message has an 'X' in the top right-hand corner to close it. After you close it, you should see the UI where you can enter your token.

(Back to Table of Contents)

License

Apache License 2.0


(Back to Table of Contents)

Issues
  • Auth issues - call to /tokenreviews fails


    Environment: AKS (K8s version 1.12.6)

    With ingress (Nginx): the login page is loaded (GET) but any POST fails because the endpoint returns 404. Error message: Error occured attempting to login. Instead of contacting the API, the request is routed back to the web app.

    Request URL: https://something.com/apis/authentication.k8s.io/v1/tokenreviews
    Request Method: POST
    Status Code: 404
    

    Logs:

    OIDC_URL:  None
    [HPM] Proxy created: /  ->  https://something.hcp.westeurope.azmk8s.io:443
    Server started
    GET /
    GET /static/css/2.7b1d7de3.chunk.css
    GET /static/js/2.ab8f1278.chunk.js
    GET /static/css/main.a9446ed5.chunk.css
    GET /static/js/main.c1206f38.chunk.js
    GET /static/css/2.7b1d7de3.chunk.css.map
    GET /static/css/main.a9446ed5.chunk.css.map
    GET /static/js/2.ab8f1278.chunk.js.map
    GET /oidc
    GET /static/js/main.c1206f38.chunk.js.map
    GET /favicon.ico
    GET /manifest.json
    GET /
    POST /apis/authentication.k8s.io/v1/tokenreviews
    GET /
    

    The same thing happens when port-forwarded.

    Request URL: http://localhost:4654/apis/authentication.k8s.io/v1/tokenreviews
    Request Method: POST
    Status Code: 404 Not Found
    
    opened by frohikey 16
  • Feature Request:  Show ready/not ready, node type via node icons


    First great job on the dashboard -- seems a lot more lightweight and less buggy than the standard dashboard. We have a couple of minor UI requests:

    On the nodes page it would be nice if unready nodes showed up by default on top, and maybe with a red icon, instead of the text 'READY' column. Even without the icon change, it would be good to float unready nodes to the top by default (we run on bare metal, so unready nodes are a big deal for us). We have alerts, obviously, but it still seems like a logical change to the UI.

    It would also be great if on that same page it was a bit more obvious which nodes were masters. You can look at the labels, but an icon change (or replacing the READY column with a MASTER column) would be pretty nice.

    Thanks again for all your great work!

    opened by mowings 16
  • Browser user/password?


    On running the install as per the readme I get prompted for a basic auth user & password.

    This prevents me from getting to enter in the auth token

    edit: forgot to mention I was trying to access it via kubectl port-forward service/k8dash 8080:80

    opened by jdn-za 13
  • Keycloak support


    Hi,

    I'm using keycloak as an OIDC provider, does anyone succeed with k8dash ?

    I still get "invalid credentials" in k8dash, but keycloak is working fine (I use it for grafana and the legacy kubernetes dashboard).

    Just do a basic openid connect client.

    Sorry for not getting more detailed, but if anyone had this issue....

    Had also a look at the secret base64 encoding but it doesn't seem to be that.

    Thanks

    opened by menardorama 13
  • Support passing bearer token in header


    Without logging in via the UI, if I pass a Bearer token with a request it redirects to the token login screen. It would be nice if it could recognize I already have a token and use that without needing to go through the browser login flow

    opened by tsjnsn 12
  • istio crd issues


    Found that once the Istio mesh has been installed, CRDs keep being reported in k8s as an issue, when in reality they have completed.

    (screenshot: istio-crd-issue)
    opened by thpang 11
  • 'Ram used' pi graph isn't correct.


    Hi

    Total and used ram should show up the other way around. "Ram used" is showing 123.6GB of 14.4GB.

    opened by linuxshokunin 11
  • Invalid sort in workload page


    The sort of the age column doesn't seem to take the unit into account. (screenshot)

    opened by kvpt 10
  • Invalid credentials with oidc auth with dex


    Hi,

    I get invalid credentials error like below when authenticated with dex as an oidc-provider.

    An error occured during the request { OpenIdConnectError: invalid_client (Invalid client credentials.)
        at Client.requestErrorHandler (/usr/src/app/node_modules/openid-client/lib/helpers/error_handler.js:16:11)
        at processTicksAndRejections (internal/process/next_tick.js:81:5)
      error: 'invalid_client',
      error_description: 'Invalid client credentials.' } POST /oidc
    POST /oidc 500
    

    If I turn off oidc auth, k8dash asks for a token and it works if I enter a valid token. Dex is authenticating with github.com and it works fine with kubectl. Here are the kubectl settings:

    user:
        auth-provider:
          config:
            client-id: kubernetes
            client-secret: ZXhhbXBsZS1hcHAtc2VjcmV0
            extra-scopes: offline_access openid profile email groups
            id-token: REDACTED
            idp-certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMrakNDQWVLZ0F3SUJBZ0lKQU1lRXJhSHYzNXJWTUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIydDFZbVV0WTJFd0hoY05NVGt3TXpNeE1Ua3dPVEE0V2hjTk1Ua3dOREV3TVRrd09UQTRXakFTTVJBdwpEZ1lEVlFRRERBZHJkV0psTFdOaE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCjBkb2NjV3Zpb29xbDRVa05oejFCZ01KV25JU0w5TUExRm1ySEZ4U2hUYysrL1V0VURxMVVlU0xCRXpXTjNZZmcKQm5TQVNBQUNmS0lCRTBDRWJWdzhSTUtodXJReExGT0hQUDBodWtVRGkxNmVnaXBHSjI0WWdWcnJ4cUpVYWxsYQo2cUpaTkdsUHQ3SmxWdWtrSHRlY0hONjVneG0wQjBzMWtwV1VRNFh2L0E2ZldOaHVhV3VqYlRjRWx0SEFtQlJnCmtmMHpRYnV2ZCtMRnl3V0V2VDdBai9ua1FVZko1L21DOTQyUmlYVDNXdUtyc1g1a3F3ellrVU9xN2hOM1B1aVQKU1NYRm9JNUxqQWd5eDVqVEhubDdmb3JWSnhObDYvdEc2eFg4S3BxMmpST3FZSzlUWFdhSFlDVktQeTlMUTFuegpBNG9jTXQyRkFzREY4a2ZMUjBhK2l3SURBUUFCbzFNd1VUQWRCZ05WSFE0RUZnUVVMK1gzejRKWkhDZkg4Ry80Ckl0ZDhUdUZ5ZEV3d0h3WURWUjBqQkJnd0ZvQVVMK1gzejRKWkhDZkg4Ry80SXRkOFR1RnlkRXd3RHdZRFZSMFQKQVFIL0JBVXdBd0VCL3pBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQU1rTFB0dkZoZlZxM0VibUJFU3dER09ZdwpVYjFYS0VKb1JEVGV5dlozamZSWGhTVDlmdmM0bC9GMWVOd1ZKZnhXb0piUjdCU0JmbURiNzR5anBOcGVYS2xZClZVWnE1Mmx1dnlwNDlFNHJOQ1JHTDNzL0NjUnFnV0tqVmxKZWZGakg2TU8zYTZnM0NFZElGNXJSZi8zRXFGSDYKZm9tUkZ0MEw5NzZodmpGRXFyMlVYR01yTk1LMUN6YXJreDhaUXNkekwySGFhMzV6ei9aUG1PdFA1a2dzYUlMegpoSC9CQ215N242Q2pDVmx3UXZFRmFUOXVRRDZWa216eVNmQ29oaGo4WFYwanBMa2doeG12cGJRdzFDWmwvcDJSCkRwSTh3aCtNVkhGczMvZzNKa0lqUkU0SVJtV2ROWE5hWTBwMVVZUEVIMys3bDlDOXZTQ2Q3OXgvSTZtOVB3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
            idp-issuer-url: https://dex.example.com:32000
            refresh-token: ChlibzZjeDJyNnMzNWMzZjVoeWpuZm5oem8zEhltaWt3YmRxc3Eyem1qeHAyNmk2ZWlqYnd0
          name: oidc
    

    And this is k8s yaml manifests

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: k8dash
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: k8dash
      template:
        metadata:
          labels:
            k8s-app: k8dash
        spec:
          hostAliases:
          - hostnames:
            - dex.example.com
            ip: 10.0.2.100
          containers:
          - name: k8dash
            image: herbrandson/k8dash:dev
            command:
            - sh
            - -c
            - |
              npm config set cafile /ca/dex-ca.pem
              /sbin/tini -- node .
            ports:
            - containerPort: 4654
            livenessProbe:
              httpGet:
                scheme: HTTP
                path: /
                port: 4654
              initialDelaySeconds: 30
              timeoutSeconds: 30
            env:
            - name: OIDC_URL
              valueFrom:
                secretKeyRef:
                  name: k8dash
                  key: url
            - name: OIDC_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: k8dash
                  key: id
            - name: OIDC_SECRET
              valueFrom:
                secretKeyRef:
                  name: k8dash
                  key: secret
            - name: NODE_EXTRA_CA_CERTS
              value: /ca/dex-ca.pem
            - name: OIDC_SCOPES
              value: "openid email groups"
            volumeMounts:
            - name: cafile
              mountPath: /ca
          volumes:
          - name: cafile
            configMap:
              name: k8dash
    
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: k8dash
      namespace: kube-system
    spec:
      ports:
        - port: 80
          targetPort: 4654
      selector:
        k8s-app: k8dash
    
    ---
    apiVersion: v1
    data:
      dex-ca.pem: |
        -----BEGIN CERTIFICATE-----
        MIIC+jCCAeKgAwIBAgIJAMeEraHv35rVMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
        BAMMB2t1YmUtY2EwHhcNMTkwMzMxMTkwOTA4WhcNMTkwNDEwMTkwOTA4WjASMRAw
        DgYDVQQDDAdrdWJlLWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
        0doccWviooql4UkNhz1BgMJWnISL9MA1FmrHFxShTc++/UtUDq1UeSLBEzWN3Yfg
        BnSASAACfKIBE0CEbVw8RMKhurQxLFOHPP0hukUDi16egipGJ24YgVrrxqJUalla
        6qJZNGlPt7JlVukkHtecHN65gxm0B0s1kpWUQ4Xv/A6fWNhuaWujbTcEltHAmBRg
        kf0zQbuvd+LFywWEvT7Aj/nkQUfJ5/mC942RiXT3WuKrsX5kqwzYkUOq7hN3PuiT
        SSXFoI5LjAgyx5jTHnl7forVJxNl6/tG6xX8Kpq2jROqYK9TXWaHYCVKPy9LQ1nz
        A4ocMt2FAsDF8kfLR0a+iwIDAQABo1MwUTAdBgNVHQ4EFgQUL+X3z4JZHCfH8G/4
        Itd8TuFydEwwHwYDVR0jBBgwFoAUL+X3z4JZHCfH8G/4Itd8TuFydEwwDwYDVR0T
        AQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAMkLPtvFhfVq3EbmBESwDGOYw
        Ub1XKEJoRDTeyvZ3jfRXhST9fvc4l/F1eNwVJfxWoJbR7BSBfmDb74yjpNpeXKlY
        VUZq52luvyp49E4rNCRGL3s/CcRqgWKjVlJefFjH6MO3a6g3CEdIF5rRf/3EqFH6
        fomRFt0L976hvjFEqr2UXGMrNMK1Czarkx8ZQsdzL2Haa35zz/ZPmOtP5kgsaILz
        hH/BCmy7n6CjCVlwQvEFaT9uQD6VkmzySfCohhj8XV0jpLkghxmvpbQw1CZl/p2R
        DpI8wh+MVHFs3/g3JkIjRE4IRmWdNXNaY0p1UYPEH3+7l9C9vSCd79x/I6m9Pw==
        -----END CERTIFICATE-----
    kind: ConfigMap
    metadata:
      creationTimestamp: null
      name: k8dash
      namespace: kube-system
    ---
    apiVersion: v1
    data:
      id: a3ViZXJuZXRlcw==
      secret: ZXhhbXBsZS1hcHAtc2VjcmV0
      url: aHR0cHM6Ly9kZXguZXhhbXBsZS5jb206MzIwMDA=
    kind: Secret
    metadata:
      creationTimestamp: null
      name: k8dash
      namespace: kube-system
    

    Do you have any idea?

    opened by linuxshokunin 9
  • Add MASTER label + icons for ready status


    @herbrandson here is a draft PR for #59 . Hacking the sort is a bit trickier than I thought, so for now, it's just some basic things (utf8 symbol icon instead of text and a MASTER label).

    opened by olivergg 8
  • Getting errors while navigating UI


    Skooner version: stable
    Metrics server version: 4.2

    Getting error at cluster overview tab (screenshot: Screen Shot 2021-09-22 at 12 35 17 PM). Error:

    react-dom.production.min.js:209 TypeError: Cannot read properties of undefined (reading 'usage')
        at podRamChart.tsx:39
        at Ot (lodash.js:954)
        at Function.kn.sumBy (lodash.js:16567)
        at podRamChart.tsx:39
        at Gr (podRamChart.tsx:9)
        at Yo (react-dom.production.min.js:153)
        at Fs (react-dom.production.min.js:175)
        at ya (react-dom.production.min.js:263)
        at lu (react-dom.production.min.js:246)
        at au (react-dom.production.min.js:246)
    

    Getting error at node details tab (screenshot: Screen Shot 2021-09-22 at 12 38 37 PM). Error:

    react-dom.production.min.js:209 TypeError: Cannot read properties of undefined (reading 'conditions')
            at node.tsx:149
            at Sn.render (node.tsx:64)
            at Ns (react-dom.production.min.js:182)
            at js (react-dom.production.min.js:181)
            at ya (react-dom.production.min.js:263)
            at lu (react-dom.production.min.js:246)
            at au (react-dom.production.min.js:246)
            at Qa (react-dom.production.min.js:239)
            at react-dom.production.min.js:123
            at t.unstable_runWithPriority (scheduler.production.min.js:19)
    

    How to solve?

    good first issue 
    opened by ankurpshah 1
  • Not able to see the pods view


    (screenshot)

    Kubernetes version 1.22

    logs from skooner pod

    2021-09-08T09:55:00.421Z GET /api/v1/services 200
    2021-09-08T09:55:00.637Z GET / 200
    [HPM] GET /api/v1/namespaces?watch=1&resourceVersion=21905 -> https://10.96.0.1:443
    [HPM] Upgrading to WebSocket
    [HPM] GET /api/v1/services?watch=1&resourceVersion=21905 -> https://10.96.0.1:443
    [HPM] Upgrading to WebSocket
    [HPM] Client disconnected
    [HPM] Client disconnected
    [HPM] GET /api/v1/namespaces/ingress-nginx/events -> https://10.96.0.1:443
    [HPM] GET /api/v1/namespaces/ingress-nginx/services/ingress-nginx-controller-admission -> https://10.96.0.1:443
    2021-09-08T09:55:02.836Z GET /api/v1/namespaces/ingress-nginx/events 200
    2021-09-08T09:55:02.839Z GET /api/v1/namespaces/ingress-nginx/services/ingress-nginx-controller-admission 200
    [HPM] GET /api/v1/namespaces/ingress-nginx/events?watch=1&resourceVersion=21909 -> https://10.96.0.1:443
    [HPM] Upgrading to WebSocket
    [HPM] GET /api/v1/namespaces/ingress-nginx/services?watch=1&fieldSelector=metadata.name%3Dingress-nginx-controller-admission -> https://10.96.0.1:443
    [HPM] Upgrading to WebSocket
    [HPM] GET /api/v1/namespaces -> https://10.96.0.1:443
    [HPM] GET /apis/apps/v1/replicasets -> https://10.96.0.1:443
    [HPM] Client disconnected
    [HPM] Client disconnected
    2021-09-08T09:55:06.939Z GET /api/v1/namespaces 200
    2021-09-08T09:55:06.943Z GET /apis/apps/v1/replicasets 200
    [HPM] GET /api/v1/namespaces?watch=1&resourceVersion=21919 -> https://10.96.0.1:443
    [HPM] Upgrading to WebSocket
    [HPM] GET /apis/apps/v1/replicasets?watch=1&resourceVersion=21919 -> https://10.96.0.1:443
    [HPM] Upgrading to WebSocket
    [HPM] Client disconnected
    [HPM] GET /api/v1/namespaces/kube-system/pods -> https://10.96.0.1:443
    [HPM] Client disconnected
    [HPM] GET /apis/apps/v1/namespaces/kube-system/replicasets/coredns-78fcd69978 -> https://10.96.0.1:443
    2021-09-08T09:55:08.848Z GET /apis/apps/v1/namespaces/kube-system/replicasets/coredns-78fcd69978 200
    [HPM] GET /api/v1/namespaces/kube-system/events -> https://10.96.0.1:443
    2021-09-08T09:55:08.988Z GET /api/v1/namespaces/kube-system/events 200
    [HPM] GET /apis/autoscaling/v1/namespaces/kube-system/horizontalpodautoscalers/coredns-78fcd69978 -> https://10.96.0.1:443
    [HPM] GET /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods -> https://10.96.0.1:443
    2021-09-08T09:55:09.004Z GET /api/v1/namespaces/kube-system/pods 200
    2021-09-08T09:55:09.008Z GET /apis/autoscaling/v1/namespaces/kube-system/horizontalpodautoscalers/coredns-78fcd69978 404
    2021-09-08T09:55:09.021Z GET /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods 200
    [HPM] GET /apis/apps/v1/namespaces/kube-system/replicasets?watch=1&fieldSelector=metadata.name%3Dcoredns-78fcd69978 -> https://10.96.0.1:443
    [HPM] Upgrading to WebSocket
    [HPM] GET /api/v1/namespaces/kube-system/events?watch=1&resourceVersion=21923 -> https://10.96.0.1:443
    [HPM] Upgrading to WebSocket
    [HPM] GET /api/v1/namespaces/kube-system/pods?watch=1&resourceVersion=21923 -> https://10.96.0.1:443
    [HPM] Upgrading to WebSocket
    2021-09-08T09:55:10.639Z GET / 200
    [HPM] GET /api/v1/namespaces -> https://10.96.0.1:443
    [HPM] GET /apis/metrics.k8s.io/v1beta1/pods -> https://10.96.0.1:443
    [HPM] Client disconnected
    [HPM] Client disconnected
    2021-09-08T09:55:13.922Z GET /api/v1/namespaces 200
    [HPM] Client disconnected
    2021-09-08T09:55:13.939Z GET /apis/metrics.k8s.io/v1beta1/pods 200
    [HPM] GET /api/v1/pods -> https://10.96.0.1:443
    [HPM] GET /api/v1/namespaces?watch=1&resourceVersion=21933 -> https://10.96.0.1:443
    [HPM] Upgrading to WebSocket
    2021-09-08T09:55:14.595Z GET /api/v1/pods 200
    [HPM] GET /api/v1/pods?watch=1&resourceVersion=21933 -> https://10.96.0.1:443
    [HPM] Upgrading to WebSocket
    2021-09-08T09:55:20.638Z GET / 200
    [HPM] GET /apis/metrics.k8s.io/v1beta1/pods -> https://10.96.0.1:443
    2021-09-08T09:55:24.150Z GET /apis/metrics.k8s.io/v1beta1/pods 200
    2021-09-08T09:55:30.638Z GET / 200
    [HPM] GET /apis/metrics.k8s.io/v1beta1/pods -> https://10.96.0.1:443
    2021-09-08T09:55:34.135Z GET /apis/metrics.k8s.io/v1beta1/pods 200
    2021-09-08T09:55:40.638Z GET / 200
    [HPM] GET /apis/metrics.k8s.io/v1beta1/pods -> https://10.96.0.1:443
    2021-09-08T09:55:44.111Z GET /apis/metrics.k8s.io/v1beta1/pods 200
    2021-09-08T09:55:50.638Z GET / 200
    [HPM] GET /apis/metrics.k8s.io/v1beta1/pods -> https://10.96.0.1:443
    2021-09-08T09:55:54.162Z GET /apis/metrics.k8s.io/v1beta1/pods 200
    2021-09-08T09:56:00.638Z GET / 200
    [HPM] GET /apis/metrics.k8s.io/v1beta1/pods -> https://10.96.0.1:443
    [HPM] GET /apis/metrics.k8s.io/v1beta1/pods -> https://10.96.0.1:443
    2021-09-08T10:10:24.168Z GET /apis/metrics.k8s.io/v1beta1/pods 200
    2021-09-08T10:10:30.638Z GET / 200
    [HPM] GET /apis/metrics.k8s.io/v1beta1/pods -> https://10.96.0.1:443
    2021-09-08T10:10:34.232Z GET /apis/metrics.k8s.io/v1beta1/pods 200
    [HPM] Client disconnected
    [HPM] Client disconnected
    [HPM] GET /api/v1/namespaces/kube-system/events -> https://10.96.0.1:443
    [HPM] GET /api/v1/namespaces/kube-system/pods/kube-apiserver-kube-410ee99c -> https://10.96.0.1:443
    [HPM] GET /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-apiserver-kube-410ee99c -> https://10.96.0.1:443
    2021-09-08T10:10:40.389Z GET /api/v1/namespaces/kube-system/pods/kube-apiserver-kube-410ee99c 200
    2021-09-08T10:10:40.393Z GET /api/v1/namespaces/kube-system/events 200
    2021-09-08T10:10:40.400Z GET /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-apiserver-kube-410ee99c 200
    2021-09-08T10:10:40.638Z GET / 200
    [HPM] POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews -> https://10.96.0.1:443
    2021-09-08T10:10:40.744Z POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews 201
    2021-09-08T10:10:50.638Z GET / 200
    
    
    opened by saiyam1814 3
  • Authorization using token problem


    I try to authorize using a token, but I get a "401 Unauthorized" error. Using kubectl port-forward everything works fine. The problem is that I use the Istio AuthorizationPolicy and Keycloak for authorization, and when I enter the token, the authorization token formed by Keycloak is sent to Skooner instead.

    Could you please implement this feature? https://github.com/skooner-k8s/skooner/issues/97

    Logs:

    OIDC_URL: None
    API URL: https://100.64.0.1:443
    [HPM] Proxy created: / -> https://100.64.0.1:443
    [HPM] Subscribed to http-proxy events: [ 'error', 'close' ]
    Server started. Listening on port 4654
    (node:6) [DEP0123] DeprecationWarning: Setting the TLS ServerName to an IP address is not permitted by RFC 6066. This will be ignored in a future version.
    Version Info: { "buildDate": "2021-07-15T20:56:38Z", "compiler": "gc", "gitCommit": "7a576bc3935a6b555e33346fd73ad77c925e9e4a", "gitTreeState": "clean", "gitVersion": "v1.20.9", "goVersion": "go1.15.14", "major": "1", "minor": "20", "platform": "linux/amd64" }
    Available APIs: [ "acme.cert-manager.io/v1", "admissionregistration.k8s.io/v1", "apiextensions.k8s.io/v1", "apiregistration.k8s.io/v1", "apps/v1", "authentication.k8s.io/v1", "authorization.k8s.io/v1", "autoscaling/v1", "batch/v1", "cert-manager.io/v1", "certificates.k8s.io/v1", "coordination.k8s.io/v1", "crd.projectcalico.org/v1", "discovery.k8s.io/v1beta1", "events.k8s.io/v1", "extensions/v1beta1", "flowcontrol.apiserver.k8s.io/v1beta1", "install.istio.io/v1alpha1", "metrics.k8s.io/v1beta1", "monitoring.coreos.com/v1", "networking.istio.io/v1beta1", "networking.k8s.io/v1", "node.k8s.io/v1", "policy/v1beta1", "rbac.authorization.k8s.io/v1", "scheduling.k8s.io/v1", "security.istio.io/v1beta1", "snapshot.storage.k8s.io/v1", "storage.k8s.io/v1", "telemetry.istio.io/v1alpha1" ]
    2021-09-03T08:51:37.846Z GET / 200
    [HPM] POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews -> https://100.64.0.1:443
    [HPM] POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews -> https://100.64.0.1:443
    2021-09-03T08:51:40.181Z POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews 401
    2021-09-03T08:51:40.187Z POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews 401
    [HPM] POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews -> https://100.64.0.1:443
    2021-09-03T08:51:40.512Z POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews 401
    Authorization header found. Passing through to client.
    2021-09-03T08:51:40.556Z GET / 304
    2021-09-03T08:51:40.929Z GET /static/css/2.b522e268.chunk.css 304
    2021-09-03T08:51:40.929Z GET /static/css/main.49b3e53b.chunk.css 304
    2021-09-03T08:51:41.066Z GET /static/js/2.ca91f3c5.chunk.js 304
    2021-09-03T08:51:41.068Z GET /static/js/main.2e62b46d.chunk.js 304
    [HPM] POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews -> https://100.64.0.1:443
    2021-09-03T08:51:41.652Z GET /oidc 304
    2021-09-03T08:51:41.660Z POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews 401
    2021-09-03T08:51:47.847Z GET / 200
    2021-09-03T08:51:57.844Z GET / 200

    I also tried to configure OIDC through Keycloak (OIDC is not configured in the cluster). Keycloak is used to log into Grafana and other services and it works fine. But in the case of Skooner, I also get "401 Unauthorized".

    opened by vrachok-sc 0
  • Bump tar from 6.1.0 to 6.1.11 in /server


    Bumps tar from 6.1.0 to 6.1.11.

    Commits
    • e573aee 6.1.11
    • edb8e9a fix: perf regression on hot string munging path
    • a9d9b05 chore(test): Avoid spurious failures packing node_modules/.cache
    • 24b8bda fix(test): use posix path for testing path reservations
    • e5a223c fix(test): make unpack test pass on case-sensitive fs
    • 188badd 6.1.10
    • 23312ce drop dirCache for symlink on all platforms
    • 4f1f4a2 6.1.9
    • 875a37e fix: prevent path escape using drive-relative paths
    • b6162c7 fix: reserve paths properly for unicode, windows
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Put a feature flag to shield all prometheus feature



    prometheus 
    opened by yuqiuw 0
  • Add a state context to determine whether to use Prometheus or not

    We need a state flag in the frontend to determine whether to use Prometheus. We will try to send a request to the Prometheus proxy; if it doesn't return 200, we will set the flag to false and fall back to the old pie charts.

    prometheus 
    opened by tianni4104 0
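    The check described in that issue can be sketched as follows. This is a minimal illustration of the proposed fallback logic; the proxy path and function names are assumptions, not Skooner's actual frontend API.

    ```typescript
    // Hypothetical sketch of the availability probe described above.
    function shouldUsePrometheus(status: number | null): boolean {
        // Only a 200 from the Prometheus proxy enables the Prometheus charts.
        return status === 200;
    }

    async function probePrometheus(fetchFn: typeof fetch): Promise<boolean> {
        try {
            // Assumed proxy path for illustration only.
            const res = await fetchFn('/prometheus/api/v1/query?query=up');
            return shouldUsePrometheus(res.status);
        } catch {
            // Network error: fall back to the old pie charts.
            return false;
        }
    }
    ```

    Injecting the fetch function keeps the probe testable without a live cluster.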
  • Unable to load pages with websocket connections

    I've installed Skooner into the cluster as per the instructions. I'm able to access all the pages where websockets are not used, but when I try to access pages that use websockets (e.g. Nodes) I see errors in the dev console. I've also increased the nginx proxy-read-timeout and proxy-send-timeout.

    opened by hariprasadiit 1
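    For reference, the nginx timeouts mentioned in that issue are typically raised via ingress-nginx annotations. The values below are illustrative assumptions, not Skooner defaults:

    ```yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: skooner
      namespace: kube-system
      annotations:
        # Keep long-lived websocket connections open (values in seconds).
        nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
        nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    ```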
  • [Question] CRD objects support

    Hi dear Skooner team, thanks for the great dashboard. I have a question: do you have plans to add CRD object visualisation (like in the default K8s dashboard) and the ability to do CRUD operations on them? Thanks a lot. :)

    opened by ozlotusflare 1
  • Bump path-parse from 1.0.6 to 1.0.7 in /server

    Bumps path-parse from 1.0.6 to 1.0.7. Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself.

    dependencies 
    opened by dependabot[bot] 0
Releases (alpha0.0.1)