🐶 K9s - Kubernetes CLI To Manage Your Clusters In Style!

Overview

K9s provides a terminal UI to interact with your Kubernetes clusters. The aim of this project is to make it easier to navigate, observe and manage your applications in the wild. K9s continually watches Kubernetes for changes and offers subsequent commands to interact with your observed resources.


Announcement

K9sAlpha RC-0 Is Out!


Fresh off the press, K9sAlpha is now available! Please read the details in the docs and check out the new repo.

NOTE: Upon purchase, in order to activate your license, please send us a valid user name so we can generate your personalized license key. All licenses are valid for a whole year from the date of purchase.

For all other cases, please reach out to us so we can discuss your needs:

  • Corporate licenses
  • Education
  • Non Profit
  • Active K9s sponsors
  • Long term K9s supporters and contributors
  • Can't afford it
  • Others...


Documentation

Please refer to our K9s documentation site for installation, usage, customization and tips.

Slack Channel

Wanna discuss K9s features with your fellow K9sers or simply show your support for this tool?


Installation

K9s is available on Linux, macOS and Windows platforms.

  • Binaries for Linux, Windows and macOS are available as tarballs on the releases page.

  • Via Homebrew for macOS or LinuxBrew for Linux

    brew install k9s
  • Via MacPorts

    sudo port install k9s
  • On Arch Linux

    pacman -S k9s
  • On OpenSUSE Linux distribution

    zypper install k9s
  • Via Scoop for Windows

    scoop install k9s
  • Via Chocolatey for Windows

    choco install k9s
  • Via a GO install

    # NOTE: The dev version will be in effect!
    go get -u github.com/derailed/k9s

Building From Source

K9s currently requires Go v1.14 or above. In order to build K9s from source you must:

  1. Clone the repo

  2. Build and run the executable

    make build && ./execs/k9s

Running with Docker

Running the official Docker image

You can run k9s as a Docker container by mounting your KUBECONFIG:

docker run --rm -it -v $KUBECONFIG:/root/.kube/config quay.io/derailed/k9s

For the default path, it would be:

docker run --rm -it -v ~/.kube/config:/root/.kube/config quay.io/derailed/k9s

Building your own Docker image

You can build your own Docker image of k9s from the Dockerfile with the following:

docker build -t k9s-docker:0.1 .

You can get the latest stable kubectl version and pass it to the docker build command with the --build-arg option; any valid kubectl version (like v1.18.0 or v1.19.1) will work.

KUBECTL_VERSION=$(make kubectl-stable-version 2>/dev/null)
docker build --build-arg KUBECTL_VERSION=${KUBECTL_VERSION} -t k9s-docker:0.1 .

Run your container:

docker run --rm -it -v ~/.kube/config:/root/.kube/config k9s-docker:0.1

PreFlight Checks

  • K9s uses a 256-color terminal mode. On *nix systems, make sure TERM is set accordingly.

    export TERM=xterm-256color
  • In order to issue manifest edit commands make sure your EDITOR env is set.

    # Kubectl edit command will use this env var.
    export EDITOR=my_fav_editor
    # Should your editor deal with streamed vs on-disk files differently, also set...
    export K9S_EDITOR=my_fav_editor
  • K9s prefers recent Kubernetes versions, i.e. 1.16+
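
As a quick sanity check for the terminal preflight item above, a shell snippet like the following (an illustrative sketch, not part of K9s) reports whether TERM looks like a 256-color terminal:

```shell
# Warn if TERM does not look like a 256-color terminal.
# This is a heuristic based only on the TERM name, not an actual capability probe.
case "${TERM:-}" in
  *256color*) echo "OK: ${TERM} advertises 256 colors" ;;
  *)          echo "Hint: export TERM=xterm-256color" ;;
esac
```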


The Command Line

# List all available CLI options
k9s help
# To get info about K9s runtime (logs, configs, etc..)
k9s info
# To run K9s in a given namespace
k9s -n mycoolns
# Start K9s in an existing KubeConfig context
k9s --context coolCtx
# Start K9s in readonly mode - with all cluster modification commands disabled
k9s --readonly

Logs

Given the nature of the UI, K9s writes its logs to a specific location. To view the logs and turn on debug mode, use the following commands:

k9s info
# Will produce something like this
#  ____  __.________
# |    |/ _/   __   \______
# |      < \____    /  ___/
# |    |  \   /    /\___ \
# |____|__ \ /____//____  >
#         \/            \/
#
# Configuration:   /Users/fernand/.k9s/config.yml
# Logs:            /var/folders/8c/hh6rqbgs5nx_c_8k9_17ghfh0000gn/T/k9s-fernand.log
# Screen Dumps:    /var/folders/8c/hh6rqbgs5nx_c_8k9_17ghfh0000gn/T/k9s-screens-fernand

# To view k9s logs
tail -f /var/folders/8c/hh6rqbgs5nx_c_8k9_17ghfh0000gn/T/k9s-fernand.log

# Start K9s in debug mode
k9s -l debug
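
The log file name embeds the current user name (k9s-<user>.log under the system temp dir, as in the sample output above). Here is a hedged sketch for locating it without running k9s info; the path layout is an assumption based on that sample, and TMPDIR varies by OS:

```shell
# Compute the likely k9s log path for the current user.
# Assumes the k9s-<user>.log naming shown by `k9s info` above.
LOG="${TMPDIR:-/tmp}/k9s-${USER:-$(id -un)}.log"
echo "$LOG"
```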

Key Bindings

K9s uses aliases to navigate most K8s resources.

  • Show active keyboard mnemonics and help: ?
  • Show all available resource aliases: ctrl-a
  • Bail out of K9s: :q⏎ or ctrl-c
  • View a Kubernetes resource: :po⏎ (accepts singular, plural, short-name or alias, e.g. pod or pods)
  • View a Kubernetes resource in a given namespace: :alias namespace⏎
  • Filter a resource view given a filter: /filter⏎ (Regex2 supported, e.g. `fred`)
  • Inverse regex filter: /! filter⏎ (keeps everything that does not match)
  • Filter a resource view by labels: /-l label-selector⏎
  • Fuzzy-find a resource given a filter: /-f filter⏎
  • Bail out of view/command/filter mode: <esc>
  • Describe, view, edit, or view logs for a resource: d, v, e, l, ...
  • View and switch to another Kubernetes context: :ctx⏎
  • Switch to a named Kubernetes context: :ctx context-name⏎
  • View and switch to another Kubernetes namespace: :ns⏎
  • View all saved resources (screen dumps): :screendump⏎ or :sd⏎
  • Delete a resource (TAB and ENTER to confirm): ctrl-d
  • Kill a resource (no confirmation dialog!): ctrl-k
  • Launch the Pulses view: :pulses⏎ or :pu⏎
  • Launch the XRay view: :xray RESOURCE [NAMESPACE]⏎ (RESOURCE is one of po, svc, dp, rs, sts, ds; NAMESPACE is optional)
  • Launch the Popeye view: :popeye⏎ or :pop⏎ (see https://popeyecli.io)

Screenshots

  1. Pods
  2. Logs
  3. Deployments


Demo Videos/Recordings


K9s Configuration

K9s keeps its configuration in a .k9s directory in your home directory: $HOME/.k9s/config.yml.

NOTE: This is still in flux and will change while in pre-release stage!

# $HOME/.k9s/config.yml
k9s:
  # Represents ui poll intervals. Default 2secs
  refreshRate: 2
  # Number of retries once the connection to the api-server is lost. Default 15.
  maxConnRetry: 5
  # Enable mouse support. Default false
  enableMouse: true
  # Set to true to hide K9s header. Default false
  headless: false
  # Set to true to hide K9s crumbs. Default false
  crumbsless: false
  # Indicates whether modification commands like delete/kill/edit are disabled. Default is false
  readOnly: false
  # Toggles icons display as not all terminal support these chars.
  noIcons: false
  # Logs configuration
  logger:
    # Defines the number of lines to return. Default 100
    tail: 200
    # Defines the total number of log lines to allow in the view. Default 1000
    buffer: 500
    # Represents how far to go back in the log timeline in seconds. Setting to -1 will show all available logs. Default is 5min.
    sinceSeconds: 300
    # Go full screen while displaying logs. Default false
    fullScreenLogs: false
    # Toggles log line wrap. Default false
    textWrap: false
    # Toggles log line timestamp info. Default false
    showTime: false
  # Indicates the current kube context. Defaults to current context
  currentContext: minikube
  # Indicates the current kube cluster. Defaults to current context cluster
  currentCluster: minikube
  # Persists per cluster preferences for favorite namespaces and view.
  clusters:
    coolio:
      namespace:
        active: coolio
        favorites:
        - cassandra
        - default
      view:
        active: po
      featureGates:
        # Toggles NodeShell support. Allow K9s to shell into nodes if needed. Default false.
        nodeShell: false
      # Provide shell pod customization if the feature gate is enabled
      shellPod:
        # The shell pod image to use.
        image: killerAdmin
        # The namespace to launch the shell pod into.
        namespace: fred
        # The resource limit to set on the shell pod.
        limits:
          cpu: 100m
          memory: 100Mi
      # The IP Address to use when launching a port-forward.
      portForwardAddress: 1.2.3.4
    kind:
      namespace:
        active: all
        favorites:
        - all
        - kube-system
        - default
      view:
        active: dp

Node Shell

By enabling the nodeShell feature gate on a given cluster, K9s allows you to shell into your cluster nodes. Once enabled, you will have a new s (shell) menu option while in node view. K9s will launch a pod on the selected node using a special k9s_shell pod. Furthermore, you can refine your shell pod by using a custom Docker image preloaded with the shell tools you love. By default K9s uses a BusyBox image, but you can configure it as follows:

# $HOME/.k9s/config.yml
k9s:
  clusters:
    # Configures node shell on cluster blee
    blee:
      featureGates:
        # You must enable the nodeShell feature gate to enable shelling into nodes
        nodeShell: true
      # You can also further tune the shell pod specification
      shellPod:
        image: cool_kid_admin:42
        namespace: blee
        limits:
          cpu: 100m
          memory: 100Mi

Command Aliases

In K9s, you can define your very own command aliases (short names) to access your resources. In your $HOME/.k9s directory, define a file called alias.yml. A K9s alias defines pairs of alias:gvr. A gvr (Group/Version/Resource) represents a fully qualified Kubernetes resource identifier. Here is an example of an alias file:

# $HOME/.k9s/alias.yml
alias:
  pp: v1/pods
  crb: rbac.authorization.k8s.io/v1/clusterrolebindings

Using this alias file, you can now type :pp⏎ or :crb⏎ to list pods or ClusterRoleBindings, respectively.
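
To bootstrap that file from the shell, something like the following works. The K9S_DIR variable is an illustrative override for testing, not a K9s setting; it defaults to the documented location:

```shell
# Write the alias file shown above into the K9s config directory.
K9S_DIR="${K9S_DIR:-$HOME/.k9s}"
mkdir -p "$K9S_DIR"
cat > "$K9S_DIR/alias.yml" <<'EOF'
alias:
  pp: v1/pods
  crb: rbac.authorization.k8s.io/v1/clusterrolebindings
EOF
```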


HotKey Support

Entering command mode and typing a resource name or alias can be cumbersome for navigating through frequently used resources. Hotkeys let a user define their own shortcuts to activate their favorite resource views. In order to enable hotkeys, please follow these steps:

  1. Create a file named $HOME/.k9s/hotkey.yml

  2. Add the following to your hotkey.yml. You can use resource name/short name to specify a command ie same as typing it while in command mode.

    # $HOME/.k9s/hotkey.yml
    hotKey:
      # Hitting Shift-0 navigates to your pod view
      shift-0:
        shortCut:    Shift-0
        description: Viewing pods
        command:     pods
      # Hitting Shift-1 navigates to your deployments
      shift-1:
        shortCut:    Shift-1
        description: View deployments
        command:     dp
      # Hitting Shift-2 navigates to your xray deployments
      shift-2:
        shortCut:    Shift-2
        description: Xray Deployments
        command:     xray deploy

Not feeling so hot? Your custom hotkeys will be listed in the help view (?). Your hotkey file is also automatically reloaded, so you can use your hotkeys as soon as you define them.

You can choose any keyboard shortcuts that make sense to you, provided they are not part of the standard K9s shortcuts list.

NOTE: This feature/configuration might change in future releases!


Resource Custom Columns

SneakCast v0.17.0 on The Beach! - Yup! sound is sucking but what a setting!

You can change which columns show up for a given resource via custom views. To surface this feature, you will need to create a new configuration file, namely $HOME/.k9s/views.yml. This file leverages GVR (Group/Version/Resource) to configure the associated table view columns. If no GVR is found for a view, the default rendering takes over (i.e. what we have now). Going wide will add all the remaining columns that are available on the given resource after your custom columns. To boot, you can edit your views config file and tune your resource views live!

NOTE: This is experimental and will most likely change as we iron this out!

Here is a sample views configuration that customizes the pod and service views.

# $HOME/.k9s/views.yml
k9s:
  views:
    v1/pods:
      columns:
        - AGE
        - NAMESPACE
        - NAME
        - IP
        - NODE
        - STATUS
        - READY
    v1/services:
      columns:
        - AGE
        - NAMESPACE
        - NAME
        - TYPE
        - CLUSTER-IP

Plugins

K9s allows you to extend your command line and tooling by defining your very own cluster commands via plugins. K9s will look at $HOME/.k9s/plugin.yml to locate all available plugins. A plugin is defined as follows:

  • Shortcut option represents the key combination a user would type to activate the plugin
  • Confirm option (when enabled) lets you see the command that is going to be executed and gives you an option to confirm or prevent execution
  • Description will be printed next to the shortcut in the k9s menu
  • Scopes defines a collection of resource names/short-names for the views associated with the plugin. You can specify all to provide this shortcut for all views.
  • Command represents ad-hoc commands the plugin runs upon activation
  • Background specifies whether or not the command runs in the background
  • Args specifies the various arguments that should apply to the command above

K9s does provide additional environment variables for you to customize your plugin arguments. Currently, the available environment variables are as follows:

  • $NAMESPACE -- the selected resource namespace
  • $NAME -- the selected resource name
  • $CONTAINER -- the current container if applicable
  • $FILTER -- the current filter if any
  • $KUBECONFIG -- the KubeConfig location
  • $CLUSTER -- the active cluster name
  • $CONTEXT -- the active context name
  • $USER -- the active user
  • $GROUPS -- the active groups
  • $POD -- the pod name while in a container view
  • $COL-<RESOURCE_COLUMN_NAME> -- use a given column name for a viewed resource; must be prefixed by COL-!

Example

This defines a plugin for viewing logs on a selected pod using ctrl-l as the shortcut.

# $HOME/.k9s/plugin.yml
plugin:
  # Defines a plugin to provide a `ctrl-l` shortcut to tail the logs while in pod view.
  fred:
    shortCut: Ctrl-L
    confirm: false
    description: Pod logs
    scopes:
    - pods
    command: kubectl
    background: false
    args:
    - logs
    - -f
    - $NAME
    - -n
    - $NAMESPACE
    - --context
    - $CONTEXT

NOTE: This is an experimental feature! Options and layout may change in future K9s releases as this feature solidifies.


Benchmark Your Applications

K9s integrates Hey from the brilliant and super talented Jaana Dogan. Hey is a CLI tool to benchmark HTTP endpoints, similar to ApacheBench (ab). This preliminary feature currently supports benchmarking port-forwards and services (read: the paint on this is way fresh!).

To set up a port-forward, you will need to navigate to the PodView, then select a pod and a container that exposes a given port. Using SHIFT-F, a dialog comes up to allow you to specify a local port to forward. Once acknowledged, you can navigate to the PortForward view (alias pf), which lists your active port-forwards. Selecting a port-forward and using CTRL-B will run a benchmark on that HTTP endpoint. To view the results of your benchmark runs, go to the Benchmarks view (alias be). You should now be able to select a benchmark and view the run stats details by pressing <ENTER>.

NOTE: Port-forwards only last for the duration of the K9s session and will be terminated upon exit.

Initially, the benchmarks will run with the following defaults:

  • Concurrency Level: 1
  • Number of Requests: 200
  • HTTP Verb: GET
  • Path: /

The PortForward view is backed by a new K9s config file namely: $HOME/.k9s/bench-<k8s_context>.yml (note: extension is yml and not yaml). Each cluster you connect to will have its own bench config file, containing the name of the K8s context for the cluster. Changes to this file should automatically update the PortForward view to indicate how you want to run your benchmarks.
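The per-context file name above can be derived from the shell. This sketch assumes kubectl is on your PATH and falls back to the placeholder "mycontext" when it is not:

```shell
# Compute the per-context bench config path described above.
# Falls back to the placeholder "mycontext" when kubectl is unavailable.
CTX=$(kubectl config current-context 2>/dev/null || echo mycontext)
echo "$HOME/.k9s/bench-${CTX}.yml"
```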

Here is a sample bench configuration. Please keep in mind this file will likely change in subsequent releases!

# This file resides in $HOME/.k9s/bench-mycontext.yml
benchmarks:
  # Indicates the default concurrency and number of requests setting if a container or service rule does not match.
  defaults:
    # One concurrent connection
    concurrency: 1
    # Number of requests that will be sent to an endpoint
    requests: 500
  containers:
    # Containers section allows you to configure your http container's endpoints and benchmarking settings.
    # NOTE: the container ID syntax uses namespace/pod-name:container-name
    default/nginx:nginx:
      # Benchmark a container named nginx using POST HTTP verb using http://localhost:port/bozo URL and headers.
      concurrency: 1
      requests: 10000
      http:
        path: /bozo
        method: POST
        body:
          {"fred":"blee"}
        header:
          Accept:
            - text/html
          Content-Type:
            - application/json
  services:
    # Similarly, you can benchmark an HTTP service exposed via either NodePort or LoadBalancer types.
    # Service ID is ns/svc-name
    default/nginx:
      # Set the concurrency level
      concurrency: 5
      # Number of requests to be sent
      requests: 500
      http:
        method: GET
        # This setting will depend on whether service is NodePort or LoadBalancer. NodePort may require vendor port tunneling setting.
        # Set this to a node if NodePort or LB if applicable. IP or dns name.
        host: A.B.C.D
        path: /bumblebeetuna
      auth:
        user: jean-baptiste-emmanuel
        password: Zorg!

K9s RBAC FU

On RBAC-enabled clusters, you will need to grant your users/groups capabilities so that they can use K9s to explore their Kubernetes cluster. K9s minimally needs read privileges at both the cluster and namespace level to display resources and metrics.

The rules below are just suggestions. You will need to customize them based on your environment policies. If you need to edit/delete resources, extra Fu will be necessary.

NOTE! Cluster/Namespace access may change in the future as K9s evolves.
NOTE! We expect K9s to keep running even in atrophied clusters/namespaces. Please file issues if this is not the case!

Cluster RBAC scope

---
# K9s Reader ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k9s
rules:
  # Grants RO access to cluster resources node and namespace
  - apiGroups: [""]
    resources: ["nodes", "namespaces"]
    verbs: ["get", "list", "watch"]
  # Grants RO access to RBAC resources
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles", "roles", "clusterrolebindings", "rolebindings"]
    verbs: ["get", "list", "watch"]
  # Grants RO access to CRD resources
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "list", "watch"]
  # Grants RO access to metric server (if present)
  - apiGroups: ["metrics.k8s.io"]
    resources: ["nodes", "pods"]
    verbs: ["get", "list", "watch"]

---
# Sample K9s user ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k9s
subjects:
  - kind: User
    name: fernand
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: k9s
  apiGroup: rbac.authorization.k8s.io

Namespace RBAC scope

If your users are constrained to certain namespaces, K9s will need the following role to enable read access to namespaced resources.

---
# K9s Reader Role (default namespace)
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k9s
  namespace: default
rules:
  # Grants RO access to most namespaced resources
  - apiGroups: ["", "apps", "autoscaling", "batch", "extensions"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
  # Grants RO access to metric server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs:
      - get
      - list
      - watch

---
# Sample K9s user RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k9s
  namespace: default
subjects:
  - kind: User
    name: fernand
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: k9s
  apiGroup: rbac.authorization.k8s.io

Skins

Example: Dracula Skin ;)

Dracula Skin

You can style K9s based on your own sense of look and style. Skins are YAML files that enable a user to change the K9s presentation layer. K9s skins are loaded from $HOME/.k9s/skin.yml. If a skin file is detected, it is loaded; otherwise the stock skin remains in effect.

You can also change K9s skins based on the cluster you are connecting to. In this case, you can specify the skin file name as $HOME/.k9s/mycontext_skin.yml. Below is a sample skin file; more skins are available in the skins directory in this repo. Simply copy any of them into $HOME/.k9s as skin.yml.

Colors can be defined by name or using a hex representation. Recently, we've added a color named default to indicate a transparent background color, preserving your terminal background color settings if so desired.

NOTE: This is very much an experimental feature at this time; more will be added/modified if this feature has legs, so tread accordingly!

NOTE: Please see K9s Skins for a list of available colors.

# Skin InTheNavy...
k9s:
  # General K9s styles
  body:
    fgColor: dodgerblue
    bgColor: '#ffffff'
    logoColor: '#0000ff'
  # ClusterInfoView styles.
  info:
    fgColor: lightskyblue
    sectionColor: steelblue
  frame:
    # Borders styles.
    border:
      fgColor: dodgerblue
      focusColor: aliceblue
    # MenuView attributes and styles.
    menu:
      fgColor: darkblue
      keyColor: cornflowerblue
      # Used for favorite namespaces
      numKeyColor: cadetblue
    # CrumbView attributes for history navigation.
    crumbs:
      fgColor: white
      bgColor: steelblue
      activeColor: skyblue
    # Resource status and update styles
    status:
      newColor: '#00ff00'
      modifyColor: powderblue
      addColor: lightskyblue
      errorColor: indianred
      highlightColor: royalblue
      killColor: slategray
      completedColor: gray
    # Border title styles.
    title:
      fgColor: aqua
      bgColor: white
      highlightColor: skyblue
      counterColor: slateblue
      filterColor: slategray
  views:
    # TableView attributes.
    table:
      fgColor: blue
      bgColor: darkblue
      cursorColor: aqua
      # Header row styles.
      header:
        fgColor: white
        bgColor: darkblue
        sorterColor: orange
    # YAML info styles.
    yaml:
      keyColor: steelblue
      colonColor: blue
      valueColor: royalblue
    # Logs styles.
    logs:
      fgColor: white
      bgColor: black

Known Issues

This is still work in progress! If something is broken or there's a feature that you want, please file an issue and if so inclined submit a PR!

K9s will most likely blow up if...

  1. You're running older versions of Kubernetes. K9s works best on the latest Kubernetes version.
  2. You don't have enough RBAC fu to manage your cluster.

ATTA Girls/Boys!

K9s sits on top of many open source projects and libraries. Our sincere appreciations to all the OSS contributors that work nights and weekends to make this project a reality!


Meet The Core Team!

We always enjoy hearing from folks who benefit from our work!

Contributions Guideline

  • File an issue first prior to submitting a PR!
  • Ensure all exported items are properly commented
  • If applicable, submit a test suite against your PR

Imhotep  © 2020 Imhotep Software LLC. All materials licensed under Apache v2.0

Issues
  • K9s extremely slow since 0.9.3

    K9s extremely slow since 0.9.3




    Describe the bug I've compared the versions 1a9a83b34cdd0c9b4e793ed6b4b5c16ea1a949a0 (0.9.3) and fbc25e6c4a49e31f8017089656aa7b841fe06a5f (0.11.0).

    Also cross checked with 0.10.10 which is also very slow.

    The latter is extremely slow compared to the first one. The latter takes ~4 seconds to switch the view and the first one takes ~0.5 seconds to switch the view.

    To Reproduce Steps to reproduce the behavior:

    1. Download both releases 0.9.3 and 0.11.0
    2. Run both binaries and compare speeds for switching views, deleting stuff, ...

    Expected behavior Maybe an improvement in speed, or a small decrease for features, but definitely not such a huge decrease in speed, this makes k9s kinda unusable, if I have to wait 5 seconds between each command

    Screenshots k9s

    Versions (please complete the following information):

    • OS: Arch Linux, kernel 5.4.8-arch1-1
    • K9s: 0.9.3, 0.11.0, 0.10.10
    • K8s: 1.16.2

    Additional context

    performance 
    opened by cwrau 39
  • Fails to start after a while

    Fails to start after a while




    Describe the bug running the version 0.19.5 I am having some issues appearing first after a while and then blocking completly the start of the tool (see at the botton of the issue for logs). So from a fresh cluster (docker-for-mac or K3d) eveything is running fine until I have some error messages appearing at the bottom like [list watch] access denied on resource "default":"v1/pods" Then if I quit K9s and start to relaunch it it fails with the logs below. I am doing some experiments with a webhook admission controller so I wonder if this could be related. If I delete my cluster and start a fresh one the issue disapear and come later somehow.

    To Reproduce it is hard to describe some steps, I am playing with Kyvero ClusterPolicies but and this issu happen's after a while

    Expected behavior not to crash

    Screenshots

    Versions (please complete the following information): On MacOs 10.15.3 It fails on both Docker-for-Desktop : 2.3.0.2 or on k3d version v1.7.0 k9s version: Version: 0.19.5 Commit: 9f1b099e290f6e73d7dead475b34a180a18eb9a5 Date: 2020-05-15T22:35:38Z Additional context start logs

    3:40PM INF 🐶 K9s starting up...
    3:40PM DBG Active Context "k3s-default"
    3:40PM DBG Connecting to API Server https://localhost:6443
    3:40PM DBG RESETING CON!!
    3:40PM INF ✅ Kubernetes connectivity
    3:40PM DBG [Config] Saving configuration...
    3:40PM INF No context specific skin file found -- /Users/myname/.k9s/k3s-default_skin.yml
    3:40PM DBG CURRENT-NS "" -- No active namespace specified
    3:40PM INF No namespace specified using cluster default namespace
    3:40PM DBG Factory START with ns `""
    3:40PM DBG Connecting to API Server https://localhost:6443
    3:40PM WRN   Dial Failed! error="Post \"https://localhost:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews\": context deadline exceeded"
    3:40PM WRN Fail CRDs load error="Post \"https://localhost:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews\": context deadline exceeded"
    3:40PM DBG SkinWatcher watching `/Users/myname/.k9s/skin.yml
    3:40PM DBG CustomView watching `/Users/myname/.k9s/views.yml
    3:40PM WRN   Dial Failed! error="Post \"https://localhost:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews\": context deadline exceeded"
    3:40PM ERR Saved command load failed. Loading default view error="Post \"https://localhost:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews\": context deadline exceeded"
    3:40PM ERR Boom! [list watch] access denied on resource "default":"v1/pods"
    3:40PM ERR goroutine 1 [running]:
    runtime/debug.Stack(0x4195040, 0x2c1c703, 0x0)
    	runtime/debug/stack.go:24 +0x9d
    github.com/derailed/k9s/cmd.run.func1()
    	github.com/derailed/k9s/cmd/root.go:73 +0x11d
    panic(0x2936e00, 0xc00043b450)
    	runtime/panic.go:969 +0x166
    github.com/derailed/k9s/cmd.run(0x41753c0, 0xc00000da80, 0x0, 0x2)
    	github.com/derailed/k9s/cmd/root.go:89 +0x1ef
    github.com/spf13/cobra.(*Command).execute(0x41753c0, 0xc00004c0d0, 0x2, 0x2, 0x41753c0, 0xc00004c0d0)
    	github.com/spf13/[email protected]/command.go:846 +0x29d
    github.com/spf13/cobra.(*Command).ExecuteC(0x41753c0, 0x0, 0x0, 0x0)
    	github.com/spf13/[email protected]/command.go:950 +0x349
    github.com/spf13/cobra.(*Command).Execute(...)
    	github.com/spf13/[email protected]/command.go:887
    github.com/derailed/k9s/cmd.Execute()
    	github.com/derailed/k9s/cmd/root.go:64 +0x2d
    main.main()
    	github.com/derailed/k9s/main.go:27 +0x1a6
    
    question 
    opened by sgandon 37
  • K9s is slow in large clusters

    K9s is slow in large clusters




    Is your feature request related to a problem? Please describe. Im trying to use k9s for my work and i was having issues with k9s being so slow when connecting to large(2k deployments and around 4k pods) cluster.(k8s version v1.14) I have set the refresh time to 10 seconds but it hasnt changed anything. With kubectl(v1.17 locally) command it would take around 2 seconds to retrive all pods or deployments but launcing k9s and waiting even more than 10 seconds.

    Describe the solution you'd like Ive read the doc files both in github and in website and i think it may related to k8s version(not sure how to debug this though) It is said that k9s would work best with latest version of k8s but the truth is in production level k8s will not always be that latest rather it would be a couple of version behind because nobody wants to mess with a working system.It may be also good to support earlier release of k8s or perhaps point out the releases that would work best with that specific k8s version.

    performance 
    opened by fazilhero 30
  • k9s (0.6.x) very very slow on Mac

    k9s (0.6.x) very very slow on Mac




    Describe the bug The 0.6.x versions of k9s are very slow on my mac. So slow that an arrow key or command take more than 10 seconds to respond.

    To Reproduce Steps to reproduce the behavior:

    1. Was running 0.5.2 with no issues
    2. brew upgrade k9s
    3. Started k9s with exact same k8s configuration
    4. So slow... 😟

    Expected behavior No performance regression

    Versions (please complete the following information):

    • OS: Mac OSX
    • K9s: 0.6.0 and 0.6.1
    • K8s: 1.12.x

    Additional context Upgraded from 0.5.2 to 0.6.0 (and later to 0.6.1) using brew upgrade

    bug 
    opened by eldada 29
  • Occasional hang requiring kill -9

    Occasional hang requiring kill -9

    Occasionally, k9s on Linux Fedora 4.20.16-200.fc29.x86_64 hangs. I am unable to input any data into k9s, nor even use Ctrl-C. At this point I have to use kill -9 from another terminal, at which point I can then start k9s again. The terminal itself is not frozen because once the kill -9 is done, the terminal shows k9s as killed, and k9s can be run again in the same terminal.

    I don't know how to reproduce this consistently. If there are some debugging steps I can take the next time this freeze happens, let me know. Perhaps a gdb thread dump?

    Versions (please complete the following information):

    • OS: Fedora 26, kernel 4.20.16-200.fc29.x86_64, terminal is Konsole
    • K9s [e.g. 0.1.0]: 0.5.0 (currently running 0.5.1, will see if it happens again)
    • K8s [e.g. 1.11.0]: 1.12.4
    bug 
    opened by rocketraman 29
  • K9s running very slowly when opening Secrets in namespace with lots of secrets

    K9s running very slowly when opening Secrets in namespace with lots of secrets




    Describe the bug K9s slows down to the point where it is unusable when opening Secrets in a namespace with lots of secrets.

    I have a namespace with 163 secrets. Most of them are from Helm, tracking deployment versions. Opening that namespace and navigating to secrets slows K9s down so much that it is unusable. I have to terminate the terminal window and open a new one.

    To Reproduce Steps to reproduce the behavior:

    1. Open K9s
    2. Navigate to the namespace you want (in my case, I press 2)
    3. SHIFT+colon
    4. sec
    5. ENTER The list of secrets appear, but K9s is too slow to be useful anymore

    Expected behavior K9s doesn't slow down.

    Screenshots If applicable, add screenshots to help explain your problem.

    A video would be more useful, but I would need to redact a significant amount. If I have time this evening, I'll see if I can reproduce it on my cluster at home with something fake.

    Versions (please complete the following information):

    • OS: MacOS Mojave 10.14.6
    • K9s v0.7.12
    • K8s v1.12.7

    Additional context I'm on a corporate-managed laptop with antivirus and firewall junk so if nobody is able to reproduce that may be it, but I hope not...

    Seems like ~50 secrets is when K9s starts to get bogged down a little, and towards ~100 secrets it starts getting really slow.

    bug question 
    opened by RothAndrew 22
  • System color mappings from terminal emulator are not respected

    System color mappings from terminal emulator are not respected




    Describe the bug Prior to 0.8.0, the skin colors were respecting my terminal emulator's color scheme (for the first 16 colors, i.e. system colors). Now they are not (see screenshots, left: 0.7.13, right: 0.8.2, both using the same skin file below). My terminal emulator is guake but it also happens with gnome-terminal.

    Screenshots: (image attached)

    Skin

    k9s:
      # General K9s styles
      body:
        fgColor: green
        bgColor: black
        logoColor: olive
      # ClusterInfoView styles.
      info:
        fgColor: white
        sectionColor: green
      frame:
        # Borders styles.
        border:
          fgColor: white
          focusColor: green
        # MenuView attributes and styles.
        menu:
          fgColor: white
          keyColor: purple
          # Used for favorite namespaces
          numKeyColor: purple
        # CrumbView attributes for history navigation.
        crumbs:
          fgColor: black
          bgColor: green
          activeColor: olive
        # Resource status and update styles
        status:
          newColor: white
          modifyColor: olive
          addColor: white
          errorColor: maroon
          highlightColor: teal
          killColor: purple
          completedColor: gray
        # Border title styles.
        title:
          fgColor: teal
          bgColor: black
          highlightColor: olive
          counterColor: white
          filterColor: green
      # TableView attributes.
      table:
        fgColor: white
        bgColor: black
        cursorColor: olive
        # Header row styles.
        header:
          fgColor: white
          bgColor: black
          sorterColor: orange
      views:
        # YAML info styles.
        yaml:
          keyColor: teal
          colonColor: white
          valueColor: white
        # Logs styles.
        logs:
          fgColor: white
          bgColor: black
    
    bug 
    opened by Gerrit-K 20
  • Can't view logs anymore

    Can't view logs anymore

    Latest macOS, latest k9s. When trying to view pod logs I see

    exit status 1
    

    printed at the bottom

    question 
    opened by dodalovic 19
  • k9s container shell broken since 0.24.3 for Windows 10 any shell (windows terminal, cmd.exe, cmder.exe, bash.exe)

    k9s container shell broken since 0.24.3 for Windows 10 any shell (windows terminal, cmd.exe, cmder.exe, bash.exe)




    Describe the bug When I open a shell in any container, k9s does not show a cursor, and the whole k9s layout is broken after exiting the shell.

    To Reproduce Steps to reproduce the behavior:

    1. Go to ':pods'
    2. Click on 'any pod'
    3. Press 's'
    4. See error
    5. Enter the exit command
    6. See the broken layout; the cursor does not reappear after exiting k9s

    Expected behavior The shell works correctly.

    Screenshots

    v0.24.4 video https://recordit.co/ADY6TrH0Bq


    v0.24.2 video on same machine and settings https://recordit.co/laPW7NJAp0


    Versions (please complete the following information):

    • OS: Windows 10 1909
    • K9s: 0.24.3+
    • kubectl: 1.20.5

    Additional context It is easy to reproduce, with the same behavior (lost cursor, broken layout), inside cmd.exe, Windows Terminal 1.6 and 1.7 preview, cygwin bash, cmder, and clink

    bug 
    opened by Slach 19
  • View logs quickly scrolls through entire log when initially loading

    View logs quickly scrolls through entire log when initially loading




    Describe the bug When entering the log view, the screen loads and then quickly scrolls through all the results, which can take some time if there are a lot of logs. Prior versions would just pop into the screen at the end and allow you to scroll back up.

    To Reproduce Run a pod with a lot of log output. View the pod logs in k9s.

    Expected behavior The screen should load fairly quickly, positioned at the bottom, and allow scrolling backwards.

    Versions (please complete the following information):

    • OS: 10.14.6
    • K9s: 0.22.0
    • K8s: 1.18.6

    Additional context I retested this in 0.21.3 and it does not have this behavior. It seems new in 0.22.0.

    question 
    opened by longwa 18
  • k9s - popeye run instructions

    k9s - popeye run instructions




    Describe the bug Hello, what are the prerequisite steps to run popeye from k9s? I can run popeye fine standalone and get results, as well as running it as a cluster job and looking at the finished pod logs. But when trying to run popeye from k9s with the command popeye, it always returns an empty list (Popeye(Score 0 -- n/a)[0]), so I am not sure if I'm missing something. I tried running k9s with the -l debug flag, but nothing interesting pops up in the log file.

    To Reproduce Steps to reproduce the behavior:

    1. (Note I can run latest popeye as standalone from path or installed as cron job inside cluster)
    2. Run command 'popeye' from k9s
    3. Empty popeye result log returned

    Expected behavior The latest popeye scan result (or a list of scan results) is displayed, with the ability to show scan result details.


    Versions (please complete the following information):

    • OS: Linux
    • K9s: v0.24.14
    • Popeye: 0.9.7
    • K8s: v1.21.1


    opened by jolet 1
  • make `mark` advance to the next line

    make `mark` advance to the next line




    Is your feature request related to a problem? Please describe. I noticed that marking a number of resources takes a lot of key strokes in k9s. I'd like to mark 3 resources (cert-manager-webhook, whoami, external-dns), one is on the current line, the other two are located two lines down from the current line, something like this with < > indicating the current line:

    NAMESPACE   NAME                                             TYPE
    cert-manager   cert-manager                                  ClusterIP
    <cert-manager  cert-manager-webhook                          ClusterIP>
    cert-manager   cert-manager-webhook-hetzner                  ClusterIP
    default        whoami                                        ClusterIP
    external-dns   external-dns                                  ClusterIP
    

    In my understanding the shortest path to marking the resources takes 6 key strokes (Space, Down, Down, Space, Down, Space).

    Describe the solution you'd like I propose that k9s advances the cursor to the next resource once a resource has been marked. This would reduce the number of key strokes to 4 (Space, Down, Space, Space).

    Describe alternatives you've considered The mark range feature serves a similar purpose but would not be faster, it would take even more key strokes if the Ctrl modifier is counted as a key stroke (Space, Down, Down, Space, Down, Ctrl-Space)

    Additional context However, one thing to consider is when the next resource to select is above the currently selected resource. In this case an additional key stroke (Up) is required to navigate to it. This could be mitigated by providing a Shift-Space binding in addition to Space, that would advance to the previous rather than the next resource.

    opened by jceb 0
  • k9s does not remember last view I was in when switching contexts

    k9s does not remember last view I was in when switching contexts




    Describe the bug When I switch context (outside of k9s) and open k9s, the last view I was in is not restored. k9s starts in the Pods(all) view.

    To Reproduce Steps to reproduce the behavior:

    1. Exit current k9s in a known view
    2. Switch context and open k9s
    3. Exit k9s and switch context back
    4. Open k9s and see that it falls back to the Pods(all) view rather than the last known view

    Expected behavior k9s opens the last known view, as recorded in config.yml


    Versions (please complete the following information):

    • OS: Mac OS (latest)
    • K9s: v0.24.14
    • K8s: 1.17 (and newer)

    Additional context I deleted the contexts from config.yml to try and resolve this. It did not help.

    question 
    opened by eldada 3
  • Bump github.com/derailed/popeye from 0.9.0 to 0.9.6

    Bump github.com/derailed/popeye from 0.9.0 to 0.9.6

    Bumps github.com/derailed/popeye from 0.9.0 to 0.9.6.

    Release notes

    Sourced from github.com/derailed/popeye's releases.

    v0.9.6

    Release v0.9.6

    Notes

    Thank you to all that contributed with flushing out issues and enhancements for Popeye! I'll try to mark some of these issues as fixed. But if you don't mind grab the latest rev and see if we're happier with some of the fixes! If you've filed an issue please help me verify and close. Your support, kindness and awesome suggestions to make Popeye better is as ever very much noticed and appreciated!

    This project offers a GitHub Sponsor button (over here 👆). As you well know this is not pimped out by big corps with deep pockets. If you feel Popeye is saving you cycles diagnosing potential cluster issues please consider sponsoring this project!! It does go a long way in keeping our servers lights on and beers in our fridge.

    Also if you dig this tool, please make some noise on social! @​kitesurfer


    Maintenance Release!


    Resolved Bugs/PRs

    • [Issue #188](derailed/popeye#188) Can't run Popeye: No resource meta found for networking.k8s.io/v1/ingresses. With Feelings!
    • [Issue #178](derailed/popeye#178) Custom handling of client-go/rest warnings. Sending to logger instead

      © 2020 Imhotep Software LLC. All materials licensed under Apache v2.0

    v0.9.5

    Release v0.9.5



    Resolved Bugs/PRs

    • [Issue #163](derailed/popeye#163) popeye 0.9.0 with K8S 1.21.0 bug on PodDisruptionBudget - Wrong default API - With Feelings!

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies go 
    opened by dependabot[bot] 0
  • Bump github.com/gdamore/tcell/v2 from 2.3.1 to 2.4.0

    Bump github.com/gdamore/tcell/v2 from 2.3.1 to 2.4.0

    Bumps github.com/gdamore/tcell/v2 from 2.3.1 to 2.4.0.

    Commits
    • 5d53415 Fix links to tslocum's packages. Add link to tutorial from readme.
    • 0de353b Convert documentation to Markdown.
    • b60a903 Add Screen.ChannelEvents v2 (#465)
    • 7946eb8 Wait for output to drain.
    • 6582146 Close the tty device when finishing.
    • 15d485c Add a stdin version of Tty, and handle unset terminal sizes sanely.
    • da8f206 Reset colors to default on suspend.
    • b7e369f Add aretext to the list of examples in the README
    • 0fb69ae Add support for the foot terminal
    • 4f213fd Make sun-color 256 color by default, and just drop the sun-256color.
    • Additional commits viewable in compare view

    dependencies go 
    opened by dependabot[bot] 0
  • Bump golang from 1.16.5-alpine3.13 to 1.16.6-alpine3.13

    Bump golang from 1.16.5-alpine3.13 to 1.16.6-alpine3.13

    Bumps golang from 1.16.5-alpine3.13 to 1.16.6-alpine3.13.

    dependencies docker 
    opened by dependabot[bot] 0
  • Change text color for selected button

    Change text color for selected button

    (Screenshot attached: 2021-07-19, 18:31)


    It's more a question than a feature request.

    When I want to scale my statefulset I get a dialog, as soon as I select one of the buttons at the bottom I can't read the text anymore.

    Is there a possibility to change the color of the selected button in the dialog screen?

    Regards, Manuel

    opened by mguggi 0
  • Remove current state from config.yml

    Remove current state from config.yml




    Is your feature request related to a problem? Please describe. If dotfiles are managed via git, config.yml changes every time k9s is opened. If it contained only the client defaults, it would change only when you want new defaults.

    Describe the solution you'd like Move the current state out of config.yml, perhaps into a separate state.yml.
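    A minimal sketch of what such a split might look like. The file layout and the exact key placement are hypothetical here (k9s currently keeps keys like currentContext and currentCluster inside config.yml), shown only to illustrate the proposal:

```yaml
# ~/.k9s/config.yml -- stable client defaults only; safe to track in git
k9s:
  refreshRate: 2
  readOnly: false

# ~/.k9s/state.yml -- mutable runtime state; add to .gitignore
# (hypothetical file: these keys currently live in config.yml)
currentContext: dev
currentCluster: dev
activeView: po
```

    With this split, the git-tracked file would only change when the user deliberately edits a default.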

    opened by NemesisRE 0
  • K9s does not exit when ssh connection is closed (using up all system ram after some time)

    K9s does not exit when ssh connection is closed (using up all system ram after some time)




    Describe the bug K9s does not exit when ssh connection is closed (using up all system ram after some time).

    To Reproduce Steps to reproduce the behavior:

    1. ssh to server
    2. start k9s
    3. kill console ssh is started from
    4. ssh exits, but k9s is still running and uses more and more RAM until the node goes down.

    Expected behavior k9s should exit when ssh exits; perhaps SIGHUP is being ignored?

    Versions (please complete the following information):

    • OS: linux
    • K9s: 0.24.10
    • K8s: v1.19.12


    opened by gebi 0
  • Selecting an item should move the marker to the next one in the list.

    Selecting an item should move the marker to the next one in the list.

    Describe the bug All items are selectable with the spacebar. When selecting an item, the "cursor" should move to the next item, so bulk selection can be done easily with multiple presses of the spacebar key.

    To Reproduce Steps to reproduce the behaviour:

    1. Go to whatever view, e.g. "pods"
    2. Select a pod by pressing the spacebar
    3. Press the spacebar once more

    Expected behaviour Two pods should be selected

    Actual behaviour No second pod is selected, because pressing the spacebar twice without moving the marker simply toggles the selection off again

    Versions (please complete the following information):

    • K9s: 0.24.14 (latest)
    opened by damyan 0
Releases(v0.24.14)
Owner
Fernand Galiana
Owner of Imhotep Software a consultancy specializing in architecture, software development and corporate training for GO and Kubernetes