k0s - Zero Friction Kubernetes

Overview

k0s is an all-inclusive Kubernetes distribution with all the required bells and whistles preconfigured to make building a Kubernetes cluster a matter of just copying an executable to every host and running it.

Key Features

  • Packaged as a single static binary
  • Self-hosted, isolated control plane
  • Variety of storage backends: etcd, SQLite, MySQL (or any MySQL-compatible database), PostgreSQL
  • Elastic control-plane
  • Vanilla upstream Kubernetes
  • Supports custom container runtimes (containerd is the default)
  • Supports custom Container Network Interface (CNI) plugins (kube-router is the default)
  • Supports x86-64, ARM64 and ARMv7

Try k0s

If you'd like to try k0s, please jump to our:

  • Quick Start Guide - Create a full Kubernetes cluster with a single node that includes both the controller and the worker (see the command sketch after this list).
  • Install using k0sctl - Deploy and upgrade multi-node clusters with one command.
  • NanoDemo - Watch a .gif recording of how to create a k0s instance.
  • Run k0s in Docker - Run k0s controllers and workers in containers.
  • For docs, tutorials, and other k0s resources, see the docs main page.
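
As a rough sketch of the single-node flow described in the Quick Start Guide (illustrative only; the exact commands and flags can differ between k0s versions, so follow the guide itself):

# Download the latest k0s release binary (the get.k0s.sh script installs it to /usr/local/bin)
curl -sSLf https://get.k0s.sh | sudo sh

# Install k0s as a single-node controller that also runs workloads, then start the service
sudo k0s install controller --single
sudo k0s start

# Check the service and the node using the bundled kubectl
sudo k0s status
sudo k0s kubectl get nodes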

Join the Community

If you'd like to help build k0s, please check out our guide to Contributing and our Code of Conduct.

Motivation

We have seen a gap between the host OS and the Kubernetes that runs on top of it: how do you ensure the two keep working together when they are upgraded independently of each other? Who is responsible for vulnerabilities or performance issues originating from the host OS that affect the Kubernetes stack on top?

k0s is fully self-contained. It is distributed as a single binary with no host OS dependencies besides the kernel, so any vulnerability or performance issue can be fixed directly in k0s.

We have seen Kubernetes deployments with only partial FIPS security compliance: how do you ensure security compliance for critical applications if only part of the system is FIPS compliant?

The k0s core, all of its bundled host OS dependencies, and the components on top can be compiled and packaged as a 100% FIPS-compliant distribution using a proper toolchain.

We have seen Kubernetes with cumbersome lifecycle management, high minimum system requirements, odd host OS and infrastructure restrictions, and/or the need to use different distributions to meet different use cases.

k0s is designed to be lightweight at its core. It comes with a tool to automate cluster lifecycle management, works on any host OS and infrastructure, and can be extended to cover use cases such as edge, IoT, telco, public clouds, private data centers, and hybrid and hyperconverged cloud applications, all without sacrificing pure Kubernetes compliance or the amazing developer experience.

Other Features

  • Kubernetes 1.20, 1.21
  • Container Runtime:
    • containerd (default)
    • Custom (bring-your-own)
  • Control plane storage options (see the configuration sketch after this list):
    • etcd (default for multi-node clusters)
    • SQLite (default for single-node clusters)
    • PostgreSQL (external)
    • MySQL (external)
  • CNI providers
    • Kube-Router (default)
    • Calico
    • Custom (bring-your-own)
  • Control plane isolation:
    • Fully isolated (default)
    • Tainted worker
  • Control plane - node communication
    • Konnectivity service (default)
  • CoreDNS
  • Metrics-server
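
The control plane storage backend and the CNI provider above are selected through the k0s configuration file. Below is a rough, hedged sketch of what that could look like (the field names follow the k0s.k0sproject.io/v1beta1 ClusterConfig schema, but the MySQL data source, file path, and exact values are illustrative placeholders; check the configuration reference for your k0s version):

# Hypothetical k0s.yaml selecting MySQL-backed control plane storage (via kine) and Calico as the CNI
cat <<'EOF' | sudo tee /etc/k0s/k0s.yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  storage:
    type: kine        # kine replaces the default etcd and talks to an external SQL database
    kine:
      dataSource: "mysql://k0s:password@tcp(mysql.example.com:3306)/k0s"  # placeholder DSN
  network:
    provider: calico  # kube-router is the default; "custom" lets you bring your own CNI
EOF

# Point the controller at the custom configuration
sudo k0s install controller -c /etc/k0s/k0s.yaml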

Status

k0s is ready for production (starting from v1.21.0+k0s.0). Since the initial release of k0s back in November 2020, we have made numerous releases, improved stability, added new features, and, most importantly, listened to our users and community in an effort to create the most modern Kubernetes product out there. Active development continues to make k0s even better.

Scope

While some Kubernetes distros package everything and the kitchen sink, k0s tries to minimize the number of "add-ons" it bundles. Instead, we aim to provide a robust and versatile "base" for running Kubernetes in various setups. Of course we will provide some ways to easily control and set up various "add-ons", but we will not bundle many of those into k0s itself. There are a couple of reasons why we think this is the correct way:

  • Many of the add-ons, such as ingresses, service meshes, storage, etc., are VERY opinionated. We try to build this base with fewer opinions. :D
  • Keeping up with upstream releases for many external add-ons is very maintenance heavy. Shipping old versions does not make much sense either.

With strong enough arguments we might take in new add-ons, but in general they should be essential to the "core" of k0s.

Build

k0s can be built in three different ways:

Fetch official binaries (except konnectivity-server, which is built from source):

make EMBEDDED_BINS_BUILDMODE=fetch

Build Kubernetes components from source as static binaries (requires Docker):

make EMBEDDED_BINS_BUILDMODE=docker

Build k0s without any embedded binaries (requires that Kubernetes binaries are pre-installed on the runtime system):

make EMBEDDED_BINS_BUILDMODE=none

Builds can be done in parallel:

make -j$(nproc)

Smoke test

To run a smoke test after build:

make check-basic
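
Putting the steps above together, a from-source build followed by the smoke test might look like this (assuming Docker and GNU make are available):

make EMBEDDED_BINS_BUILDMODE=docker -j$(nproc)
make check-basic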

Comments
  • Installation k0s installation fails on Oracle Linux 7

    Linux lxdocapa16 3.10.0-1160.11.1.el7.x86_64 #1 SMP Tue Dec 15 11:58:45 PST 2020 x86_64 x86_64 x86_64 GNU/Linux

    2021-08-30 09:39:50.488567 I | starting debug server under :6060

    INFO[2021-08-30 09:39:50] no config file given, using defaults
    ...
    Error: failed to create controller users: user: lookup username etcd: input/output error
    user: lookup username kube-apiserver: input/output error
    user: lookup username konnectivity-server: input/output error
    user: lookup username kube-scheduler: input/output error
    2021-08-30 09:39:50.492615 I | failed to create controller users: user: lookup username etcd: input/output error
    user: lookup username kube-apiserver: input/output error
    user: lookup username konnectivity-server: input/output error
    user: lookup username kube-scheduler: input/output error
    

    Based on the documentation:

    Though the Quick Start material is written for Debian/Ubuntu, you can use it for any Linux distro that is running either a Systemd or OpenRC init system.

    OL 7 is running systemd, so this requirement is OK.

    The other requirements are also fulfilled:

    • Host operating system -->Linux (kernel v3.10 or later)
    • Architecture --> x86-64

    Maybe the issue is that OL7 is not supported?

    bug 
    opened by wheestermans31 41
  • K0s Installation issue

    Hi, I'm trying to install k0s in two modes; installation ran in a loop without a break in both modes. 1. Download the k0s-v0.7.0-amd64 release and then run the command below:

    ./k0s-v0.7.0-amd64 server

    2. This is the second mode:

    curl -sSLf get.k0s.sh | sh

    k0s default-config > K0s.yaml

    K0s server

    E1119 16:22:14.129973 3741 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request W1119 16:22:14.144906 3741 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService E1119 16:22:14.158842 3741 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request I1119 16:22:15.331960 3741 request.go:645] Throttling request took 1.168666422s, request: GET:https://localhost:6443/api/v1/replicationcontrollers?limit=1 INFO[2020-11-19 16:22:19] current config matches existing, not gonna do anything component=kubeproxy INFO[2020-11-19 16:22:19] current config matches existing, not gonna do anything component=coredns INFO[2020-11-19 16:22:19] current config matches existing, not gonna do anything component=calico INFO[2020-11-19 16:22:21] I1119 16:22:21.293906 3772 request.go:645] Throttling request took 1.047690596s, request: GET:https://localhost:6443/apis/certificates.k8s.io/v1?timeout=32s component=kube-controller-manager INFO[2020-11-19 16:22:22] W1119 16:22:22.145217 3772 garbagecollector.go:642] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] component=kube-controller-manager INFO[2020-11-19 16:22:24] E1119 16:22:24.095814 3772 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request component=kube-controller-manager E1119 16:22:24.540955 3741 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request W1119 16:22:24.555543 3741 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService E1119 16:22:24.570493 3741 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request I1119 16:22:25.742798 3741 request.go:645] Throttling request took 1.168060948s, request: GET:https://localhost:6443/api/v1/persistentvolumeclaims?limit=1 INFO[2020-11-19 16:22:29] current config matches existing, not gonna do anything component=kubeproxy INFO[2020-11-19 16:22:29] current config matches existing, not gonna do anything component=coredns INFO[2020-11-19 16:22:29] current config matches existing, not gonna do anything component=calico E1119 16:22:34.952734 3741 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request W1119 16:22:34.969553 3741 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService E1119 16:22:34.984388 3741 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request INFO[2020-11-19 16:22:35] I1119 16:22:35.127204 3764 client.go:360] parsed scheme: "passthrough" component=kube-apiserver INFO[2020-11-19 16:22:35] I1119 16:22:35.127280 3764 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } component=kube-apiserver INFO[2020-11-19 16:22:35] I1119 16:22:35.127302 3764 clientconn.go:948] ClientConn switching balancer to "pick_first" component=kube-apiserver I1119 16:22:36.154436 3741 request.go:645] Throttling request took 1.165143012s, 
request: GET:https://localhost:6443/api/v1/replicationcontrollers?limit=1 INFO[2020-11-19 16:22:39] current config matches existing, not gonna do anything component=kubeproxy INFO[2020-11-19 16:22:39] current config matches existing, not gonna do anything component=coredns INFO[2020-11-19 16:22:39] current config matches existing, not gonna do anything component=calico E1119 16:22:45.367790 3741 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request W1119 16:22:45.392337 3741 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService E1119 16:22:45.404739 3741 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request I1119 16:22:46.577008 3741 request.go:645] Throttling request took 1.167891545s, request: GET:https://localhost:6443/api/v1/secrets?limit=1 INFO[2020-11-19 16:22:49] current config matches existing, not gonna do anything component=kubeproxy INFO[2020-11-19 16:22:49] current config matches existing, not gonna do anything component=coredns INFO[2020-11-19 16:22:49] current config matches existing, not gonna do anything component=calico INFO[2020-11-19 16:22:53] I1119 16:22:53.795733 3772 request.go:645] Throttling request took 1.04805695s, request: GET:https://localhost:6443/apis/batch/v1beta1?timeout=32s component=kube-controller-manager

    Please advise. Thanks

    opened by zioncohen123 30
  • Issue cluster - component - konnectivity

    I have an up-and-running cluster with three controllers and two workers.

    sh-4.2# /usr/local/bin/k0s kubectl get node
    NAME         STATUS   ROLES    AGE   VERSION
    lxdocapa22   Ready    <none>   75m   v1.22.1+k0s
    lxdocapa23   Ready    <none>   71m   v1.22.1+k0s
    sh-4.2# /usr/local/bin/k0s kubectl -n kube-system get pod -o wide
    NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
    coredns-5ccbdcc4c4-8d9fz          1/1     Running   0          45m   10.244.1.25      lxdocapa23   <none>           <none>
    coredns-5ccbdcc4c4-trgvz          1/1     Running   0          45m   10.244.0.41      lxdocapa22   <none>           <none>
    konnectivity-agent-8zbw4          1/1     Running   0          41m   10.244.0.47      lxdocapa22   <none>           <none>
    konnectivity-agent-lfdq7          1/1     Running   0          41m   10.244.1.32      lxdocapa23   <none>           <none>
    kube-proxy-76fwz                  1/1     Running   0          45m   10.100.114.101   lxdocapa23   <none>           <none>
    kube-proxy-7m8mw                  1/1     Running   0          45m   10.100.114.100   lxdocapa22   <none>           <none>
    kube-router-7jt6l                 1/1     Running   0          44m   10.100.114.101   lxdocapa23   <none>           <none>
    kube-router-kzgfz                 1/1     Running   0          44m   10.100.114.100   lxdocapa22   <none>           <none>
    metrics-server-6bd95db5f4-vckcb   1/1     Running   0          45m   10.244.0.42      lxdocapa22   <none>           <none>
    sh-4.2# 
    

    The kubectl command is not giving me any issues; from that point of view it looks fine.

    But kubectl is fast on one of the controllers and very slow on the other two, probably fast on the leader of the cluster.

    The only thing I see is the following logging related to the konnectivity component.

    Logs on the node that is fast:

    Sep  7 17:08:12 lxdocapa20 k0s: time="2021-09-07 17:08:12" level=info msg="E0907 17:08:12.677469   18582 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:08:12 lxdocapa20 k0s: time="2021-09-07 17:08:12" level=info msg="E0907 17:08:12.678119   18582 server.go:736] \"could not get frontend client\" err=\"can't find connID 79 in the frontends[a8980db4-e38e-47be-a189-c73aef053331]\" connectionID=79" component=konnectivity
    
    

    But on the ones that are slow I see the following logs:

    Sep  7 17:05:46 lxdocapa17 k0s: time="2021-09-07 17:05:46" level=info msg="E0907 17:05:46.132937   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:46 lxdocapa17 k0s: time="2021-09-07 17:05:46" level=info msg="E0907 17:05:46.132950   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:46 lxdocapa17 k0s: time="2021-09-07 17:05:46" level=info msg="E0907 17:05:46.132995   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:46 lxdocapa17 k0s: time="2021-09-07 17:05:46" level=info msg="E0907 17:05:46.132998   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:46 lxdocapa17 k0s: time="2021-09-07 17:05:46" level=info msg="E0907 17:05:46.133012   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="I0907 17:05:51.133034   27623 trace.go:205] Trace[641514455]: \"Proxy via grpc protocol over uds\" address:10.97.148.61:443 (07-Sep-2021 17:05:46.131) (total time: 5001ms):" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="Trace[641514455]: [5.001040281s] [5.001040281s] END" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="I0907 17:05:51.133057   27623 trace.go:205] Trace[787933724]: \"Proxy via grpc protocol over uds\" address:10.97.148.61:443 (07-Sep-2021 17:05:46.131) (total time: 5001ms):" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="Trace[787933724]: [5.001144092s] [5.001144092s] END" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="I0907 17:05:51.133097   27623 trace.go:205] Trace[679539168]: \"Proxy via grpc protocol over uds\" address:10.97.148.61:443 (07-Sep-2021 17:05:46.131) (total time: 5001ms):" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="Trace[679539168]: [5.001097475s] [5.001097475s] END" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="I0907 17:05:51.133124   27623 trace.go:205] Trace[497685480]: \"Proxy via grpc protocol over uds\" address:10.97.148.61:443 (07-Sep-2021 17:05:46.131) (total time: 5001ms):" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="Trace[497685480]: [5.001187233s] [5.001187233s] END" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="I0907 17:05:51.133176   27623 trace.go:205] Trace[1558194979]: \"Proxy via grpc protocol over uds\" address:10.97.148.61:443 (07-Sep-2021 17:05:46.132) (total time: 5001ms):" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="Trace[1558194979]: [5.001150444s] [5.001150444s] END" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="E0907 17:05:51.138133   27623 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="E0907 17:05:51.139243   27648 server.go:413] \"Failed to get a backend\" err=\"No backend available\"" component=konnectivity
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="E0907 17:05:51.139271   27648 server.go:413] \"Failed to get a backend\" err=\"No backend available\"" component=konnectivity
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="E0907 17:05:51.139245   27648 server.go:413] \"Failed to get a backend\" err=\"No backend available\"" component=konnectivity
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="E0907 17:05:51.139252   27648 server.go:413] \"Failed to get a backend\" err=\"No backend available\"" component=konnectivity
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="E0907 17:05:51.139252   27648 server.go:413] \"Failed to get a backend\" err=\"No backend available\"" component=konnectivity
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="I0907 17:05:51.139497   27623 client.go:117] DialResp not recognized; dropped" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="I0907 17:05:51.139498   27623 client.go:117] DialResp not recognized; dropped" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="I0907 17:05:51.139559   27623 client.go:117] DialResp not recognized; dropped" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="I0907 17:05:51.139567   27623 client.go:117] DialResp not recognized; dropped" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="I0907 17:05:51.139494   27623 client.go:117] DialResp not recognized; dropped" component=kube-apiserver
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="E0907 17:05:51.139631   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="E0907 17:05:51.139631   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="E0907 17:05:51.139655   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="E0907 17:05:51.139668   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:51 lxdocapa17 k0s: time="2021-09-07 17:05:51" level=info msg="E0907 17:05:51.139681   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:52 lxdocapa17 k0s: time="2021-09-07 17:05:52" level=info msg="E0907 17:05:52.997037   27648 server.go:413] \"Failed to get a backend\" err=\"No backend available\"" component=konnectivity
    Sep  7 17:05:52 lxdocapa17 k0s: time="2021-09-07 17:05:52" level=info msg="I0907 17:05:52.997255   27623 client.go:117] DialResp not recognized; dropped" component=kube-apiserver
    Sep  7 17:05:52 lxdocapa17 k0s: time="2021-09-07 17:05:52" level=info msg="E0907 17:05:52.997373   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:54 lxdocapa17 k0s: time="2021-09-07 17:05:54" level=info msg="current cfg matches existing, not gonna do anything" component=kubeproxy
    Sep  7 17:05:54 lxdocapa17 k0s: time="2021-09-07 17:05:54" level=info msg="current cfg matches existing, not gonna do anything" component=coredns
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="I0907 17:05:56.139050   27623 trace.go:205] Trace[701055894]: \"Proxy via grpc protocol over uds\" address:10.97.148.61:443 (07-Sep-2021 17:05:51.138) (total time: 5000ms):" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="Trace[701055894]: [5.000532546s] [5.000532546s] END" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="I0907 17:05:56.139050   27623 trace.go:205] Trace[1382020849]: \"Proxy via grpc protocol over uds\" address:10.97.148.61:443 (07-Sep-2021 17:05:51.138) (total time: 5000ms):" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="Trace[1382020849]: [5.00075802s] [5.00075802s] END" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="I0907 17:05:56.139073   27623 trace.go:205] Trace[1290110452]: \"Proxy via grpc protocol over uds\" address:10.97.148.61:443 (07-Sep-2021 17:05:51.138) (total time: 5000ms):" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="Trace[1290110452]: [5.000770225s] [5.000770225s] END" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="I0907 17:05:56.139096   27623 trace.go:205] Trace[1456862550]: \"Proxy via grpc protocol over uds\" address:10.97.148.61:443 (07-Sep-2021 17:05:51.138) (total time: 5000ms):" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="Trace[1456862550]: [5.000653482s] [5.000653482s] END" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="I0907 17:05:56.139119   27623 trace.go:205] Trace[77502346]: \"Proxy via grpc protocol over uds\" address:10.97.148.61:443 (07-Sep-2021 17:05:51.138) (total time: 5000ms):" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="Trace[77502346]: [5.000738134s] [5.000738134s] END" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="E0907 17:05:56.146144   27623 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="E0907 17:05:56.147087   27648 server.go:413] \"Failed to get a backend\" err=\"No backend available\"" component=konnectivity
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="E0907 17:05:56.147106   27648 server.go:413] \"Failed to get a backend\" err=\"No backend available\"" component=konnectivity
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="E0907 17:05:56.147110   27648 server.go:413] \"Failed to get a backend\" err=\"No backend available\"" component=konnectivity
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="E0907 17:05:56.147107   27648 server.go:413] \"Failed to get a backend\" err=\"No backend available\"" component=konnectivity
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="E0907 17:05:56.147087   27648 server.go:413] \"Failed to get a backend\" err=\"No backend available\"" component=konnectivity
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="I0907 17:05:56.147250   27623 client.go:117] DialResp not recognized; dropped" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="I0907 17:05:56.147255   27623 client.go:117] DialResp not recognized; dropped" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="I0907 17:05:56.147287   27623 client.go:117] DialResp not recognized; dropped" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="I0907 17:05:56.147300   27623 client.go:117] DialResp not recognized; dropped" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="I0907 17:05:56.147257   27623 client.go:117] DialResp not recognized; dropped" component=kube-apiserver
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="E0907 17:05:56.147338   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="E0907 17:05:56.147337   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="E0907 17:05:56.147388   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="E0907 17:05:56.147395   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:56 lxdocapa17 k0s: time="2021-09-07 17:05:56" level=info msg="E0907 17:05:56.147431   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:57 lxdocapa17 k0s: time="2021-09-07 17:05:57" level=info msg="E0907 17:05:57.549191   27648 server.go:413] \"Failed to get a backend\" err=\"No backend available\"" component=konnectivity
    Sep  7 17:05:57 lxdocapa17 k0s: time="2021-09-07 17:05:57" level=info msg="I0907 17:05:57.549484   27623 client.go:117] DialResp not recognized; dropped" component=kube-apiserver
    Sep  7 17:05:57 lxdocapa17 k0s: time="2021-09-07 17:05:57" level=info msg="E0907 17:05:57.549664   27648 server.go:382] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
    Sep  7 17:05:57 lxdocapa17 k0s: time="2021-09-07 17:05:57" level=info msg="I0907 17:05:57.996948   27623 trace.go:205] Trace[1232079086]: \"Proxy via grpc protocol over uds\" address:10.97.148.61:443 (07-Sep-2021 17:05:52.996) (total time: 5000ms):" component=kube-apiserver
    Sep  7 17:05:57 lxdocapa17 k0s: time="2021-09-07 17:05:57" level=info msg="Trace[1232079086]: [5.000754682s] [5.000754682s] END" component=kube-apiserver
    
    

    Probably related...

    bug 
    opened by wheestermans31 29
  • LXC Ubuntu 20.10 Unable to bootstrap

    Version

    $ k0s version
    latest
    

    Platform LXC Ubuntu 20.10

    $ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 20.10
    Release:        20.10
    Codename:       groovy
    

    What happened? Unable to bootstrap the server.

    How To Reproduce: Downloaded the binary, made it executable, and generated a config file as per the docs.

    Expected behavior: To have a master cluster up and running.


    bug documentation need more info 
    opened by uhlhosting 25
  • worker fails to start after reboot

    Version

    v0.8.1
    

    Platform

    No LSB modules are available.
    Distributor ID:	Ubuntu
    Description:	Ubuntu 20.04.1 LTS
    Release:	20.04
    Codename:	focal
    

    What happened? The worker started and shows up as a node in the master. Then I added a crontab entry

    @reboot screen -dmS k0s k0s worker --token-file /root/join-token
    

    and then rebooted and it's not coming up anymore, see https://gist.githubusercontent.com/matti/f24e0f0080298e79d7c2c9e4500b5a89/raw/ae42398487051ccb7c3bd48c1ca5f153c6545cea/k0s-worker.txt

    bug area/worker component/kubelet component/containerd priority/P1 
    opened by matti 25
  • How do I configure storage class on existing k0s cluster

    When I run k0s kubectl get storageclass I notice it is not provisioned and I have to add it manually. I am a little unsure though about the exact procedure. The documentation here says k0s comes with OpenEBS installed.

    What I am unsure of is how to enable this installation in the config file.

    My existing config file has only this (I assume this is the default config file. Is this correct?):

    spec:
      api:
        externalAddress

    As per the guide I have stopped k0s (k0s stop), then amended the config file as follows:

    spec:
      api:
        externalAddress
      extensions:
        storage:
          type: openebs_local_storage

    Do I need to generate a new k0s.yaml and discard the existing one, as per the documentation?

    Do I also need to reinstall k0s? I am a bit unsure because the existing k0s.yaml seems to have very few config parameters compared to the sample given in the link.

    What am I missing ?

    EDIT:

    I have proceeded to generate a new k0s.yaml that uses the existing default settings. It turns out it creates 2 k0s.yaml files with the same content but different filenames (k0s.yaml and k0s.yaml.save). I have also made the changes described in the documentation to produce this:

    apiVersion: k0s.k0sproject.io/v1beta1
    kind: ClusterConfig
    metadata:
      creationTimestamp: null
      name: k0s
    spec:
      api:
        address: 10.XXX.XXX.XXX
        k0sApiPort: 9443
        port: 6443
        sans:
        - 10.XX.XX.XXX
        - 172.XX.XX.XXX
        - 10.XX.XX.XXX
        - 10.XX.XX.XXX
        - fe80::XXX:XXX:XXX:XXX
        - fe80::XXX:XXX:XXX:XXX
        - fe80::XXX:XXX:XXX:XXX
        - fe80::XXX:XXX:XXX:XXX
        - fe80::XXX:XXX:XXX:XXX
        - fe80::XXX:XXX:XXX:XXX
        tunneledNetworkingMode: false
      controllerManager: {}
      extensions:
        helm:
          charts: null
          repositories: null
        storage:
          create_default_storage_class: true
          type: openebs_local_storage
      images:
        calico:
          cni:
            image: docker.io/calico/cni
            version: v3.21.2
          kubecontrollers:
            image: docker.io/calico/kube-controllers

    After this change I proceeded to reinstall the cluster: k0s install controller -c /etc/k0s/k0s.yaml

    but I am getting an error: Error: failed to install k0s service: failed to install service: Init already exists: /etc/systemd/system/k0scontroller.service

    question 
    opened by Ed87 20
  • Pods cannot communicate between worker nodes

    k0s version v1.22.2+k0s.1

    Platform: The master and the 3 worker nodes run on Ubuntu 20.04 LTS.

    What happened? When deploying several Pods in the cluster, it appears that Pods running on one worker node cannot communicate with Pods running on another worker node.

    I am sure it is not a bug in k0s, but I cannot find my mistake.

    How to reproduce //Details of worker nodes

    $ kubectl get no -o wide
    NAME    STATUS   ROLES    AGE     VERSION       INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
    node2   Ready    <none>   7d      v1.22.2+k0s   192.168.64.3   <none>        Ubuntu 20.04.3 LTS   5.4.0-89-generic   containerd://1.5.7
    node3   Ready    <none>   7d      v1.22.2+k0s   192.168.64.4   <none>        Ubuntu 20.04.3 LTS   5.4.0-89-generic   containerd://1.5.7
    node4   Ready    <none>   3d23h   v1.22.2+k0s   192.168.64.5   <none>        Ubuntu 20.04.3 LTS   5.4.0-89-generic   containerd://1.5.7
    

    //Details of running Pods //One pod is running on node2 in the cluster: 'app-basic' is listening on port 8282 //One pod is running on node4 in the cluster: 'curl' is a basic busybox to test Pod-to-Pod communication

    $ kubectl get pods -o wide
    NAME                         READY   STATUS             RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
    app-basic-6dcbbc46c8-s822d   1/1     Running            0          22h   10.244.0.8   node2   <none>           <none>
    curl                         1/1     Running            0          23h   10.244.2.6   node4   <none>           <none>
    

    //From the worker node node2, where the Pod is deployed, connecting directly to the app-basic Pod is OK

    node2:~$ curl http://10.244.0.8:8282
    You''ve hit app-basic-6dcbbc46c8-s822d
    

    //From the worker node node4, connecting directly to the app-basic Pod is KO

    node4:~$ curl http://10.244.0.8:8282
    curl: (28) Failed to connect to 10.244.0.8 port 8282: Connection timed out
    

    Expected behavior //From the worker node node4, the call to the app-basic Pod should work like from node2.

    bug 
    opened by nleeuskadi 19
  • Issue exec and logs on pod

    I have created a k0s cluster with three managers and two workers.

    sh-4.2# /usr/local/bin/k0s kubectl get node
    NAME         STATUS   ROLES    AGE   VERSION
    lxdocapa22   Ready    <none>   71m   v1.22.1+k0s
    lxdocapa23   Ready    <none>   71m   v1.22.1+k0s
    
    

    I'm able to execute get commands successfully, but when I try to exec into a pod or get the logs of a pod, I always receive the error below:

    sh-4.2# /usr/local/bin/k0s kubectl -n kube-system get pod -A -o wide
    NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
    kube-system   coredns-5ccbdcc4c4-pvvnz          1/1     Running   0          69m   10.244.0.220     lxdocapa22   <none>           <none>
    kube-system   coredns-5ccbdcc4c4-vhbw6          1/1     Running   0          69m   10.244.0.217     lxdocapa22   <none>           <none>
    kube-system   konnectivity-agent-6wc9v          1/1     Running   0          43m   10.244.0.229     lxdocapa22   <none>           <none>
    kube-system   konnectivity-agent-7xsrw          1/1     Running   0          43m   10.244.1.191     lxdocapa23   <none>           <none>
    kube-system   kube-proxy-mqh22                  1/1     Running   0          69m   10.100.114.100   lxdocapa22   <none>           <none>
    kube-system   kube-proxy-rklmg                  1/1     Running   0          69m   10.100.114.101   lxdocapa23   <none>           <none>
    kube-system   kube-router-9pp54                 1/1     Running   0          69m   10.100.114.101   lxdocapa23   <none>           <none>
    kube-system   kube-router-xq8bf                 1/1     Running   0          69m   10.100.114.100   lxdocapa22   <none>           <none>
    kube-system   metrics-server-6bd95db5f4-jf58n   1/1     Running   0          69m   10.244.0.218     lxdocapa22   <none>           <none>
    
    **sh-4.2# /usr/local/bin/k0s kubectl -n kube-system logs -f coredns-5ccbdcc4c4-vhbw6  
    Error from server: Get "https://10.100.114.100:10250/containerLogs/kube-system/coredns-5ccbdcc4c4-vhbw6/coredns?follow=true": dial timeout, backstop**
    
    

    Port 10250 seems to be the kubelet, but it doesn't seem to reply.

    The cluster has been configured for HA:

    apiVersion: k0s.k0sproject.io/v1beta1
    kind: Cluster
    metadata:
      name: k0s
    spec:
      api:
        address: 10.100.114.94
        port: 6443
        k0sApiPort: 9443
        **externalAddress: k8s-k0s-test.toyota-europe.com**
        sans:
        - 10.100.114.94
    #    - 10.100.80.171
    #    - 10.100.13.119    
    #    - 10.100.8.112
    #    - 10.100.13.110
    #    - 172.17.0.1
        **- k8s-k0s-test.toyota-europe.com**
    ...
    
    bug 
    opened by wheestermans31 18
  • k0s on Raspbian

    Version

    $ k0s version
    

    Platform Which platform did you run k0s on?

    Distributor ID: Raspbian
    Description:    Raspbian GNU/Linux 10 (buster)
    Release:        10
    Codename:       buster
    

    What happened? cannot execute binary file: Exec format error

    How To Reproduce k0s version (on Raspbian)

    Expected behavior v0.12.1 (per documentation)

    Screenshots & Logs

    Additional context: I'm hoping that I don't have to reimage 6 RPi 4s from Raspbian to Ubuntu.

    TIA

    bug 
    opened by thisisbenwoo 17
  • k0s server --enable-worker failed

    Version

    $ k0s version
    

    k0s version: v0.8.1

    Platform: Ubuntu 18.04

    $ lsb_release -a
    lsb_release -a
    LSB Version:    core-9.20170808ubuntu1-noarch:security-9.20170808ubuntu1-noarch
    Distributor ID: Ubuntu
    Description:    Ubuntu 18.04.5 LTS
    Release:        18.04
    Codename:       bionic
    

    What happened?

    k0s server --enable-worker

    NFO[2020-12-14 13:25:42] E1214 13:25:42.090082 10249 remote_runtime.go:113] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.2": failed to pull image "k8s.gcr.io/pause:3.2": failed to pull and unpack image "k8s.gcr.io/pause:3.2": failed to resolve reference "k8s.gcr.io/pause:3.2": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.2": dial tcp 64.233.189.82:443: i/o timeout component=kubelet INFO[2020-12-14 13:25:42] E1214 13:25:42.090156 10249 kuberuntime_sandbox.go:69] CreatePodSandbox for pod "kube-proxy-x2qsd_kube-system(3d64d01a-75aa-4182-ad37-62033b3841f0)" failed: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.2": failed to pull image "k8s.gcr.io/pause:3.2": failed to pull and unpack image "k8s.gcr.io/pause:3.2": failed to resolve reference "k8s.gcr.io/pause:3.2": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.2": dial tcp 64.233.189.82:443: i/o timeout component=kubelet INFO[2020-12-14 13:25:42] E1214 13:25:42.090176 10249 kuberuntime_manager.go:741] createPodSandbox for pod "kube-proxy-x2qsd_kube-system(3d64d01a-75aa-4182-ad37-62033b3841f0)" failed: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.2": failed to pull image "k8s.gcr.io/pause:3.2": failed to pull and unpack image "k8s.gcr.io/pause:3.2": failed to resolve reference "k8s.gcr.io/pause:3.2": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.2": dial tcp 64.233.189.82:443: i/o timeout component=kubelet INFO[2020-12-14 13:25:42] E1214 13:25:42.090241 10249 pod_workers.go:191] Error syncing pod 3d64d01a-75aa-4182-ad37-62033b3841f0 ("kube-proxy-x2qsd_kube-system(3d64d01a-75aa-4182-ad37-62033b3841f0)"), skipping: failed to "CreatePodSandbox" for "kube-proxy-x2qsd_kube-system(3d64d01a-75aa-4182-ad37-62033b3841f0)" with CreatePodSandboxError: "CreatePodSandbox for pod "kube-proxy-x2qsd_kube-system(3d64d01a-75aa-4182-ad37-62033b3841f0)" failed: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.2": failed to pull image "k8s.gcr.io/pause:3.2": failed to pull and unpack image "k8s.gcr.io/pause:3.2": failed to resolve reference "k8s.gcr.io/pause:3.2": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.2": dial tcp 64.233.189.82:443: i/o timeout" component=kubelet INFO[2020-12-14 13:25:46] E1214 13:25:46.799269 10249 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized component=kubelet INFO[2020-12-14 13:25:47] current config matches existing, not gonna do anything component=coredns INFO[2020-12-14 13:25:47] current config matches existing, not gonna do anything component=calico INFO[2020-12-14 13:25:47] current config matches existing, not gonna do anything component=kubeproxy


    bug 
    opened by tscswcn 17
  • k0s running on Fedora 33 with containerd crashing

    Version

    $ k0s version
    v0.9.1
    

    Platform Which platform did you run k0s on?

    ➜  Downloads neofetch
    OS: Fedora 33 (Workstation Edition) x86_64
    Host: Oryx Pro oryp6
    Kernel: 5.10.7-200.fc33.x86_64
    Uptime: 52 mins
    Packages: 2218 (rpm), 58 (flatpak)
    Shell: zsh 5.8
    Resolution: 1920x1080
    DE: GNOME 3.38.3
    WM: Mutter
    WM Theme: Adwaita
    Theme: Adwaita [GTK2/3]
    Icons: Adwaita [GTK2/3]
    Terminal: gnome-terminal
    CPU: Intel i7-10875H (16) @ 5.100GHz
    GPU: Intel UHD Graphics
    GPU: NVIDIA GeForce RTX 2060 Mobile
    Memory: 4433MiB / 31977MiB
    
    
    ➜  Downloads uname -a
    Linux oryx-fedora 5.10.7-200.fc33.x86_64 #1 SMP Tue Jan 12 20:20:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    ➜  Downloads cat /etc/fedora-release 
    Fedora release 33 (Thirty Three)
    

    What happened? I downloaded k0s from the GitHub Releases page, made the binary executable (chmod u+x k0s-v0.9.1-amd64), then ran the commands to start up a single-node instance.

    • https://docs.k0sproject.io/v0.9.1/k0s-single-node/

    When running k0s server... with the other options, I get a ton of debug lines (~65k in 1 minute). I can query the instance with kubectl and get the nodes, but when I tried to apply some kube manifests, the pods were in a pending state for a long time. When described, the pods all said:

      Warning  FailedScheduling  24m                default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
    

    There is a ton of output and I'm not sure what to grab specifically, but I have included everything that contained error in the line.

    How To Reproduce Download k0s

    curl --location -O https://github.com/k0sproject/k0s/releases/download/v0.9.1/k0s-v0.9.1-amd64
    

    Make executable

    chmod u+x k0s-v0.9.1-amd64
    

    Run the commands the docs say to do

    mkdir -p ~/.k0s
    ./k0s-v0.9.1-amd64 default-config | tee ~/.k0s/k0s.yaml
    

    Run without backgrounding so I can see what it's doing

    sudo k0s server -c ${HOME}/.k0s/k0s.yaml --enable-worker
    

    Then open a new tab, and follow the rest of the docs

    sudo cat /var/lib/k0s/pki/admin.conf | tee ~/.k0s/kubeconfig
    export KUBECONFIG="${HOME}/.k0s/kubeconfig"
    kubectl get pods --all-namespaces
    

    This is the output of kubectl get pods --all-namespaces

    ➜  Downloads kubectl get pods --all-namespaces
    NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
    kube-system   calico-kube-controllers-5f6546844f-hzpnc   0/1     Pending   0          38m
    kube-system   calico-node-v7g6j                          0/1     Pending   0          38m
    kube-system   coredns-5c98d7d4d8-v2mkv                   0/1     Pending   0          38m
    kube-system   kube-proxy-rmp9h                           0/1     Pending   0          38m
    kube-system   metrics-server-7d4bcb75dd-8vq7l            0/1     Pending   0          38m
    

    Expected behavior All pods should be running and Ready.

    Screenshots & Logs If applicable, add screenshots to help explain your problem. Also add any output from kubectl if applicable:

    ➜  Downloads kubectl describe pod calico-kube-controllers-5f6546844f-hzpnc --namespace=kube-system
    Name:                 calico-kube-controllers-5f6546844f-hzpnc
    Namespace:            kube-system
    Priority:             2000000000
    Priority Class Name:  system-cluster-critical
    Node:                 <none>
    Labels:               k8s-app=calico-kube-controllers
                          pod-template-hash=5f6546844f
    Annotations:          <none>
    Status:               Pending
    IP:                   
    IPs:                  <none>
    Controlled By:        ReplicaSet/calico-kube-controllers-5f6546844f
    Containers:
      calico-kube-controllers:
        Image:      calico/kube-controllers:v3.16.2
        Port:       <none>
        Host Port:  <none>
        Readiness:  exec [/usr/bin/check-status -r] delay=0s timeout=1s period=10s #success=1 #failure=3
        Environment:
          ENABLED_CONTROLLERS:  node
          DATASTORE_TYPE:       kubernetes
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from calico-kube-controllers-token-lrb4g (ro)
    Conditions:
      Type           Status
      PodScheduled   False 
    Volumes:
      calico-kube-controllers-token-lrb4g:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  calico-kube-controllers-token-lrb4g
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  kubernetes.io/os=linux
    Tolerations:     CriticalAddonsOnly op=Exists
                     node-role.kubernetes.io/master:NoSchedule
                     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                     node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason            Age                 From               Message
      ----     ------            ----                ----               -------
      Warning  FailedScheduling  31m                 default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
      Warning  FailedScheduling  13m                 default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
      Warning  FailedScheduling  6m52s               default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
      Warning  FailedScheduling  40m (x2 over 40m)   default-scheduler  no nodes available to schedule pods
      Warning  FailedScheduling  32m (x10 over 40m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
    

    Additional context

    This is the output I collected over 1 minute grepping for error.

    ➜  Downloads sudo ./k0s-v0.9.1-amd64 server -c "${HOME}/.k0s/k0s.yaml" --enable-worker --debug | grep -i error
    [sudo] password for filbot: 
    time="2021-01-18 18:24:08" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    time="2021-01-18 18:24:08" level=info msg="      --alsologtostderr                  log to standard error as well as files" component=konnectivity
    time="2021-01-18 18:24:08" level=info msg="      --logtostderr                      log to standard error instead of files (default true)" component=konnectivity
    time="2021-01-18 18:24:08" level=info msg="E0118 18:24:08.391051   23336 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    I0118 18:24:08.393781   23288 leaderelection.go:243] attempting to acquire leader lease  kube-node-lease/k0s-manifest-applier...
    E0118 18:24:08.394891   23288 leaderelection.go:321] error retrieving resource lock kube-node-lease/k0s-manifest-applier: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k0s-manifest-applier": dial tcp [::1]:6443: connect: connection refused
    I0118 18:24:08.394978   23288 leaderelection.go:243] attempting to acquire leader lease  kube-node-lease/k0s-endpoint-reconciler...
    E0118 18:24:08.395321   23288 leaderelection.go:321] error retrieving resource lock kube-node-lease/k0s-endpoint-reconciler: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k0s-endpoint-reconciler": dial tcp [::1]:6443: connect: connection refused
    time="2021-01-18 18:24:08" level=info msg="time=\"2021-01-18T18:24:08.447461230-06:00\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" error=\"aufs is not supported (modprobe aufs failed: exit status 1 \\\"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.7-200.fc33.x86_64\\\\n\\\"): skip plugin\" type=io.containerd.snapshotter.v1" component=containerd
    time="2021-01-18 18:24:08" level=info msg="time=\"2021-01-18T18:24:08.447632849-06:00\" level=warning msg=\"failed to load plugin io.containerd.snapshotter.v1.devmapper\" error=\"devmapper not configured\"" component=containerd
    time="2021-01-18 18:24:08" level=info msg="time=\"2021-01-18T18:24:08.447797226-06:00\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" error=\"path /var/lib/k0s/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1" component=containerd
    time="2021-01-18 18:24:08" level=info msg="time=\"2021-01-18T18:24:08.447829825-06:00\" level=warning msg=\"could not use snapshotter devmapper in metadata plugin\" error=\"devmapper not configured\"" component=containerd
    time="2021-01-18 18:24:08" level=info msg="E0118 18:24:08.827807   23335 instance.go:392] Could not construct pre-rendered responses for ServiceAccountIssuerDiscovery endpoints. Endpoints will not be enabled. Error: issuer URL must use https scheme, got: api" component=kube-apiserver
    time="2021-01-18 18:24:10" level=info msg="E0118 18:24:10.590365   23335 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.86.23, ResourceVersion: 0, AdditionalErrorMsg: " component=kube-apiserver
    time="2021-01-18 18:24:10" level=info msg="E0118 18:24:10.595759   23338 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io \"kube-controller-manager\" is forbidden: User \"system:kube-controller-manager\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-system\"" component=kube-controller-manager
    time="2021-01-18 18:24:10" level=info msg="W0118 18:24:10.596559   23337 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" component=kube-scheduler
    time="2021-01-18 18:24:11" level=info msg="\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" component=kubelet
    time="2021-01-18 18:24:12" level=info msg="E0118 18:24:12.148901   23335 controller.go:116] loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable" component=kube-apiserver
    time="2021-01-18 18:24:13" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    time="2021-01-18 18:24:13" level=info msg="      --alsologtostderr                  log to standard error as well as files" component=konnectivity
    time="2021-01-18 18:24:13" level=info msg="      --logtostderr                      log to standard error instead of files (default true)" component=konnectivity
    time="2021-01-18 18:24:13" level=info msg="E0118 18:24:13.408742   23582 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    time="2021-01-18 18:24:16" level=info msg="\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" component=kubelet
    I0118 18:24:17.054930   23288 leaderelection.go:253] successfully acquired lease kube-node-lease/k0s-manifest-applier
    2021-01-18 18:24:17.055149 I | [INFO] acquired leader lease
    time="2021-01-18 18:24:18" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    time="2021-01-18 18:24:18" level=info msg="      --alsologtostderr                  log to standard error as well as files" component=konnectivity
    time="2021-01-18 18:24:18" level=info msg="      --logtostderr                      log to standard error instead of files (default true)" component=konnectivity
    time="2021-01-18 18:24:18" level=info msg="E0118 18:24:18.445055   23731 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    I0118 18:24:19.050759   23288 leaderelection.go:253] successfully acquired lease kube-node-lease/k0s-endpoint-reconciler
    2021-01-18 18:24:19.050854 I | [INFO] acquired leader lease
    E0118 18:24:22.064151   23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:22.064469   23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:22.064521   23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:22.064553   23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:22.064675   23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:22.069556   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:22.069589   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:22.069679   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:22.069717   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:22.069781   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    W0118 18:24:22.073177   23288 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
    E0118 18:24:22.073916   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:22.077145   23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    time="2021-01-18 18:24:22" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=helm
    E0118 18:24:22.078964   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:22.079295   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    time="2021-01-18 18:24:22" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=konnectivity
    E0118 18:24:22.079815   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    time="2021-01-18 18:24:22" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=bootstraprbac
    E0118 18:24:22.083892   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    time="2021-01-18 18:24:22" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=kubelet
    E0118 18:24:22.085218   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    time="2021-01-18 18:24:22" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=defaultpsp
    time="2021-01-18 18:24:22" level=info msg="\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" component=kubelet
    E0118 18:24:23.090674   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    time="2021-01-18 18:24:23" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=calico_init
    I0118 18:24:23.272169   23288 request.go:645] Throttling request took 1.190265199s, request: GET:https://localhost:6443/api/v1/serviceaccounts?limit=1
    E0118 18:24:23.422695   23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:23.424184   23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:23.432574   23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:23.436373   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:23.437268   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:23.438793   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    E0118 18:24:23.439851   23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    W0118 18:24:23.442483   23288 warnings.go:67] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
    W0118 18:24:23.443998   23288 warnings.go:67] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
    E0118 18:24:23.444469   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    W0118 18:24:23.445815   23288 warnings.go:67] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
    W0118 18:24:23.449173   23288 warnings.go:67] rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
    W0118 18:24:23.452038   23288 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
    E0118 18:24:23.454191   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    time="2021-01-18 18:24:23" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=kubeproxy
    E0118 18:24:23.456863   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    time="2021-01-18 18:24:23" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=coredns
    E0118 18:24:23.459208   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    time="2021-01-18 18:24:23" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=metricserver
    time="2021-01-18 18:24:23" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    time="2021-01-18 18:24:23" level=info msg="      --alsologtostderr                  log to standard error as well as files" component=konnectivity
    time="2021-01-18 18:24:23" level=info msg="      --logtostderr                      log to standard error instead of files (default true)" component=konnectivity
    time="2021-01-18 18:24:23" level=info msg="E0118 18:24:23.461497   23892 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    E0118 18:24:23.462609   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    time="2021-01-18 18:24:23" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=calico
    time="2021-01-18 18:24:28" level=info msg="\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" component=kubelet
    time="2021-01-18 18:24:28" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    time="2021-01-18 18:24:28" level=info msg="      --alsologtostderr                  log to standard error as well as files" component=konnectivity
    time="2021-01-18 18:24:28" level=info msg="      --logtostderr                      log to standard error instead of files (default true)" component=konnectivity
    time="2021-01-18 18:24:28" level=info msg="E0118 18:24:28.498272   24048 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    time="2021-01-18 18:24:33" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    time="2021-01-18 18:24:33" level=info msg="      --alsologtostderr                  log to standard error as well as files" component=konnectivity
    time="2021-01-18 18:24:33" level=info msg="      --logtostderr                      log to standard error instead of files (default true)" component=konnectivity
    time="2021-01-18 18:24:33" level=info msg="E0118 18:24:33.536981   24142 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    time="2021-01-18 18:24:33" level=info msg="\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" component=kubelet
    E0118 18:24:33.854203   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    W0118 18:24:33.860199   23288 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
    E0118 18:24:33.864605   23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
    time="2021-01-18 18:24:33" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=metricserver
    I0118 18:24:35.055628   23288 request.go:645] Throttling request took 1.189638349s, request: GET:https://localhost:6443/api/v1/events?limit=1
    time="2021-01-18 18:24:38" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    time="2021-01-18 18:24:38" level=info msg="      --alsologtostderr                  log to standard error as well as files" component=konnectivity
    time="2021-01-18 18:24:38" level=info msg="      --logtostderr                      log to standard error instead of files (default true)" component=konnectivity
    time="2021-01-18 18:24:38" level=info msg="E0118 18:24:38.577068   24218 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
    time="2021-01-18 18:24:39" level=info msg="\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" component=kubelet
    
    bug need more info 
    opened by FilBot3 16
  • Bump mkdocs-material from 8.5.10 to 9.0.1 in /docs

    Bump mkdocs-material from 8.5.10 to 9.0.1 in /docs

    Bumps mkdocs-material from 8.5.10 to 9.0.1.

    Release notes

    Sourced from mkdocs-material's releases.

    mkdocs-material-9.0.1

    • Removed pipdeptree dependency for built-in info plugin
    • Fixed appearance of linked tags when hovered (9.0.0 regression)
    • Fixed #4810: Abbreviations run out of screen on touch devices
    • Fixed #4813: View source and edit button links are the same

    mkdocs-material-9.0.0

    Additions and improvements

    • Added support for rich search previews
    • Added support for tokenizer lookahead
    • Added support for better search highlighting
    • Added support for excluding content from search
    • Added support for configurable search pipeline
    • Added support for offline search via offline plugin
    • Added support for multiple instances of built-in tags plugin
    • Added support for removing copy-to-clipboard button
    • Added support for removing footer navigation
    • Added support for button to view the source of a page
    • Improved readability of query string for search sharing
    • Improved stability of search plugin when using --dirtyreload
    • Improved search result group button, now sticky and stable
    • Updated Norwegian translations
    • Updated MkDocs to 1.4.2

    Removals

    • Removed deprecated alternative admonition qualifiers
    • Removed :is() selectors (in output) for easier overriding
    • Removed .title suffix on translations
    • Removed legacy method for providing page title in feedback URL
    • Removed support for indexing only titles in search
    • Removed support for custom search transforms
    • Removed support for custom search workers
    • Removed temporary snow feature (easter egg)

    Fixes

    • Fixed Norwegian and Korean language code
    • Fixed detection of composition events in search interface
    • Fixed search plugin not using title set via front matter
    • Fixed search highlighting of tags
    • Fixed search sharing URL using post transformed string
    • Fixed theme-color meta tag getting out-of-sync with palette toggle
    • Fixed prev/next page keyboard navigation when footer is not present
    • Fixed overflowing navigation tabs not being scrollable
    • Fixed inclusion of code block line numbers from search

    mkdocs-material-9.0.0b4

    Note: this is a beta release – see #4714

    ... (truncated)

    Changelog

    Sourced from mkdocs-material's changelog.

    mkdocs-material-9.0.1 (2023-01-03)

    • Removed pipdeptree dependency for built-in info plugin
    • Fixed appearance of linked tags when hovered (9.0.0 regression)
    • Fixed #4810: Abbreviations run out of screen on touch devices
    • Fixed #4813: View source and edit button links are the same

    mkdocs-material-9.0.0 (2023-01-02)

    Additions and improvements

    • Added support for rich search previews
    • Added support for tokenizer lookahead
    • Added support for better search highlighting
    • Added support for excluding content from search
    • Added support for configurable search pipeline
    • Added support for offline search via offline plugin
    • Added support for multiple instances of built-in tags plugin
    • Added support for removing copy-to-clipboard button
    • Added support for removing footer navigation
    • Added support for button to view the source of a page
    • Improved readability of query string for search sharing
    • Improved stability of search plugin when using --dirtyreload
    • Improved search result group button, now sticky and stable
    • Updated Norwegian translations
    • Updated MkDocs to 1.4.2

    Removals

    • Removed deprecated alternative admonition qualifiers
    • Removed :is() selectors (in output) for easier overriding
    • Removed .title suffix on translations
    • Removed legacy method for providing page title in feedback URL
    • Removed support for indexing only titles in search
    • Removed support for custom search transforms
    • Removed support for custom search workers
    • Removed temporary snow feature (easter egg)

    Fixes

    • Fixed Norwegian and Korean language code
    • Fixed detection of composition events in search interface
    • Fixed search plugin not using title set via front matter
    • Fixed search highlighting of tags
    • Fixed search sharing URL using post transformed string
    • Fixed theme-color meta tag getting out-of-sync with palette toggle
    • Fixed prev/next page keyboard navigation when footer is not present
    • Fixed overflowing navigation tabs not being scrollable
    • Fixed inclusion of code block line numbers from search

    ... (truncated)

    Upgrade guide

    Sourced from mkdocs-material's upgrade guide.

    How to upgrade

    Upgrade to the latest version with:

    pip install --upgrade --force-reinstall mkdocs-material
    

    Show the currently installed version with:

    pip show mkdocs-material
    

    Upgrading from 8.x to 9.x

    This major release includes a brand new search implementation that is faster and allows for rich previews, advanced tokenization and better highlighting. It was available as part of Insiders for over a year, and now that the funding goal was hit, makes its way into the community edition.

    Changes to mkdocs.yml

    content.code.copy

    The copy-to-clipboard buttons are now opt-in and can be enabled or disabled per block. If you wish to enable them for all code blocks, add the following lines to mkdocs.yml:

    theme:
      features:
        - content.code.copy
    

    content.action.*

    A "view source" button can be shown next to the "edit this page" button, both of which must now be explicitly enabled. Add the following lines to mkdocs.yml:

    theme:
      features:
        - content.action.edit
        - content.action.view
    

    navigation.footer

    ... (truncated)

    Commits
    • d0d6f19 Documentation
    • 305e79f Prepare 9.0.1 release
    • feef8cd Fixed view source button link
    • 1287ce8 Updated changelog
    • 7f0d6e8 Removed pipdeptree dependency for info plugin
    • 0de33b3 Fixed appearance of linked tags when hovered (9.0.0 regression)
    • fbfb662 Fixed positioning of abbreviations on touch devices
    • 788f087 Updated languages in documentation and schema
    • 7c5b972 Documentation
    • 2b9136d Added documentation for navigation footer and code actions
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies python 
    opened by dependabot[bot] 0
  • Bump github.com/go-openapi/jsonpointer from 0.19.5 to 0.19.6

    Bump github.com/go-openapi/jsonpointer from 0.19.5 to 0.19.6

    Bumps github.com/go-openapi/jsonpointer from 0.19.5 to 0.19.6.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies go 
    opened by dependabot[bot] 0
  • MetalLB docs updated

    MetalLB docs updated

    Signed-off-by: Alexey Makhov [email protected]

    Description

    Fixes #2543

    Type of change

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [x] Documentation update

    How Has This Been Tested?

    • [ ] Manual test
    • [ ] Auto test added

    Checklist:

    • [ ] My code follows the style guidelines of this project
    • [ ] My commit messages are signed-off
    • [ ] I have performed a self-review of my own code
    • [ ] I have commented my code, particularly in hard-to-understand areas
    • [ ] I have made corresponding changes to the documentation
    • [ ] My changes generate no new warnings
    • [ ] I have added tests that prove my fix is effective or that my feature works
    • [ ] New and existing unit tests pass locally with my changes
    • [ ] Any dependent changes have been merged and published in downstream modules
    • [ ] I have checked my code and corrected any misspellings
    opened by makhov 0
  • Bump watchdog from 2.1.9 to 2.2.1 in /docs

    Bump watchdog from 2.1.9 to 2.2.1 in /docs

    Bumps watchdog from 2.1.9 to 2.2.1.

    Release notes

    Sourced from watchdog's releases.

    2.2.1

    • Enable mypy to discover type hints as specified in PEP 561 (#933)
    • [ci] Set the expected Python version when building release files
    • [ci] Update actions versions in use
    • [watchmedo] [regression] Fix usage of missing signal.SIGHUP attribute on non-Unix OSes (#935)

    :heart_decoration: Thanks to our beloved contributors: @​BoboTiG, @​simon04, @​piotrpdev

    2.2.0

    • [build] Wheels are now available for Python 3.11 (#932)
    • [documentation] HTML documentation builds are now tested for errors (#902)
    • [documentation] Fix typos here, and there (#910)
    • [fsevents2] The fsevents2 observer is now deprecated (#909)
    • [tests] The error message returned by musl libc for error code -1 is now allowed (#923)
    • [utils] Remove unnecessary code in dirsnapshot.py (#930)
    • [watchmedo] Handle shutdown events from SIGHUP (#912)

    :heart_decoration: Thanks to our beloved contributors: @​kurtmckee, @​babymastodon, @​QuantumEnergyE, @​timgates42, @​BoboTiG

    Changelog

    Sourced from watchdog's changelog.

    2.2.1

    
    2023-01-01 • full history: https://github.com/gorakhargosh/watchdog/compare/v2.2.0...v2.2.1

    • Enable mypy to discover type hints as specified in PEP 561 ([#933](https://github.com/gorakhargosh/watchdog/issues/933) <https://github.com/gorakhargosh/watchdog/pull/933>)
    • [ci] Set the expected Python version when building release files
    • [ci] Update actions versions in use
    • [watchmedo] [regression] Fix usage of missing signal.SIGHUP attribute on non-Unix OSes ([#935](https://github.com/gorakhargosh/watchdog/issues/935) <https://github.com/gorakhargosh/watchdog/pull/935>)
    • Thanks to our beloved contributors: @​BoboTiG, @​simon04, @​piotrpdev

    2.2.0

    2022-12-05 • full history: https://github.com/gorakhargosh/watchdog/compare/v2.1.9...v2.2.0

    • [build] Wheels are now available for Python 3.11 ([#932](https://github.com/gorakhargosh/watchdog/issues/932) <https://github.com/gorakhargosh/watchdog/pull/932>)
    • [documentation] HTML documentation builds are now tested for errors ([#902](https://github.com/gorakhargosh/watchdog/issues/902) <https://github.com/gorakhargosh/watchdog/pull/902>)
    • [documentation] Fix typos here, and there ([#910](https://github.com/gorakhargosh/watchdog/issues/910) <https://github.com/gorakhargosh/watchdog/pull/910>)
    • [fsevents2] The fsevents2 observer is now deprecated ([#909](https://github.com/gorakhargosh/watchdog/issues/909) <https://github.com/gorakhargosh/watchdog/pull/909>)
    • [tests] The error message returned by musl libc for error code -1 is now allowed ([#923](https://github.com/gorakhargosh/watchdog/issues/923) <https://github.com/gorakhargosh/watchdog/pull/923>)
    • [utils] Remove unnecessary code in dirsnapshot.py ([#930](https://github.com/gorakhargosh/watchdog/issues/930) <https://github.com/gorakhargosh/watchdog/pull/930>)
    • [watchmedo] Handle shutdown events from SIGHUP ([#912](https://github.com/gorakhargosh/watchdog/issues/912) <https://github.com/gorakhargosh/watchdog/pull/912>)
    • Thanks to our beloved contributors: @​kurtmckee, @​babymastodon, @​QuantumEnergyE, @​timgates42, @​BoboTiG
    Commits
    • 858c890 Version 2.2.1
    • 37cfcc1 [ci] Set the expected Python version when building release files
    • 6687c99 [watchmedo] [regression] Fix usage of missing signal.SIGHUP attribute on no...
    • da7bc03 doc: time to move forward
    • 68ee5cd Add more files tro MANIFEST.in
    • 293a31e Enable mypy to discover type hints as specified in PEP 561 (#933)
    • 14e95bb [ci] Update actions versions in use
    • 82e3a3b Bump the version to 2.2.1
    • 7773a25 Version 2.2.0
    • d493ec2 doc: tweak changelog
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies python 
    opened by dependabot[bot] 0
  • Bump pygments from 2.13.0 to 2.14.0 in /docs

    Bump pygments from 2.13.0 to 2.14.0 in /docs

    Bumps pygments from 2.13.0 to 2.14.0.

    Release notes

    Sourced from pygments's releases.

    2.14.0

    • Added lexers:

    • Updated lexers:

      • Abap: Update keywords (#2281)

      • Alloy: Update for Alloy 6 (#1963)

      • C family (C, C++ and many others):

        • Fix an issue where a chunk would be wrongly recognized as a function definition due to braces in comments (#2210)
        • Improve parentheses handling for function definitions (#2207, #2208)
      • C#: Fix number and operator recognition (#2256, #2257)

      • CSound: Updated builtins (#2268)

      • F#: Add .fsx file extension (#2282)

      • gas (GNU assembler): recognize braces as punctuation (#2230)

      • HTTP: Add CONNECT keyword (#2242)

      • Inform 6: Fix lexing of properties and doubles (#2214)

      • INI: Allow comments that are not their own line (#2217, #2161)

      • Java properties: Fix issue with whitespace-delimited keys, support comments starting with ! and escapes, no longer support undocumented ; and // comments (#2241)

      • LilyPond: Improve heuristics, add \maxima duration (#2283)

      • LLVM: Add opaque pointer type (#2269)

      • Macaulay2: Update keywords (#2305)

      • Minecraft-related lexers (SNB and Minecraft function) moved to pygments.lexers.minecraft (#2276)

      • Nim: General improvements (#1970)

      • Nix: Fix single quotes inside indented strings (#2289)

      • Objective J: Fix catastrophic backtracking (#2225)

      • NASM: Add support for SSE/AVX/AVX-512 registers as well as 'rel' and 'abs' address operators (#2212)

      • Powershell:

        • Add local: keyword (#2254)
        • Allow continuations without markers (#2262, #2263)
      • Solidity: Add boolean operators (#2292)

      • Spice: Add enum keyword and fix a bug regarding binary, hexadecimal and octal number tokens (#2227)

      • YAML: Accept colons in key names (#2277)

    ... (truncated)

    Changelog

    Sourced from pygments's changelog.

    Version 2.14.0

    (released January 1st, 2023)

    • Added lexers:

    • Updated lexers:

      • Abap: Update keywords (#2281)

      • Alloy: Update for Alloy 6 (#1963)

      • C family (C, C++ and many others):

        • Fix an issue where a chunk would be wrongly recognized as a function definition due to braces in comments (#2210)
        • Improve parentheses handling for function definitions (#2207, #2208)
      • C#: Fix number and operator recognition (#2256, #2257)

      • CSound: Updated builtins (#2268)

      • F#: Add .fsx file extension (#2282)

      • gas (GNU assembler): recognize braces as punctuation (#2230)

      • HTTP: Add CONNECT keyword (#2242)

      • Inform 6: Fix lexing of properties and doubles (#2214)

      • INI: Allow comments that are not their own line (#2217, #2161)

      • Java properties: Fix issue with whitespace-delimited keys, support comments starting with ! and escapes, no longer support undocumented ; and // comments (#2241)

      • LilyPond: Improve heuristics, add \maxima duration (#2283)

      • LLVM: Add opaque pointer type (#2269)

      • Macaulay2: Update keywords (#2305)

      • Minecraft-related lexers (SNB and Minecraft function) moved to pygments.lexers.minecraft (#2276)

      • Nim: General improvements (#1970)

      • Nix: Fix single quotes inside indented strings (#2289)

      • Objective J: Fix catastrophic backtracking (#2225)

      • NASM: Add support for SSE/AVX/AVX-512 registers as well as 'rel' and 'abs' address operators (#2212)

      • Powershell:

    ... (truncated)

    Commits
    • 77a939e Prepare for 2.14.0 release.
    • b52ecf0 Update CHANGES.
    • dd52102 Improve the Smithy metadata matcher.
    • 92b77b2 Merge pull request #2311 from not-my-profile/styles-pep257
    • 61fd608 styles gallery: Make docstring in example follow PEP 257
    • b710ccf Update CHANGES.
    • 55a3b46 Merge pull request #2308 from sol/sol-patch-1
    • dbc177c Add filenames pattern for HspecLexer
    • f0afb01 Update CHANGES
    • d5203d9 Update Macaulay2 symbols for version 1.21 (#2305)
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies python 
    opened by dependabot[bot] 0
  • ETCD wal: max entry size limit exceeded

    ETCD wal: max entry size limit exceeded

    Before creating an issue, make sure you've checked the following:

    • [X] You are running the latest released version of k0s
    • [X] Make sure you've searched for existing issues, both open and closed
    • [X] Make sure you've searched for PRs too, a fix might've been merged already
    • [X] You're looking at docs for the released version, "main" branch docs are usually ahead of released versions.

    Platform

    Linux 5.4.0-109-generic #123-Ubuntu SMP Fri Apr 8 09:10:54 UTC 2022 x86_64 GNU/Linux
    NAME="Ubuntu"
    VERSION="20.04.4 LTS (Focal Fossa)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 20.04.4 LTS"
    VERSION_ID="20.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=focal
    UBUNTU_CODENAME=focal
    

    Version

    1.25.4

    Sysinfo

    `k0s sysinfo`
    
    Machine ID: "e0ef25ef8af3f38d9ceab96ec23467ae28184c848892451a758d27ccd4019018" (from machine) (pass)
    Total memory: 1.9 GiB (pass)
    Disk space available for /var/lib/k0s: 9.4 GiB (pass)
    Operating system: Linux (pass)
      Linux kernel release: 5.4.0-109-generic (pass)
      Max. file descriptors per process: current: 1048576 / max: 1048576 (pass)
      Executable in path: modprobe: /usr/sbin/modprobe (pass)
      /proc file system: mounted (0x9fa0) (pass)
      Control Groups: version 1 (pass)
        cgroup controller "cpu": available (pass)
        cgroup controller "cpuacct": available (pass)
        cgroup controller "cpuset": available (pass)
        cgroup controller "memory": available (pass)
        cgroup controller "devices": available (pass)
        cgroup controller "freezer": available (pass)
        cgroup controller "pids": available (pass)
        cgroup controller "hugetlb": available (pass)
        cgroup controller "blkio": available (pass)
      CONFIG_CGROUPS: Control Group support: built-in (pass)
        CONFIG_CGROUP_FREEZER: Freezer cgroup subsystem: built-in (pass)
        CONFIG_CGROUP_PIDS: PIDs cgroup subsystem: built-in (pass)
        CONFIG_CGROUP_DEVICE: Device controller for cgroups: built-in (pass)
        CONFIG_CPUSETS: Cpuset support: built-in (pass)
        CONFIG_CGROUP_CPUACCT: Simple CPU accounting cgroup subsystem: built-in (pass)
        CONFIG_MEMCG: Memory Resource Controller for Control Groups: built-in (pass)
        CONFIG_CGROUP_HUGETLB: HugeTLB Resource Controller for Control Groups: built-in (pass)
        CONFIG_CGROUP_SCHED: Group CPU scheduler: built-in (pass)
          CONFIG_FAIR_GROUP_SCHED: Group scheduling for SCHED_OTHER: built-in (pass)
            CONFIG_CFS_BANDWIDTH: CPU bandwidth provisioning for FAIR_GROUP_SCHED: built-in (pass)
        CONFIG_BLK_CGROUP: Block IO controller: built-in (pass)
      CONFIG_NAMESPACES: Namespaces support: built-in (pass)
        CONFIG_UTS_NS: UTS namespace: built-in (pass)
        CONFIG_IPC_NS: IPC namespace: built-in (pass)
        CONFIG_PID_NS: PID namespace: built-in (pass)
        CONFIG_NET_NS: Network namespace: built-in (pass)
      CONFIG_NET: Networking support: built-in (pass)
        CONFIG_INET: TCP/IP networking: built-in (pass)
          CONFIG_IPV6: The IPv6 protocol: built-in (pass)
        CONFIG_NETFILTER: Network packet filtering framework (Netfilter): built-in (pass)
          CONFIG_NETFILTER_ADVANCED: Advanced netfilter configuration: built-in (pass)
          CONFIG_NETFILTER_XTABLES: Netfilter Xtables support: module (pass)
            CONFIG_NETFILTER_XT_TARGET_REDIRECT: REDIRECT target support: module (pass)
            CONFIG_NETFILTER_XT_MATCH_COMMENT: "comment" match support: module (pass)
            CONFIG_NETFILTER_XT_MARK: nfmark target and match support: module (pass)
            CONFIG_NETFILTER_XT_SET: set target and match support: module (pass)
            CONFIG_NETFILTER_XT_TARGET_MASQUERADE: MASQUERADE target support: module (pass)
            CONFIG_NETFILTER_XT_NAT: "SNAT and DNAT" targets support: module (pass)
            CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: "addrtype" address type match support: module (pass)
            CONFIG_NETFILTER_XT_MATCH_CONNTRACK: "conntrack" connection tracking match support: module (pass)
            CONFIG_NETFILTER_XT_MATCH_MULTIPORT: "multiport" Multiple port match support: module (pass)
            CONFIG_NETFILTER_XT_MATCH_RECENT: "recent" match support: module (pass)
            CONFIG_NETFILTER_XT_MATCH_STATISTIC: "statistic" match support: module (pass)
          CONFIG_NETFILTER_NETLINK: module (pass)
          CONFIG_NF_CONNTRACK: Netfilter connection tracking support: module (pass)
          CONFIG_NF_NAT: module (pass)
          CONFIG_IP_SET: IP set support: module (pass)
            CONFIG_IP_SET_HASH_IP: hash:ip set support: module (pass)
            CONFIG_IP_SET_HASH_NET: hash:net set support: module (pass)
          CONFIG_IP_VS: IP virtual server support: module (pass)
            CONFIG_IP_VS_NFCT: Netfilter connection tracking: built-in (pass)
          CONFIG_NF_CONNTRACK_IPV4: IPv4 connetion tracking support (required for NAT): unknown (warning)
          CONFIG_NF_REJECT_IPV4: IPv4 packet rejection: module (pass)
          CONFIG_NF_NAT_IPV4: IPv4 NAT: unknown (warning)
          CONFIG_IP_NF_IPTABLES: IP tables support: module (pass)
            CONFIG_IP_NF_FILTER: Packet filtering: module (pass)
              CONFIG_IP_NF_TARGET_REJECT: REJECT target support: module (pass)
            CONFIG_IP_NF_NAT: iptables NAT support: module (pass)
            CONFIG_IP_NF_MANGLE: Packet mangling: module (pass)
          CONFIG_NF_DEFRAG_IPV4: module (pass)
          CONFIG_NF_CONNTRACK_IPV6: IPv6 connetion tracking support (required for NAT): unknown (warning)
          CONFIG_NF_NAT_IPV6: IPv6 NAT: unknown (warning)
          CONFIG_IP6_NF_IPTABLES: IP6 tables support: module (pass)
            CONFIG_IP6_NF_FILTER: Packet filtering: module (pass)
            CONFIG_IP6_NF_MANGLE: Packet mangling: module (pass)
            CONFIG_IP6_NF_NAT: ip6tables NAT support: module (pass)
          CONFIG_NF_DEFRAG_IPV6: module (pass)
        CONFIG_BRIDGE: 802.1d Ethernet Bridging: module (pass)
          CONFIG_LLC: module (pass)
          CONFIG_STP: module (pass)
      CONFIG_EXT4_FS: The Extended 4 (ext4) filesystem: built-in (pass)
      CONFIG_PROC_FS: /proc file system support: built-in (pass)
    
    

    What happened?

    Error with etcd after restarting the VM and increasing its disk size (before that, k0s had failed because the disk ran out of space):

    etcd.go:204\",\"msg\":\"discovery failed\",\"error\":\"wal: max entry size limit exceeded, recBytes: 955, fileSize(64000000) - offset(63999464) - padBytes(5) = entryLimit(531)
    
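    The arithmetic in that message can be reproduced directly. Here is a minimal sketch (not etcd code), using only the figures quoted in the error above, of the size check behind "max entry size limit exceeded":

    package main

    import "fmt"

    func main() {
        const (
            fileSize = 64000000 // pre-allocated size of a WAL segment file
            offset   = 63999464 // position where the failing record starts
            padBytes = 5        // alignment padding declared for that record
            recBytes = 955      // length the record claims to have
        )

        entryLimit := fileSize - offset - padBytes // = 531 bytes left in the file
        fmt.Printf("space left for this entry: %d bytes\n", entryLimit)

        if recBytes > entryLimit {
            // The record claims to be larger than what can still fit in the file,
            // so decoding stops with the error shown in the logs.
            fmt.Printf("record of %d bytes cannot fit -> decoding fails\n", recBytes)
        }
    }

    A record that claims 955 bytes cannot fit into the 531 bytes left in the segment, which usually points at a torn or corrupted tail write (for example after the disk ran full) rather than at a genuinely oversized entry.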

    Steps to reproduce

    1. Install the cluster
    2. Wait until the disk is full, then turn off the VM and increase its disk size
    3. Start the VM and observe the error

    Expected behavior

    A working cluster controller.

    Actual behavior

    No response

    Screenshots and logs

    Dec 24 08:51:36 cl1lr3g2d1m8nm5mep8g-eleq k0s[2455]: time="2022-12-24 08:51:36" level=info msg="{\"level\":\"info\",\"ts\":\"2022-12-24T08:51:36.932Z\",\"caller\":\"etcdserver/backend.go:81\",\"msg\":\"opened backend db\",\"path\":\"/var/lib/k0s/etcd/member/snap/db\",\"took\":\"62.300357ms\"}" component=etcd
    Dec 24 08:51:37 cl1lr3g2d1m8nm5mep8g-eleq k0s[2455]: time="2022-12-24 08:51:37" level=info msg="{\"level\":\"info\",\"ts\":\"2022-12-24T08:51:37.233Z\",\"caller\":\"embed/etcd.go:371\",\"msg\":\"closing etcd server\",\"name\":\"cl1lr3g2d1m8nm5mep8g-eleq\",\"data-dir\":\"/var/lib/k0s/etcd\",\"advertise-peer-urls\":[\"https://10.200.0.19:2380\"],\"advertise-client-urls\":[\"https://127.0.0.1:2379\"]}" component=etcd
    Dec 24 08:51:37 cl1lr3g2d1m8nm5mep8g-eleq k0s[2455]: time="2022-12-24 08:51:37" level=info msg="{\"level\":\"info\",\"ts\":\"2022-12-24T08:51:37.234Z\",\"caller\":\"embed/etcd.go:373\",\"msg\":\"closed etcd server\",\"name\":\"cl1lr3g2d1m8nm5mep8g-eleq\",\"data-dir\":\"/var/lib/k0s/etcd\",\"advertise-peer-urls\":[\"https://10.200.0.19:2380\"],\"advertise-client-urls\":[\"https://127.0.0.1:2379\"]}" component=etcd
    Dec 24 08:51:37 cl1lr3g2d1m8nm5mep8g-eleq k0s[2455]: time="2022-12-24 08:51:37" level=info msg="{\"level\":\"fatal\",\"ts\":\"2022-12-24T08:51:37.234Z\",\"caller\":\"etcdmain/etcd.go:204\",\"msg\":\"discovery failed\",\"error\":\"wal: max entry size limit exceeded, recBytes: 955, fileSize(64000000) - offset(63999464) - padBytes(5) = entryLimit(531)\",\"stacktrace\":\"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\\n\\t/etcd/server/etcdmain/etcd.go:204\\ngo.etcd.io/etcd/server/v3/etcdmain.Main\\n\\t/etcd/server/etcdmain/main.go:40\\nmain.main\\n\\t/etcd/server/main.go:32\\nruntime.main\\n\\t/usr/local/>
    Dec 24 08:51:37 cl1lr3g2d1m8nm5mep8g-eleq k0s[2455]: time="2022-12-24 08:51:37" level=warning msg="exit status 1" component=etcd
    Dec 24 08:51:37 cl1lr3g2d1m8nm5mep8g-eleq k0s[2455]: time="2022-12-24 08:51:37" level=info msg="respawning in 5s" component=etcd
    Dec 24 08:51:37 cl1lr3g2d1m8nm5mep8g-eleq k0s[2455]: {"level":"warn","ts":"2022-12-24T08:51:37.836Z","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d68000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    Dec 24 08:51:38 cl1lr3g2d1m8nm5mep8g-eleq k0s[2455]: {"level":"warn","ts":"2022-12-24T08:51:38.837Z","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000b6e1c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    

    Additional context

    https://github.com/etcd-io/etcd/issues/14025

    bug 
    opened by roquie 2
Releases
  • v1.25.4+k0s.0

Owner
  k0s - Kubernetes distribution - OSS Project