Version
$ k0s version
v0.9.1
Platform
Which platform did you run k0s on?
➜ Downloads neofetch
[email protected]
------------------
OS: Fedora 33 (Workstation Edition) x86_64
Host: Oryx Pro oryp6
Kernel: 5.10.7-200.fc33.x86_64
Uptime: 52 mins
Packages: 2218 (rpm), 58 (flatpak)
Shell: zsh 5.8
Resolution: 1920x1080
DE: GNOME 3.38.3
WM: Mutter
WM Theme: Adwaita
Theme: Adwaita [GTK2/3]
Icons: Adwaita [GTK2/3]
Terminal: gnome-terminal
CPU: Intel i7-10875H (16) @ 5.100GHz
GPU: Intel UHD Graphics
GPU: NVIDIA GeForce RTX 2060 Mobile
Memory: 4433MiB / 31977MiB
➜ Downloads uname -a
Linux oryx-fedora 5.10.7-200.fc33.x86_64 #1 SMP Tue Jan 12 20:20:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
➜ Downloads cat /etc/fedora-release
Fedora release 33 (Thirty Three)
What happened?
I downloaded k0s from the GitHub Releases page, made the binary executable with chmod u+x k0s-v0.9.1-amd64, and then ran the commands from the single-node docs:
- https://docs.k0sproject.io/v0.9.1/k0s-single-node/
When running k0s server with the options below, I get a huge amount of debug output (~65k lines in one minute). I can query the instance with kubectl and list the nodes, but when I apply Kubernetes manifests, the pods stay Pending for a long time. Describing any of the pods shows:
Warning FailedScheduling 24m default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
There is a ton of output and I'm not sure what to grab specifically, so under Additional context I've included every line that contained "error".
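A quick way to confirm the taint and see why the node reports NotReady (node name oryx-fedora taken from the uname output above; adjust to whatever kubectl get nodes shows):

kubectl describe node oryx-fedora | grep -i -A 2 taints
kubectl get node oryx-fedora -o jsonpath='{range .status.conditions[*]}{.type}={.status}: {.message}{"\n"}{end}'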
How To Reproduce
Download k0s
curl --location -O https://github.com/k0sproject/k0s/releases/download/v0.9.1/k0s-v0.9.1-amd64
Make executable
chmod u+x k0s-v0.9.1-amd64
Run the commands given in the docs
mkdir -p ~/.k0s
./k0s-v0.9.1-amd64 default-config | tee ~/.k0s/k0s.yaml
Run in the foreground so I can see what it's doing
sudo ./k0s-v0.9.1-amd64 server -c "${HOME}/.k0s/k0s.yaml" --enable-worker
Then open a new tab and follow the rest of the docs
sudo cat /var/lib/k0s/pki/admin.conf | tee ~/.k0s/kubeconfig
export KUBECONFIG="${HOME}/.k0s/kubeconfig"
kubectl get pods --all-namespaces
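As an extra sanity check (not part of the docs flow), the node's registration and readiness can be confirmed with:

kubectl get nodes -o wide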
This is the output of kubectl get pods --all-namespaces
➜ Downloads kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-5f6546844f-hzpnc 0/1 Pending 0 38m
kube-system calico-node-v7g6j 0/1 Pending 0 38m
kube-system coredns-5c98d7d4d8-v2mkv 0/1 Pending 0 38m
kube-system kube-proxy-rmp9h 0/1 Pending 0 38m
kube-system metrics-server-7d4bcb75dd-8vq7l 0/1 Pending 0 38m
Expected behavior
All pods should be running and Ready.
Screenshots & Logs
kubectl describe output for one of the Pending pods:
➜ Downloads kubectl describe pod calico-kube-controllers-5f6546844f-hzpnc --namespace=kube-system
Name: calico-kube-controllers-5f6546844f-hzpnc
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: <none>
Labels: k8s-app=calico-kube-controllers
pod-template-hash=5f6546844f
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/calico-kube-controllers-5f6546844f
Containers:
calico-kube-controllers:
Image: calico/kube-controllers:v3.16.2
Port: <none>
Host Port: <none>
Readiness: exec [/usr/bin/check-status -r] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
ENABLED_CONTROLLERS: node
DATASTORE_TYPE: kubernetes
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from calico-kube-controllers-token-lrb4g (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
calico-kube-controllers-token-lrb4g:
Type: Secret (a volume populated by a Secret)
SecretName: calico-kube-controllers-token-lrb4g
Optional: false
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly op=Exists
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 31m default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Warning FailedScheduling 13m default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Warning FailedScheduling 6m52s default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Warning FailedScheduling 40m (x2 over 40m) default-scheduler no nodes available to schedule pods
Warning FailedScheduling 32m (x10 over 40m) default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
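Since the not-ready taint tracks the node's Ready condition, the node's own description should say why the kubelet never became ready; something like this should surface it (same caveat about the node name):

kubectl describe node oryx-fedora
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp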
Additional context
This is the output I collected over one minute, grepping for "error":
➜ Downloads sudo ./k0s-v0.9.1-amd64 server -c "${HOME}/.k0s/k0s.yaml" --enable-worker --debug | grep -i error
[sudo] password for filbot:
time="2021-01-18 18:24:08" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
time="2021-01-18 18:24:08" level=info msg=" --alsologtostderr log to standard error as well as files" component=konnectivity
time="2021-01-18 18:24:08" level=info msg=" --logtostderr log to standard error instead of files (default true)" component=konnectivity
time="2021-01-18 18:24:08" level=info msg="E0118 18:24:08.391051 23336 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
I0118 18:24:08.393781 23288 leaderelection.go:243] attempting to acquire leader lease kube-node-lease/k0s-manifest-applier...
E0118 18:24:08.394891 23288 leaderelection.go:321] error retrieving resource lock kube-node-lease/k0s-manifest-applier: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k0s-manifest-applier": dial tcp [::1]:6443: connect: connection refused
I0118 18:24:08.394978 23288 leaderelection.go:243] attempting to acquire leader lease kube-node-lease/k0s-endpoint-reconciler...
E0118 18:24:08.395321 23288 leaderelection.go:321] error retrieving resource lock kube-node-lease/k0s-endpoint-reconciler: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k0s-endpoint-reconciler": dial tcp [::1]:6443: connect: connection refused
time="2021-01-18 18:24:08" level=info msg="time=\"2021-01-18T18:24:08.447461230-06:00\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" error=\"aufs is not supported (modprobe aufs failed: exit status 1 \\\"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.7-200.fc33.x86_64\\\\n\\\"): skip plugin\" type=io.containerd.snapshotter.v1" component=containerd
time="2021-01-18 18:24:08" level=info msg="time=\"2021-01-18T18:24:08.447632849-06:00\" level=warning msg=\"failed to load plugin io.containerd.snapshotter.v1.devmapper\" error=\"devmapper not configured\"" component=containerd
time="2021-01-18 18:24:08" level=info msg="time=\"2021-01-18T18:24:08.447797226-06:00\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" error=\"path /var/lib/k0s/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1" component=containerd
time="2021-01-18 18:24:08" level=info msg="time=\"2021-01-18T18:24:08.447829825-06:00\" level=warning msg=\"could not use snapshotter devmapper in metadata plugin\" error=\"devmapper not configured\"" component=containerd
time="2021-01-18 18:24:08" level=info msg="E0118 18:24:08.827807 23335 instance.go:392] Could not construct pre-rendered responses for ServiceAccountIssuerDiscovery endpoints. Endpoints will not be enabled. Error: issuer URL must use https scheme, got: api" component=kube-apiserver
time="2021-01-18 18:24:10" level=info msg="E0118 18:24:10.590365 23335 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.86.23, ResourceVersion: 0, AdditionalErrorMsg: " component=kube-apiserver
time="2021-01-18 18:24:10" level=info msg="E0118 18:24:10.595759 23338 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io \"kube-controller-manager\" is forbidden: User \"system:kube-controller-manager\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-system\"" component=kube-controller-manager
time="2021-01-18 18:24:10" level=info msg="W0118 18:24:10.596559 23337 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" component=kube-scheduler
time="2021-01-18 18:24:11" level=info msg="\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" component=kubelet
time="2021-01-18 18:24:12" level=info msg="E0118 18:24:12.148901 23335 controller.go:116] loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable" component=kube-apiserver
time="2021-01-18 18:24:13" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
time="2021-01-18 18:24:13" level=info msg=" --alsologtostderr log to standard error as well as files" component=konnectivity
time="2021-01-18 18:24:13" level=info msg=" --logtostderr log to standard error instead of files (default true)" component=konnectivity
time="2021-01-18 18:24:13" level=info msg="E0118 18:24:13.408742 23582 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
time="2021-01-18 18:24:16" level=info msg="\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" component=kubelet
I0118 18:24:17.054930 23288 leaderelection.go:253] successfully acquired lease kube-node-lease/k0s-manifest-applier
2021-01-18 18:24:17.055149 I | [INFO] acquired leader lease
time="2021-01-18 18:24:18" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
time="2021-01-18 18:24:18" level=info msg=" --alsologtostderr log to standard error as well as files" component=konnectivity
time="2021-01-18 18:24:18" level=info msg=" --logtostderr log to standard error instead of files (default true)" component=konnectivity
time="2021-01-18 18:24:18" level=info msg="E0118 18:24:18.445055 23731 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
I0118 18:24:19.050759 23288 leaderelection.go:253] successfully acquired lease kube-node-lease/k0s-endpoint-reconciler
2021-01-18 18:24:19.050854 I | [INFO] acquired leader lease
E0118 18:24:22.064151 23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:22.064469 23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:22.064521 23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:22.064553 23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:22.064675 23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:22.069556 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:22.069589 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:22.069679 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:22.069717 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:22.069781 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0118 18:24:22.073177 23288 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
E0118 18:24:22.073916 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:22.077145 23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2021-01-18 18:24:22" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=helm
E0118 18:24:22.078964 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:22.079295 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2021-01-18 18:24:22" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=konnectivity
E0118 18:24:22.079815 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2021-01-18 18:24:22" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=bootstraprbac
E0118 18:24:22.083892 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2021-01-18 18:24:22" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=kubelet
E0118 18:24:22.085218 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2021-01-18 18:24:22" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=defaultpsp
time="2021-01-18 18:24:22" level=info msg="\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" component=kubelet
E0118 18:24:23.090674 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2021-01-18 18:24:23" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=calico_init
I0118 18:24:23.272169 23288 request.go:645] Throttling request took 1.190265199s, request: GET:https://localhost:6443/api/v1/serviceaccounts?limit=1
E0118 18:24:23.422695 23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:23.424184 23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:23.432574 23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:23.436373 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:23.437268 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:23.438793 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:24:23.439851 23288 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0118 18:24:23.442483 23288 warnings.go:67] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0118 18:24:23.443998 23288 warnings.go:67] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
E0118 18:24:23.444469 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0118 18:24:23.445815 23288 warnings.go:67] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0118 18:24:23.449173 23288 warnings.go:67] rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
W0118 18:24:23.452038 23288 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
E0118 18:24:23.454191 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2021-01-18 18:24:23" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=kubeproxy
E0118 18:24:23.456863 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2021-01-18 18:24:23" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=coredns
E0118 18:24:23.459208 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2021-01-18 18:24:23" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=metricserver
time="2021-01-18 18:24:23" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
time="2021-01-18 18:24:23" level=info msg=" --alsologtostderr log to standard error as well as files" component=konnectivity
time="2021-01-18 18:24:23" level=info msg=" --logtostderr log to standard error instead of files (default true)" component=konnectivity
time="2021-01-18 18:24:23" level=info msg="E0118 18:24:23.461497 23892 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
E0118 18:24:23.462609 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2021-01-18 18:24:23" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=calico
time="2021-01-18 18:24:28" level=info msg="\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" component=kubelet
time="2021-01-18 18:24:28" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
time="2021-01-18 18:24:28" level=info msg=" --alsologtostderr log to standard error as well as files" component=konnectivity
time="2021-01-18 18:24:28" level=info msg=" --logtostderr log to standard error instead of files (default true)" component=konnectivity
time="2021-01-18 18:24:28" level=info msg="E0118 18:24:28.498272 24048 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
time="2021-01-18 18:24:33" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
time="2021-01-18 18:24:33" level=info msg=" --alsologtostderr log to standard error as well as files" component=konnectivity
time="2021-01-18 18:24:33" level=info msg=" --logtostderr log to standard error instead of files (default true)" component=konnectivity
time="2021-01-18 18:24:33" level=info msg="E0118 18:24:33.536981 24142 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
time="2021-01-18 18:24:33" level=info msg="\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" component=kubelet
E0118 18:24:33.854203 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0118 18:24:33.860199 23288 warnings.go:67] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
E0118 18:24:33.864605 23288 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2021-01-18 18:24:33" level=debug msg="error in api discovery for pruning: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" stack=metricserver
I0118 18:24:35.055628 23288 request.go:645] Throttling request took 1.189638349s, request: GET:https://localhost:6443/api/v1/events?limit=1
time="2021-01-18 18:24:38" level=info msg="Error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
time="2021-01-18 18:24:38" level=info msg=" --alsologtostderr log to standard error as well as files" component=konnectivity
time="2021-01-18 18:24:38" level=info msg=" --logtostderr log to standard error instead of files (default true)" component=konnectivity
time="2021-01-18 18:24:38" level=info msg="E0118 18:24:38.577068 24218 main.go:69] error: failed to run the master server: failed to get uds listener: failed to listen(unix) name /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: listen unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: bind: address already in use" component=konnectivity
time="2021-01-18 18:24:39" level=info msg="\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" component=kubelet
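The repeating konnectivity failure looks to me like a stale unix socket left behind by an earlier run, with the component crash-looping because the bind keeps failing. This is only my guess, but the socket path is right there in the log lines, so it can be checked with:

ls -l /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock
sudo ss -xlp | grep konnectivity

If no live process is listening on it, removing the stale socket before restarting k0s might break the loop (again, an assumption on my part, not something from the docs):

sudo rm /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock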
Labels: bug, need more info