Lightweight Kubernetes

Overview

K3s - Lightweight Kubernetes

Lightweight Kubernetes. Production ready, easy to install, half the memory, all in a binary of less than 100 MB.

Great for:

  • Edge
  • IoT
  • CI
  • Development
  • ARM
  • Embedding k8s
  • Situations where a PhD in k8s clusterology is infeasible

What is this?

K3s is a fully conformant production-ready Kubernetes distribution with the following changes:

  1. It is packaged as a single binary.
  2. It adds support for sqlite3 as the default storage backend; etcd3, MySQL, and Postgres are also supported (see the sketch after this list).
  3. It wraps Kubernetes and other components in a single, simple launcher.
  4. It is secure by default with reasonable defaults for lightweight environments.
  5. It has minimal to no OS dependencies (just a sane kernel and cgroup mounts needed).
  6. It eliminates the need to expose a port on Kubernetes worker nodes for the kubelet API by exposing this API to the Kubernetes control plane nodes over a websocket tunnel.
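
A minimal sketch of the storage-backend choice from point 2: an external datastore can be selected with the server's --datastore-endpoint flag (the MySQL DSN below is an illustrative placeholder, not a working endpoint):

sudo k3s server --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name"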

K3s bundles additional technologies (such as containerd, Flannel, CoreDNS, and Traefik) together into a single cohesive distribution.

These technologies can be disabled or swapped out for technologies of your choice.

Additionally, K3s simplifies Kubernetes operations by maintaining functionality for:

  • Managing the TLS certificates of Kubernetes components
  • Managing the connection between worker and server nodes
  • Auto-deploying Kubernetes resources from local manifests, in real time as they are changed (see the sketch after this list)
  • Managing an embedded etcd cluster (work in progress)
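
A minimal sketch of the manifest auto-deploy behaviour noted above, assuming an nginx.yaml manifest of your own as a placeholder:

# Copy a manifest into the server's watched manifests directory
sudo cp nginx.yaml /var/lib/rancher/k3s/server/manifests/
# K3s applies it automatically; verify with
sudo k3s kubectl get deploy --all-namespaces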

What's with the name?

We wanted an installation of Kubernetes that was half the size in terms of memory footprint. Kubernetes is a 10 letter word stylized as k8s. So something half as big as Kubernetes would be a 5 letter word stylized as K3s. There is no long form of K3s and no official pronunciation.

Is this a fork?

No, it's a distribution. A fork implies continued divergence from the original. This is not K3s's goal or practice. K3s explicitly intends not to change any core Kubernetes functionality. We seek to remain as close to upstream Kubernetes as possible. We do maintain a small set of patches (well under 1000 lines) important to K3s's use case and deployment model. We maintain patches for other components as well. When possible, we contribute these changes back to the upstream projects, for example with SELinux support in containerd. This is a common practice amongst software distributions.

K3s is a distribution because it packages additional components and services necessary for a fully functional cluster that go beyond vanilla Kubernetes. These are opinionated choices on technologies for components like ingress, storage class, network policy, service load balancer, and even container runtime. These choices and technologies are touched on in more detail in the What is this? section.

How is this lightweight or smaller than upstream Kubernetes?

There are two major ways that K3s is lighter weight than upstream Kubernetes:

  1. The memory footprint to run it is smaller
  2. The binary, which contains all the non-containerized components needed to run a cluster, is smaller

The memory footprint is reduced primarily by running many components inside of a single process. This eliminates significant overhead that would otherwise be duplicated for each component.

The binary is smaller by removing third-party storage drivers and cloud providers, which is explained in more detail below.

What have you removed from upstream Kubernetes?

This is a common point of confusion because it has changed over time. Early versions of K3s had much more removed than the current version. K3s currently removes two things:

  1. In-tree storage drivers
  2. In-tree cloud provider

Both of these have out-of-tree alternatives in the form of CSI and CCM, which work in K3s and which upstream is moving towards.

We remove these to achieve a smaller binary size. They can be removed while remaining conformant because neither affects core Kubernetes functionality. They are also dependent on third-party cloud or data center technologies/services, which may not be available in many of K3s's use cases.

What's next?

Check out our roadmap to see what we have planned moving forward.

Release cadence

K3s maintains pace with upstream Kubernetes releases. Our goal is to release patch releases on the same day as upstream and minor releases within a few days.

Our release versioning reflects the version of upstream Kubernetes that is being released. For example, the K3s release v1.18.6+k3s1 maps to the v1.18.6 Kubernetes release. We add a postfix in the form of +k3s<number> to allow us to make additional releases using the same version of upstream Kubernetes, while remaining semver compliant. For example, if we discovered a high severity bug in v1.18.6+k3s1 and needed to release an immediate fix for it, we would release v1.18.6+k3s2.

Documentation

Please see the official docs site for complete documentation.

Quick-Start - Install Script

The install.sh script provides a convenient way to download K3s and add a service to systemd or openrc.

To install K3s as a service, just run:

curl -sfL https://get.k3s.io | sh -

A kubeconfig file is written to /etc/rancher/k3s/k3s.yaml and the service is automatically started or restarted. The install script installs K3s along with additional utilities such as kubectl, crictl, k3s-killall.sh, and k3s-uninstall.sh. For example:

sudo kubectl get nodes
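
If you prefer a kubectl installed outside of K3s, the kubeconfig written above can be used directly; a minimal sketch (reading the file may require sudo or copying it with suitable permissions):

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get pods --all-namespaces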

K3S_TOKEN is created at /var/lib/rancher/k3s/server/node-token on your server. To install on worker nodes, pass the K3S_URL along with the K3S_TOKEN or K3S_CLUSTER_SECRET environment variable, for example:

curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=XXX sh -
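
For example, the token referenced above can be read on the server before running the worker install:

sudo cat /var/lib/rancher/k3s/server/node-token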

Manual Download

  1. Download k3s from the latest release; x86_64, armhf, and arm64 are supported.
  2. Run server.
sudo k3s server &
# Kubeconfig is written to /etc/rancher/k3s/k3s.yaml
sudo k3s kubectl get nodes

# On a different node run the below. NODE_TOKEN comes from
# /var/lib/rancher/k3s/server/node-token on your server
sudo k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}

Contributing

Please check out our contributing guide if you're interested in contributing to K3s.

Security

Security issues in K3s can be reported by sending an email to [email protected]. Please do not report security vulnerabilities through public GitHub issues.

Issues
  • Support for ipv6 Kubernetes cluster

    Support for ipv6 Kubernetes cluster

    Is your feature request related to a problem? Please describe.

    Since 1.9, K8s has supported ipv6-only, but it is still in alpha after 5 minor releases and more than 1.5 years. In that sense it does not fit the k3s concept of "no alpha features". However, the main reason for the lingering alpha state is lack of e2e testing, which is being aggressively addressed now for the upcoming dual-stack support in k8s.

    Bringing up an ipv6-only k8s cluster is currently not for the faint-hearted, and I think that if the simplicity of k3s could also cover ipv6, it would be greatly appreciated. Also, with dual-stack on the way, IMHO support for ipv6-only is an important proactive step.

    Describe the solution you'd like

    A --ipv6 option :smile:

    This would set up node addresses, service and pod CIDRs, etc. with ipv6 addresses, but keep image loading (containerd) configured for ipv4. Image loading should stay on ipv4 because the internet and ISPs are still mostly ipv4-only, and for ipv6 users the way images get loaded is of no concern.

    A requirement will then be that the nodes running k3s must have a working dual-stack setup (but the k8s cluster would be ipv6-only).

    Describe alternatives you've considered

    The "not for the faint hearted" does not mean that setting up an ipv6-only k8s cluster is particularly complex, more that most users have a fear of the unknown and that support in the popular installation tools is lacking or not working. To setup k8s for ipv6-only is basically just to provide ipv6 addresses in all configuration and command line options. That may even be possible without modifications to k3s (I have not yet tried). It may be more complex to support the "extras" such as the internal lb and traefik, so I would initially say that those are not supported for ipv6. Coredns with ipv6 in k3s should be supported though (coredns supports ipv6 already).

    The flannel CNI plugin AFAIK does not support ipv6 (issue). So the --no-flannel flag must be specified and a CNI plugin with ipv6 support must be used.
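
    A hedged sketch of the direction described above, using the flag mentioned in this issue plus the standard CIDR options (the addresses are illustrative placeholders; whether every component accepts ipv6 values is exactly what this issue is about):

     # Illustrative only: run the server without flannel and with ipv6 CIDRs,
     # then install an ipv6-capable CNI plugin separately
     k3s server --no-flannel \
       --cluster-cidr=fd00:10:42::/56 \
       --service-cidr=fd00:10:43::/112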

    Additional context

    I will start experimenting with this and possibly come up with some PRs. The amount of time I can spend may be limited.

    I am currently adding k3s in my xcluster environment where I already have ipv6-only support in my own k8s setup.

    kind/enhancement 
    opened by uablrek 77
  • CPU and memory usage of k3s

    CPU and memory usage of k3s

    Environmental Info: K3s Version: k3s version v1.18.8+k3s1 (6b595318)
    Running on CentOS 7.8

    Node(s) CPU architecture, OS, and Version: Linux k3s 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 25 17:23:54 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
    AMD GX-412TC SOC with 2GB RAM

    Cluster Configuration: Single node installation

    Describe the bug:

    When deploying the latest stable k3s on a single node, the CPU and memory usage can look significant. I understand that Kubernetes isn't lightweight by definition, but k3s is really interesting for creating/deploying appliances. On small (embedded) systems, the default CPU and memory usage matters (I'm not speaking here of modern servers). Is there a way to optimize this resource usage, or at least to understand how k3s uses resources when nothing is deployed?

    Steps To Reproduce:

    • Installed K3s:
      curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --no-deploy traefik" sh
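
    A hedged aside: on releases of that era, the same --no-deploy form used above could also skip other packaged add-ons to trim the idle footprint (component names assumed; newer releases use --disable instead):

      curl -sfL https://get.k3s.io | \
        INSTALL_K3S_EXEC="server --no-deploy traefik --no-deploy servicelb --no-deploy metrics-server" sh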

    Expected behavior:

    Maybe less CPU and memory usage when nothing is deployed and running

    Actual behavior:

    500MB of memory used and 5% of CPU usage on each core (4 cores CPU) when idle

    Additional context / logs:

    opened by sraillard 72
  • Getting Real Client IP with k3s

    Getting Real Client IP with k3s

    Is your feature request related to a problem? Please describe. I am unable to obtain the real client IP when using k3s and Traefik v2.2; I always get the cluster IP.

    Kernel version
    4.4.0-174-generic
    OS Image
    Ubuntu 16.04.6 LTS
    Container runtime version
    containerd://1.3.0-k3s.4
    kubelet version
    v1.16.3-k3s.2
    kube-proxy version
    v1.16.3-k3s.2
    Operating system
    linux
    Architecture
    amd64
    
    
    Images
    traefik:2.2.0
    

    Describe the solution you'd like I would like to obtain the client IP.

    Describe alternatives you've considered I already set externalTrafficPolicy: Local in Traefik's Service.

    Additional context The issue can be reproduced by deploying the containous/whoami image in the cluster.

    Expected Response
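
    For reference, a minimal sketch of applying the externalTrafficPolicy setting mentioned above with kubectl patch (assuming the bundled Traefik service keeps its default traefik name in the kube-system namespace):

     kubectl -n kube-system patch svc traefik -p '{"spec":{"externalTrafficPolicy":"Local"}}'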

    Hostname: a19d325823bb
    IP: 127.0.0.1
    IP: 10.0.0.147
    IP: 172.18.0.4
    RemoteAddr: 10.0.0.144:56246
    GET / HTTP/1.1
    Host: whoami.civo.com
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
    Accept-Encoding: gzip, deflate, br
    Accept-Language: en-US,en;q=0.9
    Sec-Fetch-Dest: document
    Sec-Fetch-Mode: navigate
    Sec-Fetch-Site: none
    Upgrade-Insecure-Requests: 1
    X-Apache-Ip: 102.69.228.66
    X-Forwarded-For: 102.69.228.66, 102.69.228.66, 172.18.0.1
    X-Forwarded-Host: whoami.civo.com
    X-Forwarded-Port: 443
    X-Forwarded-Proto: https
    X-Forwarded-Server: bc3b51f28353
    X-Real-Ip: 102.69.228.66
    

    Current Response

    Hostname: whoami-76d6dfb846-jltlm
    IP: 127.0.0.1
    IP: ::1
    IP: 192.168.0.33
    IP: fe80::7863:88ff:fe45:2ad5
    RemoteAddr: 192.168.1.4:36146
    GET / HTTP/1.1
    Host: who.civo.com
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
    Accept-Encoding: gzip, deflate, br
    Accept-Language: en-US,en;q=0.9
    Sec-Fetch-Dest: document
    Sec-Fetch-Mode: navigate
    Sec-Fetch-Site: none
    Upgrade-Insecure-Requests: 1
    X-Forwarded-For: 192.168.0.5
    X-Forwarded-Host: who.civo.com
    X-Forwarded-Port: 443
    X-Forwarded-Proto: https
    X-Forwarded-Server: traefik-8477c7d8f-fbhdg
    X-Real-Ip: 192.168.0.5
    

    Service LoadBalancer Logs

    + trap exit TERM INT
    /usr/bin/entry: line 6: can't create /proc/sys/net/ipv4/ip_forward: Read-only file system
    + echo 1
    + true
    + cat /proc/sys/net/ipv4/ip_forward
    + '[' 1 '!=' 1 ]
    + iptables -t nat -I PREROUTING '!' -s 192.168.183.229/32 -p TCP --dport 8080 -j DNAT --to 192.168.183.229:8080
    + iptables -t nat -I POSTROUTING -d 192.168.183.229/32 -p TCP -j MASQUERADE
    + '[' '!' -e /pause ]
    + mkfifo /pause
    

    Related https://github.com/rancher/k3s/pull/955 Related Discussion https://github.com/rancher/k3s/issues/679#issuecomment-516367437

    @erikwilson @btashton

    kind/enhancement kind/question 
    opened by jawabuu 71
  • Cadvisor not reporting Container/Image metadata

    Cadvisor not reporting Container/Image metadata

    Describe the bug When making the call to retrieve metrics via Cadvisor, the container and image values are empty in all entries.

    container_tasks_state{container="",container_name="",id="/system.slice/lxd.socket",image="",name="",namespace="",pod="",pod_name="",state="running"} 0 1557525150119
    

    To Reproduce Install k3s via multipass https://medium.com/@zhimin.wen/running-k3s-with-multipass-on-mac-fbd559966f7c

    kubectl get --raw /api/v1/nodes/k3s/proxy/metrics/cadvisor
    

    Expected behavior The container and image values should be populated.

    Additional context Wondering if it might be related to https://github.com/rancher/k3s/issues/213

    kind/bug 
    opened by cfchad 62
  • k3s causes a high load average

    k3s causes a high load average

    Describe the bug I'm not sure if it's a bug, but I think it's not expected behaviour. When running k3s on any computer, it causes a very high load average. To give a concrete example, I'll explain the situation on my Raspberry Pi 3 node.

    When running k3s, I have a load average usage of:

    load average: 2.69, 1.52, 1.79
    

    Without running it, but still having the containers up, I have a load average of:

    load average: 0.24, 1.01, 1.72
    

    To Reproduce I just run it without any special arguments, just as it is installed by the sh installer.

    Expected behavior The load average should be under 1.

    status/more-info 
    opened by drym3r 60
  • Can't reach internet from pod / container

    Can't reach internet from pod / container

    Environmental Info: K3s Version:

    k3s -v
    k3s version v1.22.7+k3s1 (8432d7f2)
    go version go1.16.10
    

    Host OS Version:

    cat /etc/os-release 
    NAME="Ubuntu"
    VERSION="20.04.4 LTS (Focal Fossa)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 20.04.4 LTS"
    VERSION_ID="20.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=focal
    UBUNTU_CODENAME=focal
    

    IP Forwarding:

    # sysctl net.ipv4.ip_forward
    net.ipv4.ip_forward = 1
    

    Node(s) CPU architecture, OS, and Version:

    Linux ansible-awx 5.4.0-105-generic #119-Ubuntu SMP Mon Mar 7 18:49:24 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    

    Cluster Configuration: Single node.

    # k3s kubectl get nodes -o wide
    NAME          STATUS   ROLES                  AGE     VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
    ansible-awx   Ready    control-plane,master   5d10h   v1.22.7+k3s1   10.164.12.6   <none>        Ubuntu 20.04.4 LTS   5.4.0-105-generic   containerd://1.5.9-k3s1
    

    Describe the bug: I cannot connect to the internet from within the pod / container:

    # time curl https://www.google.de
    curl: (7) Failed to connect to www.google.de port 443: Connection timed out
    
    real    2m11.892s
    user    0m0.005s
    sys     0m0.005s
    

    Steps To Reproduce: Install a one-node k3s cluster with curl -sfL https://get.k3s.io | sh on an Ubuntu 20.04 VM. Set up a simple workload (in my case AWX - https://github.com/ansible/awx-operator#basic-install-on-existing-cluster). Enter a container and try to access the internet (for example with curl to a public address).

    Expected behavior: Accessing the internet should work the same way as it does from the host.

    Actual behavior: No connectivity to the internet from the pod / container at all.

    Additional context / logs:

    # cat /etc/resolv.conf 
    search awx.svc.cluster.local svc.cluster.local cluster.local mydomain.com
    nameserver 10.43.0.10
    options ndots:5
    
    opened by apiening 58
  • Formally add support for CentOS 7

    Formally add support for CentOS 7

    We need to expand our testing and identify any issues that prevent us from formally supporting CentOS. Keep in mind K3s is expected to work fine on CentOS 7. This issue is to track the testing effort required to formally support and certify the operating system (See https://rancher.com/docs/k3s/latest/en/installation/node-requirements/#operating-systems )

    Currently there are existing issues with the os/centos label, but note that these issues are not all necessarily caused just by using CentOS. As such, it makes sense to review those GitHub issues, but we also need to execute some testing and identify any other issues. Where needed, we'll resolve these issues so we can fully support CentOS.

    SELinux support is also needed, which is tracked separately here: https://github.com/rancher/k3s/issues/1372

    gz#9311

    gz#9743

    kind/enhancement os/centos internal 
    opened by davidnuzik 58
  • K3s Install on Raspberry Pi 4b failed (TLS Handshake Timeout pi3, pi4, etc)

    K3s Install on Raspberry Pi 4b failed (TLS Handshake Timeout pi3, pi4, etc)

    Version (output from k3s -v):

    [email protected]:/home/pi# k3s -v
    k3s version v0.10.0 (f9888ca3)
    

    OS version: Linux raspberrypi 4.19.75-v7l+ #1270 SMP Tue Sep 24 18:51:41 BST 2019 armv7l

    Bootloader version:

    [email protected]:~# vcgencmd bootloader_version
    Sep 10 2019 10:41:50
    version f626c772b15ba1b7e0532a8d50a761b3ccbdf3bb (release)
    timestamp 1568112110
    

    Describe the bug After running the install command "curl -sfL https://get.k3s.io | sh -", the installation can't be completed and a TLS handshake timeout error is reported.

    To Reproduce Run the command 'curl -sfL https://get.k3s.io | sh -' on a Raspberry Pi 4b with 4 GB of memory.

    Expected behavior The installation completes successfully.

    Actual behavior TLS handshake timeout.

    Additional context I put some error logs below; hope they can help:

    [email protected]:/home/pi# journalctl -u k3s.service
    -- Logs begin at Thu 2019-09-26 01:24:23 BST, end at Sun 2019-10-27 01:22:17 GMT. --
    Oct 27 01:19:58 raspberrypi systemd[1]: Starting Lightweight Kubernetes...
    Oct 27 01:19:58 raspberrypi k3s[3688]: time="2019-10-27T01:19:58Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/3f43b16ca97dbb7ba58868cdb2137a72ad7215762a2852ed944237bf45d44f07"
    Oct 27 01:20:13 raspberrypi k3s[3688]: time="2019-10-27T01:20:13.437098936Z" level=info msg="Starting k3s v0.10.0 (f9888ca3)"
    Oct 27 01:20:13 raspberrypi k3s[3688]: time="2019-10-27T01:20:13.945042885Z" level=info msg="Kine listening on unix://kine.sock"
    Oct 27 01:20:13 raspberrypi k3s[3688]: time="2019-10-27T01:20:13.947965657Z" level=info msg="Fetching bootstrap data from etcd"
    Oct 27 01:20:15 raspberrypi k3s[3688]: time="2019-10-27T01:20:15.186636567Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib
    Oct 27 01:20:15 raspberrypi k3s[3688]: Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.
    Oct 27 01:20:15 raspberrypi k3s[3688]: I1027 01:20:15.189751    3688 server.go:650] external host was not specified, using 192.168.199.80
    Oct 27 01:20:15 raspberrypi k3s[3688]: I1027 01:20:15.191063    3688 server.go:162] Version: v1.16.2-k3s.1
    Oct 27 01:20:19 raspberrypi k3s[3688]: I1027 01:20:19.782703    3688 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultT
    Oct 27 01:20:19 raspberrypi k3s[3688]: I1027 01:20:19.782801    3688 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeCl
    Oct 27 01:20:19 raspberrypi k3s[3688]: I1027 01:20:19.785373    3688 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultT
    Oct 27 01:20:19 raspberrypi k3s[3688]: I1027 01:20:19.785425    3688 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeCl
    Oct 27 01:20:19 raspberrypi k3s[3688]: I1027 01:20:19.856982    3688 master.go:259] Using reconciler: lease
    Oct 27 01:20:19 raspberrypi k3s[3688]: I1027 01:20:19.966350    3688 rest.go:115] the default service ipfamily for this cluster is: IPv4
    Oct 27 01:20:20 raspberrypi k3s[3688]: W1027 01:20:20.788011    3688 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
    Oct 27 01:20:20 raspberrypi k3s[3688]: W1027 01:20:20.853703    3688 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
    Oct 27 01:20:20 raspberrypi k3s[3688]: W1027 01:20:20.919549    3688 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
    Oct 27 01:20:20 raspberrypi k3s[3688]: W1027 01:20:20.931880    3688 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
    Oct 27 01:20:20 raspberrypi k3s[3688]: W1027 01:20:20.973747    3688 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
    Oct 27 01:20:21 raspberrypi k3s[3688]: W1027 01:20:21.043638    3688 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
    Oct 27 01:20:21 raspberrypi k3s[3688]: W1027 01:20:21.043695    3688 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
    Oct 27 01:20:21 raspberrypi k3s[3688]: I1027 01:20:21.078307    3688 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultT
    Oct 27 01:20:21 raspberrypi k3s[3688]: I1027 01:20:21.078434    3688 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeCl
    Oct 27 01:20:21 raspberrypi k3s[3688]: time="2019-10-27T01:20:21.096613858Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
    Oct 27 01:20:21 raspberrypi k3s[3688]: time="2019-10-27T01:20:21.098945424Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/s
    Oct 27 01:20:21 raspberrypi k3s[3688]: I1027 01:20:21.119387    3688 controllermanager.go:161] Version: v1.16.2-k3s.1
    Oct 27 01:20:21 raspberrypi k3s[3688]: I1027 01:20:21.121660    3688 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
    Oct 27 01:20:21 raspberrypi k3s[3688]: I1027 01:20:21.127479    3688 server.go:143] Version: v1.16.2-k3s.1
    Oct 27 01:20:21 raspberrypi k3s[3688]: I1027 01:20:21.127709    3688 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
    Oct 27 01:20:21 raspberrypi k3s[3688]: W1027 01:20:21.139439    3688 authorization.go:47] Authorization is disabled
    Oct 27 01:20:21 raspberrypi k3s[3688]: W1027 01:20:21.139494    3688 authentication.go:79] Authentication is disabled
    Oct 27 01:20:21 raspberrypi k3s[3688]: I1027 01:20:21.139527    3688 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
    Oct 27 01:20:31 raspberrypi k3s[3688]: time="2019-10-27T01:20:31.111017958Z" level=fatal msg="starting tls server: Get https://127.0.0.1:6444/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions: net/http: TLS handshake timeout"
    Oct 27 01:20:31 raspberrypi systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
    Oct 27 01:20:31 raspberrypi systemd[1]: k3s.service: Failed with result 'exit-code'.
    Oct 27 01:20:31 raspberrypi systemd[1]: Failed to start Lightweight Kubernetes.
    Oct 27 01:20:36 raspberrypi systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
    Oct 27 01:20:36 raspberrypi systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1.
    Oct 27 01:20:36 raspberrypi systemd[1]: Stopped Lightweight Kubernetes.
    Oct 27 01:20:36 raspberrypi systemd[1]: Starting Lightweight Kubernetes...
    
    status/stale 
    opened by gm12367 58
  • Traefik 2.0 integration

    Traefik 2.0 integration

    Is your feature request related to a problem? Please describe. The tls-passthrough feature is missing. For example, installing argocd on the cluster is difficult due to the lack of this feature.

    Describe the solution you'd like Replace the current version (< 2.0) with the latest one (which reached GA with version 2.0).

    Describe alternatives you've considered Document a reproducible way to remove the current version in favor of the most up-to-date version.

    Additional context Traefik 2.0 seems more Kubernetes-friendly, so this seems to me a very natural step to take!

    kind/feature priority/important-soon 
    opened by Zikoel 50
  • Job for k3s.service failed because the control process exited with error code

    Job for k3s.service failed because the control process exited with error code

    Hello Team,

    Trying to run a k3s cluster on a Raspberry Pi using the official doc, but it causes this issue:

    ● k3s.service - Lightweight Kubernetes
       Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
       Active: failed (Result: exit-code) since Thu 2019-06-20 12:18:07 UTC; 4min 13s ago
         Docs: https://k3s.io
      Process: 1722 ExecStart=/usr/local/bin/k3s server (code=exited, status=1/FAILURE)
      Process: 1719 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
      Process: 1716 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
     Main PID: 1722 (code=exited, status=1/FAILURE)
          CPU: 2.150s

    Jun 20 12:18:06 master systemd[1]: k3s.service: Unit entered failed state.
    Jun 20 12:18:06 master systemd[1]: k3s.service: Failed with result 'exit-code'.
    Jun 20 12:18:07 master systemd[1]: k3s.service: Service hold-off time over, scheduling restart.
    Jun 20 12:18:07 master systemd[1]: Stopped Lightweight Kubernetes.
    Jun 20 12:18:07 master systemd[1]: k3s.service: Start request repeated too quickly.
    Jun 20 12:18:07 master systemd[1]: Failed to start Lightweight Kubernetes.
    Jun 20 12:18:07 master systemd[1]: k3s.service: Unit entered failed state.
    Jun 20 12:18:07 master systemd[1]: k3s.service: Failed with result 'exit-code'.

    status/more-info 
    opened by Aliabbask08 44
  • Embedded etcd server does not account for exceeding database space

    Embedded etcd server does not account for exceeding database space

    Describe the bug: When you run k3s long enough with the etcd store, you are probably going to see this:

     Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
     {"level":"warn","ts":"2021-12-19T17:34:48.509+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc01a9ac000/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = ResourceExhausted desc = etcdserver: mvcc: database space exceeded"}
    panic: etcdserver: mvcc: database space exceeded
    goroutine 417 [running]:
    github.com/rancher/k3s/pkg/cluster.(*Cluster).Start.func1(0xc0336094a0, 0x5959b78, 0xc000936900, 0xc00159cc80)
             /go/src/github.com/rancher/k3s/pkg/cluster/cluster.go:103 +0x1e5
     created by github.com/rancher/k3s/pkg/cluster.(*Cluster).Start
            /go/src/github.com/rancher/k3s/pkg/cluster/cluster.go:98 +0x6bf
    

    Steps To Reproduce: Find a k3s server with a large enough etcd store.

    Expected behavior: k3s should automatically compact the etcd store and continue as usual; if not, it should start an emergency etcd server that allows operators to do some rescue work.

    Actual behavior: It just crashes, so I can't even do a manual compaction myself.
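
    As a hedged sketch of the manual rescue path being asked for, the standard etcd recovery steps are to compact to the current revision, defragment, and clear the NOSPACE alarm (the connection flags needed to reach K3s's embedded etcd, such as endpoints and client certificates, are omitted here and would have to be supplied):

     # Standard etcd recovery for "mvcc: database space exceeded" (connection flags omitted)
     rev=$(etcdctl endpoint status --write-out=json | grep -o '"revision":[0-9]*' | head -1 | grep -o '[0-9]*$')
     etcdctl compact "$rev"
     etcdctl defrag
     etcdctl alarm disarm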

    Backporting

    • [x] Needs backporting to older releases
    opened by stevefan1999-personal 43
  • Updated flannel to v0.19.1

    Updated flannel to v0.19.1

    Proposed Changes

    Update flannel version to v0.19.1

    Types of Changes

    CNI version update

    Verification

    Testing

    Linked Issues

    #5961

    User-Facing Change

    
    

    Further Comments

    opened by rbrtbnfgl 0
  • k3s fails to start with (slow?)nfs and sqlite datastore endpoint

    k3s fails to start with (slow?)nfs and sqlite datastore endpoint

    Environmental Info: K3s Version: rancher/k3s:v1.24.1-k3s1

    Node(s) CPU architecture, OS, and Version: Linux c1-0 5.15.0-40-generic #43-Ubuntu SMP Wed Jun 15 12:54:21 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

    Cluster Configuration: k3s is being deployed via loft/vcluster in a 3 node vanilla k8s cluster (1.24.2) running on ubuntu jammy (nodes like above)

    Describe the bug: The k3s controller pod continually crashes due to timeouts on startup. This appears to happen only when running the default sqlite datastore endpoint on a cluster with nfs as the default storage class.

    Steps To Reproduce:

    • have a cluster with nfs as the default storage class -- perhaps, judging from googling around on this, it may have to be a "slow" nfs store that is causing this? (this is a home-lab scenario running nfs off a TrueNAS VM in Proxmox with SSD passthrough disks; the k8s cluster/k3s pod is running on the same physical server on different Ubuntu VMs)
    • install vcluster into the cluster (via helm/loft/vcluster cli) with default settings (we use k3s as our default distro)

    Expected behavior: k3s controller starts up and operates normally

    Actual behavior: k3s pod crashes repeatedly, logs show loads of "etcd" (kine I guess) deadline exceeded errors, and we see controllers failing to register.

    Additional context / logs: Changing the datastore endpoint to etcd causes the vcluster/k3s pod to operate normally. Disabling storage persistence in vcluster also "fixes" this issue (meaning we don't run the datastore in a PVC, but just in an emptyDir).

    See issue in vcluster as well

    Full pod logs: localdir-logs.log

    And logs from a successful vcluster/k3s startup where we use the persistence=>false setting (so emptyDir instead of a PVC on nfs): pvc-logs.log

    opened by carlmontanari 4
  • CI: update Fedora 34 -> 36

    CI: update Fedora 34 -> 36

    Proposed Changes

    Update Fedora 34 to 36.

    Fedora 34 has reached EOL on 2022-06-07 https://docs.fedoraproject.org/en-US/releases/eol/

    Types of Changes

    CI

    Verification

    Observe the CI result

    Testing

    Observe the CI result

    Linked Issues

    🚫

    User-Facing Change

    🚫

    Further Comments

    🚫

    opened by AkihiroSuda 3