Complete container management platform

Overview

Rancher

Rancher is an open source project that provides a container management platform built for organizations that deploy containers in production. Rancher makes it easy to run Kubernetes everywhere, meet IT requirements, and empower DevOps teams.

Looking for Rancher 1.6.x info? Click here

Latest Release

  • Latest - v2.5.7 - rancher/rancher:latest - Read the full release notes.

  • Stable - v2.5.7 - rancher/rancher:stable - Read the full release notes.

To get automated notifications of our latest release, you can watch the announcements category in our forums, or subscribe to the RSS feed https://forums.rancher.com/c/announcements.rss.

Quick Start

sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher

Open your browser to https://localhost
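
The command above runs the latest tag. For anything beyond a quick test, it is common to pin the image to the stable tag (or a specific version from the table above) and to bind mount a host directory so Rancher's data survives container replacement; a hedged sketch, where the host path is an example and the in-container path follows the persistent-data guidance in the Rancher docs:

sudo docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher \
  --privileged \
  rancher/rancher:stable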

Installation

Rancher can be deployed in either a single node or multi-node setup. Please refer to the following for guides on how to get Rancher up and running.

No internet access? Refer to our Air Gap Installation for instructions on how to use your own private registry to install Rancher.
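
The release assets listed further down this page (rancher-images.txt, rancher-save-images.sh, and rancher-load-images.sh) are the pieces used to populate such a private registry. A hedged sketch of the usual flow, with the registry hostname as a placeholder:

./rancher-save-images.sh --image-list ./rancher-images.txt
./rancher-load-images.sh --image-list ./rancher-images.txt --registry registry.example.com:5000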

Minimum Requirements

  • Operating Systems
    • Ubuntu 16.04 (64-bit)
    • Red Hat Enterprise Linux 7.5 (64-bit)
    • RancherOS 1.4 (64-bit)
  • Hardware
    • 4 GB of Memory
  • Software
    • Docker v1.12.6, 1.13.1, 17.03.2

Using Rancher

To learn more about using Rancher, please refer to our Rancher Documentation.

Source Code

This repo is a meta-repo used for packaging and contains the majority of the Rancher codebase; the remainder lives in other Rancher projects.

Rancher also includes other open source libraries and projects; see go.mod for the full list.

Support, Discussion, and Community

If you need any help with Rancher or RancherOS, please join us at our Rancher forums, the #rancher IRC channel, or Slack, where most of our team hangs out.

Please submit any Rancher bugs, issues, and feature requests to rancher/rancher.

Please submit any RancherOS bugs, issues, and feature requests to rancher/os.

For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.

License

Copyright (c) 2014-2020 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Issues
  • rancher-server Observed a panic:

    rancher-server Observed a panic: "invalid memory address or nil pointer dereference"

    Rancher Server Setup

    • Rancher version: 2.6.0
    • Installation option (Docker install/Helm Chart): Helm Chart
      • If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): RKE2 1.21.4

    Information about the Cluster

    • Kubernetes version: RKE2 1.21.4
    • Cluster Type (Local/Downstream): Local

    Describe the bug When we try to restore the cluster using velero, rancher-server crashes randomly with the error below. Any idea what's causing this and how to fix it?

    We have observed that, even on the source cluster, the git-webhook-api-service service account has metadata.ownerReferences set to null, empty, or not defined at all.

    And this is not consistent: some restores succeed and some don't.

    2022/06/20 16:15:13 [INFO] Starting cluster.x-k8s.io/v1alpha3, Kind=Cluster controller
    2022/06/20 16:15:13 [INFO] Watching metadata for cluster.x-k8s.io/v1alpha3, Kind=MachineSet
    2022/06/20 16:15:13 [INFO] Watching metadata for cluster.x-k8s.io/v1alpha3, Kind=MachineHealthCheck
    2022/06/20 16:15:13 [INFO] Starting cluster.x-k8s.io/v1alpha3, Kind=MachineHealthCheck controller
    2022/06/20 16:15:13 [INFO] Starting cluster.x-k8s.io/v1alpha3, Kind=MachineSet controller
    2022/06/20 16:15:14 [ERROR] error syncing 'git-webhook': handler apiservice: failed to update cattle-system/git-webhook-api-service /v1, Kind=ServiceAccount for apiservice git-webhook: ServiceAccount "git-webhook-api-service" is invalid: [metadata.ownerReferences.apiVersion: Invalid value: "": version must not be empty, metadata.ownerReferences.kind: Invalid value: "": kind must not be empty, metadata.ownerReferences.name: Invalid value: "": name must not be empty], requeuing
    2022/06/20 16:15:14 [INFO] namespaceHandler: addProjectIDLabelToNamespace: adding label field.cattle.io/projectId=p-bcdbv to namespace=kube-system
    2022/06/20 16:15:15 [INFO] Updating global catalog library
    2022/06/20 16:15:15 [ERROR] error syncing 'git-webhook': handler apiservice: failed to update cattle-system/git-webhook-api-service /v1, Kind=ServiceAccount for apiservice git-webhook: ServiceAccount "git-webhook-api-service" is invalid: [metadata.ownerReferences.apiVersion: Invalid value: "": version must not be empty, metadata.ownerReferences.kind: Invalid value: "": kind must not be empty, metadata.ownerReferences.name: Invalid value: "": name must not be empty], requeuing
    2022/06/20 16:15:17 [INFO] kontainerdriver azurekubernetesservice listening on address 127.0.0.1:35675
    2022/06/20 16:15:17 [INFO] kontainerdriver googlekubernetesengine listening on address 127.0.0.1:41777
    2022/06/20 16:15:17 [INFO] kontainerdriver amazonelasticcontainerservice listening on address 127.0.0.1:45839
    E0620 16:15:17.118103      34 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
    goroutine 3105 [running]:
    k8s.io/apimachinery/pkg/util/runtime.logPanic(0x39b9940, 0x6920620)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x39b9940, 0x6920620)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    google.golang.org/grpc.(*Server).Stop(0x0)
    	/go/pkg/mod/google.golang.org/[email protected]/server.go:1482 +0x4a
    github.com/rancher/rancher/pkg/kontainer-engine/types.(*GrpcServer).Stop(...)
    	/go/src/github.com/rancher/rancher/pkg/kontainer-engine/types/rpc_server.go:157
    github.com/rancher/rancher/pkg/kontainer-engine/service.(*RunningDriver).Stop(0xc00bafcce0)
    	/go/src/github.com/rancher/rancher/pkg/kontainer-engine/service/service.go:491 +0x45
    github.com/rancher/rancher/pkg/kontainer-engine/service.(*EngineService).GetDriverCreateOptions(0xc00bafcf90, 0x48ec250, 0xc000060060, 0xc00e1badf8, 0x16, 0xc012c5a000, 0x0, 0x0, 0x0, 0x0, ...)
    

    To Reproduce Take a backup using velero and restore it using velero.
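
    A hedged sketch of what that reproduction step looks like with the velero CLI; the backup name and namespace selection below are placeholders, not values from the report:

    velero backup create rancher-backup --include-namespaces cattle-system
    velero restore create --from-backup rancher-backup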

    Result rancher-server crashes during the restore with a panic

    Expected Result rancher-server should be up and running

    opened by rajivml 0
  • Imported cluster agent cannot connect when applying import instructions twice

    Imported cluster agent cannot connect when applying import instructions twice

    Rancher Server Setup

    • Rancher version: 2.6.5
    • Installation option (Docker install/Helm Chart): Helm Chart
      • If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): Tested on RKE1, managed clusters (OVH, Civo) with k8s 1.20, 1.22 (k3s), 1.23
    • Proxy/Cert Details: cert-manager, with nginx and traefik

    Information about the Cluster

    • Kubernetes version: 1.20, 1.22, 1.23
    • Cluster Type (Local/Downstream): Downstream
      • If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider): Imported (OVH, Civo)

    User Information

    • What is the role of the user logged in? Admin

    Describe the bug When you apply the setup manifests twice (more than one time), the imported cluster cannot connect to Rancher anymore. I've tested with multiple k8s versions and providers, and I always hit the issue.

    To Reproduce

    1. Create a generic imported cluster
    2. Apply the setup manifests to install the cattle-cluster-agent on the downstream cluster (the registration command is sketched below), and see it registered to Rancher
    3. Apply the setup manifests a second time. The downstream cattle-cluster-agent restarts and cannot connect to Rancher anymore
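
    For a generic imported cluster, the setup manifests referred to above are applied on the downstream cluster with something like the following; the server URL and token are placeholders generated by Rancher for each cluster:

    curl --insecure -sfL https://<rancher-server>/v3/import/<token>.yaml | kubectl apply -f -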

    Result The downstream cluster is not connected anymore. I tried to uninstall and reinstall all the downstream cluster manifests (cattle-cluster-agent), but the cluster is still unable to connect.

    The only way to add it back to Rancher is to create a new cluster.

    Expected Result The downstream cluster should connect to rancher without issue.

    Additional context Some logs:

    1. When applying the setup manifests twice, I see that the deployment.apps/cattle-cluster-agent resource changes and uses a different cattle-credentials secret

    2. Logs on the downstream server:

    INFO: Environment: CATTLE_ADDRESS=10.42.2.9 CATTLE_CA_CHECKSUM= CATTLE_CLUSTER=true CATTLE_CLUSTER_AGENT_PORT=tcp://10.43.174.159:80 CATTLE_CLUSTER_AGENT_PORT_443_TCP=tcp://10.43.174.159:443 CATTLE_CLUSTER_AGENT_PORT_443_TCP_ADDR=10.43.174.159 CATTLE_CLUSTER_AGENT_PORT_443_TCP_PORT=443 CATTLE_CLUSTER_AGENT_PORT_443_TCP_PROTO=tcp CATTLE_CLUSTER_AGENT_PORT_80_TCP=tcp://10.43.174.159:80 CATTLE_CLUSTER_AGENT_PORT_80_TCP_ADDR=10.43.174.159 CATTLE_CLUSTER_AGENT_PORT_80_TCP_PORT=80 CATTLE_CLUSTER_AGENT_PORT_80_TCP_PROTO=tcp CATTLE_CLUSTER_AGENT_SERVICE_HOST=10.43.174.159 CATTLE_CLUSTER_AGENT_SERVICE_PORT=80 CATTLE_CLUSTER_AGENT_SERVICE_PORT_HTTP=80 CATTLE_CLUSTER_AGENT_SERVICE_PORT_HTTPS_INTERNAL=443 CATTLE_CLUSTER_REGISTRY= CATTLE_INGRESS_IP_DOMAIN=sslip.io CATTLE_INSTALL_UUID=5e01b0fd-e971-4e08-aee5-2f95434f0ce4 CATTLE_INTERNAL_ADDRESS= CATTLE_IS_RKE=false CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-75f78d47d-pfspq CATTLE_SERVER=https://rancher.cluster1.civo.vrchr.fr CATTLE_SERVER_VERSION=v2.6.5
    INFO: Using resolv.conf: search cattle-system.svc.cluster.local svc.cluster.local cluster.local nameserver 10.43.0.10 options ndots:5
    INFO: https://rancher.cluster1.civo.vrchr.fr/ping is accessible
    INFO: rancher.cluster1.civo.vrchr.fr resolves to 74.220.16.174
    time="2022-06-23T10:36:06Z" level=info msg="Listening on /tmp/log.sock"
    time="2022-06-23T10:36:06Z" level=info msg="Rancher agent version v2.6.5 is starting"
    time="2022-06-23T10:36:06Z" level=info msg="Connecting to wss://rancher.cluster1.civo.vrchr.fr/v3/connect/register with token starting with 944gl5l45h6d8fvdpn8dm4z7bwc"
    time="2022-06-23T10:36:06Z" level=info msg="Connecting to proxy" url="wss://rancher.cluster1.civo.vrchr.fr/v3/connect/register"
    
    3. Logs on the rancher server, nothing really relevant:
    2022/06/23 10:42:44 [ERROR] Error during subscribe websocket: close sent
    2022/06/23 10:42:44 [ERROR] Error during subscribe websocket: close sent
    
    opened by rverchere 0
  • Rancher instance unable to communicate with downstream cluster after upgrading the Kubernetes version to v1.22.9

    Rancher instance unable to communicate with downstream cluster after upgrading the Kubernetes version to v1.22.9

    Setup

    • Rancher version: v2.6.4
    • Browser type & version:

    Describe the bug The Rancher instance is unable to communicate with the downstream cluster after upgrading the Kubernetes version to v1.22.9.

    To Reproduce Run kubectl get nodes against the downstream cluster; it fails with: Unable to connect to the server: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

    Result kubectl get nodes fails with: Unable to connect to the server: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

    kind/bug 
    opened by rameshraya3010 6
  • deployment of portworx conflicts with neuvector enforcer pods

    deployment of portworx conflicts with neuvector enforcer pods

    • Rancher version: RKE2
    • server node x 1; worker nodes x 3

    Successfully deployed NeuVector v5.0.0 on 3 worker nodes, and all pods are running properly. As soon as I deployed Portworx, the NeuVector Enforcer pods became unstable. The state colour on the web console changes from green to orange.

    Logs captured from one of the Enforcers' pods: -

    logs.txt

    opened by cia-aic 0
  • After a clusterrole/role is created, the YAML shows the resource item in uppercase

    After a clusterrole/role is created, the YAML shows the resource item in uppercase

    Rancher Server Setup

    • Rancher version: v2.6.5
    • Installation option (Docker install/Helm Chart): docker install
      • If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): k3s; Node Setup: 1 node; Version: v1.23.6+k3s1
    • Proxy/Cert Details: self-signed

    Information about the Cluster

    • Kubernetes version: v1.23.7+rke2r2
    • Cluster Type (Local/Downstream): Local

    User Information

    • What is the role of the user logged in? Admin

    Describe the bug After a clusterrole/role is created, the YAML shows the resource item in uppercase when it should be lowercase, e.g. when Node is selected, the YAML shows Node too.

    To Reproduce

    1. Go to the create clusterrole/role page.
    2. Select any verbs and any resources, then click Save.
    3. Click Edit YAML.

    Result

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      creationTimestamp: "2022-06-23T02:31:22Z"
      managedFields:
      - apiVersion: rbac.authorization.k8s.io/v1
        fieldsType: FieldsV1
        fieldsV1:
          f:rules: {}
        manager: rancher
        operation: Update
        time: "2022-06-23T02:31:22Z"
      name: test
      resourceVersion: "2423173"
      uid: c53fa817-f06c-4670-bc07-395188ccd2fb
    rules:
    - apiGroups:
      - ""
      resources:
      - Node
      verbs:
      - list
      - update
    
    

    Expected Result The resources item must comply with Kubernetes RBAC rules, e.g. after Node is selected, the YAML and the requests sent to Rancher should use nodes.
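
    For comparison, kubectl itself emits the lowercase plural form; a quick way to see the expected shape of the rule (the role name and verbs here are only examples):

    kubectl create clusterrole test --verb=list,update --resource=nodes --dry-run=client -o yaml

    The output lists resources as nodes rather than Node.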

    opened by chrisho 1
  • Errors seen in rancher logs when group memberships are refreshed on a rancher upgrade

    Errors seen in rancher logs when group memberships are refreshed on a rancher upgrade

    Rancher Server Setup

    • Rancher version: v2.5.14 upgraded to v2.6-head
    • Installation option (Docker install/Helm Chart): Docker install
      • If Helm Chart, Kubernetes Info: k3s Cluster Type (RKE1, RKE2, k3s, EKS, etc): Node Setup: 1 node Version: v1.21.3+k3s1
    • Proxy/Cert Details: self-signed

    Information about the Cluster

    • Kubernetes version: v1.20.15
    • Cluster Type (Local/Downstream): Downstream infrastructure provider 3 worker, 1 etcd, 1 cp RKE1

    Describe the bug Refreshing the group memberships of the users returns an error when Azure AD auth is enabled.

    To Reproduce

    1. Install Rancher v2.5.14
    2. Enable Azure AD auth
    3. Log in as users from Azure AD and create a few downstream clusters
    4. Upgrade the Rancher server to the latest v2.6-head version
    5. Refresh group memberships from Users and auth >> Users >> Refresh group memberships or from Users and auth >> Groups >> Refresh group memberships

    Result Verify the rancher logs and notice the following error logs:

    2022/06/23 00:16:17 [ERROR] Error refreshing token principals, skipping: graphrbac.UsersClient#Get: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="Unknown" Message="Unknown service error" Details=[{"odata.error":{"code":"Request_BadRequest","message":{"lang":"en","value":"Bad request. Please fix the request before retrying."}}}]
    

    Expected Result No error logs

    Additional context This does not happen on a fresh install of Rancher v2.6-head with Azure AD enabled.

    kind/bug-qa team/area1 
    opened by anupama2501 0
Releases (v2.6.4-debug-1.23)
  • v2.6.6-rc1 (Jun 9, 2022)

    Images with -rc

    rancher/aks-operator v1.0.6-rc1
    rancher/rancher v2.6.6-rc1
    rancher/rancher-agent v2.6.6-rc1
    rancher/rancher-csp-adapter v0.1.0-rc8
    rancher/rancher-runtime v2.6.6-rc1
    rancher/rancher-webhook v0.2.6-rc3

    Components with -rc

    CLI_VERSION v2.6.6-rc4
    DASHBOARD_UI_VERSION v2.6.6-rc1
    UI_VERSION 2.6.6-rc1
    AKS-OPERATOR v1.0.6-rc2

    Min version components with -rc

    RANCHER_WEBHOOK_MIN_VERSION 1.0.5+up0.2.6-rc3

    RKE Kubernetes versions

    v1.18.20-rancher1-3
    v1.19.16-rancher1-5
    v1.20.15-rancher1-3
    v1.21.12-rancher1-1
    v1.22.9-rancher1-1
    v1.23.6-rancher1-1

    Source code(tar.gz)
    Source code(zip)
    rancher-components.txt(565 bytes)
    rancher-data.json(3.30 MB)
    rancher-images-digests-linux-amd64.txt(54.13 KB)
    rancher-images-digests-linux-arm64.txt(41.30 KB)
    rancher-images-digests-linux-s390x.txt(37.08 KB)
    rancher-images-digests-windows-1809.txt(1.92 KB)
    rancher-images-digests-windows-ltsc2022.txt(1.92 KB)
    rancher-images-sources.txt(27.07 KB)
    rancher-images.txt(18.92 KB)
    rancher-load-images.ps1(2.58 KB)
    rancher-load-images.sh(3.45 KB)
    rancher-mirror-to-rancher-org.ps1(786 bytes)
    rancher-mirror-to-rancher-org.sh(24.13 KB)
    rancher-namespace.yaml(62 bytes)
    rancher-rke-k8s-versions.txt(118 bytes)
    rancher-save-images.ps1(2.12 KB)
    rancher-save-images.sh(1.31 KB)
    rancher-windows-images-sources.txt(873 bytes)
    rancher-windows-images.txt(570 bytes)
    sha256sum.txt(1.31 KB)
  • v2.5.14 (May 24, 2022)

    It is important to review the Install/Upgrade Notes below before upgrading to any Rancher version.

    Security Fixes for Rancher Vulnerabilities

    This release addresses one security issue found in Rancher:

    • Fixed an issue in RKE Templates when creating a cluster with Weave as CNI: the template was not configuring a password for network traffic encryption in Weave, and therefore network traffic in the cluster was being sent unencrypted. For more information, see CVE-2022-21951.

    For more details, see the security advisories page.

    Major Bug Fixes

    • Rancher now successfully provisions all downstream cluster types with private registries with the RKE template as expected. See #37515.
    • Fixed an issue in which a new log was introduced under error level rather than debug level. Note that if conditions occur that result in these log messages, no user intervention is necessary. The buffer will simply be paused from taking any input until it is no longer full. See #36584.
    • After the YAML is directly modified in RKE clusters, support for multiple private registries now functions as expected. See #37732.

    Install/Upgrade Notes

    If you are installing Rancher for the first time, your environment must fulfill the installation requirements.

    Upgrade Requirements

    • Creating backups:
      • We strongly recommend creating a backup before upgrading Rancher. To roll back Rancher after an upgrade, you must back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to its state when a backup was created, any changes post upgrade will not be included after the restore. For more information, see the documentation on backing up Rancher.
    • Helm version:
      • Rancher install or upgrade must occur with Helm 3.2.x+ due to the changes with the latest cert-manager release. See #29213.
    • Kubernetes version:
      • The local Kubernetes cluster for the Rancher server should be upgraded to Kubernetes 1.17+ before installing Rancher 2.5+.
    • CNI requirements:
      • For Kubernetes v1.19 and newer, we recommend disabling firewalld as it has been found to be incompatible with various CNI plugins. See #28840.
      • If upgrading or installing to a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or newer, users should upgrade to RKE1 v1.19.2 or later to get Flannel version v0.13.0 that supports nf_tables. See Flannel #1317.
      • For users upgrading from >=v2.4.4 to v2.5.x with clusters where ACI CNI is enabled, note that upgrading Rancher will result in automatic cluster reconciliation. This is applicable for Kubernetes versions v1.17.16-rancher1-1, v1.17.17-rancher1-1, v1.17.17-rancher2-1, v1.18.14-rancher1-1, v1.18.15-rancher1-1, v1.18.16-rancher1-1, and v1.18.17-rancher1-1. Please refer to the workaround BEFORE upgrading to v2.5.x. See #32002.
    • Requirements for air-gapped environments:
      • For installing or upgrading Rancher in an air-gapped environment, please add the flag --no-hooks to the helm template command to skip rendering files for Helm's hooks. See #3226.
      • If using a proxy in front of an air-gapped Rancher, you must pass additional parameters to NO_PROXY. See the documentation and #2725.
    • Cert-manager version requirements:
      • Recent changes to cert-manager require an upgrade if you have a high-availability install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation for information on how to upgrade cert-manager.
    • Requirements for Docker installs:
      • When starting the Rancher Docker container, the privileged flag must be used. See the documentation.
      • When installing in an air-gapped environment, you must supply a custom registries.yaml file to the docker run command as shown in the K3s documentation. If the registry has certs, then you will need to also supply those. See #28969.
      • When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container comes up and works as expected. See #33685.
    • RKE requirements:
      • For users upgrading from <=v2.4.8 (<= RKE v1.1.6) to v2.4.12+ (RKE v1.1.13+)/v2.5.0+ (RKE v1.2.0+), please note that Edit and Save cluster (even with no changes or a trivial change like cluster name) will result in cluster reconciliation and upgrading kube-proxy on all nodes because of a change in kube-proxy binds. This only happens on the first edit, and later edits shouldn't affect the cluster. See #32216.
    • EKS requirements:
      • There was a setting for Rancher versions prior to 2.5.8 that allowed users to configure the length of refresh time in cron format: eks-refresh-cron. That setting is now deprecated and has been migrated to a standard seconds format in a new setting: eks-refresh. If previously set, the migration will happen automatically. See #31789.
    • Fleet-agent:
      • When upgrading <=v2.5.7 to >=v2.5.8, you may notice that in Apps & Marketplace, there is a fleet-agent release stuck at uninstalling. This is caused by migrating fleet-agent release name. It is safe to delete fleet-agent release as it is no longer used, and doing so should not delete the real fleet-agent deployment since it has been migrated. See #362.
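
    For the air-gapped requirement above, a hedged sketch of rendering the Rancher chart with --no-hooks; the chart archive name, hostname, and --set values shown here are placeholders, and the air gap documentation lists the full set of options:

    helm template rancher ./rancher-<version>.tgz --output-dir . --no-hooks \
      --namespace cattle-system \
      --set hostname=rancher.example.com \
      --set useBundledSystemChart=true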

    Rancher Behavior Changes

    • Upgrades and Rollbacks:
      • Rancher supports both upgrade and rollback. Please note the version you would like to upgrade or roll back to before changing the Rancher version.
      • Please be aware when upgrading to v2.3.0+, any edits to a Rancher-launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components. Plan accordingly.
      • Recent changes to cert-manager require an upgrade if you have an HA install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation on how to upgrade cert-manager.
      • Existing GKE clusters and imported clusters will continue to operate as is. Only new creations and registered clusters will use the new full lifecycle management.
      • The process to roll back Rancher has been updated for versions v2.5.0 and above. Refer to the documentation for the new instructions.
    • Important:
      • When rolling back, we are expecting you to roll back to the state at the time of your upgrade. Any changes post-upgrade would not be reflected.
    • The local cluster can no longer be turned off:
      • In older Rancher versions, the local cluster could be hidden to restrict admin access to the Rancher server's local Kubernetes cluster, but that feature has been deprecated. The local Kubernetes cluster can no longer be hidden and all admins will have access to the local cluster. If you would like to restrict permissions to the local cluster, there is a new restricted-admin role that must be used. Access to the local cluster can now be disabled by setting hide_local_cluster to true from the v3/settings API. See the documentation and #29325. For more information on upgrading from Rancher with a hidden local cluster, see the documentation.
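
    If you do want to restrict access to the local cluster, the hide_local_cluster setting mentioned above is changed through the v3/settings API. A hedged sketch using an API token; the setting ID is exposed in kebab case in the API, so verify the exact ID and payload against your server before applying it:

    curl -sk -u "token-xxxxx:<secret>" -X PUT \
      -H 'Content-Type: application/json' \
      -d '{"value":"true"}' \
      https://<rancher-server>/v3/settings/hide-local-cluster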

    Versions

    Please refer to the README for latest and stable versions.

    Please review our version documentation for more details on versioning and tagging conventions.

    Images

    • rancher/rancher:v2.5.14
    • rancher/rancher-agent:v2.5.14

    Tools

    Kubernetes Versions

    • 1.20.15 (Default)
    • 1.19.16
    • 1.18.20
    • 1.17.17

    Other Notes

    Deprecated Features

    |Feature|Justification|
    |---|---|
    |Cluster Manager - Rancher Monitoring|Monitoring in Cluster Manager UI has been replaced with a new monitoring chart available in the Apps & Marketplace in Cluster Explorer.|
    |Cluster Manager - Rancher Alerts and Notifiers|Alerting and notifiers functionality is now directly integrated with a new monitoring chart available in the Apps & Marketplace in Cluster Explorer.|
    |Cluster Manager - Rancher Logging|Functionality replaced with a new logging solution using a new logging chart available in the Apps & Marketplace in Cluster Explorer.|
    |Cluster Manager - MultiCluster Apps|Deploying to multiple clusters is now recommended to be handled with Rancher Continuous Delivery powered by Fleet available in Cluster Explorer.|
    |Cluster Manager - Kubernetes CIS 1.4 Scanning|Kubernetes CIS 1.5+ benchmark scanning is now replaced with a new scan tool deployed with a cis benchmarks chart available in the Apps & Marketplace in Cluster Explorer.|
    |Cluster Manager - Rancher Pipelines|Git-based deployment pipelines are now recommended to be handled with Rancher Continuous Delivery powered by Fleet available in Cluster Explorer.|
    |Cluster Manager - Istio v1.5|The Istio project has ended support for Istio 1.5 and has recommended all users upgrade. Newer Istio versions are now available as a chart in the Apps & Marketplace in Cluster Explorer.|
    |Cluster Manager - Provision Kubernetes v1.16 Clusters|We have ended support for Kubernetes v1.16. Cluster Manager no longer provisions new v1.16 clusters. If you already have a v1.16 cluster, it is unaffected.|

    Experimental Features

    RancherD was introduced as part of Rancher v2.5.4 through v2.5.10 as an experimental feature but is now deprecated. See #33423.

    Duplicated Features in Cluster Manager and Cluster Explorer

    • Only one version of the feature may be installed at any given time due to potentially conflicting CRDs.
    • Each feature should only be managed by the UI that it was deployed from.
    • If you have installed a feature in Cluster Manager, you must uninstall it in Cluster Manager before attempting to install the new version in Cluster Explorer dashboard.

    Cluster Explorer Feature Caveats and Upgrades

    • General:
      • Not all new features are currently installable on a hardened cluster.
      • New features are expected to be deployed using the Helm 3 CLI and not with the Rancher CLI.
    • UI Shell:
      • After closing the shell in the Rancher UI, be aware that the corresponding processes remain running indefinitely for each shell in the pod. See #16192.
    • Continuous Delivery:
      • Restricted admins are not able to create git repos from the Continuous Delivery option under Cluster Explorer; the screen will become stuck in a loading status. See #4909.
    • Rancher Backup:
      • When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location; it must continue to use the same URL.
    • Monitoring:
      • Monitoring sometimes errors on installation because it can't identify CRDs. See #29171.
    • Istio:
      • Be aware that when upgrading from Istio v1.7.4 or earlier to any later version, there may be connectivity issues. See upgrade notes and #31811.
      • Starting in v1.8.x, DNS is supported natively. This means that the additional addon component istioCoreDNS is deprecated in v1.8.x and is not supported in v1.9.x. If you are upgrading from v1.8.x to v1.9.x and you are using the istioCoreDNS addon, it is recommended that you disable it and switch to the natively supported DNS prior to upgrade. If you upgrade without disabling it, you will need to manually clean up your installation as it will not get removed automatically. See #31761 and #31265.
      • Istio v1.10 and earlier versions are now End-of-life but are required for the upgrade path in order to not skip a minor version. See #33824.

    Cluster Manager Feature Caveats and Upgrades

    • GKE:
      • Basic authentication must be explicitly disabled in GCP before upgrading a GKE cluster to 1.19+ in Rancher. See #32312.
      • When creating GKE clusters in Terraform, the labels field cannot be empty: at least one label must be set. See #32553.
    • EKS & GKE:
      • When creating EKS and GKE clusters in Terraform, string fields cannot be set to empty. See #32440.

    Known Major Issues

    • Kubernetes Cluster Distributions
      • RKE:
        • Rotating encryption keys with a custom encryption provider is not supported. See #30539.
        • After migrating from the in-tree vSphere cloud provider to the out-of-tree cloud provider, attempts to upgrade the cluster will not complete. This is due to nodes containing workloads with bound volumes before the migration failing to drain. Users will observe these nodes stuck in a draining state. Follow this workaround to continue with the upgrade. See #35102.
      • AKS:
        • Azure Container Registry-based Helm charts cannot be added in Cluster Explorer but do work in the Apps feature of Cluster Manager. Note that when using a Helm chart repository, the disableSameOriginCheck setting controls when credentials are attached to requests. See documentation and #35940 for more information.
    • Cluster Tools
      • Hardened clusters:
        • Not all cluster tools can currently be installed on a hardened cluster.
      • Monitoring:
        • Deploying Monitoring V2 on a Windows cluster with win_prefix_path set requires users to deploy Rancher Wins Upgrader to restart wins on the hosts to start collecting metrics in Prometheus. See #32535.
        • Monitoring V2 fails to scrape ingress-nginx pods on any nodes except for the one Prometheus is deployed on, if the security group used by worker nodes blocks incoming requests to port 10254. The workaround for this issue is to open up port 10254 on all hosts. See #32563.
      • Logging:
        • Logging (Cluster Explorer): Windows nodeAgents are not deleted when performing Helm upgrade after disabling Windows logging on a Windows cluster. See #32325.
      • Istio versions:
        • Istio v1.5 is not supported in air-gapped environments. Please note that the Istio project has ended support for Istio v1.5.
        • Istio v1.10 support ended on January 7th, 2022.
      • Legacy Monitoring:
        • In air-gapped setups, the generated rancher-images.txt that is used to mirror images on private registries does not contain the images required to run Legacy Monitoring, also called Monitoring V1, which is compatible with Kubernetes 1.15 clusters. If you are running Kubernetes 1.15 clusters in an air-gapped environment, and you want to either install Monitoring V1 or upgrade Monitoring V1 to the latest that is offered by Rancher for Kubernetes 1.15 clusters, you will need to take one of the following actions:
          • Upgrade the Kubernetes version so that you can use v0.2.x of the Monitoring application Helm chart.
          • Manually import the necessary images into your private registry for the Monitoring application to use.
      • Installation requirements:
        • Importing a Kubernetes v1.21 cluster might not work properly and is unsupported.
      • Backup and Restore:
        • Reinstalling Rancher 2.5.x on the same cluster may fail due to a lingering rancher.cattle.io. MutatingWebhookConfiguration object from a previous installation. Manually deleting it will resolve the issue.
      • Docker installs:
        • UI issues may occur due to a longer startup time.
        • Users may receive an error message when logging in for the first time. See #28800.
        • Users may be redirected to the login screen before a password and default view have been set. See #28798.
    Source code(tar.gz)
    Source code(zip)
    rancher-components.txt(184 bytes)
    rancher-images-digests-linux-amd64.txt(23.01 KB)
    rancher-images-digests-linux-arm64.txt(20.10 KB)
    rancher-images-digests-windows-1809.txt(1.97 KB)
    rancher-images-digests-windows-2004.txt(1.97 KB)
    rancher-images-digests-windows-20H2.txt(1.97 KB)
    rancher-images-sources.txt(11.75 KB)
    rancher-images.txt(7.58 KB)
    rancher-load-images.ps1(2.58 KB)
    rancher-load-images.sh(3.45 KB)
    rancher-mirror-to-rancher-org.ps1(763 bytes)
    rancher-mirror-to-rancher-org.sh(9.89 KB)
    rancher-namespace.yaml(62 bytes)
    rancher-rke-k8s-versions.txt(80 bytes)
    rancher-save-images.ps1(2.12 KB)
    rancher-save-images.sh(1.31 KB)
    rancher-windows-images-sources.txt(843 bytes)
    rancher-windows-images.txt(547 bytes)
    rancherd-amd64(152.05 MB)
    rancherd-amd64.tar.gz(40.89 MB)
    sha256sum.txt(1.39 KB)
  • v2.6.5 (May 12, 2022)

    Release v2.6.5

    Note: Rancher 2.6.5 contains a bug which causes downstream clusters to become unavailable via most methods. A temporary workaround is to restart the Rancher pods. #37250
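
    If Rancher was installed with the Helm chart, restarting the pods can be done with a rollout restart; the cattle-system namespace and the rancher deployment name below are the chart defaults and may differ in your setup:

    kubectl -n cattle-system rollout restart deployment/rancher
    kubectl -n cattle-system rollout status deployment/rancher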

    It is important to review the Install/Upgrade Notes below before upgrading to any Rancher version.

    In Rancher v2.6.4, the cluster-api module has been upgraded from v0.4.4 to v1.0.2, in which the apiVersion of the CAPI CRDs is upgraded from cluster.x-k8s.io/v1alpha4 to cluster.x-k8s.io/v1beta1. This causes rollbacks from Rancher v2.6.4 to any previous version of Rancher v2.6.x to fail, because the CRDs the previous version needs are no longer available once they have been upgraded to v1beta1. To avoid this, the Rancher resource cleanup script should be run before the restore or rollback is attempted. This script can be found in the rancherlabs/support-tools repo, and the usage of the script can be found in the backup-restore operator docs. In addition, when users roll back Rancher on the same cluster using the Rancher Backup and Restore app in 2.6.4+, the updated steps to create the Restore Custom Resource must be followed. See also #36803 for more details.

    Features and Enhancements

    New Integration with Rancher: NeuVector Security Platform

    Rancher 2.6.5 introduces NeuVector, the first open-source container-centric security platform, as a new integration. NeuVector can be enabled through a Helm chart that may be installed either through Apps & Marketplace or through the Cluster Tools button in Cluster Explorer in the UI. Once NeuVector is enabled, users can deploy and manage NeuVector clusters within Rancher. See the Neuvector documentation for more information on deploying and managing NeuVector through Rancher. Refer also to the Rancher documentation for more.

    • Features
      • Provides real-time compliance, visibility, and protection for critical apps and data.
      • Supports scanning of SUSE Linux operating systems and SUSE Rancher Kubernetes distributions (RKE1 and RKE2).
      • Features built-in navigation to deploy the NeuVector console using single sign-on (SSO).
    • Installation Details
      • The integration with Rancher works with NeuVector 5.0.0 or higher only at this time.
      • NeuVector container images are available for installation from the Rancher Apps & Marketplace.
      • NeuVector deployment will deploy containers into the cattle-neuvector-system namespace.
    • Installation Recommendations
      • When NeuVector is installed through the Rancher chart, users can log in through Rancher to access the NeuVector console directly. It is highly recommended to log into NeuVector directly and modify the default admin password.
      • The NeuVector vulnerability scanner image is released daily, to include the latest security advisory update. In addition, the scanner image is mirrored into the Rancher registry at rancher/mirrored-neuvector-scanner daily. It is recommended that you mirror the scanner image into your private registry, if needed, based on your schedule.
    • Support Limitations
      • Only admins and cluster owners are currently supported.
      • Fleet multi-cluster deployment is not supported.
      • NeuVector is not supported on clusters with Windows nodes.
      • NeuVector installation is not supported on hardened clusters.
      • NeuVector installation is not supported on SELinux clusters.
    • Other Limitations
      • Previous deployments from Rancher, such as from our Partners chart repository or the primary NeuVector Helm chart, must be completely removed in order to update to the new integrated feature chart. See #37447.
      • When NeuVector is deployed in an air-gapped Rancher setup, the NeuVector rancher/mirrored-neuvector-scanner and rancher/mirrored-neuvector-updater containers will not be regularly updated by default. They will only contain the database of CVEs at the time the images are pulled into your own private registry. Updating these images in your private registry on a regular cadence will ensure your vulnerability database stays up to date.
      • Container runtime is not auto-detected for different cluster types when installing the NeuVector chart. To work around this, you can specify the runtime manually. See #37481.
      • Sometimes when the controllers are not ready, the NeuVector UI is not accessible from the Rancher UI. During this time, controllers will try to restart, and it takes a few minutes for the controllers to be active. See #37400.
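
    Mirroring the scanner and updater images mentioned above into a private registry is a plain pull/tag/push; the target registry and the latest tag below are placeholders:

    docker pull rancher/mirrored-neuvector-scanner:latest
    docker tag rancher/mirrored-neuvector-scanner:latest registry.example.com/rancher/mirrored-neuvector-scanner:latest
    docker push registry.example.com/rancher/mirrored-neuvector-scanner:latest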

    New in Project Monitoring v2

    Project Monitoring v2, also known as Prometheus Federator, is now available and supported.

    • Monitoring v2 is now at parity with Monitoring v1. Please note that Monitoring v1 was deprecated in Rancher v2.5 and will be removed in an upcoming release.
    • Project Monitoring v2 introduces a custom resource called ProjectHelmCharts. This custom resource solves a problem where, if you are a project owner, you may not have permission to install/upgrade real Helm charts, but you may still need to configure monitoring across the namespaces in your project. With this, you may now create Project Monitors to enable monitoring in projects.
    • Users can deploy Monitoring v2 through Rancher's Apps & Marketplace.
    • Limitations
      • When enabling Prometheus Federator on an RKE2 cluster, the embedded Helm controller in Prometheus Federator should be disabled in favor of using the Helm controller embedded into RKE2 that is responsible for managing the state of internal Kubernetes components (since the RKE2 embedded Helm controller has a global scope in implementing HelmChart resources in the cluster). This can be provided on installing the chart by setting helmProjectOperator.helmController.enabled=false and is exposed as an option on the chart installation page's UI on Apps & Marketplace. See #37694.
      • At this time, there are no migration instructions from Monitoring v2 to Project Monitoring v2. The existing instructions for migrating from Monitoring v1 to Monitoring v2 will be updated in the next release.
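
    For the Prometheus Federator limitation above, the flag can be passed at install time. A hedged sketch assuming the chart is installed from Rancher's chart repository; the repository alias, chart name, and namespace are assumptions, and the same value is exposed as an option in the Apps & Marketplace install form:

    helm install prometheus-federator rancher-charts/prometheus-federator \
      -n cattle-monitoring-system --create-namespace \
      --set helmProjectOperator.helmController.enabled=false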

    New in RKE2 Provisioning

    RKE2 provisioning is now GA for Kubernetes v1.22 and up.

    • New S3 snapshot and restore feature added. See #34417.
    • RKE2 Windows Clusters
      • RKE2 provisioning is now available for vSphere node driver in Windows on RKE2, RKE2 custom clusters, and Project Network Isolation (PNI) for Calico. See #15 and #13.
    • Support Limitations
      • There is currently no support for Encryption Key Rotation functionality. See #35436.

    New in Rancher

    • Rancher on IBM Z is now in tech preview.

    New in the Rancher UI

    • Rancher Dashboard
      • Ability to scale workload up and down from the Workload detail page has been added. See #5114.
    • NeuVector
      • A new sidebar and button were added to allow users to connect to the NeuVector UI in Rancher. See #5019.
      • NeuVector link has been added to the Rancher UI. See #5556.
    • RKE2
      • Users may provision an RKE2 cluster using the Oracle node driver plugin. See #3263.
      • Support for OCI credentials is now available for RKE2 clusters. See #3600.
    • Known Issues
      • After installing an app from a partner chart repo, the partner chart will upgrade to feature charts if the chart also exists in the feature charts default repo. See #5655.
      • In some instances under Users and Authentication, no users are listed and clicking Create to create a new user does not display the entire form. To work around this when encountered, perform a hard refresh to be able to log back in. See Dashboard #5336.

    New in RKE1 and RKE2

    • RKE2 Windows
      • The system-agent uninstall script now has Linux and Windows feature parity. See #171.
    • RKE1 Windows
      • Note that on September 1, 2022, RKE1 Windows will be end-of-life (EOL). For more information, see #179.
    • Behavior Changes in RKE2 Clusters
      • Custom clusters in RKE2 and K3s will get to an active state before adding worker nodes. This is a behavior change from RKE1, which depends on worker nodes to schedule CoreDNS. See #37017.
      • The cluster state changes to Provisioning when a worker node is deleted in an RKE2 cluster, which is expected behavior. In RKE1, the cluster state remains Active when a snapshot is triggered or a worker node is deleted. See #36689.
      • The cluster state changes to Provisioning or Updating when a snapshot is taken in an RKE2 cluster, which is expected behavior. In RKE1, Rancher is responsible for taking the snapshots. See #36504.
    • Known Issues
      • RKE2 cluster provisioning fails when using the RHEL 8.5 golden public AWS AMI from Rancher. See #36731. To work around this issue, please see this note.
      • RKE2 snapshots display different sizes and are working as expected. See #36713.
      • The RKE2 cluster name cannot exceed 63 characters. In addition, removing such clusters from Rancher because they fail to provision causes Rancher server to crash. See #37544.
      • OPA Gatekeeper gets stuck when uninstalling on Windows clusters. Note that this applies to both RKE1 and RKE2 clusters. See #37029. A fix is scheduled for 2.6.6.
      • Any RKE2 Windows cluster created prior to v2.6.5 through the provisioning v2 framework cannot be upgraded using v2.6.5. Only RKE2 Windows clusters provisioned on v2.6.5+ can be upgraded. See #76.
      • The Windows agent for RKE2 Windows nodes does not support auto-upgrades at this time. This functionality is planned for v2.6.6. See #181.
      • Windows worker nodes in RKE2 clusters that require the use of a proxy for downstream nodes will not successfully provision. This issue exists only in the system-agent implementation for RKE2 Windows, as it is unable to use a proxy when attempting to pull images while applying a plan. See #37688.
      • The Calico CNI will not run on SLE Micro without additional configuration as the filesystem is read-only, and Calico tries to create a flexVolume with a path at /usr/libexec. Note that this affects v1.23.6+rke2r2 and v1.22.9+rke2r2 as well as earlier RKE2 versions, but this may be fixed in newer versions that can be added to Rancher via a KDM update. To view the workaround and this issue, refer to #2886.
      • When Rancher is air-gapped, Windows worker nodes in RKE2 clusters will not be able to successfully provision. This issue occurs when a custom CA certificate is required when making calls in Go on Windows when a restconfig needs to be built, and it requires a custom CA certificate, which is part of the system-agent installation flow for RKE2 Windows. See #37695.
      • Hardening guide template for RKE2 currently only supports CNI Canal. See #154.

    Major Bug Fixes

    • RKE2 node driver cluster provisions successfully in both Kubernetes v1.22.x and v1.23.x. See #36939.
    • On the Cluster Management page, snapshot-related actions such as create/restore and rotate certificate are now available for a standard user in RKE1. See Dashboard #5011.
    • Cluster usage metrics have been removed from the Rancher homepage cluster list. Users may still check usage per cluster by using the cluster dashboard. See #5430.
    • When performing a backup/restore using Helm, the command will now work as expected if Let's Encrypt is used. See #37060.
    • If you set restricted PSP as default to cluster and create a new namespace with an unrestricted PSP, the pod/deployment creation in the new project no longer fails. See #37443.
    • A warning message is now present as expected to state that the monitoring and logging apps must be upgraded to deploy on newly added Windows nodes. See #5530.
    • After installing Monitoring v2, the rancher-monitoring-crd now successfully installs. See #35744.
    • API audit logging was fixed to log to either the sidecar or the hostPath, but not both. See #26192.

    Install/Upgrade Notes

    • If you are installing Rancher for the first time, your environment must fulfill the installation requirements.
    • The namespace where the local Fleet agent runs has been changed to cattle-fleet-local-system. This change does not impact GitOps workflows.

    Upgrade Requirements

    • Creating backups: We strongly recommend creating a backup before upgrading Rancher. To roll back Rancher after an upgrade, you must back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to its state when a backup was created, any changes post upgrade will not be included after the restore. For more information, see the documentation on backing up Rancher.
    • Helm version: Rancher install or upgrade must occur with Helm 3.2.x+ due to the changes with the latest cert-manager release. See #29213.
    • Kubernetes version:
      • The local Kubernetes cluster for the Rancher server should be upgraded to Kubernetes 1.18+ before installing Rancher 2.6+.
      • When using Kubernetes v1.21 with Windows Server 20H2 Standard Core, the patch "2019-08 Servicing Stack Update for Windows Server" must be installed on the node. See #72.
    • CNI requirements:
      • For Kubernetes v1.19 and newer, we recommend disabling firewalld as it has been found to be incompatible with various CNI plugins. See #28840.
      • If upgrading or installing to a Linux distribution which uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or newer, users should upgrade to RKE1 v1.19.2 or later to get Flannel version v0.13.0 that supports nf_tables. See Flannel #1317.
      • For users upgrading from >=v2.4.4 to v2.5.x with clusters where ACI CNI is enabled, note that upgrading Rancher will result in automatic cluster reconciliation. This is applicable for Kubernetes versions v1.17.16-rancher1-1, v1.17.17-rancher1-1, v1.17.17-rancher2-1, v1.18.14-rancher1-1, v1.18.15-rancher1-1, v1.18.16-rancher1-1, and v1.18.17-rancher1-1. Please refer to the workaround BEFORE upgrading to v2.5.x. See #32002.
    • Requirements for air gapped environments:
      • For installing or upgrading Rancher in an air gapped environment, please add the flag --no-hooks to the helm template command to skip rendering files for Helm's hooks. See #3226.
      • If using a proxy in front of an air gapped Rancher, you must pass additional parameters to NO_PROXY. See the documentation and related issue #2725.
    • Cert-manager version requirements: Recent changes to cert-manager require an upgrade if you have a high-availability install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation on how to upgrade cert-manager. See documentation.
    • Requirements for Docker installs:
      • When starting the Rancher Docker container, the privileged flag must be used. See documentation.
      • When installing in an air gapped environment, you must supply a custom registries.yaml file to the docker run command as shown in the K3s documentation. If the registry has certificates, then you will need to also supply those. See #28969.
      • When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container comes up and is working as expected. See #33685.
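
    For the air-gapped Docker install note above, the registries.yaml passed to docker run follows the K3s registries format. A hedged sketch; the registry hostname is a placeholder and the in-container mount path should be checked against the linked documentation:

    cat > registries.yaml <<'EOF'
    mirrors:
      docker.io:
        endpoint:
          - "https://registry.example.com:5000"
    EOF

    sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged \
      -v $(pwd)/registries.yaml:/etc/rancher/k3s/registries.yaml \
      rancher/rancher:v2.6.5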

    Rancher Behavior Changes

    • Cert-Manager:
      • Rancher now supports cert-manager versions 1.6.2 and 1.7.1. We recommend v1.7.x because v1.6.x will reach end-of-life on March 30, 2022. To read more, see the documentation.
      • When upgrading Rancher and cert-manager, you will need to use Option B: Reinstalling Rancher and cert-manager from the Rancher docs.
      • There are several versions of cert-manager which, due to their backwards incompatibility, are not recommended for use with Rancher. You can read more about which versions are affected by this issue in the cert-manager docs. As a result, only versions 1.6.2 and 1.7.1 are recommended for use at this time.
      • For instructions on upgrading cert-manager from version 1.5 to 1.6, see the relevant cert-manager docs.
      • For instructions on upgrading cert-manager from version 1.6 to 1.7, see the relevant cert-manager docs.
    • Readiness and Liveness Check:
      • Users can now configure the Readiness Check and Liveness Check of coredns-autoscaler. See #24939.
    • Legacy Features:
      • Users upgrading from Rancher <=v2.5.x will automatically have the --legacy feature flag enabled. New installations that require legacy features need to enable the flag on install or through the UI.
      • When workloads created using the legacy UI are deleted, the corresponding services are not automatically deleted. Users will need to manually remove these services. A message will be displayed notifying the user to manually delete the associated services when such a workload is deleted. See #34639.
    • Library and Helm3-Library Catalogs:
      • Users will no longer be able to launch charts from the library and helm3-library catalogs, which are available through the legacy apps and multi-cluster-apps pages. Any existing legacy app that was deployed from a previous Rancher version will continue to be able to edit its currently deployed chart. Note that the Longhorn app will still be available from the library for new installs but will be removed in the next Rancher version. All users are recommended to deploy Longhorn from the Apps & Marketplace section of the Rancher UI instead of through the Legacy Apps pages.
    • Local Cluster:
      • In older Rancher versions, the local cluster could be hidden to restrict admin access to the Rancher server's local Kubernetes cluster, but that feature has been deprecated. The local Kubernetes cluster can no longer be hidden and all admins will have access to the local cluster. If you would like to restrict permissions to the local cluster, there is a new restricted-admin role that must be used. The access to local cluster can now be disabled by setting hide_local_cluster to true from the v3/settings API. See the documentation and #29325. For more information on upgrading from Rancher with a hidden local cluster, see the documentation.
    • Upgrading the Rancher UI:
      • After upgrading to v2.6+, users will be automatically logged out of the old Rancher UI and must log in again to access Rancher and the new UI. See #34004.
    • Fleet:
      • For users upgrading from v2.5.x to v2.6.x, note that Fleet will be enabled by default as it is required for operation in v2.6+. This will occur even if Fleet was disabled in v2.5.x. During the upgrade process, users will observe restarts of the rancher pods, which is expected. See #31044 and #32688.
      • Starting with Rancher v2.6.1, Fleet allows for two agents in the local cluster for scenarios where "Fleet is managing Fleet". The true local agent runs in the new cattle-fleet-local-system namespace. The agent downstream from another Fleet management cluster runs in cattle-fleet-system, similar to the agent pure downstream clusters. See #34716 and #531.
    • Editing and Saving Clusters:
      • For users upgrading from <=v2.4.8 (<= RKE v1.1.6) to v2.4.12+ (RKE v1.1.13+)/v2.5.0+ (RKE v1.2.0+), please note that Edit and save cluster (even with no changes or a trivial change like cluster name) will result in cluster reconciliation and upgrading kube-proxy on all nodes because of a change in kube-proxy binds. This only happens on the first edit and later edits shouldn't affect the cluster. See #32216.
    • EKS Cluster:
      • There is currently a setting allowing users to configure the length of refresh time in cron format: eks-refresh-cron. That setting is now deprecated and has been migrated to a standard seconds format in a new setting: eks-refresh. If previously set, the migration will happen automatically. See #31789.
    • System Components:
      • Please be aware that upon an upgrade to v2.3.0+, any edits to a Rancher launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components. Plan accordingly.
    • GKE and AKS Clusters:
      • Existing GKE and AKS clusters and imported clusters will continue to operate as-is. Only new creations and registered clusters will use the new full lifecycle management.
    • Rolling Back Rancher:
      • The process to roll back Rancher has been updated for versions v2.5.0 and above. New steps require scaling Rancher down to 0 replica before restoring the backup. Please refer to the documentation for the new instructions.
    • RBAC:
      • Due to the change of the provisioning framework, the Manage Nodes role will no longer be able to scale up/down machine pools. The user would need the ability to edit the cluster to manage the machine pools. See #34474.
    • Azure Cloud Provider for RKE2:
      • For RKE2, the process to set up an Azure cloud provider is different than for RKE1 clusters. Users should refer to the documentation for the new instructions. See #34367 for original issue.
    • Machines vs. Kube Nodes:
      • In previous versions, Rancher only displayed Nodes, but with v2.6, there are the concepts of machines and kube nodes. Kube nodes are the Kubernetes node objects and are only accessible if the Kubernetes API server is running and the cluster is active. Machines are the cluster's machine object which defines what the cluster should be running.
    • Rancher's External IP Webhook:
      • In v1.22, upstream Kubernetes has enabled the admission controller to reject usage of external IPs. As such, the rancher-external-ip-webhook chart that was created as a workaround is no longer needed, and support for it is now capped to Kubernetes v1.21 and below. See #33893.
    • Memory Limit for Legacy Monitoring:
      • The default value of the Prometheus memory limit in the legacy Rancher UI is now 2000Mi to prevent the pod from restarting due to an OOMKill. See #34850.
    • Memory Limit for Monitoring:
      • The default value of the Prometheus memory limit in the new Rancher UI is now 3000Mi to prevent the pod from restarting due to an OOMKill. See #34850.

    Versions

    Please refer to the README for latest and stable versions.

    Please review our version documentation for more details on versioning and tagging conventions.

    Images

    • rancher/rancher:v2.6.5

    Tools

    Kubernetes Versions

    • v1.23.6 (Default)
    • v1.22.9
    • v1.21.12
    • v1.20.15
    • v1.19.16
    • v1.18.20

    Rancher Helm Chart Versions

    Starting in 2.6.0, many of the Rancher Helm charts available in the Apps & Marketplace will start with a major version of 100. This was done to avoid simultaneous upstream changes and Rancher changes from causing conflicting version increments. This also brings us into compliance with semver, which is a requirement for newer versions of Helm. You can now see the upstream version of a chart in the build metadata, for example: 100.0.0+up2.1.0. See #32294.

    Other Notes

    Feature Flags

    Feature flags introduced in 2.6.0 and the Harvester feature flag introduced in 2.6.1 are listed below for reference:

    Feature Flag | Default Value | Description
    ---|---|---
    harvester | true | Used to manage access to the Harvester list page, where users can navigate directly to Harvester host clusters and have the ability to import them.
    fleet | true | The previous fleet feature flag is now required to be enabled, as the Fleet capabilities are leveraged within the new provisioning framework. If you had this feature flag disabled in earlier versions, upon upgrading to Rancher the flag will automatically be enabled.
    gitops | true | If you want to hide the "Continuous Delivery" feature from your users, please use the newly introduced gitops feature flag, which hides the ability to leverage Continuous Delivery.
    rke2 | true | Used to enable the ability to provision RKE2 clusters. By default, this feature flag is enabled, which allows users to attempt to provision these types of clusters.
    legacy | false for new installs, true for upgrades | There is a set of features from previous versions that are slowly being phased out of Rancher in favor of newer iterations. This is a mix of deprecated features as well as features that will eventually be moved to newer variations in Rancher. By default, this feature flag is disabled for new installations. If you are upgrading from a previous version, this feature flag will be enabled.
    token-hashing | false | Used to enable the new token-hashing feature. Once enabled, existing tokens will be hashed and all new tokens will be hashed automatically using the SHA256 algorithm. Once a token is hashed it cannot be undone. Once this feature flag is enabled it cannot be disabled.
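
    For reference, feature flags can be toggled from the UI or via the API. The following is a minimal sketch, assuming kubectl is pointed at the Rancher local cluster and that flags are exposed as cluster-scoped features.management.cattle.io objects with a spec.value field (structure assumed here):

      # List feature flags and their current values:
      kubectl get features.management.cattle.io
      # Example: enable the legacy feature flag.
      kubectl patch features.management.cattle.io legacy --type=merge -p '{"spec":{"value":true}}'
      # Note: per the table above, token-hashing cannot be disabled once it has been enabled.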

    Experimental Features

    • Dual-stack and IPv6-only support for RKE1 clusters using the Flannel CNI will be experimental starting in v1.23.x. See the upstream Kubernetes docs. Dual-stack is not currently supported on Windows. See #165.

    • RancherD was introduced as part of Rancher v2.5.4 through v2.5.10 as an experimental feature but is now deprecated. See #33423.

    Legacy Features

    Legacy features are hidden behind the legacy feature flag and cover various features and functionality of Rancher that were available in previous releases. These are features that Rancher does not intend for new users to consume, but if you have been using past versions of Rancher, you may still want to use this functionality.

    When you first start 2.6, there is a card on the Home page that outlines where these features are now located.

    The deprecated features from v2.5 are now behind the legacy feature flag. Please review our deprecation policy for questions.

    The following legacy features are no longer supported on Kubernetes v1.21+ clusters:

    • Logging
    • CIS Scans
    • Istio 1.5
    • Pipelines

    The following legacy feature is no longer supported past Kubernetes v1.21 clusters:

    • Monitoring v1

    Known Major Issues

    • Kubernetes Cluster Distributions:
      • RKE:
        • Rotating encryption keys with a custom encryption provider is not supported. See #30539.
      • RKE2:
        • Amazon ECR Private Registries are not functional. See #33920.
        • When provisioning using a RKE2 cluster template, the rootSize for AWS EC2 provisioners does not currently take an integer when it should, and an error is thrown. To work around this issue, wrap the EC2 rootSize in quotes. See Dashboard #3689.
        • RKE2 node driver cluster gets stuck in provisioning state after an upgrade to v2.6.4 and rollback to v2.6.3. See #36859.
        • RKE2 node driver cluster has its nodes redeployed when upgrading Rancher from v2.6.3 to v2.6.4. See #36627.
        • The communication between the ingress controller and the pods doesn't work when you create an RKE2 cluster with Cilium as the CNI and activate project network isolation. See documentation and #34275.
      • RKE2 - Windows:
        • In v2.6.5, v1.21.x of RKE2 will remain experimental and unsupported for RKE2 Windows. End users should not use v1.21.x of RKE2 for any RKE2 cluster that will have Windows worker nodes. This is due to an upstream Calico bug that was not backported to the minor version of Calico (3.19.x) that is present in v1.21.x of RKE2. See #131.
        • CSI Proxy for Windows will not work in an air-gapped environment.
        • NodePorts do not work on Windows Server 2022 in RKE2 clusters due to a Windows kernel bug. See #159.
        • When upgrading Windows nodes in RKE2 clusters via the Rancher UI, Windows worker nodes will require a reboot after the upgrade is completed. See #37645.
      • AKS:
        • When editing or upgrading the AKS cluster, do not make changes from the Azure console or CLI at the same time. These actions must be done separately. See #33561.
        • Windows node pools are not currently supported. See #32586.
        • Azure Container Registry-based Helm charts cannot be added in Cluster Explorer, but do work in the Apps feature of Cluster Manager. Note that when using a Helm chart repository, the disableSameOriginCheck setting controls when credentials are attached to requests. See documentation and #34584 for more information.
      • GKE:
        • Basic authentication must be explicitly disabled in GCP before upgrading a GKE cluster to 1.19+ in Rancher. See #32312.
      • AWS:
        • On an SELinux-enabled RHEL 8.4 AWS AMI, Kubernetes v1.22 fails to provision on AWS. Because Rancher will not install RPMs on the nodes, users may work around this issue either by using an AMI with the rancher-selinux package already installed, or by installing the package via cloud-init. Users will encounter this issue on upgrade to v1.22 as well. When upgrading to v1.22, users must manually upgrade/install the rancher-selinux package on all the nodes in the cluster, then upgrade the Kubernetes version. See #36509.
    • Infrastructures:
      • vSphere:
        • PersistentVolumes are unable to mount to custom vSphere hardened clusters using CSI charts. See #35173.
    • Harvester:
      • Upgrades from Harvester v0.3.0 are not supported.
      • Deploying Fleet to Harvester clusters is not yet supported. Clusters, whether Harvester or non-Harvester, imported using the Virtualization Management page will result in the cluster not being listed on the Continuous Delivery page. See #35049.
    • Cluster Tools:
      • Fleet:
        • Multiple fleet-agent pods may be created and deleted during initial downstream agent deployment, rather than just one. This resolves itself quickly, but is unintentional behavior. See #33293.
      • Hardened clusters:
        • Not all cluster tools can currently be installed on a hardened cluster.
      • Rancher Backup:
        • When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.
        • When running a newer version of the rancher-backup app to restore a backup made with an older version of the app, the resourceSet named rancher-resource-set will be restored to an older version that might be different from the one defined in the current running rancher-backup app. The workaround is to edit the rancher-backup app to trigger a reconciliation. See #34495.
        • Because Kubernetes v1.22 drops the apiVersion apiextensions.k8s.io/v1beta1, trying to restore an existing backup file into a v1.22 cluster will fail because the backup file contains CRDs with the apiVersion v1beta1. There are two options to work around this issue: update the default resourceSet to collect the CRDs with the apiVersion v1, or update the default resourceSet and the client to use the new APIs internally. See documentation and #34154. A minimal kubectl sketch of editing the default resourceSet follows this list.
      • Monitoring:
        • Deploying Monitoring on a Windows cluster with win_prefix_path set requires users to deploy Rancher Wins Upgrader to restart wins on the hosts to start collecting metrics in Prometheus. See #32535.
      • Logging:
        • Windows nodeAgents are not deleted when performing helm upgrade after disabling Windows logging on a Windows cluster. See #32325.
      • Istio Versions:
        • Istio 1.12 and below do not work on Kubernetes 1.23 clusters. To use the Istio charts, please do not update to Kubernetes 1.23 until the next release of the charts.
        • Istio 1.5 is not supported in air-gapped environments. Please note that the Istio project has ended support for Istio 1.5.
        • Istio 1.9 support ended on October 8th, 2021.
        • The Kiali dashboard bundled with 100.0.0+up1.10.2 errors on a page refresh. Instead of refreshing the page when needed, simply access Kiali using the dashboard link again. Everything else works in Kiali as expected, including the graph auto-refresh. See #33739.
        • A failed calling webhook "validation.istio.io" error will occur in air gapped environments if the istiod-istio-system ValidatingWebhookConfiguration exists, and you attempt a fresh install of Istio 1.11.x and higher. To work around this issue, run the command kubectl delete validatingwebhookconfiguration istiod-istio-system and attempt your install again. See #35742.
        • Deprecated resources are not automatically removed and will cause errors during upgrades. Manual steps must be taken to migrate and/or cleanup resources before an upgrade is performed. See #34699.
        • Applications injecting Istio sidecars fail on SELinux-enabled RHEL 8.4 clusters. A temporary workaround for this issue is to run the following command on each cluster node before creating a cluster: mkdir -p /var/run/istio-cni && semanage fcontext -a -t container_file_t /var/run/istio-cni && restorecon -v /var/run/istio-cni. See #33291.
      • Legacy Monitoring:
        • The Grafana instance inside Cluster Manager's Monitoring is not compatible with Kubernetes v1.21. To work around this issue, disable the BoundServiceAccountTokenVolume feature in Kubernetes v1.21 and above. Note that this workaround will be deprecated in Kubernetes v1.22. See #33465.
        • In air gapped setups, the generated rancher-images.txt that is used to mirror images on private registries does not contain the images required to run Legacy Monitoring which is compatible with Kubernetes v1.15 clusters. If you are running Kubernetes v1.15 clusters in an air gapped environment, and you want to either install Legacy Monitoring or upgrade Legacy Monitoring to the latest that is offered by Rancher for Kubernetes v1.15 clusters, you will need to take one of the following actions:
          • Upgrade the Kubernetes version so that you can use v0.2.x of the Monitoring application Helm chart.
          • Manually import the necessary images into your private registry for the Monitoring application to use.
        • When deploying any downstream cluster, Rancher logs errors that seem to be related to Monitoring even when Monitoring is not installed onto either cluster; specifically, Rancher logs that it failed to subscribe to the Prometheus CRs in the cluster because it is unable to get the resource prometheus.meta.k8s.io. These logs appear in a similar fashion for other Prometheus CRs (namely Alertmanager, ServiceMonitors, and PrometheusRules), but do not seem to cause any other major impact on functionality. See #32978.
        • Legacy Monitoring does not support Kubernetes v1.22 due to the feature-gates flag no longer being supported. See #35574.
        • After performing an upgrade to Rancher v2.6.3 from v2.6.2, the Legacy Monitoring custom metric endpoint stops working. To work around this issue, delete the service that is being targeted by the servicemonitor and allow it to be recreated; this will reload the pods that need to be targeted on a service sync. See #35790.
    • Docker Installations:
      • UI issues may occur due to a longer startup time. Users may receive an error message when launching Docker for the first time (see #28800), and may be directed to the username/password screen when accessing the UI after a Docker install of Rancher (see #28798).
      • On a Docker install upgrade and rollback, Rancher logs will repeatedly display the messages "Updating workload ingress-nginx/nginx-ingress-controller" and "Updating service frontend with public endpoints". Ingresses and clusters are functional and active, and logs resolve eventually. See #35798.
      • Rancher single node won't start on Apple M1 devices with Docker Desktop 4.3.0 or newer. See #35930.
    • Rancher UI:
      • Deployment securityContext section is missing when a new workload is created. This prevents pods from starting when Pod Security Policy Support is enabled. See #4815.
    • Legacy UI:
      • When using the Rancher v2.6 UI to add a new port of type ClusterIP to an existing Deployment created using the legacy UI, the new port will not be created upon saving. To work around this issue, repeat the procedure to add the port again. Users will notice the Service Type field will display as Do not create a service. Change this to ClusterIP and upon saving, the new port will be created successfully during this subsequent attempt. See #4280.
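
    For the backup/restore CRD apiVersion issue noted above, the following is a minimal sketch of the first workaround (updating the default resourceSet), assuming the rancher-backup operator exposes it as a resourcesets.resources.cattle.io object named rancher-resource-set (names as referenced above):

      # Inspect the default resourceSet shipped with rancher-backup:
      kubectl get resourcesets.resources.cattle.io rancher-resource-set -o yaml
      # Edit its resourceSelectors so CRDs are collected at apiextensions.k8s.io/v1 instead of v1beta1:
      kubectl edit resourcesets.resources.cattle.io rancher-resource-set
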
    Source code(tar.gz)
    Source code(zip)
    rancher-components.txt(222 bytes)
    rancher-data.json(3.30 MB)
    rancher-images-digests-linux-amd64.txt(53.85 KB)
    rancher-images-digests-linux-arm64.txt(41.09 KB)
    rancher-images-digests-linux-s390x.txt(36.93 KB)
    rancher-images-digests-windows-1809.txt(1.92 KB)
    rancher-images-digests-windows-20H2.txt(1.92 KB)
    rancher-images-digests-windows-ltsc2022.txt(1.85 KB)
    rancher-images-sources.txt(26.86 KB)
    rancher-images.txt(18.79 KB)
    rancher-load-images.ps1(2.65 KB)
    rancher-load-images.sh(3.45 KB)
    rancher-mirror-to-rancher-org.ps1(782 bytes)
    rancher-mirror-to-rancher-org.sh(23.98 KB)
    rancher-namespace.yaml(62 bytes)
    rancher-rke-k8s-versions.txt(118 bytes)
    rancher-save-images.ps1(2.18 KB)
    rancher-save-images.sh(1.31 KB)
    rancher-windows-images-sources.txt(869 bytes)
    rancher-windows-images.txt(566 bytes)
    sha256sum.txt(1.31 KB)
  • v2.6.5-rc10(May 10, 2022)

    Images with -rc

    • rancher/backup-restore-operator v2.1.2-rc4
    • rancher/cis-operator v1.0.8-rc2
    • rancher/helm-project-operator v0.0.1-rc4
    • rancher/prometheus-federator v0.0.1-rc3
    • rancher/rancher v2.6.5-rc10
    • rancher/rancher-agent v2.6.5-rc10
    • rancher/rancher-runtime v2.6.5-rc10
    • rancher/security-scan v0.2.7-rc1

    Components with -rc

    • CLI_VERSION v2.6.5-rc1
    • DASHBOARD_UI_VERSION v2.6.5-rc9
    • UI_VERSION 2.6.5-rc9
    • AKS-OPERATOR v1.0.5-rc1
    • RKE v1.3.11-rc3

    RKE Kubernetes versions

    v1.18.20-rancher1-3 v1.19.16-rancher1-5 v1.20.15-rancher1-3 v1.21.12-rancher1-1 v1.22.9-rancher1-1 v1.23.6-rancher1-1

    Source code(tar.gz)
    Source code(zip)
    rancher-components.txt(587 bytes)
    rancher-data.json(3.30 MB)
    rancher-images-digests-linux-amd64.txt(53.77 KB)
    rancher-images-digests-linux-arm64.txt(41.01 KB)
    rancher-images-digests-linux-s390x.txt(36.92 KB)
    rancher-images-digests-windows-1809.txt(1.92 KB)
    rancher-images-digests-windows-20H2.txt(1.92 KB)
    rancher-images-digests-windows-ltsc2022.txt(1.92 KB)
    rancher-images-sources.txt(26.64 KB)
    rancher-images.txt(18.79 KB)
    rancher-load-images.ps1(2.65 KB)
    rancher-load-images.sh(3.45 KB)
    rancher-mirror-to-rancher-org.ps1(787 bytes)
    rancher-mirror-to-rancher-org.sh(23.97 KB)
    rancher-namespace.yaml(62 bytes)
    rancher-rke-k8s-versions.txt(118 bytes)
    rancher-save-images.ps1(2.18 KB)
    rancher-save-images.sh(1.31 KB)
    rancher-windows-images-sources.txt(874 bytes)
    rancher-windows-images.txt(571 bytes)
    sha256sum.txt(1.31 KB)
  • v2.6.5-rc8(May 6, 2022)

    Images with -rc

    • rancher/aks-operator v1.0.5-rc1
    • rancher/backup-restore-operator v2.1.2-rc3
    • rancher/cis-operator v1.0.8-rc2
    • rancher/rancher v2.6.5-rc8
    • rancher/rancher-agent v2.6.5-rc8
    • rancher/rancher-runtime v2.6.5-rc8
    • rancher/rke2-runtime v1.22.9-rc1-rke2r2
    • rancher/rke2-runtime v1.22.9-rc1-rke2r2-windows-amd64
    • rancher/rke2-runtime v1.23.6-rc1-rke2r2
    • rancher/rke2-runtime v1.23.6-rc1-rke2r2-windows-amd64
    • rancher/rke2-upgrade v1.22.9-rc1-rke2r2
    • rancher/rke2-upgrade v1.23.6-rc1-rke2r2
    • rancher/security-scan v0.2.7-rc1
    • rancher/system-agent-installer-rke2 v1.21.12-rc1-rke2r2
    • rancher/system-agent-installer-rke2 v1.22.9-rc1-rke2r2
    • rancher/system-agent-installer-rke2 v1.23.6-rc1-rke2r2

    Components with -rc

    • CLI_VERSION v2.6.5-rc1
    • DASHBOARD_UI_VERSION v2.6.5-rc7
    • UI_VERSION 2.6.5-rc7
    • AKS-OPERATOR v1.0.5-rc1
    • RKE v1.3.11-rc2

    RKE Kubernetes versions

    v1.18.20-rancher1-3 v1.19.16-rancher1-5 v1.20.15-rancher1-3 v1.21.12-rancher1-1 v1.22.9-rancher1-1 v1.23.6-rancher1-1

    Source code(tar.gz)
    Source code(zip)
    rancher-components.txt(969 bytes)
    rancher-data.json(3.30 MB)
    rancher-images-digests-linux-amd64.txt(54.65 KB)
    rancher-images-digests-linux-arm64.txt(41.89 KB)
    rancher-images-digests-linux-s390x.txt(37.25 KB)
    rancher-images-digests-windows-1809.txt(1.92 KB)
    rancher-images-digests-windows-20H2.txt(1.92 KB)
    rancher-images-digests-windows-ltsc2022.txt(1.92 KB)
    rancher-images-sources.txt(26.98 KB)
    rancher-images.txt(19.05 KB)
    rancher-load-images.ps1(2.65 KB)
    rancher-load-images.sh(3.45 KB)
    rancher-mirror-to-rancher-org.ps1(786 bytes)
    rancher-mirror-to-rancher-org.sh(24.31 KB)
    rancher-namespace.yaml(62 bytes)
    rancher-rke-k8s-versions.txt(118 bytes)
    rancher-save-images.ps1(2.18 KB)
    rancher-save-images.sh(1.31 KB)
    rancher-windows-images-sources.txt(873 bytes)
    rancher-windows-images.txt(570 bytes)
    sha256sum.txt(1.31 KB)
  • v2.6.5-rc7(May 6, 2022)

    Images with -rc

    • rancher/aks-operator v1.0.5-rc1
    • rancher/backup-restore-operator v2.1.2-rc3
    • rancher/cis-operator v1.0.8-rc2
    • rancher/rancher v2.6.5-rc7
    • rancher/rancher-agent v2.6.5-rc7
    • rancher/rancher-runtime v2.6.5-rc7
    • rancher/rke2-runtime v1.22.9-rc1-rke2r2
    • rancher/rke2-runtime v1.22.9-rc1-rke2r2-windows-amd64
    • rancher/rke2-runtime v1.23.6-rc1-rke2r2
    • rancher/rke2-runtime v1.23.6-rc1-rke2r2-windows-amd64
    • rancher/rke2-upgrade v1.22.9-rc1-rke2r2
    • rancher/rke2-upgrade v1.23.6-rc1-rke2r2
    • rancher/security-scan v0.2.7-rc1
    • rancher/system-agent-installer-rke2 v1.21.12-rc1-rke2r2
    • rancher/system-agent-installer-rke2 v1.22.9-rc1-rke2r2
    • rancher/system-agent-installer-rke2 v1.23.6-rc1-rke2r2

    Components with -rc

    • CLI_VERSION v2.6.5-rc1
    • DASHBOARD_UI_VERSION v2.6.5-rc7
    • UI_VERSION 2.6.5-rc7
    • AKS-OPERATOR v1.0.5-rc1
    • RKE v1.3.11-rc2

    RKE Kubernetes versions

    v1.18.20-rancher1-3 v1.19.16-rancher1-5 v1.20.15-rancher1-3 v1.21.12-rancher1-1 v1.22.9-rancher1-1 v1.23.6-rancher1-1

    Source code(tar.gz)
    Source code(zip)
    rancher-components.txt(969 bytes)
    rancher-data.json(3.30 MB)
    rancher-images-sources.txt(26.98 KB)
    rancher-images.txt(19.05 KB)
    rancher-load-images.ps1(2.65 KB)
    rancher-load-images.sh(3.45 KB)
    rancher-mirror-to-rancher-org.ps1(786 bytes)
    rancher-mirror-to-rancher-org.sh(24.31 KB)
    rancher-namespace.yaml(62 bytes)
    rancher-rke-k8s-versions.txt(118 bytes)
    rancher-save-images.ps1(2.18 KB)
    rancher-save-images.sh(1.31 KB)
    rancher-windows-images-sources.txt(873 bytes)
    rancher-windows-images.txt(570 bytes)
    sha256sum.txt(1.31 KB)
  • v2.6.5-rc5(Apr 28, 2022)

    Images with -rc

    • rancher/aks-operator v1.0.5-rc1
    • rancher/backup-restore-operator v2.1.2-rc3
    • rancher/k3s-upgrade v1.21.12-rc3-k3s1
    • rancher/k3s-upgrade v1.22.9-rc4-k3s1
    • rancher/k3s-upgrade v1.23.6-rc3-k3s1
    • rancher/rancher v2.6.5-rc5
    • rancher/rancher-agent v2.6.5-rc5
    • rancher/rancher-runtime v2.6.5-rc5
    • rancher/rke2-runtime v1.21.12-rc4-rke2r1
    • rancher/rke2-runtime v1.22.9-rc4-rke2r1
    • rancher/rke2-runtime v1.23.6-rc4-rke2r1
    • rancher/rke2-upgrade v1.21.12-rc4-rke2r1
    • rancher/rke2-upgrade v1.22.9-rc4-rke2r1
    • rancher/rke2-upgrade v1.23.6-rc4-rke2r1
    • rancher/system-agent-installer-k3s v1.21.12-rc3-k3s1
    • rancher/system-agent-installer-k3s v1.22.9-rc4-k3s1
    • rancher/system-agent-installer-k3s v1.23.6-rc3-k3s1
    • rancher/system-agent-installer-rke2 v1.21.12-rc4-rke2r1
    • rancher/system-agent-installer-rke2 v1.22.9-rc4-rke2r1
    • rancher/system-agent-installer-rke2 v1.23.6-rc4-rke2r1

    Components with -rc

    • CLI_VERSION v2.6.5-rc1
    • DASHBOARD_UI_VERSION v2.6.5-rc4
    • UI_VERSION 2.6.5-rc4
    • AKS-OPERATOR v1.0.5-rc1
    • RKE v1.3.10-rc7

    RKE Kubernetes versions

    v1.18.20-rancher1-3 v1.19.16-rancher1-5 v1.20.15-rancher1-3 v1.21.12-rancher1-1 v1.22.9-rancher1-1 v1.23.6-rancher1-1

    Source code(tar.gz)
    Source code(zip)
    rancher-components.txt(1.12 KB)
    rancher-data.json(3.29 MB)
    rancher-images-digests-linux-amd64.txt(59.50 KB)
    rancher-images-digests-linux-arm64.txt(43.63 KB)
    rancher-images-digests-linux-s390x.txt(38.57 KB)
    rancher-images-digests-windows-1809.txt(1.92 KB)
    rancher-images-digests-windows-20H2.txt(1.92 KB)
    rancher-images-digests-windows-ltsc2022.txt(1.92 KB)
    rancher-images-sources.txt(28.74 KB)
    rancher-images.txt(20.73 KB)
    rancher-load-images.ps1(2.65 KB)
    rancher-load-images.sh(3.45 KB)
    rancher-mirror-to-rancher-org.ps1(786 bytes)
    rancher-mirror-to-rancher-org.sh(26.42 KB)
    rancher-namespace.yaml(62 bytes)
    rancher-rke-k8s-versions.txt(118 bytes)
    rancher-save-images.ps1(2.18 KB)
    rancher-save-images.sh(1.31 KB)
    rancher-windows-images-sources.txt(873 bytes)
    rancher-windows-images.txt(570 bytes)
    sha256sum.txt(1.31 KB)
  • v2.6.5-rc4(Apr 28, 2022)

    Images with -rc

    • rancher/aks-operator v1.0.5-rc1
    • rancher/backup-restore-operator v2.1.2-rc2
    • rancher/k3s-upgrade v1.21.12-rc3-k3s1
    • rancher/k3s-upgrade v1.22.9-rc4-k3s1
    • rancher/k3s-upgrade v1.23.6-rc3-k3s1
    • rancher/rancher v2.6.5-rc4
    • rancher/rancher-agent v2.6.5-rc4
    • rancher/rancher-runtime v2.6.5-rc4
    • rancher/rke2-runtime v1.21.12-rc4-rke2r1
    • rancher/rke2-runtime v1.22.9-rc4-rke2r1
    • rancher/rke2-runtime v1.23.6-rc4-rke2r1
    • rancher/rke2-upgrade v1.21.12-rc4-rke2r1
    • rancher/rke2-upgrade v1.22.9-rc4-rke2r1
    • rancher/rke2-upgrade v1.23.6-rc4-rke2r1
    • rancher/system-agent-installer-k3s v1.21.12-rc3-k3s1
    • rancher/system-agent-installer-k3s v1.22.9-rc4-k3s1
    • rancher/system-agent-installer-k3s v1.23.6-rc3-k3s1
    • rancher/system-agent-installer-rke2 v1.21.12-rc4-rke2r1
    • rancher/system-agent-installer-rke2 v1.22.9-rc4-rke2r1
    • rancher/system-agent-installer-rke2 v1.23.6-rc4-rke2r1

    Components with -rc

    • CLI_VERSION v2.6.5-rc1
    • DASHBOARD_UI_VERSION v2.6.5-rc3
    • UI_VERSION 2.6.5-rc3
    • AKS-OPERATOR v1.0.5-rc1
    • RKE v1.3.10-rc5

    RKE Kubernetes versions

    v1.18.20-rancher1-3 v1.19.16-rancher1-5 v1.20.15-rancher1-3 v1.21.12-rancher1-1 v1.22.9-rancher1-1 v1.23.6-rancher1-1

    Source code(tar.gz)
    Source code(zip)
    rancher-components.txt(1.12 KB)
    rancher-data.json(3.29 MB)
    rancher-images-sources.txt(28.74 KB)
    rancher-images.txt(20.73 KB)
    rancher-load-images.ps1(2.65 KB)
    rancher-load-images.sh(3.45 KB)
    rancher-mirror-to-rancher-org.ps1(786 bytes)
    rancher-mirror-to-rancher-org.sh(26.42 KB)
    rancher-namespace.yaml(62 bytes)
    rancher-rke-k8s-versions.txt(118 bytes)
    rancher-save-images.ps1(2.18 KB)
    rancher-save-images.sh(1.31 KB)
    rancher-windows-images-sources.txt(873 bytes)
    rancher-windows-images.txt(570 bytes)
    sha256sum.txt(1.31 KB)
  • v2.5.13(Apr 15, 2022)

    Release v2.5.13

    It is important to review the Install/Upgrade Notes below before upgrading to any Rancher version.

    Security Fixes for Rancher Vulnerabilities

    This release addresses security issues found in Rancher:

    • Improved authorization check to prevent privilege escalation of users who have create and update permissions on Global Roles. For more information see: CVE-2021-36784.
    • Fixed an issue that granted write permissions on CatalogTemplates or CatalogTemplateVersions to regular users when restricted-admin is set as the default user role in Rancher. For more information see: CVE-2021-4200.
    • Updated version of external go-getter library used in Fleet to avoid leaking SSH private keys in Rancher UI and in Fleet's deployment pod logs. For more information see: GHSA-wm2r-rp98-8pmh.

    For more details, see the security advisories page.

    Major Bug Fixes

    • When an auth group is assigned to the Restricted Admin global role, not all permissions are properly applied and the Rancher logs are spammed with the error RoleBinding.rbac.authorization.k8s.io "my-role-name" is invalid. See #36621.
    • Equinix Metal driver version updated. See #34742.
    • Cannot create a backup in Cluster Explorer after switching the interface language to Simplified Chinese. See #33654.

    Install/Upgrade Notes

    If you are installing Rancher for the first time, your environment must fulfill the installation requirements.

    Upgrade Requirements

    • Creating backups:
      • We strongly recommend creating a backup before upgrading Rancher. To roll back Rancher after an upgrade, you must back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to its state when a backup was created, any changes post upgrade will not be included after the restore. For more information, see the documentation on backing up Rancher.
    • Helm version:
      • Rancher install or upgrade must occur with Helm 3.2.x+ due to the changes with the latest cert-manager release. See #29213.
    • Kubernetes version:
      • The local Kubernetes cluster for the Rancher server should be upgraded to Kubernetes 1.17+ before installing Rancher 2.5+.
    • CNI requirements:
      • For Kubernetes v1.19 and newer, we recommend disabling firewalld as it has been found to be incompatible with various CNI plugins. See #28840.
      • If upgrading or installing to a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or newer, users should upgrade to RKE1 v1.19.2 or later to get Flannel version v0.13.0 that supports nf_tables. See Flannel #1317.
      • For users upgrading from >=v2.4.4 to v2.5.x with clusters where ACI CNI is enabled, note that upgrading Rancher will result in automatic cluster reconciliation. This is applicable for Kubernetes versions v1.17.16-rancher1-1, v1.17.17-rancher1-1, v1.17.17-rancher2-1, v1.18.14-rancher1-1, v1.18.15-rancher1-1, v1.18.16-rancher1-1, and v1.18.17-rancher1-1. Please refer to the workaround BEFORE upgrading to v2.5.x. See #32002.
    • Requirements for air-gapped environments:
      • For installing or upgrading Rancher in an air-gapped environment, please add the flag --no-hooks to the helm template command to skip rendering files for Helm's hooks (a sketch of the full command follows this list). See #3226.
      • If using a proxy in front of an air-gapped Rancher, you must pass additional parameters to NO_PROXY. See the documentation and #2725.
    • Cert-manager version requirements:
      • Recent changes to cert-manager require an upgrade if you have a high-availability install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation for information on how to upgrade cert-manager.
    • Requirements for Docker installs:
      • When starting the Rancher Docker container, the privileged flag must be used. See the documentation.
      • When installing in an air-gapped environment, you must supply a custom registries.yaml file to the docker run command as shown in the K3s documentation. If the registry has certs, then you will need to also supply those. See #28969.
      • When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container comes up and works as expected. See #33685.
    • RKE requirements:
      • For users upgrading from <=v2.4.8 (<= RKE v1.1.6) to v2.4.12+ (RKE v1.1.13+)/v2.5.0+ (RKE v1.2.0+), please note that Edit and Save cluster (even with no changes or a trivial change like cluster name) will result in cluster reconciliation and upgrading kube-proxy on all nodes because of a change in kube-proxy binds. This only happens on the first edit, and later edits shouldn't affect the cluster. See #32216.
    • EKS requirements:
      • There was a setting for Rancher versions prior to 2.5.8 that allowed users to configure the length of refresh time in cron format: eks-refresh-cron. That setting is now deprecated and has been migrated to a standard seconds format in a new setting: eks-refresh. If previously set, the migration will happen automatically. See #31789.
    • Fleet-agent:
      • When upgrading from <=v2.5.7 to >=v2.5.8, you may notice that in Apps & Marketplace there is a fleet-agent release stuck at uninstalling. This is caused by the migration of the fleet-agent release name. It is safe to delete the fleet-agent release, as it is no longer used; doing so will not delete the actual fleet-agent deployment, since it has been migrated. See #362.
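
    The following is a minimal sketch of rendering the Rancher chart offline with --no-hooks for an air-gapped install; the chart file name, hostname, and registry are placeholders, and the remaining values follow the air-gap documentation (adjust to your environment):

      helm template rancher ./rancher-2.5.13.tgz --output-dir . \
        --no-hooks \
        --namespace cattle-system \
        --set hostname=rancher.example.com \
        --set rancherImage=<your-private-registry>/rancher/rancher \
        --set useBundledSystemChart=true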

    Rancher Behavior Changes

    • Upgrades and Rollbacks:
      • Rancher supports both upgrades and rollbacks. Please note the version you would like to upgrade or roll back to in order to change the Rancher version.
      • Please be aware when upgrading to v2.3.0+, any edits to a Rancher-launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components. Plan accordingly.
      • Recent changes to cert-manager require an upgrade if you have an HA install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation on how to upgrade cert-manager.
      • Existing GKE clusters and imported clusters will continue to operate as is. Only new creations and registered clusters will use the new full lifecycle management.
      • The process to roll back Rancher has been updated for versions v2.5.0 and above. Refer to the documentation for the new instructions.
    • Important:
      • When rolling back, we are expecting you to roll back to the state at the time of your upgrade. Any changes post-upgrade would not be reflected.
    • The local cluster can no longer be turned off:
      • In older Rancher versions, the local cluster could be hidden to restrict admin access to the Rancher server's local Kubernetes cluster, but that feature has been deprecated. The local Kubernetes cluster can no longer be hidden and all admins will have access to the local cluster. If you would like to restrict permissions to the local cluster, there is a new restricted-admin role that must be used. Access to the local cluster can now be disabled by setting hide_local_cluster to true from the v3/settings API. See the documentation and #29325. For more information on upgrading from Rancher with a hidden local cluster, see the documentation.
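
    A minimal sketch of disabling access to the local cluster via the v3/settings API follows, assuming an admin API token in $TOKEN and the server URL in $RANCHER_URL; the setting is referred to as hide_local_cluster above, and the API resource name is assumed here to be hide-local-cluster:

      curl -sk -H "Authorization: Bearer $TOKEN" \
        -H 'Content-Type: application/json' \
        -X PUT "$RANCHER_URL/v3/settings/hide-local-cluster" \
        -d '{"value": "true"}'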

    Versions

    Please refer to the README for latest and stable versions.

    Please review our version documentation for more details on versioning and tagging conventions.

    Images

    • rancher/rancher:v2.5.13
    • rancher/rancher-agent:v2.5.13

    Tools

    Kubernetes Versions

    • 1.20.15 (Default)
    • 1.19.16
    • 1.18.20
    • 1.17.17

    Other Notes

    Deprecated Features

    Feature | Justification
    ---|---
    Cluster Manager - Rancher Monitoring | Monitoring in the Cluster Manager UI has been replaced with a new monitoring chart available in the Apps & Marketplace in Cluster Explorer.
    Cluster Manager - Rancher Alerts and Notifiers | Alerting and notifier functionality is now directly integrated with a new monitoring chart available in the Apps & Marketplace in Cluster Explorer.
    Cluster Manager - Rancher Logging | Functionality replaced with a new logging solution using a new logging chart available in the Apps & Marketplace in Cluster Explorer.
    Cluster Manager - MultiCluster Apps | Deploying to multiple clusters is now recommended to be handled with Rancher Continuous Delivery, powered by Fleet, available in Cluster Explorer.
    Cluster Manager - Kubernetes CIS 1.4 Scanning | Kubernetes CIS 1.5+ benchmark scanning is now replaced with a new scan tool deployed with a cis benchmarks chart available in the Apps & Marketplace in Cluster Explorer.
    Cluster Manager - Rancher Pipelines | Git-based deployment pipelines are now recommended to be handled with Rancher Continuous Delivery, powered by Fleet, available in Cluster Explorer.
    Cluster Manager - Istio v1.5 | The Istio project has ended support for Istio 1.5 and has recommended that all users upgrade. Newer Istio versions are now available as a chart in the Apps & Marketplace in Cluster Explorer.
    Cluster Manager - Provision Kubernetes v1.16 Clusters | We have ended support for Kubernetes v1.16. Cluster Manager no longer provisions new v1.16 clusters. If you already have a v1.16 cluster, it is unaffected.

    Experimental Features

    RancherD was introduced in v2.5 as an easy-to-use installation binary. With the introduction of RKE2 provisioning, this project is being rewritten and will be available at a later time. See #33423.

    Duplicated Features in Cluster Manager and Cluster Explorer

    • Only one version of the feature may be installed at any given time due to potentially conflicting CRDs.
    • Each feature should only be managed by the UI that it was deployed from.
    • If you have installed a feature in Cluster Manager, you must uninstall it in Cluster Manager before attempting to install the new version in Cluster Explorer dashboard.

    Cluster Explorer Feature Caveats and Upgrades

    • General:
      • Not all new features are currently installable on a hardened cluster.
      • New features are expected to be deployed using the Helm 3 CLI and not with the Rancher CLI.
    • UI Shell:
      • After closing the shell in the Rancher UI, be aware that the corresponding processes remain running indefinitely for each shell in the pod. See #16192.
    • Continuous Delivery:
      • Restricted admins are not able to create git repos from the Continuous Delivery option under Cluster Explorer; the screen will become stuck in a loading status. See #4909.
    • Rancher Backup:
      • When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location; it must continue to use the same URL.
    • Monitoring:
      • Monitoring sometimes errors on installation because it can't identify CRDs. See #29171.
    • Istio:
      • Be aware that when upgrading from Istio v1.7.4 or earlier to any later version, there may be connectivity issues. See upgrade notes and #31811.
      • Starting in v1.8.x, DNS is supported natively. This means that the additional addon component istioCoreDNS is deprecated in v1.8.x and is not supported in v1.9.x. If you are upgrading from v1.8.x to v1.9.x and you are using the istioCoreDNS addon, it is recommended that you disable it and switch to the natively supported DNS prior to the upgrade. If you upgrade without disabling it, you will need to manually clean up your installation as it will not get removed automatically. See #31761 and #31265.
      • Istio v1.10 and earlier versions are now end-of-life but are required for the upgrade path in order to not skip a minor version. See #33824.

    Cluster Manager Feature Caveats and Upgrades

    • GKE:
      • Basic authentication must be explicitly disabled in GCP before upgrading a GKE cluster to 1.19+ in Rancher. See #32312.
      • When creating GKE clusters in Terraform, the labels field cannot be empty: at least one label must be set. See #32553.
    • EKS & GKE:
      • When creating EKS and GKE clusters in Terraform, string fields cannot be set to empty. See #32440.

    Known Major Issues

    • Kubernetes Cluster Distributions
      • RKE:
        • Rotating encryption keys with a custom encryption provider is not supported. See #30539.
        • After migrating from the in-tree vSphere cloud provider to the out-of-tree cloud provider, attempts to upgrade the cluster will not complete. This is due to nodes containing workloads with bound volumes before the migration failing to drain. Users will observe these nodes stuck in a draining state. Follow this workaround to continue with the upgrade. See #35102.
      • AKS:
        • Azure Container Registry-based Helm charts cannot be added in Cluster Explorer but do work in the Apps feature of Cluster Manager. Note that when using a Helm chart repository, the disableSameOriginCheck setting controls when credentials are attached to requests. See documentation and #35940 for more information.
    • Cluster Tools
      • Hardened clusters:
        • Not all cluster tools can currently be installed on a hardened cluster.
      • Monitoring:
        • Deploying Monitoring V2 on a Windows cluster with win_prefix_path set requires users to deploy Rancher Wins Upgrader to restart wins on the hosts to start collecting metrics in Prometheus. See #32535.
        • Monitoring V2 fails to scrape ingress-nginx pods on any nodes except for the one Prometheus is deployed on, if the security group used by worker nodes blocks incoming requests to port 10254. The workaround for this issue is to open up port 10254 on all hosts. See #32563.
      • Logging:
        • Logging (Cluster Explorer): Windows nodeAgents are not deleted when performing Helm upgrade after disabling Windows logging on a Windows cluster. See #32325.
      • Istio versions:
        • Istio v1.5 is not supported in air-gapped environments. Please note that the Istio project has ended support for Istio v1.5.
        • Istio v1.10 support ended on January 7th, 2022.
      • Legacy Monitoring:
        • In air-gapped setups, the generated rancher-images.txt that is used to mirror images on private registries does not contain the images required to run Legacy Monitoring, also called Monitoring V1, which is compatible with Kubernetes 1.15 clusters. If you are running Kubernetes 1.15 clusters in an air-gapped environment, and you want to either install Monitoring V1 or upgrade Monitoring V1 to the latest that is offered by Rancher for Kubernetes 1.15 clusters, you will need to take one of the following actions:
          • Upgrade the Kubernetes version so that you can use v0.2.x of the Monitoring application Helm chart.
          • Manually import the necessary images into your private registry for the Monitoring application to use.
      • Installation requirements:
        • Importing a Kubernetes v1.21 cluster might not work properly and is unsupported.
      • Backup and Restore:
        • Reinstalling Rancher 2.5.x on the same cluster may fail due to a lingering rancher.cattle.io MutatingWebhookConfiguration object from a previous installation. Manually deleting it will resolve the issue; a minimal kubectl sketch follows this list.
      • Docker installs:
        • UI issues may occur due to a longer startup time.
        • Users may receive an error message when logging in for the first time. See #28800.
        • Users may be redirected to the login screen before a password and default view have been set. See #28798.
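
    For the reinstall issue above, a minimal sketch of removing the lingering webhook configuration, assuming it is named rancher.cattle.io as stated:

      # Confirm the leftover object exists, then delete it before reinstalling Rancher:
      kubectl get mutatingwebhookconfigurations
      kubectl delete mutatingwebhookconfiguration rancher.cattle.io
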
    Source code(tar.gz)
    Source code(zip)
    rancher-components.txt(43 bytes)
    rancher-images-digests-linux-amd64.txt(23.22 KB)
    rancher-images-digests-linux-arm64.txt(20.16 KB)
    rancher-images-digests-windows-1809.txt(1.97 KB)
    rancher-images-digests-windows-2004.txt(1.97 KB)
    rancher-images-digests-windows-20H2.txt(1.97 KB)
    rancher-images-sources.txt(11.74 KB)
    rancher-images.txt(7.58 KB)
    rancher-load-images.ps1(2.58 KB)
    rancher-load-images.sh(3.45 KB)
    rancher-mirror-to-rancher-org.ps1(763 bytes)
    rancher-mirror-to-rancher-org.sh(9.89 KB)
    rancher-namespace.yaml(62 bytes)
    rancher-save-images.ps1(2.12 KB)
    rancher-save-images.sh(1.31 KB)
    rancher-windows-images-sources.txt(843 bytes)
    rancher-windows-images.txt(547 bytes)
    rancherd-amd64(152.05 MB)
    rancherd-amd64.tar.gz(40.89 MB)
    sha256sum.txt(1.30 KB)
  • v2.6.3-patch3-rc1(Apr 2, 2022)

    Components

    • UI_VERSION 2.6.3-patch1
    • DASHBOARD_UI_VERSION v2.6.3-patch3
    • API_UI_VERSION 1.1.9
    • RKE v1.3.7
    • MACHINE v0.15.0-rancher73
    • CLI_VERSION v2.6.0
    • ETCD_VERSION v3.4.13
    • FLEET_MIN_VERSION 100.0.2+up0.3.8
    • HELM_VERSION v2.16.8-rancher1
    • K3S_VERSION v1.21.7+k3s1
    • MACHINE_VERSION v0.15.0-rancher73
    • RANCHER_WEBHOOK_MIN_VERSION 1.0.3+up0.2.4
    • SYSTEM_AGENT_VERSION v0.1.1
    • AKS-OPERATOR v1.0.3
    • APISERVER v0.0.0-20211025232108-df28932a5627
    • CHANNELSERVER v0.5.1-0.20210618172430-5cbefd383369
    • CONTROLLER-RUNTIME v0.0.0-20211217013041-3c6118a30611
    • DYNAMICLISTENER v0.3.1-0.20211104200948-cd5d71f2fe95
    • EKS-OPERATOR v1.1.2
    • GKE-OPERATOR v1.1.2
    • KUBERNETES-PROVIDER-DETECTOR v0.1.5
    • LASSO v0.0.0-20211217013041-3c6118a30611
    • NORMAN v0.0.0-20211201154850-abe17976423e
    • RDNS-SERVER v0.0.0-20180802070304-bf662911db6a
    • REMOTEDIALER v0.2.6-0.20220120012928-4ea2198e0966
    • SECURITY-SCAN v0.1.7-0.20200222041501-f7377f127168
    • STEVE v0.0.0-20210915171517-ae8b16260899
    • WRANGLER v0.8.10

    Source code(tar.gz)
    Source code(zip)
    rancher-components.txt(951 bytes)
    rancher-images-digests-linux-amd64.txt(42.93 KB)
    rancher-images-digests-linux-arm64.txt(31.14 KB)
    rancher-images-digests-windows-1809.txt(1.56 KB)
    rancher-images-digests-windows-20H2.txt(1.56 KB)
    rancher-images-digests-windows-ltsc2022.txt(1.56 KB)
    rancher-images-sources.txt(21.13 KB)
    rancher-images.txt(14.88 KB)
    rancher-load-images.ps1(2.65 KB)
    rancher-load-images.sh(3.45 KB)
    rancher-mirror-to-rancher-org.ps1(620 bytes)
    rancher-mirror-to-rancher-org.sh(19.01 KB)
    rancher-namespace.yaml(62 bytes)
    rancher-save-images.ps1(2.18 KB)
    rancher-save-images.sh(1.31 KB)
    rancher-windows-images-sources.txt(702 bytes)
    rancher-windows-images.txt(452 bytes)
    sha256sum.txt(1.12 KB)
  • v2.6.3-patch3(Apr 7, 2022)

    What's Changed

    • [v2.6.3-patch3] drop the field ecrCredentialPlugin when there are no values in it by @jiaqiluo in https://github.com/rancher/rancher/pull/37090
    • Tick UI Dashboard index to release-2.6.3-patch3 by @rak-phillip in https://github.com/rancher/rancher/pull/37125
    • pin kdm data for v2.6.3 patch releases by @kinarashah in https://github.com/rancher/rancher/pull/37149
    • [v2.6.3-patch3] Private registry cleanup job 2.6.3 patch3 by @thedadams in https://github.com/rancher/rancher/pull/37158

    Full Changelog: https://github.com/rancher/rancher/compare/v2.6.3-patch2...v2.6.3-patch3

    Source code(tar.gz)
    Source code(zip)
    rancher-components.txt(951 bytes)
    rancher-images-digests-linux-amd64.txt(44.12 KB)
    rancher-images-digests-linux-arm64.txt(31.99 KB)
    rancher-images-digests-windows-1809.txt(1.67 KB)
    rancher-images-digests-windows-20H2.txt(1.67 KB)
    rancher-images-digests-windows-ltsc2022.txt(1.67 KB)
    rancher-images-sources.txt(21.61 KB)
    rancher-images.txt(15.28 KB)
    rancher-load-images.ps1(2.65 KB)
    rancher-load-images.sh(3.45 KB)
    rancher-mirror-to-rancher-org.ps1(663 bytes)
    rancher-mirror-to-rancher-org.sh(19.52 KB)
    rancher-namespace.yaml(62 bytes)
    rancher-save-images.ps1(2.18 KB)
    rancher-save-images.sh(1.31 KB)
    rancher-windows-images-sources.txt(740 bytes)
    rancher-windows-images.txt(483 bytes)
    sha256sum.txt(1.12 KB)
  • v2.6.4(Mar 31, 2022)

    Release v2.6.4

    Note: Rancher 2.6.4 contains a bug which causes downstream clusters to become unavailable via most methods. A temporary workaround is to restart the Rancher pods. #37250
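
    A minimal sketch of the temporary workaround, assuming a Helm-based install with the default deployment name and namespace:

      kubectl -n cattle-system rollout restart deploy/rancher
      kubectl -n cattle-system rollout status deploy/rancher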


    It is important to review the Install/Upgrade Notes below before upgrading to any Rancher version.

    In Rancher v2.6.4, the cluster-api module has been upgraded from v0.4.4 to v1.0.2, in which the apiVersion of CAPI CRDs is upgraded from cluster.x-k8s.io/v1alpha4 to cluster.x-k8s.io/v1beta1. This causes rollbacks from Rancher v2.6.4 to any previous version of Rancher v2.6.x to fail, because the older apiVersion of the CRDs that the previous version needs is no longer available after the upgrade. To avoid this, the Rancher resource cleanup script should be run before the restore or rollback is attempted. This script can be found in the rancherlabs/support-tools repo, and the usage of the script can be found in the backup-restore operator docs. See also #36803 for more details.
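
    Before attempting a rollback, you can check which CAPI apiVersions the local cluster currently serves; the CRD name below is the upstream cluster-api clusters CRD:

      kubectl get crd clusters.cluster.x-k8s.io \
        -o jsonpath='{range .spec.versions[*]}{.name}{" served="}{.served}{"\n"}{end}'
      # After the v2.6.4 upgrade this typically reports only v1beta1, which is why the cleanup
      # script referenced above must be run before restoring an older Rancher version.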

    Features and Enhancements

    New in Rancher

    • Kubernetes v1.22 is no longer experimental and is now supported. Kubernetes v1.23 is experimental.
    • Kubernetes v1.22 and v1.23 are available as Kubernetes version options when provisioning clusters as well as upgrading imported RKE2/k3s clusters.
    • Rancher on IBM Z is now in tech preview.

    New in Cert-Manager

    • Rancher now supports cert-manager versions 1.6.2 and 1.7.1. We recommend v1.7.x because v1.6.x will reach end-of-life on March 30, 2022. To read more, see the documentation. A minimal upgrade sketch follows this list.
    • When upgrading Rancher and cert-manager, you will need to use Option B: Reinstalling Rancher and cert-manager from the Rancher docs.
    • There are several versions of cert-manager which, due to their backwards incompatibility, are not recommended for use with Rancher. You can read more about which versions are affected by this issue in the cert-manager docs. As a result, only versions 1.6.2 and 1.7.1 are recommended for use at this time.
    • For instructions on upgrading cert-manager from version 1.5 to 1.6, see the relevant cert-manager docs.
    • For instructions on upgrading cert-manager from version 1.6 to 1.7, see the relevant cert-manager docs.
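
    The following is a minimal sketch of moving cert-manager to a supported version with Helm; Option B in the Rancher docs involves additional Rancher steps, and the repo name, namespace, and use of installCRDs here are assumptions to adjust for your install:

      helm repo add jetstack https://charts.jetstack.io && helm repo update
      helm upgrade --install cert-manager jetstack/cert-manager \
        --namespace cert-manager --create-namespace \
        --version v1.7.1 --set installCRDs=true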

    New in RKE2 - Tech Preview

    • UI Enhancements in RKE2 Cluster Provisioning
      • The UI now provides an option to rotate certificates for RKE2 provisioned clusters, including an option to rotate certificates for an individual component. See Dashboard #4485.
      • S3 configuration support has been re-enabled for RKE2 snapshots; this is required for RKE2 provisioning parity with RKE1. See Dashboard #4551.
      • When restoring RKE2 cluster etcd snapshots, more restore options are available. See Dashboard #4539.
      • “Auto Replace” option support enabled for RKE2 machine pools. See Dashboard #4449.
      • Ability to scale down individual nodes for RKE2-provisioned clusters added. See Dashboard #4446.
      • Enhanced RKE2 cluster provisioning on Linode. See Dashboard #3262.
      • Added user-configurable OS field for vSphere machine pools. See Dashboard #4859.
      • "Drain Before Delete" support for RKE2 machine pools has been added. Note that when editing nodes and enabling the "Drain Before Delete" option, the existing control plane nodes and worker nodes are deleted and new nodes are created. It should be noted that for etcd nodes, the same behavior does not apply. See #35274 and Dashboard #4448.
      • Health checks were added to support self-healing RKE2 machine pools. See #35275.
    • Bug Fixes
      • RKE2 clusters can now be provisioned using the AWS cloud provider with Kubernetes v1.22. See #35618.
      • Clusters can now be created from an RKE2 template as a non-admin user. See #34844.
    • Known Issues in Windows
      • Experimental Support for RKE2 Provisioning tech preview for Windows will only work on v1.22 and up of RKE2. End users should not use v1.21.x of RKE2 for any RKE2 cluster that will have Windows worker nodes. This is due to an upstream Calico bug that was not backported to the minor version of Calico (3.19.x) that is present in v1.21.x of RKE2. See #131.
      • When upgrading Windows nodes in RKE2 clusters via the Rancher UI, Windows worker nodes will require a reboot after the upgrade is completed.
      • CSI Proxy for Windows will not work in an air-gapped environment.
      • NodePorts do not work on Windows Server 2022 in RKE2 clusters due to a Windows kernel bug. See #159.
    • Other Known Issues
      • RKE2 node driver cluster gets stuck in provisioning state after an upgrade to v2.6.4 and rollback to v2.6.3. See #36859.
      • RKE2 node driver cluster has its nodes redeployed when upgrading Rancher from v2.6.3 to v2.6.4. See #36627.
      • RKE2 node driver cluster gets stuck in provisioning state in Kubernetes v1.23.x. See #36939.
      • The communication between the ingress controller and the pods doesn't work when you create an RKE2 cluster with Cilium as the CNI and activate project network isolation. See documentation and #34275.
      • Cluster state changes to Provisioning when a worker node is deleted in an RKE2 cluster. See #36689.
      • Cluster state changes to Provisioning when a snapshot is taken in an RKE2 cluster. See #36504.

    UI Enhancements in Fleet

    • Added a Bundles tab to the GitRepo detail view. See Dashboard #4794.
    • Added a Detail view for the Fleet Bundle resource. See Dashboard #4793.
    • Fleet controller logs are now viewable on cluster dashboard. See Dashboard #3668.
    • In the Continuous Delivery dashboard, badge colors have been updated to feature counts that convey health status information. See Dashboard #5232.
    • On the Git Repos detail page, error notifications were added to the Conditions tab. See Dashboard #5218.
    • Added a new dashboard view to Fleet at the top of the navigation list; this is the default page that the user will land on. See Dashboard #5048.
    • In the Git Repos detail page, a warning icon now displays when the GitRepo does not apply to any clusters. See Dashboard #4929.

    Other UI Enhancements

    • Nvidia GPU Reservation can be edited on the workloads page. See Dashboard #5005.
    • The UI now uses a nested structure to determine whether to display global or cluster-scoped resources in the role creation forms, and which API group should be auto-populated when a resource is selected. See Dashboard #4888.
    • An edit and detail UI has been added for NetworkPolicies. See Dashboard #4729.
    • On the namespace detail page, a new tab was added that shows the workloads. See Dashboard #5115.
    • In Apps and Marketplace, existing charts have been updated with annotations such that users will better understand what occurs in their mixed Linux/Windows workload cluster if they deploy a chart. See Dashboard #5137.
    • If a custom consent banner is enabled, after logging into Rancher, a user must now accept or deny consent. By default, consent is required to use Rancher, but can be configured to be optional. See Dashboard #4719.
    • Improvements added to the Rollback Diff view: ability to switch between the side-by-side and inline diff and the ability to collapse (or hide) the fields that are non-standard. See Dashboard #4636.
    • Users can now create a ConfigMap at the project scope. See Dashboard #4571.

    Security Enhancements

    Other New Features

    • Users can now provision node driver clusters from an air gapped cluster configured to use a proxy for outbound connections. See the documentation and #28411.
    • The "rancher/install-docker" script now supports the Linux distributions SLES / OpenSUSE / RHEL / Rocky Linux for the Docker versions 20.10.8 / 20.10.9 / 20.10.10. See #34615.
    • Users can now configure the Readiness Check and Liveness Check of coredns-autoscaler. See #24939.
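
    A usage sketch for the install-docker script; the URL pattern below follows the published scripts, but verify that the exact version file exists for your target Docker release:

      curl -fsSL https://releases.rancher.com/install-docker/20.10.10.sh | sudo sh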

    Behavior Changes

    Major Bug Fixes

    UI

    • Fixed an issue in which restricted admins couldn't create git repos from the continuous delivery menu. See Dashboard #4909.
    • Cluster Dashboard resource gauges now show correct max values. See Dashboard #2892.
    • Can now add or edit cluster labels via the UI. See Dashboard #3086.
    • Added support for cluster icons for drivers that are not built in. See Dashboard #3124.
    • Editing deployment image names via "Edit Config" no longer results in failure. See Dashboard #3827.
    • Templates are now listing from vSphere content library as expected. See Dashboard #4302.
    • "Restore snapshot" action is now available in Cluster management page. See Dashboard #4606.
    • Rollback option on workload works as expected. See Dashboard #4664.
    • Group list now sorts alphabetically when attempting to add role grant for SAML group. See Dashboard #4685.
    • Namespace resource settings for quota override now display in UI. See Dashboard #4704.
    • Fixed an issue in which there were differences in behavior of conditional logic in questions.yaml between older Cluster Manager Apps and the Cluster Explorer in Apps & Marketplace. See Dashboard #4706.
    • UI now shows network-attachment-definitions correctly. See Dashboard #4748.
    • Added form field for ssl_verify and ssl_version when enabling an HTTPS flow. See Dashboard #4753.
    • Fixed issue in which advanced selector settings for GitRepo get lost when editing. See Dashboard #4788.
    • UI change of replicas are now enabled. See Dashboard #4828.
    • Form elements are now displayed as expected in Continuous Delivery Git Repo create/edit form. See Dashboard #4830.
    • Added back pod selection checkboxes and delete button to workload views in Cluster Explorer. See Dashboard #4831.
    • Removed Rancher OS AMI link from EC2 provisioning UI. See Dashboard #4833.
    • Restricted admins can now see Assign Global Roles button in Users & Authentication -> Groups. See Dashboard #4834.
    • Project/Cluster Member with Project Monitoring role granted on System Project able to view workload /node metrics tab in Explorer UI. See Dashboard #4835.
    • Init containers are now visible in the Cluster Explorer UI. See Dashboard #4840.
    • Users may create a cronjob in Rancher UI as expected. See Dashboard #4842.
    • API Key Expired date displays correctly now at first login as standard user. See Dashboard #4863.
    • Secrets can be used as Environment Variables in Deployments. See Dashboard #4866.
    • Dashboard now shows the API endpoint URL in the API Keys page. See Dashboard #4869.
    • Unnecessary "%" that was displayed in Maximum Worker Nodes Unavailable screen has been removed. See Dashboard #4870.
    • Events table in cluster dashboard has been replaced with monitoring alerts table. See Dashboard #4911.
    • Kubernetes Job now displays in Rancher UI when job is created from CronJob with kubectl 1.19+ version. See Dashboard #4963.
    • Resolved Rancher error when browsing clusters and refreshing the page. See Dashboard #4967.
    • New User Default Role Toggle now functions as expected in Rancher UI. See Dashboard #4992.
    • Fixed login error Cannot read properties of null (reading 'startsWith') Home. See Dashboard #5004.
    • Users can now change default role via UI after configuring ADFS authentication. See Dashboard #5066.
    • Namespace view now provides detail for the summary pills. See Dashboard #5074.
    • Dropdown added so that viewing previous container logs is easier. See Dashboard #5075.
    • Helm UI now displays logs of a deployment. See Dashboard #5079.
    • Component Status pills now show on cluster dashboard view. See Dashboard #5085.
    • Fixed an issue in which config map view was inconsistent for binary data. See Dashboard #5311.

    Rancher

    • Fixed error in which standard users received an error when creating a cluster from the Edit as YAML and Edit as a Form buttons. See #35868.
    • If the rancher-node security group is used, the existing security group is not modified, as expected. See #24337.
    • For namespaces created inside projects via kubectl, when a resource limit exceeds the remaining amount in the project, Rancher no longer assigns an all-restrictive quota limit. Instead, a zero limit is set for the exceeding resource only. See #35647.
    • When deploying a cluster on v1.22, monitoring installation works as expected. See #35571.
    • Fixed an issue in which Calico probes failed on Windows nodes. See #36910.
    • Enabling Cluster Monitoring through the API or the UI will now set the same memory limits. See #25103.
    • When creating a RKE cluster with the Rancher CLI, the values from a config file are now applied. See #25416.
    • The Container Default Resource Limits are now applied when a new namespace is created in a project via kubectl (bypassing the UI). See #27750.
    • Fixed issue in which deploying clusters using the Azure node driver caused Pod Predicate NodeAffinity failed error on default-http-backend pod. See #29882.
    • Hairpin rules are now added when using IPVS with a cloud provider enabled. See #30363.
    • Cluster namespaces no longer remain after deleting a cluster. See #31546.
    • Upgrading Kubernetes version on downstream clusters no longer causes a memory consumption increase. See #31640.
    • Configuring Keycloak (SAML) authentication no longer fails with decoding error Unknown error: SAML: cannot initialize saml SP, cannot decode IDP Metadata content from the config. See #33709.
    • Fixed issue in which intermittent health check failure caused unreachable downstream clusters. See #34819.
    • Backups taking longer than 5 minutes will no longer cause Rancher to start a new backup and delete the one that is currently running, generating a backup loop. See #34890.
    • The K3s Docker image is now used in the Dockerfile instead of downloading the K3s binary. See #35101.
    • When Logging v1 and v2 are both installed, the v1 fluentd pods no longer get stuck with the crashlooping error. See #35125.
    • Syslog output can now be sent through UDP in Logging v2. See #35197.
    • The cattle-node-cleanup job will be deleted as expected after it times out. See #35334.
    • If a new namespace does not fit the Project quota limits, a zero quota limit will be created only for the new namespace's resources. See #35647.
    • Fixed an issue in which, when upgrading an RKE cluster, the status was shown as active even though the add-ons were still being updated. See #35750.
    • rancher-webhook certificate renewal workaround updated. See #35860.
    • Default value for ingressNginx.serviceMonitor.interval is set to 30s in the rancher-monitoring charts. See #36070.
    • KEv2 clusters imported with the Rancher client now have their config correctly rewritten. See #36128.
    • The rancher-monitoring ports for node-exporter and push-proxy-clients are no longer opened on the host. See #36140.
    • DNS redirect iptables rules are now correctly created by istio-cni. See #36159.
    • Fixed an issue in which Fleet, when used to apply the configuration, would apply only the final configMap entry. See #36242.
    • When creating a cluster, RKE no longer panics if rotate_encryption_key is enabled and secrets_encryption_config is disabled. See #36333.
    • Actions that can be performed by a cluster member, such as Download KubeConfig and Take Snapshot, are now correctly shown as available on RKE1 clusters in a Rancher setup. See #35828.

    Security

    • Role bindings are now cleaned up from deleted users. See #36550.
    • Fixed error when syncing handler restrictedAdmin RBAC cluster. See #36650.
    • Fixed error when syncing handler grb-cluster-sync: RoleBinding.rbac.authorization.k8s.io. See #36650.

    Install/Upgrade Notes

    • If you are installing Rancher for the first time, your environment must fulfill the installation requirements.
    • The namespace where the local Fleet agent runs has been changed to cattle-fleet-local-system. This change does not impact GitOps workflows.

    Upgrade Requirements

    • Creating backups: We strongly recommend creating a backup before upgrading Rancher. To roll back Rancher after an upgrade, you must back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to its state when a backup was created, any changes post upgrade will not be included after the restore. For more information, see the documentation on backing up Rancher.
    • Helm version: Rancher install or upgrade must occur with Helm 3.2.x+ due to the changes with the latest cert-manager release. See #29213.
    • Kubernetes version:
      • The local Kubernetes cluster for the Rancher server should be upgraded to Kubernetes 1.18+ before installing Rancher 2.6+.
      • When using Kubernetes v1.21 with Windows Server 20H2 Standard Core, the patch "2019-08 Servicing Stack Update for Windows Server" must be installed on the node. See #72.
    • CNI requirements:
      • For Kubernetes v1.19 and newer, we recommend disabling firewalld as it has been found to be incompatible with various CNI plugins. See #28840.
      • If upgrading or installing to a Linux distribution which uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or newer, users should upgrade to RKE1 v1.19.2 or later to get Flannel version v0.13.0 that supports nf_tables. See Flannel #1317.
      • For users upgrading from >=v2.4.4 to v2.5.x with clusters where ACI CNI is enabled, note that upgrading Rancher will result in automatic cluster reconciliation. This is applicable for Kubernetes versions v1.17.16-rancher1-1, v1.17.17-rancher1-1, v1.17.17-rancher2-1, v1.18.14-rancher1-1, v1.18.15-rancher1-1, v1.18.16-rancher1-1, and v1.18.17-rancher1-1. Please refer to the workaround BEFORE upgrading to v2.5.x. See #32002.
    • Requirements for air gapped environments:
      • For installing or upgrading Rancher in an air gapped environment, please add the flag --no-hooks to the helm template command to skip rendering files for Helm's hooks; a command sketch follows this list. See #3226.
      • If using a proxy in front of an air gapped Rancher, you must pass additional parameters to NO_PROXY. See the documentation and related issue #2725.
    • Cert-manager version requirements: Recent changes to cert-manager require an upgrade if you have a high-availability install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation on how to upgrade cert-manager. See documentation.
    • Requirements for Docker installs:
      • When starting the Rancher Docker container, the privileged flag must be used. See documentation.
      • When installing in an air gapped environment, you must supply a custom registries.yaml file to the docker run command as shown in the K3s documentation. If the registry has certificates, then you will need to also supply those. See #28969.
      • When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container comes up and is working as expected. See #33685.
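
    As a rough sketch of the air gap and proxy requirements above (the registry host, chart file name, and host paths are placeholders rather than values taken from this release; verify the exact flags against the air gap documentation):

    # Air gapped Helm install/upgrade: render the chart offline and skip Helm hook
    # manifests with --no-hooks, pointing images at a private registry
    helm template rancher ./rancher-2.6.4.tgz --output-dir . \
        --no-hooks \
        --namespace cattle-system \
        --set rancherImage=registry.example.com:5000/rancher/rancher

    # Air gapped Docker install: run privileged and mount a custom registries.yaml
    # (mount path as described in the K3s documentation referenced above; verify it
    # for your version); when a proxy sits in front of Rancher, NO_PROXY must also
    # be extended with the additional parameters noted above
    docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged \
        -e NO_PROXY="localhost,127.0.0.1,10.0.0.0/8,cattle-system.svc" \
        -v /opt/rancher/registries.yaml:/etc/rancher/k3s/registries.yaml \
        registry.example.com:5000/rancher/rancher:v2.6.4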

    Rancher Behavior Changes

    • Legacy features are gated behind a feature flag. Users upgrading from Rancher <=v2.5.x will automatically have the --legacy feature flag enabled. New installations that require legacy features need to enable the flag on install or through the UI.
    • Users must manually remove legacy services. When workloads created using the legacy UI are deleted, the corresponding services are not automatically deleted. Users will need to manually remove these services. A message will be displayed notifying the user to manually delete the associated services when such a workload is deleted. See #34639.
    • Charts from library and helm3-library catalogs can no longer be launched. Users will no longer be able to launch charts from the library and helm3-library catalogs, which are available through the legacy apps and multi-cluster-apps pages. Any existing legacy app that was deployed from a previous Rancher version will continue to be able to edit its currently deployed chart. Note that the Longhorn app will still be available from the library for new installs but will be removed in the next Rancher version. All users are recommended to deploy Longhorn from the Apps & Marketplace section of the Rancher UI instead of through the Legacy Apps pages.
    • The local cluster can no longer be turned off. In older Rancher versions, the local cluster could be hidden to restrict admin access to the Rancher server's local Kubernetes cluster, but that feature has been deprecated. The local Kubernetes cluster can no longer be hidden and all admins will have access to the local cluster. If you would like to restrict permissions to the local cluster, there is a new restricted-admin role that must be used. Access to the local cluster can now be disabled by setting hide_local_cluster to true from the v3/settings API (a sketch follows this list). See the documentation and #29325. For more information on upgrading from Rancher with a hidden local cluster, see the documentation.
    • Users must log in again. After upgrading to v2.6+, users will be automatically logged out of the old Rancher UI and must log in again to access Rancher and the new UI. See #34004.
    • Fleet is now always enabled. For users upgrading from v2.5.x to v2.6.x, note that Fleet will be enabled by default as it is required for operation in v2.6+. This will occur even if Fleet was disabled in v2.5.x. During the upgrade process, users will observe restarts of the rancher pods, which is expected. See #31044 and #32688.
    • The Fleet agent in the local cluster now lives in cattle-fleet-local-system. Starting with Rancher v2.6.1, Fleet allows for two agents in the local cluster for scenarios where "Fleet is managing Fleet". The true local agent runs in the new cattle-fleet-local-system namespace. The agent downstream from another Fleet management cluster runs in cattle-fleet-system, similar to the agent in pure downstream clusters. See #34716 and #531.
    • Editing and saving clusters can result in cluster reconciliation. For users upgrading from <=v2.4.8 (<= RKE v1.1.6) to v2.4.12+ (RKE v1.1.13+)/v2.5.0+ (RKE v1.2.0+), please note that editing and saving a cluster (even with no changes or a trivial change like the cluster name) will result in cluster reconciliation and upgrading kube-proxy on all nodes because of a change in kube-proxy binds. This only happens on the first edit; later edits shouldn't affect the cluster. See #32216.
    • The EKS cluster refresh interval setting changed. There is currently a setting allowing users to configure the length of refresh time in cron format: eks-refresh-cron. That setting is now deprecated and has been migrated to a standard seconds format in a new setting: eks-refresh. If previously set, the migration will happen automatically. See #31789.
    • System components will restart. Please be aware that upon an upgrade to v2.3.0+, any edits to a Rancher launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components. Plan accordingly.
    • New GKE and AKS clusters will use Rancher's new lifecycle management features. Existing GKE and AKS clusters and imported clusters will continue to operate as-is. Only new creations and registered clusters will use the new full lifecycle management.
    • New steps for rolling back Rancher. The process to roll back Rancher has been updated for versions v2.5.0 and above. New steps require scaling Rancher down to 0 replicas before restoring the backup. Please refer to the documentation for the new instructions.
    • RBAC differences around Manage Nodes for RKE2 clusters. Due to the change of the provisioning framework, the Manage Nodes role will no longer be able to scale up/down machine pools. The user would need the ability to edit the cluster to manage the machine pools. See #34474.
    • New procedure to set up Azure cloud provider for RKE2. For RKE2, the process to set up an Azure cloud provider is different than for RKE1 clusters. Users should refer to the documentation for the new instructions. See #34367 for original issue.
    • Machines vs Kube Nodes. In previous versions, Rancher only displayed Nodes, but with v2.6, there are the concepts of machines and kube nodes. Kube nodes are the Kubernetes node objects and are only accessible if the Kubernetes API server is running and the cluster is active. Machines are the cluster's machine object which defines what the cluster should be running.
    • Rancher's External IP Webhook chart no longer supported in v1.22. In v1.22, upstream Kubernetes has enabled the admission controller to reject usage of external IPs. As such, the rancher-external-ip-webhook chart that was created as a workaround is no longer needed, and support for it is now capped to Kubernetes v1.21 and below. See #33893.
    • Increased memory limit for Legacy Monitoring. The default value of the Prometheus memory limit in the legacy Rancher UI is now 2000Mi to prevent the pod from restarting due to an OOMKill. See #34850.
    • Increased memory limit for Monitoring. The default value of the Prometheus memory limit in the new Rancher UI is now 3000Mi to prevent the pod from restarting due to an OOMKill. See #34850.
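
    For illustration only, the hide_local_cluster setting mentioned above is exposed through the v3/settings API; the hyphenated setting name is an assumption about how the API exposes it, and the token, hostname, and payload shape are placeholders to be confirmed against the settings documentation for your Rancher version:

    # Hypothetical sketch: restrict visibility of the local cluster via v3/settings
    curl -sk -u "token-abcde:<secret>" \
        -X PUT "https://rancher.example.com/v3/settings/hide-local-cluster" \
        -H "Content-Type: application/json" \
        -d '{"value": "true"}'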

    Versions

    Please refer to the README for latest and stable versions.

    Please review our version documentation for more details on versioning and tagging conventions.

    Images

    • rancher/rancher:v2.6.4

    Tools

    Kubernetes Versions

    • v1.23.4 (Experimental)
    • v1.22.7 (Default)
    • v1.21.10
    • v1.20.15
    • v1.19.16
    • v1.18.20

    Rancher Helm Chart Versions

    Starting in 2.6.0, many of the Rancher Helm charts available in the Apps & Marketplace will start with a major version of 100. This was done to prevent simultaneous upstream changes and Rancher changes from causing conflicting version increments. This also brings us into compliance with semver, which is a requirement for newer versions of Helm. You can now see the upstream version of a chart in the build metadata, for example: 100.0.0+up2.1.0. See #32294.

    Other Notes

    Feature Flags

    Feature flags introduced in 2.6.0 and the Harvester feature flag introduced in 2.6.1 are listed below for reference:

    Feature Flag | Default Value | Description
    ---|---|---
    harvester | true | Used to manage access to the Harvester list page where users can navigate directly to Harvester host clusters and have the ability to import them.
    fleet | true | The previous fleet feature flag is now required to be enabled as the fleet capabilities are leveraged within the new provisioning framework. If you had this feature flag disabled in earlier versions, upon upgrading to Rancher, the flag will automatically be enabled.
    gitops | true | If you want to hide the "Continuous Delivery" feature from your users, then please use the newly introduced gitops feature flag, which hides the ability to leverage Continuous Delivery.
    rke2 | true | We have introduced the ability to provision RKE2 clusters as tech preview. By default, this feature flag is enabled, which allows users to attempt to provision these types of clusters.
    legacy | false for new installs, true for upgrades | There are a set of features from previous versions that are slowly being phased out of Rancher in favor of newer iterations. This is a mix of deprecated features as well as features that will eventually be moved to newer variations in Rancher. By default, this feature flag is disabled for new installations. If you are upgrading from a previous version, this feature flag will be enabled.
    token-hashing | false | Used to enable the new token-hashing feature. Once enabled, existing tokens will be hashed and all new tokens will be hashed automatically using the SHA256 algorithm. Once a token is hashed it cannot be undone. Once this feature flag is enabled it cannot be disabled.
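
    As a hedged illustration of toggling these flags (flag names come from the table above; hostnames, versions, and chart values are placeholders, and the exact mechanism depends on your install method, so check the feature flags documentation):

    # Docker install: feature flags can be passed to the Rancher server at startup
    docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged \
        rancher/rancher:v2.6.4 --features=legacy=true

    # Helm install: the same flags are commonly supplied through the CATTLE_FEATURES
    # environment variable via the chart's extraEnv value
    helm upgrade --install rancher rancher-latest/rancher \
        --namespace cattle-system \
        --set hostname=rancher.example.com \
        --set 'extraEnv[0].name=CATTLE_FEATURES' \
        --set 'extraEnv[0].value=legacy=true'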

    Experimental Features

    RancherD was introduced in 2.5 as an easy-to-use installation binary. With the introduction of RKE2 provisioning, this project is being re-written and will be available at a later time. See #33423.

    Legacy Features

    Legacy features are features hidden behind the legacy feature flag; they are various features and functionality of Rancher that were available in previous releases. These are features that Rancher doesn't intend for new users to consume, but if you have been using past versions of Rancher, you may still want to use this functionality.

    When you first start 2.6, there is a card in the Home page that outlines the location of where these features are now located.

    The deprecated features from v2.5 are now behind the legacy feature flag. Please review our deprecation policy for questions.

    The following legacy features are no longer supported on Kubernetes v1.21+ clusters:

    • Logging
    • CIS Scans
    • Istio 1.5
    • Pipelines

    The following legacy feature is no longer supported on Kubernetes clusters past v1.21:

    • Monitoring V1

    Known Major Issues

    • Kubernetes Cluster Distributions:
      • RKE:
        • Rotating encryption keys with a custom encryption provider is not supported. See #30539.
        • On the Cluster Management page, snapshot-related actions such as create/restore and rotate certificate are not available for a standard user in RKE1. See Dashboard #5011.
        • Dual-stack networking is not supported for RKE1 Windows clusters, specifically on v1.23.4, in which dual-stack networking became GA in upstream Kubernetes. See Windows #165.
        • Due to a bug in rancher-machine which ignores the use of NO_PROXY when running inside of a Rancher jail, certain air-gapped Rancher environments may face issues with provisioning an RKE node driver cluster if Rancher should not use the proxy to communicate with the downstream cluster. It is recommended that end users with environments matching these conditions do not upgrade to v2.6.4. See #37295.
      • RKE2 - Tech Preview: There are several known issues as this feature is in tech preview, but here are some major issues to consider before using RKE2.
        • Amazon ECR Private Registries are not functional. See #33920.
        • When provisioning using a RKE2 cluster template, the rootSize for AWS EC2 provisioners does not currently take an integer when it should, and an error is thrown. To work around this issue, wrap the EC2 rootSize in quotes. See Dashboard #3689.
      • RKE2 - Windows:
        • Windows clusters do not yet support Rancher Apps & Marketplace. See #34405.
        • Windows clusters do not yet support upgrading RKE2 versions. See Windows #76.
      • AKS:
        • When editing or upgrading the AKS cluster, do not make changes from the Azure console or CLI at the same time. These actions must be done separately. See #33561.
        • Windows node pools are not currently supported. See #32586.
        • Azure Container Registry-based Helm charts cannot be added in Cluster Explorer, but do work in the Apps feature of Cluster Manager. Note that when using a Helm chart repository, the disableSameOriginCheck setting controls when credentials are attached to requests. See documentation and #34584 for more information.
      • GKE:
        • Basic authentication must be explicitly disabled in GCP before upgrading a GKE cluster to 1.19+ in Rancher. See #32312.
      • AWS:
        • In an HA Rancher server on Kubernetes v1.20, ingresses on AWS EC2 node driver clusters do not go through and result in a failed calling webhook error. Please refer to the workaround. See #35754.
        • On RHEL 8.4 SELinux-enabled AWS AMIs, Kubernetes v1.22 fails to provision on AWS. As Rancher will not install RPMs on the nodes, users may work around this issue either by using an AMI with the rancher-selinux package already installed, or by installing the package via cloud-init. Users will encounter this issue on upgrade to v1.22 as well. When upgrading to 1.22, users must manually upgrade/install the rancher-selinux package on all the nodes in the cluster, then upgrade the Kubernetes version. See #36509.
    • Infrastructures:
      • vSphere:
        • The vSphere CSI Driver does not support Kubernetes v1.22 due to unsupported v1beta1 CRD APIs. Support will be added in a later release, but in the meantime users with the CSIMigrationvSphere feature enabled should not upgrade to Kubernetes v1.22. See #33848.
        • PersistentVolumes are unable to mount to custom vSphere hardened clusters using CSI charts. See #35173.
    • Harvester:
      • Upgrades from Harvester v0.3.0 are not supported.
      • Deploying Fleet to Harvester clusters is not yet supported. Clusters, whether Harvester or non-Harvester, that are imported using the Virtualization Management page will not be listed on the Continuous Delivery page. See #35049.
    • Cluster Tools:
      • Fleet:
        • Multiple fleet-agent pods may be created and deleted during initial downstream agent deployment, rather than just one. This resolves itself quickly, but is unintentional behavior. See #33293.
      • Hardened clusters:
        • Not all cluster tools can currently be installed on a hardened cluster.
      • Rancher Backup:
        • When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.
        • When running a newer version of the rancher-backup app to restore a backup made with an older version of the app, the resourceSet named rancher-resource-set will be restored to an older version that might be different from the one defined in the current running rancher-backup app. The workaround is to edit the rancher-backup app to trigger a reconciliation. See #34495.
        • Because Kubernetes v1.22 drops the apiVersion apiextensions.k8s.io/v1beta1, trying to restore an existing backup file into a v1.22 cluster will fail because the backup file contains CRDs with the apiVersion v1beta1. There are two options to work around this issue: update the default resourceSet to collect the CRDs with the apiVersion v1, or update the default resourceSet and the client to use the new APIs internally. See documentation and #34154.
        • When performing a backup/restore using Helm, the command will fail if Let's Encrypt is used. See #37060.
      • Monitoring:
        • Deploying Monitoring on a Windows cluster with win_prefix_path set requires users to deploy Rancher Wins Upgrader to restart wins on the hosts to start collecting metrics in Prometheus. See #32535.
        • Monitoring fails to upgrade when the CRD is in a failed state. To work around this issue, use Helm to install the rancher-monitoring chart into the cluster directly, rather than using the Rancher UI. In order to set nodeSelector or tolerations on the rancher-monitoring-crd chart, you need to install the rancher-monitoring-crd and rancher-monitoring charts using the Helm command line. Support for this in the Rancher UI will be added soon. See #35744.
      • Logging:
        • Windows nodeAgents are not deleted when performing helm upgrade after disabling Windows logging on a Windows cluster. See #32325.
      • Istio Versions:
        • Istio 1.5 is not supported in air gapped environments. Please note that the Istio project has ended support for Istio 1.5.
        • Istio 1.9 support ended on October 8th, 2021.
        • The Kiali dashboard bundled with 100.0.0+up1.10.2 errors on a page refresh. Instead of refreshing the page when needed, simply access Kiali using the dashboard link again. Everything else works in Kiali as expected, including the graph auto-refresh. See #33739.
        • A failed calling webhook "validation.istio.io" error will occur in air gapped environments if the istiod-istio-system ValidatingWebhookConfiguration exists, and you attempt a fresh install of Istio 1.11.x and higher. To work around this issue, run the command kubectl delete validatingwebhookconfiguration istiod-istio-system and attempt your install again. See #35742.
        • Deprecated resources are not automatically removed and will cause errors during upgrades. Manual steps must be taken to migrate and/or cleanup resources before an upgrade is performed. See #34699.
        • Applications injecting Istio sidecars fail on SELinux-enabled RHEL 8.4 clusters. A temporary workaround for this issue is to run the following command on each cluster node before creating a cluster: mkdir -p /var/run/istio-cni && semanage fcontext -a -t container_file_t /var/run/istio-cni && restorecon -v /var/run/istio-cni. See #33291.
      • Legacy Monitoring:
        • The Grafana instance inside Cluster Manager's Monitoring is not compatible with Kubernetes v1.21. To work around this issue, disable the BoundServiceAccountTokenVolume feature in Kubernetes v1.21 and above. Note that this workaround will be deprecated in Kubernetes v1.22. See #33465.
        • In air gapped setups, the generated rancher-images.txt that is used to mirror images on private registries does not contain the images required to run Legacy Monitoring which is compatible with Kubernetes v1.15 clusters. If you are running Kubernetes v1.15 clusters in an air gapped environment, and you want to either install Legacy Monitoring or upgrade Legacy Monitoring to the latest that is offered by Rancher for Kubernetes v1.15 clusters, you will need to take one of the following actions:
          • Upgrade the Kubernetes version so that you can use v0.2.x of the Monitoring application Helm chart
          • Manually import the necessary images into your private registry for the Monitoring application to use
        • When deploying any downstream cluster, Rancher logs errors that seem to be related to Monitoring even when Monitoring is not installed onto either cluster; specifically, Rancher logs that it failed to subscribe to the Prometheus CRs in the cluster because it is unable to get the resource prometheus.meta.k8s.io. These logs appear in a similar fashion for other Prometheus CRs (namely Alertmanager, ServiceMonitors, and PrometheusRules), but do not seem to cause any other major impact in functionality. See #32978.
        • Legacy Monitoring does not support Kubernetes v1.22 due to the feature-gates flag no longer being supported. See #35574.
        • After performing an upgrade to Rancher v2.6.3 from v2.6.2, the Legacy Monitoring custom metric endpoint stops working. To work around this issue, delete the service that is being targeted by the servicemonitor and allow it to be recreated; this will reload the pods that need to be targeted on a service sync. See #35790.
    • Docker Installations:
      • UI issues may occur due to a longer startup time. Users may receive an error message when launching Docker for the first time (see #28800), and may be directed to the username/password screen when accessing the UI after a Docker install of Rancher (see #28798).
      • On a Docker install upgrade and rollback, Rancher logs will repeatedly display the messages "Updating workload ingress-nginx/nginx-ingress-controller" and "Updating service frontend with public endpoints". Ingresses and clusters are functional and active, and logs resolve eventually. See #35798.
      • Rancher single node won't start on Apple M1 devices with Docker Desktop 4.3.0 or newer. See #35930.
    • Login to Rancher using Active Directory with TLS:
      • Upon an upgrade to v2.6.0, authenticating via Rancher against an Active Directory server using TLS can fail if the certificates on the AD server do not support SAN attributes. This is a check enabled by default in Go v1.15. See #34325.
      • The error received is "Error creating SSL connection: LDAP Result Code 200 "Network Error": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0"
      • To resolve this, the certificates on the AD server should be updated or replaced with new ones that support the SAN attribute. Alternatively, this error can be ignored by setting GODEBUG=x509ignoreCN=0 as an environment variable on the Rancher server container; a sketch follows this list.
    • Rancher UI:
      • In some instances under Users and Authentication, no users are listed and clicking Create to create a new user does not display the entire form. To work around this when encountered, perform a hard refresh to be able to log back in. See Dashboard #5336.
      • Deployment securityContext section is missing when a new workload is created. This prevents pods from starting when Pod Security Policy Support is enabled. See #4815.
    • Legacy UI:
      • When using the Rancher v2.6 UI to add a new port of type ClusterIP to an existing Deployment created using the legacy UI, the new port will not be created upon saving. To work around this issue, repeat the procedure to add the port again. Users will notice the Service Type field will display as Do not create a service. Change this to ClusterIP and upon saving, the new port will be created successfully during this subsequent attempt. See #4280.
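
    As a sketch of the Active Directory TLS workaround noted above (Docker install shown; hostnames are placeholders, and for Helm installs the same variable can typically be set through the chart's extraEnv value):

    # Temporary workaround only: tell Go's certificate verification to ignore the
    # legacy Common Name check until the AD server certificates include SANs
    docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged \
        -e GODEBUG=x509ignoreCN=0 \
        rancher/rancher:v2.6.4
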
    Source code(tar.gz)
    Source code(zip)
    rancher-components.txt(187 bytes)
    rancher-data.json(3.19 MB)
    rancher-images-digests-linux-amd64.txt(53.62 KB)
    rancher-images-digests-linux-arm64.txt(39.82 KB)
    rancher-images-digests-linux-s390x.txt(33.86 KB)
    rancher-images-digests-windows-1809.txt(1.92 KB)
    rancher-images-digests-windows-20H2.txt(1.85 KB)
    rancher-images-digests-windows-ltsc2022.txt(1.92 KB)
    rancher-images-sources.txt(25.75 KB)
    rancher-images.txt(18.53 KB)
    rancher-load-images.ps1(2.65 KB)
    rancher-load-images.sh(3.45 KB)
    rancher-mirror-to-rancher-org.ps1(782 bytes)
    rancher-mirror-to-rancher-org.sh(23.68 KB)
    rancher-namespace.yaml(62 bytes)
    rancher-rke-k8s-versions.txt(118 bytes)
    rancher-save-images.ps1(2.18 KB)
    rancher-save-images.sh(1.31 KB)
    rancher-windows-images-sources.txt(869 bytes)
    rancher-windows-images.txt(566 bytes)
    sha256sum.txt(1.31 KB)