Validation of best practices in your Kubernetes clusters

Overview
Polaris Logo

Best Practices for Kubernetes Workload Configuration

Fairwinds' Polaris keeps your clusters sailing smoothly. It runs a variety of checks to ensure that Kubernetes pods and controllers are configured using best practices, helping you avoid problems in the future.

Polaris can be run in three different modes:

  • As a dashboard, so you can audit what's running inside your cluster.
  • As an admission controller, so you can automatically reject workloads that don't adhere to your organization's policies.
  • As a command-line tool, so you can test local YAML files, e.g. as part of a CI/CD process.

Polaris Architecture

Want to learn more? Reach out on the Slack channel (request invite), send an email to [email protected], or join us for office hours on Zoom.

Documentation

Check out the documentation at docs.fairwinds.com

Integration with Fairwinds Insights

Fairwinds Insights

Fairwinds Insights is a platform for auditing Kubernetes clusters and enforcing policy. If you'd like to:

  • manage Polaris across a fleet of clusters
  • track findings over time
  • send results to services like Slack and Datadog
  • add additional checks from tools like Trivy, Goldilocks, and OPA

you can sign up for a free account here.

Contributing

PRs welcome! Check out the Contributing Guidelines and Code of Conduct for more information.

Further Information

A history of changes to this project can be viewed in the Changelog.

If you'd like to learn more about Polaris, or if you'd like to speak with a Kubernetes expert, you can contact [email protected] or visit our website.


Polaris Dashboard

Comments
  • Design updated configuration schema that will support all of v1

    An updated configuration schema should be designed that supports all of the existing validations along with all validations planned before v1, including:

    • Pull policy always (warning)
    • No host networking
    • No host port
    • No host IPC
    • No host pid
    • Restricting kernel capabilities by default like SYS_ADMIN (warning)
    • No privileged containers (warning)
    • No root user (warning)
    • Should use read only root filesystem (warning)
    • Don't mount /var/run/docker.sock

    As is likely obvious here, this will also need some way to differentiate between errors and warnings.
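
    For reference, a sketch of how per-check severities could be expressed, in the spirit of the configuration format Polaris later adopted (the check names below are illustrative stand-ins for the bullets above, and current versions accept danger/warning/ignore as severities):

    # sketch only - map each check to a severity level
    checks:
      hostNetworkSet: danger             # error-level: fail the validation
      hostPortSet: danger
      runAsRootAllowed: warning          # warning-level: surface but don't fail
      notReadOnlyRootFilesystem: warning
      pullPolicyNotAlways: warning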

    opened by robscott 23
  • Check metadataAndNameMismatched not found

    Installation Process

    Docker

    Polaris Version

    3.2.1

    Expected Behavior

    Expected audit to run

    Actual Behavior

    WARN[0000] An error occurred validating controller:Check metadataAndNameMismatched not found 
    ERRO[0000] Error while running audit on resources: Check metadataAndNameMismatched not found
    

    Steps to Reproduce

    docker run -ti \
      -v "$PWD/pwd" -v ~/github/k8s:/k8s \
      -v ~/.kube/config:/opt/app/config:ro \
      quay.io/fairwinds/polaris:3.2.1 polaris audit \
        --kubeconfig /opt/app/config \
        --audit-path /pwd \
        --config /k8s/polaris-config.yaml \
        -f pretty --only-show-failed-tests
    WARN[0000] An error occurred validating controller:Check metadataAndNameMismatched not found 
    ERRO[0000] Error while running audit on resources: Check metadataAndNameMismatched not found
    

    The config file I am using is copied from here:

    https://raw.githubusercontent.com/FairwindsOps/polaris/master/examples/config.yaml

    question 
    opened by HariSekhon 17
  • pkg/dashboard: setup basePath as a path prefix in routing

    Awesome project! It works for me with a port-forward, but not with -dashboard-path-prefix. After this change I can load the dashboard with a base path (/polaris/ in my case), but I haven't tested building a Docker image for my cluster yet.
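
    For context, a sketch of where such a flag would typically be wired in (the Deployment fragment, image tag, and invocation below are assumptions for illustration; only the flag name comes from this PR):

    # illustrative Deployment fragment, not taken from the Polaris chart
    containers:
      - name: dashboard
        image: quay.io/fairwinds/polaris:latest    # tag illustrative
        args:
          - --dashboard
          - --dashboard-path-prefix=/polaris/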

    ~~Btw, I was unable to sign the CLA at https://cla-assistant.io/fairwinds/polaris (404)~~ With ?pullRequest=201 I could sign the CLA.

    opened by adamdecaf 15
  • Ignore orphaned pods

    Installation Process

    EKS cluster, version 1.15, using helm.

    Command line:

    $ helm upgrade --install polaris fairwinds-stable/polaris --version 1.0.2 --namespace kube-system -f polaris-values.yaml
    

    polaris-values.yaml contents:

    # Override the tag provided in the chart as the unconstrained "1.0"
    # to be the more constrained "1.0.3" (a specific release).
    image:
      tag: "1.0.3"
    
    # Enable the dashboard over HTTP for internal users
    dashboard:
      enable: true
      ingress:
        enabled: true
        annotations:
          kubernetes.io/ingress.class: nginx-ingress-private
        hosts:
          - "polaris.internal.domain"
    

    Polaris Version

    Version 1.0.3 Docker Image

    $ k -n kube-system get deploy polaris-dashboard -o wide
    NAME                READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                            SELECTOR
    polaris-dashboard   1/1     1            1           15d   dashboard    quay.io/fairwinds/polaris:1.0.3   app=polaris,app.kubernetes.io/instance=polaris,app.kubernetes.io/name=polaris,component=dashboard
    
    

    Expected Behavior

    The dashboard should be working.

    Actual Behavior

    Webpage with contents:

    Error fetching Kubernetes resources
    

    Logs from container:

    $ k -n kube-system logs polaris-dashboard-dcb6c8b9c-9bpkw 
    time="2020-06-04T17:36:42Z" level=info msg="Starting Polaris dashboard server on port 8080"
    time="2020-06-04T17:37:04Z" level=error msg="Cache missed ReplicaSet/prod/telbot-gunicorn-7844c9db9d again"
    time="2020-06-04T17:37:04Z" level=error msg="Error loading controllers from pods: Could not retrieve parent object"
    time="2020-06-04T17:37:04Z" level=error msg="Error fetching Kubernetes resources Could not retrieve parent object"
    

    CURL output of same request:

    $ curl -v -L 'http://polaris.internal.domain/'
    * About to connect() to polaris.internal.domain port 80 (#0)
    *   Trying 10.100.25.113...
    * Connected to polaris.internal.domain (10.100.25.113) port 80 (#0)
    > GET / HTTP/1.1
    > User-Agent: curl/7.29.0
    > Host: polaris.internal.domain
    > Accept: */*
    > 
    < HTTP/1.1 500 Internal Server Error
    < Date: Thu, 04 Jun 2020 17:44:50 GMT
    < Content-Type: text/plain; charset=utf-8
    < Content-Length: 36
    < Connection: keep-alive
    < Server: nginx/1.17.8
    < X-Content-Type-Options: nosniff
    < 
    Error fetching Kubernetes resources
    * Connection #0 to host polaris.internal.domain left intact
    

    Steps to Reproduce

    1. Install with helm.
    2. Open dashboard web interface.
    3. Get Error fetching Kubernetes resources error message

    Additional information

    I'm not sure what further information to provide to help diagnose this issue.

    opened by irasnyd 12
  • Show cluster name/host on dashboard

    Addresses https://github.com/reactiveops/polaris/issues/124

    Uses whatever the user specifies as --cluster-name, falling back to the host named in kubeconfig. Doesn't look like we can get the name field: https://github.com/kubernetes/client-go/issues/530

    Any potential issues with surfacing the host in the dashboard? E.g. could it have basic auth creds?

    Here's what it looks like:

    Screen Shot 2019-06-05 at 4 53 43 PM
    opened by rbren 10
  • static content is not loaded when using nginx-ingress with custom base path

    Hi,

    I installed Polaris 1.2 by applying the YAML file. I can access the page, but some contents are not loaded.

    In a post, I saw this parameter: --dashboard-base-path="/polaris/", but I did not see how to use it.

    This is my ingress configuration:

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/rewrite-target: "/$1"
        nginx.ingress.kubernetes.io/configuration-snippet: |
          rewrite ^(/polaris)$ $1/ permanent;
      name: polaris
      namespace: polaris
    spec:
      rules:
        - host: hostname
          http:
            paths:
              - path: /polaris/?(.*)
                backend:
                  serviceName: polaris-dashboard
                  servicePort: 80

    Thanks for your help

    stale 
    opened by osaffer 9
  • Issue with runAsNonRoot

    Not entirely sure that your check for runAsNonRoot is working, or whether we misunderstand exactly what it's checking. We have a pod running which is set at the pod level as below, yet Polaris is still saying it shouldn't run as root... which it isn't. Since the securityContext at the container level is only for overriding what is set at the pod level, I hope that setting it at the pod level is enough. Any ideas?

    securityContext:
      runAsNonRoot: true
      runAsUser: 5000
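
    For context, a pod-level securityContext like the one above applies to every container in the pod unless a container sets its own securityContext; a minimal illustrative manifest:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example                    # illustrative, not the reporter's pod
    spec:
      securityContext:                 # pod-level: inherited by all containers
        runAsNonRoot: true
        runAsUser: 5000
      containers:
        - name: app
          image: example/app:latest
          # no container-level securityContext, so the pod-level values apply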

    bug 
    opened by jclarksnps 8
  • 🛠  Add GitHub Action

    I added a GitHub Action that adds polaris as an executable to the GitHub Actions runners.

    The script downloads the specified polaris version (by tag) and links it to the runners' path.

    It is rather verbose and written in a startup-like code manner, but it should be good enough for a first version. We already use my action in our CI now (https://github.com/mambax/setup-polaris - a clone before contribution) and it works as planned 🤝
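
    For illustration, a workflow using a setup action of this kind might look roughly like the following (the action reference, tag, and paths are assumptions, not taken from this PR):

    # sketch of a CI job that installs polaris and audits local manifests
    name: polaris-audit
    on: pull_request
    jobs:
      audit:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          # hypothetical reference; the published action name and tag may differ
          - uses: mambax/setup-polaris@v1
          - run: polaris audit --audit-path ./deploy -f pretty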

    Some resources: https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#runs-for-docker-actions https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstepsuses

    Note: Along the way, I also opened https://github.com/github/docs/pull/3468 against the GitHub Docs, because I nearly went crazy when my Dockerfile_Action did not work (just as a hint for later).

    Btw: once this is merged on your side and you're happy with it, you can also list it in the GitHub Marketplace.

    opened by mambax 7
  • Polaris Dashboard and polaris cli output discrepancies

    Installation Process

    Polaris dashboard was installed with helm on a kubernetes cluster. Chart version : "1.1.0"

    Polaris Version

    Polaris version:1.1.1
    

    Expected Behavior

    After setting up the current exemptions in the polaris configmap, through helm and a value file:

        - controllerNames:
          - prometheus-prometheus-operator-prometheus
          rules:
          - readinessProbeMissing
          - livenessProbeMissing
        - controllerNames:
          - prometheus-prometheus-operator-prometheus
          rules:
          - notReadOnlyRootFilesystem
    
    

    I can correctly see my polaris dashboard listing the component as green (screenshot attached).

    I expect that running polaris audit -c config (where config contains a list of the mentioned exemptions) I would get the same green status on all components.

    Actual Behavior

    However, when running polaris audit -c config (where config contains a list of the mentioned exemptions) I get:

      {
         "Name": "prometheus-operator-prometheus",
         "Namespace": "monitoring",
         "Kind": "Prometheus",
         "Results": {},
    ...
           "ContainerResults": [
             {
               "Name": "prometheus",
               "Results": {
    ...
                 "notReadOnlyRootFilesystem": {
                   "ID": "notReadOnlyRootFilesystem",
                   "Message": "Filesystem should be read only",
                   "Success": false,
                   "Severity": "warning",
                   "Category": "Security"
                 },
    ....
             {
               "Name": "prometheus-config-reloader",
               "Results": {
    ...
                 "livenessProbeMissing": {
                   "ID": "livenessProbeMissing",
                   "Message": "Liveness probe should be configured",
                   "Success": false,
                   "Severity": "warning",
                   "Category": "Health Checks"
                 },
    

    Namely, the Polaris dashboard lists the component correctly with the name it has on my cluster: prometheus-prometheus-operator-prometheus

    The audit, for some reason, lists it as prometheus-operator-prometheus. I think it might be due to the length of the controller name or a difference between the chart and the CLI versions. Could you confirm?
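
    For reference, in a standalone config file passed to polaris audit -c, exemptions like the ones above would sit under a top-level exemptions: key (a sketch based on the format shown elsewhere on this page):

    exemptions:
      - controllerNames:
          - prometheus-prometheus-operator-prometheus
        rules:
          - readinessProbeMissing
          - livenessProbeMissing
          - notReadOnlyRootFilesystem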

    opened by blame19 7
  • Linux binary doesn't handle top level commands as documentation

    Installation Process

    Downloaded and untar'd the tar and placed binary in path. Mint Linux 19.3

    Polaris Version

    Polaris version 0.6.0
    

    Expected Behavior

    The documentation (Usage) describes running top-level commands without dashes, like this:

    polaris version
    

    Actual Behavior

    On Linux this just runs audit:

    polaris version
    # => Full audit output
    

    To run version (or help/dashboard etc...) you need to add dashes:

    polaris --version
    # =>Polaris version 0.6.0
    

    I am not sure how it acts on Mac, but I think it may be a build config missing in the Linux binary?

    opened by hlascelles 7
  • Please provide additional checks like Podrestartpolicy or maxretries

    Polaris needs some additional validation checks, such as the Pod's restart policy or max retries, so that it can monitor the use of these parameters in Kubernetes deployments.

    Reason to add: containers get restarted constantly whenever they fail. This might be because of system issues or other factors, and a pod can keep failing no matter how many times it retries. If we don't set a maximum number of retries on our pod/deployment, the pod will retry forever without succeeding, and those retries cost a lot of time and resources.
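
    For illustration, the fields such checks would presumably inspect are restartPolicy on the pod spec and, for Jobs, backoffLimit, which is the closest Kubernetes equivalent of a max-retries setting (the manifest below is illustrative):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: example-job                # illustrative
    spec:
      backoffLimit: 4                  # stop retrying after 4 failed attempts
      template:
        spec:
          restartPolicy: OnFailure     # a Job's pods must use Never or OnFailure
          containers:
            - name: worker
              image: example/worker:latest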

    enhancement priority: could stale 
    opened by sagarprusty 7
  • yaml scan

    Currently the YAML scan only supports multi-document files separated by ---; it does not support manifests in the kind: List format. Is there a follow-up plan to support this?
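
    For reference, the two layouts in question, both illustrative: multi-document YAML separated by ---, which the scan already handles, and a single kind: List manifest wrapping the same objects:

    # supported today: multi-document YAML
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: first
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: second

    # not yet supported: the same objects wrapped in a List
    apiVersion: v1
    kind: List
    items:
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: first
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: second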

    enhancement triage 
    opened by fengshunli 0
  • Display Polaris check context in details in Insights

    Is your feature request related to a problem? Please describe.

    When looking at the details of a triggered Polaris check in Insights, it'd be nice to have some more context about why it's being triggered.

    Describe the solution you'd like

    The following additional details would be handy:

    • Currently only labels are shown, it'd be nice to see annotations as well
    • A code block of the check code for the triggered item for reference (maybe collapse-able?)
    • A code block of the entire manifest YAML for reference (maybe collapse-able?)
    enhancement triage 
    opened by bkrein-vertex 0
  • Ability to create a single exception for multiple namespaces

    Is your feature request related to a problem? Please describe.

    Currently it's possible to create a single exception that applies to multiple controllerNames across all namespaces, but there's no way (that I see from the docs) to create a single exception that applies to a list of multiple namespaces.

    Describe the solution you'd like

    I'd like to be able to create an exemption like this:

    exemptions:
      - namespaces:
          - namespace1
          - namespace2
          - namespace3
        rules:
          - rule1
          - rule2
    

    Describe alternatives you've considered

    Currently I have to either identify all the controllers within each namespace I want to exclude and list them individually, or create a separate exception for each namespace. Neither scales particularly well.

    exemptions:
      - namespace:
          - namespace1
        rules:
          - rule1
    
      - namespace:
          - namespace2
        rules:
          - rule1
    
      - namespace:
          - namespace3
        rules:
          - rule1
    
    enhancement triage 
    opened by bkrein-vertex 0
  • Cannot upgrade helm chart after webhook.mutate is set to true

    What happened?

    After setting webhook.mutate to true, as described in https://polaris.docs.fairwinds.com/admission-controller/#installation, if I try to do a helm rollback OR simply run helm upgrade using the same settings as before (i.e., make any revision to the current release), this error appears:

    Error: UPGRADE FAILED: cannot patch "polaris-webhook" with kind Deployment: admission webhook "polaris.fairwinds.com" denied the request: invalid JSON Document

    and simply restarting my minikube (using minikube start) deletes all my namespaces and restarts minikube with the default configuration, throwing this message before doing so:

    error execution phase addon/coredns: unable to create deployment: admission webhook "polaris.fairwinds.com" denied the request: invalid JSON Document To see the stack trace of this error execute with --v=5 or higher

    What did you expect to happen?

    The release should move to a new revision using the newly supplied values, instead of my minikube being deleted entirely on a simple minikube stop and minikube start.

    How can we reproduce this?

    1. Installing cert-manager with helm
    2. Installing polaris with webhook.enable and webhook.mutate set to true. For example: helm upgrade --install polaris [location] --create-namespace=true -f values2.yaml --set dashboard.enable=false (the webhook values were set inside values2.yaml in my case, although this can also be done using the --set flag); see the values sketch after this list.
    3. re-running 2 without changing the values file OR making any arbitrary change in values file and re-running 2
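
    For reference, a minimal values sketch matching the settings described above (key names as used in this issue; the chart's full schema may differ):

    # values2.yaml (sketch) - dashboard.enable=false was passed via --set in the command above
    webhook:
      enable: true
      mutate: true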

    Version

    polaris helm chart. current version.

    Search

    • [X] I did search for other open and closed issues before opening this.

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct

    Additional context

    No response

    bug triage 
    opened by arishtj 0
  • Add a check for topologySpreadConstraint

    This PR fixes #547

    Checklist

    • [x] I have signed the CLA
    • [x] I have updated/added any relevant documentation

    Description

    What's the goal of this PR?

    See issue #547 - Add a check for pod topologySpreadConstraints - recommending that users set these to ensure high availability across zones and/or hosts
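
    For context, the kind of constraint the new check would look for on a pod spec (an illustrative fragment, not taken from this PR):

    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone    # spread across zones
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: example                            # illustrative selector
      containers:
        - name: app
          image: example/app:latest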

    What changes did you make?

    Add topologySpreadConstraint check

    What alternative solution should we consider, if any?

    There may be more parts of the spec that we want to recommend, and possibly only limit to one topologyKey.

    Also, possibly splitting this into two checks - one that ensures a topologySpreadConstraint exists, and another to verify its configuration

    opened by sudermanjr 2
  • Namespaced audit fails finding resources

    What happened?

    polaris audit on a specific namespace failed

    What did you expect to happen?

    the audit to succeed

    How can we reproduce this?

    home/polaris-demo   ☸ kind-kind
    ✗ polaris audit --format pretty --namespace demo
    INFO[0000] Loading nodes
    INFO[0000] Loading namespaces
    INFO[0000] Loading pods
    INFO[0000] Setting up restmapper
    INFO[0000] Loading rbac.authorization.k8s.io/ClusterRole
    WARN[0000] Error retrieving parent object API v1 and Kind clusterroles because of error: the server could not find the requested resource
    ERRO[0000] Error fetching Kubernetes resources the server could not find the requested resource
    

    Version

    Polaris version:7.2.0

    Search

    • [X] I did search for other open and closed issues before opening this.

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct

    Additional context

    No response

    bug triage 
    opened by sudermanjr 0
Releases(7.2.0)
Owner
Fairwinds
The Kubernetes Enablement Company