A controller that suspends namespaces by managing their Deployment, StatefulSet, and CronJob objects. Inspired by kube-downscaler.

Overview

kube-ns-suspender

A Kubernetes controller that manages namespace life cycles.

Goal

This controller watches the cluster's namespaces and "suspends" them by scaling some of the resources within those namespaces down to 0 at a given time. However, once a namespace is in a "suspended" state, it is not restarted automatically the following day (or at any other time). This allows namespaces to be "reactivated" only when required, which reduces costs.

Usage

Internals

This controller can be split into 2 parts:

  • The watcher
  • The suspender

The watcher

The watcher function is in charge of checking all the namespaces every X seconds (X being set by the -watcher-idle flag or by the KUBE_NS_SUSPENDER_WATCHER_IDLE environment variable). When it finds a namespace that has the kube-ns-suspender/desiredState annotation, it sends it to the suspender. It also manages all the metrics exposed about the states of the watched namespaces.
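As a sketch, this environment variable could be set on the controller's Deployment like the snippet below; the 30s value and the Go-style duration format are assumptions for illustration, not taken from the project's manifests:

    env:
    - name: "KUBE_NS_SUSPENDER_WATCHER_IDLE"
      value: "30s"   # assumed Go-style duration; adjust to the format the flag actually expects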

The suspender

The suspender function does all the work of reading namespace and resource annotations, and suspending or unsuspending them when required.

Flags

/* explain the different flags, the associated env vars... */

Resources

Currently supported resources are:

  • Deployments
  • StatefulSets
  • CronJobs

States

Namespaces watched by kube-ns-suspender can be in 3 different states:

  • Running: the namespace is "up", and all the resources have the desired number of replicas.
  • Suspended: the namespace is "paused", and all the supported resources are scaled down to 0 or suspended.
  • Running Forced: the namespace has been suspended, then reactivated manually. It will be "running" for a pre-defined duration, then go back to the "suspended" state.

Annotations

Annotations are used to save the original state of a resource.

On namespaces

In order for a namespace to be watched by the controller, it must have the kube-ns-suspender/desiredState annotation set to one of the supported values, which are:

  • Running
  • RunningForced
  • Suspended

To be suspended at a given time, a namespace must have the kube-ns-suspender/suspendAt annotation set to a valid value. Valid values are any times matching Go's time.Kitchen format, for example: 8:15PM, 12:45AM...
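As an illustrative sketch (the namespace name is made up; the annotation keys and values follow the description above), a namespace that should be suspended every day at 8:15 PM could look like this:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: my-app                               # hypothetical namespace name
      annotations:
        kube-ns-suspender/desiredState: Running
        kube-ns-suspender/suspendAt: "8:15PM"    # time.Kitchen format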

On resources

Deployments and StatefulSets

As those resources have a spec.replicas value, they must have a kube-ns-suspender/originalReplicas annotation that matches the spec.replicas value. This annotation is used when a resource is "unsuspended" to restore the original number of replicas.
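For instance, a Deployment running 3 replicas would carry a matching annotation. This is a minimal sketch (resource name, labels, and image are placeholders), showing the relevant fields plus what is needed for a valid manifest:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                                    # hypothetical name
      annotations:
        kube-ns-suspender/originalReplicas: "3"       # must match spec.replicas
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: nginx:1.25                         # placeholder image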

CronJobs

CronJobs have a spec.suspend field that indicates whether or not they should run. As this value is a boolean, no additional annotation is required.
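A sketch of what a suspended CronJob would look like (name, schedule, and image are placeholders; the point is that the controller is expected to toggle spec.suspend rather than rely on an extra annotation):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: hello                    # hypothetical name
    spec:
      schedule: "*/10 * * * *"
      suspend: true                  # true while the namespace is suspended, false otherwise
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: hello
                image: busybox       # placeholder image
                command: ["echo", "hello"]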

Contributing

/* add CONTRIBUTING file at root */

License

MIT

Comments
  • [Feature]: UI Button to suspend namespace

    Right now you can manually suspend a namespace on demand by changing the annotation. It would be nice to have an option to do this from the UI, as it is easy to see which namespaces are suspended and which are not.

    enhancement 
    opened by krzwiatrzyk 12
  • [Bug]: No namespaces are detected in new version

    Version

    v2.1.0

    What happened?

    In the UI, no namespaces are detected.

    Configured variables:

            env:
            - name: "KUBE_NS_SUSPENDER_UI_EMBEDDED"
              value: "true"
            - name: "KUBE_NS_SUSPENDER_CONTROLLER_NAME"
              value: "kube-ns-suspender"
    
    ❯ kubectl get ns --show-labels
    NAME                STATUS   AGE     LABELS
    kube-system         Active   77m     kubernetes.io/metadata.name=kube-system
    default             Active   77m     kubernetes.io/metadata.name=default
    kube-public         Active   77m     kubernetes.io/metadata.name=kube-public
    kube-node-lease     Active   77m     kubernetes.io/metadata.name=kube-node-lease
    kube-ns-suspender   Active   74m     kube-ns-suspender/controllerName=kube-ns-suspender,kubernetes.io/metadata.name=kube-ns-suspender
    test                Active   5m42s   kube-ns-suspender/controllerName=kube-ns-suspender,kubernetes.io/metadata.name=test

    Relevant log output

    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"engine successfully created in 8.479µs"}
    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"kube-ns-suspender version 'ghcr.io/govirtuo/kube-ns-suspender:v2.1.0' (built 2022-06-13_12:28:13TUTC)"}
    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"web UI successfully created"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"timezone: Europe/Paris"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"watcher idle: 15s"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"running duration: 4h0m0s"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"log level: debug"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"json logging: true"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"controller name: kube-ns-suspender"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"annotations prefix: kube-ns-suspender/"}
    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"metrics server successfully created in 74.014µs"}
    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"in-cluster configuration successfully created in 113.159µs"}
    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"clientset successfully created in 1.209969ms"}
    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"starting 'Watcher' and 'Suspender' routines"}
    {"level":"info","routine":"suspender","time":"2022-06-14T11:54:55+02:00","message":"suspender started"}
    {"level":"info","routine":"watcher","time":"2022-06-14T11:54:55+02:00","message":"watcher started"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"starting new namespaces inventory"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"parsing namespaces list"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"Metric - channel length: 0"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"Metric - running namespaces: 0"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"Metric - suspended namespaces: 0"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"Metric - unknown namespaces: 0"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"namespaces inventory ended"}
    
    
    
    Anything else?

    No response
    opened by krzwiatrzyk 5
  • Features for v0.1.0

    • [x] Suspend / Unsuspend "deployments" based on NS annotations (MVP)
    • [x] Implement namespace autostop (scheduled)
    • [x] Support CronJobs
    • [x] Support StatefulSets
    • [x] Code and comments refactoring
    • [x] Add GitHub Actions to release the binary
    • [x] Review log levels
    enhancement 
    opened by eze-kiel 4
  • [Bug]: Ingress not working

    Version

    v2.0.11

    What happened?

    Using the standard kustomization.yaml manifest from `base/run`

    My Ingress:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: kube-ns-suspender-webui
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
      - host: kube-ns-suspend.k3s.home
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kube-ns-suspender-webui
                port:
                  number: 8080
    

    Port-forward is also broken, maybe the current release is broken?

    Relevant log output

    19:54:13 [error] 1409#1409: *57135486 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.7.71, server: kube-ns-suspend.k3s.home, request: "GET /favicon.ico
    
    [email protected] k8s-at-home % kubectl -n kube-ns-suspender port-forward svc/kube-ns-suspender-webui 8080
    Forwarding from 127.0.0.1:8080 -> 8080
    Forwarding from [::1]:8080 -> 8080
    Handling connection for 8080
    E0603 21:53:22.207123    6735 portforward.go:406] an error occurred forwarding 8080 -> 8080: error forwarding port 8080 to pod df5b06140a49690901253dfb0f0bd8da95121bdd0efbf5eaed06b7ff5897b552, uid : failed to execute portforward in network namespace "/var/run/netns/cni-110e7dcd-eefb-0043-6d6e-66300695a2be": failed to connect to localhost:8080 inside namespace "df5b06140a49690901253dfb0f0bd8da95121bdd0efbf5eaed06b7ff5897b552", IPv4: dial tcp4 127.0.0.1:8080: connect: connection refused IPv6 dial tcp6 [::1]:8080: connect: connection refused 
    E0603 21:53:22.207693    6735 portforward.go:234] lost connection to pod
    Handling connection for 8080
    E0603 21:53:22.208050    6735 portforward.go:346] error creating error stream for port 8080 -> 8080: EOF
    
    
    
    Anything else?

    No response
    bug 
    opened by Brice187 3
  • [Bug]: `nextSuspendTime` seems to be drifting when no `dailySuspendTime`

    Current Behavior:

    When the annotation dailySuspendTime is not present, it seems that nextSuspendTime is drifting, and in the end the namespace will never be suspended.

    Expected Behavior:

    nextSuspendTime alone should be sufficient to suspend a namespace.

    Steps To Reproduce:

    1. Create a namespace that is watched by kube-ns-suspender without the annotation dailySuspendTime
    2. Unsuspend it to have the annotation nextSuspendTime added
    3. Wait until the nextSuspendTime is reached
    4. Enjoy

    Anything else:

    n/a

    bug 
    opened by eze-kiel 2
  • [Feature]: Installation guide

    Is your feature request related to a problem?

    Hi, I have tried to install kube-ns-suspender on my K3d cluster to test it out, however it is very difficult to figure out how to do it. Additionally, I want to install kube-ns-suspender without cloning the repo and using Kustomize, and I think that is not possible yet?

    Describe the solution you'd like

    A clear and concise description of how to install kube-ns-suspender on the cluster

    documentation 
    opened by krzwiatrzyk 1
  • refactor: logs

    • chore: Rename watcher logger variable
    • chore: Minor logs update on 'main.go'
    • chore: Minor logs update on 'engine/watcher.go'
    • chore: Remove un-needed 'namespace' reference from logger in 'engine/suspender.go'
    • refactor: Add tons of logs on 'engine/suspender.go'
    • chore: Add 'inventory_id' on every log statement on 'engine/watcher.go'
    opened by xakraz 1
  • feat: v2

    Features updates

    nextSuspendTime

    • [x] Rename auto_nextSuspendTime -> nextSuspendTime
    • [x] Use nextSuspendTime as a reference, drop the in-mem k/v store
    • [x] Support editable nextSuspendTime annotation (advanced users can define the nextSuspendTime value)

    dailySuspendTime

    • [x] Suspend namespace resources at dailySuspendTime, even if a user "unsuspended" the namespace after dailySuspendTime
    • [x] (optional) Make dailySuspendTime optional (to allow advanced users to remove the restriction)
    opened by xakraz 1
  • [Bug]: dry run flag is not working

    Current Behavior:

    Even when using the -dry-run flag, the objects are downscaled.

    Expected Behavior:

    Do not downscale the objects

    Steps To Reproduce:

    Use -dry-run flag and watch the objects.

    bug 
    opened by eze-kiel 1
  • feat: refactored code to avoid argoCD self heal issues

    ArgoCD's self-heal feature detected that the original manifests had changed, so the original annotations are no longer edited.

    This PR also closes issues #4 and #11

    opened by eze-kiel 1
  • [Bug]: wrong timezone is used when suspending a namespace from "RunningForced" state

    Current Behavior:

    Currently, the lifespan of a namespace in the "RunningForced" state is hardcoded (4 hours, see #4). But after a namespace had been running in the "RunningForced" state for more than 4 hours, it did not scale down.

    Expected Behavior:

    Being suspended after 4 hours as planned.

    Steps to reproduce

    Deploy version v0.7.1 and annotate a namespace as "RunningForced". Note the time and wait for 4 hours.

    bug 
    opened by eze-kiel 1
  • Implement tests according to specs

    While working on v2, we defined test scenarios: https://github.com/govirtuo/kube-ns-suspender/blob/main/docs/misc/1-v2-testsScenario.md

    We need to implement the tests accordingly 😄

    chore 
    opened by xakraz 0
  • Create Grafana dashboards

    As kube-ns-suspender embeds a Prometheus exporter, it could be interesting to add ready-made dashboards to the project, for example under dashboards/.

    enhancement 
    opened by eze-kiel 0