A Kubernetes controller managing the life cycle of namespaces.
This controller watches the cluster's namespaces and "suspends" them at a given time by scaling some of the resources within those namespaces down to 0. However, once a namespace is in a "suspended" state, it will not be restarted automatically the following day. This allows namespaces to be "reactivated" only when required, which reduces costs.
This controller can be split into 2 parts:
- The watcher
- The suspender
The watcher function is responsible for checking all the namespaces every X seconds (X being set by the flag
-watcher-idle or by the
KUBE_NS_SUSPENDER_WATCHER_IDLE environment variable). When it finds a namespace that has the
kube-ns-suspender/desiredState annotation, it sends it to the suspender. It also manages all the metrics that are exposed about the states of the watched namespaces.
The suspender function does all the work of reading namespaces/resources annotations, and (un)suspending them when required.
Currently supported resources are:
- Deployments
- StatefulSets
- CronJobs
Namespaces watched by
kube-ns-suspender can be in 3 different states:
- Running: the namespace is "up", and all the resources have the desired number of replicas.
- Suspended: the namespace is "paused", and all the supported resources are scaled down to 0 or suspended.
- Running Forced: the namespace has been suspended, and then reactivated manually. It will be "running" for a pre-defined duration then will go back to the "suspended" state.
Annotations are employed to save the original state of a resource.
In order for a namespace to be watched by the controller, it needs to have the
kube-ns-suspender/desiredState annotation set to one of the states described above.
To be suspended at a given time, a namespace must have the annotation
kube-ns-suspender/suspendAt set to a valid value. Valid values are any values that match the
time.Kitchen time format, for example: 8:15PM.
Deployments and StatefulSets
As those resources have a
spec.replicas value, they must have a
kube-ns-suspender/originalReplicas annotation equal to the
spec.replicas value. This annotation is used to restore the original number of replicas when a resource is "unsuspended".
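The save/restore round trip can be sketched as two small helpers. Only the annotation key comes from the text above; the function names and the plain map in place of real Kubernetes objects are assumptions for illustration.

```go
package main

import (
	"fmt"
	"strconv"
)

const originalReplicasAnnotation = "kube-ns-suspender/originalReplicas"

// suspend records the current replica count in the annotation map and
// returns the new desired count (always 0 while suspended).
func suspend(annotations map[string]string, replicas int32) int32 {
	annotations[originalReplicasAnnotation] = strconv.Itoa(int(replicas))
	return 0
}

// unsuspend reads the saved count back from the annotation; a real
// controller would surface the parsing error to its caller.
func unsuspend(annotations map[string]string) (int32, error) {
	n, err := strconv.Atoi(annotations[originalReplicasAnnotation])
	return int32(n), err
}

func main() {
	annotations := map[string]string{}
	scaled := suspend(annotations, 3)
	restored, _ := unsuspend(annotations)
	fmt.Println(scaled, restored) // 0 3
}
```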
CronJobs have a
spec.suspend value that indicates whether or not they must run. As this value is a boolean, no other annotations are required.
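Since the boolean itself encodes the previous state, (un)suspending a CronJob reduces to flipping that flag. A minimal sketch, using a stand-in struct for the real batch/v1 CronJob spec (where suspend is a *bool):

```go
package main

import "fmt"

// cronJobSpec mimics the relevant field of a Kubernetes CronJob spec;
// in the real API, Suspend is a *bool on the batch/v1 CronJobSpec.
type cronJobSpec struct {
	Suspend *bool
}

// setSuspended flips the suspend flag; because the boolean itself
// encodes the state, no originalReplicas-style annotation is needed.
func setSuspended(spec *cronJobSpec, suspended bool) {
	spec.Suspend = &suspended
}

func main() {
	spec := &cronJobSpec{}
	setSuspended(spec, true)
	fmt.Println(*spec.Suspend) // true
	setSuspended(spec, false)
	fmt.Println(*spec.Suspend) // false
}
```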