
Overview

Flux version 2


Flux is a tool for keeping Kubernetes clusters in sync with sources of configuration (like Git repositories), and automating updates to configuration when there is new code to deploy.

Flux version 2 ("v2") is built from the ground up to use Kubernetes' API extension system, and to integrate with Prometheus and other core components of the Kubernetes ecosystem. In version 2, Flux supports multi-tenancy and support for syncing an arbitrary number of Git repositories, among other long-requested features.

Flux v2 is constructed with the GitOps Toolkit, a set of composable APIs and specialized tools for building Continuous Delivery on top of Kubernetes.

Flux installation

With Homebrew for macOS and Linux:

brew install fluxcd/tap/flux

With GoFish for Windows, macOS and Linux:

gofish install flux

With Bash for macOS and Linux:

curl -s https://fluxcd.io/install.sh | sudo bash

# enable completions in ~/.bash_profile
. <(flux completion bash)

Arch Linux (AUR) packages:

  • flux-bin: install the latest stable version using a pre-built binary (recommended)
  • flux-go: build the latest stable version from source code
  • flux-scm: build the latest (unstable) version from source code from our git main branch

Binaries for macOS AMD64/ARM64, Linux AMD64/ARM/ARM64 and Windows are available to download on the release page.

A multi-arch container image with kubectl and flux is available on Docker Hub and GitHub:

  • docker.io/fluxcd/flux-cli:
  • ghcr.io/fluxcd/flux-cli:

Verify that your cluster satisfies the prerequisites with:

flux check --pre

Get started

To get started with Flux, browse the documentation or follow one of the getting started guides.

If you need help, please refer to our Support page.
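
A typical first run bootstraps Flux against a personal GitHub repository. The command below is a minimal sketch with placeholder values (the owner, repository and path are assumptions to replace with your own); see the getting started guide for the full set of options:

export GITHUB_TOKEN=<your-token>

flux bootstrap github \
  --owner=<your-github-user> \
  --repository=<config-repo> \
  --branch=main \
  --path=./clusters/my-cluster \
  --personal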

GitOps Toolkit

The GitOps Toolkit is the set of APIs and controllers that make up the runtime for Flux v2. The APIs comprise Kubernetes custom resources, which can be created and updated by a cluster user, or by other automation tooling.


You can use the toolkit to extend Flux, or to build your own systems for continuous delivery -- see the developer guides.
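
As an illustration, here is a minimal sketch of two such custom resources: a GitRepository that points source-controller at a repository, and a Kustomization that tells kustomize-controller to apply a path from it (the names, intervals and the podinfo URL are example assumptions):

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/stefanprodan/podinfo
  ref:
    branch: master
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 5m
  path: ./kustomize
  prune: true
  sourceRef:
    kind: GitRepository
    name: podinfo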

Components

The GitOps Toolkit runtime is made up of these controllers:

  • source-controller
  • kustomize-controller
  • helm-controller
  • notification-controller
  • image-reflector-controller
  • image-automation-controller

Community

Need help or want to contribute? Please see the links below. The Flux project is always looking for new contributors and there are a multitude of ways to get involved.

Events

Check out our events calendar for upcoming talks, events and meetings you can attend, or view the resources section for videos of past events.

We look forward to seeing you there!

Comments
  • AKS: Azure network policy addon blocks source-controller ingress

    AKS: Azure network policy addon blocks source-controller ingress

    Dears,

    I'm trying to bootstrap flux2 in a new Azure AKS cluster without any network policies defined. After all CRDs are installed and the GitHub account is created, the bootstrap finishes with a timeout (time exceeded). The four controller pods are up and running.

    • The cluster is synchronized with the last commit correctly as below:

    # kubectl get gitrepositories.source.toolkit.fluxcd.io -A
    NAMESPACE     NAME          URL                                READY   STATUS                                                            AGE
    flux-system   flux-system   https://github.com/name/tanyflux   True    Fetched revision: main/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a   59m

    • But the Kustomization has the error below:

      # kubectl get kustomizations.kustomize.toolkit.fluxcd.io -A
      failed to download artifact from http://source-controller.flux-system.svc.cluster.local./gitrepository/flux-system/flux-system/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a.tar.gz, error: Get "http://source-controller.flux-system.svc.cluster.local./gitrepository/flux-system/flux-system/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a.tar.gz": dial tcp 10.0.165.86:80: i/o timeout

    • The same error appears in the kustomize-controller pod logs:

      # kubectl logs kustomize-controller-7f5455cd78-wwxhk -n flux-system
      {"level":"error","ts":"2021-01-14T09:04:52.524Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"flux-system","namespace":"flux-system","error":"failed to download artifact from http://source-controller.flux-system.svc.cluster.local./gitrepository/flux-system/flux-system/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a.tar.gz, error: Get \"http://source-controller.flux-system.svc.cluster.local./gitrepository/flux-system/flux-system/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a.tar.gz\": dial tcp 10.0.165.86:80: i/o timeout"}

    Thanks for any helpful advice.
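
    A possible direction, not from the original report: the errors above show kustomize-controller timing out while fetching the artifact from source-controller over in-cluster HTTP, which is consistent with the Azure network policy addon enforcing a default deny. A minimal NetworkPolicy sketch that re-allows that in-namespace traffic, assuming the default app: source-controller pod label from the Flux manifests:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-source-controller-ingress
      namespace: flux-system
    spec:
      podSelector:
        matchLabels:
          app: source-controller
      ingress:
        - from:
            - podSelector: {}   # any pod in the flux-system namespace, e.g. kustomize-controller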

    bug blocked-upstream 
    opened by mazen-bassiouny 47
  • Bootstrap fails the first time

    Bootstrap fails the first time

    Describe the bug

    When running bootstrap on a github repository it seems to always fail the first time with:

    installing components in "flux-system" namespace
    Kustomization/flux-system/flux-system dry-run failed, error: no matches for kind "Kustomization" in version "kustomize.toolkit.fluxcd.io/v1beta2"
    

    After running the exact same bootstrap command again it works as expected. The bootstrap command is flux bootstrap github --owner=*** --repository=*** --path=some/repo/path --personal

    Any ideas what this might be about?

    Steps to reproduce

    N/A

    Expected behavior

    N/A

    Screenshots and recordings

    No response

    OS / Distro

    Windows 10

    Flux version

    0.25.3

    Flux check

    N/A

    Git provider

    github

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by mibollma 30
  • Two PVC-s bound to the same PV

    Two PVC-s bound to the same PV

    Describe the bug

    Hello team,

    The reconciliation process creates the new pod before deleting the old one. If the pod has a PVC in its volumes section, that ordering creates a double claim to the same PV. IMO the order of operations should be (see the sketch after this list):

    1. remove old pod
    2. create new one
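
    For illustration (not part of the original report): in standard Kubernetes, this ordering is controlled by the Deployment update strategy, and strategy.type: Recreate deletes the old pod before the new one is created, which avoids the double claim on a ReadWriteOnce PV. A minimal sketch, with the image, names and labels assumed from the podinfo guide:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: podinfo
      namespace: podinfo-image-updater
    spec:
      replicas: 1
      strategy:
        type: Recreate          # terminate the old pod before creating the replacement
      selector:
        matchLabels:
          app: podinfo
      template:
        metadata:
          labels:
            app: podinfo
        spec:
          containers:
            - name: podinfo
              image: ghcr.io/stefanprodan/podinfo:6.0.0
              volumeMounts:
                - name: empty-dir-vol
                  mountPath: /data
          volumes:
            - name: empty-dir-vol
              persistentVolumeClaim:
                claimName: empty-dir-pvc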

    Steps to reproduce

    The easiest way to reproduce is to follow the "Automate image updates to Git" guide, with the following addition to podinfo-deployment.yaml.

    Step 1) Add PV / PVC and attach volume to pod.

          volumes:
            - name: empty-dir-vol
              persistentVolumeClaim:
                claimName: empty-dir-pvc
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: empty-dir-pvc
      namespace: podinfo-image-updater
    spec:
      storageClassName: slow
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 3Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      labels:
        type: nfs
      name: podinfoimageupdater-emptydir-pv
    spec:
      accessModes:
      - ReadWriteOnce
      capacity:
        storage: 10Gi
      claimRef:
        name: empty-dir-pvc
        namespace: podinfo-image-updater
      nfs:
        path: /storage_local/podinfo-image-updater/empty-dir
        server: 192.168.170.36
      storageClassName: slow
    

    If that is confusing, the full manifest is here.

    2. Change the image version to trigger a deployment reconciliation.

    3. Observe the problem: the PVC will get to the Lost state.

    $ kubectl get pvc -w
    NAME            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    empty-dir-pvc   Bound     podinfoimageupdater-emptydir-pv   10Gi       RWO            slow           11s
    empty-dir-pvc   Lost      podinfoimageupdater-emptydir-pv   0                         slow           2m26s
    
    $ microk8s.kubectl describe pvc
    Name:          empty-dir-pvc
    Namespace:     podinfo-image-updater
    StorageClass:  slow
    Status:        Lost
    Volume:        podinfoimageupdater-emptydir-pv
    Labels:        kustomize.toolkit.fluxcd.io/name=flux-system
                   kustomize.toolkit.fluxcd.io/namespace=flux-system
    Annotations:   pv.kubernetes.io/bind-completed: yes
                   pv.kubernetes.io/bound-by-controller: yes
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      0
    Access Modes:  
    VolumeMode:    Filesystem
    Used By:       podinfo-9ccf96ff5-6d8nx    <----------- notice podID
    Events:
      Type     Reason         Age   From                         Message
      ----     ------         ----  ----                         -------
      Warning  ClaimMisbound  26s   persistentvolume-controller  Two claims are bound to the same volume, this one is bound incorrectly
    

    PV will get to Available state

    $ kubectl get pv -w
    podinfoimageupdater-emptydir-pv     10Gi       RWO            Retain           Bound       podinfo-image-updater/empty-dir-pvc   slow                    2m23s
    podinfoimageupdater-emptydir-pv     10Gi       RWO            Retain           Available   podinfo-image-updater/empty-dir-pvc   slow                    2m23s
    

    The reason for that is the order of pod update operations:

    $ kubectl get pod -w
    NAME                       READY   STATUS    RESTARTS       AGE
    podinfo-844777597c-hhj8g   1/1     Running   1 (114m ago)   11h <----- this pod owns PVC
    podinfo-9ccf96ff5-6d8nx    0/1     Pending   0              0s
    podinfo-9ccf96ff5-6d8nx    0/1     Pending   0              0s
    podinfo-9ccf96ff5-6d8nx    0/1     Pending   0              14s
    podinfo-9ccf96ff5-6d8nx    0/1     ContainerCreating   0              14s
    podinfo-9ccf96ff5-6d8nx    0/1     ContainerCreating   0              15s
    podinfo-9ccf96ff5-6d8nx    1/1     Running             0              15s   <--------- this pod creates duplicate PVC
    podinfo-844777597c-hhj8g   1/1     Terminating         1 (116m ago)   11h
    podinfo-844777597c-hhj8g   0/1     Terminating         1 (116m ago)   11h
    podinfo-844777597c-hhj8g   0/1     Terminating         1 (116m ago)   11h
    podinfo-844777597c-hhj8g   0/1     Terminating         1 (116m ago)   11h
    

    Expected behavior

    Successful image update even with PV/PVC attached to the pod

    Screenshots and recordings

    No response

    OS / Distro

    20.04.3 LTS (Focal Fossa)

    Flux version

    flux version 0.27.0

    Flux check

    $ flux check
    ► checking prerequisites
    ✔ Kubernetes 1.22.6-3+7ab10db7034594 >=1.20.6-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► ghcr.io/fluxcd/helm-controller:v0.17.0
    ✔ notification-controller: deployment ready
    ► ghcr.io/fluxcd/notification-controller:v0.22.0
    ✔ image-reflector-controller: deployment ready
    ► ghcr.io/fluxcd/image-reflector-controller:v0.16.0
    ✔ image-automation-controller: deployment ready
    ► ghcr.io/fluxcd/image-automation-controller:v0.20.0
    ✔ kustomize-controller: deployment ready
    ► ghcr.io/fluxcd/kustomize-controller:v0.21.0
    ✔ source-controller: deployment ready
    ► ghcr.io/fluxcd/source-controller:v0.21.2
    ✔ all checks passed

    Git provider

    No response

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by IvanKuchin 27
  • [Gitlab] flux bootstrap fails for personal projects if they already exist

    [Gitlab] flux bootstrap fails for personal projects if they already exist

    When I try to update my flux v2 installation using the bootstrap command it fails with an error from gitlab:

    failed to create project, error: POST https://gitlab.com/api/v4/projects: 400

    I use the below bootstrap command to install / update flux v2, which worked until now:

    $ cat install-flux.sh
    curl -s https://toolkit.fluxcd.io/install.sh | sudo bash
    
    export GITLAB_TOKEN=???????????????
    
    flux bootstrap gitlab \
      --owner=isnull \
      --repository=myrepo \
      --branch=master \
      --path=k8 \
      --token-auth \
      --personal
    

    Executing the flux bootstrap yields the error:

    $ sh install-flux.sh
    [INFO]  Downloading metadata https://api.github.com/repos/fluxcd/flux2/releases/latest
    [INFO]  Using 0.7.4 as release
    [INFO]  Downloading hash https://github.com/fluxcd/flux2/releases/download/v0.7.4/flux_0.7.4_checksums.txt
    [INFO]  Downloading binary https://github.com/fluxcd/flux2/releases/download/v0.7.4/flux_0.7.4_linux_amd64.tar.gz
    [INFO]  Verifying binary download
    [INFO]  Installing flux to /usr/local/bin/flux
    ► connecting to gitlab.com
    ✗ failed to create project, error: POST https://gitlab.com/api/v4/projects: 400 {message: {limit_reached: []}, {name: [has already been taken]}, {path: [has already been taken]}}
    

    Sys info:

    $ flux check
    ► checking prerequisites
    ✔ kubectl 1.20.2 >=1.18.0
    ✔ Kubernetes 1.19.5-34+8af48932a5ef06 >=1.16.0
    ► checking controllers
    ✔ source-controller is healthy
    ► ghcr.io/fluxcd/source-controller:v0.5.6
    ✔ kustomize-controller is healthy
    ► ghcr.io/fluxcd/kustomize-controller:v0.5.3
    ✔ helm-controller is healthy
    ► ghcr.io/fluxcd/helm-controller:v0.4.4
    ✔ notification-controller is healthy
    ► ghcr.io/fluxcd/notification-controller:v0.5.0
    ✔ all checks passed
    

    Maybe some GitLab project API change caused this?

    opened by IsNull 24
  • ImageRepository manifests ignoring spec.secretRef changes

    ImageRepository manifests ignoring spec.secretRef changes

    Describe the bug

    We noticed this issue after updating to v0.25.1; it is not currently affecting one of our other clusters that is on v0.24.1.

    When making changes to our ImageRepository manifests, we noticed that despite the reconciliation passing without issue, the spec.secretRef field was not affected. Example:

    Git Version

    apiVersion: image.toolkit.fluxcd.io/v1beta1
    kind: ImageRepository
    metadata:
      name: some-app
      namespace: flux-system
    spec:
      image: <ECR_URL>/some-app
      interval: 5m
    

    Cluster Version

    apiVersion: image.toolkit.fluxcd.io/v1beta1
    kind: ImageRepository
    metadata:
      name: some-app
      namespace: flux-system
    spec:
      image: <ECR_URL>/some-app
      interval: 5m
      secretRef:
        name: ecr-credentials
    

    Steps to reproduce

    1. add a spec.secretRef section to an existing ImageRepository manifest
    2. commit to git
    3. watch reconciliation pass successfully
    4. remove field
    5. watch reconciliation pass successfully
    6. see that spec.secretRef has not been removed

    Expected behavior

    I expect that when the spec.secretRef is removed, the sync process removes it on the cluster as well, or errors if there is a reason it cannot be edited/applied.

    Screenshots and recordings

    No response

    OS / Distro

    N/A

    Flux version

    v0.25.1

    Flux check

    ► checking prerequisites
    ✔ Kubernetes 1.21.5 >=1.19.0-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► ghcr.io/fluxcd/helm-controller:v0.15.0
    ✔ image-automation-controller: deployment ready
    ► ghcr.io/fluxcd/image-automation-controller:v0.19.0
    ✔ image-reflector-controller: deployment ready
    ► ghcr.io/fluxcd/image-reflector-controller:v0.15.0
    ✔ kustomize-controller: deployment ready
    ► ghcr.io/fluxcd/kustomize-controller:v0.19.0
    ✔ notification-controller: deployment ready
    ► ghcr.io/fluxcd/notification-controller:v0.20.1
    ✔ source-controller: deployment ready
    ► ghcr.io/fluxcd/source-controller:v0.20.1
    ✔ all checks passed

    Git provider

    gitlab

    Container Registry provider

    ECR

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by stvnksslr 22
  • Feature request: `flux render {kustomization|helmrelease}`

    Feature request: `flux render {kustomization|helmrelease}`

    Debugging configuration examples would benefit from a new render subcommand for flux, whereby the fully rendered manifests defined by a Kustomization or HelmRelease object are output. You'd run

    flux render kustomization my-app
    

    and get the streamed manifests as they were (or would have been, except for an error) applied to K8s.
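
    As a rough point of comparison (not part of the original request), later Flux CLI releases include a build subcommand that renders a Kustomization locally; the exact syntax may differ between versions, and the path below is an assumption:

    flux build kustomization my-app --path ./apps/my-app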

    opened by metasim 22
  • Flux ignores kustomization.yaml

    Flux ignores kustomization.yaml

    Describe the bug

    After recently deploying a new cluster with GKE version 1.22 I receive the error below:

    Kustomization/flux-system/${environment} dry-run failed, reason: Invalid, error: Kustomization.kustomize.toolkit.fluxcd.io "${environment}" is invalid: metadata.name: Invalid value: "${environment}": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
    

    It seems that the kustomization.yaml file is somehow completely ignored, because I compared the contents of all the patch targets and they are clearly not patched.

    When, I assume, trying to deploy this:

    apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    kind: Kustomization
    metadata:
      name: ${environment}
      namespace: flux-system
    spec:
      prune: True
      interval: 1m
      dependsOn:
        - name: namespaces
        - ....
      path: environments/${environment}/application
      sourceRef:
        kind: GitRepository
        name: flux-system
        namespace: flux-system
      postBuild:
        substitute:
          environment: ${environment}
    

    Steps to reproduce

    1. Kubernetes version 1.22.12-gke.1200 - I am not sure about this step, but it is the only significant change at that stage

    kustomization.yaml:

    # This manifest was generated by Terraform. DO NOT EDIT.
    # Modify this file through the flux module
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - gotk-sync.yaml
    - gotk-components.yaml
    patches:
      - target:
          version: v1beta2
          group: kustomize.toolkit.fluxcd.io
          kind: Kustomization
          name: flux-system
          namespace: flux-system
        patch: |-
          - op: add
            path: /spec/postBuild
            value:
              substitute:
                environment: "dev"
    

    Expected behavior

    Flux should pick up kustomization.yaml and apply all the patches in it.

    Screenshots and recordings

    No response

    OS / Distro

    WSL2, Kubernetes version 1.22.12-gke.1200

    Flux version

    v0.31.5, v0.35.0

    Flux check

    flux check
    ► checking prerequisites
    ✗ flux 0.31.5 <0.35.0 (new version is available, please upgrade)
    W1013 11:21:43.198854 10069 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead. To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
    ✔ Kubernetes 1.22.12-gke.1200 >=1.20.6-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► ghcr.io/fluxcd/helm-controller:v0.25.0
    ✔ image-automation-controller: deployment ready
    ► ghcr.io/fluxcd/image-automation-controller:v0.26.0
    ✔ image-reflector-controller: deployment ready
    ► ghcr.io/fluxcd/image-reflector-controller:v0.22.0
    ✔ kustomize-controller: deployment ready
    ► ghcr.io/fluxcd/kustomize-controller:v0.29.0
    ✔ notification-controller: deployment ready
    ► ghcr.io/fluxcd/notification-controller:v0.27.0
    ✔ source-controller: deployment ready
    ► ghcr.io/fluxcd/source-controller:v0.30.0
    ► checking crds
    ✔ alerts.notification.toolkit.fluxcd.io/v1beta1
    ✔ buckets.source.toolkit.fluxcd.io/v1beta2
    ✔ gitrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ helmcharts.source.toolkit.fluxcd.io/v1beta2
    ✔ helmreleases.helm.toolkit.fluxcd.io/v2beta1
    ✔ helmrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ imagepolicies.image.toolkit.fluxcd.io/v1beta1
    ✔ imagerepositories.image.toolkit.fluxcd.io/v1beta1
    ✔ imageupdateautomations.image.toolkit.fluxcd.io/v1beta1
    ✔ kustomizations.kustomize.toolkit.fluxcd.io/v1beta2
    ✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ providers.notification.toolkit.fluxcd.io/v1beta1
    ✔ receivers.notification.toolkit.fluxcd.io/v1beta1
    ✔ all checks passed

    Git provider

    GitHub

    Container Registry provider

    ghcr.io

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by ar-qun 21
  • Bootstrap creates empty files

    Bootstrap creates empty files

    The bootstrap command doesn't want to install on an existing Git repository. I can live with it, so I've decided to let it create a new repository. Here's a log of the creation:

     $ flux bootstrap github \
       --token-auth \
       --hostname=github.tools.xxx \
       --owner=yyyy \
       --repository=kubernetes-config2 \
       --branch=master \
       --path=/clusters/k3sdev \
       --team=zzzz
    ► connecting to github.tools.xxx
    ✔ repository created
    ✔ zzzz team access granted
    ✔ repository cloned
    ✚ generating manifests
    ✔ components manifests pushed
    ► installing components in flux-system namespace
    namespace/flux-system created
    networkpolicy.networking.k8s.io/allow-scraping created
    networkpolicy.networking.k8s.io/allow-webhooks created
    networkpolicy.networking.k8s.io/deny-ingress created
    role.rbac.authorization.k8s.io/crd-controller-flux-system created
    rolebinding.rbac.authorization.k8s.io/crd-controller-flux-system created
    clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system created
    customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io created
    service/source-controller created
    deployment.apps/source-controller created
    customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io created
    deployment.apps/kustomize-controller created
    customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io created
    deployment.apps/helm-controller created
    customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io created
    service/notification-controller created
    service/webhook-receiver created
    deployment.apps/notification-controller created
    Waiting for deployment "source-controller" rollout to finish: 0 of 1 updated replicas are available...
    deployment "source-controller" successfully rolled out
    deployment "kustomize-controller" successfully rolled out
    deployment "helm-controller" successfully rolled out
    Waiting for deployment "notification-controller" rollout to finish: 0 of 1 updated replicas are available...
    deployment "notification-controller" successfully rolled out
    ✔ install completed
    ► generating sync manifests
    ✔ sync manifests pushed
    ► applying sync manifests
    ◎ waiting for cluster sync
    ✗ kustomization path not found: stat /tmp/flux-system309109433/clusters/k3sdev: no such file or directory
    

    The repository has been created and README.md is created, but the YAML files are empty. Should they be? Listing the repository log shows:

    $ git log -p
    commit cf5e74b0428674aced9f2fd1b45f7d147991fb40 (HEAD -> master, origin/master, origin/HEAD)
    Author: flux <xxxx>
    Date:   Fri Dec 11 10:58:55 2020 +0100
    
        Add manifests
    
    commit bea83df32c89baeec8031da2235b83504a43c6c3
    Author: flux <xxxx>
    Date:   Fri Dec 11 10:58:35 2020 +0100
    
        Add manifests
    
    commit 0bd99a15329e3370dcf82833455e82efb8ff35d7
    Author: xxxx <xxxx>
    Date:   Fri Dec 11 10:58:31 2020 +0100
    
        Initial commit
    
    bug area/bootstrap 
    opened by Marx2 21
  • Kustomizations without a base do not apply

    Kustomizations without a base do not apply

    Describe the bug

    According to the FAQ, we should be able to patch arbitrary pre-installed resources using Kustomize objects.

    I have not been able to patch any using the (limited) instructions in the FAQ.

    Steps to reproduce

    1. install flux
    2. create kustomization with patchesStrategicMerge
    3. reconcile kustomization

    Expected behavior

    resource patched with provided patch

    Screenshots and recordings

    kustomization:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    patches:
    - path: weave-liveness.yaml
      target:
        kind: DaemonSet
        name: weave-net
        namespace: kube-system
    

    weave-liveness.yaml:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      annotations:
        kustomize.fluxcd.toolkit.io/ssa: merge
      name: weave-net
      namespace: kube-system
    spec:
      template:
        spec:
          containers:
          - name: weave
            livenessProbe:
              exec:
                command:
                - /bin/sh
                - -c
                - /home/weave/weave --local status connections | grep fastdp
              initialDelaySeconds: 20
              periodSeconds: 5
    

    no errors, but also no change / no output.

    # kubectl get kustomizations.kustomize.toolkit.fluxcd.io -n flux-system weave-net
    NAME        AGE   READY   STATUS
    weave-net   22h   True    Applied revision: main/ca160ca0ec5d1ef98cb6fc368d09e6e09195f1ab
    

    OS / Distro

    centos 7.7

    Flux version

    v0.28.4

    Flux check

    flux check

    ► checking prerequisites
    ✔ Kubernetes 1.23.3 >=1.20.6-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► car:5000/helm-controller:v0.18.2
    ✔ image-automation-controller: deployment ready
    ► car:5000/image-automation-controller:v0.21.2
    ✔ image-reflector-controller: deployment ready
    ► car:5000/image-reflector-controller:v0.17.1
    ✔ kustomize-controller: deployment ready
    ► car:5000/kustomize-controller:v0.22.2
    ✔ notification-controller: deployment ready
    ► car:5000/notification-controller:v0.23.1
    ✔ source-controller: deployment ready
    ► car:5000/source-controller:v0.22.4
    ✔ all checks passed

    Git provider

    No response

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by daveoy 20
  • Bootstrapping new cluster fails on k3s v1.20

    Bootstrapping new cluster fails on k3s v1.20

    I have a k3s cluster working on a Raspberry Pi connected to my home local network. Tried to bootstrap a new GOTK repo using the following command:

    flux bootstrap github \
    --owner=$GITHUB_USER \
    --repository=$CONFIG_REPO \
    --branch=master \
    --path=./clusters/my-cluster \
    --personal \
    --kubeconfig=/etc/rancher/k3s/k3s.yaml
    

    The output for the bootstrapping command (notice the "context deadline exceeded" after "waiting for Kustomization "flux-system/flux-system" to be reconciled"):

    ► connecting to github.com
    ► cloning branch "master" from Git repository "https://github.com/argamanza/raspberry-pi-flux-config.git"
    ✔ cloned repository
    ► generating component manifests
    ✔ generated component manifests
    ✔ component manifests are up to date
    ► installing toolkit.fluxcd.io CRDs
    ◎ waiting for CRDs to be reconciled
    ✔ CRDs reconciled successfully
    ► installing components in "flux-system" namespace
    ✔ installed components
    ✔ reconciled components
    ► determining if source secret "flux-system/flux-system" exists
    ✔ source secret up to date
    ► generating sync manifests
    ✔ generated sync manifests
    ✔ sync manifests are up to date
    ► applying sync manifests
    ✔ reconciled sync configuration
    ◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
    ✗ context deadline exceeded
    ► confirming components are healthy
    ✔ source-controller: deployment ready
    ✔ kustomize-controller: deployment ready
    ✔ helm-controller: deployment ready
    ✔ notification-controller: deployment ready
    ✔ all components are healthy
    ✗ bootstrap failed with 1 health check failure(s)
    

    The logs for the Kustomize Controller expose what the issue might be:

    {"level":"info","ts":"2021-04-24T20:55:51.200Z","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
    {"level":"info","ts":"2021-04-24T20:55:51.202Z","logger":"setup","msg":"starting manager"}
    I0424 20:55:51.206769       7 leaderelection.go:243] attempting to acquire leader lease flux-system/kustomize-controller-leader-election...
    {"level":"info","ts":"2021-04-24T20:55:51.307Z","msg":"starting metrics server","path":"/metrics"}
    I0424 20:56:30.436269       7 leaderelection.go:253] successfully acquired lease flux-system/kustomize-controller-leader-election
    {"level":"info","ts":"2021-04-24T20:56:30.436Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
    {"level":"info","ts":"2021-04-24T20:56:30.437Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
    {"level":"info","ts":"2021-04-24T20:56:30.538Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
    {"level":"info","ts":"2021-04-24T20:56:30.639Z","logger":"controller.kustomization","msg":"Starting Controller","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization"}
    {"level":"info","ts":"2021-04-24T20:56:30.639Z","logger":"controller.kustomization","msg":"Starting workers","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","worker count":4}
    {"level":"info","ts":"2021-04-24T20:56:47.576Z","logger":"controller.kustomization","msg":"Kustomization applied in 2.713132582s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"configured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"unchanged","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"unchanged","namespace/flux-system":"unchanged","networkpolicy.networking.k8s.io/allow-egress":"unchanged","networkpolicy.networking.k8s.io/allow-scraping":"unchanged","networkpolicy.networking.k8s.io/allow-webhooks":"unchanged","service/notification-controller":"unchanged","service/source-controller":"unchanged","service/webhook-receiver":"unchanged","serviceaccount/helm-controller":"unchanged","serviceaccount/kustomize-controller":"unchanged","serviceaccount/notification-controller":"unchanged","serviceaccount/source-controller":"unchanged"}}
    {"level":"error","ts":"2021-04-24T20:56:47.609Z","logger":"controller.kustomization","msg":"unable to update status after reconciliation","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"flux-system\" is invalid: status.snapshot.entries.namespace: Invalid value: \"null\": status.snapshot.entries.namespace in body must be of type string: \"null\""}
    {"level":"error","ts":"2021-04-24T20:56:47.609Z","logger":"controller.kustomization","msg":"Reconciler error","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"flux-system\" is invalid: status.snapshot.entries.namespace: Invalid value: \"null\": status.snapshot.entries.namespace in body must be of type string: \"null\""}
    {"level":"info","ts":"2021-04-24T20:56:53.835Z","logger":"controller.kustomization","msg":"Kustomization applied in 2.470822475s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"configured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"unchanged","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"unchanged","namespace/flux-system":"unchanged","networkpolicy.networking.k8s.io/allow-egress":"unchanged","networkpolicy.networking.k8s.io/allow-scraping":"unchanged","networkpolicy.networking.k8s.io/allow-webhooks":"unchanged","service/notification-controller":"unchanged","service/source-controller":"unchanged","service/webhook-receiver":"unchanged","serviceaccount/helm-controller":"unchanged","serviceaccount/kustomize-controller":"unchanged","serviceaccount/notification-controller":"unchanged","serviceaccount/source-controller":"unchanged"}}
    {"level":"error","ts":"2021-04-24T20:56:53.863Z","logger":"controller.kustomization","msg":"unable to update status after reconciliation","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"flux-system\" is invalid: status.snapshot.entries.namespace: Invalid value: \"null\": status.snapshot.entries.namespace in body must be of type string: \"null\""}
    

    From the logs I can tell that status.snapshot.entries.namespace shouldn't be null for the flux-system kustomization. After testing the same bootstrap procedure on a local cluster I provisioned using kind, I can see that the kustomization is indeed missing the status.snapshot data in the k3s cluster, while on my local kind cluster it exists:

    On the k3s cluster:

    kubectl describe kustomization flux-system -n flux-system
    
    Name:         flux-system
    Namespace:    flux-system
    Labels:       kustomize.toolkit.fluxcd.io/checksum=1d4c5beef02b0043768a476cc3fed578aa3ed6f0
                  kustomize.toolkit.fluxcd.io/name=flux-system
                  kustomize.toolkit.fluxcd.io/namespace=flux-system
    Annotations:  <none>
    API Version:  kustomize.toolkit.fluxcd.io/v1beta1
    Kind:         Kustomization
    Metadata:
      Creation Timestamp:  2021-04-24T19:42:50Z
      Finalizers:
        finalizers.fluxcd.io
      Generation:  1
    ...
    ...
    Status:
      Conditions:
        Last Transition Time:  2021-04-24T19:43:30Z
        Message:               reconciliation in progress
        Reason:                Progressing
        Status:                Unknown
        Type:                  Ready
    Events:
      Type    Reason  Age   From                  Message
      ----    ------  ----  ----                  -------
      Normal  info    57m   kustomize-controller  customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io configured
    ...
    

    On the kind cluster:

    kubectl describe kustomization flux-system -n flux-system
    
    Name:         flux-system
    Namespace:    flux-system
    Labels:       kustomize.toolkit.fluxcd.io/checksum=1d4c5beef02b0043768a476cc3fed578aa3ed6f0
                  kustomize.toolkit.fluxcd.io/name=flux-system
                  kustomize.toolkit.fluxcd.io/namespace=flux-system
    Annotations:  <none>
    API Version:  kustomize.toolkit.fluxcd.io/v1beta1
    Kind:         Kustomization
    Metadata:
      Creation Timestamp:  2021-04-25T12:35:37Z
      Finalizers:
        finalizers.fluxcd.io
      Generation:  1
    ...
    ...
    Status:
      Conditions:
        Last Transition Time:   2021-04-25T12:37:02Z
        Message:                Applied revision: master/dbce13415e4118bb071b58dab20d1f2bec527a14
        Reason:                 ReconciliationSucceeded
        Status:                 True
        Type:                   Ready
      Last Applied Revision:    master/dbce13415e4118bb071b58dab20d1f2bec527a14
      Last Attempted Revision:  master/dbce13415e4118bb071b58dab20d1f2bec527a14
      Observed Generation:      1
      Snapshot:
        Checksum:  1d4c5beef02b0043768a476cc3fed578aa3ed6f0
        Entries:
          Kinds:
            /v1, Kind=Namespace:                                     Namespace
            apiextensions.k8s.io/v1, Kind=CustomResourceDefinition:  CustomResourceDefinition
            rbac.authorization.k8s.io/v1, Kind=ClusterRole:          ClusterRole
            rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding:   ClusterRoleBinding
          Namespace:
          Kinds:
            /v1, Kind=Service:                                        Service
            /v1, Kind=ServiceAccount:                                 ServiceAccount
            apps/v1, Kind=Deployment:                                 Deployment
            kustomize.toolkit.fluxcd.io/v1beta1, Kind=Kustomization:  Kustomization
            networking.k8s.io/v1, Kind=NetworkPolicy:                 NetworkPolicy
            source.toolkit.fluxcd.io/v1beta1, Kind=GitRepository:     GitRepository
          Namespace:                                                  flux-system
    Events:
      Type    Reason  Age    From                  Message
      ----    ------  ----   ----                  -------
      Normal  info    3m53s  kustomize-controller  customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io configured
    ...
    

    This is also where my debugging process came to a dead end, as I couldn't find a reason why status.snapshot doesn't populate on my k3s cluster while it does on the kind cluster using the same bootstrap process.

    I believe the fact that the issue only occurs on my Raspberry Pi implies that it might be a networking issue of some kind that prevents the kustomize-controller from getting status updates from GitHub, and that I need to handle port forwarding or something similar, but I'm not sure.

    • Kubernetes version: v1.20.6+k3s1
    • Git provider: GitHub
    flux --version
    flux version 0.13.1
    
    flux check
    ► checking prerequisites
    ✔ kubectl 1.20.6+k3s1 >=1.18.0-0
    ✔ Kubernetes 1.20.6+k3s1 >=1.16.0-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► ghcr.io/fluxcd/helm-controller:v0.10.0
    ✔ kustomize-controller: deployment ready
    ► ghcr.io/fluxcd/kustomize-controller:v0.11.1
    ✔ notification-controller: deployment ready
    ► ghcr.io/fluxcd/notification-controller:v0.13.0
    ✔ source-controller: deployment ready
    ► ghcr.io/fluxcd/source-controller:v0.12.1
    ✔ all checks passed
    
    blocked-upstream 
    opened by argamanza 20
  • custom port is not honored for ssh based git url

    custom port is not honored for ssh based git url

    I am trying to bootstrap flux onto a new cluster, but the git server I have uses a custom port for its ssh access, and flux bootstrap seems to strip the port off, causing the initial clone to fail.

    I found the line in the git bootstrap below, which seems to have actually been changed in the past so that custom ports are allowed for http/s.

    Not sure whether this is intended for ssh (I don't see why), but perhaps this can be changed. I would have made the change myself, but I don't know whether there is a reason it is this way now.

    https://github.com/fluxcd/flux2/blob/18c944d18a6272e4c6fb26116a9db02ba4deb937/cmd/flux/bootstrap_git.go#L190
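
    For illustration, this is the kind of URL involved; a minimal sketch of bootstrapping from a Git server that listens on a non-default SSH port (host, port, organisation and paths are hypothetical):

    flux bootstrap git \
      --url=ssh://git@git.example.com:2222/org/fleet-config.git \
      --branch=main \
      --path=clusters/my-cluster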

    area/bootstrap 
    opened by sartsj 19
  • Improve Flux Fuzz tests reliability

    Improve Flux Fuzz tests reliability

    Recently an upstream change has broken all builds across fluxcd. Builds are broken both at upstream build time (for our project) and locally within our projects. Here's the error output:

    + cd /tmp/oss_fuzz-9iS27e/go-118-fuzz-build
    + go build -o /root/go/bin/go-118-fuzz-build
    + cd addimport
    /root/go/src/github.com/fluxcd/image-reflector-controller/tests/fuzz/oss_fuzz_build.sh: line 37: cd: addimport: No such file or directory
    + cleanup
    + rm -rf /tmp/oss_fuzz-9iS27e
    ERROR:root:Building fuzzers failed.
    

    This is due to changes on https://github.com/AdamKorcz/go-118-fuzz-build/ which were not backwards compatible. This package is the basis of all Go support on oss-fuzz, and therefore any changes to it will affect us.

    For improved reliability we shall remove any unsupported features upstream and try to rely solely on their setup to run smoke tests. We will still suffer from issues when upstream changes are introduced, but this should decrease their impact.

    We may also have to change the way we approach fuzz testing. One of the key changes is to avoid sharing code between fuzz and non-fuzz tests. Although this is supported by Go fuzz, it isn't working well with oss-fuzz, and attempts to improve upstream support are currently stuck.

    Outstanding Tasks

    • [ ] Upstream changes
      • [ ] Add support for linking additional libs whilst compiling. (https://github.com/google/oss-fuzz/pull/9063)
      • [ ] Create auto discovery script within fluxcd project, so that oss_fuzz_build.sh does not need to be duplicated across all flux projects. (https://github.com/google/oss-fuzz/pull/9064)
    • [ ] CI
      • [ ] Update make fuzz-smoketest to be based on Upstream base image
      • [ ] Use google/oss-fuzz/infra/cifuzz/actions/build_fuzzers and google/oss-fuzz/infra/cifuzz/actions/run_fuzzers on cifuzz
    • [ ] pkg
      • [x] https://github.com/fluxcd/pkg/pull/415
      • [x] https://github.com/fluxcd/pkg/pull/416
      • [ ] Rely on upstream discovery/build scripts
    • [ ] helm-controller
      • [x] https://github.com/fluxcd/helm-controller/pull/565
      • [ ] Rely on upstream discovery/build scripts
    • [ ] kustomize-controller
      • [x] https://github.com/fluxcd/kustomize-controller/pull/771
      • [ ] Rely on upstream discovery/build scripts
    • [ ] source-controller
      • [x] https://github.com/fluxcd/source-controller/pull/965
      • [x] https://github.com/fluxcd/source-controller/pull/968
      • [ ] Rely on upstream discovery/build scripts
    • [ ] notification-controller
      • [x] https://github.com/fluxcd/notification-controller/pull/446
      • [ ] Rely on upstream discovery/build scripts
    • [ ] image-automation-controller
      • [x] https://github.com/fluxcd/image-automation-controller/pull/462
      • [x] https://github.com/fluxcd/image-automation-controller/pull/464
      • [ ] Rely on upstream discovery/build scripts
    • [ ] image-reflector-controller
      • [x] https://github.com/fluxcd/image-reflector-controller/pull/329
      • [ ] Rely on upstream discovery/build scripts
    area/build 
    opened by pjbgf 2
  • Prepare for v0.38 release

    Prepare for v0.38 release

    The 0.38 release promotes the Notification API to v1beta2 and introduces experimental support for Kustomize components.

    TODOs:

    • [ ] https://github.com/fluxcd/notification-controller/pull/435
    • [ ] https://github.com/fluxcd/kustomize-controller/pull/754

    Release checklist:

    • [ ] kustomize-controller v0.32.0
    • [ ] notification-controller v0.30.0
    • [ ] flux2 v0.38.0
    • [ ] terraform-provider-flux v0.22.0

    Documentation updates:

    • [ ] Update the website docs script to Notification API v1beta2 URLs
    • [ ] Update the notifications and webhook receivers guides to v1beta2
    • [ ] Publish Flux release change log to GitHub & Slack
    umbrella-issue 
    opened by stefanprodan 0
  • Update GH Actions Helm promotion example

    Update GH Actions Helm promotion example

    On the Flux website, there is a use-case example of doing Helm chart promotions using GitHub Actions. This example depends on the assumption that the Revision contains a 1:1 copy of the version as reported by the chart metadata.

    For (verified) Helm charts originating from an OCI registry, this is expected to change based on RFC-0005. Before this is released, we should update the example to only look at the left side of the VERSION=${{ github.event.client_payload.metadata.revision }} string, after splitting it by @.
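
    For illustration, a minimal shell sketch of that split inside a GitHub Actions run step, keeping only the part before the '@' (the variable names are assumptions):

    REVISION="${{ github.event.client_payload.metadata.revision }}"
    VERSION="${REVISION%%@*}"   # e.g. "6.2.0@sha256:abc..." becomes "6.2.0"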

    area/docs enhancement area/helm 
    opened by hiddeco 0
  • Internal error occurred: failed calling webhook

    Internal error occurred: failed calling webhook "validate.kyverno.svc-fail"

    Describe the bug

    Calling flux bootstrap leads to the following error:

    ► connecting to https://example.com
    ► cloning branch "main" from Git repository "https://example.com/my_owner/my_repo.git"
    ✔ cloned repository
    ► generating component manifests
    ✔ generated component manifests
    ✔ component manifests are up to date
    ► installing components in "flux-system" namespace
    ✗ GitRepository/flux-system/flux-system dry-run failed, reason: InternalError, error: Internal error occurred: failed calling webhook "validate.kyverno.svc-fail": failed to call webhook: Post "https://kyverno-svc.kyverno.svc:443/validate/fail?timeout=10s": service "kyverno-svc" not found
    

    Steps to reproduce

    Install and uninstall flux a couple of times.

    Then run:

    flux bootstrap gitlab --hostname example.com --owner my_owner --repository my_repo --cluster-domain my.domain --context="[email protected]" --branch=main --path=clusters/staging --token-auth

    Expected behavior

    flux is bootstrapped and ready to run.

    Screenshots and recordings

    No response

    OS / Distro

    Ubuntu 22.04

    Flux version

    0.36.0

    Flux check

    ► checking prerequisites
    ✔ Kubernetes 1.24.3 >=1.20.6-0
    ► checking controllers
    ✗ no controllers found in the 'flux-system' namespace with the label selector 'app.kubernetes.io/part-of=flux'
    ► checking crds
    ✔ alerts.notification.toolkit.fluxcd.io/v1beta1
    ✔ buckets.source.toolkit.fluxcd.io/v1beta2
    ✔ gitrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ helmcharts.source.toolkit.fluxcd.io/v1beta2
    ✔ helmreleases.helm.toolkit.fluxcd.io/v2beta1
    ✔ helmrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ kustomizations.kustomize.toolkit.fluxcd.io/v1beta2
    ✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ providers.notification.toolkit.fluxcd.io/v1beta1
    ✔ receivers.notification.toolkit.fluxcd.io/v1beta1
    ✗ check failed

    Git provider

    gitlab enterprise

    Container Registry provider

    No response

    Additional context

    I ran some 'flux bootstrap' and 'flux uninstall' cycles to find some problems in the config.

    Now I ended up in a state which I had already passed through a couple of times.

    Any hints about what causes this internal error and how to handle it are highly welcome.
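
    Not part of the original report, but one way to inspect the state described above is to list admission webhook configurations left behind by a previous Kyverno install (assuming Kyverno itself has been removed):

    kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep -i kyverno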

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by florath 0
  • Resource deletion/recreation fails when immutable changed and force: true

    Resource deletion/recreation fails when immutable changed and force: true

    Describe the bug

    I've tried the running jobs via Flux tutorial (https://fluxcd.io/flux/use-cases/running-jobs/) and there is a bug, I think.

    Job/default/db-migration immutable field detected, failed to delete object, error: jobs.batch "db-migration" not found... Seems like some race condition: after the repo sync with force: true, the item is actually deleted, but Flux tries to load/delete it a second time.

    Steps to reproduce

    Software Installed:

    Flux:

    % flux -v
    flux version 0.36.0
    

    K8S (Docker Desktop for Mac):

    % kubectl version --short
    Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
    Client Version: v1.25.2
    Kustomize Version: v4.5.7
    Server Version: v1.25.2
    

    Please clone my error reproduction repo: https://github.com/vinkiel/flux-prune-bug.git, or better, fork it and change the repo address in flux/gotk-sync.yaml to your own.

    Go to repo folder

    Starting with empty flux-system and default namespaces:

    kubectl apply -k flux

    Results:

    namespace/flux-system created
    customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/ocirepositories.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io created
    serviceaccount/helm-controller created
    serviceaccount/kustomize-controller created
    serviceaccount/notification-controller created
    serviceaccount/source-controller created
    clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system created
    clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system created
    clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system created
    service/notification-controller created
    service/source-controller created
    service/webhook-receiver created
    deployment.apps/helm-controller created
    deployment.apps/kustomize-controller created
    deployment.apps/notification-controller created
    deployment.apps/source-controller created
    kustomization.kustomize.toolkit.fluxcd.io/flux-system created
    networkpolicy.networking.k8s.io/allow-egress created
    networkpolicy.networking.k8s.io/allow-scraping created
    networkpolicy.networking.k8s.io/allow-webhooks created
    gitrepository.source.toolkit.fluxcd.io/flux-system created
    

    kubectl apply -k app-ci

    kustomization.kustomize.toolkit.fluxcd.io/app-deploy created
    kustomization.kustomize.toolkit.fluxcd.io/app-pre-deploy created
    

    kubectl get gitrepository -n flux-system

    NAME          URL                                             AGE   READY   STATUS
    flux-system   https://github.com/vinkiel/flux-prune-bug.git   90s   True    stored artifact for revision 'main/037f48ace8b2dd2381c4fa652317e237268fae6f'
    

    kubectl get kustomization -n flux-system

    NAME             AGE     READY   STATUS
    app-deploy       2m55s   True    Applied revision: main/037f48ace8b2dd2381c4fa652317e237268fae6f
    app-pre-deploy   2m55s   True    Applied revision: main/037f48ace8b2dd2381c4fa652317e237268fae6f
    flux-system      2m55s   True    Applied revision: main/037f48ace8b2dd2381c4fa652317e237268fae6f
    

    kubectl get pod

    NAME                 READY   STATUS      RESTARTS   AGE
    db-migration-8lmwz   0/1     Completed   0          5m22s
    webserver            1/1     Running     0          4m57s
    

    kubectl logs db-migration-8lmwz (use the pod name that is actually listed)

    starting db migration
    

    Looks good till now.

    Now change the tag of busybox in pre-deploy/migration.job.yaml to 1.33, 1.34, 1.35 etc., commit and push to the main branch. Wait for the sync (you can use kubectl get kustomization -n flux-system -w to observe).

    kubectl get kustomization -n flux-system

    NAME             AGE   READY   STATUS
    app-deploy       11m   False   dependency 'flux-system/app-pre-deploy' revision is not up to date
    app-pre-deploy   11m   False   Job/default/db-migration immutable field detected, failed to delete object, error: jobs.batch "db-migration" not found...
    flux-system      11m   True    Applied revision: main/c90a7a1657c2bde80a2e61081f0147c766550392
    

    Job/default/db-migration immutable field detected, failed to delete object, error: jobs.batch "db-migration" not found... is the error that blocks deployment.

    However, the Job was removed due to force: true:

    kubectl get jobs

    No resources found in default namespace.
    

    This configuration works best when the Jobs are using the same image and tag as the application being deployed. When a new version of the application is deployed, the image tags are updated. The update of the image tag will force a recreation of the Jobs.
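
    For reference, a minimal sketch of the kind of Flux Kustomization the tutorial uses for such Jobs, with force enabled so that immutable resources are deleted and recreated (the name, path and source are assumed from this reproduction repo):

    apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    kind: Kustomization
    metadata:
      name: app-pre-deploy
      namespace: flux-system
    spec:
      interval: 5m
      path: ./pre-deploy
      prune: true
      force: true          # delete and recreate immutable resources such as Jobs
      wait: true
      sourceRef:
        kind: GitRepository
        name: flux-system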

    Note: the issue is not repeatable every time, so it seems like a race condition. Please change the busybox version a few more times if the first try passes with no errors. Note: when the error occurs, the next tag update commit will trigger the Job creation successfully.

    Expected behavior

    There should be no error Job/default/db-migration immutable field detected, failed to delete object, error: jobs.batch "db-migration" not found... and the Jobs should be recreated smoothly.

    BTW, any ideas how to run pre/post deployment jobs differently within Flux only?

    Screenshots and recordings

    No response

    OS / Distro

    Mac OS 12.1 (21C52)

    Flux version

    flux: v0.36.0

    Flux check

    ► checking prerequisites
    ✔ Kubernetes 1.25.2 >=1.20.6-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► ghcr.io/fluxcd/helm-controller:v0.26.0
    ✔ kustomize-controller: deployment ready
    ► ghcr.io/fluxcd/kustomize-controller:v0.30.0
    ✔ notification-controller: deployment ready
    ► ghcr.io/fluxcd/notification-controller:v0.28.0
    ✔ source-controller: deployment ready
    ► ghcr.io/fluxcd/source-controller:v0.31.0
    ► checking crds
    ✔ alerts.notification.toolkit.fluxcd.io/v1beta1
    ✔ buckets.source.toolkit.fluxcd.io/v1beta2
    ✔ gitrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ helmcharts.source.toolkit.fluxcd.io/v1beta2
    ✔ helmreleases.helm.toolkit.fluxcd.io/v2beta1
    ✔ helmrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ kustomizations.kustomize.toolkit.fluxcd.io/v1beta2
    ✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ providers.notification.toolkit.fluxcd.io/v1beta1
    ✔ receivers.notification.toolkit.fluxcd.io/v1beta1
    ✔ all checks passed

    Git provider

    No response

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by vinkiel 1
  • flux bootstrap behind a proxy fails with "failed to fetch image reference"

    flux bootstrap behind a proxy fails with "failed to fetch image reference"

    Describe the bug

    flux bootstrap fails behind a proxy.

    Error log:

    ► connecting to https://example.com
    ► cloning branch "main" from Git repository "https://example.com/my_owner/my_repo.git"
    ✔ cloned repository
    ► generating component manifests
    ✔ generated component manifests
    ✔ component manifests are up to date
    ► installing components in "flux-system" namespace
    ✗ Deployment/flux-system/helm-controller dry-run failed, error: admission webhook "mutate.kyverno.svc-fail" denied the request: 
    
    policy Deployment/flux-system/helm-controller for resource error: 
    
    verify-flux-images:
      autogen-verify-cosign-signature: 'failed to update digest: failed to fetch image
        reference: ghcr.io/fluxcd/helm-controller:v0.26.0, error: Get "https://ghcr.io/v2/":
        dial tcp: lookup ghcr.io on 169.169.169.169:53: server misbehaving'
    
    

    Steps to reproduce

    1. Install flux on a server behind a proxy
    2. run flux bootstrap gitlab --hostname example.com --owner my_owner --repository my_repo --cluster-domain my.domain --context="[email protected]" --branch=main --path=clusters/staging --token-auth

    Expected behavior

    Ready to use flux installation.

    Screenshots and recordings

    http_proxy/https_proxy and no_proxy are set, in both upper and lower case.

    To me it looks like the repository can be reached through the proxy (which is the only way out), but the 'verify-flux-images' policy is not using the proxy.
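
    If that is the case, one possible workaround (unverified, and assuming a standard Kyverno install with a Deployment named kyverno in the kyverno namespace, plus a placeholder proxy address) is to give the Kyverno admission controller the same proxy settings so that its image-verification calls to ghcr.io go through the proxy:

    # Unverified sketch; deployment name, namespace and proxy address are assumptions.
    kubectl -n kyverno set env deployment/kyverno \
      HTTP_PROXY=http://proxy.example.com:3128 \
      HTTPS_PROXY=http://proxy.example.com:3128 \
      NO_PROXY=10.0.0.0/8,.svc,.cluster.local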

    OS / Distro

    Ubuntu 22.04

    Flux version

    0.36.0

    Flux check

    ► checking prerequisites
    ✔ Kubernetes 1.24.3 >=1.20.6-0
    ► checking controllers
    ✗ no controllers found in the 'flux-system' namespace with the label selector 'app.kubernetes.io/part-of=flux'
    ► checking crds
    ✔ alerts.notification.toolkit.fluxcd.io/v1beta1
    ✔ buckets.source.toolkit.fluxcd.io/v1beta2
    ✔ gitrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ helmcharts.source.toolkit.fluxcd.io/v1beta2
    ✔ helmreleases.helm.toolkit.fluxcd.io/v2beta1
    ✔ helmrepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ kustomizations.kustomize.toolkit.fluxcd.io/v1beta2
    ✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
    ✔ providers.notification.toolkit.fluxcd.io/v1beta1
    ✔ receivers.notification.toolkit.fluxcd.io/v1beta1
    ✗ check failed

    Git provider

    GitHub Enterprise

    Container Registry provider

    No response

    Additional context

    I'm using https://github.com/fluxcd/flux2-multi-tenancy

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by florath 2
Releases: v0.37.0
Owner: Flux project (Open and extensible continuous delivery solution for Kubernetes)