
Overview

Flux version 2


Flux is a tool for keeping Kubernetes clusters in sync with sources of configuration (like Git repositories), and automating updates to configuration when there is new code to deploy.

Flux version 2 ("v2") is built from the ground up to use Kubernetes' API extension system, and to integrate with Prometheus and other core components of the Kubernetes ecosystem. In version 2, Flux supports multi-tenancy and support for syncing an arbitrary number of Git repositories, among other long-requested features.

Flux v2 is constructed with the GitOps Toolkit, a set of composable APIs and specialized tools for building Continuous Delivery on top of Kubernetes.

Flux installation

With Homebrew for macOS and Linux:

brew install fluxcd/tap/flux

With GoFish for Windows, macOS and Linux:

gofish install flux

With Bash for macOS and Linux:

curl -s https://fluxcd.io/install.sh | sudo bash

# enable completions in ~/.bash_profile
. <(flux completion bash)

Arch Linux (AUR) packages:

  • flux-bin: install the latest stable version using a pre-built binary (recommended)
  • flux-go: build the latest stable version from source code
  • flux-scm: build the latest (unstable) version from source code from our git main branch

Binaries for macOS AMD64/ARM64, Linux AMD64/ARM/ARM64 and Windows are available to download on the release page.

A multi-arch container image with kubectl and flux is available on Docker Hub and GitHub:

  • docker.io/fluxcd/flux-cli
  • ghcr.io/fluxcd/flux-cli
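
The image can also be run directly; a minimal sketch, assuming the flux binary is on the image's PATH and using a placeholder tag:

docker run --rm --entrypoint=flux ghcr.io/fluxcd/flux-cli:<version> --version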

Verify that your cluster satisfies the prerequisites with:

flux check --pre

Get started

To get started with Flux, browse the documentation or follow one of the getting started guides.
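
For example, a common first step is bootstrapping Flux onto a cluster from a GitHub repository; a minimal sketch with placeholder owner and repository values:

export GITHUB_TOKEN=<your-token>
flux bootstrap github \
  --owner=<github-user> \
  --repository=<repository> \
  --branch=main \
  --path=./clusters/my-cluster \
  --personal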

If you need help, please refer to our Support page.

GitOps Toolkit

The GitOps Toolkit is the set of APIs and controllers that make up the runtime for Flux v2. The APIs comprise Kubernetes custom resources, which can be created and updated by a cluster user, or by other automation tooling.
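
As an illustration, the flux CLI can generate these custom resources for you; a minimal sketch using the public podinfo repository as a stand-in source, printing a GitRepository and a Kustomization as YAML (with --export) instead of applying them:

flux create source git podinfo \
  --url=https://github.com/stefanprodan/podinfo \
  --branch=master \
  --interval=1m \
  --export

flux create kustomization podinfo \
  --source=GitRepository/podinfo \
  --path="./kustomize" \
  --prune=true \
  --interval=5m \
  --export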


You can use the toolkit to extend Flux, or to build your own systems for continuous delivery -- see the developer guides.

Components

Community

Need help or want to contribute? Please see the links below. The Flux project is always looking for new contributors and there are a multitude of ways to get involved.

Events

Check out our events calendar for upcoming talks, events, and meetings you can attend, or view the resources section for videos of past events.

We look forward to seeing you there!

Issues
  • AKS: Azure network policy addon blocks source-controller ingress

    AKS: Azure network policy addon blocks source-controller ingress

    Dears,

    I'm trying to bootstrap flux2 in a new Azure AKS cluster without any network policies defined. After all CRDs are installed and the GitHub account is created, the bootstrap finishes with a time-exceeded error. The four controller pods are up and running.

    • The cluster is synchronized with the last commit correctly as below:

    #kubectl get gitrepositories.source.toolkit.fluxcd.io -A
    NAMESPACE     NAME          URL                                READY   STATUS                                                            AGE
    flux-system   flux-system   https://github.com/name/tanyflux   True    Fetched revision: main/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a   59m

    • But the Kustomization has the below error:

      #kubectl get kustomizations.kustomize.toolkit.fluxcd.io -A
      failed to download artifact from http://source-controller.flux-system.svc.cluster.local./gitrepository/flux-system/flux-system/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a.tar.gz, error: Get "http://source-controller.flux-system.svc.cluster.local./gitrepository/flux-system/flux-system/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a.tar.gz": dial tcp 10.0.165.86:80: i/o timeout

    • The same error exists when checking the kustomize-controller pod, as below:

      #kubectl logs kustomize-controller-7f5455cd78-wwxhk -n flux-system
      {"level":"error","ts":"2021-01-14T09:04:52.524Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"flux-system","namespace":"flux-system","error":"failed to download artifact from http://source-controller.flux-system.svc.cluster.local./gitrepository/flux-system/flux-system/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a.tar.gz, error: Get "http://source-controller.flux-system.svc.cluster.local./gitrepository/flux-system/flux-system/6554ea6324d70caf0f2dfa200e137fd9c2aecc8a.tar.gz": dial tcp 10.0.165.86:80: i/o timeout"}

    Thanks for any helpful advice.
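
    One way to check whether in-cluster traffic to source-controller is being blocked is to request the Service from a throwaway pod in the same namespace; a minimal sketch (busybox is just an example image, not from the report):

    kubectl run netcheck -n flux-system --rm -it --restart=Never \
      --image=busybox -- wget -O- -T 5 \
      http://source-controller.flux-system.svc.cluster.local/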

    bug blocked-upstream 
    opened by mazen-bassiouny 47
  • Bootstrap fails the first time

    Bootstrap fails the first time

    Describe the bug

    When running bootstrap on a github repository it seems to always fail the first time with:

    installing components in "flux-system" namespace
    Kustomization/flux-system/flux-system dry-run failed, error: no matches for kind "Kustomization" in version "kustomize.toolkit.fluxcd.io/v1beta2"
    

    After running the exact same bootstrap command again it works as expected. The bootstrap command is flux bootstrap github --owner=*** --repository=*** --path=some/repo/path --personal

    Any ideas what this might be about?
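
    Not an official explanation, but one way to confirm the Kustomization CRD is actually registered (and which versions it serves) before retrying the bootstrap is:

    kubectl wait --for=condition=established --timeout=60s \
      crd/kustomizations.kustomize.toolkit.fluxcd.io
    kubectl get crd kustomizations.kustomize.toolkit.fluxcd.io \
      -o jsonpath='{.spec.versions[*].name}'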

    Steps to reproduce

    N/A

    Expected behavior

    N/A

    Screenshots and recordings

    No response

    OS / Distro

    Windows 10

    Flux version

    0.25.3

    Flux check

    N/A

    Git provider

    github

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by mibollma 30
  • Two PVCs bound to the same PV

    Two PVCs bound to the same PV

    Describe the bug

    Hello team,

    The reconciliation process creates the new pod before deleting the old one. When the pod has a PVC in its volumes section, that ordering creates a double claim on the same PV. IMO the order of operations should be:

    1. remove old pod
    2. create new one
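
    For what it's worth, that create-before-delete ordering is the default RollingUpdate behaviour of a Deployment; switching the Deployment to the Recreate strategy enforces the delete-then-create sequence described above. A hedged sketch, assuming the Deployment from the guide is named podinfo in the podinfo-image-updater namespace:

    kubectl -n podinfo-image-updater patch deployment podinfo --type merge \
      -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'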

    Steps to reproduce

    The easiest way to reproduce is to follow the "Automate image updates to Git" guide, with the following addition to podinfo-deployment.yaml.

    Step 1) Add PV / PVC and attach volume to pod.

          volumes:
            - name: empty-dir-vol
              persistentVolumeClaim:
                claimName: empty-dir-pvc
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: empty-dir-pvc
      namespace: podinfo-image-updater
    spec:
      storageClassName: slow
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 3Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      labels:
        type: nfs
      name: podinfoimageupdater-emptydir-pv
    spec:
      accessModes:
      - ReadWriteOnce
      capacity:
        storage: 10Gi
      claimRef:
        name: empty-dir-pvc
        namespace: podinfo-image-updater
      nfs:
        path: /storage_local/podinfo-image-updater/empty-dir
        server: 192.168.170.36
      storageClassName: slow
    

    If that is confusing, the full manifest is here.

    Step 2) Change the image version to trigger deployment reconciliation.

    Step 3) Observe the problem. The PVC will go to the Lost state:

    $ kubectl get pvc -w
    NAME            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    empty-dir-pvc   Bound     podinfoimageupdater-emptydir-pv   10Gi       RWO            slow           11s
    empty-dir-pvc   Lost      podinfoimageupdater-emptydir-pv   0                         slow           2m26s
    
    $ microk8s.kubectl describe pvc
    Name:          empty-dir-pvc
    Namespace:     podinfo-image-updater
    StorageClass:  slow
    Status:        Lost
    Volume:        podinfoimageupdater-emptydir-pv
    Labels:        kustomize.toolkit.fluxcd.io/name=flux-system
                   kustomize.toolkit.fluxcd.io/namespace=flux-system
    Annotations:   pv.kubernetes.io/bind-completed: yes
                   pv.kubernetes.io/bound-by-controller: yes
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      0
    Access Modes:  
    VolumeMode:    Filesystem
    Used By:       podinfo-9ccf96ff5-6d8nx    <----------- notice podID
    Events:
      Type     Reason         Age   From                         Message
      ----     ------         ----  ----                         -------
      Warning  ClaimMisbound  26s   persistentvolume-controller  Two claims are bound to the same volume, this one is bound incorrectly
    

    The PV will go to the Available state:

    $ kubectl get pv -w
    podinfoimageupdater-emptydir-pv     10Gi       RWO            Retain           Bound       podinfo-image-updater/empty-dir-pvc   slow                    2m23s
    podinfoimageupdater-emptydir-pv     10Gi       RWO            Retain           Available   podinfo-image-updater/empty-dir-pvc   slow                    2m23s
    

    The reason for that is the order of pod update operations:

    $ kubectl get pod -w
    NAME                       READY   STATUS    RESTARTS       AGE
    podinfo-844777597c-hhj8g   1/1     Running   1 (114m ago)   11h <----- this pod owns PVC
    podinfo-9ccf96ff5-6d8nx    0/1     Pending   0              0s
    podinfo-9ccf96ff5-6d8nx    0/1     Pending   0              0s
    podinfo-9ccf96ff5-6d8nx    0/1     Pending   0              14s
    podinfo-9ccf96ff5-6d8nx    0/1     ContainerCreating   0              14s
    podinfo-9ccf96ff5-6d8nx    0/1     ContainerCreating   0              15s
    podinfo-9ccf96ff5-6d8nx    1/1     Running             0              15s   <--------- this pod creates duplicate PVC
    podinfo-844777597c-hhj8g   1/1     Terminating         1 (116m ago)   11h
    podinfo-844777597c-hhj8g   0/1     Terminating         1 (116m ago)   11h
    podinfo-844777597c-hhj8g   0/1     Terminating         1 (116m ago)   11h
    podinfo-844777597c-hhj8g   0/1     Terminating         1 (116m ago)   11h
    

    Expected behavior

    Successful image update even with PV/PVC attached to the pod

    Screenshots and recordings

    No response

    OS / Distro

    20.04.3 LTS (Focal Fossa)

    Flux version

    flux version 0.27.0

    Flux check

    $ flux check
    ► checking prerequisites
    ✔ Kubernetes 1.22.6-3+7ab10db7034594 >=1.20.6-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► ghcr.io/fluxcd/helm-controller:v0.17.0
    ✔ notification-controller: deployment ready
    ► ghcr.io/fluxcd/notification-controller:v0.22.0
    ✔ image-reflector-controller: deployment ready
    ► ghcr.io/fluxcd/image-reflector-controller:v0.16.0
    ✔ image-automation-controller: deployment ready
    ► ghcr.io/fluxcd/image-automation-controller:v0.20.0
    ✔ kustomize-controller: deployment ready
    ► ghcr.io/fluxcd/kustomize-controller:v0.21.0
    ✔ source-controller: deployment ready
    ► ghcr.io/fluxcd/source-controller:v0.21.2
    ✔ all checks passed

    Git provider

    No response

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by IvanKuchin 27
  • [Gitlab] flux bootstrap fails for personal projects if they already exist

    [Gitlab] flux bootstrap fails for personal projects if they already exist

    When I try to update my flux v2 installation using the bootstrap command it fails with an error from gitlab:

    failed to create project, error: POST https://gitlab.com/api/v4/projects: 400

    I use the below bootstrap command to install / update flux v2, which worked until now:

    $ cat install-flux.sh
    curl -s https://toolkit.fluxcd.io/install.sh | sudo bash
    
    export GITLAB_TOKEN=???????????????
    
    flux bootstrap gitlab \
      --owner=isnull \
      --repository=myrepo \
      --branch=master \
      --path=k8 \
      --token-auth \
      --personal
    

    Executing the flux bootstrap yields the error:

    $ sh install-flux.sh
    [INFO]  Downloading metadata https://api.github.com/repos/fluxcd/flux2/releases/latest
    [INFO]  Using 0.7.4 as release
    [INFO]  Downloading hash https://github.com/fluxcd/flux2/releases/download/v0.7.4/flux_0.7.4_checksums.txt
    [INFO]  Downloading binary https://github.com/fluxcd/flux2/releases/download/v0.7.4/flux_0.7.4_linux_amd64.tar.gz
    [INFO]  Verifying binary download
    [INFO]  Installing flux to /usr/local/bin/flux
    ► connecting to gitlab.com
    ✗ failed to create project, error: POST https://gitlab.com/api/v4/projects: 400 {message: {limit_reached: []}, {name: [has already been taken]}, {path: [has already been taken]}}
    

    Sys info:

    $ flux check
    ► checking prerequisites
    ✔ kubectl 1.20.2 >=1.18.0
    ✔ Kubernetes 1.19.5-34+8af48932a5ef06 >=1.16.0
    ► checking controllers
    ✔ source-controller is healthy
    ► ghcr.io/fluxcd/source-controller:v0.5.6
    ✔ kustomize-controller is healthy
    ► ghcr.io/fluxcd/kustomize-controller:v0.5.3
    ✔ helm-controller is healthy
    ► ghcr.io/fluxcd/helm-controller:v0.4.4
    ✔ notification-controller is healthy
    ► ghcr.io/fluxcd/notification-controller:v0.5.0
    ✔ all checks passed
    

    Maybe some GitLab project API change caused this?

    opened by IsNull 24
  • ImageRepository manifests ignoring spec.secretRef changes

    ImageRepository manifests ignoring spec.secretRef changes

    Describe the bug

    We noticed this issue after updating to v0.25.1; it is not currently affecting one of our other clusters that is on v0.24.1.

    When making changes to our ImageRepository manifests, we noticed that despite the reconciliation passing without issue, the spec.secretRef field was not affected. Example:

    Git Version

    apiVersion: image.toolkit.fluxcd.io/v1beta1
    kind: ImageRepository
    metadata:
      name: some-app
      namespace: flux-system
    spec:
      image: <ECR_URL>/some-app
      interval: 5m
    

    Cluster Version

    apiVersion: image.toolkit.fluxcd.io/v1beta1
    kind: ImageRepository
    metadata:
      name: some-app
      namespace: flux-system
    spec:
      image: <ECR_URL>/some-app
      interval: 5m
      secretRef:
        name: ecr-credentials
    

    Steps to reproduce

    1. add a spec.secretRef section to an existing ImageRepository manifest
    2. commit to git
    3. watch reconciliation pass successfully
    4. remove field
    5. watch reconciliation pass successfully
    6. see that spec.secretRef has not been removed
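
    A hedged way to dig into why the removal is not pruned is to inspect which field manager owns spec.secretRef on the live object (server-side apply ownership), e.g.:

    kubectl get imagerepository some-app -n flux-system \
      --show-managed-fields -o yaml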

    Expected behavior

    I expect that when removing spec.secretRef, the sync process removes it on the cluster as well, or errors if there is a reason it cannot be edited/applied.

    Screenshots and recordings

    No response

    OS / Distro

    N/A

    Flux version

    v0.25.1

    Flux check

    ► checking prerequisites
    ✔ Kubernetes 1.21.5 >=1.19.0-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► ghcr.io/fluxcd/helm-controller:v0.15.0
    ✔ image-automation-controller: deployment ready
    ► ghcr.io/fluxcd/image-automation-controller:v0.19.0
    ✔ image-reflector-controller: deployment ready
    ► ghcr.io/fluxcd/image-reflector-controller:v0.15.0
    ✔ kustomize-controller: deployment ready
    ► ghcr.io/fluxcd/kustomize-controller:v0.19.0
    ✔ notification-controller: deployment ready
    ► ghcr.io/fluxcd/notification-controller:v0.20.1
    ✔ source-controller: deployment ready
    ► ghcr.io/fluxcd/source-controller:v0.20.1
    ✔ all checks passed

    Git provider

    gitlab

    Container Registry provider

    ECR

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by stvnksslr 22
  • Feature request: `flux render {kustomization|helmrelease}`

    Feature request: `flux render {kustomization|helmrelease}`

    Debugging configurations would benefit from a new render subcommand for flux, whereby the fully rendered manifests defined by a Kustomization or HelmRelease object are output. You'd run

    flux render kustomization my-app
    

    and get the streamed manifests as they were (or would have been, barring an error) applied to K8s.
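
    For what it's worth, newer Flux releases ship a flux build kustomization command that covers part of this for Kustomizations; a sketch, with the path pointing at a local checkout of the manifests:

    flux build kustomization my-app --path ./path/to/manifests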

    opened by metasim 22
  • Bootstrap creates empty files

    Bootstrap creates empty files

    The bootstrap command doesn't want to install on an existing Git repository. I can live with it, so I've decided to let it create a new repository. Here's the log of the creation:

     $ flux bootstrap github   --token-auth   --hostname=github.tools.xxx   --owner=yyyy   --repository=kubernetes-config2   --branch=master   --path=/clusters/k3sdev   --team=zzzz
    ► connecting to github.tools.xxx
    ✔ repository created
    ✔ zzzz team access granted
    ✔ repository cloned
    ✚ generating manifests
    ✔ components manifests pushed
    ► installing components in flux-system namespace
    namespace/flux-system created
    networkpolicy.networking.k8s.io/allow-scraping created
    networkpolicy.networking.k8s.io/allow-webhooks created
    networkpolicy.networking.k8s.io/deny-ingress created
    role.rbac.authorization.k8s.io/crd-controller-flux-system created
    rolebinding.rbac.authorization.k8s.io/crd-controller-flux-system created
    clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system created
    customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io created
    service/source-controller created
    deployment.apps/source-controller created
    customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io created
    deployment.apps/kustomize-controller created
    customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io created
    deployment.apps/helm-controller created
    customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io created
    customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io created
    service/notification-controller created
    service/webhook-receiver created
    deployment.apps/notification-controller created
    Waiting for deployment "source-controller" rollout to finish: 0 of 1 updated replicas are available...
    deployment "source-controller" successfully rolled out
    deployment "kustomize-controller" successfully rolled out
    deployment "helm-controller" successfully rolled out
    Waiting for deployment "notification-controller" rollout to finish: 0 of 1 updated replicas are available...
    deployment "notification-controller" successfully rolled out
    ✔ install completed
    ► generating sync manifests
    ✔ sync manifests pushed
    ► applying sync manifests
    ◎ waiting for cluster sync
    ✗ kustomization path not found: stat /tmp/flux-system309109433/clusters/k3sdev: no such file or directory
    

    The repository has been created and README.md is there, but the YAML files are empty. Should they be? The repository log shows:

    $ git log -p
    commit cf5e74b0428674aced9f2fd1b45f7d147991fb40 (HEAD -> master, origin/master, origin/HEAD)
    Author: flux <xxxx>
    Date:   Fri Dec 11 10:58:55 2020 +0100
    
        Add manifests
    
    commit bea83df32c89baeec8031da2235b83504a43c6c3
    Author: flux <xxxx>
    Date:   Fri Dec 11 10:58:35 2020 +0100
    
        Add manifests
    
    commit 0bd99a15329e3370dcf82833455e82efb8ff35d7
    Author: xxxx <xxxx>
    Date:   Fri Dec 11 10:58:31 2020 +0100
    
        Initial commit
    
    bug area/bootstrap 
    opened by Marx2 21
  • Bootstrapping new cluster fails on k3s v1.20

    Bootstrapping new cluster fails on k3s v1.20

    I have a k3s cluster working on a Raspberry Pi connected to my home local network. Tried to bootstrap a new GOTK repo using the following command:

    flux bootstrap github \
    --owner=$GITHUB_USER \
    --repository=$CONFIG_REPO \
    --branch=master \
    --path=./clusters/my-cluster \
    --personal \
    --kubeconfig=/etc/rancher/k3s/k3s.yaml
    

    The output for the bootstrapping command (notice the "context deadline exceeded" after "waiting for Kustomization "flux-system/flux-system" to be reconciled"):

    ► connecting to github.com
    ► cloning branch "master" from Git repository "https://github.com/argamanza/raspberry-pi-flux-config.git"
    ✔ cloned repository
    ► generating component manifests
    ✔ generated component manifests
    ✔ component manifests are up to date
    ► installing toolkit.fluxcd.io CRDs
    ◎ waiting for CRDs to be reconciled
    ✔ CRDs reconciled successfully
    ► installing components in "flux-system" namespace
    ✔ installed components
    ✔ reconciled components
    ► determining if source secret "flux-system/flux-system" exists
    ✔ source secret up to date
    ► generating sync manifests
    ✔ generated sync manifests
    ✔ sync manifests are up to date
    ► applying sync manifests
    ✔ reconciled sync configuration
    ◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
    ✗ context deadline exceeded
    ► confirming components are healthy
    ✔ source-controller: deployment ready
    ✔ kustomize-controller: deployment ready
    ✔ helm-controller: deployment ready
    ✔ notification-controller: deployment ready
    ✔ all components are healthy
    ✗ bootstrap failed with 1 health check failure(s)
    

    The logs for the Kustomize Controller expose what the issue might be:

    {"level":"info","ts":"2021-04-24T20:55:51.200Z","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
    {"level":"info","ts":"2021-04-24T20:55:51.202Z","logger":"setup","msg":"starting manager"}
    I0424 20:55:51.206769       7 leaderelection.go:243] attempting to acquire leader lease flux-system/kustomize-controller-leader-election...
    {"level":"info","ts":"2021-04-24T20:55:51.307Z","msg":"starting metrics server","path":"/metrics"}
    I0424 20:56:30.436269       7 leaderelection.go:253] successfully acquired lease flux-system/kustomize-controller-leader-election
    {"level":"info","ts":"2021-04-24T20:56:30.436Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
    {"level":"info","ts":"2021-04-24T20:56:30.437Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
    {"level":"info","ts":"2021-04-24T20:56:30.538Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
    {"level":"info","ts":"2021-04-24T20:56:30.639Z","logger":"controller.kustomization","msg":"Starting Controller","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization"}
    {"level":"info","ts":"2021-04-24T20:56:30.639Z","logger":"controller.kustomization","msg":"Starting workers","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","worker count":4}
    {"level":"info","ts":"2021-04-24T20:56:47.576Z","logger":"controller.kustomization","msg":"Kustomization applied in 2.713132582s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"configured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"unchanged","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"unchanged","namespace/flux-system":"unchanged","networkpolicy.networking.k8s.io/allow-egress":"unchanged","networkpolicy.networking.k8s.io/allow-scraping":"unchanged","networkpolicy.networking.k8s.io/allow-webhooks":"unchanged","service/notification-controller":"unchanged","service/source-controller":"unchanged","service/webhook-receiver":"unchanged","serviceaccount/helm-controller":"unchanged","serviceaccount/kustomize-controller":"unchanged","serviceaccount/notification-controller":"unchanged","serviceaccount/source-controller":"unchanged"}}
    {"level":"error","ts":"2021-04-24T20:56:47.609Z","logger":"controller.kustomization","msg":"unable to update status after reconciliation","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"flux-system\" is invalid: status.snapshot.entries.namespace: Invalid value: \"null\": status.snapshot.entries.namespace in body must be of type string: \"null\""}
    {"level":"error","ts":"2021-04-24T20:56:47.609Z","logger":"controller.kustomization","msg":"Reconciler error","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"flux-system\" is invalid: status.snapshot.entries.namespace: Invalid value: \"null\": status.snapshot.entries.namespace in body must be of type string: \"null\""}
    {"level":"info","ts":"2021-04-24T20:56:53.835Z","logger":"controller.kustomization","msg":"Kustomization applied in 2.470822475s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"configured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"unchanged","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"unchanged","namespace/flux-system":"unchanged","networkpolicy.networking.k8s.io/allow-egress":"unchanged","networkpolicy.networking.k8s.io/allow-scraping":"unchanged","networkpolicy.networking.k8s.io/allow-webhooks":"unchanged","service/notification-controller":"unchanged","service/source-controller":"unchanged","service/webhook-receiver":"unchanged","serviceaccount/helm-controller":"unchanged","serviceaccount/kustomize-controller":"unchanged","serviceaccount/notification-controller":"unchanged","serviceaccount/source-controller":"unchanged"}}
    {"level":"error","ts":"2021-04-24T20:56:53.863Z","logger":"controller.kustomization","msg":"unable to update status after reconciliation","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"flux-system\" is invalid: status.snapshot.entries.namespace: Invalid value: \"null\": status.snapshot.entries.namespace in body must be of type string: \"null\""}
    

    From the logs I can tell that status.snapshot.entries.namespace shouldn't be null for the flux-system Kustomization. After testing the same bootstrap procedure on a local machine using a cluster I provisioned with kind, I can see that the Kustomization is indeed missing the status.snapshot data in the k3s cluster, while on my local kind cluster it exists:

    On the k3s cluster:

    kubectl describe kustomization flux-system -n flux-system
    
    Name:         flux-system
    Namespace:    flux-system
    Labels:       kustomize.toolkit.fluxcd.io/checksum=1d4c5beef02b0043768a476cc3fed578aa3ed6f0
                  kustomize.toolkit.fluxcd.io/name=flux-system
                  kustomize.toolkit.fluxcd.io/namespace=flux-system
    Annotations:  <none>
    API Version:  kustomize.toolkit.fluxcd.io/v1beta1
    Kind:         Kustomization
    Metadata:
      Creation Timestamp:  2021-04-24T19:42:50Z
      Finalizers:
        finalizers.fluxcd.io
      Generation:  1
    ...
    ...
    Status:
      Conditions:
        Last Transition Time:  2021-04-24T19:43:30Z
        Message:               reconciliation in progress
        Reason:                Progressing
        Status:                Unknown
        Type:                  Ready
    Events:
      Type    Reason  Age   From                  Message
      ----    ------  ----  ----                  -------
      Normal  info    57m   kustomize-controller  customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io configured
    ...
    

    On the kind cluster:

    kubectl describe kustomization flux-system -n flux-system
    
    Name:         flux-system
    Namespace:    flux-system
    Labels:       kustomize.toolkit.fluxcd.io/checksum=1d4c5beef02b0043768a476cc3fed578aa3ed6f0
                  kustomize.toolkit.fluxcd.io/name=flux-system
                  kustomize.toolkit.fluxcd.io/namespace=flux-system
    Annotations:  <none>
    API Version:  kustomize.toolkit.fluxcd.io/v1beta1
    Kind:         Kustomization
    Metadata:
      Creation Timestamp:  2021-04-25T12:35:37Z
      Finalizers:
        finalizers.fluxcd.io
      Generation:  1
    ...
    ...
    Status:
      Conditions:
        Last Transition Time:   2021-04-25T12:37:02Z
        Message:                Applied revision: master/dbce13415e4118bb071b58dab20d1f2bec527a14
        Reason:                 ReconciliationSucceeded
        Status:                 True
        Type:                   Ready
      Last Applied Revision:    master/dbce13415e4118bb071b58dab20d1f2bec527a14
      Last Attempted Revision:  master/dbce13415e4118bb071b58dab20d1f2bec527a14
      Observed Generation:      1
      Snapshot:
        Checksum:  1d4c5beef02b0043768a476cc3fed578aa3ed6f0
        Entries:
          Kinds:
            /v1, Kind=Namespace:                                     Namespace
            apiextensions.k8s.io/v1, Kind=CustomResourceDefinition:  CustomResourceDefinition
            rbac.authorization.k8s.io/v1, Kind=ClusterRole:          ClusterRole
            rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding:   ClusterRoleBinding
          Namespace:
          Kinds:
            /v1, Kind=Service:                                        Service
            /v1, Kind=ServiceAccount:                                 ServiceAccount
            apps/v1, Kind=Deployment:                                 Deployment
            kustomize.toolkit.fluxcd.io/v1beta1, Kind=Kustomization:  Kustomization
            networking.k8s.io/v1, Kind=NetworkPolicy:                 NetworkPolicy
            source.toolkit.fluxcd.io/v1beta1, Kind=GitRepository:     GitRepository
          Namespace:                                                  flux-system
    Events:
      Type    Reason  Age    From                  Message
      ----    ------  ----   ----                  -------
      Normal  info    3m53s  kustomize-controller  customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io configured
    ...
    

    This is also where my debugging process came to a dead end, as I couldn't find a reason why status.snapshot doesn't populate on my k3s cluster while it does on the kind cluster using the same bootstrap process.

    I believe the fact that the issue only occurs on my raspberry pi implies that it might be a networking issue of some kind that prevents the kustomize controller from getting status updates from GitHub and I need to handle port forwarding or something similar, but I'm not sure.

    • Kubernetes version: v1.20.6+k3s1
    • Git provider: GitHub
    flux --version
    flux version 0.13.1
    
    flux check
    ► checking prerequisites
    ✔ kubectl 1.20.6+k3s1 >=1.18.0-0
    ✔ Kubernetes 1.20.6+k3s1 >=1.16.0-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► ghcr.io/fluxcd/helm-controller:v0.10.0
    ✔ kustomize-controller: deployment ready
    ► ghcr.io/fluxcd/kustomize-controller:v0.11.1
    ✔ notification-controller: deployment ready
    ► ghcr.io/fluxcd/notification-controller:v0.13.0
    ✔ source-controller: deployment ready
    ► ghcr.io/fluxcd/source-controller:v0.12.1
    ✔ all checks passed
    
    blocked-upstream 
    opened by argamanza 20
  • No matches for kind ImageRepository

    No matches for kind ImageRepository

    Hello,

    I'm trying to set up automatic image updates with a simple YAML for ImageRepository, as such:

    apiVersion: image.toolkit.fluxcd.io/v1alpha1
    kind: ImageRepository
    metadata:
      name: mobile-test-repo
      namespace: flux-system
    spec:
      image: <id>.dkr.ecr.<zone>.amazonaws.com/<myrepo>
      interval: 1m0s
    

    However, I am getting the following error:

    error: unable to recognize "flux-image-registry.yml": no matches for kind "ImageRepository" in version "image.toolkit.fluxcd.io/v1alpha1"
    

    Flux was set up simply using flux bootstrap gitlab.
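
    Worth noting: a plain flux bootstrap installs only the four core controllers, and the image.toolkit.fluxcd.io CRDs ship with the image automation controllers, so they have to be requested explicitly; a hedged sketch with placeholder owner and repository values (also check that the apiVersion in your manifest matches the version served by the installed controller):

    flux bootstrap gitlab \
      --owner=<group-or-user> \
      --repository=<repository> \
      --components-extra=image-reflector-controller,image-automation-controller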

    opened by andrei-dascalu 19
  • gitlab.com Bootstrap issue

    gitlab.com Bootstrap issue

    When starting a fresh install of the GitOps Toolkit, it manages to do most steps, except that it doesn't successfully add the deploy key.

    On bootstrap:

    gotk bootstrap gitlab --owner acresoftware/terraform --repository k8s-config \
      --branch develop --path gitops/configs/dev-cluster \
      --components=source-controller,kustomize-controller,helm-controller,notification-controller \
      --version v0.1.5

    ► connecting to gitlab.com
    ✔ repository cloned
    ✚ generating manifests
    ✔ components manifests pushed
    ► installing components in gotk-system namespace
    Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
    namespace/gotk-system configured
    networkpolicy.networking.k8s.io/deny-ingress created
    role.rbac.authorization.k8s.io/crd-controller-gotk-system created
    rolebinding.rbac.authorization.k8s.io/crd-controller-gotk-system created
    clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-gotk-system created
    customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io configured
    customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io configured
    customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io configured
    customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io configured
    service/source-controller created
    deployment.apps/source-controller created
    customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io configured
    deployment.apps/kustomize-controller created
    customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io configured
    deployment.apps/helm-controller created
    customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io configured
    customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io configured
    customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io configured
    service/notification-controller created
    service/webhook-receiver created
    deployment.apps/notification-controller created
    Waiting for deployment "source-controller" rollout to finish: 0 of 1 updated replicas are available...
    deployment "source-controller" successfully rolled out
    deployment "kustomize-controller" successfully rolled out
    deployment "helm-controller" successfully rolled out
    deployment "notification-controller" successfully rolled out
    ✔ install completed
    ► configuring deploy key
    ✗ failed to list deploy keys, error: GET https://gitlab.com/api/v4/projects/20733394/deploy_keys: 403 {message: 403 Forbidden}
    

    I have seen this before as well on the alpha version and had just taken the key from the secret and added it manually. Any idea what's happening here? I have put all permissions on the Personal Access Token; could it have to do with group permissions?
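
    One way to narrow this down is to call the same API endpoint from the error message directly with the token bootstrap is using (assuming it is exported as GITLAB_TOKEN); a sketch:

    curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
      "https://gitlab.com/api/v4/projects/20733394/deploy_keys"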

    opened by SpectralHiss 18
  • Propose security model for impersonation/tenancy

    Propose security model for impersonation/tenancy

    This proposal describes a potential API design with supporting controller implementation details for secure, multi-tenant API primitives within Flux.

    The examples clarify the utility of certain changes which may at first seem esoteric. This implementation should allow people using and building platforms on Flux to make concise choices toward implementing their own security needs and behavioral constraints.

    Ideally this document and the surrounding conversations can graduate to becoming documentation for various audiences.

    The ideas represented here are the work of many folks -- not limited to @stefanprodan @squaremo @hiddeco @jonathan-innis and myself. For previous comments, see #263

    opened by stealthybox 17
  • flux altering chart version on helmrelease reconciliation

    flux altering chart version on helmrelease reconciliation

    Describe the bug

    It appears Flux is altering the chart version when reconciling a HelmRelease that already exists in the cluster (installed with the Helm CLI).

    This is causing problems when we're using the chart version in a metadata field, since the modification to the chart version includes the character "+", which is illegal for that field.

    Steps to reproduce

    1. helm install helm chart
    2. allow flux to apply helmrelease

    Expected behavior

    flux runs a helm upgrade against the same helm chart with the modified values hierarchy supplied in the helmrelease

    Screenshots and recordings

    % helm history -n mychart mychart
    REVISION  UPDATED                   STATUS    CHART            APP VERSION  DESCRIPTION
    1         Thu Jun 23 21:27:52 2022  deployed  mychart-1.4.4                 Install complete
    2         Fri Jun 24 02:28:58 2022  failed    mychart-1.4.4+1               Upgrade "mychart" failed: cannot patch "mychart-mychart" with kind ConfigMap: ConfigMap "mychart-mychart" is invalid: metadata.labels: Invalid value: "mychart-1.4.4+1": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')

    OS / Distro

    linux centos 7.9

    Flux version

    flux: v0.31.1

    Flux check

    ► checking prerequisites
    ✗ flux 0.31.1 <0.31.2 (new version is available, please upgrade)
    ✗ Kubernetes version v1.19.8 does not match >=1.20.6-0
    ► checking controllers
    ✔ helm-controller: deployment ready
    ► car:5000/helm-controller:v0.18.2
    ✔ image-automation-controller: deployment ready
    ► car:5000/image-automation-controller:v0.21.2
    ✔ image-reflector-controller: deployment ready
    ► car:5000/image-reflector-controller:v0.17.1
    ✔ kustomize-controller: deployment ready
    ► car:5000/kustomize-controller:v0.22.2
    ✔ notification-controller: deployment ready
    ► car:5000/notification-controller:v0.23.1
    ✔ source-controller: deployment ready
    ► car:5000/source-controller:v0.22.4

    Git provider

    No response

    Container Registry provider

    No response

    Additional context

    Air-gapped environment, thus the recent Flux version against an old Kubernetes version.

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by daveoy 3
  • flux cli OCI helm repo reconcile panic

    flux cli OCI helm repo reconcile panic

    Describe the bug

    #flux reconcile source helm cprc

    ► annotating HelmRepository cprc in flux-system namespace
    ✔ HelmRepository annotated
    ◎ waiting for HelmRepository reconciliation
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x192c1a1]
    
    goroutine 1 [running]:
    main.helmRepositoryAdapter.successMessage({0x21dfc98?})
            /home/runner/work/flux2/flux2/cmd/flux/reconcile_source_helm.go:49 +0x21
    main.reconcileCommand.run({{{0x1e7c81c, 0xe}, {0x1e7833e, 0xb}, {{0x1e926f2, 0x18}, {0x1e70362, 0x7}}}, {0x21dfc98, 0xc000446a80}}, ...)
            /home/runner/work/flux2/flux2/cmd/flux/reconcile.go:138 +0xa0b
    github.com/spf13/cobra.(*Command).execute(0x318bf00, {0xc0003e96d0, 0x1, 0x1})
            /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:856 +0x67c
    github.com/spf13/cobra.(*Command).ExecuteC(0x3190780)
            /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:974 +0x3b4
    github.com/spf13/cobra.(*Command).Execute(...)
            /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:902
    main.main()
            /home/runner/work/flux2/flux2/cmd/flux/main.go:160 +0x32
            
    

    Steps to reproduce

    helm repo

    apiVersion: source.toolkit.fluxcd.io/v1beta2
    kind: HelmRepository
    metadata:
      name: cprc
      namespace: flux-system
    spec:
      interval: 1m
      timeout: 60s
      type: oci
      url: oci://harbor/mirror/
    

    run flux reconcile source helm cprc

    Expected behavior

    The HelmRelease using this repo returns: Helm Chart 'flux-system/kafka-cp-schema-registry' is not ready

    helmchart:

    chart pull error: chart pull error: failed to get chart version for remote reference: invalid_reference: invalid repository
    

    But I don't see any requests from Flux in the Harbor logs.

    Screenshots and recordings

    No response

    OS / Distro

    ubuntu 20.04.04

    Flux version

    flux: v0.31.1

    Flux check

    ► checking prerequisites
    ✔ Kubernetes 1.22.9-eks-a64ea69 >=1.20.6-0
    ► checking controllers
    ✔ all checks passed

    Git provider

    Gitlab

    Container Registry provider

    Harbor

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by uderik 2
  • [POC] Add commands for managing OCI artifacts

    [POC] Add commands for managing OCI artifacts

    This PR is a proof of concept implementation of commands for managing OCI artifacts as described in the RFC Flux OCI support for Kubernetes manifests.

    OCI artifact commands

    $ flux push artifact ghcr.io/org/repository/app-config:v0.0.1 \
    	--path="./manifests" \
    	--source="$(git config --get remote.origin.url)" \
    	--revision="$(git branch --show-current)/$(git rev-parse HEAD)"
    
    $ flux tag artifact ghcr.io/org/repository/app-config:v0.0.1 --tag latest --tag production
    
    $ flux list artifacts ghcr.io/org/repository/app-config
    
    $ flux pull artifact ghcr.io/org/repository/app-config:latest --output ./tmp
    
    $ flux build artifact --path ./manifests --output ./tmp/artifact.tgz
    

    For authentication purposes, all flux <verb> artifact commands use the ~/.docker/config.json config file and the Docker credential helpers.
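
    So before pushing or pulling artifacts, registry credentials are expected to already be present in that file, e.g. via a regular docker login:

    $ docker login ghcr.io -u <github-user>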

    OCI repository commands

    $ flux create source oci podinfo-oci \
    --url ghcr.io/stefanprodan/manifests/podinfo \
    --tag 6.1.6 \
    --interval 10m
    
    $ flux create kustomization podinfo-oci \
    --source=OCIRepository/podinfo-oci \
    --path="./kustomize" \
    --prune=true \
    --interval=5m \
    --target-namespace=default \
    --wait=true
    
    $ flux get sources oci
    $ flux reconcile source oci podinfo-oci
    $ flux suspend source oci podinfo-oci
    $ flux resume source oci podinfo-oci
    $ flux export source oci podinfo-oci
    $ flux delete ks podinfo-oci --silent
    $ flux delete source oci podinfo-oci --silent
    
    area/oci 
    opened by stefanprodan 0
  • why is imagePolicy triggering error

    why is imagePolicy triggering error

    My issue: I have a Job that is used to pull down, on demand, a one-shot pod which moves some content into my environment. Since it is not recurring, this is the perfect resource type for it.

          apiVersion: batch/v1
          kind: Job
    

    When I have the job just as a deployment, i.e. with no image repository checking or updating, it seems to work just fine.

    When I attempt to add an updater it throws this error:

    ImagePolicy/flux-system/frontend-deployer-latest dry-run failed, error: failed to create typed patch object: .spec.template: field not declared in schema

    My suspicion is that image.toolkit just really does not like kube Jobs.

    looking for opinions

    .
    ├── base
    │   ├
    │   └── frontend-deployer
    │       ├── job.yaml
    │       ├── kustomization.yaml
    │       └── updater.yaml
    └── dev
        ├── frontend-deployer-latest.yaml
        └── kustomization.yaml
    

    updater.yaml

    apiVersion: image.toolkit.fluxcd.io/v1beta1
    kind: ImageRepository
    metadata:
      name: frontend-deployer
      namespace: flux-system
    spec:
      image: ghcr.io/vodori/frontend-deployer
      interval: 1m0s
      secretRef:
        name: regcred
    
    ---
    apiVersion: image.toolkit.fluxcd.io/v1beta1
    kind: ImagePolicy
    metadata:
      name: frontend-deployer-${version}
      namespace: flux-system
    spec:
      imageRepositoryRef:
        name: frontend-deployer
      filterTags:
        pattern: "^${tracking_branch}-[a-f0-9]+-(?P<ts>[0-9]+)"
        extract: '$ts'
      policy:
        numerical:
          order: asc
    
    ---
    apiVersion: image.toolkit.fluxcd.io/v1beta1
    kind: ImageUpdateAutomation
    metadata:
      name: frontend-deployer-${version}
      namespace: flux-system
    spec:
      interval: 1m0s
      sourceRef:
        kind: GitRepository
        name: flux-system
      git:
        checkout:
          ref:
            branch: ${branch}
        commit:
          author:
            email: [email protected]
            name: fluxcdbot
          messageTemplate: '{{range .Updated.Images}}{{println .}}{{end}}'
        push:
          branch: ${branch}
      update:
        path: ./apps/${update_dir}
        strategy: Setters
    

    job.yaml

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: frontend-deployer-${version}
      namespace: ${target_namespace}
    spec:
      template:
        spec:
          imagePullSecrets:
            - name: regcred
          containers:
            - name: frontend-deployer
              image: ghcr.io/vodori/frontend-deployer:latest-main
              imagePullPolicy: Always
              env:
                - name: PREFIX
                  value: /etc/nginx/html
                - name: AWS_ENV_NAME
                  valueFrom:
                    configMapKeyRef:
                      name: aws-env-info
                      key: name
                - name: DOCKER_HOST
                  value: tcp://localhost:2375
              volumeMounts:
                - name: regcred
                  mountPath: "/root/.docker-secret/"
                  readOnly: true
                - name: semaphore
                  mountPath: /signal
              lifecycle:
                postStart:
                  exec:
                    command:
                      - /bin/sh
                      - -c
                      - touch /signal/healthy; /root/.docker/config.json
                preStop:
                  exec:
                    command:
                      - /bin/sh
                      - -c
                      - rm -rf /signal/healthy
            - name: dind-daemon
              image: docker:1.12.6-dind
              resources:
    #### Snipped some unimportant stuff
          serviceAccountName: vodori-flow-frontend
          restartPolicy: Never
    

    frontend-deployer-latest.yaml

    apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    kind: Kustomization
    metadata:
      name: frontend-deployer-latest
      namespace: flux-system
    spec:
      interval: 1m0s
      path: ./apps/base/frontend-deployer
      prune: true
      sourceRef:
        kind: GitRepository
        name: flux-system
      images:
      - name: ghcr.io/vodori/frontend-deployer
        newTag: latest-main # {"$imagepolicy": "flux-system:frontend-deployer-latest:tag"}
      postBuild:
        substitute:
          update_dir: dev
          version: latest
          tracking_branch: main
          target_namespace: cloud-dev
          develop: develop-8d146f6b-1655137519 # {"$imagepolicy": "flux-system:flow-frontend-latest:tag"}
          r18: release-R18-fc4485f5-1655230817 # {"$imagepolicy": "flux-system:flow-frontend-r18:tag"}
          r17: release-R17-8d146f6b-1655139849 # {"$imagepolicy": "flux-system:flow-frontend-r17:tag"}
        substituteFrom:
        - kind: ConfigMap
          name: cluster-vars
      wait: true
      force: true
      patches:
      - patch: |-
          apiVersion: batch/v1
          kind: Job
          metadata:
            name: frontend-deployer
            namespace: ${target_namespace}
          spec:
            template:
              spec:
                containers:
                  - name: frontend-deployer
                    env:
                      - name: ARTIFACT_LATEST
                        value: ghcr.io/vodori/flow-frontend:${develop}
                      - name: ARTIFACT_R18
                        value: ghcr.io/vodori/flow-frontend:${r18}
                      - name: ARTIFACT_R17
                        value: ghcr.io/vodori/flow-frontend:${r17}
    

    kustomization.yaml (in base)

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - job.yaml
      - updater.yaml
    commonLabels:
      app: frontend-deployer
      version: ${version}
    

    Originally posted by @jvr-vodori in https://github.com/fluxcd/flux2/discussions/2846

    opened by jvr-vodori 3
  • logs: write into writer from io.Pipe instead of os.Stdout

    logs: write into writer from io.Pipe instead of os.Stdout

    This PR replaces os.Stdout with a writer from io.Pipe(), and simplifies the string comparison with strings.EqualFold().

    Thanks in advance for your review!

    Signed-off-by: TianZong48 [email protected]

    opened by TianZong48 1
  • Checksum fails for chocolatey package v0.31.1 & triggers alert for embedded trojan in microsoft defender

    Checksum fails for chocolatey package v0.31.1 & triggers alert for embedded trojan in microsoft defender

    Describe the bug

    Got this error when trying to install on Windows 10; possibly Windows Defender is altering the file due to a (false?) positive on a trojan: Trojan:Script/Oneeva.A!ml in file: C:\Users___\AppData\Local\Temp\chocolatey\flux\0.31.1\flux_0.31.1_windows_amd64.zip

    error: Downloading flux 64 bit from 'https://github.com/fluxcd/flux2/releases/download/v0.31.1/flux_0.31.1_windows_amd64.zip'
    Progress: 100% - Completed download of C:\Users\Joost\AppData\Local\Temp\chocolatey\flux\0.31.1\flux_0.31.1_windows_amd64.zip (15.05 MB).
    Download of flux_0.31.1_windows_amd64.zip (15.05 MB) completed.

    Unhandled Exception: System.IO.IOException: Operation did not complete successfully because the file contains a virus or potentially unwanted software.

    at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
    at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost)
    at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
    at checksum.Program.Main(String[] args)

    ERROR: Checksum for 'C:\Users\xxxx\AppData\Local\Temp\chocolatey\flux\0.31.1\flux_0.31.1_windows_amd64.zip' did not meet '85C4B7D47DC081CAEEF31F3FCED20D25FE3FCCFB8ABB061C97131B9F8FC02043' for checksum type 'SHA256'. Consider passing the actual checksums through with --checksum --checksum64 once you validate the checksums are appropriate. A less secure option is to pass --ignore-checksums if necessary. The install of flux was NOT successful. Error while running 'C:\ProgramData\chocolatey\lib\flux\tools\chocolateyinstall.ps1'. See log for details.

    Chocolatey installed 0/1 packages. 1 packages failed. See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

    Steps to reproduce

    choco install flux

    Expected behavior

    flux correctly installed

    Screenshots and recordings

    No response

    OS / Distro

    windows 10

    Flux version

    v0.31.1

    Flux check

    NA

    Git provider

    No response

    Container Registry provider

    No response

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by dejoost 5
Releases (v0.31.2)

Owner

Flux project
Open and extensible continuous delivery solution for Kubernetes