Creates Helm charts from Kubernetes YAML

Overview

Helmify


A CLI that creates Helm charts from Kubernetes YAML manifests.

Helmify reads a list of supported k8s objects from stdin and converts them to a Helm chart. It is designed to generate charts for k8s operators, but is not limited to them. See examples of charts generated by helmify.

Submit an issue if a feature is missing for your use case.

Usage

  1. From file: cat my-app.yaml | helmify mychart

    Creates a 'mychart' directory containing a Helm chart built from the k8s objects in the YAML file.

  2. From directory with yamls:

    awk 'FNR==1 && NR!=1  {print "---"}{print}' /<my_directory>/*.yaml | helmify mychart

    Creates a 'mychart' directory containing a Helm chart built from all YAML files in <my_directory>.

  3. From kustomize output:

    kustomize build <kustomize_dir> | helmify mychart

    Creates a 'mychart' directory containing a Helm chart built from the kustomize output.

Integrate into your Operator-SDK/Kubebuilder project

Tested with operator-sdk version: "v1.8.0".

  1. Open Makefile in your operator project generated by Operator-SDK or Kubebuilder.
  2. Add these lines to the Makefile (the go-get-tool helper is already defined in Makefiles scaffolded by Operator-SDK/Kubebuilder):
    HELMIFY = $(shell pwd)/bin/helmify
    helmify:
        $(call go-get-tool,$(HELMIFY),github.com/arttor/helmify/cmd/helmify@latest)
    
    helm: manifests kustomize helmify
        $(KUSTOMIZE) build config/default | $(HELMIFY)
  3. Run make helm in the project root. It will generate a Helm chart named 'chart' in the 'chart' directory.

Install

With Homebrew (for MacOS or Linux): brew install arttor/tap/helmify

Or download a binary suitable for your system from the Releases page. Unpack the helmify binary, add it to your PATH, and you are good to go!

Available options

Helmify takes a chart name as an argument. Usage:

helmify [flags] CHART_NAME

CHART_NAME is optional; the default is 'chart'. It can also be a path, e.g. 'deploy/charts/mychart'.

flag       description                                      sample
-h, -help  Print help                                       helmify -h
-v         Enable verbose output (prints WARN and INFO)     helmify -v
-vv        Enable very verbose output (also prints DEBUG)   helmify -vv
-version   Print helmify version                            helmify -version

Status

Supported default operator resources:

  • deployment
  • service
  • RBAC (serviceaccount, (cluster-)role, (cluster-)rolebinding)
  • configs (configmap, secret)
  • webhooks (cert, issuer, ValidatingWebhookConfiguration)

Known issues

  • Helmify will not overwrite the Chart.yaml file if it is present. This is done on purpose.
  • Helmify will not delete existing template files, only overwrite them.
  • Helmify overwrites templates and values files on every run. This means that all your manual changes to Helm template files will be lost on the next run.

Develop

To support templating a new type of k8s object:

  1. Implement the helmify.Processor interface and place the implementation in pkg/processor. The package contains examples for most k8s objects; a rough sketch follows this list.
  2. Register your processor in pkg/app/app.go.
  3. Add a relevant input sample to test_data/kustomize.output.
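
For orientation, here is a rough sketch of a new processor. The interface shapes and helper names below are assumed from names mentioned elsewhere on this page (helmify.Processor, appMeta.TrimName(), appMeta.ChartName()); check pkg/helmify for the actual definitions before copying. The PodDisruptionBudget kind is only an example.

    package pdb

    import (
        "io"

        "github.com/arttor/helmify/pkg/helmify"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
    )

    // pdbGVK identifies the object kind this processor handles (illustrative).
    var pdbGVK = schema.GroupVersionKind{Group: "policy", Version: "v1", Kind: "PodDisruptionBudget"}

    // New returns a processor for PodDisruptionBudget objects.
    func New() helmify.Processor {
        return &pdb{}
    }

    type pdb struct{}

    // Process reports whether this processor can handle obj and, if so,
    // returns a Template that renders it as a chart file.
    func (p *pdb) Process(appMeta helmify.AppMetadata, obj *unstructured.Unstructured) (bool, helmify.Template, error) {
        if obj.GroupVersionKind() != pdbGVK {
            return false, nil, nil // not our kind: let another processor try
        }
        name := appMeta.TrimName(obj.GetName()) // strip the kustomize name prefix
        return true, &pdbTemplate{name: name}, nil
    }

    type pdbTemplate struct {
        name string
    }

    func (t *pdbTemplate) Filename() string { return t.name + ".yaml" }

    func (t *pdbTemplate) Values() helmify.Values { return helmify.Values{} }

    func (t *pdbTemplate) Write(w io.Writer) error {
        // A real processor writes the templated YAML here.
        _, err := io.WriteString(w, "# rendered template for "+t.name+"\n")
        return err
    }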

Run

Clone the repo and execute the command:

cat test_data/k8s-operator-kustomize.output | go run ./cmd/helmify mychart

This generates the mychart Helm chart from the file test_data/k8s-operator-kustomize.output, which represents typical operator kustomize output.

Test

For manual testing, run the program with debug output:

cat test_data/k8s-operator-kustomize.output | go run ./cmd/helmify -vv mychart

Then inspect the logs and the generated chart in the ./mychart directory.

To execute tests, run:

go test ./...

Besides unit tests, the project contains an e2e test in pkg/app/app_e2e_test.go. It is a Go test that uses test_data/* to generate a chart in a temporary directory, then runs helm lint --strict to check that the generated chart is valid.
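
A condensed sketch of that e2e pattern, using only the standard library plus the go and helm binaries on PATH; the actual test in pkg/app/app_e2e_test.go may be structured differently:

    package app_test

    import (
        "os"
        "os/exec"
        "path/filepath"
        "testing"
    )

    // TestGeneratedChartLints pipes the sample kustomize output through helmify
    // and then lints the generated chart. Paths and the use of `go run` are
    // illustrative rather than copied from the project's test code.
    func TestGeneratedChartLints(t *testing.T) {
        chartDir := filepath.Join(t.TempDir(), "mychart")

        input, err := os.Open("test_data/k8s-operator-kustomize.output")
        if err != nil {
            t.Fatal(err)
        }
        defer input.Close()

        // Generate the chart from the sample manifests.
        gen := exec.Command("go", "run", "./cmd/helmify", chartDir)
        gen.Stdin = input
        if out, err := gen.CombinedOutput(); err != nil {
            t.Fatalf("helmify failed: %v\n%s", err, out)
        }

        // helm lint --strict fails on warnings too, so a pass means a valid chart.
        lint := exec.Command("helm", "lint", "--strict", chartDir)
        if out, err := lint.CombinedOutput(); err != nil {
            t.Fatalf("helm lint failed: %v\n%s", err, out)
        }
    }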

Comments
  • Neither Labels nor Annotations are currently supported for CRDs

    Until the following PRs/issues are reviewed and merged, can we remove the labels and annotations from the CRD template?

    https://github.com/kubernetes-sigs/controller-tools/pull/691
    https://github.com/kubernetes-sigs/controller-tools/issues/454

    It doesn't make sense to include something which will be rejected at a later point by native tooling. Maybe it would be better to use kustomize and show how that could work as an option if needed.

    opened by fire-ant 19
  • Error unclosed action

    Hi, I'm very impressed with helmify; it's saving me a lot of work. Thank you!

    One problem...

        PS C:\dev\Helm\Helmify\One> helm install firstheml mychart
        Error: parse error at (mychart/templates/mongodb-configmap.yaml:8): unclosed action

    That file looks like this:

        apiVersion: v1
        kind: Secret
        metadata:
          name: {{ include "mychart.fullname" . }}-mongodb-secret
          labels:
          {{- include "mychart.labels" . | nindent 4 }}
        data:
          mongo-root-password: {{ required "mongodbSecret.mongoRootPassword is required" .Values.mongodbSecret.mongoRootPassword | b64enc | quote }}
          mongo-root-username: {{ required "mongodbSecret.mongoRootUsername is required" .Values.mongodbSecret.mongoRootUsername | b64enc | quote }}

    The values file contains:

        mongodbSecret:
          mongoRootPassword: ""
          mongoRootUsername: ""

    All help much appreciated

    Doug

    bug 
    opened by AceSyntax 13
  • [Feature Request] Add support for secrets

    In kustomization.yaml it is possible to generate secrets using secretGenerator:

    secretGenerator:
    - files:
      - ca.crt
      name: ca
      type: opaque
    - envs:
      - envfile.env
      name: my-secret
      type: opaque
    

    I would like the helmify tool to generate the secrets in the Helm Chart output.

    opened by david6983 12
  • put any CRD in its own directory

    Thanks for making/maintaining this tool; it fills a much-needed gap in Operator-SDK. Appreciated.

    Per the latest Helm 3 docs, it would be nice if there were an option to place CRDs in their own directory so that users can take advantage of any further changes to how CRDs are handled during the install process. I can also see how this would be cleaner and, by convention, would make other validation tooling simpler to use.

    I haven't written much Go but could have a look if it's more than 5 minutes for anyone else.

    opened by fire-ant 4
  • fix: inject-ca-from is not correctly generated

    Hi there,

    Thanks for creating this fantastic project! This will definitely enable more usage when users want to use both kustomize and Helm charts but also want to maintain a single source of truth.

    When I tried to use helmify with kubebuilder for my admission controller, I noticed that the inject-ca-from field is not correctly converted in the generated Helm chart.

    Expected:

        cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "chart.fullname" . }}-serving-cert
    

    Actual:

        cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "chart.fullname" . }}-admission-controller-system/admission-controller-serving-cert
    

    It looks like the namespace and prefix are not correctly trimmed. Somehow appMeta.ChartName() is always 'chart', so it fails to remove those prefixes.

    This PR fixes the issue by using appMeta.TrimName() for trimming, like the other processors do.
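
    As a loose illustration of that trimming (a self-contained sketch, not the actual helmify code; the prefix value is assumed for the example):

        package main

        import (
            "fmt"
            "strings"
        )

        // trimName mimics what appMeta.TrimName is described to do above: strip
        // the kustomize name prefix so only the object's base name remains.
        func trimName(name, prefix string) string {
            return strings.TrimPrefix(name, prefix)
        }

        func main() {
            base := trimName("admission-controller-serving-cert", "admission-controller-")
            // Prints the expected form of the annotation after the fix.
            fmt.Printf("cert-manager.io/inject-ca-from: "+
                "{{ .Release.Namespace }}/{{ include \"chart.fullname\" . }}-%s\n", base)
        }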

    Let me know if you have any questions or comments.

    Thanks a lot! Sam

    opened by holyspectral 4
  • initialising/updating crds do not inherit chart labels when running helmify against the kustomize build example

    Helmify version: 0.3.15. Example chart: https://github.com/fire-ant/consulkv-operator

    Instructions: clone the example chart and run 'make helm'. The output returns a diff where the labels will not render.

    Expected result: the Go template instruction remains unchanged and is inserted into the labels field of the crd.yaml.

    Actual result: helmify does not render the labels for the CRD in 'crds/'.

    Due to how CRDs are templated, it appears that the labels line of the CRD needs to be affixed to an include from the _helpers in the templates directory; alternatively, it may be better to handle this templating inside the logic of helmify itself (I'm a little unsure which is better and why).

    opened by fire-ant 3
  • why helmify doesn't support Daemonset and Statefulset?

    Hello, arttor. I think DaemonSet and StatefulSet are frequently used resources, so why doesn't helmify support them? Is support for StatefulSet and DaemonSet planned?

    opened by huangzixun123 3
  • Issue when using imagePullSecret (Secret type: kubernetes.io/dockerconfigjson)

    Hello! Hope you are well.

    When given a secret of type: kubernetes.io/dockerconfigjson, helmify does not preserve the type field in the resulting template:

    Input:

    apiVersion: v1
    kind: Secret
    type: kubernetes.io/dockerconfigjson
    metadata:
      name: foobar-registry-credentials
    data:
      .dockerconfigjson: |
        ewogICAgImF1dGhzIjogewogICAgICAgICJmb28uYmFyLmlvIjogewogICAgICAgICAgIC
        AidXNlcm5hbWUiOiAidXNlcm5hbWUiLAogICAgICAgICAgICAicGFzc3dvcmQiOiAic2Vj
        cmV0IiwKICAgICAgICAgICAgImF1dGgiOiAiZFhObGNtNWhiV1U2YzJWamNtVjAiCiAgIC
        AgICAgfQogICAgfQp9
    

    Output:

    apiVersion: v1
    kind: Secret
    metadata:
      name: {{ include "foobar.fullname" . }}-registry-credentials
      labels:
      {{- include "foobar.labels" . | nindent 4 }}
    data:
      .dockerconfigjson: {{ required "registryCredentials.dockerconfigjson is required"
        .Values.registryCredentials.dockerconfigjson | b64enc | quote }}
    

    I looked around the code and it seems like I might be able to simply add a field for type here and here. I will try to get something working and will make a PR if I'm successful.
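
    For what it's worth, reading that optional field with the apimachinery unstructured helpers would look roughly like this; the surrounding processor wiring is assumed, not taken from helmify:

        package main

        import (
            "fmt"

            "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        )

        func main() {
            // Minimal stand-in for the parsed Secret object.
            obj := &unstructured.Unstructured{Object: map[string]interface{}{
                "apiVersion": "v1",
                "kind":       "Secret",
                "type":       "kubernetes.io/dockerconfigjson",
            }}

            // NestedString reads the optional `type` field; a fix would carry this
            // value through into the generated template.
            secretType, found, err := unstructured.NestedString(obj.Object, "type")
            if err == nil && found {
                fmt.Println("type:", secretType) // emit `type: ...` in the template
            }
        }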

    Also (maybe an unrelated issue): I am using kustomize to add the 'foobar' name prefix. Helmify properly added {{ include "foobar.fullname" . }} to my Secret's name; however, in my deployment spec, I just see 'foobar':

    ...
    spec:
      template:
        spec:
          ...
          imagePullSecrets:
          - name: foobar-registry-credentials
    
    opened by f1tzpatrick 3
  • Ingress backend service name not being prefixed with xx.fullname and service name is being changed incorrectly

        apiVersion: v1
        kind: Service
        metadata:
          name: firelymongo-service
          namespace: xxx
        spec:
          selector:
            pod-Label: my-pod
          ports:
            - protocol: TCP
              port: 4080
              targetPort: 4080

        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: firelyapi-ingress
          namespace: xxx
          annotations:
            nginx.ingress.kubernetes.io/rewrite-target: /$2
        spec:
          rules:
            - host: dev.local.xyz.com
              http:
                paths:
                  - backend:
                      service:
                        name: firelymongo-service
                        port:
                          number: 4080
                    path: /firelyapi(/|$)(.*)
                    pathType: ImplementationSpecific
          tls:
            - hosts:
                - dev.local.xyz.com

    The service is named 'firelymongo-service', but the file generated for it is called just mongo-service.yaml. In that file...

        apiVersion: v1
        kind: Service
        metadata:
          name: {{ include "ft.fullname" . }}-mongo-service
          labels:
          {{- include "ft.labels" . | nindent 4 }}

    the service has its 'firely' part removed, which is wrong I believe.

    The api-ingress.yaml that is generated is...

        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: {{ include "ft.fullname" . }}-api-ingress
          labels:
          {{- include "ft.labels" . | nindent 4 }}
          annotations:
            nginx.ingress.kubernetes.io/rewrite-target: /$2
        spec:
          rules:
            - host: dev.local.elektaplatform.com
              http:
                paths:
                  - backend:
                      service:
                        name: firelymongo-service
                        port:
                          number: 4080
                    path: /firelyapi(/|$)(.*)
                    pathType: ImplementationSpecific
          tls:
            - hosts:
                - dev.local.elektaplatform.com

    Note: the backend name is 'firelymongo-service'. There is no fullname template here, so I'm having to manually change it to name: {{ include "ft.fullname" . }}-mongo-service, which works fine. I'm using PowerShell: Get-Content firelyServer-WithMongo.yaml | helmify ft

    So part of the service name is being removed, and the generated file also has that part removed (this isn't critical but seems strange). Also, the backend service name inside the ingress is not being templated, which is causing an issue.

    Or am I missing something?

    opened by AceSyntax 3
  • Seg fault processing a Deployment

    Trying to helmify some manifests, I get the following segmentation fault:

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1691f93]
    
    goroutine 1 [running]:
    github.com/arttor/helmify/pkg/processor/deployment.deployment.Process({}, {0x1906678, 0xc0000aed40}, 0xc00000e548)
    	/home/runner/work/helmify/helmify/pkg/processor/deployment/deployment.go:71 +0x293
    github.com/arttor/helmify/pkg/app.(*appContext).process(0xc0000e9d50, 0xc00000e548)
    	/home/runner/work/helmify/helmify/pkg/app/context.go:75 +0x39c
    github.com/arttor/helmify/pkg/app.(*appContext).CreateHelm(0xc0000e9d50, 0xc000308000)
    	/home/runner/work/helmify/helmify/pkg/app/context.go:57 +0x252
    github.com/arttor/helmify/pkg/app.Start({0x18f4d20, 0xc00000e010}, {{0x7ffeefbffa88, 0xc}, {0x17ee766, 0x1}, 0x0, 0x0})
    	/home/runner/work/helmify/helmify/pkg/app/app.go:58 +0x737
    main.main()
    	/home/runner/work/helmify/helmify/cmd/helmify/main.go:21 +0x1b1
    

    Has anyone had a similar situation?

    bug 
    opened by viniciusmucuge 3
  • Webhook does not have correct certificate

    I am trying the tool in our operator, and unfortunately it doesn't seem to inject the certificate into the ValidatingWebhookConfiguration and MutatingWebhookConfiguration correctly:

    kustomize patch:

    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: mutating-webhook-configuration
      annotations:
        cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
    

    kustomize cert:

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: serving-cert  # this name should match the one appeared in kustomizeconfig.yaml
      namespace: kyma-operator
    [...]
    

    generated helm manifests:

    kind: MutatingWebhookConfiguration
    metadata:
      name: {{ include "chart.fullname" . }}-mutating-webhook-configuration
      annotations:
        cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "chart.fullname" . }}-kyma-operator/kyma-operator-serving-cert
    [...]
    
    ---
    
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: {{ include "chart.fullname" . }}-serving-cert
    [...]
    
    bug 
    opened by KaiReichart 2
  • More flags to control what is generated

    For our use case, we really want to leverage helmify to keep CRDs and RBAC up to date with changes in the chart kept in the same repository that houses a large set of controllers.

    However, we want to maintain the chart (e.g. deployment, values, and so on) ourselves. We would really benefit from individual flags within helmify to control what it generates and what it doesn't (e.g. only do CRDs and RBAC).

    enhancement 
    opened by alanmeadows 1
  • Wrong templating in certificate if chart name is the same as namespace name

    The generated Certificate looks like this:

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: {{ include "busola-operator.fullname" . }}-serving-cert
      labels:
      {{- include "busola-operator.labels" . | nindent 4 }}
    spec:
      dnsNames:
      - '{{ include "{{ .Release.Namespace }}.fullname" . }}-$(SERVICE_NAME).$(SERVICE_NAMESPACE).svc'
      - '{{ include "{{ .Release.Namespace }}.fullname" . }}-$(SERVICE_NAME).$(SERVICE_NAMESPACE).svc.cluster.local'
      issuerRef:
        kind: Issuer
        name: '{{ include "busola-operator.fullname" . }}-selfsigned-issuer'
      secretName: webhook-server-cert
    

    Under spec.dnsNames the template is {{ include "{{ .Release.Namespace }}.fullname" . }} although it should be {{ include "busola-operator.fullname" . }}

    I am using helmify version 0.3.7

    It works fine when the chart name is different from the namespace name; this only happens when both are the same.

    opened by KaiReichart 2
  • ERRO[0000] wrong image format: XXX_production_traefik

    kompose --file production.yml -o $HOME/Documents/arthur_aks/arthur-kompose-manifests/production convert -c --volumes hostPath

    Then using helmify:

    awk 'FNR==1 && NR!=1 {print "---"}{print}' $HOME/Documents/arthur_aks/arthur-kompose-manifests/production/templates/*.yaml | helmify $HOME/Documents/arthur_aks/arthur-helmify

    ERRO[0000] helmify finished with error error="wrong image format: arthur_paf_production_traefik"

    This is the production.yml (scaffolded by cookiecutter-django):

    version: '3'
    
    volumes:
    
      production_mongodb_data:
      production_traefik: {}
    
    services:
      django:
        build:
          context: .
          dockerfile: ./compose/production/django/Dockerfile
        image: arthuracr.azurecr.io/arthur_paf_production_django:v1.0.0
        container_name: arthur_paf_production_django
        # platform: linux/x86_64
        depends_on:
          - mongo
        volumes:
          - .:/app:z
          - ../RTL_data/_PAF_Data_samples/PAF_MAIN_FILE:/uploaddata
    
        env_file:
          - ./.envs/.production/.django
          - ./.envs/.production/.mongodb
        ports:
          - "8000:8000"
          - "3000:3000"
        command: /start
        stdin_open: true
        tty: true
    
      mongo:
        image: mongo:5.0.6
        container_name: "mongo"
        restart: always
        env_file:
          - ./.envs/.production/.mongodb
        environment:
          - MONGO_INITDB_ROOT_USERNAME=<USERNAME>
          - MONGO_INITDB_ROOT_PASSWORD=<PASSWORD>
          - MONGO_INITDB_DATABASE=<DATABASE>
          - MONGO_INITDB_USERNAME=<USERNAME> 
          - MONGO_INITDB_PASSWORD=<PASSWORD>
        volumes:
          - production_mongodb_data:/data/db
          # - ${PWD}/_data/mongo:/data/db
          # - ${PWD}/docker/_mongo/fixtures:/import
          # - ${PWD}/docker/_mongo/scripts/init.sh:/docker-entrypoint-initdb.d/setup.sh
        ports:
          - 27017:27017
    
      traefik:
        build:
          context: .
          dockerfile: ./compose/production/traefik/Dockerfile
        image: arthur_paf_production_traefik
        container_name: arthur_paf_production_traefik
        depends_on:
          - django
        volumes:
          - production_traefik:/etc/traefik/acme:z
        ports:
          - "0.0.0.0:80:80"
          - "0.0.0.0:443:443"
    

    helmify -version: Version: 0.3.12, Build Time: 2022-05-17T09:02:12Z, Git Commit: 3634ef55f022db202f123e43e55ebb15161e1892

    helm version --short: v3.7.1+g1d11fcb

    k version --short: Client Version: v1.22.2, Server Version: v1.21.9

    Mac: 12.4

    traefik-deployment.yaml:

    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        kompose.cmd: $HOME/.asdf/installs/kompose/1.26.0/bin/kompose --file production.yml -o $HOME/Documents/arthur_aks/arthur-kompose-manifests/production convert -c --volumes hostPath
        kompose.version: 1.26.0 (40646f47)
      creationTimestamp: null
      labels:
        io.kompose.service: traefik
      name: traefik
    spec:
      replicas: 1
      selector:
        matchLabels:
          io.kompose.service: traefik
      strategy:
        type: Recreate
      template:
        metadata:
          annotations:
            kompose.cmd: $HOME/.asdf/installs/kompose/1.26.0/bin/kompose --file production.yml -o $HOME/Documents/arthur_aks/arthur-kompose-manifests/production convert -c --volumes hostPath
            kompose.version: 1.26.0 (40646f47)
          creationTimestamp: null
          labels:
            io.kompose.service: traefik
        spec:
          containers:
            - image: arthur_paf_production_traefik
              name: arthur-paf-production-traefik
              ports:
                - containerPort: 80
                - containerPort: 443
              resources: {}
              volumeMounts:
                - mountPath: /etc/traefik/acme
                  name: production-traefik
          restartPolicy: Always
          volumes:
            - hostPath:
                path: $HOME/Documents/arthur_paf
              name: production-traefik
    status: {}
    
    

    traefik-service.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: $HOME/.asdf/installs/kompose/1.26.0/bin/kompose --file production.yml -o $HOME/Documents/arthur_aks/arthur-kompose-manifests/production convert -c --volumes hostPath
        kompose.version: 1.26.0 (40646f47)
      creationTimestamp: null
      labels:
        io.kompose.service: traefik
      name: traefik
    spec:
      ports:
        - name: "80"
          port: 80
          targetPort: 80
        - name: "443"
          port: 443
          targetPort: 443
      selector:
        io.kompose.service: traefik
    status:
      loadBalancer: {}
    
    
    
    
    opened by scheung38 0
  • generate multiline configmap

    I am trying to generate a Helm chart from the YAML file https://github.com/Altinity/clickhouse-operator/blob/master/deploy/operator/clickhouse-operator-install-bundle.yaml, but it creates values with all multiline content as one variable:

        defaultPodTemplateYamlExample: |
          apiVersion: "clickhouse.altinity.com/v1"
          kind: "ClickHouseInstallationTemplate"
          metadata:
            name: "default-oneperhost-pod-template"
          spec:
            templates:
              podTemplates:
                - name: default-oneperhost-pod-template
                  distribution: "OnePerHost"

    Am I doing something wrong, or is this what I should expect from helmify?

    opened by navi86 0