The CLI tool gluing Git, Docker, Helm, and Kubernetes with any CI system to implement CI/CD and Giterminism

Overview


___

werf is an Open Source CLI tool written in Go, designed to simplify and speed up the delivery of applications. To use it, you need to describe the configuration of your application (in other words, how to build and deploy it to Kubernetes) and store it in a Git repo — the latter acts as a single source of truth. In short, that's what we call GitOps today.

  • werf builds Docker images using Dockerfiles or an alternative fast built-in builder based on a custom syntax. It also deletes unused images from the Docker registry.
  • werf deploys your application to Kubernetes using a chart in the Helm-compatible format with handy customizations and an improved rollout tracking mechanism, error detection, and log output.

werf is not a complete CI/CD solution, but a tool for creating pipelines that can be embedded into any existing CI/CD system. It literally "connects the dots" to bring these practices into your application. We consider it a new generation of high-level CI/CD tools.
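
In practice, werf embeds into a CI system as a set of CLI invocations. A hypothetical GitLab CI job sketch — the stage name and the $CI_REGISTRY_IMAGE variable are assumptions of this example, not something werf itself requires:

```yaml
# .gitlab-ci.yml — minimal, hypothetical sketch of embedding werf in CI.
# Assumes werf is installed on the runner and $CI_REGISTRY_IMAGE points
# to your container registry.
stages: [converge]

converge:
  stage: converge
  script:
    # Build missing images, publish them to the registry, and deploy the
    # Helm-compatible chart to Kubernetes in a single step.
    - werf converge --repo "$CI_REGISTRY_IMAGE"
  environment:
    name: production
```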

How does it work?

Quickstart

Installation

Features

  • Full application lifecycle management: build and publish images, deploy an application to Kubernetes, and remove unused images based on policies.
  • The description of all rules for building and deploying an application (that may have any number of components) is stored in a single Git repository along with the source code (Single Source Of Truth).
  • Build images using Dockerfiles.
  • Alternatively, werf provides a custom builder tool with support for custom syntax, Ansible, and incremental rebuilds based on Git history.
  • werf supports Helm compatible charts and complex fault-tolerant deployment processes with logging, tracking, early error detection, and annotations to customize the tracking logic of specific resources.
  • werf is a CLI tool written in Go. It can be embedded into any existing CI/CD system to implement CI/CD for your application.
  • Cross-platform development: Linux-based containers can be run on Linux, macOS, and Windows.

Building

  • Effortlessly build as many images as you like in one project.
  • Build images using Dockerfiles or Stapel builder instructions.
  • Build images in parallel on a single host (using file locks).
  • Build several images simultaneously.
  • Build images in a distributed fashion.
  • Content-based tagging.
  • Advanced building process with Stapel:
    • Incremental rebuilds based on git history.
    • Build images with Ansible tasks or Shell scripts.
    • Share a common cache between builds using mounts.
    • Reduce image size by detaching source data and building tools.
  • Build one image on top of another based on the same config.
  • Debugging tools for inspecting the build process.
  • Detailed output.
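
The building features above can be combined in a single config. A minimal werf.yaml sketch with one Dockerfile-built image and one Stapel-built image (the project name, image names, and base images here are illustrative assumptions):

```yaml
# werf.yaml — illustrative sketch, not a drop-in config.
configVersion: 1
project: myapp
---
image: backend
dockerfile: Dockerfile          # built from a regular Dockerfile
---
image: frontend
from: node:16-alpine            # Stapel builder: base image
git:
- add: /frontend                # incremental rebuilds follow Git history
  to: /app
shell:
  install:
  - cd /app && npm ci && npm run build
```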

Deploying

  • Deploy an application to Kubernetes and check if it has been deployed correctly.
    • Track the statuses of all application resources.
    • Control the readiness of resources.
    • Control the deployment process with annotations.
  • Full visibility of both the deployment process and the final result.
    • Logging and error reporting.
    • Regular status reporting during the deployment phase.
    • Debug problems effortlessly without unnecessary kubectl invocations.
  • Prompt CI pipeline failure in case of a problem (i.e. fail fast).
    • Instant detection of resource failures during the deployment process without having to wait for a timeout.
  • Full compatibility with Helm 2.
  • Ability to limit user permissions using RBAC definition when deploying an application (Tiller is compiled into werf and is run under the ID of the outside user that carries out the deployment).
  • Parallel deploys on a single host (using file locks).
  • Distributed parallel deploys (coming soon) #1620.
  • Continuous delivery of images with permanent tags (e.g., when using a branch-based tagging strategy).
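
Per-resource tracking behavior is customized with resource annotations. A hedged sketch — werf documents annotations like these, but verify the exact names and values against your werf version:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: background-worker
  annotations:
    # Do not fail the whole deploy process if this resource misbehaves.
    werf.io/fail-mode: IgnoreAndContinueDeployProcess
    # Stop tracking the resource instead of waiting for full readiness.
    werf.io/track-termination-mode: NonBlocking
spec:
  # ...regular Deployment spec follows...
```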

Cleaning up

  • Clean up the local Docker storage and the Docker registry by enforcing customizable policies.
  • Keep images that are being used in the Kubernetes cluster. werf scans the following kinds of objects: Pod, Deployment, ReplicaSet, StatefulSet, DaemonSet, Job, CronJob, ReplicationController.
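
Cleanup is typically run on a schedule from CI rather than on every commit. A hypothetical GitLab scheduled-job sketch (the stage, variable, and schedule mechanics are assumptions of this example):

```yaml
# .gitlab-ci.yml fragment — hypothetical nightly cleanup job.
cleanup:
  stage: cleanup
  script:
    # Delete images that no cleanup policy keeps and that are not used
    # by objects in the Kubernetes cluster.
    - werf cleanup --repo "$CI_REGISTRY_IMAGE"
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
```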

Coming soon

  • Developing applications locally with werf #1940.
  • Kaniko-like building in userspace that does not require a Docker daemon #1618.
  • 3-way-merge #1616.
  • Content-based tagging #1184.
  • Support for most Docker registry implementations #2199.
  • Parallel image builds #2200.
  • Proven approaches and recipes for the most popular CI systems #1617.
  • Distributed builds with the shared Docker registry #1614.
  • Support for Helm 3 #1606.

Production ready

werf is a mature, reliable tool you can trust. Read about release channels.

Documentation

Detailed documentation is available in multiple languages.

Many guides are provided for developers to quickly deploy apps to Kubernetes using werf.

Community & support

Please feel free to reach out to developers/maintainers and users via GitHub Discussions with any questions regarding werf.

Your issues are processed carefully if posted to GitHub Issues.

You're also welcome to:

  • follow @werf_io to stay informed about all important news, new articles, etc;
  • join our Telegram chat for announcements and ongoing talks: werf_io. (There is a Russian-speaking Telegram chat werf_ru as well.)

License

Apache License 2.0, see LICENSE.

Comments
  • Build fails on a local machine

    Build fails on a local machine

    Good afternoon.

    I have the following problem: when I run make build, the beforeInstall stage fails with an error

    Building stages
      🛸 artifact base
      Use cache image for stage from
          repository: werf-stages-storage/processing
            image_id: a79ece4953bb
             created: 2020-02-19T16:57:55.219984421Z
                 tag: 06bf6b4e0807da1cd6c03bbfdd97620e12f081f6898c38c41821c3b4814210ab
                size: 61.2 MiB
      
       Building stage beforeInstall
       base/beforeInstall  /.werf/stapel/embedded/bin/bash: /.werf/shell/script.sh: No such file or directory
       Info
       Building stage beforeInstall (1.70 seconds) FAILED
      🛸 artifact base (1.70 seconds) FAILED
     Building stages (1.70 seconds) FAILED
    

    The relevant part of werf.yaml:

    artifact: base
    from: "ubuntu:18.04"
    
    mount:
    - from: build_dir
      to: /var/cache/apt
    - from: tmp_dir
      to: /var/lib/apt/lists
    
    shell:
      beforeInstall:
      - ln -sf /usr/share/zoneinfo/Europe/Moscow /etc/localtime
      - apt-get -q update
      - apt-get -q install -y mc nano git curl wget gnupg2 iputils-ping net-tools ca-certificates software-properties-common
      - curl -sL https://deb.nodesource.com/setup_10.x | bash -E -
      - apt-get -q install -y nodejs
    

    I would really appreciate any help.

    opened by dgdagadin87 17
  • Working with the build context

    Working with the build context

    • Added build-context import/export commands.
    • Added archiving of build_dir.
    • Added the --build_context_directory=File option, which defines the directory where the context is saved:
      • an archive of the images (images.tar);
      • an archive with the build_dir folder (build.tar).
    • Reworked the build command:
      • on a build failure, the context is saved if the --build_context_directory=File option was passed.
    opened by alexey-igrychev 13
  • ARM support?

    ARM support?

    Hi there! We use werf in production and want to use it the same way for local development with minikube. It would be great if werf supported the ARM architecture, especially after the Apple M1 release.

    opened by Faust13 11
  • Helm 3 support

    Helm 3 support

    Hello, guys!

    Amazing project! Looks like exactly what I’m looking for.

    Just one, possibly weird, question: is Helm 3 supported? I know it's still alpha, but it already works well for us even at this stage, and we would be happy to use werf with the third version of Helm. If it is not supported yet, please let me know your roadmap for upgrading to the next version of Helm.

    Best regards, Alexey Zhokhov.

    opened by donbeave 11
  • Is there a way to enable an Image Tag Policy?

    Is there a way to enable an Image Tag Policy?

    Hello, is there any way to enforce an image tag policy to avoid pulling incompatible changes? Imagine a microservice architecture: my deployment consists of different applications that follow semver. How can I ensure that no breaking changes are pulled? I would like to lock individual images to patch or minor releases so that fixes can be rolled out automatically. Does that make sense?

    Example: https://docs.fluxcd.io/en/latest/tutorials/driving-flux/#automations-locks-and-annotations

    opened by StarpTech 9
  • deploys failing with error: marking resource as failed because no progress for 1m30s

    deploys failing with error: marking resource as failed because no progress for 1m30s

    Before proceeding

    • [X] I didn't find a similar issue

    Version

    1.2.122+fix1

    How to reproduce

    Not sure how to reproduce, but on our builds the only thing that changed was the werf version.

    Helm resources fail to converge with error:

    ERROR: marking resource as failed because no progress for 1m30
    
    • working version: werf is /root/.trdl/repositories/werf/releases/1.2.117+fix2/linux-amd64/bin/werf
    • not working version: werf is /root/.trdl/repositories/werf/releases/1.2.122+fix1/linux-amd64/bin/werf

    Result

    ERROR: marking resource as failed because no progress for 1m30

    Resources fail to update and the deploy fails.

    Expected result

    resources to update as normal

    Additional information

    No response

    type: bug priority: high 
    opened by oppianmatt 8
  • Files from .dappfiles are not read

    Files from .dappfiles are not read

    Hello! According to the documentation for dapp 0.27.0, dapp should read YAML files from .dappfiles. But either I am doing something wrong, or the files are not being read.

    $ dapp --version
    dapp: 0.27.0

    $ ls -a
    .  ..  .dappfile.render.yaml  .dappfiles  dappfile.yaml

    $ ls -a .dappfiles/
    .  ..  app.yaml

    $ cat dappfile.yaml
    dimg: hello
    from: ubuntu:16.04
    ansible:
      beforeInstall:
      - debug: msg='Before install'
      install:
      - debug: msg='Install'

    $ cat .dappfiles/app.yaml
    dimg: app
    from: centos:7
    ansible:
      install:
      - debug: msg='Install'
      - yum:
          name: mc
          state: latest

    I run the build:

    /test$ dapp dimg build
    hello: calculating stages signatures    [RUNNING]
    hello: calculating stages signatures    [OK] 0.0 sec
    From ...                                [OK] 1.84 sec
      signature: dimgstage-test:b3fba727e2fd0bafe9368bc46768aa625ebd6f534e9ba4e5b7058d1dae278dba
    Before install                          [BUILDING]
      debug msg Before install
    Before install                          [OK] 2.66 sec
      signature: dimgstage-test:ee9a17f6adbb04d2dd6753fd3031e2d723ad8adc8a75e8c3b3362495a033fec8
      commands: /.dapp/deps/ansible/2.4.4.0-10/embedded/bin/ansible-playbook /.dapp/ansible-workdir/playbook.yml
    Install group
    Install                                 [BUILDING]
      debug msg Install
    Install                                 [OK] 2.7 sec
      signature: dimgstage-test:77c62b717f0168113b384ea488cfedbf18c2c220c9242ea45020a5ad4fe7a9ec
      commands: /.dapp/deps/ansible/2.4.4.0-10/embedded/bin/ansible-playbook /.dapp/ansible-workdir/playbook.yml

    Only the hello image was built:

    $ dapp dimg list
    hello
    Running time 0.02 seconds

    type: enhancement scope: build 
    opened by rjeka 7
  • The 'docker.io/' prefix in the image repository name

    The 'docker.io/' prefix in the image repository name

    When using dapp on CentOS 7 with Docker 1.10.3, I get the following error:

    $ dapp build --verbose
    pulling image `ruby:2.3-alpine`                            [RUNNING]
    Trying to pull repository docker.io/library/ruby ... 2.3-alpine: 
    Pulling from docker.io/library/ruby
    Digest: sha256:a0edc9fa6ce63e1a33475cb8a1097ec961d45597f2972242ef2d5187ba7bedf6
    Status: Image is up to date for docker.io/ruby:2.3-alpine
    pulling image `ruby:2.3-alpine`                             [OK] 1.9 sec
    Stacktrace dumped to /tmp/dapp-stacktrace-7e12a2f5-89ea-4e1e-bfb5-0b21bcd6553e.out
    Image `ruby:2.3-alpine` not found!
    

    I used the following Dappfile:

    dimg do
      docker.from 'ruby:2.3-alpine'
      shell do
        before_install do
          run '/bin/sh -c ls'
        end
      end
    
      git do
        add '/' do
          to '/app'
        end
      end
    end
    

    When I change docker.from to docker.io/ruby:2.3-alpine, the build starts successfully.

    However, on Ubuntu 14.04/16.04 both variants work.

    type: bug 
    opened by kobel169 7
  • Support helm plugins

    Support helm plugins

    Is there any plan to add support for Helm plugins, in particular helm-secrets? At present I can invoke plugin commands, but they don't work (nothing happens).

    $ werf helm plugin list
    
    NAME    VERSION DESCRIPTION                                                                  
    diff    3.4.2   Preview helm upgrade changes as a diff                                       
    secrets 3.13.0  This plugin provides secrets values encryption for Helm charts secure storing
    
    $ werf helm secrets view secrets.yaml
    
    $ echo $?
    0
    

    Werf version: 1.2.88

    opened by Sebor 6
  • Ability to enable/disable checking of all kube namespaces during cleaning images from registry

    Ability to enable/disable checking of all kube namespaces during cleaning images from registry

    My case: there are several projects in one cluster, and each project has access only to certain namespaces. Therefore, werf cleanup does not work, since it tries to list images across the entire cluster. For example:

    627.191299ms Using werf config render file: /private/var/folders/82/qx_s8s_n3c3016fsx1rbbjsh0000gn/T/werf-config-render-088482510
    633.265926ms Using images repo docker registry implementation: default
    633.301918ms Using images repo mode: multirepo
    633.337162ms -- LocalDockerServerStagesStorage.GetManagedImages cross-stitch
    792.055737ms Managed images names: [backend migrations notifications static]
    851.228204ms 
    851.264639ms ┌ Running images cleanup
    863.395068ms │ ┌ Fetching repo images data
    863.425751ms │ │ ┌ Fetching repo images
    2m42.6553478 │ │ └ Fetching repo images (161.78 seconds)
    2m42.6554011 │ └ Fetching repo images data (161.79 seconds)
    2m42.6554378 │ 
    2m42.6554631 │ ┌ Skipping repo images that are being used in Kubernetes
    2m42.6622105 │ │ Getting deployed docker images (context stand) ... (0.51 seconds) FAILED
    2m43.1712105 │ └ Skipping repo images that are being used in Kubernetes (0.52 seconds) FAILED
    2m43.1817046 └ Running images cleanup (162.33 seconds) FAILED
    2m43.2183839 
    2m43.2183878 Running time 163.22 seconds
    2m43.2320993 Error: cannot get deployed imagesRepoImageList: cannot get Pods images: pods is forbidden: User "system:serviceaccount:example-project:deploy"   ↵
    2m43.2600260 cannot list resource "pods" in API group "" at the cluster scope
    

    I've added a --check-all-namespaces flag (WERF_CHECK_ALL_NAMESPACES) that controls this behavior. The default behavior has not changed, but for my case this flag can be turned off, in which case werf will only check the namespaces specified in the contexts of the configuration file.

    The code may not be very elegant: I had to pass an additional argument with a namespaces map to many functions. I can change it to a different approach if necessary, for example by adding namespace information to KubernetesContextsClients.

    opened by identw 6
  • How to get git info in the build container

    How to get git info in the build container

    As a developer of an application built by werf, I want the ability to get the application version — Git commit, tag, or other info — to embed into the final application image.

    Maybe all CI variables should be forwarded into the build container.

    scope: build 
    opened by distorhead 6
  • Fix host cleanup when using buildah mode

    Fix host cleanup when using buildah mode

    Before proceeding

    • [X] I didn't find a similar issue

    Version

    1.2.190

    How to reproduce

    Set WERF_BUILDAH_MODE and run werf build command.

    Result

    WARNING: unable to start background auto host cleanup process: exec: "werf-in-a-user-namespace": executable file not found in $PATH
    

    Expected result

    No warnings and working auto host cleanup.

    Additional information

    This warning is related to the internal mechanics used in the Buildah builder: re-executing the werf process with different arguments.

    opened by distorhead 0
  • Fixing the documentation generator in kubectl commands

    Fixing the documentation generator in kubectl commands

    Documentation replacers have been added for all kubectl commands so that everything is formatted properly in the generated documentation for the site.

    A unit test was also added to check that everything works as it should.

    opened by Zhbert 0
  • Ability to disable git submodules adding to the giterminism context

    Ability to disable git submodules adding to the giterminism context

    Before proceeding

    • [X] I didn't find a similar issue

    Problem

    There are submodules in the Git repo that should not be added to the built images, and no files from these submodules are used in werf.yaml or .helm configuration files.

    Solution (if you have one)

    In such a case, we could optimize the Git archive fetch procedure and disable fetching submodules.

    Additional information

    No response

    opened by distorhead 0
  • Build image with custom BuildKit syntax directive

    Build image with custom BuildKit syntax directive

    Before proceeding

    • [X] I didn't find a similar issue

    Version

    1.2.11

    How to reproduce

    Create a Dockerfile with a custom syntax directive and use INCLUDE+ Dockerfile.common to include Dockerfile.common. Run the command DOCKER_BUILDKIT=1 werf build.

    Result

    Error: dockerfile parse error line [dockerfile_line]: unknown instruction: INCLUDE+

    Expected result

    Nothing to preview

    Additional information

    The image is built on a server in Docker daemon mode.

    opened by EveTN 0
  • Dump values.yaml from bundle

    Dump values.yaml from bundle

    Before proceeding

    • [X] I didn't find a similar issue

    Problem

    When using werf bundles to deliver applications, there is no way to dump the values.yaml included in the bundle. In most cases, you cannot apply a bundle to a cluster without changing various settings in the values.yaml file; hence you have to provide the end user with a sample values.yaml or a manual describing the options.

    Solution (if you have one)

    Helm already has the option to dump values included in the chart using the helm show values command, so it would be nice to have one in werf for bundles.

    Additional information

    Thanks a lot!

    opened by lazovskiy 0