Workflow engine for Kubernetes

Overview

What is Argo Workflows?

Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition).

  • Define workflows where each step in the workflow is a container.
  • Model multi-step workflows as a sequence of tasks or capture the dependencies between tasks using a directed acyclic graph (DAG); see the example below.
  • Easily run compute-intensive jobs for machine learning or data processing in a fraction of the time using Argo Workflows on Kubernetes.
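
For example, a minimal Workflow is a single template that runs one container. This is a sketch based on the standard hello-world example; the image and names are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-    # the controller appends a random suffix
spec:
  entrypoint: whalesay          # the template to run first
  templates:
  - name: whalesay
    container:
      image: docker/whalesay    # any container image can be used here
      command: [cowsay]
      args: ["hello world"]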

Argo is a Cloud Native Computing Foundation (CNCF) hosted project.

Argo Workflows in 5 minutes

Use Cases

  • Machine Learning pipelines
  • Data and batch processing
  • ETL
  • Infrastructure automation
  • CI/CD

Why Argo Workflows?

  • Argo Workflows is the most popular workflow execution engine for Kubernetes.
  • It can run 1000s of workflows a day, each with 1000s of concurrent tasks.
  • Our users say it is lighter-weight, faster, more powerful, and easier to use.
  • Designed from the ground up for containers without the overhead and limitations of legacy VM and server-based environments.
  • Cloud agnostic and can run on any Kubernetes cluster.

Read what people said in our latest survey

Try Argo Workflows

Access the demo environment (login using GitHub)

Screenshot

Ecosystem

Argo Events | Argo Workflows Catalog | Couler | Katib | Kedro | Kubeflow Pipelines | Onepanel | Ploomber | Seldon | SQLFlow

Client Libraries

Check out our Java, Golang and Python clients.

Quickstart

kubectl create namespace argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/install.yaml
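
After installation, you can submit one of the example workflows to verify the setup, for instance (assuming the argo CLI is installed and the hello-world example is still at this path in the repository):

argo submit -n argo --watch https://raw.githubusercontent.com/argoproj/argo-workflows/master/examples/hello-world.yaml
argo list -n argo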

Who uses Argo Workflows?

Official Argo Workflows user list

Documentation

Features

  • UI to visualize and manage Workflows
  • Artifact support (S3, Artifactory, Alibaba Cloud OSS, HTTP, Git, GCS, raw)
  • Workflow templating to store commonly used Workflows in the cluster
  • Archiving Workflows after executing for later access
  • Scheduled workflows using cron
  • Server interface with REST API (HTTP and GRPC)
  • DAG or Steps based declaration of workflows (see the example after this list)
  • Step level input & outputs (artifacts/parameters)
  • Loops
  • Parameterization
  • Conditionals
  • Timeouts (step & workflow level)
  • Retry (step & workflow level)
  • Resubmit (memoized)
  • Suspend & Resume
  • Cancellation
  • K8s resource orchestration
  • Exit Hooks (notifications, cleanup)
  • Garbage collection of completed workflows
  • Scheduling (affinity/tolerations/node selectors)
  • Volumes (ephemeral/existing)
  • Parallelism limits
  • Daemoned steps
  • DinD (docker-in-docker)
  • Script steps
  • Event emission
  • Prometheus metrics
  • Multiple executors
  • Multiple pod and workflow garbage collection strategies
  • Automatically calculated resource usage per step
  • Java/Golang/Python SDKs
  • Pod Disruption Budget support
  • Single sign-on (OAuth2/OIDC)
  • Webhook triggering
  • CLI
  • Out-of-the-box and custom Prometheus metrics
  • Windows container support
  • Embedded widgets
  • Multiplex log viewer
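
To make a few of these concrete, here is a sketch of a DAG-based Workflow that combines parameters, a step-level retry, and a workflow-level timeout (names, images, and values are illustrative):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-example-
spec:
  entrypoint: main
  activeDeadlineSeconds: 3600          # workflow-level timeout
  arguments:
    parameters:
    - name: message
      value: hello
  templates:
  - name: main
    dag:
      tasks:
      - name: a
        template: echo
        arguments:
          parameters:
          - name: message
            value: "{{workflow.parameters.message}} from A"
      - name: b
        dependencies: [a]              # b starts only after a succeeds
        template: echo
        arguments:
          parameters:
          - name: message
            value: "{{workflow.parameters.message}} from B"
  - name: echo
    retryStrategy:
      limit: 2                         # step-level retry
    inputs:
      parameters:
      - name: message
    container:
      image: alpine:3.7
      command: [echo, "{{inputs.parameters.message}}"]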

Community Meetings

We host monthly community meetings where we and the community showcase demos and discuss the current and future state of the project. Feel free to join us! For Community Meeting information, minutes and recordings please see here.

Participation in the Argo Workflows project is governed by the CNCF Code of Conduct.

Community Blogs and Presentations

Project Resources

Issues
  • v2.10/v2.11/latest(21st Sep): Too many warn & error messages in Argo Workflow Controller (msg="error in entry template execution" error="Deadline exceeded")

    Summary

    Too many warning and error messages inside Argo workflow controllers

    Argo workflow controller logs

    $ kubectl logs --tail=20 workflow-controller-cb99d68cf-znssr

    time="2020-09-16T13:46:45Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionczpht
    time="2020-09-16T13:46:45Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestioncbmt6
    time="2020-09-16T13:46:45Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestioncbmt6
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionvz4km
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionvz4km
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionhvnhs
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionhvnhs
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionnnsbb
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionnnsbb
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionkc5sb
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionkc5sb
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionc9fcz
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionc9fcz
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionpjczx
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionpjczx
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionftmdh
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionftmdh
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionbfrc5
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionbfrc5
    

    Workflows are getting stuck after some time and are not completing even after 12+ hours, while normal execution takes around 1 minute.

    I am creating almost 1000 workflows, each containing 4 Pods, in a short span of time. There are enough worker nodes to do the processing, so there are no issues on the K8s cluster side.
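
    For reference, controller-wide workflow parallelism is configurable in the workflow-controller-configmap; this is a minimal sketch, not taken from this report, with illustrative values, since these settings are commonly tuned when submitting this many workflows at once:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: workflow-controller-configmap
      namespace: argo
    data:
      config: |
        # Maximum number of workflows the controller operates on concurrently
        parallelism: 200
        # Optional cap per namespace
        namespaceParallelism: 50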

    internal-data-ingestiontj79x error in entry template execution: Deadline exceeded
    github.com/argoproj/argo/errors.New
        /go/src/github.com/argoproj/argo/errors/errors.go:49
    github.com/argoproj/argo/workflow/controller.init
        /go/src/github.com/argoproj/argo/workflow/controller/operator.go:102
    runtime.doInit
        /usr/local/go/src/runtime/proc.go:5222
    runtime.doInit
        /usr/local/go/src/runtime/proc.go:5217
    runtime.main
        /usr/local/go/src/runtime/proc.go:190
    runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
    --
    

    Diagnostics

    What version of Argo Workflows are you running?

    Argo v2.10.1


    Message from the maintainers:

    Impacted by this bug? Give it a πŸ‘. We prioritise the issues with the most πŸ‘.

    bug 
    opened by BlackRider97 61
  • Successfully finished pods are shown stuck in pending phase indefinitely in GKE + Kubernetes v1.20

    Summary

    We were running reasonably complex Argo workflows without issues for a long time. However, around the time we updated Kubernetes to version 1.19.10-gke.1000 (running Argo in GKE), we started experiencing frequent problems with workflows getting stuck: a Pod that was successfully started by Argo and has finished is shown as stuck in the Pending state in Argo, even though we can see from the logs that the main container finished successfully. We have tried the PNS and k8sapi executors, but that did not fix the issue. We have removed the Argo namespace and recreated it, and the issue is still happening. We updated from Argo 2.x to 3.0.8, to 3.1.1, and to stable 3.0.3, and it still occurred. Currently we are at the latest tag (argoproj/workflow-controller:v3.0.3).

    Diagnostics

    What Kubernetes provider are you using? We are using GKE with Kubernetes version 1.19.10-gke.1000

    What version of Argo Workflows are you running? Tested on multiple versions. Currently running 3.0.3, but it started in a 2.x version and also happened in 3.1.1 and 3.0.8.

    What executor are you running? Docker/K8SAPI/Kubelet/PNS/Emissary. It failed with the default (I guess Docker), K8SAPI, and PNS.

    Did this work in a previous version? I.e. is it a regression? We are not sure if that was the cause, but it worked without issues on 2.6 on GKE before updating Kubernetes to 1.19.
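
    For reference, the executor is selected via the containerRuntimeExecutor key in the workflow-controller-configmap; a minimal sketch for the 3.0.x line discussed above, not taken from the original report:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: workflow-controller-configmap
      namespace: argo
    data:
      config: |
        # One of: docker, kubelet, k8sapi, pns, emissary
        containerRuntimeExecutor: pns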

    # Simplified failing part of workflow. 
     # Parent's parent
        our-workflow-1625018400-118234800:
          boundaryID: our-workflow-1625018400-2940179300
          children:
            - our-workflow-1625018400-1820417972
            - our-workflow-1625018400-1463043834
          displayName: '[2]'
          finishedAt: null
          id: our-workflow-1625018400-118234800
          name: our-workflow-1625018400[2].fill-supervised-product-clusters[0].fill-supervised-product-cluster(19:configs:["things"],products:["params"],supervised-configs:[])[2]
          phase: Running
          progress: 0/2
          startedAt: "2021-06-30T09:34:44Z"
          templateScope: local/our-workflow-1625018400
          type: StepGroup   
      # Parent stuck running
        our-workflow-1625018400-1820417972:
          boundaryID: our-workflow-1625018400-2940179300
          children:
          - our-workflow-1625018400-655392735
          displayName: fill-products(0:params)
          finishedAt: null
          id: our-workflow-1625018400-1820417972
          inputs:
            parameters:
            - name: products
              value: send_val
          name: our-workflow-1625018400[2].fill-supervised-product-clusters[0].fill-supervised-product-cluster(19:configs:["things"],products:["params"],supervised-configs:[])[2].fill-products(0:params)
          phase: Running
          progress: 0/1
          startedAt: "2021-06-30T09:34:44Z"
          templateName: fill-products
          templateScope: local/our-workflow-1625018400
          type: Retry
      # Finished pod stuck pending
        our-workflow-1625018400-655392735:
          boundaryID: our-workflow-1625018400-2940179300
          displayName: fill-products(0:params)(0)
          finishedAt: null
          id: our-workflow-1625018400-655392735
          inputs:
            parameters:
              - name: products
                value: send_val
          name: our-workflow-1625018400[2].fill-supervised-product-clusters[0].fill-supervised-product-cluster(19:configs:["things"],products:["params"],supervised-configs:[])[2].fill-products(0:params)(0)
          phase: Pending
          progress: 0/1
          startedAt: "2021-06-30T09:34:44Z"
          templateName: fill-products
          templateScope: local/our-workflow-1625018400
          type: Pod
    
    # Logs from the workflow controller:
    # Workflow container restarts occasionally failing logs:
    #  Edit: NOTE: Pods get stuck even if workflow controller doesn't restart.
    time="2021-06-30T08:03:26.894Z" level=info msg="Workflow update successful" namespace=default phase=Running resourceVersion=349445621 workflow=workflow-working-fine-1625040000
    time="2021-06-30T08:03:26.898Z" level=info msg="SG Outbound nodes of affected-workflow-1625018400-3050482061 are [affected-workflow-1625018400-3894231772]" namespace=default workflow=affected-workflow-1625018400
    time="2021-06-30T08:03:26.898Z" level=info msg="SG Outbound nodes of affected-workflow-1625018400-483040858 are [affected-workflow-1625018400-3675982209]" namespace=default workflow=affected-workflow-1625018400
    time="2021-06-30T08:03:26.899Z" level=info msg="SG Outbound nodes of affected-workflow-1625018400-744350190 are [affected-workflow-1625018400-952860445]" namespace=default workflow=affected-workflow-1625018400
    E0630 08:03:27.386046       1 leaderelection.go:361] Failed to update lock: Put "https://10.47.240.1:443/apis/coordination.k8s.io/v1/namespaces/argo/leases/workflow-controller": context deadline exceeded
    I0630 08:03:27.681192       1 leaderelection.go:278] failed to renew lease argo/workflow-controller: timed out waiting for the condition
    time="2021-06-30T08:03:28.071Z" level=info msg="Update leases 409"
    time="2021-06-30T08:03:29.474Z" level=info msg="Create events 201"
    time="2021-06-30T08:03:31.003Z" level=info msg="cleaning up pod" action=labelPodCompleted key=default/workflow-working-fine-1625040000-957740190/labelPodCompleted
    E0630 08:03:31.031922       1 leaderelection.go:301] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "workflow-controller": the object has been modified; please apply your changes to the latest version and try again
    time="2021-06-30T08:03:31.477Z" level=info msg="stopped leading" id=workflow-controller-7c568955d7-bxtdj
    E0630 08:03:31.640872       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
    goroutine 78 [running]:
    k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1b6eb00, 0x2c9aa80)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x89
    panic(0x1b6eb00, 0x2c9aa80)
    	/usr/local/go/src/runtime/panic.go:969 +0x1b9
    github.com/argoproj/argo-workflows/v3/workflow/controller.(*WorkflowController).Run.func2()
    	/go/src/github.com/argoproj/argo-workflows/workflow/controller/controller.go:256 +0x7c
    k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run.func1(0xc0006547e0)
    	/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:200 +0x29
    k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc0006547e0, 0x206c3e0, 0xc0002a48c0)
    	/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:210 +0x15d
    k8s.io/client-go/tools/leaderelection.RunOrDie(0x206c3e0, 0xc000522d40, 0x2078780, 0xc00085e000, 0x37e11d600, 0x2540be400, 0x12a05f200, 0xc000cb9c00, 0xc000a6a700, 0xc001096060, ...)
    	/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:222 +0x9c
    created by github.com/argoproj/argo-workflows/v3/workflow/controller.(*WorkflowController).Run
    	/go/src/github.com/argoproj/argo-workflows/workflow/controller/controller.go:241 +0xfb8
    panic: runtime error: invalid memory address or nil pointer dereference [recovered]
    	panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x19cf4fc]
    
    goroutine 78 [running]:
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x10c
    panic(0x1b6eb00, 0x2c9aa80)
    	/usr/local/go/src/runtime/panic.go:969 +0x1b9
    github.com/argoproj/argo-workflows/v3/workflow/controller.(*WorkflowController).Run.func2()
    	/go/src/github.com/argoproj/argo-workflows/workflow/controller/controller.go:256 +0x7c
    k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run.func1(0xc0006547e0)
    	/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:200 +0x29
    k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc0006547e0, 0x206c3e0, 0xc0002a48c0)
    	/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:210 +0x15d
    k8s.io/client-go/tools/leaderelection.RunOrDie(0x206c3e0, 0xc000522d40, 0x2078780, 0xc00085e000, 0x37e11d600, 0x2540be400, 0x12a05f200, 0xc000cb9c00, 0xc000a6a700, 0xc001096060, ...)
    	/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:222 +0x9c
    created by github.com/argoproj/argo-workflows/v3/workflow/controller.(*WorkflowController).Run
    	/go/src/github.com/argoproj/argo-workflows/workflow/controller/controller.go:241 +0xfb8
    
    # Other than that is acting like this: 
    time="2021-06-30T11:59:37.960Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.960Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.961Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.962Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.963Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.963Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.964Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.965Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.965Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.966Z" level=info msg="Patch events 200"
    time="2021-06-30T11:59:37.966Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.968Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.968Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.969Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.970Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.970Z" level=info msg="Workflow step group node failing-workflow-1625018400-1253377659 not yet completed" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.970Z" level=info msg="Workflow step group node failing-workflow-1625018400-1369709231 not yet completed" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.972Z" level=info msg="Patch events 200"
    time="2021-06-30T11:59:37.977Z" level=info msg="Patch events 200"
    time="2021-06-30T11:59:39.413Z" level=info msg="Get leases 200"
    time="2021-06-30T11:59:39.420Z" level=info msg="Update leases 200"
    time="2021-06-30T11:59:43.564Z" level=info msg="Enforcing history limit 
    
    # Pending pod main logs:
    Logs everything as expected and finishes. 
    
    # Logs from pending workflow's wait container, something like:
    time="2021-06-30T09:34:45.973Z" level=info msg="Starting Workflow Executor" version="{v3.0.3 2021-05-11T21:14:20Z 02071057c082cf295ab8da68f1b2027ff8762b5a v3.0.3 clean go1.15.7 gc linux/amd64}"
    time="2021-06-30T09:34:45.979Z" level=info msg="Creating a docker executor"
    time="2021-06-30T09:34:45.979Z" level=info msg="Executor (version: v3.0.3, build_date: 2021-05-11T21:14:20Z) initialized (pod: default/failing-workflow-workflow-1625018400-531201185) with template:\n{\"name\":\"fill-products\",\"inputs\":{\"parameters\":[{\"name\":\"products\",\"value\":\"products\"}]},\"outputs\":{},\"metadata\":{},\"container\":{\"name\":\"\",\"image\":\"us.gcr.io/spaceknow-backend/failing-workflow-workflow-base-delivery:287\",\"command\":[\"python3\"],\"args\":[\"deliveries/datacube-fill-products.py\",\"--products\",\"products\"],\"envFrom\":[{\"secretRef\":{\"name\":\"failing-workflow-workflow\"}}],\"resources\":{\"limits\":{\"memory\":\"2Gi\"},\"requests\":{\"cpu\":\"2\",\"memory\":\"2Gi\"}}},\"retryStrategy\":{\"limit\":2,\"retryPolicy\":\"Always\"}}"
    time="2021-06-30T09:34:45.979Z" level=info msg="Starting annotations monitor"
    time="2021-06-30T09:34:45.979Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:34:45.980Z" level=info msg="Starting deadline monitor"
    time="2021-06-30T09:34:46.039Z" level=info msg="mapped container name \"main\" to container ID \"bcd7bcab1bd8ab36553933c1b41cb81deacfbab26a9a440f36360aecef06af6f\" (created at 2021-06-30 09:34:45 +0000 UTC, status Created)"
    time="2021-06-30T09:34:46.039Z" level=info msg="mapped container name \"wait\" to container ID \"34831534a3aefb25a5744dfd102c8060dc2369767cab729b343f3b18d375828e\" (created at 2021-06-30 09:34:45 +0000 UTC, status Up)"
    time="2021-06-30T09:34:46.980Z" level=info msg="docker wait bcd7bcab1bd8ab36553933c1b41cb81deacfbab26a9a440f36360aecef06af6f"
    time="2021-06-30T09:35:15.026Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:35:16.066Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:35:26.101Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:35:36.135Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:35:46.168Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:35:56.200Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:36:06.234Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:36:16.266Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:36:26.299Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:36:36.333Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:36:46.372Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:36:56.406Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:37:04.406Z" level=info msg="Main container completed"
    time="2021-06-30T09:37:04.406Z" level=info msg="No Script output reference in workflow. Capturing script output ignored"
    time="2021-06-30T09:37:04.406Z" level=info msg="Capturing script exit code"
    time="2021-06-30T09:37:04.438Z" level=info msg="No output parameters"
    time="2021-06-30T09:37:04.438Z" level=info msg="No output artifacts"
    time="2021-06-30T09:37:04.438Z" level=info msg="Annotating pod with output"
    time="2021-06-30T09:37:04.454Z" level=info msg="Patch pods 200"
    time="2021-06-30T09:37:04.464Z" level=info msg="Killing sidecars []"
    time="2021-06-30T09:37:04.464Z" level=info msg="Alloc=5435 TotalAlloc=11825 Sys=73041 NumGC=4 Goroutines=10"
    

    Message from the maintainers:

    Impacted by this bug? Give it a πŸ‘. We prioritise the issues with the most πŸ‘.

    bug workaround 
    opened by klaucos 55
  • downloading artifact from s3 in ui, timed out waiting for condition

    Checklist:

    • [x] I've included the version.
    • [x] I've included reproduction steps.
    • [ ] I've included the workflow YAML.
    • [x] I've included the logs.

    What happened: Installed the latest 2.5.0rc7 via install.yaml on EKS 1.14 and added to the install.yaml the diff shown in the output below, so that archiveLogs and the S3 config are enabled (workflow-controller-configmap):

    336,344d322
    < data:
    <   config: |
    <     artifactRepository:
    <       archiveLogs: true
    <       s3:
    <         bucket: "example-argo"
    <         keyPrefix: "example"
    <         endpoint: "s3.amazonaws.com"
    < 
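
    For reference, the same settings expressed as a complete ConfigMap; a sketch assuming the default workflow-controller-configmap name and the argo namespace:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: workflow-controller-configmap
      namespace: argo
    data:
      config: |
        artifactRepository:
          archiveLogs: true
          s3:
            bucket: "example-argo"
            keyPrefix: "example"
            endpoint: "s3.amazonaws.com"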
    

    To access the UI on localhost, port-forward the argo-server service in Kubernetes: kubectl port-forward svc/argo-server 2746:2746 -n argo

    Run a basic hello world workflow via the argo CLI and the workflow completes as expected. Clicking on the artifacts link in the UI shows the main-logs object as expected, but when you click to download the actual artifact in the UI, the browser eventually returns a "timed out waiting on condition".

    What you expected to happen: I expect clicking on the link to download the requested artifact.

    How to reproduce it (as minimally and precisely as possible): install.yaml with an S3 config similar to the above, run any workflow, and then try to download the resulting main-log artifact.

    Logs: the argo-server log shows:

    time="2020-01-31T23:02:06Z" level=info msg="S3 Load path: artifact368826374, key: example/local-script-gd5zj/local-script-gd5zj/main.log"
    time="2020-01-31T23:02:06Z" level=info msg="Creating minio client s3.amazonaws.com using IAM role"
    time="2020-01-31T23:02:06Z" level=info msg="Getting from s3 (endpoint: s3.amazonaws.com, bucket: example-argo, key: example/local-script-gd5zj/local-script-gd5zj/main.log) to artifact368826374"
    time="2020-01-31T23:02:06Z" level=warning msg="Failed get file: Get https://s3.amazonaws.com/example-argo/?location=: x509: certificate signed by unknown authority"
    

    Message from the maintainers:

    If you are impacted by this bug please add a πŸ‘ reaction to this issue! We often sort issues this way to know what to prioritize.

    bug regression 
    opened by haghabozorgi 44
  • Workflow - Could not get container status

    Summary

    What happened/what you expected to happen?

    The workflow should successfully terminate all of the triggered tasks. It wasn't easy to reproduce the anomalous behaviour outside of the cluster, and it also doesn't happen every single time, so please find below additional configuration that might help to reproduce it:

    argo-server: 3 replicas
    executor: pns
    parallelism: 1000
    

    Diagnostics

    What Kubernetes provider are you using? GKE or Kind

    What version of Argo Workflows are you running?

    ❯ argo version
    argo: v3.0.0-rc3
      BuildDate: 2021-02-23T21:06:58Z
      GitCommit: c0c364c229e3b72306bd0b73161df090d24e0c31
      GitTreeState: clean
      GitTag: v3.0.0-rc3
      GoVersion: go1.13
      Compiler: gc
      Platform: darwin/amd64
    

    Workflow template

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      creationTimestamp: "2021-03-02T00:36:25Z"
      generateName: ci-
      generation: 10
      labels:
        submit-from-ui: "true"
        workflows.argoproj.io/completed: "true"
        workflows.argoproj.io/creator: system-serviceaccount-argo-argo-server
        workflows.argoproj.io/phase: Error
        workflows.argoproj.io/workflow-template: ci
      managedFields:
      - apiVersion: argoproj.io/v1alpha1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:generateName: {}
            f:labels:
              .: {}
              f:submit-from-ui: {}
              f:workflows.argoproj.io/creator: {}
              f:workflows.argoproj.io/workflow-template: {}
          f:spec:
            .: {}
            f:arguments: {}
            f:entrypoint: {}
            f:workflowTemplateRef: {}
          f:status:
            .: {}
            f:storedTemplates: {}
        manager: argo
        operation: Update
        time: "2021-03-02T00:36:25Z"
      - apiVersion: argoproj.io/v1alpha1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:labels:
              f:workflows.argoproj.io/completed: {}
              f:workflows.argoproj.io/phase: {}
          f:status:
            f:artifactRepositoryRef: {}
            f:conditions: {}
            f:finishedAt: {}
            f:nodes: {}
            f:phase: {}
            f:progress: {}
            f:resourcesDuration: {}
            f:startedAt: {}
            f:storedWorkflowTemplateSpec: {}
        manager: workflow-controller
        operation: Update
        time: "2021-03-02T00:37:47Z"
      name: ci-t8vq2
      namespace: argo
      resourceVersion: "119304"
      uid: fb296b31-b810-42c0-90bc-c4e8bb879b22
    spec:
      arguments: {}
      entrypoint: main
      workflowTemplateRef:
        name: ci
    status:
      artifactRepositoryRef:
        default: true
      conditions:
      - status: "False"
        type: PodRunning
      - status: "True"
        type: Completed
      finishedAt: "2021-03-02T00:37:47Z"
      nodes:
        ci-t8vq2:
          children:
          - ci-t8vq2-28546012
          - ci-t8vq2-212400199
          displayName: ci-t8vq2
          finishedAt: "2021-03-02T00:37:47Z"
          id: ci-t8vq2
          name: ci-t8vq2
          outboundNodes:
          - ci-t8vq2-2699866519
          - ci-t8vq2-2921795641
          - ci-t8vq2-935633491
          - ci-t8vq2-2460421745
          - ci-t8vq2-1497633967
          - ci-t8vq2-2529876561
          - ci-t8vq2-2517646915
          - ci-t8vq2-2594947737
          - ci-t8vq2-2608532967
          - ci-t8vq2-3834947628
          - ci-t8vq2-1156673745
          - ci-t8vq2-2620463353
          - ci-t8vq2-3303001669
          - ci-t8vq2-139394661
          - ci-t8vq2-1087124617
          - ci-t8vq2-1466005665
          - ci-t8vq2-3097822101
          - ci-t8vq2-3815346741
          - ci-t8vq2-3045333153
          - ci-t8vq2-2135555749
          - ci-t8vq2-1671854890
          - ci-t8vq2-2189885778
          - ci-t8vq2-2454096578
          - ci-t8vq2-3752440098
          - ci-t8vq2-1958506170
          - ci-t8vq2-569544386
          - ci-t8vq2-743674290
          - ci-t8vq2-553546802
          - ci-t8vq2-3806784823
          - ci-t8vq2-13895053
          - ci-t8vq2-2653728782
          - ci-t8vq2-3144997424
          - ci-t8vq2-2221870794
          - ci-t8vq2-2998710952
          - ci-t8vq2-924300910
          - ci-t8vq2-2687447728
          - ci-t8vq2-441930226
          - ci-t8vq2-2027865656
          - ci-t8vq2-857446365
          - ci-t8vq2-1089779475
          - ci-t8vq2-1533362641
          - ci-t8vq2-2193859387
          - ci-t8vq2-2924122029
          - ci-t8vq2-245955437
          - ci-t8vq2-3119937059
          - ci-t8vq2-3119595669
          - ci-t8vq2-418065567
          - ci-t8vq2-2120215045
          - ci-t8vq2-4250566955
          - ci-t8vq2-145289723
          - ci-t8vq2-1642984369
          - ci-t8vq2-1325696727
          - ci-t8vq2-3992325445
          - ci-t8vq2-3744868327
          - ci-t8vq2-1719834657
          - ci-t8vq2-178844961
          phase: Error
          progress: 57/57
          resourcesDuration:
            cpu: 889
            memory: 889
          startedAt: "2021-03-02T00:36:26Z"
          templateName: main
          templateScope: local/
          type: DAG
        ci-t8vq2-13895053:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(29:context:L)
          finishedAt: "2021-03-02T00:36:54Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-13895053
          inputs:
            parameters:
            - name: message
              value: L
          name: ci-t8vq2.echo-multiple-times(29:context:L)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 17
            memory: 17
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-28546012:
          boundaryID: ci-t8vq2
          children:
          - ci-t8vq2-2699866519
          - ci-t8vq2-2921795641
          - ci-t8vq2-935633491
          - ci-t8vq2-2460421745
          - ci-t8vq2-1497633967
          - ci-t8vq2-2529876561
          - ci-t8vq2-2517646915
          - ci-t8vq2-2594947737
          - ci-t8vq2-2608532967
          - ci-t8vq2-3834947628
          - ci-t8vq2-1156673745
          - ci-t8vq2-2620463353
          - ci-t8vq2-3303001669
          - ci-t8vq2-139394661
          - ci-t8vq2-1087124617
          - ci-t8vq2-1466005665
          - ci-t8vq2-3097822101
          - ci-t8vq2-3815346741
          - ci-t8vq2-3045333153
          - ci-t8vq2-2135555749
          - ci-t8vq2-1671854890
          - ci-t8vq2-2189885778
          - ci-t8vq2-2454096578
          - ci-t8vq2-3752440098
          - ci-t8vq2-1958506170
          - ci-t8vq2-569544386
          - ci-t8vq2-743674290
          - ci-t8vq2-553546802
          - ci-t8vq2-3806784823
          - ci-t8vq2-13895053
          - ci-t8vq2-2653728782
          - ci-t8vq2-3144997424
          - ci-t8vq2-2221870794
          - ci-t8vq2-2998710952
          - ci-t8vq2-924300910
          - ci-t8vq2-2687447728
          - ci-t8vq2-441930226
          - ci-t8vq2-2027865656
          displayName: echo-multiple-times
          finishedAt: "2021-03-02T00:37:26Z"
          id: ci-t8vq2-28546012
          name: ci-t8vq2.echo-multiple-times
          phase: Succeeded
          progress: 38/38
          resourcesDuration:
            cpu: 705
            memory: 705
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: TaskGroup
        ci-t8vq2-128512104:
          boundaryID: ci-t8vq2
          children:
          - ci-t8vq2-3119937059
          - ci-t8vq2-3119595669
          - ci-t8vq2-418065567
          - ci-t8vq2-2120215045
          - ci-t8vq2-4250566955
          displayName: task4
          finishedAt: "2021-03-02T00:37:47Z"
          id: ci-t8vq2-128512104
          name: ci-t8vq2.task4
          phase: Succeeded
          progress: 5/5
          resourcesDuration:
            cpu: 42
            memory: 42
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: TaskGroup
        ci-t8vq2-139394661:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(13:context:O)
          finishedAt: "2021-03-02T00:36:46Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-139394661
          inputs:
            parameters:
            - name: message
              value: O
          name: ci-t8vq2.echo-multiple-times(13:context:O)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 13
            memory: 13
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-145289723:
          boundaryID: ci-t8vq2
          displayName: task5
          finishedAt: "2021-03-02T00:37:33Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-145289723
          inputs:
            parameters:
            - name: message
              value: task5
          name: ci-t8vq2.task5
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 10
            memory: 10
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-162067342:
          boundaryID: ci-t8vq2
          children:
          - ci-t8vq2-1642984369
          - ci-t8vq2-1325696727
          - ci-t8vq2-3992325445
          - ci-t8vq2-3744868327
          - ci-t8vq2-1719834657
          displayName: task6
          finishedAt: "2021-03-02T00:37:47Z"
          id: ci-t8vq2-162067342
          name: ci-t8vq2.task6
          phase: Succeeded
          progress: 5/5
          resourcesDuration:
            cpu: 46
            memory: 46
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: TaskGroup
        ci-t8vq2-178844961:
          boundaryID: ci-t8vq2
          displayName: task7
          finishedAt: "2021-03-02T00:37:35Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-178844961
          inputs:
            parameters:
            - name: message
              value: task7
          name: ci-t8vq2.task7
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 11
            memory: 11
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-212400199:
          boundaryID: ci-t8vq2
          children:
          - ci-t8vq2-229177818
          - ci-t8vq2-245955437
          - ci-t8vq2-128512104
          - ci-t8vq2-145289723
          - ci-t8vq2-162067342
          - ci-t8vq2-178844961
          displayName: task1
          finishedAt: "2021-03-02T00:37:08Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-212400199
          inputs:
            parameters:
            - name: message
              value: task1
          name: ci-t8vq2.task1
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 26
            memory: 26
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-229177818:
          boundaryID: ci-t8vq2
          children:
          - ci-t8vq2-857446365
          - ci-t8vq2-1089779475
          - ci-t8vq2-1533362641
          - ci-t8vq2-2193859387
          - ci-t8vq2-2924122029
          displayName: task2
          finishedAt: "2021-03-02T00:37:47Z"
          id: ci-t8vq2-229177818
          name: ci-t8vq2.task2
          phase: Error
          progress: 5/5
          resourcesDuration:
            cpu: 42
            memory: 42
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: TaskGroup
        ci-t8vq2-245955437:
          boundaryID: ci-t8vq2
          displayName: task3
          finishedAt: "2021-03-02T00:37:29Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-245955437
          inputs:
            parameters:
            - name: message
              value: task3
          name: ci-t8vq2.task3
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 7
            memory: 7
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-418065567:
          boundaryID: ci-t8vq2
          displayName: task4(2:context:C)
          finishedAt: "2021-03-02T00:37:26Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-418065567
          inputs:
            parameters:
            - name: message
              value: task4
          name: ci-t8vq2.task4(2:context:C)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 5
            memory: 5
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-441930226:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(36:context:S)
          finishedAt: "2021-03-02T00:37:08Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-441930226
          inputs:
            parameters:
            - name: message
              value: S
          name: ci-t8vq2.echo-multiple-times(36:context:S)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 28
            memory: 28
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-553546802:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(27:context:I)
          finishedAt: "2021-03-02T00:37:02Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-553546802
          inputs:
            parameters:
            - name: message
              value: I
          name: ci-t8vq2.echo-multiple-times(27:context:I)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 23
            memory: 23
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-569544386:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(25:context:G)
          finishedAt: "2021-03-02T00:37:01Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-569544386
          inputs:
            parameters:
            - name: message
              value: G
          name: ci-t8vq2.echo-multiple-times(25:context:G)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 25
            memory: 25
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-743674290:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(26:context:H)
          finishedAt: "2021-03-02T00:36:56Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-743674290
          inputs:
            parameters:
            - name: message
              value: H
          name: ci-t8vq2.echo-multiple-times(26:context:H)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 16
            memory: 16
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-857446365:
          boundaryID: ci-t8vq2
          displayName: task2(0:context:A)
          finishedAt: "2021-03-02T00:37:29Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-857446365
          inputs:
            parameters:
            - name: message
              value: task2
          name: ci-t8vq2.task2(0:context:A)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 9
            memory: 9
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-924300910:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(34:context:Q)
          finishedAt: "2021-03-02T00:37:03Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-924300910
          inputs:
            parameters:
            - name: message
              value: Q
          name: ci-t8vq2.echo-multiple-times(34:context:Q)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 24
            memory: 24
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-935633491:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(2:context:C)
          finishedAt: "2021-03-02T00:36:45Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-935633491
          inputs:
            parameters:
            - name: message
              value: C
          name: ci-t8vq2.echo-multiple-times(2:context:C)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 18
            memory: 18
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1087124617:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(14:context:P)
          finishedAt: "2021-03-02T00:36:49Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1087124617
          inputs:
            parameters:
            - name: message
              value: P
          name: ci-t8vq2.echo-multiple-times(14:context:P)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 14
            memory: 14
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1089779475:
          boundaryID: ci-t8vq2
          displayName: task2(1:context:B)
          finishedAt: "2021-03-02T00:37:31Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1089779475
          inputs:
            parameters:
            - name: message
              value: task2
          name: ci-t8vq2.task2(1:context:B)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 12
            memory: 12
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1156673745:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(10:context:L)
          finishedAt: "2021-03-02T00:36:53Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1156673745
          inputs:
            parameters:
            - name: message
              value: L
          name: ci-t8vq2.echo-multiple-times(10:context:L)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 20
            memory: 20
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1325696727:
          boundaryID: ci-t8vq2
          displayName: task6(1:context:B)
          finishedAt: "2021-03-02T00:37:33Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1325696727
          inputs:
            parameters:
            - name: message
              value: task6
          name: ci-t8vq2.task6(1:context:B)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 11
            memory: 11
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1466005665:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(15:context:Q)
          finishedAt: "2021-03-02T00:36:46Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1466005665
          inputs:
            parameters:
            - name: message
              value: Q
          name: ci-t8vq2.echo-multiple-times(15:context:Q)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 12
            memory: 12
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1497633967:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(4:context:E)
          finishedAt: "2021-03-02T00:36:52Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1497633967
          inputs:
            parameters:
            - name: message
              value: E
          name: ci-t8vq2.echo-multiple-times(4:context:E)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 20
            memory: 20
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1533362641:
          boundaryID: ci-t8vq2
          displayName: task2(2:context:C)
          finishedAt: "2021-03-02T00:37:24Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1533362641
          inputs:
            parameters:
            - name: message
              value: task2
          message: 'Error (exit code 1): Could not get container status'
          name: ci-t8vq2.task2(2:context:C)
          phase: Error
          progress: 1/1
          resourcesDuration:
            cpu: 4
            memory: 4
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1642984369:
          boundaryID: ci-t8vq2
          displayName: task6(0:context:A)
          finishedAt: "2021-03-02T00:37:27Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1642984369
          inputs:
            parameters:
            - name: message
              value: task6
          name: ci-t8vq2.task6(0:context:A)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 5
            memory: 5
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1671854890:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(20:context:B)
          finishedAt: "2021-03-02T00:36:48Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1671854890
          inputs:
            parameters:
            - name: message
              value: B
          name: ci-t8vq2.echo-multiple-times(20:context:B)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 13
            memory: 13
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1719834657:
          boundaryID: ci-t8vq2
          displayName: task6(4:context:E)
          finishedAt: "2021-03-02T00:37:32Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1719834657
          inputs:
            parameters:
            - name: message
              value: task6
          name: ci-t8vq2.task6(4:context:E)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 11
            memory: 11
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1958506170:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(24:context:F)
          finishedAt: "2021-03-02T00:36:50Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1958506170
          inputs:
            parameters:
            - name: message
              value: F
          name: ci-t8vq2.echo-multiple-times(24:context:F)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 14
            memory: 14
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2027865656:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(37:context:T)
          finishedAt: "2021-03-02T00:37:02Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2027865656
          inputs:
            parameters:
            - name: message
              value: T
          name: ci-t8vq2.echo-multiple-times(37:context:T)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 24
            memory: 24
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2120215045:
          boundaryID: ci-t8vq2
          displayName: task4(3:context:D)
          finishedAt: "2021-03-02T00:37:28Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2120215045
          inputs:
            parameters:
            - name: message
              value: task4
          name: ci-t8vq2.task4(3:context:D)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 8
            memory: 8
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2135555749:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(19:context:A)
          finishedAt: "2021-03-02T00:36:47Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2135555749
          inputs:
            parameters:
            - name: message
              value: A
          name: ci-t8vq2.echo-multiple-times(19:context:A)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 13
            memory: 13
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2189885778:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(21:context:C)
          finishedAt: "2021-03-02T00:36:55Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2189885778
          inputs:
            parameters:
            - name: message
              value: C
          name: ci-t8vq2.echo-multiple-times(21:context:C)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 19
            memory: 19
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2193859387:
          boundaryID: ci-t8vq2
          displayName: task2(3:context:D)
          finishedAt: "2021-03-02T00:37:30Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2193859387
          inputs:
            parameters:
            - name: message
              value: task2
          name: ci-t8vq2.task2(3:context:D)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 9
            memory: 9
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2221870794:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(32:context:O)
          finishedAt: "2021-03-02T00:36:53Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2221870794
          inputs:
            parameters:
            - name: message
              value: O
          name: ci-t8vq2.echo-multiple-times(32:context:O)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 17
            memory: 17
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2454096578:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(22:context:D)
          finishedAt: "2021-03-02T00:36:59Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2454096578
          inputs:
            parameters:
            - name: message
              value: D
          name: ci-t8vq2.echo-multiple-times(22:context:D)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 22
            memory: 22
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2460421745:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(3:context:D)
          finishedAt: "2021-03-02T00:36:49Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2460421745
          inputs:
            parameters:
            - name: message
              value: D
          name: ci-t8vq2.echo-multiple-times(3:context:D)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 18
            memory: 18
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2517646915:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(6:context:G)
          finishedAt: "2021-03-02T00:36:46Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2517646915
          inputs:
            parameters:
            - name: message
              value: G
          name: ci-t8vq2.echo-multiple-times(6:context:G)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 13
            memory: 13
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2529876561:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(5:context:F)
          finishedAt: "2021-03-02T00:36:48Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2529876561
          inputs:
            parameters:
            - name: message
              value: F
          name: ci-t8vq2.echo-multiple-times(5:context:F)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 18
            memory: 18
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2594947737:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(7:context:H)
          finishedAt: "2021-03-02T00:36:48Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2594947737
          inputs:
            parameters:
            - name: message
              value: H
          name: ci-t8vq2.echo-multiple-times(7:context:H)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 18
            memory: 18
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2608532967:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(8:context:I)
          finishedAt: "2021-03-02T00:36:52Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2608532967
          inputs:
            parameters:
            - name: message
              value: I
          name: ci-t8vq2.echo-multiple-times(8:context:I)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 18
            memory: 18
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2620463353:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(11:context:M)
          finishedAt: "2021-03-02T00:36:50Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2620463353
          inputs:
            parameters:
            - name: message
              value: M
          name: ci-t8vq2.echo-multiple-times(11:context:M)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 14
            memory: 14
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2653728782:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(30:context:M)
          finishedAt: "2021-03-02T00:37:05Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2653728782
          inputs:
            parameters:
            - name: message
              value: M
          name: ci-t8vq2.echo-multiple-times(30:context:M)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 25
            memory: 25
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2687447728:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(35:context:R)
          finishedAt: "2021-03-02T00:37:02Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2687447728
          inputs:
            parameters:
            - name: message
              value: R
          name: ci-t8vq2.echo-multiple-times(35:context:R)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 26
            memory: 26
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2699866519:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(0:context:A)
          finishedAt: "2021-03-02T00:36:54Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2699866519
          inputs:
            parameters:
            - name: message
              value: A
          name: ci-t8vq2.echo-multiple-times(0:context:A)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 26
            memory: 26
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2921795641:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(1:context:B)
          finishedAt: "2021-03-02T00:36:45Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2921795641
          inputs:
            parameters:
            - name: message
              value: B
          name: ci-t8vq2.echo-multiple-times(1:context:B)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 18
            memory: 18
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2924122029:
          boundaryID: ci-t8vq2
          displayName: task2(4:context:E)
          finishedAt: "2021-03-02T00:37:28Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2924122029
          inputs:
            parameters:
            - name: message
              value: task2
          name: ci-t8vq2.task2(4:context:E)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 8
            memory: 8
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2998710952:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(33:context:P)
          finishedAt: "2021-03-02T00:37:05Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2998710952
          inputs:
            parameters:
            - name: message
              value: P
          name: ci-t8vq2.echo-multiple-times(33:context:P)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 29
            memory: 29
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3045333153:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(18:context:T)
          finishedAt: "2021-03-02T00:36:51Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3045333153
          inputs:
            parameters:
            - name: message
              value: T
          name: ci-t8vq2.echo-multiple-times(18:context:T)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 17
            memory: 17
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3097822101:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(16:context:R)
          finishedAt: "2021-03-02T00:36:45Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3097822101
          inputs:
            parameters:
            - name: message
              value: R
          name: ci-t8vq2.echo-multiple-times(16:context:R)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 12
            memory: 12
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3119595669:
          boundaryID: ci-t8vq2
          displayName: task4(1:context:B)
          finishedAt: "2021-03-02T00:37:31Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3119595669
          inputs:
            parameters:
            - name: message
              value: task4
          name: ci-t8vq2.task4(1:context:B)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 8
            memory: 8
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3119937059:
          boundaryID: ci-t8vq2
          displayName: task4(0:context:A)
          finishedAt: "2021-03-02T00:37:34Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3119937059
          inputs:
            parameters:
            - name: message
              value: task4
          name: ci-t8vq2.task4(0:context:A)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 12
            memory: 12
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3144997424:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(31:context:N)
          finishedAt: "2021-03-02T00:37:09Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3144997424
          inputs:
            parameters:
            - name: message
              value: "N"
          name: ci-t8vq2.echo-multiple-times(31:context:N)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 29
            memory: 29
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3303001669:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(12:context:N)
          finishedAt: "2021-03-02T00:36:52Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3303001669
          inputs:
            parameters:
            - name: message
              value: "N"
          name: ci-t8vq2.echo-multiple-times(12:context:N)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 20
            memory: 20
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3744868327:
          boundaryID: ci-t8vq2
          displayName: task6(3:context:D)
          finishedAt: "2021-03-02T00:37:34Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3744868327
          inputs:
            parameters:
            - name: message
              value: task6
          name: ci-t8vq2.task6(3:context:D)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 11
            memory: 11
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3752440098:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(23:context:E)
          finishedAt: "2021-03-02T00:36:45Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3752440098
          inputs:
            parameters:
            - name: message
              value: E
          name: ci-t8vq2.echo-multiple-times(23:context:E)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 9
            memory: 9
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3806784823:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(28:context:K)
          finishedAt: "2021-03-02T00:36:50Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3806784823
          inputs:
            parameters:
            - name: message
              value: K
          name: ci-t8vq2.echo-multiple-times(28:context:K)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 15
            memory: 15
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3815346741:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(17:context:S)
          finishedAt: "2021-03-02T00:36:51Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3815346741
          inputs:
            parameters:
            - name: message
              value: S
          name: ci-t8vq2.echo-multiple-times(17:context:S)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 16
            memory: 16
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3834947628:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(9:context:K)
          finishedAt: "2021-03-02T00:36:47Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3834947628
          inputs:
            parameters:
            - name: message
              value: K
          name: ci-t8vq2.echo-multiple-times(9:context:K)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 12
            memory: 12
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3992325445:
          boundaryID: ci-t8vq2
          displayName: task6(2:context:C)
          finishedAt: "2021-03-02T00:37:27Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3992325445
          inputs:
            parameters:
            - name: message
              value: task6
          name: ci-t8vq2.task6(2:context:C)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 8
            memory: 8
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-4250566955:
          boundaryID: ci-t8vq2
          displayName: task4(4:context:E)
          finishedAt: "2021-03-02T00:37:30Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-4250566955
          inputs:
            parameters:
            - name: message
              value: task4
          name: ci-t8vq2.task4(4:context:E)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 9
            memory: 9
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
      phase: Error
      progress: 57/57
      resourcesDuration:
        cpu: 889
        memory: 889
      startedAt: "2021-03-02T00:36:25Z"
      storedTemplates:
        namespaced/base/main:
          container:
            args:
            - echo {{inputs.parameters.message}}
            command:
            - sh
            - -c
            image: curlimages/curl:7.75.0
            name: ""
            resources: {}
          inputs:
            parameters:
            - name: message
          metadata: {}
          name: main
          outputs: {}
        namespaced/ci/main:
          dag:
            tasks:
            - arguments:
                parameters:
                - name: message
                  value: task1
              name: task1
              templateRef:
                name: base
                template: main
            - arguments:
                parameters:
                - name: message
                  value: '{{item.context}}'
              name: echo-multiple-times
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
              - context: F
              - context: G
              - context: H
              - context: I
              - context: K
              - context: L
              - context: M
              - context: "N"
              - context: O
              - context: P
              - context: Q
              - context: R
              - context: S
              - context: T
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
              - context: F
              - context: G
              - context: H
              - context: I
              - context: K
              - context: L
              - context: M
              - context: "N"
              - context: O
              - context: P
              - context: Q
              - context: R
              - context: S
              - context: T
            - arguments:
                parameters:
                - name: message
                  value: task2
              dependencies:
              - task1
              name: task2
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
            - arguments:
                parameters:
                - name: message
                  value: task3
              dependencies:
              - task1
              name: task3
              templateRef:
                name: base
                template: main
            - arguments:
                parameters:
                - name: message
                  value: task4
              dependencies:
              - task1
              name: task4
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
            - arguments:
                parameters:
                - name: message
                  value: task5
              dependencies:
              - task1
              name: task5
              templateRef:
                name: base
                template: main
            - arguments:
                parameters:
                - name: message
                  value: task6
              dependencies:
              - task1
              name: task6
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
            - arguments:
                parameters:
                - name: message
                  value: task7
              dependencies:
              - task1
              name: task7
              templateRef:
                name: base
                template: main
          inputs: {}
          metadata: {}
          name: main
          outputs: {}
      storedWorkflowTemplateSpec:
        arguments: {}
        entrypoint: main
        parallelism: 1000
        serviceAccountName: argo
        templates:
        - dag:
            tasks:
            - arguments:
                parameters:
                - name: message
                  value: task1
              name: task1
              templateRef:
                name: base
                template: main
            - arguments:
                parameters:
                - name: message
                  value: '{{item.context}}'
              name: echo-multiple-times
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
              - context: F
              - context: G
              - context: H
              - context: I
              - context: K
              - context: L
              - context: M
              - context: "N"
              - context: O
              - context: P
              - context: Q
              - context: R
              - context: S
              - context: T
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
              - context: F
              - context: G
              - context: H
              - context: I
              - context: K
              - context: L
              - context: M
              - context: "N"
              - context: O
              - context: P
              - context: Q
              - context: R
              - context: S
              - context: T
            - arguments:
                parameters:
                - name: message
                  value: task2
              dependencies:
              - task1
              name: task2
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
            - arguments:
                parameters:
                - name: message
                  value: task3
              dependencies:
              - task1
              name: task3
              templateRef:
                name: base
                template: main
            - arguments:
                parameters:
                - name: message
                  value: task4
              dependencies:
              - task1
              name: task4
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
            - arguments:
                parameters:
                - name: message
                  value: task5
              dependencies:
              - task1
              name: task5
              templateRef:
                name: base
                template: main
            - arguments:
                parameters:
                - name: message
                  value: task6
              dependencies:
              - task1
              name: task6
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
            - arguments:
                parameters:
                - name: message
                  value: task7
              dependencies:
              - task1
              name: task7
              templateRef:
                name: base
                template: main
          inputs: {}
          metadata: {}
          name: main
          outputs: {}
        workflowTemplateRef:
          name: ci
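
The status dump above ends with the template spec the controller stored after resolving the `ci` WorkflowTemplate reference. As a rough sketch for collecting the same kind of diagnostics (assuming the standard `wf` short name for the Workflow CRD and the default `workflow-controller` Deployment name from the install manifests), something like the following could be used:

kubectl -n argo get wf ci-t8vq2 -o yaml
kubectl -n argo logs deploy/workflow-controller | grep workflow=ci-t8vq2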
    

    Controller logs

    time="2021-03-02T00:36:25.857Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:25.865Z" level=info msg="Updated phase  -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.010Z" level=info msg="DAG node ci-t8vq2 initialized Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.013Z" level=info msg="TaskGroup node ci-t8vq2-28546012 initialized Running (message: )" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.013Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(0:context:A) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.014Z" level=info msg="Pod node ci-t8vq2-2699866519 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.032Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(0:context:A) (ci-t8vq2-2699866519)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.032Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(1:context:B) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.035Z" level=info msg="Pod node ci-t8vq2-2921795641 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.061Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(1:context:B) (ci-t8vq2-2921795641)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.062Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(2:context:C) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.063Z" level=info msg="Pod node ci-t8vq2-935633491 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.082Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(2:context:C) (ci-t8vq2-935633491)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.082Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(3:context:D) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.087Z" level=info msg="Pod node ci-t8vq2-2460421745 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.097Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(3:context:D) (ci-t8vq2-2460421745)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.097Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(4:context:E) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.105Z" level=info msg="Pod node ci-t8vq2-1497633967 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.150Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(4:context:E) (ci-t8vq2-1497633967)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.151Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(5:context:F) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.157Z" level=info msg="Pod node ci-t8vq2-2529876561 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.184Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(5:context:F) (ci-t8vq2-2529876561)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.184Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(6:context:G) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.185Z" level=info msg="Pod node ci-t8vq2-2517646915 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.199Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(6:context:G) (ci-t8vq2-2517646915)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.199Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(7:context:H) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.199Z" level=info msg="Pod node ci-t8vq2-2594947737 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.222Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(7:context:H) (ci-t8vq2-2594947737)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.222Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(8:context:I) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.222Z" level=info msg="Pod node ci-t8vq2-2608532967 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.243Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(8:context:I) (ci-t8vq2-2608532967)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.244Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(9:context:K) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.264Z" level=info msg="Pod node ci-t8vq2-3834947628 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.274Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(9:context:K) (ci-t8vq2-3834947628)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.274Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(10:context:L) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.275Z" level=info msg="Pod node ci-t8vq2-1156673745 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.294Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(10:context:L) (ci-t8vq2-1156673745)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.294Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(11:context:M) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.294Z" level=info msg="Pod node ci-t8vq2-2620463353 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.315Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(11:context:M) (ci-t8vq2-2620463353)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.315Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(12:context:N) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.315Z" level=info msg="Pod node ci-t8vq2-3303001669 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.347Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(12:context:N) (ci-t8vq2-3303001669)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.347Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(13:context:O) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.347Z" level=info msg="Pod node ci-t8vq2-139394661 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.378Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(13:context:O) (ci-t8vq2-139394661)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.378Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(14:context:P) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.378Z" level=info msg="Pod node ci-t8vq2-1087124617 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.426Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(14:context:P) (ci-t8vq2-1087124617)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.426Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(15:context:Q) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.427Z" level=info msg="Pod node ci-t8vq2-1466005665 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.466Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(15:context:Q) (ci-t8vq2-1466005665)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.466Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(16:context:R) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.479Z" level=info msg="Pod node ci-t8vq2-3097822101 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.506Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(16:context:R) (ci-t8vq2-3097822101)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.506Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(17:context:S) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.506Z" level=info msg="Pod node ci-t8vq2-3815346741 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.554Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(17:context:S) (ci-t8vq2-3815346741)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.556Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(18:context:T) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.569Z" level=info msg="Pod node ci-t8vq2-3045333153 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.623Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(18:context:T) (ci-t8vq2-3045333153)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.623Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(19:context:A) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.624Z" level=info msg="Pod node ci-t8vq2-2135555749 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.657Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(19:context:A) (ci-t8vq2-2135555749)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.657Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(20:context:B) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.664Z" level=info msg="Pod node ci-t8vq2-1671854890 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.696Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(20:context:B) (ci-t8vq2-1671854890)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.696Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(21:context:C) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.697Z" level=info msg="Pod node ci-t8vq2-2189885778 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.711Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(21:context:C) (ci-t8vq2-2189885778)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.712Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(22:context:D) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.712Z" level=info msg="Pod node ci-t8vq2-2454096578 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.732Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(22:context:D) (ci-t8vq2-2454096578)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.732Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(23:context:E) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.732Z" level=info msg="Pod node ci-t8vq2-3752440098 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.764Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(23:context:E) (ci-t8vq2-3752440098)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.764Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(24:context:F) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.766Z" level=info msg="Pod node ci-t8vq2-1958506170 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.874Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(24:context:F) (ci-t8vq2-1958506170)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.874Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(25:context:G) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.875Z" level=info msg="Pod node ci-t8vq2-569544386 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.920Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(25:context:G) (ci-t8vq2-569544386)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.923Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(26:context:H) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.924Z" level=info msg="Pod node ci-t8vq2-743674290 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.040Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(26:context:H) (ci-t8vq2-743674290)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.040Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(27:context:I) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.042Z" level=info msg="Pod node ci-t8vq2-553546802 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.100Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(27:context:I) (ci-t8vq2-553546802)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.100Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(28:context:K) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.100Z" level=info msg="Pod node ci-t8vq2-3806784823 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.134Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(28:context:K) (ci-t8vq2-3806784823)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.134Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(29:context:L) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.138Z" level=info msg="Pod node ci-t8vq2-13895053 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.178Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(29:context:L) (ci-t8vq2-13895053)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.178Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(30:context:M) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.178Z" level=info msg="Pod node ci-t8vq2-2653728782 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.198Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(30:context:M) (ci-t8vq2-2653728782)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.207Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(31:context:N) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.208Z" level=info msg="Pod node ci-t8vq2-3144997424 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.230Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(31:context:N) (ci-t8vq2-3144997424)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.230Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(32:context:O) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.231Z" level=info msg="Pod node ci-t8vq2-2221870794 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.268Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(32:context:O) (ci-t8vq2-2221870794)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.268Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(33:context:P) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.268Z" level=info msg="Pod node ci-t8vq2-2998710952 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.298Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(33:context:P) (ci-t8vq2-2998710952)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.298Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(34:context:Q) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.298Z" level=info msg="Pod node ci-t8vq2-924300910 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.340Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(34:context:Q) (ci-t8vq2-924300910)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.340Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(35:context:R) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.340Z" level=info msg="Pod node ci-t8vq2-2687447728 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.362Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(35:context:R) (ci-t8vq2-2687447728)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.362Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(36:context:S) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.366Z" level=info msg="Pod node ci-t8vq2-441930226 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.383Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(36:context:S) (ci-t8vq2-441930226)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.383Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(37:context:T) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.397Z" level=info msg="Pod node ci-t8vq2-2027865656 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.476Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(37:context:T) (ci-t8vq2-2027865656)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.476Z" level=info msg="All of node ci-t8vq2.task1 dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.476Z" level=info msg="Pod node ci-t8vq2-212400199 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.494Z" level=info msg="Created pod: ci-t8vq2.task1 (ci-t8vq2-212400199)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.620Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=118446 workflow=ci-t8vq2
    time="2021-03-02T00:36:36.042Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.044Z" level=info msg="Updating node ci-t8vq2-1958506170 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.044Z" level=info msg="Updating node ci-t8vq2-2460421745 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.044Z" level=info msg="Updating node ci-t8vq2-2189885778 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-2620463353 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-3144997424 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-2594947737 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-3815346741 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-2608532967 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-3806784823 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-1497633967 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-2529876561 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-3303001669 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.081Z" level=info msg="Updating node ci-t8vq2-2454096578 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.081Z" level=info msg="Updating node ci-t8vq2-743674290 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.081Z" level=info msg="Updating node ci-t8vq2-2221870794 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.087Z" level=info msg="Updating node ci-t8vq2-1087124617 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.087Z" level=info msg="Updating node ci-t8vq2-3045333153 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.087Z" level=info msg="Updating node ci-t8vq2-1156673745 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.825Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=118536 workflow=ci-t8vq2
    time="2021-03-02T00:36:46.220Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.221Z" level=info msg="Updating node ci-t8vq2-212400199 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.221Z" level=info msg="Updating node ci-t8vq2-1466005665 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.221Z" level=info msg="Updating node ci-t8vq2-1671854890 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-2687447728 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-924300910 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-441930226 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-2921795641 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-3097822101 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-2027865656 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-2998710952 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-2135555749 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.223Z" level=info msg="Updating node ci-t8vq2-3834947628 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.223Z" level=info msg="Updating node ci-t8vq2-3752440098 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.223Z" level=info msg="Updating node ci-t8vq2-569544386 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.223Z" level=info msg="Updating node ci-t8vq2-2517646915 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Updating node ci-t8vq2-553546802 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Updating node ci-t8vq2-2699866519 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Setting node ci-t8vq2-935633491 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Updating node ci-t8vq2-935633491 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Updating node ci-t8vq2-2653728782 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Updating node ci-t8vq2-13895053 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Updating node ci-t8vq2-139394661 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.418Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=118629 workflow=ci-t8vq2
    time="2021-03-02T00:36:56.412Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Updating node ci-t8vq2-1958506170 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Updating node ci-t8vq2-13895053 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Updating node ci-t8vq2-2620463353 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Updating node ci-t8vq2-139394661 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Updating node ci-t8vq2-1671854890 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Setting node ci-t8vq2-2529876561 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Updating node ci-t8vq2-2529876561 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Setting node ci-t8vq2-2189885778 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.414Z" level=info msg="Updating node ci-t8vq2-2189885778 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.414Z" level=info msg="Setting node ci-t8vq2-2699866519 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.414Z" level=info msg="Updating node ci-t8vq2-2699866519 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.414Z" level=info msg="Setting node ci-t8vq2-3303001669 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.414Z" level=info msg="Updating node ci-t8vq2-3303001669 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.416Z" level=info msg="Setting node ci-t8vq2-2594947737 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.416Z" level=info msg="Updating node ci-t8vq2-2594947737 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.416Z" level=info msg="Updating node ci-t8vq2-3815346741 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.416Z" level=info msg="Updating node ci-t8vq2-2608532967 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.416Z" level=info msg="Updating node ci-t8vq2-3834947628 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.416Z" level=info msg="Updating node ci-t8vq2-1466005665 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.417Z" level=info msg="Setting node ci-t8vq2-2460421745 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.417Z" level=info msg="Updating node ci-t8vq2-2460421745 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.417Z" level=info msg="Updating node ci-t8vq2-3045333153 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Updating node ci-t8vq2-1497633967 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Updating node ci-t8vq2-3806784823 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Setting node ci-t8vq2-743674290 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Updating node ci-t8vq2-743674290 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Updating node ci-t8vq2-1156673745 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Updating node ci-t8vq2-2135555749 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Updating node ci-t8vq2-1087124617 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.419Z" level=info msg="Updating node ci-t8vq2-2221870794 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.564Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=118736 workflow=ci-t8vq2
    time="2021-03-02T00:36:56.620Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1087124617/labelPodCompleted
    time="2021-03-02T00:36:56.620Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2529876561/labelPodCompleted
    time="2021-03-02T00:37:06.639Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.639Z" level=info msg="Setting node ci-t8vq2-2687447728 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.639Z" level=info msg="Updating node ci-t8vq2-2687447728 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.639Z" level=info msg="Updating node ci-t8vq2-2620463353 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.642Z" level=info msg="Updating node ci-t8vq2-2921795641 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.643Z" level=info msg="Updating node ci-t8vq2-935633491 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.644Z" level=info msg="Updating node ci-t8vq2-139394661 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.644Z" level=info msg="Updating node ci-t8vq2-1466005665 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.645Z" level=info msg="Setting node ci-t8vq2-924300910 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.645Z" level=info msg="Updating node ci-t8vq2-924300910 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.645Z" level=info msg="Updating node ci-t8vq2-3806784823 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.647Z" level=info msg="Setting node ci-t8vq2-2653728782 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.647Z" level=info msg="Updating node ci-t8vq2-2653728782 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.648Z" level=info msg="Setting node ci-t8vq2-2998710952 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.648Z" level=info msg="Updating node ci-t8vq2-2998710952 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.649Z" level=info msg="Setting node ci-t8vq2-2454096578 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.650Z" level=info msg="Updating node ci-t8vq2-2454096578 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.651Z" level=info msg="Setting node ci-t8vq2-2027865656 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.651Z" level=info msg="Updating node ci-t8vq2-2027865656 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.653Z" level=info msg="Setting node ci-t8vq2-553546802 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.653Z" level=info msg="Updating node ci-t8vq2-553546802 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.655Z" level=info msg="Updating node ci-t8vq2-1958506170 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.655Z" level=info msg="Updating node ci-t8vq2-2460421745 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.655Z" level=info msg="Setting node ci-t8vq2-569544386 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.656Z" level=info msg="Updating node ci-t8vq2-569544386 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.656Z" level=info msg="Updating node ci-t8vq2-3097822101 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.656Z" level=info msg="Updating node ci-t8vq2-3752440098 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.756Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=118850 workflow=ci-t8vq2
    time="2021-03-02T00:37:06.778Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2620463353/labelPodCompleted
    time="2021-03-02T00:37:06.778Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-139394661/labelPodCompleted
    time="2021-03-02T00:37:06.779Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1466005665/labelPodCompleted
    time="2021-03-02T00:37:06.780Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2460421745/labelPodCompleted
    time="2021-03-02T00:37:06.860Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3752440098/labelPodCompleted
    time="2021-03-02T00:37:06.862Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2921795641/labelPodCompleted
    time="2021-03-02T00:37:06.878Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-935633491/labelPodCompleted
    time="2021-03-02T00:37:06.881Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3806784823/labelPodCompleted
    time="2021-03-02T00:37:06.909Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1958506170/labelPodCompleted
    time="2021-03-02T00:37:06.910Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3097822101/labelPodCompleted
    time="2021-03-02T00:37:16.763Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.764Z" level=info msg="Updating node ci-t8vq2-1671854890 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.765Z" level=info msg="Updating node ci-t8vq2-2189885778 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.765Z" level=info msg="Updating node ci-t8vq2-3303001669 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.765Z" level=info msg="Setting node ci-t8vq2-441930226 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.766Z" level=info msg="Updating node ci-t8vq2-441930226 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.766Z" level=info msg="Updating node ci-t8vq2-924300910 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.766Z" level=info msg="Updating node ci-t8vq2-2608532967 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.766Z" level=info msg="Updating node ci-t8vq2-2454096578 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.767Z" level=info msg="Updating node ci-t8vq2-2517646915 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.767Z" level=info msg="Updating node ci-t8vq2-2687447728 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.767Z" level=info msg="Updating node ci-t8vq2-2221870794 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.768Z" level=info msg="Setting node ci-t8vq2-3144997424 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.768Z" level=info msg="Updating node ci-t8vq2-3144997424 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.768Z" level=info msg="Setting node ci-t8vq2-212400199 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.768Z" level=info msg="Updating node ci-t8vq2-212400199 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.768Z" level=info msg="Updating node ci-t8vq2-553546802 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.769Z" level=info msg="Updating node ci-t8vq2-1156673745 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.769Z" level=info msg="Updating node ci-t8vq2-2594947737 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.770Z" level=info msg="Updating node ci-t8vq2-569544386 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.770Z" level=info msg="Updating node ci-t8vq2-2135555749 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.771Z" level=info msg="Updating node ci-t8vq2-3834947628 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.771Z" level=info msg="Updating node ci-t8vq2-2998710952 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.772Z" level=info msg="Updating node ci-t8vq2-13895053 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.772Z" level=info msg="Updating node ci-t8vq2-2653728782 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.772Z" level=info msg="Updating node ci-t8vq2-2699866519 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.772Z" level=info msg="Updating node ci-t8vq2-2027865656 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.798Z" level=info msg="TaskGroup node ci-t8vq2-229177818 initialized Running (message: )" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.798Z" level=info msg="All of node ci-t8vq2.task2(0:context:A) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.800Z" level=info msg="Pod node ci-t8vq2-857446365 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.819Z" level=info msg="Created pod: ci-t8vq2.task2(0:context:A) (ci-t8vq2-857446365)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.820Z" level=info msg="All of node ci-t8vq2.task2(1:context:B) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.821Z" level=info msg="Pod node ci-t8vq2-1089779475 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.845Z" level=info msg="Created pod: ci-t8vq2.task2(1:context:B) (ci-t8vq2-1089779475)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.845Z" level=info msg="All of node ci-t8vq2.task2(2:context:C) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.849Z" level=info msg="Pod node ci-t8vq2-1533362641 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.889Z" level=info msg="Created pod: ci-t8vq2.task2(2:context:C) (ci-t8vq2-1533362641)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.890Z" level=info msg="All of node ci-t8vq2.task2(3:context:D) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.894Z" level=info msg="Pod node ci-t8vq2-2193859387 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.910Z" level=info msg="Created pod: ci-t8vq2.task2(3:context:D) (ci-t8vq2-2193859387)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.911Z" level=info msg="All of node ci-t8vq2.task2(4:context:E) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.914Z" level=info msg="Pod node ci-t8vq2-2924122029 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.936Z" level=info msg="Created pod: ci-t8vq2.task2(4:context:E) (ci-t8vq2-2924122029)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.936Z" level=info msg="All of node ci-t8vq2.task3 dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.937Z" level=info msg="Pod node ci-t8vq2-245955437 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.970Z" level=info msg="Created pod: ci-t8vq2.task3 (ci-t8vq2-245955437)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.971Z" level=info msg="TaskGroup node ci-t8vq2-128512104 initialized Running (message: )" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.971Z" level=info msg="All of node ci-t8vq2.task4(0:context:A) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.972Z" level=info msg="Pod node ci-t8vq2-3119937059 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.015Z" level=info msg="Created pod: ci-t8vq2.task4(0:context:A) (ci-t8vq2-3119937059)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.015Z" level=info msg="All of node ci-t8vq2.task4(1:context:B) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.016Z" level=info msg="Pod node ci-t8vq2-3119595669 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.039Z" level=info msg="Created pod: ci-t8vq2.task4(1:context:B) (ci-t8vq2-3119595669)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.039Z" level=info msg="All of node ci-t8vq2.task4(2:context:C) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.040Z" level=info msg="Pod node ci-t8vq2-418065567 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.069Z" level=info msg="Created pod: ci-t8vq2.task4(2:context:C) (ci-t8vq2-418065567)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.070Z" level=info msg="All of node ci-t8vq2.task4(3:context:D) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.072Z" level=info msg="Pod node ci-t8vq2-2120215045 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.103Z" level=info msg="Created pod: ci-t8vq2.task4(3:context:D) (ci-t8vq2-2120215045)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.104Z" level=info msg="All of node ci-t8vq2.task4(4:context:E) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.104Z" level=info msg="Pod node ci-t8vq2-4250566955 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.124Z" level=info msg="Created pod: ci-t8vq2.task4(4:context:E) (ci-t8vq2-4250566955)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.125Z" level=info msg="All of node ci-t8vq2.task5 dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.127Z" level=info msg="Pod node ci-t8vq2-145289723 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.171Z" level=info msg="Created pod: ci-t8vq2.task5 (ci-t8vq2-145289723)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.175Z" level=info msg="TaskGroup node ci-t8vq2-162067342 initialized Running (message: )" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.175Z" level=info msg="All of node ci-t8vq2.task6(0:context:A) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.176Z" level=info msg="Pod node ci-t8vq2-1642984369 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.262Z" level=info msg="Created pod: ci-t8vq2.task6(0:context:A) (ci-t8vq2-1642984369)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.262Z" level=info msg="All of node ci-t8vq2.task6(1:context:B) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.274Z" level=info msg="Pod node ci-t8vq2-1325696727 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.337Z" level=info msg="Created pod: ci-t8vq2.task6(1:context:B) (ci-t8vq2-1325696727)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.337Z" level=info msg="All of node ci-t8vq2.task6(2:context:C) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.340Z" level=info msg="Pod node ci-t8vq2-3992325445 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.423Z" level=info msg="Created pod: ci-t8vq2.task6(2:context:C) (ci-t8vq2-3992325445)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.423Z" level=info msg="All of node ci-t8vq2.task6(3:context:D) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.437Z" level=info msg="Pod node ci-t8vq2-3744868327 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.497Z" level=info msg="Created pod: ci-t8vq2.task6(3:context:D) (ci-t8vq2-3744868327)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.497Z" level=info msg="All of node ci-t8vq2.task6(4:context:E) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.518Z" level=info msg="Pod node ci-t8vq2-1719834657 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.587Z" level=info msg="Created pod: ci-t8vq2.task6(4:context:E) (ci-t8vq2-1719834657)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.587Z" level=info msg="All of node ci-t8vq2.task7 dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.587Z" level=info msg="Pod node ci-t8vq2-178844961 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.607Z" level=info msg="Created pod: ci-t8vq2.task7 (ci-t8vq2-178844961)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.718Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=119020 workflow=ci-t8vq2
    time="2021-03-02T00:37:17.779Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2454096578/labelPodCompleted
    time="2021-03-02T00:37:17.785Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2687447728/labelPodCompleted
    time="2021-03-02T00:37:17.789Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2221870794/labelPodCompleted
    time="2021-03-02T00:37:17.795Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-924300910/labelPodCompleted
    time="2021-03-02T00:37:17.831Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2608532967/labelPodCompleted
    time="2021-03-02T00:37:17.848Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-553546802/labelPodCompleted
    time="2021-03-02T00:37:17.868Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2135555749/labelPodCompleted
    time="2021-03-02T00:37:17.868Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3834947628/labelPodCompleted
    time="2021-03-02T00:37:17.910Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-13895053/labelPodCompleted
    time="2021-03-02T00:37:17.911Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2699866519/labelPodCompleted
    time="2021-03-02T00:37:17.976Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1671854890/labelPodCompleted
    time="2021-03-02T00:37:17.977Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2189885778/labelPodCompleted
    time="2021-03-02T00:37:17.986Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3303001669/labelPodCompleted
    time="2021-03-02T00:37:18.022Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-441930226/labelPodCompleted
    time="2021-03-02T00:37:18.056Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2594947737/labelPodCompleted
    time="2021-03-02T00:37:18.073Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2517646915/labelPodCompleted
    time="2021-03-02T00:37:18.103Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1156673745/labelPodCompleted
    time="2021-03-02T00:37:18.074Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-212400199/labelPodCompleted
    time="2021-03-02T00:37:18.105Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-569544386/labelPodCompleted
    time="2021-03-02T00:37:18.120Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2998710952/labelPodCompleted
    time="2021-03-02T00:37:18.182Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2653728782/labelPodCompleted
    time="2021-03-02T00:37:18.204Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2027865656/labelPodCompleted
    time="2021-03-02T00:37:26.829Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-743674290 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-857446365 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-1089779475 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-1533362641 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-1497633967 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-1642984369 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-1719834657 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.832Z" level=info msg="Updating node ci-t8vq2-2193859387 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.832Z" level=info msg="Updating node ci-t8vq2-2924122029 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.832Z" level=info msg="Updating node ci-t8vq2-3992325445 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.832Z" level=info msg="Updating node ci-t8vq2-3144997424 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.832Z" level=info msg="Updating node ci-t8vq2-1325696727 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-3815346741 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-4250566955 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-3744868327 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-145289723 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-3119595669 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-245955437 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-2120215045 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-3045333153 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-418065567 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.834Z" level=info msg="Updating node ci-t8vq2-3119937059 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.834Z" level=info msg="Updating node ci-t8vq2-178844961 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.904Z" level=info msg="node ci-t8vq2-28546012 phase Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.915Z" level=info msg="node ci-t8vq2-28546012 finished: 2021-03-02 00:37:26.9150578 +0000 UTC" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:27.012Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=119139 workflow=ci-t8vq2
    time="2021-03-02T00:37:27.050Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-743674290/labelPodCompleted
    time="2021-03-02T00:37:27.060Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1497633967/labelPodCompleted
    time="2021-03-02T00:37:27.061Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3144997424/labelPodCompleted
    time="2021-03-02T00:37:27.063Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3815346741/labelPodCompleted
    time="2021-03-02T00:37:27.115Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3045333153/labelPodCompleted
    time="2021-03-02T00:37:37.073Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.073Z" level=info msg="Setting node ci-t8vq2-3744868327 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.073Z" level=info msg="Updating node ci-t8vq2-3744868327 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.073Z" level=info msg="Setting node ci-t8vq2-1325696727 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.073Z" level=info msg="Updating node ci-t8vq2-1325696727 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Updating node ci-t8vq2-1642984369 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Setting node ci-t8vq2-245955437 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Updating node ci-t8vq2-245955437 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Setting node ci-t8vq2-1089779475 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Updating node ci-t8vq2-1089779475 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Setting node ci-t8vq2-3119937059 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Updating node ci-t8vq2-3119937059 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Updating node ci-t8vq2-418065567 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Setting node ci-t8vq2-1719834657 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Updating node ci-t8vq2-1719834657 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Setting node ci-t8vq2-178844961 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Updating node ci-t8vq2-178844961 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Pod failed: Error (exit code 1): Could not get container status" displayName="task2(2:context:C)" namespace=argo pod=ci-t8vq2-1533362641 templateName= workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Updating node ci-t8vq2-1533362641 status Running -> Error" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Updating node ci-t8vq2-1533362641 message: Error (exit code 1): Could not get container status" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Updating node ci-t8vq2-2924122029 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Setting node ci-t8vq2-2193859387 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Updating node ci-t8vq2-2193859387 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Setting node ci-t8vq2-145289723 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Updating node ci-t8vq2-145289723 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Setting node ci-t8vq2-4250566955 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Updating node ci-t8vq2-4250566955 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Setting node ci-t8vq2-3119595669 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Updating node ci-t8vq2-3119595669 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Setting node ci-t8vq2-857446365 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Updating node ci-t8vq2-857446365 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Updating node ci-t8vq2-3992325445 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Updating node ci-t8vq2-2120215045 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.141Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=119254 workflow=ci-t8vq2
    time="2021-03-02T00:37:37.161Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1533362641/labelPodCompleted
    time="2021-03-02T00:37:37.161Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2924122029/labelPodCompleted
    time="2021-03-02T00:37:37.162Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3992325445/labelPodCompleted
    time="2021-03-02T00:37:37.163Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2120215045/labelPodCompleted
    time="2021-03-02T00:37:37.198Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-418065567/labelPodCompleted
    time="2021-03-02T00:37:47.113Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.115Z" level=info msg="Updating node ci-t8vq2-3119595669 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.115Z" level=info msg="Updating node ci-t8vq2-3744868327 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.116Z" level=info msg="Updating node ci-t8vq2-1642984369 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.117Z" level=info msg="Updating node ci-t8vq2-2193859387 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.118Z" level=info msg="Updating node ci-t8vq2-4250566955 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.118Z" level=info msg="Updating node ci-t8vq2-1325696727 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.119Z" level=info msg="Updating node ci-t8vq2-857446365 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.119Z" level=info msg="Updating node ci-t8vq2-245955437 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.121Z" level=info msg="Updating node ci-t8vq2-1089779475 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.123Z" level=info msg="Updating node ci-t8vq2-178844961 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.123Z" level=info msg="Updating node ci-t8vq2-145289723 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.124Z" level=info msg="Updating node ci-t8vq2-3119937059 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.124Z" level=info msg="Updating node ci-t8vq2-1719834657 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.136Z" level=info msg="node ci-t8vq2-229177818 phase Running -> Error" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.136Z" level=info msg="node ci-t8vq2-229177818 finished: 2021-03-02 00:37:47.1368811 +0000 UTC" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.138Z" level=info msg="node ci-t8vq2-128512104 phase Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.138Z" level=info msg="node ci-t8vq2-128512104 finished: 2021-03-02 00:37:47.1385009 +0000 UTC" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.140Z" level=info msg="node ci-t8vq2-162067342 phase Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.140Z" level=info msg="node ci-t8vq2-162067342 finished: 2021-03-02 00:37:47.1409097 +0000 UTC" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.141Z" level=info msg="Outbound nodes of ci-t8vq2 set to [ci-t8vq2-2699866519 ci-t8vq2-2921795641 ci-t8vq2-935633491 ci-t8vq2-2460421745 ci-t8vq2-1497633967 ci-t8vq2-2529876561 ci-t8vq2-2517646915 ci-t8vq2-2594947737 ci-t8vq2-2608532967 ci-t8vq2-3834947628 ci-t8vq2-1156673745 ci-t8vq2-2620463353 ci-t8vq2-3303001669 ci-t8vq2-139394661 ci-t8vq2-1087124617 ci-t8vq2-1466005665 ci-t8vq2-3097822101 ci-t8vq2-3815346741 ci-t8vq2-3045333153 ci-t8vq2-2135555749 ci-t8vq2-1671854890 ci-t8vq2-2189885778 ci-t8vq2-2454096578 ci-t8vq2-3752440098 ci-t8vq2-1958506170 ci-t8vq2-569544386 ci-t8vq2-743674290 ci-t8vq2-553546802 ci-t8vq2-3806784823 ci-t8vq2-13895053 ci-t8vq2-2653728782 ci-t8vq2-3144997424 ci-t8vq2-2221870794 ci-t8vq2-2998710952 ci-t8vq2-924300910 ci-t8vq2-2687447728 ci-t8vq2-441930226 ci-t8vq2-2027865656 ci-t8vq2-857446365 ci-t8vq2-1089779475 ci-t8vq2-1533362641 ci-t8vq2-2193859387 ci-t8vq2-2924122029 ci-t8vq2-245955437 ci-t8vq2-3119937059 ci-t8vq2-3119595669 ci-t8vq2-418065567 ci-t8vq2-2120215045 ci-t8vq2-4250566955 ci-t8vq2-145289723 ci-t8vq2-1642984369 ci-t8vq2-1325696727 ci-t8vq2-3992325445 ci-t8vq2-3744868327 ci-t8vq2-1719834657 ci-t8vq2-178844961]" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.141Z" level=info msg="node ci-t8vq2 phase Running -> Error" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.141Z" level=info msg="node ci-t8vq2 finished: 2021-03-02 00:37:47.1418331 +0000 UTC" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.142Z" level=info msg="Checking daemoned children of ci-t8vq2" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.142Z" level=info msg="Updated phase Running -> Error" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.142Z" level=info msg="Marking workflow completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.142Z" level=info msg="Checking daemoned children of " namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.182Z" level=info msg="Workflow update successful" namespace=argo phase=Error resourceVersion=119304 workflow=ci-t8vq2
    time="2021-03-02T00:37:47.206Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1642984369/labelPodCompleted
    time="2021-03-02T00:37:47.207Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1325696727/labelPodCompleted
    time="2021-03-02T00:37:47.206Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2193859387/labelPodCompleted
    time="2021-03-02T00:37:47.206Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-4250566955/labelPodCompleted
    time="2021-03-02T00:37:47.241Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-857446365/labelPodCompleted
    time="2021-03-02T00:37:47.264Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-245955437/labelPodCompleted
    time="2021-03-02T00:37:47.274Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3119595669/labelPodCompleted
    time="2021-03-02T00:37:47.305Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3744868327/labelPodCompleted
    time="2021-03-02T00:37:47.313Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-145289723/labelPodCompleted
    time="2021-03-02T00:37:47.320Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3119937059/labelPodCompleted
    time="2021-03-02T00:37:47.335Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1719834657/labelPodCompleted
    time="2021-03-02T00:37:47.349Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1089779475/labelPodCompleted
    time="2021-03-02T00:37:47.349Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-178844961/labelPodCompleted
    

    kubectl logs ci-t8vq2-1533362641 -n argo -c wait

    time="2021-03-02T00:37:22.325Z" level=info msg="secured root for pid 24 root: runc:[2:INIT]"
    time="2021-03-02T00:37:22.327Z" level=info msg="mapped pid 24 to container ID \"42af41ccd93a5ced82d44e0230c07f3cd0592fc368e3694ee8ff400ce257a0b4\""
    time="2021-03-02T00:37:22.329Z" level=info msg="mapped container name \"main\" to container ID \"42af41ccd93a5ced82d44e0230c07f3cd0592fc368e3694ee8ff400ce257a0b4\" and pid 24"
    time="2021-03-02T00:37:23.568Z" level=info msg="Waiting for \"main\" pid 24 to complete"
    time="2021-03-02T00:37:23.568Z" level=info msg="\"main\" pid 24 completed"
    time="2021-03-02T00:37:23.568Z" level=info msg="Main container completed"
    time="2021-03-02T00:37:23.568Z" level=info msg="No Script output reference in workflow. Capturing script output ignored"
    time="2021-03-02T00:37:23.569Z" level=info msg="Capturing script exit code"
    time="2021-03-02T00:37:23.569Z" level=info msg="Getting exit code of main"
    time="2021-03-02T00:37:23.612Z" level=info msg="Get pods 200"
    time="2021-03-02T00:37:23.799Z" level=error msg="executor error: Could not get container status\ngithub.com/argoproj/argo-workflows/v3/errors.Wrap\n\t/go/src/github.com/argoproj/argo-workflows/errors/errors.go:88\ngithub.com/argoproj/argo-workflows/v3/errors.InternalWrapError\n\t/go/src/github.com/argoproj/argo-workflows/errors/errors.go:73\ngithub.com/argoproj/argo-workflows/v3/workflow/executor/k8sapi.(*K8sAPIExecutor).GetExitCode\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/k8sapi/k8sapi.go:50\ngithub.com/argoproj/argo-workflows/v3/workflow/executor.(*WorkflowExecutor).CaptureScriptExitCode\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/executor.go:714\ngithub.com/argoproj/argo-workflows/v3/cmd/argoexec/commands.waitContainer\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/commands/wait.go:55\ngithub.com/argoproj/argo-workflows/v3/cmd/argoexec/commands.NewWaitCommand.func1\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/commands/wait.go:18\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:846\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:950\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:887\nmain.main\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/main.go:14\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1374"
    time="2021-03-02T00:37:23.799Z" level=info msg="Alloc=6303 TotalAlloc=11141 Sys=73041 NumGC=3 Goroutines=10"
    time="2021-03-02T00:37:23.812Z" level=fatal msg="Could not get container status\ngithub.com/argoproj/argo-workflows/v3/errors.Wrap\n\t/go/src/github.com/argoproj/argo-workflows/errors/errors.go:88\ngithub.com/argoproj/argo-workflows/v3/errors.InternalWrapError\n\t/go/src/github.com/argoproj/argo-workflows/errors/errors.go:73\ngithub.com/argoproj/argo-workflows/v3/workflow/executor/k8sapi.(*K8sAPIExecutor).GetExitCode\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/k8sapi/k8sapi.go:50\ngithub.com/argoproj/argo-workflows/v3/workflow/executor.(*WorkflowExecutor).CaptureScriptExitCode\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/executor.go:714\ngithub.com/argoproj/argo-workflows/v3/cmd/argoexec/commands.waitContainer\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/commands/wait.go:55\ngithub.com/argoproj/argo-workflows/v3/cmd/argoexec/commands.NewWaitCommand.func1\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/commands/wait.go:18\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:846\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:950\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:887\nmain.main\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/main.go:14\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1374"
    

    Message from the maintainers:

    Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.

    bug regression 
    opened by caueasantos 43
  • OOM error not caught by the `emissary` executor forcing the workflow to hang in "Running" state

    OOM error not caught by the `emissary` executor forcing the workflow to hang in "Running" state

    I am opening a new issue, but you can check https://github.com/argoproj/argo-workflows/issues/8456#issuecomment-1120206141 for context.


    The error below has been reproduced on master (07/05/2022), 3.3.5, and 3.2.11.

    When a workflow gets OOM-killed by K8s, the emissary executor sometimes fails to detect it. As a consequence, the workflow hangs in the "Running" state forever.

    The error does not happen when using the pns or docker executor. This is a major regression for us since the previous executors were working just fine. For now, we are falling back to docker.

    I have been able to make Argo detect the killed process by manually sshing into the pod and killing the /var/run/argo/argoexec emissary -- bash --login /argo/staging/script process (sending a SIGTERM signal). When I do that, the main container is killed immediately, as is the workflow. The workflow is correctly marked as failed with the expected OOMKilled (exit code 137) error (the same error seen when using the pns and docker executors).
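
    To make the manual workaround above concrete, here is a minimal sketch. The pod name and PID are placeholders, and whether a shell, ps, and grep are available inside the main container depends on the image, so treat this as an illustration rather than a supported procedure:

    # exec into the affected workflow pod (placeholder name)
    kubectl exec -it -n argo <workflow-pod-name> -c main -- sh

    # inside the container: locate the emissary wrapper process and send it SIGTERM
    ps -ef | grep 'argoexec emissary'
    kill -TERM <emissary-pid>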

    Unfortunately, so far all my attempts to reproduce it using openly available code, images, and packages have been unsuccessful (I'll keep trying). I can only reproduce it using our private internal stack and images.

    The workload is deeply nested machine learning code that relies heavily on the Python and PyTorch multiprocessing and distributed modules. My guess is that some zombie child processes prevent the Argo executor or workflow controller from detecting the main container as completed.

    I will be happy to provide more information, logs, or config if it helps you make sense of this (meanwhile, on my side, I'll keep trying to build a shareable workflow that reproduces the bug).

    While this bug affects us, I am quite confident that other people running ML workloads with Python on Argo will hit it at some point.

    bug regression triage area/executor 
    opened by hadim 41
  • Workflow steps fail with a 'pod deleted' message.

    Workflow steps fail with a 'pod deleted' message.

    Summary

    Maybe related to #3381?

    Some of the workflow steps end up in the Error state with a pod deleted message. I am not sure which of the following data points are relevant, so I am listing all observations:

    • the workflow uses PodGC with strategy: OnPodSuccess (see the sketch after this list).
    • we are seeing this for ~5% of workflow steps.
    • affected steps are a part of a withItems loop
    • the workflow is not large - ~170 to 300 concurrent nodes
    • this has been observed since deploying v2.12.0rc2 yesterday, including the v2.12.0rc2 executor image. We were previously on v2.11.6 and briefly on v2.11.7, and had not seen this.
    • k8s events confirm the pods ran to completion
    • cluster scaling has been ruled out as the cause - this is observed on multiple k8s nodes, all of which are still running
    • we have not tried the same workflow without PodGC yet.
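
    For reference, a minimal sketch of the shape described above, a withItems loop combined with podGC: OnPodSuccess (the names, image, and items are illustrative, not the reporter's actual workflow):

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      generateName: podgc-withitems-
    spec:
      entrypoint: main
      podGC:
        strategy: OnPodSuccess   # delete each pod as soon as it succeeds
      templates:
      - name: main
        steps:
        - - name: work
            template: echo
            arguments:
              parameters:
              - name: item
                value: "{{item}}"
            withItems: [a, b, c]
      - name: echo
        inputs:
          parameters:
          - name: item
        container:
          image: alpine:3.7
          command: [sh, -c]
          args: ["echo {{inputs.parameters.item}}"]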

    Diagnostics

    What Kubernetes provider are you using?

    docker

    What version of Argo Workflows are you running?

    v2.12.0rc2 for all components


    Message from the maintainers:

    Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.

    bug regression 
    opened by ebr 40
  • DAG/STEPS Hang v3.0.2 - Sidecars not being killed

    DAG/STEPS Hang v3.0.2 - Sidecars not being killed

    Summary

    What happened?

    DAG tasks randomly hang
    
    Screenshot: "Screen Shot 2021-04-29 at 7 01 35 PM" (attached)

    What did you expect to happen?

    DAG tasks successfully finished
    

    Diagnostics

    What Kubernetes provider are you using?

    GKE
    Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.9-gke.1900", GitCommit:"008fd38bf3dc201bebdd4fe26edf9bf87478309a", GitTreeState:"clean", BuildDate:"2021-04-14T09:22:08Z", GoVersion:"go1.15.8b5", Compiler:"gc", Platform:"linux/amd64"}
    

    What version of Argo Workflows are you running?

    v3.0.2
    

    kubectl get wf -o yaml ${workflow}

    The workflow contains sensitive information regarding our organization; if it's important, reach out to me on CNCF Slack.

    kubectl logs -n argo $(kubectl get pods -l app=workflow-controller -n argo -o name) | grep ${workflow}

    controller-dag-hang.txt

    Wait container logs

    {},\"mirrorVolumeMounts\":true}],\"sidecars\":[{\"name\":\"mysql\",\"image\":\"mysql:5.6\",\"env\":[{\"name\":\"MYSQL_ALLOW_EMPTY_PASSWORD\",\"value\":\"true\"}],\"reso
    urces\":{},\"mirrorVolumeMounts\":true},{\"name\":\"redis\",\"image\":\"redis:alpine3.13\",\"resources\":{},\"mirrorVolumeMounts\":true},{\"name\":\"nginx\",\"image\":\
    "nginx:1.19.7-alpine\",\"resources\":{},\"mirrorVolumeMounts\":true}],\"archiveLocation\":{\"archiveLogs\":true,\"gcs\":{\"bucket\":\"7shitfs-argo-workflow-artifacts\",
    \"serviceAccountKeySecret\":{\"name\":\"devops-argo-workflow-sa\",\"key\":\"credentials.json\"},\"key\":\"argo-workflow-logs/2021/04/29/github-20979-9df1440/github-2097
    9-9df1440-2290904989\"}},\"retryStrategy\":{\"limit\":\"1\",\"retryPolicy\":\"Always\"},\"tolerations\":[{\"key\":\"node_type\",\"operator\":\"Equal\",\"value\":\"large
    \",\"effect\":\"NoSchedule\"}],\"hostAliases\":[{\"ip\":\"127.0.0.1\",\"hostnames\":[\"xyz.dev\",\"xyz.test\",\"cypress.xyz.test\",\"codeception.xyz.dev
    \"]}],\"podSpecPatch\":\"containers:\\n- name: main\\n  resources:\\n    request:\\n      memory: \\\"8Gi\\\"\\n      cpu: \\\"2\\\"\\n    limits:\\n      memory: \\\"8
    Gi\\\"\\n      cpu: \\\"2\\\"\\n- name: mysql\\n  resources:\\n    request:\\n      memory: \\\"2Gi\\\"\\n      cpu: \\\"0.5\\\"\\n    limits:\\n      memory: \\\"2Gi\\
    \"\\n      cpu: \\\"0.5\\\"\\n- name: redis\\n  resources:\\n    request:\\n      memory: \\\"50Mi\\\"\\n      cpu: \\\"0.05\\\"\\n    limits:\\n      memory: \\\"50Mi\
    \\"\\n      cpu: \\\"0.05\\\"\\n- name: nginx\\n  resources:\\n    request:\\n      memory: \\\"50Mi\\\"\\n      cpu: \\\"0.05\\\"\\n    limits:\\n      memory: \\\"50M
    i\\\"\\n      cpu: \\\"0.05\\\"\\n\",\"timeout\":\"1200s\"}"
    time="2021-04-29T22:33:05.291Z" level=info msg="Starting annotations monitor"
    time="2021-04-29T22:33:05.291Z" level=info msg="Starting deadline monitor"
    time="2021-04-29T22:33:10.299Z" level=info msg="Watch pods 200"
    time="2021-04-29T22:38:05.291Z" level=info msg="Alloc=4475 TotalAlloc=47692 Sys=75089 NumGC=15 Goroutines=10"
    time="2021-04-29T22:42:44.410Z" level=info msg="Main container completed"
    time="2021-04-29T22:42:44.410Z" level=info msg="No Script output reference in workflow. Capturing script output ignored"
    time="2021-04-29T22:42:44.410Z" level=info msg="Capturing script exit code"
    time="2021-04-29T22:42:44.410Z" level=info msg="Getting exit code of main"
    time="2021-04-29T22:42:44.413Z" level=info msg="Get pods 200"
    time="2021-04-29T22:42:44.414Z" level=info msg="Saving logs"
    time="2021-04-29T22:42:44.415Z" level=info msg="Getting output of main"
    time="2021-04-29T22:42:44.424Z" level=info msg="List log 200"
    time="2021-04-29T22:42:44.427Z" level=info msg="GCS Save path: /tmp/argo/outputs/logs/main.log, key: argo-workflow-logs/2021/04/29/github-20979-9df1440/github-20979-9df
    1440-2290904989/main.log"
    time="2021-04-29T22:42:44.763Z" level=info msg="not deleting local artifact" localArtPath=/tmp/argo/outputs/logs/main.log
    time="2021-04-29T22:42:44.763Z" level=info msg="Successfully saved file: /tmp/argo/outputs/logs/main.log"
    time="2021-04-29T22:42:44.763Z" level=info msg="No output parameters"
    time="2021-04-29T22:42:44.763Z" level=info msg="No output artifacts"
    time="2021-04-29T22:42:44.763Z" level=info msg="Annotating pod with output"
    time="2021-04-29T22:42:44.778Z" level=info msg="Patch pods 200"
    time="2021-04-29T22:42:44.779Z" level=info msg="Killing sidecars []"
    time="2021-04-29T22:42:44.779Z" level=info msg="Alloc=28577 TotalAlloc=72566 Sys=75089 NumGC=18 Goroutines=11"
    

    I've been continuously trying to upgrade our Argo Workflows version, but since 3.x.x, DAG tasks have stopped working properly. I'm currently using v2.12 with no problems at all.


    Message from the maintainers:

    Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.

    bug regression 
    opened by caueasantos 39
  • artifactory main pod log storing using workflow-controller-configmap gives nil pointer panic

    artifactory main pod log storing using workflow-controller-configmap gives nil pointer panic

    Summary

    I wanted to store the main pod logs of my steps directly in Artifactory, so I followed the (scattered) docs I found to configure an Artifactory repository in the workflow-controller-configmap, but the wait container (argoexec) crashes.
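
    For context, this is roughly the kind of configuration involved; a minimal sketch of an Artifactory artifact repository in the workflow-controller-configmap, with placeholder URL and secret names. The field names (in particular repoURL) are my best reading of the Artifactory repository type and should be double-checked against the docs for your version:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: workflow-controller-configmap
      namespace: argo
    data:
      artifactRepository: |
        archiveLogs: true            # store main container logs as artifacts
        artifactory:
          repoURL: https://artifactory.example.com/artifactory/argo-artifacts   # placeholder
          usernameSecret:
            name: artifactory-credentials   # placeholder secret
            key: username
          passwordSecret:
            name: artifactory-credentials
            key: password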

    Diagnostics

    Using docker executor

    What version of Argo Workflows are you running? 3.0.1

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      creationTimestamp: "2021-04-21T21:20:08Z"
      generateName: nested-workflow-
      generation: 5
      labels:
        workflows.argoproj.io/phase: Running
      name: nested-wf-test
      namespace: default
      resourceVersion: "150308175"
      selfLink: /apis/argoproj.io/v1alpha1/namespaces/default/workflows/nested-wf-test
      uid: 8f930f77-8a87-4915-9815-ff97b872e952
    spec:
      arguments: {}
      entrypoint: nested-workflow-example
      templates:
      - inputs: {}
        metadata: {}
        name: nested-workflow-example
        outputs: {}
        steps:
        - - arguments:
              parameters:
              - name: excluded_node
                value: ""
            continueOn:
              failed: true
            name: runtb
            template: sleepabit
        - - arguments:
              parameters:
              - name: excluded_node
                value: '{{workflow.outputs.parameters.nodename}}'
            continueOn:
              failed: true
            name: retryrun
            template: sleepabit
            when: '{{steps.runtb.status}} != Succeeded'
      - affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: NotIn
                  values:
                  - '{{inputs.parameters.excluded_node}}'
        container:
          args:
          - echo $NODE_NAME > nodename.txt && echo blablabla && sleep 5 && false
          command:
          - sh
          - -c
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          image: alpine:3.7
          name: ""
          resources: {}
        inputs:
          parameters:
          - name: excluded_node
        metadata: {}
        name: sleepabit
        outputs:
          parameters:
          - globalName: nodename
            name: nodename
            valueFrom:
              path: nodename.txt
        retryStrategy:
          limit: 2
          retryPolicy: OnError
    status:
      artifactRepositoryRef:
        default: true
      conditions:
      - status: "False"
        type: PodRunning
      finishedAt: null
      nodes:
        nested-wf-test:
          children:
          - nested-wf-test-4187779359
          displayName: nested-wf-test
          finishedAt: null
          id: nested-wf-test
          name: nested-wf-test
          phase: Running
          progress: 1/2
          startedAt: "2021-04-21T21:20:08Z"
          templateName: nested-workflow-example
          templateScope: local/nested-wf-test
          type: Steps
        nested-wf-test-73877493:
          boundaryID: nested-wf-test
          children:
          - nested-wf-test-1033837250
          displayName: runtb(0)
          finishedAt: "2021-04-21T21:20:24Z"
          hostNodeName: dev11-gsn107-k8s-med-worker-1
          id: nested-wf-test-73877493
          inputs:
            parameters:
            - name: excluded_node
              value: ""
          message: Error (exit code 1)
          name: nested-wf-test[0].runtb(0)
          phase: Failed
          progress: 1/1
          resourcesDuration:
            cpu: 10
            memory: 10
          startedAt: "2021-04-21T21:20:08Z"
          templateName: sleepabit
          templateScope: local/nested-wf-test
          type: Pod
        nested-wf-test-1033837250:
          boundaryID: nested-wf-test
          children:
          - nested-wf-test-3144780967
          displayName: '[1]'
          finishedAt: null
          id: nested-wf-test-1033837250
          name: nested-wf-test[1]
          phase: Running
          progress: 0/1
          startedAt: "2021-04-21T21:20:27Z"
          templateScope: local/nested-wf-test
          type: StepGroup
        nested-wf-test-1133525750:
          boundaryID: nested-wf-test
          children:
          - nested-wf-test-73877493
          displayName: runtb
          finishedAt: "2021-04-21T21:20:27Z"
          id: nested-wf-test-1133525750
          inputs:
            parameters:
            - name: excluded_node
              value: ""
          message: Error (exit code 1)
          name: nested-wf-test[0].runtb
          phase: Failed
          progress: 1/2
          resourcesDuration:
            cpu: 10
            memory: 10
          startedAt: "2021-04-21T21:20:08Z"
          templateName: sleepabit
          templateScope: local/nested-wf-test
          type: Retry
        nested-wf-test-3144780967:
          boundaryID: nested-wf-test
          children:
          - nested-wf-test-3241021146
          displayName: retryrun
          finishedAt: null
          id: nested-wf-test-3144780967
          inputs:
            parameters:
            - name: excluded_node
              value: '{{workflow.outputs.parameters.nodename}}'
          name: nested-wf-test[1].retryrun
          phase: Running
          progress: 0/1
          startedAt: "2021-04-21T21:20:27Z"
          templateName: sleepabit
          templateScope: local/nested-wf-test
          type: Retry
        nested-wf-test-3241021146:
          boundaryID: nested-wf-test
          displayName: retryrun(0)
          finishedAt: null
          id: nested-wf-test-3241021146
          inputs:
            parameters:
            - name: excluded_node
              value: '{{workflow.outputs.parameters.nodename}}'
          message: 'Unschedulable: 0/72 nodes are available: 72 node(s) didn''t match
            node selector.'
          name: nested-wf-test[1].retryrun(0)
          phase: Pending
          progress: 0/1
          startedAt: "2021-04-21T21:20:27Z"
          templateName: sleepabit
          templateScope: local/nested-wf-test
          type: Pod
        nested-wf-test-4187779359:
          boundaryID: nested-wf-test
          children:
          - nested-wf-test-1133525750
          displayName: '[0]'
          finishedAt: "2021-04-21T21:20:27Z"
          id: nested-wf-test-4187779359
          name: nested-wf-test[0]
          phase: Succeeded
          progress: 1/2
          resourcesDuration:
            cpu: 10
            memory: 10
          startedAt: "2021-04-21T21:20:08Z"
          templateScope: local/nested-wf-test
          type: StepGroup
      phase: Running
      progress: 1/2
      resourcesDuration:
        cpu: 10
        memory: 10
      startedAt: "2021-04-21T21:20:08Z"
    
    

    I then recompiled argoexec to add some extra prints to see the various variables in use:

    kubectl logs nested-wf-test-73877493 -c wait -f
    time="2021-04-21T21:20:19.230Z" level=info msg="Starting Workflow Executor" version="{untagged 2021-04-21T21:06:04Z 46221c5c901ce3df1ce3144cf6d54705c1e8eb04 untagged clean go1.15.7 gc linux/amd64}"
    I0421 21:20:19.231385       1 merged_client_builder.go:121] Using in-cluster configuration
    I0421 21:20:19.231668       1 merged_client_builder.go:163] Using in-cluster namespace
    time="2021-04-21T21:20:19.235Z" level=info msg="Creating a docker executor"
    time="2021-04-21T21:20:19.235Z" level=info msg="Executor (version: untagged, build_date: 2021-04-21T21:06:04Z) initialized (pod: default/nested-wf-test-73877493) with template:\n{\"name\":\"sleepabit\",\"inputs\":{\"parameters\":[{\"name\":\"excluded_node\",\"value\":\"\"}]},\"outputs\":{\"parameters\":[{\"name\":\"nodename\",\"valueFrom\":{\"path\":\"nodename.txt\"},\"globalName\":\"nodename\"}]},\"affinity\":{\"nodeAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":{\"nodeSelectorTerms\":[{\"matchExpressions\":[{\"key\":\"kubernetes.io/hostname\",\"operator\":\"NotIn\",\"values\":[\"\"]}]}]}}},\"metadata\":{},\"container\":{\"name\":\"\",\"image\":\"alpine:3.7\",\"command\":[\"sh\",\"-c\"],\"args\":[\"echo $NODE_NAME \\u003e nodename.txt \\u0026\\u0026 echo blablabla \\u0026\\u0026 sleep 5 \\u0026\\u0026 false\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"resources\":{}},\"archiveLocation\":{\"archiveLogs\":true,\"artifactory\":{\"url\":\"http://artifactory-espoo1.int.net.nokia.com/artifactory/fixedaccess-sw-rpm-local/nested-wf-test/nested-wf-test-73877493\",\"usernameSecret\":{\"name\":\"artifactory-sandbox\",\"key\":\"username\"},\"passwordSecret\":{\"name\":\"artifactory-sandbox\",\"key\":\"password\"}}},\"retryStrategy\":{\"limit\":2,\"retryPolicy\":\"OnError\"}}"
    time="2021-04-21T21:20:19.235Z" level=info msg="Starting annotations monitor"
    time="2021-04-21T21:20:19.235Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=nested-wf-test-73877493"
    time="2021-04-21T21:20:19.235Z" level=info msg="Starting deadline monitor"
    time="2021-04-21T21:20:19.280Z" level=info msg="mapped container name \"main\" to container ID \"7c3b4ec01ec4f5015110f7307aaba01caf68d7a90dd20385ea13a95831c2d530\" (created at 2021-04-21 21:20:19 +0000 UTC, status Created)"
    time="2021-04-21T21:20:19.280Z" level=info msg="mapped container name \"wait\" to container ID \"b5004cf8e03affa2199aead9ea5767d22a799490e9d24ce9d5066a3376eabd9b\" (created at 2021-04-21 21:20:19 +0000 UTC, status Up)"
    time="2021-04-21T21:20:20.235Z" level=info msg="docker wait 7c3b4ec01ec4f5015110f7307aaba01caf68d7a90dd20385ea13a95831c2d530"
    time="2021-04-21T21:20:20.280Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=nested-wf-test-73877493"
    time="2021-04-21T21:20:21.314Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=nested-wf-test-73877493"
    time="2021-04-21T21:20:22.350Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=nested-wf-test-73877493"
    time="2021-04-21T21:20:23.384Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=nested-wf-test-73877493"
    time="2021-04-21T21:20:24.420Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=nested-wf-test-73877493"
    time="2021-04-21T21:20:24.560Z" level=info msg="Main container completed"
    time="2021-04-21T21:20:24.560Z" level=info msg="No Script output reference in workflow. Capturing script output ignored"
    time="2021-04-21T21:20:24.560Z" level=info msg="Capturing script exit code"
    time="2021-04-21T21:20:24.595Z" level=info msg="Saving logs"
    time="2021-04-21T21:20:24.595Z" level=info msg="[docker logs 7c3b4ec01ec4f5015110f7307aaba01caf68d7a90dd20385ea13a95831c2d530]"
    time="2021-04-21T21:20:24.630Z" level=info msg="art: &Artifact{Name:main-logs,Path:,Mode:nil,From:,ArtifactLocation:ArtifactLocation{ArchiveLogs:nil,S3:nil,Git:nil,HTTP:nil,Artifactory:&ArtifactoryArtifact{URL:/artifactory/fixedaccess-sw-rpm-local/nested-wf-test/nested-wf-test-73877493/main.log,ArtifactoryAuth:ArtifactoryAuth{UsernameSecret:nil,PasswordSecret:nil,},},HDFS:nil,Raw:nil,OSS:nil,GCS:nil,},GlobalName:,Archive:nil,Optional:false,SubPath:,RecurseMode:false,}"
    time="2021-04-21T21:20:24.630Z" level=info msg="driverArt: &Artifact{Name:main-logs,Path:,Mode:nil,From:,ArtifactLocation:ArtifactLocation{ArchiveLogs:nil,S3:nil,Git:nil,HTTP:nil,Artifactory:&ArtifactoryArtifact{URL:/artifactory/fixedaccess-sw-rpm-local/nested-wf-test/nested-wf-test-73877493/main.log,ArtifactoryAuth:ArtifactoryAuth{UsernameSecret:nil,PasswordSecret:nil,},},HDFS:nil,Raw:nil,OSS:nil,GCS:nil,},GlobalName:,Archive:nil,Optional:false,SubPath:,RecurseMode:false,}"
    time="2021-04-21T21:20:24.630Z" level=info msg="driverArt: &Artifact{Name:main-logs,Path:,Mode:nil,From:,ArtifactLocation:ArtifactLocation{ArchiveLogs:nil,S3:nil,Git:nil,HTTP:nil,Artifactory:&ArtifactoryArtifact{URL:/artifactory/fixedaccess-sw-rpm-local/nested-wf-test/nested-wf-test-73877493/main.log,ArtifactoryAuth:ArtifactoryAuth{UsernameSecret:nil,PasswordSecret:nil,},},HDFS:nil,Raw:nil,OSS:nil,GCS:nil,},GlobalName:,Archive:nil,Optional:false,SubPath:,RecurseMode:false,}"
    time="2021-04-21T21:20:24.630Z" level=info msg=NewDriver
    time="2021-04-21T21:20:24.630Z" level=info msg=Artifactory
    time="2021-04-21T21:20:24.630Z" level=info msg="Artifactory: &ArtifactoryArtifact{URL:/artifactory/fixedaccess-sw-rpm-local/nested-wf-test/nested-wf-test-73877493/main.log,ArtifactoryAuth:ArtifactoryAuth{UsernameSecret:nil,PasswordSecret:nil,},}"
    time="2021-04-21T21:20:24.630Z" level=info msg="Alloc=4327 TotalAlloc=9206 Sys=74577 NumGC=4 Goroutines=9"
    time="2021-04-21T21:20:24.630Z" level=fatal msg="executor panic: runtime error: invalid memory address or nil pointer dereference\ngoroutine 1 [running]:\nruntime/debug.Stack(0x2053ed2, 0x14, 0xc0006b41c0)\n\t/usr/local/go/src/runtime/debug/stack.go:24 +0x9f\ngithub.com/argoproj/argo-workflows/v3/workflow/executor.(*WorkflowExecutor).HandleError(0xc00037b600, 0x23706c0, 0xc000190020)\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/executor.go:126 +0x1d6\npanic(0x1dac5c0, 0x3064ec0)\n\t/usr/local/go/src/runtime/panic.go:975 +0x47a\ngithub.com/argoproj/argo-workflows/v3/workflow/artifacts.NewDriver(0x23706c0, 0xc000190020, 0xc000726240, 0x23324c0, 0xc00037b600, 0xc00071f960, 0xa388f5, 0xc0002330e0, 0x2005e00)\n\t/go/src/github.com/argoproj/argo-workflows/workflow/artifacts/artifacts.go:99 +0xa21\ngithub.com/argoproj/argo-workflows/v3/workflow/executor.(*WorkflowExecutor).InitDriver(0xc00037b600, 0x23706c0, 0xc000190020, 0xc000726240, 0xc00071f9f0, 0x1, 0x1, 0xa8)\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/executor.go:586 +0x65\ngithub.com/argoproj/argo-workflows/v3/workflow/executor.(*WorkflowExecutor).saveArtifactFromFile(0xc00037b600, 0x23706c0, 0xc000190020, 0xc000726180, 0x20400e7, 0x8, 0xc000734b60, 0x1f, 0x0, 0xc000190020)\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/executor.go:316 +0x1be\ngithub.com/argoproj/argo-workflows/v3/workflow/executor.(*WorkflowExecutor).SaveLogs(0xc00037b600, 0x23706c0, 0xc000190020, 0x0, 0x0, 0xc00071fba0)\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/executor.go:542 +0x23d\ngithub.com/argoproj/argo-workflows/v3/cmd/argoexec/commands.waitContainer(0x23706c0, 0xc000190020, 0x0, 0x0)\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/commands/wait.go:61 +0x61f\ngithub.com/argoproj/argo-workflows/v3/cmd/argoexec/commands.NewWaitCommand.func1(0xc00037ab00, 0xc0000a93e0, 0x0, 0x6)\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/commands/wait.go:18 +0x3d\ngithub.com/spf13/cobra.(*Command).execute(0xc00037ab00, 0xc0000a9380, 0x6, 0x6, 0xc00037ab00, 0xc0000a9380)\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:846 +0x2c2\ngithub.com/spf13/cobra.(*Command).ExecuteC(0xc00037a2c0, 0xc000086778, 0xc00071ff78, 0x4062c5)\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x375\ngithub.com/spf13/cobra.(*Command).Execute(...)\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:887\nmain.main()\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/main.go:14 +0x2b\n"
    

    workflow controller configmap

    apiVersion: v1
    data:
      artifactRepository: |
        # archiveLogs will archive the main container logs as an artifact
        archiveLogs: true
        artifactory:
          repoURL: "http://artifactory-espoo1.int.net.nokia.com/artifactory/fixedaccess-sw-rpm-local"
          usernameSecret:
            name: artifactory-sandbox
            key: username
          passwordSecret:
            name: artifactory-sandbox
            key: password
      executor: |
        imagePullPolicy: Always
        args:
        - --loglevel
        - debug
        - --gloglevel
        - "6"
    kind: ConfigMap
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"name":"workflow-controller-configmap","namespace":"argo"}}
      creationTimestamp: "2021-01-27T18:28:36Z"
      name: workflow-controller-configmap
      namespace: argo
      resourceVersion: "150280397"
      selfLink: /api/v1/namespaces/argo/configmaps/workflow-controller-configmap
      uid: 62afdfd0-32bb-4ed5-88bf-f8db88ed6706
    

    and the secret (base64 removed)

    apiVersion: v1
    data:
      password: xxxxxxxxxxxxxxx
      username: xxxxxxxxxxx
    kind: Secret
    metadata:
      creationTimestamp: "2021-04-21T16:17:47Z"
      name: artifactory-sandbox
      namespace: default
      resourceVersion: "149896347"
      selfLink: /api/v1/namespaces/default/secrets/artifactory-sandbox
      uid: 806ba8d8-183b-4210-a39f-06eeb2b2b9e5
    type: Opaque
    

    Message from the maintainers:

    Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.

    bug regression 
    opened by heyleke 39
  • Difficulty scaling when running many workflows and/or steps

    Difficulty scaling when running many workflows and/or steps

    Problem statement

    • Use case: processing media assets as they are uploaded by customers
      • We are trying to test the case of continuously created workflows, with the hope of relatively predictable responsiveness.
      • In an ideal world, we would be able to run ~2500 workflows in parallel at peak times
      • Expected workflow run time varies quite a bit, but most of them will take less than five minutes
    • In the synthetic benchmarks detailed below, our nodegroup scale limits are never reached. We don’t seem to be able to run workflows quickly enough to saturate our capacity.

    Environment details

    • Running on EKS
      • Kubernetes version: 1.17
      • Controller running on instance type: m5.8xlarge (32 vCPU, 128GB RAM)
      • Workflow executors instance type: m5.xlarge (4 vCPU, 16GB RAM)
    • Argo details:
      • Modified for additional metrics based on master at commit 5c538d7a918e41029d3911a92c6ac615f04d3b80
      • Running with parallelism: 800 (see the configmap sketch below); otherwise we observed the EKS control plane becoming unresponsive
      • Running with containerRuntimeExecutor: kubelet on AWS Bottlerocket instances
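
    For reference, a minimal sketch of where these two settings are configured, assuming the standard workflow-controller-configmap layout (the values are the ones listed above; this is an illustration, not our full configmap):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: workflow-controller-configmap
      namespace: argo-workflows
    data:
      # Upper bound on the number of Workflows allowed to run concurrently
      parallelism: "800"
      # Executor used for these benchmarks
      containerRuntimeExecutor: kubelet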

    Case 1 (many workflows, few steps)

    • We created a workflow template (see fig.1) with a modifiable number of steps. We launched workflows at a rate of 3 per second with a script (see fig.2)
    • Initially the controller seems to keep up, with the running workflow count growing, and no noticeable delays. After a few minutes we observe a growing number of pending workflows, as well as an oscillation pattern in several metrics.
    • Instead of scaling up to meet demand, we see kubernetes node capacity go unused as the number of running workflows oscillates
    • We added custom metrics in the controller to monitor the workflow and pod queues (wfc.wfQueue and wfc.podQueue in controller/controller.go). The workflow queue oscillates between 1000 and 1500 items during our test. However, the pod queue consistently stays at 0.

    Case 2 (few workflows, many steps)

    • We created a similar template (fig. 6) and script (fig. 7) that generates a number of "work" steps that run in parallel, and concludes with a final "followup" step.
    • When launching one workflow at a time, or at an interval larger than 20s, everything ran smoothly (pods completed and workflows were successful)
    • When the flow of workflows was increased, we began to see the "zombie" phenomenon (fig. 4) - even though the pod was marked as Completed, the workflow lingers in the Running state (fig. 5).

    Things we tried

    In trying to address these issues, we changed the values of the following parameters without much success (a sketch of where these are set follows the list):

    • pod-workers
    • workflow-workers (the default of 32 was a bottleneck, but anything over 128 didn’t make a difference)
    • INFORMER_WRITE_BACK=false
    • --qps, --burst
    • We increased the memory and CPU resources available to the controller
    • workflowResyncPeriod and podResyncPeriod
    • We’ve also tried various experimental branches from recent seemingly related issues (and we are happy to try more!)
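
    For context, here is a hedged sketch of where these knobs would be set on the workflow-controller Deployment from the install manifests (an excerpt only; the flag and env-var names are the ones listed above, and the values shown are illustrative, not a recommendation):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: workflow-controller
      namespace: argo-workflows
    spec:
      template:
        spec:
          containers:
            - name: workflow-controller
              args:
                # 32 (the default) was a bottleneck; anything over 128 made no difference
                - --workflow-workers=128
                - --pod-workers=32
                - --qps=50
                - --burst=100
              env:
                # Disable informer write-back, as tried above
                - name: INFORMER_WRITE_BACK
                  value: "false"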

    fig. 1

    apiVersion: argoproj.io/v1alpha1
    kind: WorkflowTemplate
    metadata:
      name: sleep-test-template
      generateName: sleep-test-
      namespace: argo-workflows
    spec:
      entrypoint: sleep
      ttlStrategy:
        secondsAfterSuccess: 0
        secondsAfterFailure: 600
      podGC:
        strategy: OnPodCompletion
      arguments:
        parameters:
          - name: friendly-name
            value: sleep_test # Use underscores, not hyphens
          - name: cpu-limit
            value: 2000m
          - name: mem-limit
            value: 1024Mi
          - name: step-count
            value: "200"
          - name: sleep-seconds
            value: "8"
      metrics:
        prometheus:
          - name: "workflow_duration"      # Metric name (will be prepended with "argo_workflows_")
            help: "Duration gauge by name" # A help doc describing your metric. This is required.
            labels:
               - key: workflow_template
                 value: "{{workflow.parameters.friendly-name}}"
            gauge:                            # The metric type. Available are "gauge", "histogram", and "counter".
              value: "{{workflow.duration}}"  # The value of your metric. It could be an Argo variable (see variables doc) or a literal value
          - name: "workflow_processed"
            help: "Workflow processed count"
            labels:
               - key: workflow_template
                 value: "{{workflow.parameters.friendly-name}}"
               - key: status
                 value: "{{workflow.status}}"
            counter:
              value: "1"
      templates:
      - name: sleep
        nodeSelector:
          intent: task-workers
        steps:
          - - name: generate
              template: gen-number-list
          - - name: "sleep"
              template: snooze
              arguments:
                parameters: [{name: input_asset, value: "{{workflow.parameters.sleep-seconds}}", id: "{{item}}"}]
              withParam: "{{steps.generate.outputs.result}}"
    
      # Generate a list of numbers in JSON format
      - name: gen-number-list
        nodeSelector:
          intent: task-workers
        script:
          image: python:3.8.5-alpine3.12
          imagePullPolicy: IfNotPresent
          command: [python]
          source: |
            import json
            import sys
            json.dump([i for i in range(0, {{workflow.parameters.step-count}})], sys.stdout)
      - name: snooze
        metrics:
          prometheus:
            - name: "resource_duration_cpu"      # Metric name (will be prepended with "argo_workflows_")
              help: "Resource Duration CPU" # A help doc describing your metric. This is required.
              labels:
                 - key: workflow_template
                   value: "{{workflow.parameters.friendly-name}}"
              gauge:                            # The metric type. Available are "gauge", "histogram", and "counter".
                value: "{{resourcesDuration.cpu}}"  # The value of your metric. It could be an Argo variable (see variables doc) or a literal value
            - name: "resource_duration_memory"      # Metric name (will be prepended with "argo_workflows_")
              help: "Resource Duration Memory" # A help doc describing your metric. This is required.
              labels:
                 - key: workflow_template
                   value: "{{workflow.parameters.friendly-name}}"
              gauge:                            # The metric type. Available are "gauge", "histogram", and "counter".
                value: "{{resourcesDuration.memory}}"  # The value of your metric. It could be an Argo variable (see variables doc) or a literal value
        nodeSelector:
          intent: task-workers
        inputs:
          parameters:
            - name: input_asset
        podSpecPatch: '{"containers":[{"name":"main", "resources":{"requests":{"cpu": "{{workflow.parameters.cpu-limit}}", "memory": "{{workflow.parameters.mem-limit}}"}, "limits":{"cpu": "{{workflow.parameters.cpu-limit}}", "memory": "{{workflow.parameters.mem-limit}}" }}}]}'
        container:
          image: alpine
          imagePullPolicy: IfNotPresent
          command: [sleep]
          args: ["{{workflow.parameters.sleep-seconds}}"]
    

    fig. 2

    #!/usr/bin/env bash
    set -euo pipefail
    while true; do
      for i in {1..3}; do
        argo submit \
            -n argo-workflows \
            --from workflowtemplate/sleep-test-template \
            -p step-count="1" \
            -p sleep-seconds="60" &>/dev/null &
      done
      sleep 1
      echo -n "."
    done
    

    fig. 3

    Screen Shot 2020-12-01 at 4 09 01 PM

    fig.4

    ❯ argo -n argo-workflows get sleep-fanout-test-template-6dtjp
    Name:                sleep-fanout-test-template-6dtjp
    Namespace:           argo-workflows
    ServiceAccount:      default
    Status:              Running
    Created:             Wed Dec 02 15:39:59 -0500 (6 minutes ago)
    Started:             Wed Dec 02 15:39:59 -0500 (6 minutes ago)
    Duration:            6 minutes 21 seconds
    ResourcesDuration:   42m21s*(1 cpu),2h30m41s*(100Mi memory)
    Parameters:
      step-count:        100
      sleep-seconds:     8
    
    STEP                                 TEMPLATE         PODNAME                                      DURATION  MESSAGE
     ● sleep-fanout-test-template-6dtjp  sleep
     ├---✔ generate                      gen-number-list  sleep-fanout-test-template-6dtjp-2151903814  7s
     ├-·-✔ sleep(0:0)                    snooze           sleep-fanout-test-template-6dtjp-1189074090  14s
     | ├-✔ sleep(1:1)                    snooze           sleep-fanout-test-template-6dtjp-1828931302  25s
    ...
     | └-✔ sleep(99:99)                  snooze           sleep-fanout-test-template-6dtjp-1049774502  16s
     └---◷ followup                      snooze           sleep-fanout-test-template-6dtjp-1490893639  5m
    

    fig. 5

    ❯ kubectl -n argo-workflows get pod/sleep-fanout-test-template-6dtjp-1490893639
    NAME                                          READY   STATUS      RESTARTS   AGE
    sleep-fanout-test-template-6dtjp-1490893639   0/2     Completed   0          5m43s
    

    fig. 6

    apiVersion: argoproj.io/v1alpha1
    kind: WorkflowTemplate
    metadata:
      name: sleep-fanout-test-template
      generateName: sleep-fanout-test-
      namespace: argo-workflows
    spec:
      entrypoint: sleep
      ttlStrategy:
        secondsAfterSuccess: 0
        secondsAfterFailure: 600
      podGC:
        strategy: OnPodCompletion
      arguments:
        parameters:
          - name: friendly-name
            value: sleep_fanout_test # Use underscores, not hyphens
          - name: cpu-limit
            value: 2000m
          - name: mem-limit
            value: 1024Mi
          - name: step-count
            value: "200"
          - name: sleep-seconds
            value: "8"
      metrics:
        prometheus:
          - name: "workflow_duration"      # Metric name (will be prepended with "argo_workflows_")
            help: "Duration gauge by name" # A help doc describing your metric. This is required.
            labels:
               - key: workflow_template
                 value: "{{workflow.parameters.friendly-name}}"
            gauge:                            # The metric type. Available are "gauge", "histogram", and "counter".
              value: "{{workflow.duration}}"  # The value of your metric. It could be an Argo variable (see variables doc) or a literal value
          - name: "workflow_processed"
            help: "Workflow processed count"
            labels:
               - key: workflow_template
                 value: "{{workflow.parameters.friendly-name}}"
               - key: status
                 value: "{{workflow.status}}"
            counter:
              value: "1"
      templates:
      - name: sleep
        nodeSelector:
          intent: task-workers
        steps:
          - - name: generate
              template: gen-number-list
          - - name: "sleep"
              template: snooze
              withParam: "{{steps.generate.outputs.result}}"
          - - name: "followup"
              template: snooze
    
      # Generate a list of numbers in JSON format
      - name: gen-number-list
        nodeSelector:
          intent: task-workers
        script:
          image: python:3.8.5-alpine3.12
          imagePullPolicy: IfNotPresent
          command: [python]
          source: |
            import json
            import sys
            json.dump([i for i in range(0, {{workflow.parameters.step-count}})], sys.stdout)
      - name: snooze
        metrics:
          prometheus:
            - name: "resource_duration_cpu"      # Metric name (will be prepended with "argo_workflows_")
              help: "Resource Duration CPU" # A help doc describing your metric. This is required.
              labels:
                 - key: workflow_template
                   value: "{{workflow.parameters.friendly-name}}"
              gauge:                            # The metric type. Available are "gauge", "histogram", and "counter".
                value: "{{resourcesDuration.cpu}}"  # The value of your metric. It could be an Argo variable (see variables doc) or a literal value
            - name: "resource_duration_memory"      # Metric name (will be prepended with "argo_workflows_")
              help: "Resource Duration Memory" # A help doc describing your metric. This is required.
              labels:
                 - key: workflow_template
                   value: "{{workflow.parameters.friendly-name}}"
              gauge:                            # The metric type. Available are "gauge", "histogram", and "counter".
                value: "{{resourcesDuration.memory}}"  # The value of your metric. It could be an Argo variable (see variables doc) or a literal value
        nodeSelector:
          intent: task-workers
        podSpecPatch: '{"containers":[{"name":"main", "resources":{"requests":{"cpu": "{{workflow.parameters.cpu-limit}}", "memory": "{{workflow.parameters.mem-limit}}"}, "limits":{"cpu": "{{workflow.parameters.cpu-limit}}", "memory": "{{workflow.parameters.mem-limit}}" }}}]}'
        container:
          image: alpine
          imagePullPolicy: IfNotPresent
          command: [sleep]
          args: ["{{workflow.parameters.sleep-seconds}}"]
    

    fig. 7

    #!/usr/bin/env bash
    set -euo pipefail
    while true; do
      argo submit \
        -n argo-workflows \
        --from workflowtemplate/sleep-fanout-test-template \
        -p step-count="100" \
        -p sleep-seconds="8" &>/dev/null
      echo -n "."
      sleep 10
    done
    

    Message from the maintainers:

    Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.

    bug 
    opened by tomgoren 39
  • WorkflowTemplate CRD

    WorkflowTemplate CRD

    I implemented the WorkflowTemplate CRD, which holds template data outside of a Workflow, as discussed in #927.

    You can refer to another template from any template, either by its local template name using the template field or by a template resource reference using the templateRef field. When the referenced template is loaded, the template-level arguments are applied.

    Here's an example that illustrates the feature well.

    apiVersion: argoproj.io/v1alpha1
    kind: WorkflowTemplate
    metadata:
      name: workflow-template-whalesay-template
    spec:
      templates:
      - name: whalesay-template
        inputs:
          parameters:
          - name: message
        container:
          image: docker/whalesay
          command: [cowsay]
          args: ["{{inputs.parameters.message}}"]
    ---
    apiVersion: argoproj.io/v1alpha1
    kind: WorkflowTemplate
    metadata:
      name: workflow-template-nested-template
    spec:
      templates:
      - name: whalesay-inner-template
        templateRef:
          name: workflow-template-whalesay-template
          template: whalesay-template
        inputs:
          parameters:
          - name: message
        arguments:
          parameters:
          - name: message
            value: "{{inputs.parameters.message}} hello"
      - name: whalesay-template
        template: whalesay-inner-template
        inputs:
          parameters:
          - name: message
        arguments:
          parameters:
          - name: message
            value: "{{inputs.parameters.message}} hello"
    ---
    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      generateName: workflow-template-nested-
    spec:
      entrypoint: whalesay
      templates:
      - name: whalesay
        templateRef:
          name: workflow-template-nested-template
          template: whalesay-template
        arguments:
          parameters:
          - name: message
            value: "hello"
    

    Here's a guide to reading the example above.

    • Execution starts from the entry point whalesay of the workflow workflow-template-nested-xxxx.
    • That template refers to the template whalesay-template in the workflow template workflow-template-nested-template.
    • The referenced template in turn refers to the template whalesay-inner-template, in the context of the workflow template workflow-template-nested-template.
    • That template refers to the template whalesay-template, in the context of the workflow template workflow-template-whalesay-template.
    • This last template is a concrete template, so it is the one that is resolved.

    After the input parameters of each referenced template are resolved with the arguments of the referring template at each step, this workflow outputs hello hello hello in the log.

    Workflow templates must be submitted with the argo template submit command before any workflows that use them are submitted, unless you set the runtimeResolution option.
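
    For illustration, a minimal sketch of a workflow using the runtimeResolution option; placing it on templateRef is an assumption here, shown only to make the sentence above concrete:

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      generateName: workflow-template-runtime-
    spec:
      entrypoint: whalesay
      templates:
      - name: whalesay
        templateRef:
          name: workflow-template-whalesay-template
          template: whalesay-template
          # Assumed placement: resolve the referenced WorkflowTemplate at run
          # time instead of requiring it to exist at submission time.
          runtimeResolution: true
        arguments:
          parameters:
          - name: message
            value: "hello"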

    opened by dtaniwaki 39
  • pns executor w/ security context not working with vault injector

    pns executor w/ security context not working with vault injector

    I'm using the Vault injector with the PNS executor. It works fine until I add a security context so the pods do not run as root; after that, the Vault sidecar container does not get killed. Details below.

    apiVersion: v1
    data:
      containerRuntimeExecutor: pns
      workflowDefaults: |
        spec:
          securityContext:
            runAsNonRoot: true
            runAsUser: 8737
    
    ---
    apiVersion: argoproj.io/v1alpha1
    kind: WorkflowTemplate
    metadata:
      name: git-clone
    spec:
      ...
      templates:
        - name: git-clone
          inputs:
            artifacts:
              - name: argo-source
                path: /src
                git:
                  repo: "{{inputs.parameters.repo}}"
                  revision: "{{inputs.parameters.revision}}"
                  usernameSecret:
                    name: github-creds
                    key: username
                  passwordSecret:
                    name: github-creds
                    key: password
          metadata:
            annotations:
              vault.hashicorp.com/agent-inject: "true"
              vault.hashicorp.com/role: "xxxx"
              vault.hashicorp.com/auth-path: "xxx"
              vault.hashicorp.com/secret-volume-path: "/tmp/vault"
              vault.hashicorp.com/agent-inject-secret-npmrc: "xxx"
              vault.hashicorp.com/agent-inject-template-npmrc: |
                {{- with secret "xxx" -}}
                  {{ .Data.data.npmrc }}
                {{- end }}
          container:
            image: alpine/git:latest
            command: [sh, -c]
            args:
              - |-
                    cp /tmp/vault/secrets/npmrc ./.npmrc
    
            volumeMounts:
              - name: tmp
                mountPath: /tmp
            workingDir: /src
          volumes:
            - name: tmp
              emptyDir: {}
          outputs:
            artifacts:
              - name: src
                path: /src
                globalName: src
    
    time="2021-05-26T23:08:58.651Z" level=info msg="Starting Workflow Executor" executorType=pns version=v3.1.0-rc8
    time="2021-05-26T23:08:58.655Z" level=info msg="Creating PNS executor (namespace: argo, pod: ci-4fqjw-539851645, pid: 66)"
    time="2021-05-26T23:08:58.655Z" level=info msg="Creating a K8sAPI executor"
    time="2021-05-26T23:08:58.655Z" level=info msg="Executor (version: v3.1.0-rc8, build_date: 2021-05-25T15:15:42Z) initialized (pod: argo/ci-4fqjw-539851645) with template:\n{\"name\":\"git-clone\",\"inputs\":{\"parameters\":[{\"name\":\"repo\",\"value\":\"https://github.com/Forbes-Media/dp-seal.git\"},{\"name\":\"revision\",\"value\":\"main\"},{\"name\":\"generate-artifactory-credentials\",\"value\":\"false\"}],\"artifacts\":[{\"name\":\"argo-source\",\"path\":\"/src\",\"git\":{\"repo\":\"https://github.com/Forbes-Media/dp-seal.git\",\"revision\":\"main\",\"usernameSecret\":{\"name\":\"github-creds\",\"key\":\"username\"},\"passwordSecret\":{\"name\":\"github-creds\",\"key\":\"password\"}}}]},\"outputs\":{\"artifacts\":[{\"name\":\"src\",\"path\":\"/src\",\"globalName\":\"src\"},{\"name\":\"git-labels\",\"path\":\"/tmp/git-labels\",\"globalName\":\"git-labels\"}]},\"metadata\":{\"annotations\":{\"vault.hashicorp.com/agent-inject\":\"true\",\"vault.hashicorp.com/agent-inject-secret-npmrc\":\"kv/data/operations/argo/jfrog\",\"vault.hashicorp.com/agent-inject-template-npmrc\":\"{{- with secret \\\"kv/data/operations/argo/jfrog\\\" -}}\\n  {{ .Data.data.npmrc }}\\n{{- end }}\\n\",\"vault.hashicorp.com/auth-path\":\"/auth/development-playground\",\"vault.hashicorp.com/role\":\"operations-read-only\",\"vault.hashicorp.com/secret-volume-path\":\"/tmp/vault\"}},\"container\":{\"name\":\"\",\"image\":\"forbes-docker.jfrog.io/alpine/git:latest\",\"command\":[\"sh\",\"-c\"],\"args\":[\"if [[ \\\"false\\\" == \\\"true\\\" ]]; then\\n  cp /tmp/vault/secrets/npmrc ./.npmrc\\nfi\\n# spaces bad for clobber-args\\n# GIT_AUTHOR_NAME=$(git log -1 --pretty=format:'%an')\\nGIT_AUTHOR_EMAIL=$(git log -1 --pretty=format:'%ae')\\n# GIT_COMMIT_MESSAGE=$(git log -1 --pretty=%B | head -1)\\nGIT_COMMIT_HASH=$(git log -1 --format=\\\"%H\\\")\\nOUTPUT=/tmp/git-labels\\necho \\\"GIT_REPO=\\\\\\\"https://github.com/Forbes-Media/dp-seal.git\\\\\\\"\\\" | tee -a ${OUTPUT}\\necho \\\"GIT_REVISION=\\\\\\\"main\\\\\\\"\\\" | tee -a ${OUTPUT}\\n# echo \\\"GIT_AUTHOR_NAME=\\\\\\\"${GIT_AUTHOR_NAME}\\\\\\\"\\\" | tee -a ${OUTPUT}\\necho \\\"GIT_AUTHOR_EMAIL=\\\\\\\"${GIT_AUTHOR_EMAIL}\\\\\\\"\\\" | tee -a ${OUTPUT}\\n# echo \\\"GIT_COMMIT_MESSAGE=\\\\\\\"${GIT_COMMIT_MESSAGE}\\\\\\\"\\\" | tee -a ${OUTPUT}\\necho \\\"GIT_COMMIT_HASH=\\\\\\\"${GIT_COMMIT_HASH}\\\\\\\"\\\" | tee -a ${OUTPUT}\"],\"workingDir\":\"/src\",\"resources\":{},\"volumeMounts\":[{\"name\":\"tmp\",\"mountPath\":\"/tmp\"}]},\"volumes\":[{\"name\":\"tmp\",\"emptyDir\":{}}],\"archiveLocation\":{\"gcs\":{\"bucket\":\"argo-artifact-repository\",\"key\":\"ci-4fqjw/ci-4fqjw-539851645\"}}}"
    time="2021-05-26T23:08:58.655Z" level=info msg="Starting annotations monitor"
    time="2021-05-26T23:08:58.655Z" level=info msg="Starting deadline monitor"
    time="2021-05-26T23:08:59.717Z" level=warning msg="failed to secure root file handle for 82" error="open /proc/82/root: permission denied"
    time="2021-05-26T23:08:59.768Z" level=warning msg="failed to secure root file handle for 82" error="open /proc/82/root: permission denied"
    time="2021-05-26T23:08:59.921Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/root: permission denied"
    time="2021-05-26T23:08:59.971Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/root: permission denied"
    time="2021-05-26T23:09:00.022Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.022Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.073Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.073Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.124Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.124Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.174Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.174Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.225Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.225Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.276Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.276Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.326Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.326Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.377Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.377Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.428Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.428Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.478Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.478Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.529Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.529Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.580Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.580Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.631Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.631Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.681Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.681Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.732Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.732Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.782Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.783Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.833Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.833Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.884Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.884Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.935Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.935Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:00.985Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:00.985Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.037Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.037Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.088Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.088Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.138Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.139Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.189Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.189Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.240Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.240Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.291Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.291Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.341Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.341Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.392Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.392Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.443Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.443Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.494Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.494Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.544Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.544Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.595Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.595Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.646Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.646Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.696Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.696Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.747Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.747Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.798Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.798Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.848Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.848Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.899Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.899Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:01.950Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:01.950Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.001Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.001Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.052Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.052Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.102Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.102Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.153Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.153Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.204Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.204Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.254Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.254Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.305Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.305Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.356Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.356Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.406Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.406Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.457Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.457Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.508Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.508Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.559Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.559Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.609Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.609Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.660Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.660Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.711Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.711Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.762Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.762Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.812Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.812Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.863Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.863Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.913Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.914Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:02.964Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:02.964Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.015Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.015Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.065Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.065Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.116Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.116Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.167Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.167Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.217Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.217Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.268Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.268Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.319Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.319Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.369Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.370Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.420Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.420Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.471Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.471Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.521Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.521Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.572Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.572Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.623Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.623Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.664Z" level=info msg="Watch pods 200"
    time="2021-05-26T23:09:03.674Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.674Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.676Z" level=info msg="Main container completed"
    time="2021-05-26T23:09:03.677Z" level=info msg="No Script output reference in workflow. Capturing script output ignored"
    time="2021-05-26T23:09:03.677Z" level=info msg="No output parameters"
    time="2021-05-26T23:09:03.677Z" level=info msg="Saving output artifacts"
    time="2021-05-26T23:09:03.677Z" level=info msg="Staging artifact: src"
    time="2021-05-26T23:09:03.677Z" level=info msg="Staging /src from mirrored volume mount /mainctrfs/src"
    time="2021-05-26T23:09:03.677Z" level=info msg="Taring /mainctrfs/src"
    time="2021-05-26T23:09:03.725Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.725Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.776Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.776Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.826Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.826Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.877Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.877Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.928Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.928Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:03.979Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:03.979Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.030Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.030Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.080Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.081Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.131Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.131Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.182Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.182Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.233Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.233Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.284Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.284Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.335Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.335Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.385Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.386Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.436Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.436Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.487Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.487Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.538Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.538Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.589Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.589Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.639Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.639Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.642Z" level=info msg="archived 82 files/dirs in /mainctrfs/src"
    time="2021-05-26T23:09:04.642Z" level=info msg="Successfully staged /src from mirrored volume mount /mainctrfs/src"
    time="2021-05-26T23:09:04.642Z" level=info msg="GCS Save path: /tmp/argo/outputs/artifacts/src.tgz, key: ci-4fqjw/ci-4fqjw-539851645/src.tgz"
    time="2021-05-26T23:09:04.690Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.690Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.741Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.741Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.793Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.793Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.843Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.843Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.894Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.894Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.945Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.945Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:04.995Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:04.995Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.046Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.046Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.097Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.097Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.148Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.148Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.198Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.198Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.249Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.249Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.300Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.300Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.351Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.351Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.401Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.401Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.452Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.452Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.503Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.503Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.554Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.554Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.574Z" level=info msg="not deleting local artifact" localArtPath=/tmp/argo/outputs/artifacts/src.tgz
    time="2021-05-26T23:09:05.574Z" level=info msg="Successfully saved file: /tmp/argo/outputs/artifacts/src.tgz"
    time="2021-05-26T23:09:05.574Z" level=info msg="Staging artifact: git-labels"
    time="2021-05-26T23:09:05.574Z" level=info msg="Staging /tmp/git-labels from mirrored volume mount /mainctrfs/tmp/git-labels"
    time="2021-05-26T23:09:05.574Z" level=info msg="Taring /mainctrfs/tmp/git-labels"
    time="2021-05-26T23:09:05.576Z" level=info msg="Successfully staged /tmp/git-labels from mirrored volume mount /mainctrfs/tmp/git-labels"
    time="2021-05-26T23:09:05.576Z" level=info msg="GCS Save path: /tmp/argo/outputs/artifacts/git-labels.tgz, key: ci-4fqjw/ci-4fqjw-539851645/git-labels.tgz"
    time="2021-05-26T23:09:05.604Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.604Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.655Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.655Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.706Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.706Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.729Z" level=info msg="not deleting local artifact" localArtPath=/tmp/argo/outputs/artifacts/git-labels.tgz
    time="2021-05-26T23:09:05.729Z" level=info msg="Successfully saved file: /tmp/argo/outputs/artifacts/git-labels.tgz"
    time="2021-05-26T23:09:05.729Z" level=info msg="Annotating pod with output"
    time="2021-05-26T23:09:05.757Z" level=info msg="secured root for pid 97 root: vault (\"/proc/97/root\")"
    time="2021-05-26T23:09:05.757Z" level=warning msg="failed to secure root file handle for 97" error="open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.767Z" level=info msg="Patch pods 200"
    time="2021-05-26T23:09:05.769Z" level=error msg="executor error: failed to get container name for process 97: open /proc/97/environ: permission denied"
    time="2021-05-26T23:09:05.769Z" level=info msg="Alloc=22385 TotalAlloc=60246 Sys=73297 NumGC=8 Goroutines=13"
    
    bug 
    opened by rwong2888 38
  • Workflows fail when cluster-scoped resources have successCondition

    Workflows fail when cluster-scoped resources have successCondition

    Checklist

    • [x] Double-checked my configuration.
    • [ ] Tested using the latest version.
    • [ ] Used the Emissary executor.

    Summary

    What happened/what you expected to happen?

    I expected this workflow to succeed. It applies a Namespace and has a successCondition. The resource is applied successfully, but the workflow is marked as failed with "The resource has been deleted while its status was still being checked. Will not be retried: the server could not find the requested resource". This happens because the self link generated for the namespace resource is malformed (SelfLink: api/v1/namespaces//namespaces/kubeflow), and that self link is used when evaluating successConditions. The issue still exists in the latest version because inferObjectSelfLink does not account for cluster-scoped resources. According to the Kubernetes docs, the correct self link should be /api/v1/namespaces/kubeflow
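
    For illustration only, a self-link builder that distinguishes cluster-scoped from namespaced resources might look roughly like the sketch below. This is not the executor's actual code; the pluralisation and the empty-namespace check are simplifying assumptions.

        package main

        import (
            "fmt"
            "strings"
        )

        // inferSelfLink is a hypothetical sketch: when the resource has no namespace
        // (cluster-scoped, e.g. a Namespace), the "/namespaces/<ns>/" segment must be
        // dropped entirely instead of producing a malformed "namespaces//" path.
        func inferSelfLink(group, version, kind, namespace, name string) string {
            prefix := "apis/" + group
            if group == "" {
                prefix = "api" // core API group
            }
            plural := strings.ToLower(kind) + "s" // naive pluralisation: Namespace -> namespaces
            if namespace == "" {
                return fmt.Sprintf("%s/%s/%s/%s", prefix, version, plural, name)
            }
            return fmt.Sprintf("%s/%s/namespaces/%s/%s/%s", prefix, version, namespace, plural, name)
        }

        func main() {
            fmt.Println(inferSelfLink("", "v1", "Namespace", "", "kubeflow")) // api/v1/namespaces/kubeflow
            fmt.Println(inferSelfLink("", "v1", "Pod", "argo", "my-pod"))     // api/v1/namespaces/argo/pods/my-pod
        }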

    What version are you running? Argo Workflows v2.3.11, but the issue still exists in the latest version.

    Diagnostics

    Paste the smallest workflow that reproduces the bug. We must be able to run the workflow.

            apiVersion: argoproj.io/v1alpha1
            kind: Workflow
            metadata:
              labels:
                workflows.argoproj.io/controller-instanceid: workflow-controller
              name: workflow
            spec:
              entrypoint: entry
              serviceAccountName: default
              templates:
              - name: entry
                steps:
                - - name: prereq-resources
                    template: prereq-resources-install
              - name: prereq-resources-install
                resource:
                  action: apply
                  successCondition: status.phase == Active
                  manifest: |
                    apiVersion: v1
                    kind: Namespace
                    metadata:
                      name: kubeflow
    
    
    # Logs from workflow pod
    time="2022-08-05T23:33:33.997Z" level=info msg="Resource: /namespace./kubeflow. SelfLink: api/v1/namespaces//namespaces/kubeflow"
    time="2022-08-05T23:33:33.998Z" level=info msg="Waiting for conditions: status.phase==Active"
    time="2022-08-05T23:33:34.003Z" level=info msg="Get namespaces 404"
    time="2022-08-05T23:33:34.003Z" level=warning msg="Non-transient error: The resource has been deleted while its status was still being checked. Will not be retried: the server could not find the requested resource"
    
    # Logs from the workflow controller:
    kubectl logs -n argo deploy/workflow-controller | grep ${workflow} 
    
    time="2022-08-08T17:18:20.676Z" level=info msg="Processing workflow" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.677Z" level=info msg="Pod failed: Error (exit code 1): The resource has been deleted while its status was still being checked. Will not be retried: the server could not find the requested resource" displayName=prereq-resources namespace=addon-manager-system pod=kubeflow-prereqs-b9539c3c-wf-2764095841 templateName=prereq-resources-install workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.677Z" level=info msg="Updating node kubeflow-prereqs-b9539c3c-wf-2764095841 exit code 1" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.677Z" level=info msg="Updating node kubeflow-prereqs-b9539c3c-wf-2764095841 status Pending -> Failed" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.677Z" level=info msg="Updating node kubeflow-prereqs-b9539c3c-wf-2764095841 message: Error (exit code 1): The resource has been deleted while its status was still being checked. Will not be retried: the server could not find the requested resource" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.677Z" level=info msg="Step group node kubeflow-prereqs-b9539c3c-wf-3136752568 deemed failed: child 'kubeflow-prereqs-b9539c3c-wf-2764095841' failed" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.677Z" level=info msg="node kubeflow-prereqs-b9539c3c-wf-3136752568 phase Running -> Failed" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.677Z" level=info msg="node kubeflow-prereqs-b9539c3c-wf-3136752568 message: child 'kubeflow-prereqs-b9539c3c-wf-2764095841' failed" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.677Z" level=info msg="node kubeflow-prereqs-b9539c3c-wf-3136752568 finished: 2022-08-08 17:18:20.67792632 +0000 UTC" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.678Z" level=info msg="step group kubeflow-prereqs-b9539c3c-wf-3136752568 was unsuccessful: child 'kubeflow-prereqs-b9539c3c-wf-2764095841' failed" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.678Z" level=info msg="Outbound nodes of kubeflow-prereqs-b9539c3c-wf-2764095841 is [kubeflow-prereqs-b9539c3c-wf-2764095841]" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.678Z" level=info msg="Outbound nodes of kubeflow-prereqs-b9539c3c-wf-683026374 is [kubeflow-prereqs-b9539c3c-wf-2764095841]" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.678Z" level=info msg="node kubeflow-prereqs-b9539c3c-wf-683026374 phase Running -> Failed" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.678Z" level=info msg="node kubeflow-prereqs-b9539c3c-wf-683026374 message: child 'kubeflow-prereqs-b9539c3c-wf-2764095841' failed" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    time="2022-08-08T17:18:20.678Z" level=info msg="node kubeflow-prereqs-b9539c3c-wf-683026374 finished: 2022-08-08 17:18:20.678149711 +0000 UTC" namespace=addon-manager-system workflow=kubeflow-prereqs-b9539c3c-wf
    
    # If the workflow's pods have not been created, you can skip the rest of the diagnostics.
    
    # The workflow's pods that are problematic:
    kubectl get pod -o yaml -l workflows.argoproj.io/workflow=${workflow},workflow.argoproj.io/phase!=Succeeded
    
    apiVersion: v1
    items:
    - apiVersion: v1
      kind: Pod
      metadata:
        annotations:
          kubernetes.io/psp: eks.privileged
          vpc.amazonaws.com/pod-ips: ""
          workflows.argoproj.io/node-id: kubeflow-prereqs-b9539c3c-wf-2764095841
          workflows.argoproj.io/node-name: kubeflow-prereqs-b9539c3c-wf(0)[0].prereq-resources
        creationTimestamp: "2022-08-08T17:18:10Z"
        labels:
          workflows.argoproj.io/completed: "true"
          workflows.argoproj.io/controller-instanceid: addon-manager-workflow-controller
          workflows.argoproj.io/workflow: kubeflow-prereqs-b9539c3c-wf
        name: kubeflow-prereqs-b9539c3c-wf-2764095841
        namespace: addon-manager-system
        ownerReferences:
        - apiVersion: argoproj.io/v1alpha1
          blockOwnerDeletion: true
          controller: true
          kind: Workflow
          name: kubeflow-prereqs-b9539c3c-wf
          uid: 912fc861-58ea-41fa-bf07-f8ab3c877d6a
        resourceVersion: "5067407"
        uid: 85a4b071-a9b2-410e-ad80-024c6c80c9ff
      spec:
        activeDeadlineSeconds: 599
        containers:
        - command:
          - argoexec
          - resource
          - apply
          env:
          - name: ARGO_POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: ARGO_CONTAINER_RUNTIME_EXECUTOR
            value: pns
          - name: GODEBUG
            value: x509ignoreCN=0
          - name: ARGO_WORKFLOW_NAME
            value: kubeflow-prereqs-b9539c3c-wf
          - name: ARGO_CONTAINER_NAME
            value: main
          - name: ARGO_TEMPLATE
            value: '{"name":"prereq-resources-install","inputs":{},"outputs":{},"metadata":{},"resource":{"action":"apply","manifest":"apiVersion:
              v1\nkind: Namespace\nmetadata:\n    annotations:\n        iks.intuit.com/allowHostIPC:
              \"false\"\n        iks.intuit.com/allowHostNetwork: \"false\"\n        iks.intuit.com/allowHostPID:
              \"false\"\n        iks.intuit.com/allowHostPort: \"false\"\n        iks.intuit.com/allowPrivilegeEscalation:
              \"true\"\n        iks.intuit.com/allowPrivileged: \"true\"\n        iks.intuit.com/allowed-igs:
              nodes\n        iks.intuit.com/allowedHostPaths: \"\"\n        iks.intuit.com/managed:
              \"false\"\n        iks.intuit.com/service-asset-alias: Intuit.data.mlplatform.mlpinfrastructure\n        iks.intuit.com/service-asset-id:
              \"8001788230453915925\"\n    labels:\n        app.kubernetes.io/managed-by:
              addonmgr.keikoproj.io\n        app.kubernetes.io/name: kubeflow\n        app.kubernetes.io/part-of:
              kubeflow\n        app.kubernetes.io/version: v1.0.0\n        istio-injection:
              enabled\n    name: kubeflow\n","successCondition":"status.phase == Active"}}'
          - name: ARGO_INCLUDE_SCRIPT_OUTPUT
            value: "false"
          - name: ARGO_DEADLINE
            value: "2022-08-08T17:28:10Z"
          image: docker.intuit.com/quay-rmt/argoproj/argoexec:v3.2.11
          imagePullPolicy: IfNotPresent
          name: main
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: kube-api-access-vpdp2
            readOnly: true
        dnsPolicy: ClusterFirst
        enableServiceLinks: true
        nodeName: ip-10-205-126-187.us-east-2.compute.internal
        nodeSelector:
          node.kubernetes.io/instancegroup: system
        preemptionPolicy: PreemptLowerPriority
        priority: 0
        restartPolicy: Never
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: addon-manager-workflow-installer-sa
        serviceAccountName: addon-manager-workflow-installer-sa
        shareProcessNamespace: true
        terminationGracePeriodSeconds: 30
        tolerations:
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 300
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          operator: Exists
          tolerationSeconds: 300
        - key: ig/system
        volumes:
        - name: kube-api-access-vpdp2
          projected:
            defaultMode: 420
            sources:
            - serviceAccountToken:
                expirationSeconds: 3607
                path: token
            - configMap:
                items:
                - key: ca.crt
                  path: ca.crt
                name: kube-root-ca.crt
            - downwardAPI:
                items:
                - fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
                  path: namespace
      status:
        conditions:
        - lastProbeTime: null
          lastTransitionTime: "2022-08-08T17:18:10Z"
          status: "True"
          type: Initialized
        - lastProbeTime: null
          lastTransitionTime: "2022-08-08T17:18:12Z"
          message: 'containers with unready status: [main]'
          reason: ContainersNotReady
          status: "False"
          type: Ready
        - lastProbeTime: null
          lastTransitionTime: "2022-08-08T17:18:12Z"
          message: 'containers with unready status: [main]'
          reason: ContainersNotReady
          status: "False"
          type: ContainersReady
        - lastProbeTime: null
          lastTransitionTime: "2022-08-08T17:18:10Z"
          status: "True"
          type: PodScheduled
        containerStatuses:
        - containerID: containerd://b7c4179226c99ea28d9cce22d2ae660068253e2eb86504d9d62d91263cfb458b
          image: docker.intuit.com/quay-rmt/argoproj/argoexec:v3.2.11
          imageID: docker.intuit.com/quay-rmt/argoproj/[email protected]:96390f7ea826f7a918d697f9ff2bdc79c74a343712275f8e3c90c18f474f6b92
          lastState: {}
          name: main
          ready: false
          restartCount: 0
          started: false
          state:
            terminated:
              containerID: containerd://b7c4179226c99ea28d9cce22d2ae660068253e2eb86504d9d62d91263cfb458b
              exitCode: 1
              finishedAt: "2022-08-08T17:18:12Z"
              message: 'The resource has been deleted while its status was still being
                checked. Will not be retried: the server could not find the requested
                resource'
              reason: Error
              startedAt: "2022-08-08T17:18:11Z"
        hostIP: 10.205.126.187
        phase: Failed
        podIP: 10.205.124.58
        podIPs:
        - ip: 10.205.124.58
        qosClass: BestEffort
        startTime: "2022-08-08T17:18:10Z"
    - apiVersion: v1
      kind: Pod
      metadata:
        annotations:
          kubernetes.io/psp: eks.privileged
          vpc.amazonaws.com/pod-ips: ""
          workflows.argoproj.io/node-id: kubeflow-prereqs-b9539c3c-wf-3201628028
          workflows.argoproj.io/node-name: kubeflow-prereqs-b9539c3c-wf(1)[0].prereq-resources
        creationTimestamp: "2022-08-08T17:18:20Z"
        labels:
          workflows.argoproj.io/completed: "true"
          workflows.argoproj.io/controller-instanceid: addon-manager-workflow-controller
          workflows.argoproj.io/workflow: kubeflow-prereqs-b9539c3c-wf
        name: kubeflow-prereqs-b9539c3c-wf-3201628028
        namespace: addon-manager-system
        ownerReferences:
        - apiVersion: argoproj.io/v1alpha1
          blockOwnerDeletion: true
          controller: true
          kind: Workflow
          name: kubeflow-prereqs-b9539c3c-wf
          uid: 912fc861-58ea-41fa-bf07-f8ab3c877d6a
        resourceVersion: "5067517"
        uid: b4e0c0f5-3c7e-4370-87d6-c0522736e150
      spec:
        activeDeadlineSeconds: 589
        containers:
        - command:
          - argoexec
          - resource
          - apply
          env:
          - name: ARGO_POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: ARGO_CONTAINER_RUNTIME_EXECUTOR
            value: pns
          - name: GODEBUG
            value: x509ignoreCN=0
          - name: ARGO_WORKFLOW_NAME
            value: kubeflow-prereqs-b9539c3c-wf
          - name: ARGO_CONTAINER_NAME
            value: main
          - name: ARGO_TEMPLATE
            value: '{"name":"prereq-resources-install","inputs":{},"outputs":{},"metadata":{},"resource":{"action":"apply","manifest":"apiVersion:
              v1\nkind: Namespace\nmetadata:\n    annotations:\n        iks.intuit.com/allowHostIPC:
              \"false\"\n        iks.intuit.com/allowHostNetwork: \"false\"\n        iks.intuit.com/allowHostPID:
              \"false\"\n        iks.intuit.com/allowHostPort: \"false\"\n        iks.intuit.com/allowPrivilegeEscalation:
              \"true\"\n        iks.intuit.com/allowPrivileged: \"true\"\n        iks.intuit.com/allowed-igs:
              nodes\n        iks.intuit.com/allowedHostPaths: \"\"\n        iks.intuit.com/managed:
              \"false\"\n        iks.intuit.com/service-asset-alias: Intuit.data.mlplatform.mlpinfrastructure\n        iks.intuit.com/service-asset-id:
              \"8001788230453915925\"\n    labels:\n        app.kubernetes.io/managed-by:
              addonmgr.keikoproj.io\n        app.kubernetes.io/name: kubeflow\n        app.kubernetes.io/part-of:
              kubeflow\n        app.kubernetes.io/version: v1.0.0\n        istio-injection:
              enabled\n    name: kubeflow\n","successCondition":"status.phase == Active"}}'
          - name: ARGO_INCLUDE_SCRIPT_OUTPUT
            value: "false"
          - name: ARGO_DEADLINE
            value: "2022-08-08T17:28:10Z"
          image: docker.intuit.com/quay-rmt/argoproj/argoexec:v3.2.11
          imagePullPolicy: IfNotPresent
          name: main
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: kube-api-access-8nhww
            readOnly: true
        dnsPolicy: ClusterFirst
        enableServiceLinks: true
        nodeName: ip-10-205-126-187.us-east-2.compute.internal
        nodeSelector:
          node.kubernetes.io/instancegroup: system
        preemptionPolicy: PreemptLowerPriority
        priority: 0
        restartPolicy: Never
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: addon-manager-workflow-installer-sa
        serviceAccountName: addon-manager-workflow-installer-sa
        shareProcessNamespace: true
        terminationGracePeriodSeconds: 30
        tolerations:
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 300
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          operator: Exists
          tolerationSeconds: 300
        - key: ig/system
        volumes:
        - name: kube-api-access-8nhww
          projected:
            defaultMode: 420
            sources:
            - serviceAccountToken:
                expirationSeconds: 3607
                path: token
            - configMap:
                items:
                - key: ca.crt
                  path: ca.crt
                name: kube-root-ca.crt
            - downwardAPI:
                items:
                - fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
                  path: namespace
      status:
        conditions:
        - lastProbeTime: null
          lastTransitionTime: "2022-08-08T17:18:20Z"
          status: "True"
          type: Initialized
        - lastProbeTime: null
          lastTransitionTime: "2022-08-08T17:18:23Z"
          message: 'containers with unready status: [main]'
          reason: ContainersNotReady
          status: "False"
          type: Ready
        - lastProbeTime: null
          lastTransitionTime: "2022-08-08T17:18:23Z"
          message: 'containers with unready status: [main]'
          reason: ContainersNotReady
          status: "False"
          type: ContainersReady
        - lastProbeTime: null
          lastTransitionTime: "2022-08-08T17:18:20Z"
          status: "True"
          type: PodScheduled
        containerStatuses:
        - containerID: containerd://dc8d8c0dbf63fe016e881e76a6b493c881a08d4775754e2c8af06a95e4abd29a
          image: docker.intuit.com/quay-rmt/argoproj/argoexec:v3.2.11
          imageID: docker.intuit.com/quay-rmt/argoproj/[email protected]:96390f7ea826f7a918d697f9ff2bdc79c74a343712275f8e3c90c18f474f6b92
          lastState: {}
          name: main
          ready: false
          restartCount: 0
          started: false
          state:
            terminated:
              containerID: containerd://dc8d8c0dbf63fe016e881e76a6b493c881a08d4775754e2c8af06a95e4abd29a
              exitCode: 1
              finishedAt: "2022-08-08T17:18:23Z"
              message: 'The resource has been deleted while its status was still being
                checked. Will not be retried: the server could not find the requested
                resource'
              reason: Error
              startedAt: "2022-08-08T17:18:21Z"
        hostIP: 10.205.126.187
        phase: Failed
        podIP: 10.205.125.39
        podIPs:
        - ip: 10.205.125.39
        qosClass: BestEffort
        startTime: "2022-08-08T17:18:20Z"
    - apiVersion: v1
      kind: Pod
      metadata:
        annotations:
          kubernetes.io/psp: eks.privileged
          vpc.amazonaws.com/pod-ips: ""
          workflows.argoproj.io/node-id: kubeflow-prereqs-b9539c3c-wf-4228090283
          workflows.argoproj.io/node-name: kubeflow-prereqs-b9539c3c-wf(2)[0].prereq-resources
        creationTimestamp: "2022-08-08T17:18:30Z"
        labels:
          workflows.argoproj.io/completed: "true"
          workflows.argoproj.io/controller-instanceid: addon-manager-workflow-controller
          workflows.argoproj.io/workflow: kubeflow-prereqs-b9539c3c-wf
        name: kubeflow-prereqs-b9539c3c-wf-4228090283
        namespace: addon-manager-system
        ownerReferences:
        - apiVersion: argoproj.io/v1alpha1
          blockOwnerDeletion: true
          controller: true
          kind: Workflow
          name: kubeflow-prereqs-b9539c3c-wf
          uid: 912fc861-58ea-41fa-bf07-f8ab3c877d6a
        resourceVersion: "5067630"
        uid: fc8058a9-b096-4783-a9e6-503a876d03c5
      spec:
        activeDeadlineSeconds: 579
        containers:
        - command:
          - argoexec
          - resource
          - apply
          env:
          - name: ARGO_POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: ARGO_CONTAINER_RUNTIME_EXECUTOR
            value: pns
          - name: GODEBUG
            value: x509ignoreCN=0
          - name: ARGO_WORKFLOW_NAME
            value: kubeflow-prereqs-b9539c3c-wf
          - name: ARGO_CONTAINER_NAME
            value: main
          - name: ARGO_TEMPLATE
            value: '{"name":"prereq-resources-install","inputs":{},"outputs":{},"metadata":{},"resource":{"action":"apply","manifest":"apiVersion:
              v1\nkind: Namespace\nmetadata:\n    annotations:\n        iks.intuit.com/allowHostIPC:
              \"false\"\n        iks.intuit.com/allowHostNetwork: \"false\"\n        iks.intuit.com/allowHostPID:
              \"false\"\n        iks.intuit.com/allowHostPort: \"false\"\n        iks.intuit.com/allowPrivilegeEscalation:
              \"true\"\n        iks.intuit.com/allowPrivileged: \"true\"\n        iks.intuit.com/allowed-igs:
              nodes\n        iks.intuit.com/allowedHostPaths: \"\"\n        iks.intuit.com/managed:
              \"false\"\n        iks.intuit.com/service-asset-alias: Intuit.data.mlplatform.mlpinfrastructure\n        iks.intuit.com/service-asset-id:
              \"8001788230453915925\"\n    labels:\n        app.kubernetes.io/managed-by:
              addonmgr.keikoproj.io\n        app.kubernetes.io/name: kubeflow\n        app.kubernetes.io/part-of:
              kubeflow\n        app.kubernetes.io/version: v1.0.0\n        istio-injection:
              enabled\n    name: kubeflow\n","successCondition":"status.phase == Active"}}'
          - name: ARGO_INCLUDE_SCRIPT_OUTPUT
            value: "false"
          - name: ARGO_DEADLINE
            value: "2022-08-08T17:28:10Z"
          image: docker.intuit.com/quay-rmt/argoproj/argoexec:v3.2.11
          imagePullPolicy: IfNotPresent
          name: main
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: kube-api-access-bzwkx
            readOnly: true
        dnsPolicy: ClusterFirst
        enableServiceLinks: true
        nodeName: ip-10-205-126-187.us-east-2.compute.internal
        nodeSelector:
          node.kubernetes.io/instancegroup: system
        preemptionPolicy: PreemptLowerPriority
        priority: 0
        restartPolicy: Never
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: addon-manager-workflow-installer-sa
        serviceAccountName: addon-manager-workflow-installer-sa
        shareProcessNamespace: true
        terminationGracePeriodSeconds: 30
        tolerations:
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 300
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          operator: Exists
          tolerationSeconds: 300
        - key: ig/system
        volumes:
        - name: kube-api-access-bzwkx
          projected:
            defaultMode: 420
            sources:
            - serviceAccountToken:
                expirationSeconds: 3607
                path: token
            - configMap:
                items:
                - key: ca.crt
                  path: ca.crt
                name: kube-root-ca.crt
            - downwardAPI:
                items:
                - fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
                  path: namespace
      status:
        conditions:
        - lastProbeTime: null
          lastTransitionTime: "2022-08-08T17:18:30Z"
          status: "True"
          type: Initialized
        - lastProbeTime: null
          lastTransitionTime: "2022-08-08T17:18:32Z"
          message: 'containers with unready status: [main]'
          reason: ContainersNotReady
          status: "False"
          type: Ready
        - lastProbeTime: null
          lastTransitionTime: "2022-08-08T17:18:32Z"
          message: 'containers with unready status: [main]'
          reason: ContainersNotReady
          status: "False"
          type: ContainersReady
        - lastProbeTime: null
          lastTransitionTime: "2022-08-08T17:18:30Z"
          status: "True"
          type: PodScheduled
        containerStatuses:
        - containerID: containerd://194ebe8cbe4d5098751e15df67f9d009181db6519853b8c8997752d1f6030e84
          image: docker.intuit.com/quay-rmt/argoproj/argoexec:v3.2.11
          imageID: docker.intuit.com/quay-rmt/argoproj/[email protected]:96390f7ea826f7a918d697f9ff2bdc79c74a343712275f8e3c90c18f474f6b92
          lastState: {}
          name: main
          ready: false
          restartCount: 0
          started: false
          state:
            terminated:
              containerID: containerd://194ebe8cbe4d5098751e15df67f9d009181db6519853b8c8997752d1f6030e84
              exitCode: 1
              finishedAt: "2022-08-08T17:18:32Z"
              message: 'The resource has been deleted while its status was still being
                checked. Will not be retried: the server could not find the requested
                resource'
              reason: Error
              startedAt: "2022-08-08T17:18:31Z"
        hostIP: 10.205.126.187
        phase: Failed
        podIP: 10.205.124.15
        podIPs:
        - ip: 10.205.124.15
        qosClass: BestEffort
        startTime: "2022-08-08T17:18:30Z"
    kind: List
    metadata:
      resourceVersion: ""
    
    # Logs from in your workflow's wait container, something like:
    kubectl logs -c wait -l workflows.argoproj.io/workflow=${workflow},workflow.argoproj.io/phase!=Succeeded
    

    Message from the maintainers:

    Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.

    bug triage 
    opened by Kyle-Wong 0
  • ArtifactGC: Add strategies for "OnWorkflowSuccessOrDeletion" and "OnWorkflowFailureOrDeletion"

    ArtifactGC: Add strategies for "OnWorkflowSuccessOrDeletion" and "OnWorkflowFailureOrDeletion"

    Consider giving users the ability to configure their Workflows so that artifacts are automatically deleted only in the case of success or only in the case of failure.

    Strategies like "OnWorkflowSuccessOrDeletion" and "OnWorkflowFailureOrDeletion" would also delete artifacts when the Workflow itself is deleted, if they were not already deleted.

    We also need to determine whether plain "OnWorkflowSuccess" and "OnWorkflowFailure" strategies are needed.
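
    As a rough illustration only (these strategy names follow the proposal text and are not an existing Argo Workflows API), the decision could be expressed as a small predicate evaluated when the workflow finishes or when the Workflow object is deleted:

        package main

        import "fmt"

        // ArtifactGCStrategy is a hypothetical enumeration mirroring the names in
        // this proposal; it is not (yet) part of the Argo Workflows API.
        type ArtifactGCStrategy string

        const (
            OnWorkflowCompletion        ArtifactGCStrategy = "OnWorkflowCompletion"
            OnWorkflowDeletion          ArtifactGCStrategy = "OnWorkflowDeletion"
            OnWorkflowSuccessOrDeletion ArtifactGCStrategy = "OnWorkflowSuccessOrDeletion"
            OnWorkflowFailureOrDeletion ArtifactGCStrategy = "OnWorkflowFailureOrDeletion"
        )

        // shouldDeleteArtifacts sketches when artifact GC would fire. It is assumed
        // to be evaluated when the workflow completes or when the Workflow object is
        // deleted: "succeeded" is the final phase, "deleted" means the object is
        // being removed.
        func shouldDeleteArtifacts(s ArtifactGCStrategy, succeeded, deleted bool) bool {
            switch s {
            case OnWorkflowCompletion:
                return true
            case OnWorkflowDeletion:
                return deleted
            case OnWorkflowSuccessOrDeletion:
                return succeeded || deleted
            case OnWorkflowFailureOrDeletion:
                return !succeeded || deleted
            default:
                return false
            }
        }

        func main() {
            fmt.Println(shouldDeleteArtifacts(OnWorkflowSuccessOrDeletion, false, true)) // true: workflow deleted
            fmt.Println(shouldDeleteArtifacts(OnWorkflowFailureOrDeletion, true, false)) // false: succeeded, not deleted
        }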


    Message from the maintainers:

    Love this enhancement proposal? Give it a 👍. We prioritise the proposals with the most 👍.

    enhancement 
    opened by juliev0 0
  • UI returns 500 on WorkflowTemplates tab if one WorkflowTemplate is invalid

    UI returns 500 on WorkflowTemplates tab if one WorkflowTemplate is invalid

    Checklist

    • [x] Double-checked my configuration.
    • [x] Tested using the latest version.
    • [x] Used the Emissary executor.

    Summary

    What happened/what you expected to happen?

    I pushed a broken version of a WorkflowTemplate to the cluster. Clicking on the WorkflowTemplates tab in the Argo Workflows UI returns 500, and I can't see any WorkflowTemplates.

    The error message is: "Unsuccessful HTTP response: json: cannot unmarshal object into Go struct field DAGTask.items.spec.templates.dag.tasks.template of type string"

    I understand the behavior, but it would still be nice to get a list of the valid WorkflowTemplates.
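
    One possible direction, sketched below purely as an illustration and not as the actual Argo server code: decode each list item individually and skip the ones that fail to unmarshal into the typed WorkflowTemplate, so a single invalid template no longer breaks the whole listing. The function name is made up; the typed API import is the v3 module path.

        package main

        import (
            "encoding/json"
            "fmt"

            wfv1 "github.com/argoproj/argo-workflows/v3/pkg/apis/workflow/v1alpha1"
            "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        )

        // decodeTemplatesSkippingInvalid is a hypothetical helper: it converts a list
        // of unstructured objects into typed WorkflowTemplates and records the items
        // that fail to decode (e.g. "cannot unmarshal object into Go struct field
        // DAGTask...template of type string") instead of failing the whole list.
        func decodeTemplatesSkippingInvalid(items []unstructured.Unstructured) ([]wfv1.WorkflowTemplate, []string) {
            var valid []wfv1.WorkflowTemplate
            var invalid []string
            for _, item := range items {
                data, err := json.Marshal(item.Object)
                if err != nil {
                    invalid = append(invalid, item.GetName())
                    continue
                }
                var tmpl wfv1.WorkflowTemplate
                if err := json.Unmarshal(data, &tmpl); err != nil {
                    invalid = append(invalid, fmt.Sprintf("%s: %v", item.GetName(), err))
                    continue
                }
                valid = append(valid, tmpl)
            }
            return valid, invalid
        }

        func main() {
            valid, invalid := decodeTemplatesSkippingInvalid(nil)
            fmt.Printf("valid=%d, skipped=%v\n", len(valid), invalid)
        }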

    What version are you running?

    v3.3.8

    Diagnostics

    Paste the smallest workflow that reproduces the bug. We must be able to run the workflow.

    apiVersion: argoproj.io/v1alpha1
    kind: WorkflowTemplate
    metadata:
      name: check-cond
    spec:
      serviceAccountName: workflow
      templates:
        dag:
          tasks:
            - name: print
              template:
                name: go-print
                container:
                  image: docker/whalesay:latest
                  command: [cowsay]
                  args: ["printed"]
    

    Message from the maintainers:

    Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.

    bug triage 
    opened by nice-pink 0
  • Add "argo delete --force" to force deletion of a workflow which has a finalizer

    Add "argo delete --force" to force deletion of a workflow which has a finalizer

    Artifact GC creates a finalizer to prevent Workflows from being deleted until their artifacts have been deleted. But if the artifacts fail to get deleted, the finalizer will remain.

    This CLI command would force-delete a workflow that has a finalizer by removing the finalizer first. (The ArtifactGCTask owned by the workflow, and the ArtifactGC pod owned by the ArtifactGCTask, would then also be deleted.)
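
    Until such a flag exists, the manual equivalent is to strip the finalizers and then delete the Workflow. The sketch below uses the standard Kubernetes dynamic client; the namespace and workflow name are placeholders, and this is only an illustration of what the proposed command might do, not its implementation.

        package main

        import (
            "context"
            "log"

            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/apimachinery/pkg/runtime/schema"
            "k8s.io/apimachinery/pkg/types"
            "k8s.io/client-go/dynamic"
            "k8s.io/client-go/tools/clientcmd"
        )

        func main() {
            cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
            if err != nil {
                log.Fatal(err)
            }
            client, err := dynamic.NewForConfig(cfg)
            if err != nil {
                log.Fatal(err)
            }
            workflows := client.Resource(schema.GroupVersionResource{
                Group: "argoproj.io", Version: "v1alpha1", Resource: "workflows",
            }).Namespace("argo") // placeholder namespace

            // Drop all finalizers (including the artifact-GC one) with a JSON patch.
            // This assumes the Workflow currently has a finalizers field.
            patch := []byte(`[{"op": "remove", "path": "/metadata/finalizers"}]`)
            if _, err := workflows.Patch(context.TODO(), "my-workflow", types.JSONPatchType, patch, metav1.PatchOptions{}); err != nil {
                log.Fatal(err)
            }
            // Now the delete can complete; owned ArtifactGCTasks and their pods are
            // garbage collected along with the Workflow.
            if err := workflows.Delete(context.TODO(), "my-workflow", metav1.DeleteOptions{}); err != nil {
                log.Fatal(err)
            }
        }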


    Message from the maintainers:

    Love this enhancement proposal? Give it a 👍. We prioritise the proposals with the most 👍.

    enhancement 
    opened by juliev0 1
  • Linting/submission for Workflow of Workflows does not work for workflow steps using withItems and other complex workflows

    Linting/submission for Workflow of Workflows does not work for workflow steps using withItems and other complex workflows

    Checklist

    • [x] Double-checked my configuration.
    • [X] Tested using the latest version.
    • [x] Used the Emissary executor.

    Summary

    Linting/submission for Workflow of Workflows does not work for workflow steps using withItems and other complex workflows.

    What happened/what you expected to happen? I would like to use the Workflow of Workflows feature of Argo Workflows, but it only works for basic cases.

    Three cases illustrate the issue:

    1. Used a simple hello-world workflow as the JSON in the parent workflow. It works fine and produces no error.

    Used child workflow definition from here: https://argoproj.github.io/argo-workflows/walk-through/hello-world/

    Workflow available in gist litn_working_helloworld_case.json https://gist.github.com/deepaksinghvi/75f00f51b08c7cf16ce57e19c139251a

    2. Used a DAG child workflow as the JSON in the parent workflow and got an error:

      ✖ in "e62f9d660a864699a544cd94ff4f3cca" (Workflow): templates.main.tasks.workload8d53692805c748dda1aba1ebd622c78f templates.workload8d53692805c748dda1aba1ebd622c78f: failed to resolve {{inputs.parameters.message}}

    Used child workflow definition from here: https://argoproj.github.io/argo-workflows/walk-through/dag/

    Workflow available in gist litn_not_working_dag_case.json https://gist.github.com/deepaksinghvi/75f00f51b08c7cf16ce57e19c139251a

    3. Used a steps child workflow with withItems as the JSON in the parent workflow and got an error:

      ✖ in "e62f9d660a864699a544cd94ff4f3cca" (Workflow): templates.main.tasks.workload8d53692805c748dda1aba1ebd622c78f templates.workload8d53692805c748dda1aba1ebd622c78f: failed to resolve {{item.image}}

    Used child workflow definition from here: https://argoproj.github.io/argo-workflows/walk-through/loops/

    Workflow available in gist litn_not_working_withItems_case.json https://gist.github.com/deepaksinghvi/75f00f51b08c7cf16ce57e19c139251a

    What version are you running? Both 3.2 and 3.3.

    Diagnostics

    Paste the smallest workflow that reproduces the bug. We must be able to run the workflow.

    
    
    # Logs from the workflow controller:
    kubectl logs -n argo deploy/workflow-controller | grep ${workflow} 
    
    # If the workflow's pods have not been created, you can skip the rest of the diagnostics.
    
    # The workflow's pods that are problematic:
    kubectl get pod -o yaml -l workflows.argoproj.io/workflow=${workflow},workflow.argoproj.io/phase!=Succeeded
    
    # Logs from in your workflow's wait container, something like:
    kubectl logs -c wait -l workflows.argoproj.io/workflow=${workflow},workflow.argoproj.io/phase!=Succeeded
    

    Message from the maintainers:

    Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.

    bug triage 
    opened by deepaksinghvi 0