A robust framework for running complex workload scenarios in isolation, using Go; for integration tests, e2e tests, benchmarks, and more! 💪

Overview

A Go module providing a robust framework for running complex workload scenarios in isolation, using Go and Docker; for integration tests, e2e tests, benchmarks, and more! 💪

What are the goals?

  • Ability to schedule isolated processes programmatically from a single process on a single machine.
  • Focus on cluster workloads, cloud-native services, and microservices.
  • Developer scenarios in mind, e.g. preserving scenario readability and Go unit test integration.
  • Metric monitoring as a first-class citizen. Assert on Prometheus metric values during test scenarios, or check overall performance characteristics.

Usage Models

There are three main use cases envisioned for this Go module:

  • Unit test use (see example). Use e2e in unit tests to quickly run complex test scenarios involving many container services. This was the main reason we created this module. You can see it in use in the Cortex and Thanos projects.
  • Standalone use (see example). Use e2e to run setups in interactive mode, where you programmatically spin up workloads as you want and poke at them on your own, using your browser or other tools. There is no longer a need to deploy a full Kubernetes cluster or external machines.
  • Benchmark use (see example). Easily use e2e in local Go benchmarks when your code depends on external services (a sketch follows this list).
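
For the benchmark model, a minimal sketch could look like the following. It is hedged: BenchmarkWithPrometheus and the probed endpoint are illustrative, it assumes the e2edb Prometheus helper shown later in this README, and it needs the testing, net/http, e2e, e2edb, and testutil imports:

    // A sketch of benchmark use: start an isolated environment once,
    // then benchmark code that talks to the containerized service.
    func BenchmarkWithPrometheus(b *testing.B) {
    	e, err := e2e.NewDockerEnvironment("e2e_bench")
    	testutil.Ok(b, err)
    	b.Cleanup(e.Close)

    	p, err := e2edb.NewPrometheus(e, "prometheus")
    	testutil.Ok(b, err)
    	testutil.Ok(b, e2e.StartAndWaitReady(p))

    	b.ResetTimer()
    	for i := 0; i < b.N; i++ {
    		// Exercise the code under benchmark against the container's host endpoint.
    		resp, err := http.Get("http://" + p.Endpoint("http") + "/-/ready")
    		testutil.Ok(b, err)
    		testutil.Ok(b, resp.Body.Close())
    	}
    }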

Getting Started

Let's go through an example leveraging the go test flow:

  1. Implement the workload by embedding e2e.Runnable or *e2e.InstrumentedRunnable, or use the existing ones in the e2edb package. For example, implementing a Thanos sidecar with our desired configuration could look like this:

    func newThanosSidecar(env e2e.Environment, name string, prom e2e.Linkable) *e2e.InstrumentedRunnable {
    	ports := map[string]int{
    		"http": 9090,
    		"grpc": 9091,
    	}
    	return e2e.NewInstrumentedRunnable(env, name, ports, "http", e2e.StartOptions{
    		Image: "quay.io/thanos/thanos:v0.21.1",
    		Command: e2e.NewCommand("sidecar", e2e.BuildArgs(map[string]string{
    			"--debug.name":     name,
    			"--grpc-address":   fmt.Sprintf(":%d", ports["grpc"]),
    			"--http-address":   fmt.Sprintf(":%d", ports["http"]),
    			"--prometheus.url": "http://" + prom.InternalEndpoint(e2edb.AccessPortName),
    			"--log.level":      "info",
    		})...),
    		Readiness: e2e.NewHTTPReadinessProbe("http", "/-/ready", 200, 200),
    		User:      strconv.Itoa(os.Getuid()),
    	})
    }
  2. Implement the test. Start by creating an environment. Currently, e2e supports the Docker environment only. Use a unique name for each of your tests. It's recommended to keep it stable, so resources are consistently cleaned up.

    	// Start isolated environment with given ref.
    	e, err := e2e.NewDockerEnvironment("e2e_example")
    	testutil.Ok(t, err)
    	// Make sure resources (e.g. docker containers, network, dir) are cleaned up.
    	t.Cleanup(e.Close)
  3. Program your scenario as you want. You can start workloads, wait for their readiness, stop them, check their metrics, and use their network endpoints both from the unit test (Endpoint) and within each workload (InternalEndpoint). You can also access each workload's directory, and there is a shared directory across all workloads; check the Dir and InternalDir runnable methods (see the sketch after the full example below).

    	// Create structs for Prometheus containers, each scraping itself.
    	p1, err := e2edb.NewPrometheus(e, "prometheus-1")
    	testutil.Ok(t, err)
    	s1 := newThanosSidecar(e, "sidecar-1", p1)
    
    	p2, err := e2edb.NewPrometheus(e, "prometheus-2")
    	testutil.Ok(t, err)
    	s2 := newThanosSidecar(e, "sidecar-2", p2)
    
    	// Create Thanos Querier container. We can point it at the gRPC endpoints of both sidecars
    	// using the InternalEndpoint method, even before they have started.
    	t1 := newThanosQuerier(e, "query-1", s1.InternalEndpoint("grpc"), s2.InternalEndpoint("grpc"))
    
    	// Start them.
    	testutil.Ok(t, e2e.StartAndWaitReady(p1, s1, p2, s2, t1))
    
    	// To ensure the querier has access to both stores, we can check its Prometheus metrics using the WaitSumMetrics method.
    	// Since the metric we are looking for only appears after init, we add an option to wait for it.
    	testutil.Ok(t, t1.WaitSumMetricsWithOptions(e2e.Equals(2), []string{"thanos_store_nodes_grpc_connections"}, e2e.WaitMissingMetrics()))
    
    	// To ensure Prometheus has already scraped something, check the number of appended samples.
    	testutil.Ok(t, p1.WaitSumMetrics(e2e.Greater(50), "prometheus_tsdb_head_samples_appended_total"))
    	testutil.Ok(t, p2.WaitSumMetrics(e2e.Greater(50), "prometheus_tsdb_head_samples_appended_total"))
    
    	// We can now query Thanos Querier directly from here, using its host address thanks to the Endpoint method.
    	a, err := api.NewClient(api.Config{Address: "http://" + t1.Endpoint("http")})
    	testutil.Ok(t, err)
    
    	{
    		now := model.Now()
    		v, w, err := v1.NewAPI(a).Query(context.Background(), "up{}", now.Time())
    		testutil.Ok(t, err)
    		testutil.Equals(t, 0, len(w))
    		testutil.Equals(
    			t,
    			fmt.Sprintf(`up{instance="%v", job="myself", prometheus="prometheus-1"} => 1 @[%v]
    up{instance="%v", job="myself", prometheus="prometheus-2"} => 1 @[%v]`, p1.InternalEndpoint(e2edb.AccessPortName), now, p2.InternalEndpoint(e2edb.AccessPortName), now),
    			v.String(),
    		)
    	}
    
    	// Stop first Prometheus and sidecar.
    	testutil.Ok(t, s1.Stop())
    	testutil.Ok(t, p1.Stop())
    
    	// Wait a bit until Thanos Querier drops the connection to the stopped Prometheus.
    	testutil.Ok(t, t1.WaitSumMetricsWithOptions(e2e.Equals(1), []string{"thanos_store_nodes_grpc_connections"}, e2e.WaitMissingMetrics()))
    
    	{
    		now := model.Now()
    		v, w, err := v1.NewAPI(a).Query(context.Background(), "up{}", now.Time())
    		testutil.Ok(t, err)
    		testutil.Equals(t, 0, len(w))
    		testutil.Equals(
    			t,
    			fmt.Sprintf(`up{instance="%v", job="myself", prometheus="prometheus-2"} => 1 @[%v]`, p2.InternalEndpoint(e2edb.AccessPortName), now),
    			v.String(),
    		)
    	}
    }
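
A note on workload directories, since Dir and InternalDir are easy to mix up: Dir returns the host-side path of a runnable's directory, while InternalDir returns the path of the same directory as mounted inside the container. A minimal sketch, assuming a runnable r created in the environment (the file name and flag are illustrative):

    // Write a config file from the test process using the host path (Dir).
    cfgPath := filepath.Join(r.Dir(), "config.yaml")
    testutil.Ok(t, os.WriteFile(cfgPath, []byte("some: config"), 0600))

    // Reference the same file from inside the container using InternalDir.
    cmd := e2e.NewCommand("serve", "--config="+filepath.Join(r.InternalDir(), "config.yaml"))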

Credits

Issues
  • Remove `RunOnce`

    Hm, so I created the RunOnce API, but actually I forgot that I had managed to solve my use case without it in https://github.com/thanos-io/thanos/blob/main/test/e2e/compatibility_test.go#L62

    It's as easy as creating a noop container and doing execs...

    // Start noop promql-compliance-tester. See https://github.com/prometheus/compliance/tree/main/promql
    // on how to build a local docker image.
    compliance := e.Runnable("promql-compliance-tester").Init(e2e.StartOptions{
    	Image:   "promql-compliance-tester:latest",
    	Command: e2e.NewCommandWithoutEntrypoint("tail", "-f", "/dev/null"),
    })
    testutil.Ok(t, e2e.StartAndWaitReady(compliance))
    // ...
    stdout, stderr, err := compliance.Exec(e2e.NewCommand("/promql-compliance-tester", "-config-file", filepath.Join(compliance.InternalDir(), "receive.yaml")))
    t.Log(stdout, stderr)
    testutil.Ok(t, err)
    

    I think we should kill the RunOnce API to simplify things, and put the above into examples? 🤔

    cc @saswatamcode @PhilipGough @matej-g ?

    enhancement help wanted 
    opened by bwplotka 6
  • Possibility of infinite loop when waiting for missing metrics to appear

    When using InstrumentedRunnable.WaitSumMetricsWithOptions(...) together with e2e.WaitMissingMetrics(), it's possible that tests get stuck forever because of this loop.

    It would be great to have a safety mechanism to time out the wait. It would help avoid long-running CI jobs, which can get expensive ($$$).
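
    Until such a mechanism exists, one caller-side workaround is to bound the blocking wait with a deadline. A minimal sketch (waitWithTimeout is hypothetical, not part of the e2e API; note the goroutine leaks if the timeout fires first):

    // Hypothetical helper: run a blocking wait in a goroutine and race it
    // against a timer, so a missing metric cannot hang CI forever.
    func waitWithTimeout(timeout time.Duration, wait func() error) error {
    	errCh := make(chan error, 1)
    	go func() { errCh <- wait() }()
    	select {
    	case err := <-errCh:
    		return err
    	case <-time.After(timeout):
    		return errors.New("timed out waiting for metrics")
    	}
    }

    // Usage:
    // err := waitWithTimeout(2*time.Minute, func() error {
    // 	return r.WaitSumMetricsWithOptions(e2e.Equals(1), []string{"some_metric"}, e2e.WaitMissingMetrics())
    // })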

    opened by douglascamata 2
  • monitoring: allow compilation for non-Linux

    This commit extracts the Linux-specific code into files that are guarded by Go build tags, allowing binaries that import the e2e/monitoring package to compile on non-Linux operating systems. I tested this with the following main.go and was able to compile a binary for both Darwin and Windows:

    package main
    
    import (
    	"fmt"
    
    	m "github.com/efficientgo/e2e/monitoring"
    )
    
    func main() {
    	s := m.Service{}
    	fmt.Println(s)
    }
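
    The pattern is roughly as follows (file names and bodies here are illustrative assumptions, not the actual PR diff): Linux-only code moves into a file guarded by a build constraint, and a stub with the same API compiles everywhere else.

    //go:build linux
    // +build linux

    // cgroups_linux.go: Linux-only code, free to import
    // github.com/containerd/cgroups.
    package monitoring

    and the non-Linux stub:

    //go:build !linux
    // +build !linux

    // cgroups_other.go: stub so the package compiles on Darwin, Windows, etc.
    package monitoring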
    

    Fixes: https://github.com/efficientgo/e2e/issues/15

    Signed-off-by: Lucas Servén Marín [email protected]

    opened by squat 2
  • `monitoring` package is not usable on macOS

    See the report in https://github.com/thanos-io/thanos/issues/4642.

    The monitoring package uses github.com/containerd/cgroups, which seems to include code usable only on Linux platforms. We should find a way to make the monitoring code run on macOS as well.

    opened by matej-g 2
  • "Object does not exist" error without pre-pulling.

    Sometimes I was getting weird "object does not exist" errors for new docker images. What always worked was:

    • Run e2e with the WithVerbose option.
    • Copy the docker run ... command for the problematic image.
    • Run it manually locally.

    After that, all runs work 100% of the time.

    Leaving this here as a known issue to debug (: I suspect some permission issue on my machine? 🤔 Let's see if others can repro!

    opened by bwplotka 2
  • Add example application

    This adds a simple example application instrumented with 1 metric (http_requests_total) under examples/. Also added a Dockerfile + make target to build and run the image locally. cc @matej-g

    opened by jessicalins 1
  • Added support for batch jobs.

    Also changed the wait-for-image-download check to a pre-download pull, which does the same but allows us to easily capture the output of the batch job itself.

    Signed-off-by: Bartlomiej Plotka [email protected]

    opened by bwplotka 1
  • Added `Containerize` method for watching local code execution through cadvisor and metrics.

    This replaces e2emonitoring.WithCurrentProcessAsContainer(), which was problematic on non-Linux machines and required manual intervention.

    It had also stopped working, since cgroups v1 is disabled on many systems now.

    opened by bwplotka 1
  • Fix Makefile & CI for macOS

    Makefile commands like build, lint, and test don't run on macOS (e.g. this run).

    This is due to dirname behaving differently on macOS and Linux, i.e., on macOS it can only take one path as an argument. The fix is to use xargs to run dirname once per input. 🙂

    However, now that the macOS test runs on CI, it fails because GitHub Actions does not pre-install Docker for macOS due to licensing issues. It can be installed using a setup-docker action, which I've done here, but it is kind of slow.

    Open question: Is it necessary to keep CI for macOS in this case?

    opened by saswatamcode 1
  • Consider adding HTTPS readiness probe

    On occasion, I use the framework to run services that listen only on an HTTPS port (so an HTTP probe won't work). In such cases I tend to do a simple command readiness check using curl --insecure ... https://<readiness-endpoint> or a similar command. However, this has overhead, since I 1) have to have a utility capable of probing available inside the container; and 2) need to craft my own command with arguments each time.

    It could be beneficial to have an HTTPS readiness probe on a similar principle (e.g. it could skip TLS verification, which should be fine for purely testing purposes).
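
    In the meantime, the workaround described above can be expressed as a command readiness probe, roughly like this (a sketch: it assumes the image ships curl, and that the e2e version in use offers a command-based probe constructor, here assumed to be NewCmdReadinessProbe; the port is illustrative):

    // Inside StartOptions: probe the HTTPS endpoint with TLS verification disabled.
    Readiness: e2e.NewCmdReadinessProbe(e2e.NewCommand(
    	"curl", "--fail", "--insecure", "https://localhost:8443/-/ready",
    )),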

    opened by matej-g 1
  • Matchers package cannot be used since it is internal

    I would like to use the metrics option WithLabelMatchers; however, I am unable to construct the matchers, since the compiler will complain about this package being internal.

    Is this intentional for some reason or just an oversight?

    opened by matej-g 1
  • Getting Dir & InternalDir mixed up - is there a better way?

    Knowing when to use Dir & InternalDir is confusing and getting them mixed up can lead to file permission issues when your containers start up.

    For example, when trying to create a dir called test in the container:

    if err := os.MkdirAll(filepath.Join(demo.InternalDir(), "test"), os.ModePerm); err != nil {
    	return e2e.NewErrInstrumentedRunnable(name, errors.Wrap(err, "create test dir failed"))
    }
    

    leads to the following when run

       unexpected error: create logs dir failed: mkdir /shared: permission denied     
    

    You receive that error message while the test is running and the containers have started up, so naturally you think that the error is coming from within the container, when in actual fact it fails because the test process can't create the /shared directory on your local machine.

    Is there a better way of doing this, or of preventing this kind of confusing error message for the caller?
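
    For reference, the host-side variant of the snippet above would use Dir, since the directory is created by the test process before the container starts (a sketch of the distinction, not a fix for the error message):

    // Create the directory on the host via Dir(); the container will later
    // see the same directory under InternalDir().
    if err := os.MkdirAll(filepath.Join(demo.Dir(), "test"), os.ModePerm); err != nil {
    	return e2e.NewErrInstrumentedRunnable(name, errors.Wrap(err, "create test dir failed"))
    }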

    opened by bill3tt 1
  • idea: Declarative K8s API as the API for docker env.

    Just an idea, but it would be amazing to express a service like e2e.Runnable or instrumented e2e.Runnable in a declarative, mutable state. Ideally, something that speaks a common language like the K8s APIs. Then have the docker engine support an important subset of the K8s API for local use. There would be a few benefits to this:

    • We would be able to compose adjustments of e.g. flags for different tests together better, as Jsonnet allows (though this potentially adds huge cognitive load!). The current approach has issues similar to the initial https://github.com/bwplotka/mimic deployment at Improbable - the input for adjusting services is getting out of control (check the ruler or querier helpers in e.g. https://github.com/thanos-io/thanos/pull/5348).
    • We could REUSE some Infrastructure as Go code (e.g. https://github.com/bwplotka/mimic) for production, staging, testing, etc. K8s clusters AS WELL AS local simplified e2e docker environments!
    opened by bwplotka 0
  • Permissions of DockerEnvironment.SharedDir()

    I had several hours of confusion and difficulty because, on my test machine, the Docker instances received a /shared directory (holding /shared/config etc.) with permissions rwxr-xr-x, but on a CircleCI machine running a PR the Docker instances saw permissions rwx------ for /shared.

    (This affects test containers that don't run as root.)

    It is unclear to me whether the problem is that I am using Docker on a Mac, that I am using Go 1.17, or that I have a different umask than the CircleCI machine. I tried setting my umask to 000 but was unable to get my builds to fail the same way as the CircleCI builds.

    opened by esnible 2
  • Minio is not ready even after `StartAndWaitReady` completes

    Issue description

    When trying to start Minio on the latest version of main, the server is not ready to handle requests, despite StartAndWaitReady having already completed successfully. Any immediate request afterwards results in the error response Server not initialized, please try again.

    I suspect this could be an issue with the readiness probe upstream, since when setting up the same scenario with the code version from before the Minio image update in https://github.com/efficientgo/e2e/pull/4, everything works correctly. However, I haven't confirmed the exact cause yet.

    Minimal setup to reproduce

    Run this test:

    import (
    	"context"
    	"io/ioutil"
    	"testing"
    
    	"github.com/efficientgo/e2e"
    	e2edb "github.com/efficientgo/e2e/db"
    	"github.com/efficientgo/tools/core/pkg/testutil"
    	"github.com/minio/minio-go/v7"
    	"github.com/minio/minio-go/v7/pkg/credentials"
    )
    
    func TestMinio(t *testing.T) {
    	e, err := e2e.NewDockerEnvironment("minio_test", e2e.WithVerbose())
    	testutil.Ok(t, err)
    	t.Cleanup(e.Close)
    
    	const bucket = "minoiotest"
    	m := e2edb.NewMinio(e, "minio", bucket)
    	testutil.Ok(t, e2e.StartAndWaitReady(m))
    
    	mc, err := minio.New(m.Endpoint("http"), &minio.Options{
    		Creds: credentials.NewStaticV4(e2edb.MinioAccessKey, e2edb.MinioSecretKey, ""),
    	})
    	testutil.Ok(t, err)
    	testutil.Ok(t, ioutil.WriteFile("test.txt", []byte("just a test"), 0755))
    
    	_, err = mc.FPutObject(context.Background(), bucket, "obj", "./test.txt", minio.PutObjectOptions{})
    	testutil.Ok(t, err)
    }
    
    bug 
    opened by matej-g 2
  • BuildArgs should support repeating arguments

    It is not uncommon for programs to support repeating arguments to provide multiple values, i.e. in the following format: example -p "first argument" -p "second one" -p "third one"

    It is currently not possible to use BuildArgs to build arguments in such a way, since it relies on map[string]string, which does not allow repeated keys.
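
    Until that is supported, a small caller-side helper over a slice-valued map can produce repeated flags; a sketch (buildRepeatedArgs is hypothetical, not part of the library):

    // Hypothetical helper: expand {"-p": {"a", "b"}} into ["-p", "a", "-p", "b"].
    // Keys are sorted for deterministic output, since map iteration order is random.
    func buildRepeatedArgs(flags map[string][]string) []string {
    	keys := make([]string, 0, len(flags))
    	for k := range flags {
    		keys = append(keys, k)
    	}
    	sort.Strings(keys)

    	var args []string
    	for _, k := range keys {
    		for _, v := range flags[k] {
    			args = append(args, k, v)
    		}
    	}
    	return args
    }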

    opened by matej-g 0
  • Interrupting in standalone mode propagates to docker containers (?)

    Repro:

    • make run-example
    • Ctrl+C

    Logs (after interrupt):

    ^C14:48:03 Killing query-1
    14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.676445174Z caller=main.go:167 msg="caught signal. Exiting." signal=interrupt
    14:48:03 sidecar-2: level=warn name=sidecar-2 ts=2021-07-24T11:48:03.676527331Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason=null
    14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.676541775Z caller=http.go:74 service=http/server component=sidecar msg="internal server is shutting down" err=null
    14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.676619483Z caller=main.go:167 msg="caught signal. Exiting." signal=interrupt
    14:48:03 sidecar-1: level=warn name=sidecar-1 ts=2021-07-24T11:48:03.676682445Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason=null
    14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.676695729Z caller=http.go:74 service=http/server component=sidecar msg="internal server is shutting down" err=null
    14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.677752224Z caller=http.go:93 service=http/server component=sidecar msg="internal server is shutdown gracefully" err=null
    14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.677809395Z caller=intrumentation.go:66 msg="changing probe status" status=not-healthy reason=null
    14:48:03 sidecar-2: level=warn name=sidecar-2 ts=2021-07-24T11:48:03.677847401Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason=null
    14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.677857689Z caller=grpc.go:130 service=gRPC/server component=sidecar msg="internal server is shutting down" err=null
    14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.677875199Z caller=grpc.go:143 service=gRPC/server component=sidecar msg="gracefully stopping internal server"
    14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.677912421Z caller=http.go:93 service=http/server component=sidecar msg="internal server is shutdown gracefully" err=null
    14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.677972702Z caller=intrumentation.go:66 msg="changing probe status" status=not-healthy reason=null
    14:48:03 sidecar-1: level=warn name=sidecar-1 ts=2021-07-24T11:48:03.67801026Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason=null
    14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.678022172Z caller=grpc.go:130 service=gRPC/server component=sidecar msg="internal server is shutting down" err=null
    14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.678038023Z caller=grpc.go:143 service=gRPC/server component=sidecar msg="gracefully stopping internal server"
    14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.678369251Z caller=grpc.go:156 service=gRPC/server component=sidecar msg="internal server is shutdown gracefully" err=null
    14:48:03 sidecar-1: level=info name=sidecar-1 ts=2021-07-24T11:48:03.678437559Z caller=main.go:159 msg=exiting
    14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.678758319Z caller=grpc.go:156 service=gRPC/server component=sidecar msg="internal server is shutdown gracefully" err=null
    14:48:03 sidecar-2: level=info name=sidecar-2 ts=2021-07-24T11:48:03.678797963Z caller=main.go:159 msg=exiting
    14:48:03 prometheus-1: level=warn ts=2021-07-24T11:48:03.695Z caller=main.go:653 msg="Received SIGTERM, exiting gracefully..."
    14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=main.go:676 msg="Stopping scrape discovery manager..."
    14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=main.go:690 msg="Stopping notify discovery manager..."
    14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=main.go:712 msg="Stopping scrape manager..."
    14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=main.go:686 msg="Notify discovery manager stopped"
    14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=main.go:672 msg="Scrape discovery manager stopped"
    14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=main.go:706 msg="Scrape manager stopped"
    14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=manager.go:934 component="rule manager" msg="Stopping rule manager..."
    14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.696Z caller=manager.go:944 component="rule manager" msg="Rule manager stopped"
    14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.697417989Z caller=main.go:167 msg="caught signal. Exiting." signal=interrupt
    14:48:03 query-1: level=warn name=query-1 ts=2021-07-24T11:48:03.697765153Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason=null
    14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.697813969Z caller=http.go:74 service=http/server component=query msg="internal server is shutting down" err=null
    14:48:03 prometheus-2: level=warn ts=2021-07-24T11:48:03.697Z caller=main.go:653 msg="Received SIGTERM, exiting gracefully..."
    14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=main.go:676 msg="Stopping scrape discovery manager..."
    14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=main.go:690 msg="Stopping notify discovery manager..."
    14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=main.go:712 msg="Stopping scrape manager..."
    14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=main.go:672 msg="Scrape discovery manager stopped"
    14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=main.go:686 msg="Notify discovery manager stopped"
    14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=manager.go:934 component="rule manager" msg="Stopping rule manager..."
    14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=manager.go:944 component="rule manager" msg="Rule manager stopped"
    14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.697Z caller=main.go:706 msg="Scrape manager stopped"
    14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.699077457Z caller=http.go:93 service=http/server component=query msg="internal server is shutdown gracefully" err=null
    14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.699157713Z caller=intrumentation.go:66 msg="changing probe status" status=not-healthy reason=null
    14:48:03 query-1: level=warn name=query-1 ts=2021-07-24T11:48:03.699192767Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason=null
    14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.699204094Z caller=grpc.go:130 service=gRPC/server component=query msg="internal server is shutting down" err=null
    14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.699233377Z caller=grpc.go:143 service=gRPC/server component=query msg="gracefully stopping internal server"
    14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.699338349Z caller=grpc.go:156 service=gRPC/server component=query msg="internal server is shutdown gracefully" err=null
    14:48:03 query-1: level=info name=query-1 ts=2021-07-24T11:48:03.699371953Z caller=main.go:159 msg=exiting
    14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.703Z caller=notifier.go:601 component=notifier msg="Stopping notification manager..."
    14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.703Z caller=main.go:885 msg="Notifier manager stopped"
    14:48:03 prometheus-1: level=info ts=2021-07-24T11:48:03.703Z caller=main.go:897 msg="See you next time!"
    14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.710Z caller=notifier.go:601 component=notifier msg="Stopping notification manager..."
    14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.710Z caller=main.go:885 msg="Notifier manager stopped"
    14:48:03 prometheus-2: level=info ts=2021-07-24T11:48:03.711Z caller=main.go:897 msg="See you next time!"
    14:48:04 Killing sidecar-2
    14:48:04 Error response from daemon: Cannot kill container: e2e_example-sidecar-2: No such container: e2e_example-sidecar-2
    
    14:48:04 Unable to kill service sidecar-2 : exit status 1
    14:48:04 Killing prometheus-2
    14:48:04 Error response from daemon: Cannot kill container: e2e_example-prometheus-2: No such container: e2e_example-prometheus-2
    
    14:48:04 Unable to kill service prometheus-2 : exit status 1
    14:48:04 Killing sidecar-1
    14:48:04 Error response from daemon: Cannot kill container: e2e_example-sidecar-1: No such container: e2e_example-sidecar-1
    
    14:48:04 Unable to kill service sidecar-1 : exit status 1
    14:48:04 Killing prometheus-1
    14:48:04 Error response from daemon: Cannot kill container: e2e_example-prometheus-1: No such container: e2e_example-prometheus-1
    
    14:48:04 Unable to kill service prometheus-1 : exit status 1
    2021/07/24 14:48:04 received signal interrupt
    exit status 1
    make: *** [Makefile:78: run-example] Interrupt
    
    opened by bwplotka 0
Releases (latest: v0.12.1)
  • v0.12.1(May 9, 2022)

    What's Changed

    • Fixed support for local images. by @bwplotka in https://github.com/efficientgo/e2e/pull/30
    • Removed RunOnce, extended Exec. by @bwplotka in https://github.com/efficientgo/e2e/pull/32

    Full Changelog: https://github.com/efficientgo/e2e/compare/v0.12.0...v0.12.1

  • v0.12.0(Apr 17, 2022)

    What's Changed

    • Added support for specifying Docker capabilities. by @bwplotka in https://github.com/efficientgo/e2e/pull/13
    • Docker runnable start: Wait for image download if not available by @matej-g in https://github.com/efficientgo/e2e/pull/14
    • monitoring: allow compilation for non-Linux by @squat in https://github.com/efficientgo/e2e/pull/16
    • Fix docker network host addr for macOS by @saswatamcode in https://github.com/efficientgo/e2e/pull/17
    • Fix Makefile & CI for macOS by @saswatamcode in https://github.com/efficientgo/e2e/pull/18
    • WaitSumMetrics: Add option to configure backoff by @matej-g in https://github.com/efficientgo/e2e/pull/20
    • Fixed README for VPN cases. by @bwplotka in https://github.com/efficientgo/e2e/pull/21
    • Refactored instrumented for more consistent API with runnable. by @bwplotka in https://github.com/efficientgo/e2e/pull/22
    • Support HTTPS readiness probe by @clyang82 in https://github.com/efficientgo/e2e/pull/19
    • db: Update minio version and bin path by @PhilipGough in https://github.com/efficientgo/e2e/pull/26
    • Added Containerize method for watching local code execution through cadvisor and metrics. by @bwplotka in https://github.com/efficientgo/e2e/pull/24
    • Added support to batch jobs. by @bwplotka in https://github.com/efficientgo/e2e/pull/28
    • Make minio SSE using KMS optional by @saswatamcode in https://github.com/efficientgo/e2e/pull/27
    • e2emonitoring: Adding support for custom registry by @bwplotka in https://github.com/efficientgo/e2e/pull/29

    New Contributors

    • @squat made their first contribution in https://github.com/efficientgo/e2e/pull/16
    • @saswatamcode made their first contribution in https://github.com/efficientgo/e2e/pull/17
    • @clyang82 made their first contribution in https://github.com/efficientgo/e2e/pull/19
    • @PhilipGough made their first contribution in https://github.com/efficientgo/e2e/pull/26

    Full Changelog: https://github.com/efficientgo/e2e/compare/v0.11.1...v0.12.0

  • v0.11.1(Sep 1, 2021)

  • v0.11.0(Aug 25, 2021)

    • Exposed matchers as non-internal package.
    • Expanded standalone example with Jaeger tracing.
    • Added support for monitoring e2e process itself.
  • v0.10.0(Aug 7, 2021)

  • v0.9.0(Jul 26, 2021)

Related packages

Package has a tool to generate workload for vegeta-based kube-api stress tests.

Mikhail Sakhnov 0 Nov 22, 2021
HTTP mocking to test API services for chaos scenarios

GAOS HTTP mocking to test API services for chaos scenarios Gaos, can create and provide custom mock restful services via using your fully-customizable

Trendyol Open Source 209 May 24, 2022
The test suite to demonstrate the chaos experiment behavior in different scenarios

Litmus-E2E The goal of litmus e2e is to provide the test suite to demonstrate the chaos experiment behavior in different scenarios. As the name sugges

Vedant Shrotria 0 Jan 4, 2022
A workload generator for MySQL compatible databases

Diligent is a tool we created at Flipkart for generating workloads for our SQL databases that enables us to answer specific questions about the performance of a database.

Flipkart Incubator 14 May 18, 2022
Testing framework for Go. Allows writing self-documenting tests/specifications, and executes them concurrently and safely isolated. [UNMAINTAINED]

GoSpec GoSpec is a BDD-style testing framework for the Go programming language. It allows writing self-documenting tests/specs, and executes them in p

Esko Luontola 112 Apr 5, 2022
http integration test framework

go-hit hit is an http integration test framework written in golang. It is designed to be flexible as possible, but to keep a simple to use interface f

Tobias Salzmann 113 Jun 27, 2022
Partial fork of testify framework with allure integration

allure-testify Table of Contents Demo Getting started Examples Global environments keys How to use suite Allure info Test info Label Link Allure Actions Step

null 3 Dec 1, 2021
Simple HTTP integration test framework for Golang

go-itest Hassle-free REST API testing for Go. Installation go get github.com/jefflinse/go-itest Usage Create tests for your API endpoints and run the

Jeff Linse 12 Jan 8, 2022
Snapshot - snapshot provides a set of utility functions for creating and loading snapshot files for use in snapshot tests.

Daniel J. Rollins 2 Jan 27, 2022
Record and replay your HTTP interactions for fast, deterministic and accurate tests

go-vcr go-vcr simplifies testing by recording your HTTP interactions and replaying them in future runs in order to provide fast, deterministic and acc

Marin Atanasov Nikolov 884 Jun 24, 2022
Testy is a Go test running framework designed for Gametime's API testing needs.

template_library import "github.com/gametimesf/template_library" Overview Index Overview Package template_library is a template repository for buildin

Gametime United, Inc. 4 Jun 21, 2022
Full-featured test framework for Go! Assertions, mocking, input testing, output capturing, and much more! 🍕

testza 🍕 Testza is like pizza for Go - you could live without it, but why should you? Get The Module | Documentation | Contributing | Code of Conduct

Marvin Wendt 389 Jun 18, 2022
Extremely flexible golang deep comparison, extends the go testing package and tests HTTP APIs

go-testdeep Extremely flexible golang deep comparison, extends the go testing package. Latest news Synopsis Description Installation Functions Availab

Maxime Soulé 303 Jun 19, 2022
A next-generation testing tool. Orion provides a powerful DSL to write and automate your acceptance tests

Orion is born to change the way we implement our acceptance tests. It takes advantage of HCL from Hashicorp to provide a simple DSL to write the acceptance tests.

Wesovi Labs 42 Jun 18, 2022
A simple and expressive HTTP server mocking library for end-to-end tests in Go.

mockhttp A simple and expressive HTTP server mocking library for end-to-end tests in Go. Installation go get -d github.com/americanas-go/mockhttp Exa

Americanas Go 6 Dec 19, 2021
How we can run unit tests in parallel mode with failpoint injection taking effect and without injection race

This is a simple demo to show how we can run unit tests in parallel mode with failpoint injection taking effect and without injection race. The basic

amyangfei 1 Oct 31, 2021
A mock of Go's net package for unit/integration testing

netmock: Simulate Go network connections netmock is a Go package for simulating net connections, including delays and disconnects. This is work in pro

Lucas Wolf 1 Oct 27, 2021
Package for comparing Go values in tests

Package for equality of Go values This package is intended to be a more powerful and safer alternative to reflect.DeepEqual for comparing whether two

Google 3k Jun 27, 2022
Go testing in the browser. Integrates with `go test`. Write behavioral tests in Go.

GoConvey is awesome Go testing Welcome to GoConvey, a yummy Go testing tool for gophers. Works with go test. Use it in the terminal or browser accordi

SmartyStreets 7.3k Jun 28, 2022