Kubedock is a minimal implementation of the Docker API that orchestrates containers on a Kubernetes cluster rather than running them locally.

Overview

Kubedock

Kubedock is a minimal implementation of the Docker API that orchestrates containers on a Kubernetes cluster rather than running them locally. The main driver for this project is to run tests that require Docker containers inside a container, without having to run docker-in-docker within resource-heavy containers. Containers orchestrated by kubedock are considered short-lived and ephemeral, and are not intended to run production services. An example use case is running testcontainers-java enabled unit tests in a Tekton pipeline; in that setup, running kubedock as a sidecar orchestrates the containers inside the Kubernetes cluster instead of within the task container itself.

Quick start

Running this locally with a testcontainers-enabled unit test requires a running kubedock (kubedock server). Then start the unit tests in another terminal with the environment variables below set, for example:

export TESTCONTAINERS_RYUK_DISABLED=true
export DOCKER_HOST=tcp://127.0.0.1:8999
mvn test
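
The unit tests themselves need no kubedock-specific changes; testcontainers picks up the DOCKER_HOST setting from the environment. As a sketch, a minimal JUnit 5 test along the lines below (the redis image and port are only illustrative) would then run its container through kubedock:

    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.GenericContainer;
    import org.testcontainers.utility.DockerImageName;

    class KubedockSmokeTest {

        @Test
        void startsContainerThroughKubedock() {
            try (GenericContainer<?> redis = new GenericContainer<>(DockerImageName.parse("redis:6-alpine"))
                    .withExposedPorts(6379)) {
                redis.start();
                // The mapped port is reachable locally through the port-forward set up by kubedock.
                System.out.println(redis.getHost() + ":" + redis.getMappedPort(6379));
            }
        }
    }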

By default, kubedock orchestrates in the namespace that has been set in the current context. This can be overridden with the -n argument (or via the NAMESPACE environment variable). The service requires permissions to create Deployments, Services and Configmaps in the namespace.

To see a complete list of available options and additional examples, run kubedock --help.

Implementation

When kubedock is started with kubedock server, it starts an API server on port :8999 that can be used as a drop-in replacement for the default Docker API server. Additionally, kubedock can also listen on a unix socket (docker.sock).

Containers

Container API calls are translated into Kubernetes Deployment resources. When a container is started, kubedock creates port-forwards for the ports that should be exposed (only TCP is supported). Starting a container is a blocking call that waits until the Deployment results in a running Pod. By default it waits for a maximum of 1 minute, but this is configurable with the --timeout argument. The logs API calls always return the complete history of logs and do not differentiate between stdout and stderr; all log output is sent as stdout. Exec calls on the containers are supported.
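
As an illustration, a testcontainers-java fragment along these lines (assuming the usual imports; the nginx image is arbitrary) maps directly onto this behaviour:

    GenericContainer<?> nginx = new GenericContainer<>(DockerImageName.parse("nginx:1.25"))
            .withExposedPorts(80);         // each exposed port becomes a kubedock port-forward (TCP only)
    nginx.start();                         // blocks until the Deployment has a running Pod, or --timeout expires
    String logs = nginx.getLogs();         // the complete log history; stdout and stderr are combined as stdout
    nginx.execInContainer("nginx", "-v");  // exec calls are supported (this method declares checked exceptions)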

Volumes

Volumes are implemented by copying the source content into the container by means of an init container that runs before the actual container is started. By default, the kubedock image with the same version as the running kubedock is used as the init container. However, this can be any image that has tar available and can be configured with the --initimage argument.

Volumes are one-way copies and ephemeral. This means that any data written into the volume is not available locally. It also means that mounts of devices or sockets are not supported (e.g. mounting a docker socket).

Copying data from a running container back to the client is not supported either. Also be aware that copying data to a container implicitly starts it. This differs from a real Docker API, where a container can be in an unstarted state. To work around this, use a volume instead.
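
With testcontainers-java, the volume route could look like the fragment below (assuming the usual imports; the redis image and file names are only illustrative). The mapped content is copied in by the init container described above before the main container starts:

    GenericContainer<?> redis = new GenericContainer<>(DockerImageName.parse("redis:6-alpine"))
            .withClasspathResourceMapping("redis.conf", "/etc/redis.conf", BindMode.READ_ONLY)
            .withExposedPorts(6379);
    redis.start();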

Networking

Kubedock flattens all networking, which basically means that everything runs in the same namespace. This should be sufficient for most use cases. Network aliases are supported: when a network alias is present, kubedock creates a service exposing all ports that have been exposed by the container. If no ports are configured, kubedock can fetch the ports that are exposed in the container image; to do this, kubedock should be started with the --inspector argument.
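
With testcontainers-java, setting a network alias on a container is what results in such a service. A fragment along these lines (assuming the usual imports; the postgres image and names are only illustrative) makes the database reachable as db from other containers in the same namespace:

    Network network = Network.newNetwork();
    GenericContainer<?> db = new GenericContainer<>(DockerImageName.parse("postgres:15-alpine"))
            .withNetwork(network)
            .withNetworkAliases("db")       // the alias results in a Service named "db"
            .withExposedPorts(5432)         // the Service exposes the ports exposed by the container
            .withEnv("POSTGRES_PASSWORD", "secret");
    db.start();
    // Other containers in the same namespace can now reach the database at db:5432.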

Images

Kubedock implements the images API by tracking which images are requested; it is not able to actually build images. If kubedock is started with --inspector, it fetches configuration information about an image by calling external container registries. This configuration includes the ports that are exposed by the container image itself, which improves network alias support. The registries should be configured by the client (for example by doing a skopeo login).

Namespace locking

If multiple kubedock instances use the same namespace, collisions in network aliases are possible. Since networks are flattened (see Networking), every network alias results in a Service with the name of that alias. To ensure tests don't fail because of these name collisions, kubedock can lock the namespace while it's running. When this is enabled with the --lock argument, kubedock creates a Configmap called kubedock-lock in the namespace, in which it tracks the current ownership.

Resources cleanup

Kubedock dynamically creates deployments and services in the configured namespace. If kubedock is requested to delete a container, it removes the deployment and related services. Before exiting, kubedock also deletes all the resources (Services and Deployments) it created in the running instance (identified by the kubedock.id label).

Automatic reaping

If a test fails and doesn't clean up the containers it started, those resources remain in the namespace. To prevent unused deployments and services from lingering around, kubedock automatically deletes deployments and services that are older than 15 minutes (by default) if they are owned by the current process. If a deployment or service is not owned by the running process, it is deleted after 30 minutes, provided it has the kubedock=true label.

Forced cleaning

The reaping of resources can also be enforced at startup. When kubedock is started with the --prune-start argument, it deletes all resources that have the kubedock=true label before starting the API server. This includes resources created by other instances.

Issues
  • Port from container are not exposed by kubedock

    Hello there,

    When I'm running e.g. MongoDB as a testcontainers image, port 27017 is not exposed by kubedock; kubectl says:

        Container ID:  containerd://f08607808925b030a57c604f02904ce8f74c02fd9fdf43fb317281c21f6f06e0
        Image:         mongo:4.4.10
        Image ID:      docker.io/library/mongo@sha256:2821997cba3c26465b59cc2e863b940d21a58732434462100af10659fc0d164f
        Port:          27017/TCP
        Host Port:     0/TCP
        Args:
          --replSet
          docker-rs
    

    Testcontainers test suite reports log:

    org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting for a server that matches WritableServerSelector. Client view of cluster state is {type=UNKNOWN, servers=[{address=docker:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches WritableServerSelector. Client view of cluster state is {type=UNKNOWN, servers=[{address=docker:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]
    

    docker is the container alias; port 2475 is exposed and the Docker API is available.

    Ports of the target containers are not accessible from the container that uses kubedock. Does anyone know the reason for this?

    Cheers, W

    opened by wgebis 12
  • Example for Mounting Files

    Hi,

    my colleagues and I are having trouble using TestContainers to mount files into our container.

    Could you provide an example that demonstrates how kubedock can be used to successfully copy (or even mount) files to a container?

    We tried using:

    .withCopyFileToContainer(MountableFile.forClasspathResource("/ignite.xml"), "/conf/ignite.xml")
    .new GenericContainer(...)
            .withClasspathResourceMapping("redis.conf",
                                          "/etc/redis.conf",
                                          BindMode.READ_ONLY)
    .withFileSystemBind("./src/test/local/data-grid/conf", "/conf")
    

    but Ignite gives us a FileNotFoundException for /conf/ignite.xml (The config is needed for startup).

    This is using kubedock-0.4.0 with Kubernetes 1.21.1

    P.S. Thanks for creating kubedock! It's a great-looking solution for getting TestContainers to work nicely with Kubernetes.

    opened by FelixKampfer 12
  • --reverse-proxy not working as expected when running kubedock as a standalone service

    I have kubedock running as a standalone service in namespace A.

    I am running a test that uses kubedock to spin up a MySQL pod in namespace B.

    I can see that the MySQL pod has started up. When I exec into a pod in namespace A I can successfully nc MySQL using the pod IP (on port 3306) and cluster IP (on the randomly assigned port).

    However I can't connect to the kubedock pod using its pod IP and the randomly assigned MySQL service port.

    # nc -vvv 10.52.156.57:34615
    nc: 10.52.156.57:34615 (10.52.156.57:34615): Connection refused
    

    I am running kubedock with the following arguments:

          args:
            - "server"
            - "--image-pull-secrets=xxxx"
            - "--namespace=B"
            - "--reverse-proxy"
    

    Would you expect kubedock to work when run like this? Is there anything obvious I'm doing wrong?

    From the logs, it looks like the kubedock reverse proxy has started up, e.g.

    I0722 10:27:44.843072       1 deploy.go:190] reverse proxy for 34615 to 3306
    I0722 10:27:44.843079       1 tcpproxy.go:33] start reverse-proxy localhost:34615->172.20.86.237:3306
    I0722 10:27:44.852264       1 copy.go:36] copy 4096 bytes to 83710e5d2443:/
    
    opened by rcgeorge23 8
  • Testcontainers waiting for container output to contain expected content is not reliable

    Hello,

    First of all thank you very much for the awesome project!

    We've tried using kubedock for our testcontainers tests, but have hit an issue with the below pattern from the testcontainers docs:

    WaitingConsumer consumer = new WaitingConsumer();
    
    container.followOutput(consumer, STDOUT);
    
    consumer.waitUntil(frame -> 
        frame.getUtf8String().contains("STARTED"), 30, TimeUnit.SECONDS);
    

    About 1 out of 5 times, it will time out even though the logs do contain the expected string. Calling container.getLogs() just before the wait confirms that.

    Is this a known limitation? I am happy to help debug this, but I'm not sure where to start.

    opened by dpp23 8
  • Use timeout config for the init container timeout

    In some environments (like our overloaded EKS cluster) it might take more than the hardcoded 30-second timeout for the setup init container to start.

    ~~This change makes it configurable with a default of 30s, which should match the current behaviour if the parameter is not overwritten.~~

    Simply use the timeout configuration parameter for the init timeout instead of the hardcoded 30s

    opened by dpp23 6
  • Reverse proxy with random ports

    Hi @joyrex2001, really enjoying kubedock so far!

    We are trying to move away from using --port-forward, replacing it with --reverse-proxy, unfortunately we have a bunch of TestContainers tests which need to communicate with the container via random ports. We're seemingly hitting a wall here with --reverse-proxy, with the TestContainers tests ending up failing with timeouts, whereas it works out of the box with --port-forward.

    Do you have any suggestions for this use case? This might simply be that I do not fully understand how --reverse-proxy is supposed to work, as there isn't really a lot of documentation on this flag, so feel free to correct me if it isn't designed for this. Alternatively, what makes --port-forward unreliable, and is it addressable?

    We would also like to host kubedock on our cluster, while running the tests remotely on our CI platform, however that requires an extra layer of proxying between kubedock and our CI with something like kubectl port-forward, which makes this problem even worse. Have you thought about this scenario as well?

    opened by micheldlebeau 5
  • Allow specifying the User a container should run as

    • Added a CLI flag that sets a default RunAsUser on managed pods, which can be overridden by setting the User value when creating a container via the Docker API.
    • Implemented as a PodSecurityContext on the generated pod. This is useful for running Kubedock against K8S clusters that require pods to run as non-root
    opened by yaraskm 4
  • kubedock high CPU usage if pods stuck in CrashLoopBackOff

    Hi,

    Today I found this amazing project to close the gap between testcontainers and GitLab CI, which also runs on Kubernetes. Thanks for this awesome work!

    Before I start to integrate kubedock, I tested it locally. While initial tests are fine, running https://github.com/rieckpil/blog-tutorials/tree/master/spring-boot-integration-tests-testcontainers results in high CPU usage.

    Logs:

    [email protected] ~ % ~/Downloads/kubedock server --port-forward
    I1027 19:35:11.748402   84001 main.go:26] kubedock 0.7.0 (20211008-105904)
    I1027 19:35:11.749336   84001 main.go:95] kubernetes config: namespace=vrp-testcontainers-kubernetes, initimage=joyrex2001/kubedock:0.7.0, ready timeout=1m0s
    I1027 19:35:11.749668   84001 main.go:117] reaper started with max container age 1h0m0s
    I1027 19:35:11.749770   84001 main.go:68] port-forwarding services to 127.0.0.1
    I1027 19:35:11.749885   84001 main.go:100] default image pull policy: ifnotpresent
    I1027 19:35:11.749926   84001 main.go:102] using namespace: vrp-testcontainers-kubernetes
    I1027 19:35:11.750065   84001 main.go:35] api server started listening on :2475
    [GIN] 2021/10/27 - 19:35:20 | 200 |     123.807µs |       127.0.0.1 | GET      "/info"
    [GIN] 2021/10/27 - 19:35:20 | 200 |      29.519µs |       127.0.0.1 | GET      "/info"
    [GIN] 2021/10/27 - 19:35:20 | 200 |      28.059µs |       127.0.0.1 | GET      "/version"
    [GIN] 2021/10/27 - 19:35:20 | 200 |      75.252µs |       127.0.0.1 | GET      "/images/json"
    [GIN] 2021/10/27 - 19:35:20 | 200 |        78.5µs |       127.0.0.1 | GET      "/images/jboss/keycloak:11.0.0/json"
    [GIN] 2021/10/27 - 19:35:20 | 201 |     394.528µs |       127.0.0.1 | POST     "/containers/create"
    [GIN] 2021/10/27 - 19:35:35 | 204 | 14.567361081s |       127.0.0.1 | POST     "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/start"
    I1027 19:35:35.422243   84001 portforward.go:42] start port-forward 34468->8080
    [GIN] 2021/10/27 - 19:35:35 | 200 |     120.223µs |       127.0.0.1 | GET      "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/json"
    [GIN] 2021/10/27 - 19:35:35 | 200 |     124.327µs |       127.0.0.1 | GET      "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/json"
    E1027 19:35:36.603462   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:36 socat[60751] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:36.676557   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:36 socat[60758] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:37.762316   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:37 socat[60924] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:37.832716   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:37 socat[60931] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:38.912798   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:38 socat[61039] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:38.987719   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:39 socat[61046] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:40.076636   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:40 socat[61095] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:40.158013   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:40 socat[61107] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    [GIN] 2021/10/27 - 19:35:53 | 201 |      84.187µs |       127.0.0.1 | POST     "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/exec"
    [GIN] 2021/10/27 - 19:35:54 | 200 |  317.038459ms |       127.0.0.1 | POST     "/exec/808a7da1789efc9f5e8a0b8bdf5b8ca44843e0dddcaeed5ab7e0a331870c2029/start"
    [GIN] 2021/10/27 - 19:35:54 | 200 |      68.388µs |       127.0.0.1 | GET      "/exec/808a7da1789efc9f5e8a0b8bdf5b8ca44843e0dddcaeed5ab7e0a331870c2029/json"
    [GIN] 2021/10/27 - 19:35:54 | 200 |      83.432µs |       127.0.0.1 | GET      "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/json"
    I1027 19:35:54.033603   84001 containers.go:217] ignoring signal
    [GIN] 2021/10/27 - 19:35:54 | 204 |      42.978µs |       127.0.0.1 | POST     "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/kill"
    [GIN] 2021/10/27 - 19:35:54 | 200 |      90.172µs |       127.0.0.1 | GET      "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/json"
    [GIN] 2021/10/27 - 19:35:54 | 204 |  253.744279ms |       127.0.0.1 | DELETE   "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369?v=true&force=true"
    [GIN] 2021/10/27 - 19:35:54 | 200 |     105.337µs |       127.0.0.1 | GET      "/images/postgres:12/json"
    [GIN] 2021/10/27 - 19:35:54 | 201 |     175.279µs |       127.0.0.1 | POST     "/containers/create"
    I1027 19:36:19.621781   84001 portforward.go:42] start port-forward 47785->5432
    [GIN] 2021/10/27 - 19:36:19 | 204 | 25.308206574s |       127.0.0.1 | POST     "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/start"
    [GIN] 2021/10/27 - 19:36:19 | 200 |     129.701µs |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/json"
    [GIN] 2021/10/27 - 19:36:19 | 200 |      81.498µs |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/json"
    [GIN] 2021/10/27 - 19:37:19 | 200 |     317.497µs |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/json"
    [GIN] 2021/10/27 - 19:37:19 | 200 |   74.017123ms |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/logs?stdout=true&stderr=true&since=0"
    [GIN] 2021/10/27 - 19:37:19 | 200 |    58.57439ms |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/logs?stdout=true&stderr=true&since=0"
    [GIN] 2021/10/27 - 19:37:19 | 201 |     190.866µs |       127.0.0.1 | POST     "/containers/create"
    [GIN] 2021/10/27 - 19:37:23 | 204 |  3.277132909s |       127.0.0.1 | POST     "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/start"
    I1027 19:37:23.216054   84001 portforward.go:42] start port-forward 62630->5432
    [GIN] 2021/10/27 - 19:37:23 | 200 |      95.659µs |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/json"
    [GIN] 2021/10/27 - 19:37:23 | 200 |      86.154µs |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/json"
    [GIN] 2021/10/27 - 19:38:23 | 200 |     116.501µs |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/json"
    [GIN] 2021/10/27 - 19:38:23 | 200 |   69.953111ms |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/logs?stdout=true&stderr=true&since=0"
    [GIN] 2021/10/27 - 19:38:23 | 200 |   61.136457ms |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/logs?stdout=true&stderr=true&since=0"
    [GIN] 2021/10/27 - 19:38:23 | 201 |     285.397µs |       127.0.0.1 | POST     "/containers/create"
    [GIN] 2021/10/27 - 19:38:28 | 204 |  4.293655856s |       127.0.0.1 | POST     "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/start"
    I1027 19:38:28.117308   84001 portforward.go:42] start port-forward 62426->5432
    [GIN] 2021/10/27 - 19:38:28 | 200 |     203.466µs |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/json"
    [GIN] 2021/10/27 - 19:38:28 | 200 |     164.063µs |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/json"
    [GIN] 2021/10/27 - 19:39:28 | 200 |     146.367µs |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/json"
    [GIN] 2021/10/27 - 19:39:28 | 200 |   66.033642ms |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/logs?stdout=true&stderr=true&since=0"
    [GIN] 2021/10/27 - 19:39:28 | 200 |   62.444152ms |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/logs?stdout=true&stderr=true&since=0"
    

    Kubedock: 0.7.0, OS: macOS, Kubernetes: OpenShift 3.11

    [email protected] ~ % kubectl get all -l kubedock=true
    NAME                                READY   STATUS             RESTARTS   AGE
    pod/169389ceb3c5-85446cf64b-66qpb   0/1     CrashLoopBackOff   5          3m
    pod/2d2f2db70a48-65bdb6dbb7-67rj8   0/1     CrashLoopBackOff   5          4m
    pod/a1f2506d6520-c5c57c8b6-x5b65    0/1     CrashLoopBackOff   5          5m
    
    NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
    service/kd-169389ceb3c5   ClusterIP   172.30.173.82   <none>        5432/TCP,62426/TCP   3m
    service/kd-2d2f2db70a48   ClusterIP   172.30.227.70   <none>        5432/TCP,62630/TCP   4m
    service/kd-a1f2506d6520   ClusterIP   172.30.71.160   <none>        5432/TCP,47785/TCP   5m
    
    NAME                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/169389ceb3c5   1         1         1            0           3m
    deployment.apps/2d2f2db70a48   1         1         1            0           4m
    deployment.apps/a1f2506d6520   1         1         1            0           5m
    
    NAME                                      DESIRED   CURRENT   READY   AGE
    replicaset.apps/169389ceb3c5-85446cf64b   1         1         0       3m
    replicaset.apps/2d2f2db70a48-65bdb6dbb7   1         1         0       4m
    replicaset.apps/a1f2506d6520-c5c57c8b6    1         1         0       5m
    

    The reason for this could be that the pods are in CrashLoopBackOff. The reason why the pods crash is known (file permission issues), but in such cases kubedock should not generate such a high load. The high load persists even after mvn clean verify has finished. Also, pressing CTRL+C takes some time to terminate the process.

    I'm able to reproduce this behavior. If you teach me how, I can provide traces or profiling files. But before doing this, please ensure that such profiling files do not contain sensitive information like the kube credentials.

    opened by jkroepke 4
  • Accessing containers - --reverse-proxy / --inspector

    I am struggling a bit trying to understand how best to access containers that kubedock has created.

    I have created a simple integration test that depends on a mysql container. If I use kubedock in --reverse-proxy mode, I can see that when my test asks testcontainers for the mysql container db url, it is given the fully qualified kubedock hostname with the random port that has been assigned to the mysql instance:

    Waiting for database connection to become available at jdbc:mysql://kubedock.xxx.svc.cluster.local:57636/yyy using query 'SELECT 1'
    

    Unfortunately, because I have not exposed this port explicitly in the kubedock service config, it times out while trying to obtain a connection.

    If I remove the --reverse-proxy flag and instead try to use --inspector, I can see that a k8s service has been created for my mysql container with the random port exposed, however when my test asks testcontainers for the mysql db url, it is still given the kubedock hostname rather than that of the mysql service.

    So I guess my question is, what is the expected use case for the --inspector flag? Is there some way I can get testcontainers to provide me with the container's service name rather than the kubedock service name?

    opened by rcgeorge23 2
  • Kubedock liveness / readiness probe?

    Hi there,

    More of a question than an issue...

    Does kubedock provide a liveness / readiness endpoint?

    I would like to run kubedock as a standalone deployment in k8s, so I'm wondering how to configure the k8s liveness / readiness probes.

    Thanks!

    opened by rcgeorge23 2
  • Document serviceaccount RBAC configuration

    I did not find in the README how to correctly configure RBAC for the serviceaccount. I personally found this to be a minimal working configuration:

            apiVersion: rbac.authorization.k8s.io/v1
            kind: Role
            metadata:
              name: kubedock
            rules:
              - apiGroups: ["apps"]
                resources: ["deployments"]
                verbs: ["create", "get", "list", "delete"]
              - apiGroups: [""]
                resources: ["pods", "pods/log"]
                verbs: ["list", "get"]
              - apiGroups: [""]
                resources: ["services"]
                verbs: ["create", "get", "list"]
    

    Perhaps it should be mentioned in the docs?

    opened by newsletter-csuite 2
  • Waiting for running container times out

    The following simple python script fails if running against kubedock, but works against docker:

    import docker
    
        client = docker.from_env(timeout=_DOCKER_CLIENT_TIMEOUT)
    
        container = client.containers.run(
            "busybox",
            entrypoint="echo",
            command="hey",
            detach=True,
            stdout=True,
            stderr=True,
            tty=False,
            labels={
                "com.joyrex2001.kubedock.deploy-as-job": "true"
            }
        )
        container.wait(timeout=_DOCKER_CLIENT_TIMEOUT)
    
        print(container.logs(stdout=True, stderr=True, tail=100))
    

    I can see the job starting and running successfully, however container.wait(timeout=_DOCKER_CLIENT_TIMEOUT) times out even though the pod has finished.

    opened by dpp23 1
  • `OneShotStartupCheckStrategy` always returns `StartupStatus.NOT_YET_KNOWN`

    OneShotStartupCheckStrategy in testcontainers checks whether finishedAt is not equal to DOCKER_TIMESTAMP_ZERO ("0001-01-01T00:00:00Z"). Kubedock always returns DOCKER_TIMESTAMP_ZERO for finished containers.

    opened by muhimu 0
  • Kubedock does not work with recent testcontainers-java kafka (1.16.+)

    Hi,

    First of all: thanks for making and maintaining this repo. Really useful! I have played around with this repo, and especially with Kafka testcontainers on OpenShift with Tekton. I found out that your example works nicely on OpenShift, but my project failed.

    This is mainly because your examples use version 1.15.3 while my project was using 1.16.3. There have been some changes around the dynamic updating of the Kafka config.

    With version 1.16.3 the args of the deployed containers look like:

    args:
            - sh
            - '-c'
            - |
              #!/bin/bash
              echo 'clientPort=2181' > zookeeper.properties
              echo 'dataDir=/var/lib/zookeeper/data' >> zookeeper.properties
              echo 'dataLogDir=/var/lib/zookeeper/log' >> zookeeper.properties
              zookeeper-server-start zookeeper.properties &
              echo '' > /etc/confluent/docker/ensure 
              /etc/confluent/docker/run 
    

    while 1.15.3 creates:

    args:
            - sh
            - '-c'
            - >-
              while [ ! -f /testcontainers_start.sh ]; do sleep 0.1; done;
              /testcontainers_start.sh
    
    kafka openshift 
    opened by adis-me 4
  • 404 when starting a container

    Running a container built from the following Dockerfile:

                FROM alpine
                RUN echo \$RANDOM >> /tmp/test.txt
                CMD cat /tmp/test.txt && echo "DONE" && sleep 28800
    

    I get a 404 when calling container.start. When I look at the pods inside k8s, everything looks good, so I think this is a bug inside kubedock.

    In the logs I get the following, but I'm not sure if it is a red herring:

    [GIN-debug] [WARNING] Headers were already written. Wanted to override status code 404 with 204
    

    I am going to try and figure out a small runnable example to demonstrate this.

    opened by dpp23 3
  • TestContainers Kafka wrapper fails to start Confluent Kafka

    The Testcontainers Kafka module fails to start a container. The following starter example works fine on a standard Docker backend but fails with Kubedock.

    import org.junit.jupiter.api.Test;
    
    import org.testcontainers.containers.KafkaContainer;
    import org.testcontainers.junit.jupiter.Testcontainers;
    import org.testcontainers.utility.DockerImageName;
    
    @Testcontainers
    public class KafkaTest {
    
        @Test
        void testKafkaStartup() {
            KafkaContainer KAFKA_CONTAINER = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:5.5.0"))
                    .withStartupAttempts(3);
    
            KAFKA_CONTAINER.start();
            KAFKA_CONTAINER.stop();
        }
    }
    

    Running this example against a local Kubedock build from master yields:

    [GIN] 2022/02/14 - 12:33:51 | 201 |       165.8µs |       127.0.0.1 | POST     "/containers/create"
    [GIN] 2022/02/14 - 12:33:53 | 204 |    2.4264976s |       127.0.0.1 | POST     "/containers/44e0da1c4a4ff1260dfe20404f1a9f916c88ab78dcf1f0d0204a51fc228c0cde/start"
    I0214 12:33:53.722604   27595 portforward.go:42] start port-forward 37943->9093
    I0214 12:33:53.722665   27595 portforward.go:42] start port-forward 52796->2181
    [GIN] 2022/02/14 - 12:33:53 | 200 |        86.8µs |       127.0.0.1 | GET      "/containers/44e0da1c4a4ff1260dfe20404f1a9f916c88ab78dcf1f0d0204a51fc228c0cde/json"
    I0214 12:33:53.833324   27595 copy.go:36] copy 2048 bytes to 44e0da1c4a4f:/
    E0214 12:33:58.908660   27595 v2.go:168] io: read/write on closed pipe
    E0214 12:33:58.914197   27595 util.go:18] error during request[500]: command terminated with exit code 2
    [GIN] 2022/02/14 - 12:33:58 | 500 |    5.1302852s |       127.0.0.1 | PUT      "/containers/44e0da1c4a4ff1260dfe20404f1a9f916c88ab78dcf1f0d0204a51fc228c0cde/archive?noOverwriteDirNonDir=false&path=%2F"
    [GIN] 2022/02/14 - 12:33:59 | 200 |    102.6893ms |       127.0.0.1 | GET      "/containers/44e0da1c4a4ff1260dfe20404f1a9f916c88ab78dcf1f0d0204a51fc228c0cde/logs?stdout=true&stderr=true&since=0"
    [GIN] 2022/02/14 - 12:33:59 | 201 |       144.7µs |       127.0.0.1 | POST     "/containers/create"
    [GIN] 2022/02/14 - 12:34:01 | 204 |     2.379796s |       127.0.0.1 | POST     "/containers/86abe9fd787d0b557bde2496a269a708f03a9a368b05989116d6cfad93ca41e3/start"
    I0214 12:34:01.436944   27595 portforward.go:42] start port-forward 38007->2181
    I0214 12:34:01.437094   27595 portforward.go:42] start port-forward 52166->9093
    [GIN] 2022/02/14 - 12:34:01 | 200 |       175.1µs |       127.0.0.1 | GET      "/containers/86abe9fd787d0b557bde2496a269a708f03a9a368b05989116d6cfad93ca41e3/json"
    I0214 12:34:01.491702   27595 copy.go:36] copy 2048 bytes to 86abe9fd787d:/
    E0214 12:34:01.989394   27595 v2.go:168] io: read/write on closed pipe
    E0214 12:34:01.990213   27595 util.go:18] error during request[500]: command terminated with exit code 2
    [GIN] 2022/02/14 - 12:34:01 | 500 |    547.6007ms |       127.0.0.1 | PUT      "/containers/86abe9fd787d0b557bde2496a269a708f03a9a368b05989116d6cfad93ca41e3/archive?noOverwriteDirNonDir=false&path=%2F"
    [GIN] 2022/02/14 - 12:34:02 | 200 |    103.6247ms |       127.0.0.1 | GET      "/containers/86abe9fd787d0b557bde2496a269a708f03a9a368b05989116d6cfad93ca41e3/logs?stdout=true&stderr=true&since=0"
    [GIN] 2022/02/14 - 12:34:02 | 201 |       183.3µs |       127.0.0.1 | POST     "/containers/create"
    [GIN] 2022/02/14 - 12:34:04 | 204 |    2.4081256s |       127.0.0.1 | POST     "/containers/479e20e215dd43c14171afb588470b239a31c475f8ac44670d54b95dad71c6fd/start"
    I0214 12:34:04.518805   27595 portforward.go:42] start port-forward 44148->9093
    I0214 12:34:04.518965   27595 portforward.go:42] start port-forward 47341->2181
    [GIN] 2022/02/14 - 12:34:04 | 200 |       207.4µs |       127.0.0.1 | GET      "/containers/479e20e215dd43c14171afb588470b239a31c475f8ac44670d54b95dad71c6fd/json"
    I0214 12:34:04.573697   27595 copy.go:36] copy 2048 bytes to 479e20e215dd:/
    E0214 12:34:05.047982   27595 v2.go:168] io: read/write on closed pipe
    E0214 12:34:05.052508   27595 util.go:18] error during request[500]: command terminated with exit code 2
    [GIN] 2022/02/14 - 12:34:05 | 500 |    526.6571ms |       127.0.0.1 | PUT      "/containers/479e20e215dd43c14171afb588470b239a31c475f8ac44670d54b95dad71c6fd/archive?noOverwriteDirNonDir=false&path=%2F"
    [GIN] 2022/02/14 - 12:34:05 | 200 |    101.7087ms |       127.0.0.1 | GET      "/containers/479e20e215dd43c14171afb588470b239a31c475f8ac44670d54b95dad71c6fd/logs?stdout=true&stderr=true&since=0"
    

    I've looked at this off and on over the last few weeks and was hopeful that perhaps the fix for #6 would resolve this as well.

    kafka 
    opened by yaraskm 3
Releases(0.8.2)
Owner
Vincent van Dam