In ur kubernetes, buildin ur imagez

Overview

kim - The Kubernetes Image Manager

STATUS: EXPERIMENT - Let us know what you think

This project is a continuation of the experiment started with k3c. However, unlike the original aim/design for k3c, it IS NOT meant to be a replacement or rebuild of containerd/CRI.

kim is a Kubernetes-aware CLI that installs a small builder backend: a BuildKit daemon bound to the kubelet's underlying containerd socket (for building images), plus a small server-side agent that the CLI uses for image management (push, pull, tag, etc.) rather than talking to the backing containerd/CRI directly. kim enables building images locally and natively on your k3s cluster.

A familiar UX

There really is nothing better than the classic Docker UX of build/push/pull/tag, so this tool copies that UX (think Docker v1.12). The intention is to follow the same style without being a 100% drop-in replacement: behavior and arguments have been changed where needed to better match the Kubernetes ecosystem.
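
For example, a day-to-day session looks much like it would with the Docker CLI; the image name below is just a placeholder, and every command used here appears in the help output further down:

# build an image from the Dockerfile in the current directory
kim build --tag your/image:tag .
# list, re-tag, and push it
kim images
kim tag your/image:tag your/image:v2
kim push your/image:v2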

A single binary

kim, like k3s and old-school Docker, is packaged as a single binary, because nothing is easier to distribute than a static binary.

Built on Kubernetes Tech (and others)

Fundamentally, kim is built on the Container Runtime Interface (CRI), containerd, and BuildKit.

Architecture

kim enables building k3s-local images by installing a DaemonSet whose pod runs both buildkitd and the kim agent, and by exposing the gRPC endpoints of these agents via a Service in your cluster. Once installed, the kim CLI can inspect your installation and communicate with the backend daemons for image building and manipulation using nothing more than the KUBECONFIG that was available when invoking kim install. When building, kim talks directly to the BuildKit service, but all other interactions with the underlying containerd/CRI are mediated by the kim agent (primarily because the containerd "smart client" code assumes a certain level of co-locality with the containerd installation).
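
To see what an installation creates, note that the backend components are ordinary Kubernetes objects; the object and namespace names below match the kubectl output quoted in the comments further down, and your output may differ:

# the builder DaemonSet and its Service live in the "kube-image" namespace by default
kubectl get daemonsets,pods,services -n kube-image
# the Service exposes the buildkitd gRPC port (1234) and the kim agent port (1233)
kubectl describe service builder -n kube-image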

Building

# more to come on this front but builds are currently a very manual affair
# git clone --branch=trunk https://github.com/rancher/kim.git ~/Projects/rancher/kim
# cd ~/Projects/rancher/kim
go generate # only necessary when modifying the gRPC protobuf IDL, see Dockerfile for pre-reqs
make ORG=<your-dockerhub-org> build publish
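
If you publish to your own registry org, you can point the installer at the resulting agent image. This is an untested sketch based on the --agent-image flag listed in the kim builder install help output quoted in the comments below (the tag is inferred from the version when omitted):

./bin/kim builder install --agent-image docker.io/<your-dockerhub-org>/kim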

Running

Have a working k3s installation with a valid $HOME/.kube/config or $KUBECONFIG, then:

# Installation on a single-node cluster is automatic
# Installation on a multi-node cluster, targeting a Node named "my-builder-node"
./bin/kim install --selector k3s.io/hostname=my-builder-node

kim currently works against a single builder Node, so you must specify a narrow selector when installing on multi-node clusters. Upon successful installation, the selected node acquires the "builder" role.
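
To confirm which node was selected, you can query for the node-role label that the installer applies (the label is visible in the DaemonSet pod specs quoted in the comments below):

# list nodes carrying the builder role applied by kim install
kubectl get nodes --selector node-role.kubernetes.io/builder=true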

Build images like you would with the Docker CLI:

$ ./bin/kim --help
Kubernetes Image Manager -- in ur kubernetes buildin ur imagez

Usage:
  kim [OPTIONS] COMMAND
  kim [command]

Examples:
  kim image build --tag your/image:tag .

Available Commands:
  help        Help about any command
  image       Manage Images
  system      Manage KIM

Images Shortcuts:
  build       Build an image
  images      List images
  pull        Pull an image
  push        Push an image
  rmi         Remove an image
  tag         Tag an image

Flags:
  -x, --context string      kubeconfig context for authentication
      --debug               
      --debug-level int     
  -h, --help                help for kim
  -k, --kubeconfig string   kubeconfig for authentication
  -n, --namespace string    namespace (default "kube-image")
  -v, --version             version for kim

Use "kim [command] --help" for more information about a command.

Roadmap

  • Automated builds for clients on macOS (amd64/arm64), Windows (amd64), and Linux client/server (amd64/arm64/arm).

License

Copyright (c) 2020-2021 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Comments
  • Error while dialing dial tcp 10.0.10.36:1234: i/o timeout

    Hi,

    I tried to build an image with kim on our RKE cluster.

    Installing the kim builder was successful and the pod is running.

    ./kim builder install --selector k3s.io/hostname=my-builder-node --containerd-socket=/run/containerd/containerd.sock
    

    When I try to run a build I get the following error:

     ./kim build --tag dirien/busybox .  
    [+] Building 0.0s (0/0)                                                                                                                                                                          
    Error: failed to get status: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 10.0.10.36:1234: i/o timeout"
    Usage:
      kim build [OPTIONS] PATH
    
    Aliases:
      build, image build
    
    

    And I cannot figure out what the problem could be... Any hints from your side?

    Thanks a lot.

    opened by dirien 7
  • Unable to mount volumes var-lib-rancher, etc-ssl, ...

    I have just installed the kim client on my linux NUC and ran ./kim builder install --selector k3s.io/hostname=<rpi-hostname> against my k3s RPI cluster.

    The builder pod is getting stuck in the ContainerCreating state as you can see from kubectl get pods,services -n kube-image

    NAME                READY   STATUS              RESTARTS   AGE
    pod/builder-7vl46   0/2     ContainerCreating   0          5m11s
    NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
    service/builder   NodePort   10.43.233.104   <none>        1234:32016/TCP,1233:30724/TCP   5m11s
    

    The details from running kubectl describe pods -n kube-image are as follows:

    ...snip...
    Events:
      Type     Reason       Age                   From               Message
      ----     ------       ----                  ----               -------
      Normal   Scheduled    7m9s                  default-scheduler  Successfully assigned kube-image/builder-7vl46 to rpi-107
      Warning  FailedMount  5m7s                  kubelet            Unable to attach or mount volumes: unmounted volumes=[var-lib-buildkit], unattached volumes=[var-lib-rancher etc-ssl tls-ca tls-server tmp run etc-pki cgroup var-lib-buildkit default-token-fnnlk]: timed out waiting for the condition
      Warning  FailedMount  2m50s                 kubelet            Unable to attach or mount volumes: unmounted volumes=[var-lib-buildkit], unattached volumes=[var-lib-rancher etc-pki run var-lib-buildkit tls-server default-token-fnnlk tmp etc-ssl cgroup tls-ca]: timed out waiting for the condition
      Warning  FailedMount  58s (x11 over 7m10s)  kubelet            MountVolume.SetUp failed for volume "var-lib-buildkit" : hostPath type check failed: /var/lib/buildkit is not a directory
      Warning  FailedMount  35s                   kubelet            Unable to attach or mount volumes: unmounted volumes=[var-lib-buildkit], unattached volumes=[run etc-pki cgroup var-lib-buildkit var-lib-rancher default-token-fnnlk tmp tls-server etc-ssl tls-ca]: timed out waiting for the condition
    

    @dweomer mentioned that this is likely "an oversight in my daemonset spec".

    bug 
    opened by toriaezunama 4
  • kim doesn't seem to support multiple files in KUBECONFIG

    Hello, I'm encountering an error where kim doesn't work with multiple kubeconfigs. My kubeconfig setup has a single kubeconfig per cluster I access. Kubectl supports this natively, using a colon : to parse the configs in order, very similar to the workings of the PATH variable. This is a breaking issue for me, and I'm unable to use kim to run commands.

     ajones@ajones  ~  kim pull alpine
    Error: stat /Users/ajones/.kube/config:/Users/ajones/code/k8s-bootstrapping/environments/dev/kubeconfig_agari-dev-k8s:/Users/ajones/code/k8s-bootstrapping/environments/stage/kubeconfig_agari-stage-k8s:/Users/ajones/code/k8s-bootstrapping/environments/prod/kubeconfig_agari-prod-k8s:/Users/ajones/code/k8s-bootstrapping/environments/euc1-prod/kubeconfig_agari-euc1-prod-k8s:/Users/ajones/code/k8s-bootstrapping/environments/ops/kubeconfig_agari-ops-k8s:/Users/ajones/code/k8s-bootstrapping/environments/sensors/kubeconfig_agari-sensors-k8s: no such file or directory
    
    opened by AlexMichaelJonesNC 2
  • `kim pull` doesn't work on macos, against a minikube running k3s

    $ kim --version
    kim version v0.1.0-alpha.9 (2778cc93265a202c65d5002cdddbba33b5116970)
    
    $ minikube version
    minikube version: v1.17.0
    commit: 9e7f03395052ddaa971eb5195287f13230004226-dirty
    
    $ kubectl version
    Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2+k3s1", GitCommit:"d38505b124c92bffd45f6e0654adb9371cae9610", GitTreeState:"clean", BuildDate:"2020-09-21T17:00:07Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
    
    $ kim pull alpine
    
    Error: rpc error: code = Unknown desc = failed to resolve reference "docker.io/library/alpine": object required
    Usage:
      kim pull [OPTIONS] IMAGE
    
    bug 
    opened by ericpromislow 2
  • kim with colon separated KUBECONFIG env var

    As documented here:

    https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable

    My KUBECONFIG env var points to three files, like this:

    $ echo $KUBECONFIG
    /Users/mdsh/.kube/config:/Users/mdsh/.kube/config.d/home/config:/Users/mdsh/.kube/config.d/work/config 
    

    This works as expected with kubectl, but kim fails, like this:

    $ kim image ls
    Error: stat /Users/mdsh/.kube/config:/Users/mdsh/.kube/config.d/home/config:/Users/mdsh/.kube/config.d/work/config: no such file or directory
    Usage:
      kim image ls [OPTIONS] [REPOSITORY[:TAG]]
    
    Aliases:
      ls, list
    
    Flags:
      -a, --all        Show all images (default hides tag-less images)
          --digests    Show digests
      -h, --help       help for ls
          --no-trunc   Don't truncate output
      -q, --quiet      Only show image IDs
    
    Global Flags:
      -x, --context string      kubeconfig context for authentication
          --debug               
          --debug-level int     
      -k, --kubeconfig string   kubeconfig for authentication
      -n, --namespace string    namespace (default "kube-image")
    
    FATA[0000] stat /Users/mdsh/.kube/config:/Users/mdsh/.kube/config.d/home/config:/Users/mdsh/.kube/config.d/work/config: no such file or directory 
    

    I have to use the -k parameter to point to a specific config file - which is less than ideal.

    $ kim image ls -k /Users/mdsh/.kube/config
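
    A possible workaround until multi-file KUBECONFIG is supported (an untested suggestion, assuming kubectl is on the PATH): flatten the colon-separated configs into a single file and pass that file via -k:

    $ kubectl config view --flatten > ~/.kube/merged-config
    $ kim -k ~/.kube/merged-config image ls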
    
    opened by himslm01 1
  • Can't push after running `docker login`

    $ docker login 
    …
    $ kim push morspin/whoami:v50
    
    Error: rpc error: code = Unknown desc = server message: insufficient_scope: authorization failed
    ...
    

    But I can use docker push to push images.

    bug 
    opened by ericpromislow 1
  • Error when installing in RKE:  FATA[0000] container runtime `docker` not supported

    Rancher v2.5.9
    RKE Cluster k8s version v1.20.8
    
    kubectl version --short
    
    Client Version: v1.21.3
    Server Version: v1.20.8
    
    ❯ kim image ls
    
    WARN[0000] Cannot find available builder daemon, attempting automatic installation...
    WARN[0001] Too many nodes, please specify a selector, e.g. kubernetes.io/hostname=nj-prd-k8s-master11.mydomain.com
    Error: services "builder" not found
    Usage:
      kim image ls [OPTIONS] [REPOSITORY[:TAG]]
    
    Aliases:
      ls, list
    
    Flags:
      -a, --all        Show all images (default hides tag-less images)
          --digests    Show digests
      -h, --help       help for ls
          --no-trunc   Don't truncate output
      -q, --quiet      Only show image IDs
    
    Global Flags:
      -x, --context string      kubeconfig context for authentication
          --debug
          --debug-level int
      -k, --kubeconfig string   kubeconfig for authentication
      -n, --namespace string    namespace (default "kube-image")
    
    FATA[0001] services "builder" not found
    
    ❯ kim builder install --selector kubernetes.io/hostname=nj-prd-k8s-master11.mydomain.com
    
    INFO[0000] Applying node-role `builder` to `nj-prd-k8s-master11.mydomain.com`
    Error: container runtime `docker` not supported
    Usage:
      kim builder install [OPTIONS]
    
    Flags:
          --agent-image string         Image to run the agent w/ missing tag inferred from version
          --agent-port int             Port that the agent will listen on (default 1233)
          --buildkit-image string      BuildKit image for running buildkitd (default "docker.io/moby/buildkit:v0.8.3")
          --buildkit-port int          BuildKit service port (default 1234)
          --buildkit-socket string     BuildKit socket address (default "unix:///run/buildkit/buildkitd.sock")
          --containerd-socket string   Containerd socket address (default on k3s "/run/k3s/containerd/containerd.sock")
          --containerd-volume string   Containerd storage volume (default on k3s "/var/lib/rancher")
          --force                      Force installation by deleting existing builder
      -h, --help                       help for install
          --no-fail                    Do not fail if backend components are already installed
          --no-wait                    Do not wait for backend to become available
          --selector string            Selector for nodes (label query) to apply builder role
    
    Global Flags:
      -x, --context string      kubeconfig context for authentication
          --debug
          --debug-level int
      -k, --kubeconfig string   kubeconfig for authentication
      -n, --namespace string    namespace (default "kube-image")
    
    FATA[0000] container runtime `docker` not supported
    
    opened by haim-ari 1
  • Index out of range panic

    While using KIM inside Rancher Desktop I encountered the following panic. I've not traced the issue, yet, but wanted to report it. We use KIM as part of our setup process to make sure the builder is present.

    time="2021-06-07T16:52:40-04:00" level=warning msg="Cannot find available builder daemon, attempting automatic installation..."
    panic: runtime error: index out of range [0] with length 0
    
    goroutine 1 [running]:
    github.com/rancher/kim/pkg/client/builder.(*Install).NodeRole(0xc0003aa180, 0x2c784b0, 0xc0000e9b00, 0xc000339310, 0xc0000e9b00, 0xc0001d66a0)
    	/drone/src/pkg/client/builder/install.go:527 +0x416
    github.com/rancher/kim/pkg/client/builder.(*Install).Do(0xc0003aa180, 0x2c78440, 0xc0000e9b00, 0xc000339310, 0x0, 0x0)
    	/drone/src/pkg/client/builder/install.go:49 +0xcc
    github.com/rancher/kim/pkg/cli/command/image.(*CommandSpec).PersistentPre(0x3918788, 0xc0008518c0, 0xc0003cefd0, 0x0, 0x1, 0x0, 0x0)
    	/drone/src/pkg/cli/command/image/image.go:65 +0x1bb
    github.com/rancher/wrangler-cli.bind.func1(0xc0008518c0, 0xc0003cefd0, 0x0, 0x1, 0x0, 0x0)
    	/go/pkg/mod/github.com/rancher/[email protected]/builder.go:270 +0x15d
    github.com/rancher/kim/pkg/cli.AddShortcut.func1(0xc0008518c0, 0xc0003cefd0, 0x0, 0x1, 0x0, 0x0)
    	/drone/src/pkg/cli/cli.go:124 +0x57
    github.com/spf13/cobra.(*Command).execute(0xc0008518c0, 0xc0003cefc0, 0x1, 0x1, 0xc0008518c0, 0xc0003cefc0)
    	/go/pkg/mod/github.com/spf13/[email protected]/command.go:829 +0x582
    github.com/spf13/cobra.(*Command).ExecuteC(0xc000449340, 0xc00012a010, 0x2c78440, 0xc000348200)
    	/go/pkg/mod/github.com/spf13/[email protected]/command.go:958 +0x375
    github.com/spf13/cobra.(*Command).Execute(...)
    	/go/pkg/mod/github.com/spf13/[email protected]/command.go:895
    github.com/spf13/cobra.(*Command).ExecuteContext(...)
    	/go/pkg/mod/github.com/spf13/[email protected]/command.go:888
    github.com/rancher/wrangler-cli.Main(0xc000449340)
    	/go/pkg/mod/github.com/rancher/[email protected]/builder.go:73 +0x6d
    main.main()
    	/drone/src/main.go:29 +0xd4
    
    bug 
    opened by mattfarina 1
  • Support/document kind clusters

    I tried this out on https://kind.sigs.k8s.io/.

    I had to pass --containerd-socket=/run/containerd/containerd.sock on install. I then had to modify the DaemonSet to mount /var/lib/containerd. Once I did that, things worked great.
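
    Based on the flags shown in the kim builder install help output quoted in an earlier comment, the same result can presumably be achieved without hand-editing the DaemonSet (untested):

    ./kim builder install --containerd-socket=/run/containerd/containerd.sock --containerd-volume=/var/lib/containerd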

    opened by howardjohn 1
  • main: fix kubectl-dropin on windows

    Allow for the case, on Windows, where the zeroth command-line argument has an .exe suffix when making the multi-call disambiguation.

    Addresses #36

    Signed-off-by: Jacob Blain Christen [email protected]

    opened by dweomer 1
  • support for running kim in a k3s container

    With kim binding to the containerd it is running under, buildkit requires some bidirectional mounting under /tmp, /var/lib/buildkit, and /var/lib/rancher (because the containerd persistent root lives under there). Because these are bind mounts from the "host", we know where they are located on disk, and we leverage that in some init containers that attempt to nsenter into the host pid+mount namespaces to make these locations shared. The mount --make-rshared attempts can fail silently without preventing kim from working as expected, so long as the actual locations on disk are under shared/rshared mountpoints.

    Signed-off-by: Jacob Blain Christen [email protected]

    opened by dweomer 1
  • failed to generate spec: path "/tmp" is mounted on "/" but it is not a shared mount

    Error: failed to generate container "73c7a50781eaf5de74ab1f95568a7bc7e26016fd4a06aa28bf8ea2a79be3f9dd" spec: failed to generate spec: path "/tmp" is mounted on "/" but it is not a shared mount
    

    Expand below to see more.

    kubectl describe pods -n kube-image builder-btls
    Name:         builder-btlsg
    Namespace:    kube-image
    Priority:     0
    Node:         k3d-kim-server-0/172.22.0.2
    Start Time:   Sun, 27 Mar 2022 10:49:58 +1100
    Labels:       app=kim
                  app.kubernetes.io/component=builder
                  app.kubernetes.io/managed-by=kim
                  app.kubernetes.io/name=kim
                  component=builder
                  controller-revision-hash=7bb6779b98
                  pod-template-generation=1
    Annotations:  <none>
    Status:       Pending
    IP:           172.22.0.2
    IPs:
      IP:           172.22.0.2
    Controlled By:  DaemonSet/builder
    Init Containers:
      rshared-tmp:
        Container ID:  containerd://949bd0c0307b7e9bd307fe6fdc154baac68c2807843aef74914294af5c622087
        Image:         docker.io/moby/buildkit:v0.8.3
        Image ID:      docker.io/moby/buildkit@sha256:171689e43026533b48701ab6566b72659dd1839488d715c73ef3fe387fab9a80
        Port:          <none>
        Host Port:     <none>
        Command:
          sh
          -c
        Args:
          (if mountpoint $_DIR; then set -x; nsenter -m -p -t 1 -- env PATH=$_PATH sh -c 'mount --make-rshared $_DIR'; fi) || true
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Sun, 27 Mar 2022 10:50:16 +1100
          Finished:     Sun, 27 Mar 2022 10:50:16 +1100
        Ready:          True
        Restart Count:  0
        Environment:
          _DIR:   /tmp
          _PATH:  /usr/sbin:/usr/bin:/sbin:/bin:/bin/aux
        Mounts:
          /tmp from host-tmp (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f4cz4 (ro)
      rshared-buildkit:
        Container ID:  containerd://d61fd8440158f9e282cbcd0fcf77fa2c24e1e5826c3644e97b7bbe6cf82eb944
        Image:         docker.io/moby/buildkit:v0.8.3
        Image ID:      docker.io/moby/buildkit@sha256:171689e43026533b48701ab6566b72659dd1839488d715c73ef3fe387fab9a80
        Port:          <none>
        Host Port:     <none>
        Command:
          sh
          -c
        Args:
          (if mountpoint $_DIR; then set -x; nsenter -m -p -t 1 -- env PATH=$_PATH sh -c 'mount --make-rshared $_DIR'; fi) || true
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Sun, 27 Mar 2022 10:50:16 +1100
          Finished:     Sun, 27 Mar 2022 10:50:16 +1100
        Ready:          True
        Restart Count:  0
        Environment:
          _DIR:   /var/lib/buildkit
          _PATH:  /usr/sbin:/usr/bin:/sbin:/bin:/bin/aux
        Mounts:
          /var/lib/buildkit from host-var-lib-buildkit (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f4cz4 (ro)
      rshared-containerd:
        Container ID:  containerd://49825e4b0231bfbd98f166208902e31a07c80f47ee2b754a17f3c9cdcef93a5c
        Image:         docker.io/moby/buildkit:v0.8.3
        Image ID:      docker.io/moby/buildkit@sha256:171689e43026533b48701ab6566b72659dd1839488d715c73ef3fe387fab9a80
        Port:          <none>
        Host Port:     <none>
        Command:
          sh
          -c
        Args:
          (if mountpoint $_DIR; then set -x; nsenter -m -p -t 1 -- env PATH=$_PATH sh -c 'mount --make-rshared $_DIR'; fi) || true
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Sun, 27 Mar 2022 10:50:17 +1100
          Finished:     Sun, 27 Mar 2022 10:50:17 +1100
        Ready:          True
        Restart Count:  0
        Environment:
          _DIR:   /var/lib/rancher
          _PATH:  /usr/sbin:/usr/bin:/sbin:/bin:/bin/aux
        Mounts:
          /var/lib/rancher from host-containerd (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f4cz4 (ro)
    Containers:
      buildkit:
        Container ID:  
        Image:         docker.io/moby/buildkit:v0.8.3
        Image ID:      
        Port:          1234/TCP
        Host Port:     1234/TCP
        Args:
          --addr=unix:///run/buildkit/buildkitd.sock
          --addr=tcp://0.0.0.0:1234
          --containerd-worker=true
          --containerd-worker-addr=/run/k3s/containerd/containerd.sock
          --containerd-worker-gc
          --oci-worker=false
          --tlscacert=/certs/ca/tls.crt
          --tlscert=/certs/server/tls.crt
          --tlskey=/certs/server/tls.key
        State:          Waiting
          Reason:       CreateContainerError
        Ready:          False
        Restart Count:  0
        Liveness:       exec [buildctl debug workers] delay=5s timeout=1s period=20s #success=1 #failure=3
        Readiness:      exec [buildctl debug workers] delay=5s timeout=1s period=20s #success=1 #failure=3
        Environment:    <none>
        Mounts:
          /certs/ca from certs-ca (ro)
          /certs/server from certs-server (ro)
          /run from host-run (rw)
          /sys/fs/cgroup from host-ctl (rw)
          /tmp from host-tmp (rw)
          /var/lib/buildkit from host-var-lib-buildkit (rw)
          /var/lib/rancher from host-containerd (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f4cz4 (ro)
      agent:
        Container ID:  
        Image:         rancher/kim:v0.1.0-beta.7
        Image ID:      
        Port:          1233/TCP
        Host Port:     1233/TCP
        Command:
          kim
          --debug
          agent
        Args:
          --agent-port=1233
          --buildkit-socket=unix:///run/buildkit/buildkitd.sock
          --buildkit-port=1234
          --containerd-socket=/run/k3s/containerd/containerd.sock
          --tlscacert=/certs/ca/tls.crt
          --tlscert=/certs/server/tls.crt
          --tlskey=/certs/server/tls.key
        State:          Waiting
          Reason:       CreateContainerError
        Ready:          False
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /certs/ca from certs-ca (ro)
          /certs/server from certs-server (ro)
          /etc/pki from host-etc-pki (ro)
          /etc/ssl from host-etc-ssl (ro)
          /run from host-run (rw)
          /sys/fs/cgroup from host-ctl (rw)
          /var/lib/buildkit from host-var-lib-buildkit (rw)
          /var/lib/rancher from host-containerd (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f4cz4 (ro)
    Conditions:
      Type              Status
      Initialized       True 
      Ready             False 
      ContainersReady   False 
      PodScheduled      True 
    Volumes:
      host-ctl:
        Type:          HostPath (bare host directory volume)
        Path:          /sys/fs/cgroup
        HostPathType:  Directory
      host-etc-pki:
        Type:          HostPath (bare host directory volume)
        Path:          /etc/pki
        HostPathType:  DirectoryOrCreate
      host-etc-ssl:
        Type:          HostPath (bare host directory volume)
        Path:          /etc/ssl
        HostPathType:  DirectoryOrCreate
      host-run:
        Type:          HostPath (bare host directory volume)
        Path:          /run
        HostPathType:  Directory
      host-tmp:
        Type:          HostPath (bare host directory volume)
        Path:          /tmp
        HostPathType:  Directory
      host-var-lib-buildkit:
        Type:          HostPath (bare host directory volume)
        Path:          /var/lib/buildkit
        HostPathType:  DirectoryOrCreate
      host-containerd:
        Type:          HostPath (bare host directory volume)
        Path:          /var/lib/rancher
        HostPathType:  DirectoryOrCreate
      certs-ca:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  kim-tls-ca
        Optional:    false
      certs-server:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  kim-tls-server
        Optional:    false
      kube-api-access-f4cz4:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   BestEffort
    Node-Selectors:              node-role.kubernetes.io/builder=true
    Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                                 node.kubernetes.io/not-ready:NoExecute op=Exists
                                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                                 node.kubernetes.io/unreachable:NoExecute op=Exists
                                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
    Events:
      Type     Reason     Age                    From               Message
      ----     ------     ----                   ----               -------
      Normal   Scheduled  3m36s                  default-scheduler  Successfully assigned kube-image/builder-btlsg to k3d-kim-server-0
      Normal   Pulling    3m37s                  kubelet            Pulling image "docker.io/moby/buildkit:v0.8.3"
      Normal   Pulled     3m20s                  kubelet            Successfully pulled image "docker.io/moby/buildkit:v0.8.3" in 16.8479912s
      Normal   Created    3m19s                  kubelet            Created container rshared-buildkit
      Normal   Created    3m19s                  kubelet            Created container rshared-tmp
      Normal   Started    3m19s                  kubelet            Started container rshared-tmp
      Normal   Started    3m19s                  kubelet            Started container rshared-buildkit
      Normal   Pulled     3m19s                  kubelet            Container image "docker.io/moby/buildkit:v0.8.3" already present on machine
      Normal   Pulled     3m18s                  kubelet            Container image "docker.io/moby/buildkit:v0.8.3" already present on machine
      Normal   Created    3m18s                  kubelet            Created container rshared-containerd
      Normal   Started    3m18s                  kubelet            Started container rshared-containerd
      Normal   Pulling    3m17s                  kubelet            Pulling image "rancher/kim:v0.1.0-beta.7"
      Warning  Failed     3m17s                  kubelet            Error: failed to generate container "73c7a50781eaf5de74ab1f95568a7bc7e26016fd4a06aa28bf8ea2a79be3f9dd" spec: failed to generate spec: path "/tmp" is mounted on "/" but it is not a shared mount
      Normal   Pulled     3m6s                   kubelet            Successfully pulled image "rancher/kim:v0.1.0-beta.7" in 10.3655793s
      Warning  Failed     3m6s                   kubelet            Error: failed to generate container "c7913044e35bbb8ef948e2bd17848cb308888cb4dbeddc97d08bfad073d08853" spec: failed to generate spec: path "/var/lib/buildkit" is mounted on "/" but it is not a shared mount
      Warning  Failed     3m6s                   kubelet            Error: failed to generate container "4f9180c6190fedcd9601131d41b1ca48160330bbef6b40ca9b0fd2cbd0bae24c" spec: failed to generate spec: path "/tmp" is mounted on "/" but it is not a shared mount
      Warning  Failed     3m6s                   kubelet            Error: failed to generate container "68a8bf3b5550081f11d07d1c5e614eb5024f65633d60b7a4d78913c15f10d091" spec: failed to generate spec: path "/var/lib/buildkit" is mounted on "/" but it is not a shared mount
      Warning  Failed     2m55s                  kubelet            Error: failed to generate container "356515b2435fc6ec28c8f3e7be405fc22a64d10b6b423ea17342e6cf30c7b823" spec: failed to generate spec: path "/tmp" is mounted on "/" but it is not a shared mount
      Normal   Pulled     2m55s (x2 over 3m6s)   kubelet            Container image "rancher/kim:v0.1.0-beta.7" already present on machine
      Warning  Failed     2m55s                  kubelet            Error: failed to generate container "0031efc0dc317b721e73d3b500b5b006730b16d0f354538cf5c7d728daafd802" spec: failed to generate spec: path "/var/lib/buildkit" is mounted on "/" but it is not a shared mount
      Normal   Pulled     2m43s (x4 over 3m17s)  kubelet            Container image "docker.io/moby/buildkit:v0.8.3" already present on machine
      Warning  Failed     2m43s                  kubelet            Error: failed to generate container "4f60fc92c2b125fba4279390bfcd6a6255e4e0d8faec90838d49013b2c52b04a" spec: failed to generate spec: path "/tmp" is mounted on "/" but it is not a shared mount
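
    Since the init containers only run mount --make-rshared when the target directory is already a mountpoint, one possible (untested) workaround on k3d is to make the node's root mount shared before installing, using the same command the init containers use; the k3d node is just a Docker container, with the name taken from the pod description above:

    docker exec k3d-kim-server-0 mount --make-rshared /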
    
    opened by tekumara 6
  • [WSL2] Kim builder install fails: CrashLoopBackOff

    I'm running:

    • Windows 11 with WSL2
    • Rancher Desktop 1.0.1 with Kubernetes v1.23.3, on containerd
    • Ubuntu 20.04 in WSL2 as a client/ui
    • Rancher Desktop WSL integration
    • Installed Arkade, installed Kim via arkade get kim on Ubuntu
    • Tried running kim builder install on Ubuntu

    Result:

    INFO[0000] Applying node-role `builder` to `myhostname-redacted`
    INFO[0000] Asserting namespace `kube-image`
    INFO[0000] Asserting TLS secrets
    INFO[0000] Asserting service/endpoints
    INFO[0000] Installing builder daemon
    INFO[0000] Waiting on builder daemon availability...
    INFO[0006] Waiting on builder daemon availability...
    INFO[0013] Waiting on builder daemon availability...
    INFO[0018] Waiting on builder daemon availability...
    INFO[0024] Waiting on builder daemon availability...
    INFO[0030] Waiting on builder daemon availability...
    INFO[0036] Waiting on builder daemon availability...
    INFO[0041] Waiting on builder daemon availability...
    INFO[0047] Waiting on builder daemon availability...
    INFO[0052] Waiting on builder daemon availability...
    INFO[0059] Waiting on builder daemon availability...
    INFO[0065] Waiting on builder daemon availability...
    INFO[0070] Waiting on builder daemon availability...
    INFO[0075] Waiting on builder daemon availability...
    INFO[0081] Waiting on builder daemon availability...
    Error: timeout waiting for builder to become available
    

    On the kubectl side:

    $ kubectl get pods -A
    NAMESPACE     NAME                                      READY   STATUS             RESTARTS      AGE
    kube-system   helm-install-traefik-crd-45xtb            0/1     Completed          0             28m
    kube-system   helm-install-traefik-j5hws                0/1     Completed          1             28m
    kube-system   svclb-traefik-vr9hp                       2/2     Running            2 (12m ago)   28m
    kube-system   local-path-provisioner-6c79684f77-pzzpw   1/1     Running            1 (12m ago)   28m
    kube-system   coredns-5789895cd-ngcvk                   1/1     Running            1 (12m ago)   28m
    kube-system   metrics-server-7cd5fcb6b7-jhbwm           1/1     Running            1 (12m ago)   28m
    kube-system   traefik-6bb96f9bd8-zrqtm                  1/1     Running            1 (12m ago)   28m
    kube-image    builder-4rcj8                             1/2     CrashLoopBackOff   5 (81s ago)   4m24s
    

    So let's describe the offending pod:

    $ kubectl -n kube-image describe pods builder-4rcj8
    Name:         builder-4rcj8
    Namespace:    kube-image
    Priority:     0
    Node:         myhostname-redacted/192.168.98.213
    Start Time:   Tue, 22 Feb 2022 21:53:25 +0100
    Labels:       app=kim
                  app.kubernetes.io/component=builder
                  app.kubernetes.io/managed-by=kim
                  app.kubernetes.io/name=kim
                  component=builder
                  controller-revision-hash=6df6b4765c
                  pod-template-generation=1
    Annotations:  <none>
    Status:       Running
    IP:           192.168.98.213
    IPs:
      IP:           192.168.98.213
    Controlled By:  DaemonSet/builder
    Init Containers:
      rshared-tmp:
        Container ID:  containerd://0b6b9560c261531abfaa779b3a5701f683d2b0f0b99af0c0b3d04dbd428656f6
        Image:         docker.io/moby/buildkit:v0.8.3
        Image ID:      docker.io/moby/buildkit@sha256:171689e43026533b48701ab6566b72659dd1839488d715c73ef3fe387fab9a80
        Port:          <none>
        Host Port:     <none>
        Command:
          sh
          -c
        Args:
          (if mountpoint $_DIR; then set -x; nsenter -m -p -t 1 -- env PATH=$_PATH sh -c 'mount --make-rshared $_DIR'; fi) || true
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Tue, 22 Feb 2022 21:53:31 +0100
          Finished:     Tue, 22 Feb 2022 21:53:31 +0100
        Ready:          True
        Restart Count:  0
        Environment:
          _DIR:   /tmp
          _PATH:  /usr/sbin:/usr/bin:/sbin:/bin:/bin/aux
        Mounts:
          /tmp from host-tmp (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nw66x (ro)
      rshared-buildkit:
        Container ID:  containerd://feade611fdc670eab306f9dbe44a8a34c2f2fd1f0cdbaa94a4310c6c3af748e1
        Image:         docker.io/moby/buildkit:v0.8.3
        Image ID:      docker.io/moby/buildkit@sha256:171689e43026533b48701ab6566b72659dd1839488d715c73ef3fe387fab9a80
        Port:          <none>
        Host Port:     <none>
        Command:
          sh
          -c
        Args:
          (if mountpoint $_DIR; then set -x; nsenter -m -p -t 1 -- env PATH=$_PATH sh -c 'mount --make-rshared $_DIR'; fi) || true
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Tue, 22 Feb 2022 21:53:32 +0100
          Finished:     Tue, 22 Feb 2022 21:53:32 +0100
        Ready:          True
        Restart Count:  0
        Environment:
          _DIR:   /var/lib/buildkit
          _PATH:  /usr/sbin:/usr/bin:/sbin:/bin:/bin/aux
        Mounts:
          /var/lib/buildkit from host-var-lib-buildkit (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nw66x (ro)
      rshared-containerd:
        Container ID:  containerd://e7a8850075281fe5167447e62a33d3309b2502f5a9233b4c0f5f4d61de06465f
        Image:         docker.io/moby/buildkit:v0.8.3
        Image ID:      docker.io/moby/buildkit@sha256:171689e43026533b48701ab6566b72659dd1839488d715c73ef3fe387fab9a80
        Port:          <none>
        Host Port:     <none>
        Command:
          sh
          -c
        Args:
          (if mountpoint $_DIR; then set -x; nsenter -m -p -t 1 -- env PATH=$_PATH sh -c 'mount --make-rshared $_DIR'; fi) || true
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Tue, 22 Feb 2022 21:53:33 +0100
          Finished:     Tue, 22 Feb 2022 21:53:33 +0100
        Ready:          True
        Restart Count:  0
        Environment:
          _DIR:   /var/lib/rancher
          _PATH:  /usr/sbin:/usr/bin:/sbin:/bin:/bin/aux
        Mounts:
          /var/lib/rancher from host-containerd (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nw66x (ro)
    Containers:
      buildkit:
        Container ID:  containerd://073df55342e8e3d59254525aeba6b6fca60cb0777a3f3bcd0152eace779c2c13
        Image:         docker.io/moby/buildkit:v0.8.3
        Image ID:      docker.io/moby/buildkit@sha256:171689e43026533b48701ab6566b72659dd1839488d715c73ef3fe387fab9a80
        Port:          1234/TCP
        Host Port:     1234/TCP
        Args:
          --addr=unix:///run/buildkit/buildkitd.sock
          --addr=tcp://0.0.0.0:1234
          --containerd-worker=true
          --containerd-worker-addr=/run/k3s/containerd/containerd.sock
          --containerd-worker-gc
          --oci-worker=false
          --tlscacert=/certs/ca/tls.crt
          --tlscert=/certs/server/tls.crt
          --tlskey=/certs/server/tls.key
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Error
          Exit Code:    1
          Started:      Tue, 22 Feb 2022 21:56:28 +0100
          Finished:     Tue, 22 Feb 2022 21:56:28 +0100
        Ready:          False
        Restart Count:  5
        Liveness:       exec [buildctl debug workers] delay=5s timeout=1s period=20s #success=1 #failure=3
        Readiness:      exec [buildctl debug workers] delay=5s timeout=1s period=20s #success=1 #failure=3
        Environment:    <none>
        Mounts:
          /certs/ca from certs-ca (ro)
          /certs/server from certs-server (ro)
          /run from host-run (rw)
          /sys/fs/cgroup from host-ctl (rw)
          /tmp from host-tmp (rw)
          /var/lib/buildkit from host-var-lib-buildkit (rw)
          /var/lib/rancher from host-containerd (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nw66x (ro)
      agent:
        Container ID:  containerd://bf79e9def3ff4eb74481a13c8a0f4b0a663a80b0e22facd69f5b8dd77bb7b172
        Image:         rancher/kim:v0.1.0-beta.4
        Image ID:      docker.io/rancher/kim@sha256:091daceebc3f3b9f9e126d39f6e8b6ef96d3085813f4afbd35efc1a8a94e7bf4
        Port:          1233/TCP
        Host Port:     1233/TCP
        Command:
          kim
          --debug
          agent
        Args:
          --agent-port=1233
          --buildkit-socket=unix:///run/buildkit/buildkitd.sock
          --buildkit-port=1234
          --containerd-socket=/run/k3s/containerd/containerd.sock
          --tlscacert=/certs/ca/tls.crt
          --tlscert=/certs/server/tls.crt
          --tlskey=/certs/server/tls.key
        State:          Running
          Started:      Tue, 22 Feb 2022 21:53:38 +0100
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /certs/ca from certs-ca (ro)
          /certs/server from certs-server (ro)
          /etc/pki from host-etc-pki (ro)
          /etc/ssl from host-etc-ssl (ro)
          /run from host-run (rw)
          /sys/fs/cgroup from host-ctl (rw)
          /var/lib/buildkit from host-var-lib-buildkit (rw)
          /var/lib/rancher from host-containerd (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nw66x (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             False
      ContainersReady   False
      PodScheduled      True
    Volumes:
      host-ctl:
        Type:          HostPath (bare host directory volume)
        Path:          /sys/fs/cgroup
        HostPathType:  Directory
      host-etc-pki:
        Type:          HostPath (bare host directory volume)
        Path:          /etc/pki
        HostPathType:  DirectoryOrCreate
      host-etc-ssl:
        Type:          HostPath (bare host directory volume)
        Path:          /etc/ssl
        HostPathType:  DirectoryOrCreate
      host-run:
        Type:          HostPath (bare host directory volume)
        Path:          /run
        HostPathType:  Directory
      host-tmp:
        Type:          HostPath (bare host directory volume)
        Path:          /tmp
        HostPathType:  Directory
      host-var-lib-buildkit:
        Type:          HostPath (bare host directory volume)
        Path:          /var/lib/buildkit
        HostPathType:  DirectoryOrCreate
      host-containerd:
        Type:          HostPath (bare host directory volume)
        Path:          /var/lib/rancher
        HostPathType:  DirectoryOrCreate
      certs-ca:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  kim-tls-ca
        Optional:    false
      certs-server:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  kim-tls-server
        Optional:    false
      kube-api-access-nw66x:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   BestEffort
    Node-Selectors:              node-role.kubernetes.io/builder=true
    Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                                 node.kubernetes.io/not-ready:NoExecute op=Exists
                                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                                 node.kubernetes.io/unreachable:NoExecute op=Exists
                                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
    Events:
      Type     Reason     Age                    From               Message
      ----     ------     ----                   ----               -------
      Normal   Scheduled  5m23s                  default-scheduler  Successfully assigned kube-image/builder-4rcj8 to myhostname-redacted
      Normal   Pulling    5m23s                  kubelet            Pulling image "docker.io/moby/buildkit:v0.8.3"
      Normal   Pulled     5m17s                  kubelet            Successfully pulled image "docker.io/moby/buildkit:v0.8.3" in 5.713335137s
      Normal   Started    5m17s                  kubelet            Started container rshared-tmp
      Normal   Created    5m17s                  kubelet            Created container rshared-tmp
      Normal   Created    5m16s                  kubelet            Created container rshared-buildkit
      Normal   Pulled     5m16s                  kubelet            Container image "docker.io/moby/buildkit:v0.8.3" already present on machine
      Normal   Started    5m16s                  kubelet            Started container rshared-buildkit
      Normal   Started    5m15s                  kubelet            Started container rshared-containerd
      Normal   Pulled     5m15s                  kubelet            Container image "docker.io/moby/buildkit:v0.8.3" already present on machine
      Normal   Created    5m15s                  kubelet            Created container rshared-containerd
      Normal   Pulling    5m14s                  kubelet            Pulling image "rancher/kim:v0.1.0-beta.4"
      Normal   Pulled     5m10s                  kubelet            Successfully pulled image "rancher/kim:v0.1.0-beta.4" in 4.262237397s
      Normal   Created    5m10s                  kubelet            Created container agent
      Normal   Started    5m10s                  kubelet            Started container agent
      Normal   Created    5m9s (x2 over 5m14s)   kubelet            Created container buildkit
      Normal   Started    5m9s (x2 over 5m14s)   kubelet            Started container buildkit
      Normal   Pulled     4m50s (x3 over 5m14s)  kubelet            Container image "docker.io/moby/buildkit:v0.8.3" already present on machine
      Warning  BackOff    22s (x32 over 5m8s)    kubelet            Back-off restarting failed container
    

    I have no clue why this pod is crash-looping. I've not been able to get kim to work on this machine. I'd love to use it so I don't have to deal with shuffling images by hand.

    opened by tobiasoort 2
  • Error while dialing dial tcp 10.0.2.15:1233: i/o timeout

    I use the latest kim binary (v0.1.0-beta.7). I installed it on a k8s 1.20.13 cluster. The install looks good (logs above), but when I try to list images I get this error: Error while dialing dial tcp 10.0.2.15:1233: i/o timeout

    klf builder-r4rds -c buildkit
    time="2021-12-05T09:41:16Z" level=warning msg="using host network as the default"
    time="2021-12-05T09:41:16Z" level=info msg="found worker \"c0bb48tm9n3r3rrc5ru1s91rn\", labels=map[org.mobyproject.buildkit.worker.executor:containerd org.mobyproject.buildkit.worker.hostname:alma8 org.mobyproject.buildkit.worker.snapshotter:overlayfs], platforms=[linux/amd64 linux/386]"
    time="2021-12-05T09:41:16Z" level=info msg="found 1 workers, default=\"c0bb48tm9n3r3rrc5ru1s91rn\""
    time="2021-12-05T09:41:16Z" level=warning msg="currently, only the default worker can be used."
    time="2021-12-05T09:41:16Z" level=warning msg="TLS is disabled for unix:///run/buildkit/buildkitd.sock"
    time="2021-12-05T09:41:16Z" level=info msg="running server on /run/buildkit/buildkitd.sock"
    time="2021-12-05T09:41:16Z" level=info msg="running server on [::]:1234"
    
    klf builder-r4rds -c agent
    

    No logs in the agent. The IP in the error, 10.0.2.15, is the IP of the pod:

    kgp -o wide
    NAME            READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
    builder-r4rds   2/2     Running   0          25m   10.0.2.15   alma8   <none>           <none>
    

    Obviously my host cannot connect to the pod IP. What is the best practice here? Is there an option to run kim with a LoadBalancer service, or an ingress? How can I configure the CLI where to connect? For example, to connect to a port-forward on localhost.
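
    The kim help output quoted in earlier comments does not show an endpoint-override flag, but as an untested diagnostic you can port-forward the builder Service from the kube-image namespace and check whether the buildkitd (1234) and agent (1233) gRPC ports answer locally:

    kubectl -n kube-image port-forward service/builder 1234:1234 1233:1233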

    opened by devopstales 1
  • `kim build` with Rancher Desktop fails to pull base images from custom registry with self-signed cert

    For bugs, describe what you're seeing

    Using kim build with Rancher Desktop on macOS involves pulling a base image from a custom registry which uses a self-signed corporate cert, and the error is x509: certificate signed by unknown authority. I have the root CA certs in Keychain as well as under /usr/local/share/ca-certificates on my host machine. I understand that Rancher Desktop has recently added support for installing the host CA certs into k3s under the covers. However, when I checked the BuildKit instance running in the kube-image namespace in k3s, it doesn't seem to have the corporate root CA certs imported from the host machine. My understanding is that kim is the one installing the BuildKit instance, hence this report.

    To reproduce the behaviour:

    $ kim build -f Dockerfile .
    

    Result

    [+] Building 0.4s (3/3) FINISHED                                                                                                                                                            
     => [internal] load build definition from Dockerfile                                                                                                                            0.1s
     => => transferring dockerfile: 38B                                                                                                                                                    0.0s
     => [internal] load .dockerignore                                                                                                                                                      0.0s
     => => transferring context: 2B                                                                                                                                                        0.0s
     => ERROR [internal] load metadata for foobar.com/myimage:tag                                                                                                             0.2s
    ------
     > [internal] load metadata for foobar.com/myimage:tag
    ------
    error: failed to solve: failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to do request: Head https://foobar.com/v2/myimage/manifests/tag: x509: certificate signed by unknown authority
    FATA[0000] unrecognized image format
    

    This issue makes kim unsuitable for use in many corporate environments. This issue is similar to the one reported to Rancher Desktop: https://github.com/rancher-sandbox/rancher-desktop/issues/909, as both kim and nerdctl seem to suffer from the same problem.

    opened by stanleymho 0
  • kim for osx kind-backed clusters?

    So, as an experiment I've integrated kim with tilt using a kind backend. This works perfectly on ubuntu, and has allowed us to shave a pretty hefty chunk off build times (25 to 40%) by avoiding the image registry one would normally need for a tilt / kind / docker project.

    However, on osx using kind and docker-desktop-for-mac, kim fails with the error:

     level=fatal msg="failed to get status: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 172.24.0.2:1234: i/o timeout\""
    

    I know "Smarter automatic-ish bootstrap for non-k3s installations (think EKS support)" is on the roadmap, and I assume this would fall into that bucket.

    It appears to install the infrastructure needed by kim correctly, but when it comes to actually interacting with it - builds, kim image ls, etc, it fails with the error above. Any suggestions about how to get beyond this error are very welcome!

    question 
    opened by djcp 2
Latest release: v0.1.0-beta.7