Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

Overview

The Moby Project

Moby Project logo

Moby is an open-source project created by Docker to enable and accelerate software containerization.

It provides a "Lego set" of toolkit components, the framework for assembling them into custom container-based systems, and a place for all container enthusiasts and professionals to experiment and exchange ideas. Components include container build tools, a container registry, orchestration tools, a runtime and more, and these can be used as building blocks in conjunction with other tools and projects.

Principles

Moby is an open project guided by strong principles, aiming to be modular, flexible and without too strong an opinion on user experience. It is open to the community to help set its direction.

  • Modular: the project includes lots of components that have well-defined functions and APIs that work together.
  • Batteries included but swappable: Moby includes enough components to build a fully featured container system, but its modular architecture ensures that most of the components can be swapped out for different implementations.
  • Usable security: Moby provides secure defaults without compromising usability.
  • Developer focused: The APIs are intended to be functional and useful for building powerful tools. They are not necessarily intended as end-user tools but as components aimed at developers. Documentation and UX are aimed at developers, not end users.

Audience

The Moby Project is intended for engineers, integrators and enthusiasts looking to modify, hack, fix, experiment, invent and build systems based on containers. It is not for people looking for a commercially supported system, but for people who want to work and learn with open source code.

Relationship with Docker

The components and tools in the Moby Project are initially the open source components that Docker and the community have built for the Docker Project. New projects can be added if they fit with the community goals. Docker is committed to using Moby as the upstream for the Docker Product. However, other projects are also encouraged to use Moby as an upstream, and to reuse the components in diverse ways, and all these uses will be treated in the same way. External maintainers and contributors are welcomed.

The Moby project is not intended as a location for support or feature requests for Docker products, but as a place for contributors to work on open source code, fix bugs, and make the code more useful. The releases are supported by the maintainers, community and users, on a best efforts basis only, and are not intended for customers who want enterprise or commercial support; Docker EE is the appropriate product for these use cases.


Legal

Brought to you courtesy of our legal counsel. For more context, please see the NOTICE document in this repo.

Use and transfer of Moby may be subject to certain restrictions by the United States and other governments.

It is your responsibility to ensure that your use and/or transfer does not violate applicable laws.

For more information, please see https://www.bis.doc.gov

Licensing

Moby is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.

Comments
  • docker fails to mount the block device for the container on devicemapper


    When running something like for i in {0..100}; do docker run busybox echo test; done with Docker running on devicemapper, errors are thrown and containers fail to run:

    2014/02/10 9:48:42 Error: start: Cannot start container 56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284: Error getting container 56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284 from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284' on '/var/lib/docker/devicemapper/mnt/56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284': no such file or directory
    2014/02/10 9:48:42 Error: start: Cannot start container b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914: Error getting container b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914 from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914' on '/var/lib/docker/devicemapper/mnt/b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914': no such file or directory
    2014/02/10 9:48:43 Error: start: Cannot start container ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c: Error getting container ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c' on '/var/lib/docker/devicemapper/mnt/ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c': no such file or directory
    test
    2014/02/10 9:48:43 Error: start: Cannot start container 1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3: Error getting container 1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3 from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3' on '/var/lib/docker/devicemapper/mnt/1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3': no such file or directory
    

    Fedora 20 with kernel 3.12.9 doesn't seem to be affected.

    kernel version, distribution, docker info and docker version:

    3.11.0-15-generic #25~precise1-Ubuntu SMP Thu Jan 30 17:39:31 UTC 2014 x86_64 x86_64
    Ubuntu 12.04.4
     docker info
    Containers: 101
    Images: 44
    Driver: devicemapper
     Pool Name: docker-8:1-4980769-pool
     Data file: /var/lib/docker/devicemapper/devicemapper/data
     Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
     Data Space Used: 3234.9 Mb
     Data Space Total: 102400.0 Mb
     Metadata Space Used: 6.9 Mb
     Metadata Space Total: 2048.0 Mb
    
    Client version: 0.8.0-dev
    Go version (client): go1.2
    Git commit (client): 695719b
    Server version: 0.8.0-dev
    Git commit (server): 695719b
    Go version (server): go1.2
    Last stable version: 0.8.0
    

    The Docker binary is actually master with PR #4017 merged.

    /cc @alexlarsson

    area/storage/devicemapper kind/bug 
    opened by unclejack 391
  • Proposal: Add support for build-time environment variables to the 'build' API


    A build-time environment variable is passed to the builder (as part of build API) and made available to the Dockerfile primitives for use in variable expansion and setting up the environment for the RUN primitive (without modifying the Dockerfile and persisting them as environment in the final image).

    The following simple example illustrates the feature:

    docker build --build-env usr=foo --build-env http_proxy="my.proxy.url" <<EOF
    From busybox
    USER ${usr:-bar}
    RUN git clone <my.private.repo.behind.a.proxy>
    EOF
    

    Some of the use cases this PR enables are listed below (captured from comments in the PR's thread).

    [Edit: 05/22/2015] ~~A build-time environment variable gets used only while processing the 'RUN' primitive of a DockerFile. Such a variable is only accessible during 'RUN' and is 'not' persisted with the intermediate and final docker images, thereby addressing the portability concerns of the images generated with 'build'.~~

    This addresses issue #4962

    +++++++++ Edit: 05/21/2015, 06/26/2015

    This PR discussion thread has grown, bringing out use cases that this PR serves well and ones it doesn't. Below I consolidate a list of those use cases that have emerged from the comments, for ease of reference.

    There are two broad use cases that this feature enables:

    • passing build-environment-specific variables without modifying the Dockerfile or persisting them in the final image. A common use case is the proxy URL (http_proxy, https_proxy...) ~~but this can be any other environment variable as well~~ but there are other cases as well, like this one https://github.com/docker/docker/issues/14191#issuecomment-115672621.
      • related comments: https://github.com/docker/docker/pull/9176#issuecomment-72072046 https://github.com/docker/docker/pull/9176#issuecomment-104386863
    • parametrize builds.
      • related comments: https://github.com/docker/docker/pull/9176#issuecomment-99269827 https://github.com/docker/docker/pull/9176#issuecomment-75432026 https://github.com/docker/docker/issues/9731#issuecomment-77370381

    The following use case is not served well by this feature and is hence not recommended:

    • passing secrets with caching turned on:
      • related comments: https://github.com/docker/docker/pull/9176#issuecomment-101876406 https://github.com/docker/docker/pull/9176#issuecomment-99542089

    The following use cases might still be suitable with caching turned off: https://github.com/docker/docker/pull/9176#issuecomment-88278968 https://github.com/docker/docker/pull/9176#issuecomment-88377527
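    For later readers: the capability this proposal describes eventually shipped in Docker as build arguments (`ARG` in the Dockerfile plus `--build-arg` on the CLI). A minimal sketch of that shipped form, mirroring the example above with the hypothetical `usr` variable:

```dockerfile
FROM busybox
# Declare a build-time variable with a default; override it at build time
# with `docker build --build-arg usr=foo .`. The value is not persisted as
# ENV in the final image (though it may appear in the image history).
ARG usr=bar
USER ${usr}
```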

    status/1-design-review status/needs-attention 
    opened by mapuri 384
  • docker build should support privileged operations


    Currently there seems to be no way to run privileged operations outside of docker run -privileged.

    That means that I cannot do the same things in a Dockerfile. My recent issue: I'd like to run fuse (for encfs) inside a container. Installing fuse is already a mess of hacks and ugly workarounds (see [1] and [2]), because mknod fails/isn't supported without a privileged build step.

    The only workaround right now is to do the installation manually, using run -privileged, and creating a new 'fuse base image'. This means that I cannot describe the whole container, from an official base image to the finished result, in a single Dockerfile.

    I'd therefore suggest adding either

    • a docker build -privileged
      this should do the same thing as run -privileged, i.e. removing all caps limitations

    or

    • a RUNP command in the Dockerfile
      this should .. well .. RUN, but with _P_rivileges

    I tried looking at the source, but I'm useless with Go and couldn't find a decent entry point to attach a proof of concept, unfortunately. :(

    1: https://github.com/rogaha/docker-desktop/blob/master/Dockerfile#L40 2: https://github.com/dotcloud/docker/issues/514#issuecomment-22101217

    area/builder kind/feature 
    opened by darklajid 287
  • New feature request: Selectively disable caching for specific RUN commands in Dockerfile


    branching off the discussion from #1384 :

    I understand -no-cache will disable caching for the entire Dockerfile, but it would be useful if I could disable the cache for a specific RUN command, for example one that updates repos or downloads a remote file, etc. From my understanding, right now RUN apt-get update, if cached, wouldn't actually update the repo, which would cause the results to differ from those in a VM.

    If disabling caching for specific commands in the Dockerfile were made possible, would the subsequent commands in the file then not use the cache? Or would they do something a bit more intelligent, e.g. use the cache only if the previous command produced the same result (fs layer) as a previous run?
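    No per-instruction cache switch exists, but once build arguments became available in later Docker versions, a commonly shared workaround was to invalidate the cache from a chosen instruction onward by changing an argument's value. A sketch:

```dockerfile
FROM ubuntu
# Everything above this ARG still uses the cache. Passing a fresh value,
# e.g. `docker build --build-arg CACHEBUST=$(date +%s) .`, changes this
# instruction and forces it and all later steps to re-run.
ARG CACHEBUST=1
RUN apt-get update
```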

    area/builder kind/feature 
    opened by mohanraj-r 266
  • Document how to connect to Docker host from container


    I had some trouble figuring out how to connect to the Docker host from a container. I couldn't find documentation, but did find IRC logs saying something about using 172.16.42.1, which works.

    It'd be nice if this behavior, and how it relates to docker0, were documented.
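    A general way to discover the host address from inside a container on the default bridge is to read the container's default gateway, which is the docker0 address. A tiny sketch parsing a sample `ip route` line (the sample address matches the one mentioned above; the real value is whatever docker0 has on your host):

```shell
# Inside a container, `ip route show default` prints a line like the sample
# below; field 3 is the gateway, i.e. the Docker host's docker0 address.
route_line="default via 172.16.42.1 dev eth0"
gateway=$(echo "$route_line" | awk '/^default/ {print $3}')
echo "$gateway"   # → 172.16.42.1
```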

    opened by bkad 263
  • Docker 1.9.1 hanging at build step "Setting up ca-certificates-java"

    A few of us within the office upgraded to the latest version of docker toolbox backed by Docker 1.9.1 and builds are hanging as per the below build output.

    docker version:

     Version:      1.9.1
     API version:  1.21
     Go version:   go1.4.3
     Git commit:   a34a1d5
     Built:        Fri Nov 20 17:56:04 UTC 2015
     OS/Arch:      darwin/amd64
    
    Server:
     Version:      1.9.1
     API version:  1.21
     Go version:   go1.4.3
     Git commit:   a34a1d5
     Built:        Fri Nov 20 17:56:04 UTC 2015
     OS/Arch:      linux/amd64
    

    docker info:

    Containers: 10
    Images: 57
    Server Version: 1.9.1
    Storage Driver: aufs
     Root Dir: /mnt/sda1/var/lib/docker/aufs
     Backing Filesystem: extfs
     Dirs: 77
     Dirperm1 Supported: true
    Execution Driver: native-0.2
    Logging Driver: json-file
    Kernel Version: 4.1.13-boot2docker
    Operating System: Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015
    CPUs: 1
    Total Memory: 1.956 GiB
    Name: vbootstrap-vm
    ID: LLM6:CASZ:KOD3:646A:XPRK:PIVB:VGJ5:JSDB:ZKAN:OUC4:E2AK:FFTC
    Debug mode (server): true
     File Descriptors: 13
     Goroutines: 18
     System Time: 2015-11-24T02:03:35.597772191Z
     EventsListeners: 0
     Init SHA1: 
     Init Path: /usr/local/bin/docker
     Docker Root Dir: /mnt/sda1/var/lib/docker
    Labels:
     provider=virtualbox
    

    uname -a:

    Darwin JRedl-MB-Pro.local 15.0.0 Darwin Kernel Version 15.0.0: Sat Sep 19 15:53:46 PDT 2015; root:xnu-3247.10.11~1/RELEASE_X86_64 x86_64
    

    Here is a snippet from the docker build output that hangs on the Setting up ca-certificates-java line. Something to do with the latest version of Docker and OpenJDK?

    update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/tnameserv to provide /usr/bin/tnameserv (tnameserv) in auto mode
    update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode
    Setting up ca-certificates-java (20140324) ...
    

    Docker file example:

    FROM gcr.io/google_appengine/base
    
    # Prepare the image.
    ENV DEBIAN_FRONTEND noninteractive
    RUN apt-get update && apt-get install -y -qq --no-install-recommends build-essential wget curl unzip python python-dev php5-mysql php5-cli php5-cgi openjdk-7-jre-headless openssh-client python-openssl && apt-get clean
    

    I can confirm that this is not an issue with Docker 1.9.0 or Docker Toolbox 1.9.0d. Let me know if I can provide any further information but this feels like a regression of some sort within the new version.

    area/kernel kind/bug 
    opened by jredl-va 258
  • Swarm is having occasional network connection problems between nodes.


    A few times a day I have connection issues between nodes, and clients see an occasional "Bad request" error. My swarm setup (AWS) has the following services: nginx (global) and web (replicated=2), on a separate overlay network. In nginx.conf I use proxy_pass http://web:5000 to route requests to the web service. Both services are running and marked as healthy, and hadn't been restarted while these errors occurred. The manager is a separate node (30sec-manager1).

    A few times a day, for a few requests, I receive errors that nginx couldn't connect upstream, and I always see the 10.0.0.6 IP address mentioned:

    Here are the related nginx and docker logs. Both web replicas run on the 30sec-worker3 and 30sec-worker4 nodes.

    Nginx log:
    ----------
    2017/03/29 07:13:18 [error] 7#7: *44944 connect() failed (113: Host is unreachable) while connecting to upstream, client: 104.154.58.95, server: 30seconds.com, request: "GET / HTTP/1.1", upstream: "http://10.0.0.6:5000/", host: "30seconds.com"
    
    Around same time from docker logs (journalctl -u docker.service)
    
    on node 30sec-manager1:
    ---------------------------
    Mar 29 07:12:50 30sec-manager1 docker[30365]: time="2017-03-29T07:12:50.736935344Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker3-054c94d39b58)"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54.659229055Z" level=info msg="memberlist: Marking 30sec-worker3-054c94d39b58 as failed, suspect timeout reached"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:10 30sec-manager1 docker[30365]: time="2017-03-29T07:13:10.302960985Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:13:11 30sec-manager1 docker[30365]: time="2017-03-29T07:13:11.055187819Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker3-054c94d39b58)"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:17 30sec-manager1 docker[30365]: time="2017-03-29T07:13:17Z" level=info msg="Firewalld running: false"
    
    on node 30sec-worker3:
    -------------------------
    Mar 29 07:12:50 30sec-worker3 docker[30362]: time="2017-03-29T07:12:50.613402284Z" level=info msg="memberlist: Suspect 30sec-manager1-b1cbc10665cc has failed, no acks received"
    Mar 29 07:12:55 30sec-worker3 docker[30362]: time="2017-03-29T07:12:55.614174704Z" level=warning msg="memberlist: Refuting a dead message (from: 30sec-worker4-4ca6b1dcaa42)"
    Mar 29 07:13:09 30sec-worker3 docker[30362]: time="2017-03-29T07:13:09.613368306Z" level=info msg="memberlist: Suspect 30sec-worker4-4ca6b1dcaa42 has failed, no acks received"
    Mar 29 07:13:10 30sec-worker3 docker[30362]: time="2017-03-29T07:13:10.613972658Z" level=info msg="memberlist: Suspect 30sec-manager1-b1cbc10665cc has failed, no acks received"
    Mar 29 07:13:11 30sec-worker3 docker[30362]: time="2017-03-29T07:13:11.042788976Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker4-4ca6b1dcaa42)"
    Mar 29 07:13:14 30sec-worker3 docker[30362]: time="2017-03-29T07:13:14.613951134Z" level=info msg="memberlist: Marking 30sec-worker4-4ca6b1dcaa42 as failed, suspect timeout reached"
    Mar 29 07:13:25 30sec-worker3 docker[30362]: time="2017-03-29T07:13:25.615128313Z" level=error msg="Bulk sync to node 30sec-manager1-b1cbc10665cc timed out"
    
    on node 30sec-worker4:
    -------------------------
    Mar 29 07:12:49 30sec-worker4 docker[30376]: time="2017-03-29T07:12:49.658082975Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54.658737367Z" level=info msg="memberlist: Marking 30sec-worker3-054c94d39b58 as failed, suspect timeout reached"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:09 30sec-worker4 docker[30376]: time="2017-03-29T07:13:09.658056735Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16.303689665Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker4-4ca6b1dcaa42)"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    
    syslog on 30sec-worker4:
    --------------------------
    Mar 29 07:12:49 30sec-worker4 docker[30376]: time="2017-03-29T07:12:49.658082975Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54.658737367Z" level=info msg="memberlist: Marking 30sec-worker3-054c94d39b58 as failed, suspect timeout reached"
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.048975] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.100691] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.130069] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.155859] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.180461] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.205707] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.230326] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.255597] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 docker[30376]: message repeated 7 times: [ time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"]
    Mar 29 07:13:09 30sec-worker4 docker[30376]: time="2017-03-29T07:13:09.658056735Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16.303689665Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker4-4ca6b1dcaa42)"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: message repeated 7 times: [ time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"]
    

    I checked other cases when nginx can't find the upstream, and I always find these three lines appearing most often at those times in the docker logs:

    level=info msg="memberlist:Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker3-054c94d39b58)"
    level=warning msg="memberlist: Refuting a dead message (from: 30sec-worker3-054c94d39b58)
    

    Searching other issues, I found some with similar errors, so they may be related: https://github.com/docker/docker/issues/28843 https://github.com/docker/docker/issues/25325

    Anything I should check or debug further to spot the problem, or is it a bug? Thank you.
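    One mitigation often suggested when an nginx upstream address goes stale on an overlay network (an assumption about the cause here, not a confirmed fix for this report) is to make nginx re-resolve the service name through Docker's embedded DNS at 127.0.0.11 on each request, instead of caching the address at startup, by using a variable in proxy_pass. A hypothetical nginx.conf fragment, with "web" and port 5000 matching the compose file below:

```nginx
server {
    # Docker's embedded DNS; re-check the answer every 10s.
    resolver 127.0.0.11 valid=10s;
    location / {
        # Using a variable forces nginx to resolve "web" at request time.
        set $upstream http://web:5000;
        proxy_pass $upstream;
    }
}
```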

    Output of docker version:

    Client:
     Version:      17.03.0-ce
     API version:  1.26
     Go version:   go1.7.5
     Git commit:   60ccb22
     Built:        Thu Feb 23 11:02:43 2017
     OS/Arch:      linux/amd64
    
    Server:
     Version:      17.03.0-ce
     API version:  1.26 (minimum version 1.12)
     Go version:   go1.7.5
     Git commit:   60ccb22
     Built:        Thu Feb 23 11:02:43 2017
     OS/Arch:      linux/amd64
     Experimental: false
    

    Output of docker info:

    Containers: 18
     Running: 3
     Paused: 0
     Stopped: 15
    Images: 16
    Server Version: 17.03.0-ce
    Storage Driver: aufs
     Root Dir: /var/lib/docker/aufs
     Backing Filesystem: extfs
     Dirs: 83
     Dirperm1 Supported: true
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
     Volume: local
     Network: bridge host macvlan null overlay
    Swarm: active
     NodeID: ck99cyhgydt8y1zn8ik2xmcdv
     Is Manager: true
     ClusterID: in0q54eh74ljazrprt0vza3wj
     Managers: 1
     Nodes: 5
     Orchestration:
      Task History Retention Limit: 5
     Raft:
      Snapshot Interval: 10000
      Number of Old Snapshots to Retain: 0
      Heartbeat Tick: 1
      Election Tick: 3
     Dispatcher:
      Heartbeat Period: 5 seconds
     CA Configuration:
      Expiry Duration: 3 months
     Node Address: 172.31.31.146
     Manager Addresses:
      172.31.31.146:2377
    Runtimes: runc
    Default Runtime: runc
    Init Binary: docker-init
    containerd version: 977c511eda0925a723debdc94d09459af49d082a
    runc version: a01dafd48bc1c7cc12bdb01206f9fea7dd6feb70
    init version: 949e6fa
    Security Options:
     apparmor
     seccomp
      Profile: default
    Kernel Version: 4.4.0-57-generic
    Operating System: Ubuntu 16.04.1 LTS
    OSType: linux
    Architecture: x86_64
    CPUs: 1
    Total Memory: 990.6 MiB
    Name: 30sec-manager1
    ID: 5IIF:RONB:Y27Q:5MKX:ENEE:HZWM:XYBV:O6KN:BKL6:AEUK:2VKB:MO5P
    Docker Root Dir: /var/lib/docker
    Debug Mode (client): false
    Debug Mode (server): false
    Registry: https://index.docker.io/v1/
    WARNING: No swap limit support
    Labels:
     provider=amazonec2
    Experimental: false
    Insecure Registries:
     127.0.0.0/8
    Live Restore Enabled: false
    

    Additional environment details (AWS, VirtualBox, physical, etc.): Amazon AWS (Manager - t2.micro, rest of nodes - t2.small)

    docker-compose.yml (there are more services and nodes in the setup, but I posted only the ones involved)

    version: "3"
    
    services:
    
      nginx:
        image: 333435094895.dkr.ecr.us-east-1.amazonaws.com/swarm/nginx:latest
        ports:
          - 80:80
          - 81:81
        networks:
          - thirtysec
        depends_on:
          - web
        deploy:
          mode: global
          update_config:
            delay: 2s
            monitor: 2s
    
      web:
        image: 333435094895.dkr.ecr.us-east-1.amazonaws.com/swarm/os:latest
        command: sh -c "python manage.py collectstatic --noinput && daphne thirtysec.asgi:channel_layer -b 0.0.0.0 -p 5000"
        ports:
          - 5000:5000
        networks:
          - thirtysec
        deploy:
          mode: replicated
          replicas: 2
          labels: [APP=THIRTYSEC]
          update_config:
            delay: 15s
            monitor: 15s
          placement:
            constraints: [node.labels.aws_type == t2.small]
    
        healthcheck:
          test: goss -g deploy/swarm/checks/web-goss.yaml validate
          interval: 2s
          timeout: 3s
          retries: 15
    
    networks:
        thirtysec:
    

    web-goss.yaml

    port:
      tcp:5000:
        listening: true
        ip:
        - 0.0.0.0
    
    area/networking area/swarm version/17.03 
    opened by darklow 250
  • Phase 1 implementation of user namespaces as a remapped container root


    This pull request is an initial implementation of user namespace support in the Docker daemon, which we are labeling an initial "phase 1" milestone with limited scope/capability; hopefully it will be available for use in Docker 1.7.

    The code is designed to support full uid and gid maps, but this implementation limits the scope of usage to a remap of just the root uid/gid to a non-privileged user on the host. This remapping is scoped at the Docker daemon level, so all containers running on a Docker daemon will have the same remapped uid/gid as root. See PR #11253 for an initial discussion on the design.

    Discussion of future, possibly more complex, phases should be kept separate from specific design/code review of this "phase 1" implementation; see the above-mentioned PR for discussions of more advanced use cases, such as mapping complete uid/gid ranges per tenant in a multi-tenant environment.
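    For later readers: daemon-level remapping along these lines shipped behind the `userns-remap` daemon option. A sketch of the conventional configuration (the `dockremap` user and the ranges shown are the customary defaults, not taken from this PR's text):

```
# /etc/subuid and /etc/subgid (one line in each file): remap user,
# first subordinate id on the host, and the size of the range.
dockremap:100000:65536

# /etc/docker/daemon.json: remap container root to that range.
{ "userns-remap": "default" }
```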

    status/2-code-review area/security 
    opened by estesp 240
  • flatten images - merge multiple layers into a single one


    There's no way to flatten images right now. When performing a build in multiple steps, a few images can be generated and a large number of layers is produced. When these are pushed to the registry, a lot of data and a large number of layers have to be downloaded.

    There are some cases where one starts with a base image (or another image), changes some large files in one step, changes them again in the next and deletes them in the end. This means those files would be stored in 2 separate layers and deleted by whiteout files in the final image.

    These intermediary layers aren't necessarily useful to others or to the final deployment system.

    Image flattening should work like this:

    • the history of the build steps needs to be preserved
    • the flattening can be done up to a target image (for example, up to a base image)
    • the flattening should also be allowed to be done completely (as if exporting the image)
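Until such a feature exists, the closest approximation is the export/import round-trip, which loses build history but yields a single layer (a sketch; the image name is an example and this needs a running daemon):

```shell
# Flatten completely by exporting a container's filesystem and re-importing
# it as a fresh single-layer image (history and metadata are discarded):
docker export $(docker create myimage) | docker import - myimage:flat
```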
    kind/feature 
    opened by unclejack 231
  • Device-mapper does not release free space from removed images

    Device-mapper does not release free space from removed images

Docker claims, via docker info, to have freed space after an image is deleted, but the data file retains its former size, and the sparse file allocated for the device-mapper storage backend will continue to grow without bound as more extents are allocated.

    I am using lxc-docker on Ubuntu 13.10:

    Linux ergodev-zed 3.11.0-14-generic #21-Ubuntu SMP Tue Nov 12 17:04:55 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
    

    This sequence of commands reveals the problem:

Doing a docker pull stackbrew/ubuntu:13.10 increased the space usage reported by docker info. Before:

    Containers: 0
    Images: 0
    Driver: devicemapper
     Pool Name: docker-252:0-131308-pool
     Data file: /var/lib/docker/devicemapper/devicemapper/data
     Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
     Data Space Used: 291.5 Mb
     Data Space Total: 102400.0 Mb
     Metadata Space Used: 0.7 Mb
     Metadata Space Total: 2048.0 Mb
    WARNING: No swap limit support
    

    And after docker pull stackbrew/ubuntu:13.10:

    Containers: 0
    Images: 3
    Driver: devicemapper
     Pool Name: docker-252:0-131308-pool
     Data file: /var/lib/docker/devicemapper/devicemapper/data
     Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
     Data Space Used: 413.1 Mb
     Data Space Total: 102400.0 Mb
     Metadata Space Used: 0.8 Mb
     Metadata Space Total: 2048.0 Mb
    WARNING: No swap limit support
    

    And after docker rmi 8f71d74c8cfc, it returns:

    Containers: 0
    Images: 0
    Driver: devicemapper
     Pool Name: docker-252:0-131308-pool
     Data file: /var/lib/docker/devicemapper/devicemapper/data
     Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
     Data Space Used: 291.5 Mb
     Data Space Total: 102400.0 Mb
     Metadata Space Used: 0.7 Mb
     Metadata Space Total: 2048.0 Mb
    WARNING: No swap limit support
    

The only problem is that the data file has expanded to 414 MiB (849016 512-byte sectors) per stat. Some of that space is properly reused after an image has been deleted, but the data file never shrinks. And under some condition I have not yet been able to reproduce, I have 291.5 MiB allocated that can't even be reused.
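The gap between the docker info numbers and on-disk usage comes from the data file being sparse: its apparent size only ever grows, while the number of blocks actually allocated can differ. A quick way to see both numbers side by side (illustrative, using a throwaway file rather than the devicemapper data file):

```shell
# Create a 100 MiB sparse file, then compare apparent size vs allocated blocks.
truncate -s 100M sparse.img
stat -c 'apparent bytes: %s' sparse.img          # the full declared size
stat -c 'allocated 512B blocks: %b' sparse.img   # only blocks actually written
du -h sparse.img                                  # du counts allocated blocks
rm sparse.img
```

This is why du on the data file can report a figure that disagrees with both docker info and ls -l: deleted extents can be reused by the thin pool, but the file's apparent size is never trimmed back.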

    My dmsetup ls looks like this when there are 0 images installed:

    # dmsetup ls
    docker-252:0-131308-pool        (252:2)
    ergodev--zed--vg-root   (252:0)
    cryptswap       (252:1)
    

    And a du of the data file shows this:

    # du /var/lib/docker/devicemapper/devicemapper/data -h
    656M    /var/lib/docker/devicemapper/devicemapper/data
    

    How can I have docker reclaim space, and why doesn't docker automatically do this when images are removed?

    area/storage/devicemapper exp/expert kind/bug 
    opened by AaronFriel 206
  • Unable to remove a stopped container: `device or resource busy`

    Unable to remove a stopped container: `device or resource busy`

    Apologies if this is a duplicate issue, there seems to be several outstanding issues around a very similar error message but under different conditions. I initially added a comment on #21969 and was told to open a separate ticket, so here it is!


    BUG REPORT INFORMATION

    Output of docker version:

    Client:
     Version:      1.11.0
     API version:  1.23
     Go version:   go1.5.4
     Git commit:   4dc5990
     Built:        Wed Apr 13 18:34:23 2016
     OS/Arch:      linux/amd64
    
    Server:
     Version:      1.11.0
     API version:  1.23
     Go version:   go1.5.4
     Git commit:   4dc5990
     Built:        Wed Apr 13 18:34:23 2016
     OS/Arch:      linux/amd64
    

    Output of docker info:

    Containers: 2
     Running: 2
     Paused: 0
     Stopped: 0
    Images: 51
    Server Version: 1.11.0
    Storage Driver: aufs
     Root Dir: /var/lib/docker/aufs
     Backing Filesystem: extfs
     Dirs: 81
     Dirperm1 Supported: false
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
     Volume: local
     Network: bridge null host
    Kernel Version: 3.13.0-74-generic
    Operating System: Ubuntu 14.04.3 LTS
    OSType: linux
    Architecture: x86_64
    CPUs: 1
    Total Memory: 3.676 GiB
    Name: ip-10-1-49-110
    ID: 5GAP:SPRQ:UZS2:L5FP:Y4EL:RR54:R43L:JSST:ZGKB:6PBH:RQPO:PMQ5
    Docker Root Dir: /var/lib/docker
    Debug mode (client): false
    Debug mode (server): false
    Registry: https://index.docker.io/v1/
    WARNING: No swap limit support
    

    Additional environment details (AWS, VirtualBox, physical, etc.):

    Running on Ubuntu 14.04.3 LTS HVM in AWS on an m3.medium instance with an EBS root volume.

    Steps to reproduce the issue:

    1. $ docker run --restart on-failure --log-driver syslog --log-opt syslog-address=udp://localhost:514 -d -p 80:80 -e SOME_APP_ENV_VAR myimage
2. The container keeps shutting down and restarting, exiting with an error due to a bug in the runtime
    3. Manually running docker stop container
    4. Container is successfully stopped
    5. Trying to rm container then throws the error: Error response from daemon: Driver aufs failed to remove root filesystem 88189a16be60761a2c04a455206650048e784d750533ce2858bcabe2f528c92e: rename /var/lib/docker/aufs/diff/a48629f102d282572bb5df964eeec7951057b50f21df7abe162f8de386e76dc0 /var/lib/docker/aufs/diff/a48629f102d282572bb5df964eeec7951057b50f21df7abe162f8de386e76dc0-removing: device or resource busy
    6. Restart docker engine: $ sudo service docker restart
    7. $ docker ps -a shows that the container no longer exists.
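A common way to narrow down what keeps the layer busy (a hypothetical diagnostic, not part of the original report) is to search every process's mount table for the layer ID that appears in the error message:

```shell
# Look for processes whose mount namespace still references the layer.
# The ID below is shortened from the error message above; substitute your own.
CID=a48629f102d2
grep -l "$CID" /proc/*/mountinfo 2>/dev/null | sed 's|/proc/\([0-9]*\)/.*|\1|'
# Any PIDs printed are candidates holding the mount; no output means no live
# process currently references it.
```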
    area/storage/aufs version/1.11 
    opened by pheuter 204
  • hack: restore copy_binaries func

    hack: restore copy_binaries func

    reported on community slack https://dockercommunity.slack.com/archives/C50QFMRC2/p1672827713572689

    - What I did

The copy_binaries func was removed in https://github.com/moby/moby/pull/44546 (https://github.com/moby/moby/commit/8086f4012330d1c1058e07fc4e5e4522dd432c20#diff-04ff962b93ae3db9d1620183ecb03b37c366b4fa3cf189cf5ec2b646c44d432f) but is still useful for the dev environment.

    cc @akerouanton

    - How I did it

    - How to verify it

    - Description for the changelog

    - A picture of a cute animal (not mandatory but encouraged)

    opened by crazy-max 0
  • Updated outdated docker contributing guidelines link

    Updated outdated docker contributing guidelines link

    - What I did Updated the contributing guidelines link. I discovered this when making a separate PR.

    - How I did it Found the equivalent link for docker contributing guidelines

    - How to verify it Click the link

    - Description for the changelog

    Updated contributing guidelines link in CONTRIBUTING.md since the old link was broken.

    - A picture of a cute animal (not mandatory but encouraged)

    cute_animal2

    opened by KirkEasterson 1
  • `GenerateRandomName` now panics when `size` is over 64. Fixes #44362

    `GenerateRandomName` now panics when `size` is over 64. Fixes #44362

    Signed-off-by: Kirk Easterson [email protected]

    - What I did Added a check for size to GenerateRandomName and added a test for it

    - How I did it Added a check for size to GenerateRandomName

    - How to verify it Run the test TestGenerateRandomName in libnetwork/netutils/utils_linux_test.go. Or TESTDIRS="./libnetwork/netutils/" make test-unit

    - Description for the changelog

    netutils.GenerateRandomName now panics when the size argument is over 64

    - A picture of a cute animal (not mandatory but encouraged)

    cute_animal

    opened by KirkEasterson 0
  • Clear conntrack entries for published UDP ports

    Clear conntrack entries for published UDP ports

    - What I did

Conntrack entries are created for UDP flows even if there's nowhere to route these packets (i.e. no listening socket and no NAT rules to apply). Moreover, iptables NAT rules are evaluated by netfilter only when creating a new conntrack entry.

When Docker adds NAT rules, netfilter will ignore them for any packet matching a pre-existing conntrack entry. In that case, when dockerd runs with the userland proxy enabled, packets get routed to it, and the main symptom is a bad source IP address (as shown by #44688).

If the publishing container runs under Docker Swarm, or in "standalone" Docker with no userland proxy, affected packets are dropped (i.e. routed to nowhere).

    As such, Docker needs to flush all conntrack entries for published UDP ports to make sure NAT rules are correctly applied to all packets.
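Conceptually, the fix does for each published UDP port what one would otherwise do by hand with the conntrack tool (a sketch; the flag names are assumed from conntrack-tools, the port is an example, and this needs root):

```shell
# Delete stale conntrack entries for a published UDP port (e.g. 5000/udp),
# so the next packet creates a fresh entry and traverses the new NAT rules.
sudo conntrack -D -p udp --orig-port-dst 5000
```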

    • Fixes #44688
    • Fixes #8795
    • Fixes #16720
    • Fixes #7540
    • Fixes moby/libnetwork#2423 and probably more.

    - How to verify it

    By running the repro case in #44688.

    As a side note: I tested the repro case with the current master branch, which includes #43409, but this fix doesn't work (at least not for that issue) as it's the equivalent of these commands:

    $ conntrack -D -f ipv4 --reply-src 172.17.0.2
    $ conntrack -D -f ipv4 --reply-dst 172.17.0.2
    

    - Description for the changelog

    Clear conntrack entries for published UDP ports.

    opened by akerouanton 1
  • Dockerfile: use default apt mirrors

    Dockerfile: use default apt mirrors

    follow-up https://github.com/moby/moby/pull/44546#discussion_r1059776409

    - What I did

    Removes APT_MIRROR added in https://github.com/moby/moby/pull/39537 as I don't think we need an alternative mirror anymore. Also removes BUILD_APT_MIRROR added in https://github.com/moby/moby/pull/26375 that does not seem to be used.

    - How I did it

    - How to verify it

    - Description for the changelog

    - A picture of a cute animal (not mandatory but encouraged)

    status/2-code-review area/project area/packaging kind/refactor 
    opened by crazy-max 2
  • ci: build and push moby bundles on Docker Hub

    ci: build and push moby bundles on Docker Hub

    follow-up https://github.com/rumpl/moby/pull/24

    - What I did

Adds a new workflow to build and push a non-runnable multi-platform image to Docker Hub (moby/moby-bin) that contains the bundles. This is useful if we want to try out the latest changes without building moby.

    image

    https://hub.docker.com/r/crazymax/moby-bin/tags

    $ undock --rm-dist --all crazymax/moby-bin:latest ./moby-bin
    ...
    $ tree -nh ./moby-bin/
    [4.0K]  ./moby-bin/
    ├── [4.0K]  linux_amd64
    │   ├── [ 37M]  containerd
    │   ├── [8.1M]  containerd-shim-runc-v2
    │   ├── [ 18M]  ctr
    │   ├── [748K]  docker-init
    │   ├── [1.9M]  docker-proxy
    │   ├── [ 62M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 11M]  rootlesskit
    │   ├── [6.9M]  rootlesskit-docker-proxy
    │   ├── [ 14M]  runc
    │   └── [ 32M]  vpnkit
    ├── [4.0K]  linux_arm64
    │   ├── [ 36M]  containerd
    │   ├── [7.9M]  containerd-shim-runc-v2
    │   ├── [ 17M]  ctr
    │   ├── [527K]  docker-init
    │   ├── [2.0M]  docker-proxy
    │   ├── [ 59M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 10M]  rootlesskit
    │   ├── [6.6M]  rootlesskit-docker-proxy
    │   ├── [ 13M]  runc
    │   └── [ 38M]  vpnkit
    ├── [4.0K]  linux_armv5
    │   ├── [ 35M]  containerd
    │   ├── [7.8M]  containerd-shim-runc-v2
    │   ├── [ 17M]  ctr
    │   ├── [484K]  docker-init
    │   ├── [1.9M]  docker-proxy
    │   ├── [ 57M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 10M]  rootlesskit
    │   ├── [6.8M]  rootlesskit-docker-proxy
    │   └── [ 13M]  runc
    ├── [4.0K]  linux_armv6
    │   ├── [ 35M]  containerd
    │   ├── [7.8M]  containerd-shim-runc-v2
    │   ├── [ 17M]  ctr
    │   ├── [484K]  docker-init
    │   ├── [1.9M]  docker-proxy
    │   ├── [ 57M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 10M]  rootlesskit
    │   ├── [6.8M]  rootlesskit-docker-proxy
    │   └── [ 13M]  runc
    ├── [4.0K]  linux_armv7
    │   ├── [ 35M]  containerd
    │   ├── [7.8M]  containerd-shim-runc-v2
    │   ├── [ 17M]  ctr
    │   ├── [364K]  docker-init
    │   ├── [1.9M]  docker-proxy
    │   ├── [ 57M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 10M]  rootlesskit
    │   ├── [6.8M]  rootlesskit-docker-proxy
    │   └── [ 12M]  runc
    ├── [4.0K]  linux_ppc64le
    │   ├── [ 36M]  containerd
    │   ├── [8.0M]  containerd-shim-runc-v2
    │   ├── [ 18M]  ctr
    │   ├── [843K]  docker-init
    │   ├── [1.9M]  docker-proxy
    │   ├── [ 60M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 10M]  rootlesskit
    │   ├── [6.7M]  rootlesskit-docker-proxy
    │   └── [ 13M]  runc
    ├── [4.0K]  linux_s390x
    │   ├── [ 39M]  containerd
    │   ├── [8.6M]  containerd-shim-runc-v2
    │   ├── [ 19M]  ctr
    │   ├── [615K]  docker-init
    │   ├── [2.0M]  docker-proxy
    │   ├── [ 64M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 11M]  rootlesskit
    │   ├── [7.1M]  rootlesskit-docker-proxy
    │   └── [ 14M]  runc
    └── [4.0K]  windows_amd64
        ├── [103K]  containerutility.exe
        ├── [2.1M]  docker-proxy.exe
        └── [ 57M]  dockerd.exe
    
    8 directories, 82 files
    

    - How I did it

    - How to verify it

    $ docker buildx bake bin-image-cross
    

    - Description for the changelog

    - A picture of a cute animal (not mandatory but encouraged)

    cc @rumpl @vvoland

    opened by crazy-max 1
Releases(v23.0.0-rc.1)
  • v23.0.0-rc.1(Dec 27, 2022)

    This is a pre-release of the upcoming 23.0.0 release.

    Pre-releases are intended for testing new releases: only install in a test environment!

    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo CHANNEL=test sh get-docker.sh
    

Starting with the 23.0.0 release, we're moving away from CalVer versioning and adopting the SemVer format. Changing the version format is a stepping-stone towards go module compatibility, but the repository does not yet use go modules, and still requires using a "+incompatible" version; we'll be working towards go module compatibility in a future release.

    Known issues:

    Bugs and regressions can be reported in these issue trackers:

    • Related to the CLI: https://github.com/docker/cli/issues
    • Related to the Docker Engine https://github.com/moby/moby/issues

    When reporting issues, include [23.0.0-rc] in the issue title

    Source code(tar.gz)
    Source code(zip)
  • v20.10.22(Dec 16, 2022)

    Bug fixes and enhancements

    • Improve error message when attempting to pull an unsupported image format or OCI artifact (moby/moby#44413, moby/moby#44569).
    • Fix an issue where the host's ephemeral port-range was ignored when selecting random ports for containers (moby/moby#44476).
    • Fix ssh: parse error in message type 27 errors during docker build on hosts using OpenSSH 8.9 or above (moby/moby#3862).
    • seccomp: block socket calls to AF_VSOCK in default profile (moby/moby#44564).

    Packaging Updates

    Source code(tar.gz)
    Source code(zip)
  • v23.0.0-beta.1(Dec 6, 2022)

    This is a pre-release of the upcoming 23.0.0 release.

    Pre-releases are intended for testing new releases: only install in a test environment!

    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo CHANNEL=test sh get-docker.sh
    

Starting with the 23.0.0 release, we're moving away from CalVer versioning and adopting the SemVer format. Changing the version format is a stepping-stone towards go module compatibility, but the repository does not yet use go modules, and still requires using a "+incompatible" version; we'll be working towards go module compatibility in a future release.

    Known issues:

    Bugs and regressions can be reported in these issue trackers:

    • Related to the CLI: https://github.com/docker/cli/issues
    • Related to the Docker Engine https://github.com/moby/moby/issues

    When reporting issues, include [23.0.0-beta] in the issue title

    Source code(tar.gz)
    Source code(zip)
  • v20.10.21(Oct 25, 2022)

    This release of Docker Engine contains updated versions of Docker Compose, Docker Scan, Containerd, added packages for Ubuntu 22.10, and some minor bug fixes and enhancements.

    Client

    • Remove "experimental" gates around "--platform" in bash completion docker/cli#3824.

    Daemon

    • Allow "allow-nondistributable-artifacts" to be configured for Docker Hub moby/moby#44313.
• Fix an Invalid standard handle identifier panic when registering the docker daemon as a service from a legacy CLI on Windows moby/moby#44326.

    Builder

    • Fix running git commands in Cygwin on Windows moby/moby#44332.
• Update bundled BuildKit version to fix "output clipped, log limit 1MiB reached" errors moby/moby#44339.

    Packaging

    • Provide packages for Ubuntu 22.10 "Kinetic Kudu".
    • Update Docker Compose to v2.12.2.
    • Update Docker Scan to v0.21.0.
    • Update containerd (containerd.io package) to v1.6.9.
    Source code(tar.gz)
    Source code(zip)
  • v20.10.20(Oct 18, 2022)

    This release of Docker Engine contains partial mitigations for a Git vulnerability (CVE-2022-39253), and has updated handling of image:tag@digest image references.

The Git vulnerability allows a maliciously crafted Git repository, when used as a build context, to copy arbitrary filesystem paths into resulting containers/images; this can occur both in the daemon and in API clients, depending on the versions and tools in use.

The mitigations available in this release and in other consumers of the daemon API are partial and only protect users who build from a Git URL context (e.g. git+protocol://). As the vulnerability could still be exploited by manually running Git commands that interact with and check out submodules, users should immediately upgrade to a patched version of Git to protect against this vulnerability. Further details are available from the GitHub blog ("Git security vulnerabilities announced").

    Client

    • Added a mitigation for CVE-2022-39253, when using the classic Builder with a Git URL as the build context.

    Daemon

• Updated handling of image:tag@digest references. When pulling an image using image:tag@digest ("pull by digest"), image resolution happens through the content-addressable digest, and the image name and tag are not used. While this is expected, it could lead to confusing behavior and could potentially be exploited through social engineering to run an image that is already present in the local image store. Docker now checks whether the digest matches the repository name used to pull the image, and produces an error otherwise.

    Builder

    • Updated handling of image:tag@digest references. Refer to the "Daemon" section above for details.
    • Added a mitigation to the classic Builder and updated BuildKit to v0.8.3-31-gc0149372, for CVE-2022-39253.
    Source code(tar.gz)
    Source code(zip)
  • v20.10.19(Oct 13, 2022)

    This release of Docker Engine comes with some bug-fixes, and an updated version of Docker Compose.

    Builder

    • Fix an issue that could result in a panic during docker builder prune or docker system prune moby/moby#44122.

    Daemon

    • Fix a bug where using docker volume prune would remove volumes that were still in use if the daemon was running with "live restore" and was restarted moby/moby#44238.

    Packaging

    Source code(tar.gz)
    Source code(zip)
  • v20.10.18(Sep 9, 2022)

    This release of Docker Engine comes with a fix for a low-severity security issue, some minor bug fixes, and updated versions of Docker Compose, Docker Buildx, containerd, and runc.

    Client

    Builder

    • Fix an issue where file-capabilities were not preserved during build moby/moby#43876.
    • Fix an issue that could result in a panic caused by a concurrent map read and map write moby/moby#44067

    Daemon

    • Fix a security vulnerability relating to supplementary group permissions, which could allow a container process to bypass primary group restrictions within the container CVE-2022-36109, GHSA-rc4r-wh2q-q6c4.
    • seccomp: add support for Landlock syscalls in default policy moby/moby#43991.
    • seccomp: update default policy to support new syscalls introduced in kernel 5.12 - 5.16 moby/moby#43991.
    • Fix an issue where cache lookup for image manifests would fail, resulting in a redundant round-trip to the image registry moby/moby#44109.
    • Fix an issue where exec processes and healthchecks were not terminated when they timed out moby/moby#44018.

    Packaging

    Source code(tar.gz)
    Source code(zip)
  • v20.10.17(Jun 7, 2022)

    This release of Docker Engine comes with updated versions of the compose, containerd, and runc components, as well as some minor bug fixes.

    Client

    • Remove asterisk from docker commands in zsh completion script docker/cli#3648.

    Networking

    • Fix Windows port conflict with published ports in host mode for overlay moby/moby#43644.
    • Ensure performance tuning is always applied to libnetwork sandboxes moby/moby#43683.

    Packaging

    Source code(tar.gz)
    Source code(zip)
  • v22.06.0-beta.0(Jun 3, 2022)

    This is the first pre-release of the upcoming 22.06 release.

    Pre-releases are intended for testing new releases: only install in a test environment!

    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo CHANNEL=test sh get-docker.sh
    

    Known issues:

    Bugs and regressions can be reported in these issue trackers:

    • Related to the CLI: https://github.com/docker/cli/issues
    • Related to the Docker Engine https://github.com/moby/moby/issues

    When reporting issues, include [22.06-beta] in the issue title

    Source code(tar.gz)
    Source code(zip)
  • v20.10.16(May 12, 2022)

    This release of Docker Engine fixes a regression in the Docker CLI builds for macOS, fixes an issue with docker stats when using containerd 1.5 and up, and updates the Go runtime to include a fix for CVE-2022-29526.

    Client

    Daemon

    • Fix an issue where docker stats was showing empty stats when running with containerd 1.5.0 or up moby/moby#43567.
    • Update the golang.org/x/sys build-time dependency which contains a fix for CVE-2022-29526.

    Packaging

    • Update Go runtime to 1.17.10, which contains a fix for CVE-2022-29526.
    • Use "weak" dependencies for the docker scan CLI plugin, to prevent a "conflicting requests" error when users performed an off-line installation from downloaded RPM packages docker/docker-ce-packaging#659.
    Source code(tar.gz)
    Source code(zip)
  • v20.10.15(May 5, 2022)

    This release of Docker Engine comes with updated versions of the compose, buildx, containerd, and runc components, as well as some minor bugfixes.

    Daemon

    • Use a RWMutex for stateCounter to prevent potential locking congestion moby/moby#43426.
    • Prevent an issue where the daemon was unable to find an available IP-range in some conditions moby/moby#43360

    Packaging

    • Update Docker Compose to v2.5.0.
    • Update Docker Buildx to v0.8.2.
    • Update Go runtime to 1.17.9.
    • Update containerd (containerd.io package) to v1.6.4.
    • Update runc version to v1.1.1.
    • Add packages for CentOS 9 stream and Fedora 36.
    Source code(tar.gz)
    Source code(zip)
  • v20.10.14(Mar 24, 2022)

This release of Docker Engine updates the default inheritable capabilities for containers to address CVE-2022-24769; a new version of the containerd.io runtime is also included to address the same issue.

    Daemon

    • Update the default inheritable capabilities.

    Builder

    • Update the default inheritable capabilities for containers used during build.

    Packaging

    • Update containerd (containerd.io package) to v1.5.11.
    • Update docker buildx to v0.8.1.
    Source code(tar.gz)
    Source code(zip)
  • v20.10.13(Mar 10, 2022)

    This release of Docker Engine contains some bug-fixes and packaging changes, updates to the docker scan and docker buildx commands, an updated version of the Go runtime, and new versions of the containerd.io runtime. Together with this release, we now also provide .deb and .rpm packages of Docker Compose V2, which can be installed using the (optional) docker-compose-plugin package.

    Builder

    • Updated the bundled version of buildx to v0.8.0.

    Daemon

    • Fix a race condition when updating the container's state moby/moby#43166.
    • Update the etcd dependency to prevent the daemon from incorrectly holding file locks moby/moby#43259
    • Fix detection of user-namespaces when configuring the default net.ipv4.ping_group_range sysctl moby/moby#43084.

    Distribution

    • Retry downloading image-manifests if a connection failure happens during image pull moby/moby#43333.

    Documentation

    • Various fixes in command-line reference and API documentation.

    Logging

    • Prevent an OOM when using the "local" logging driver with containers that produce a large amount of log messages moby/moby#43165.
• Update the fluentd log driver to prevent a potential daemon crash, and prevent containers from hanging when fluentd-async-connect=true is used and the remote server is unreachable moby/moby#43147.

    Packaging

• Provide .deb and .rpm packages for Docker Compose V2. Docker Compose v2.3.3 can now be installed on Linux using the docker-compose-plugin packages, which provides the docker compose subcommand on the Docker CLI. The Docker Compose plugin can also be installed and run standalone to be used as a drop-in replacement for docker-compose (Docker Compose V1) docker/docker-ce-packaging#638. The compose-cli-plugin package can also be used on older versions of the Docker CLI with support for CLI plugins (Docker CLI 18.09 and up).
    • Provide packages for the upcoming Ubuntu 22.04 "Jammy Jellyfish" LTS release docker/docker-ce-packaging#645, docker/containerd-packaging#271.
    • Update docker buildx to v0.8.0.
    • Update docker scan (docker-scan-plugin) to v0.17.0.
    • Update containerd (containerd.io package) to v1.5.10.
    • Update the bundled runc version to v1.0.3.
    • Update Golang runtime to Go 1.16.15.
    Source code(tar.gz)
    Source code(zip)
  • v20.10.12(Jan 10, 2022)

  • v20.10.11(Nov 18, 2021)

    20.10.11

    IMPORTANT

    Due to net/http changes in Go 1.16, HTTP proxies configured through the $HTTP_PROXY environment variable are no longer used for TLS (https://) connections. Make sure you also set an $HTTPS_PROXY environment variable for handling requests to https:// URLs.

Refer to the HTTP/HTTPS proxy section to learn how to configure the Docker Daemon to use a proxy server.
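On systemd-based hosts this is typically done with a drop-in unit that sets both variables for the daemon (a configuration sketch; the file path and proxy address are examples):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
```

After adding the drop-in, run systemctl daemon-reload followed by systemctl restart docker for the new environment to take effect.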

    Distribution

    Windows

    Packaging

    Source code(tar.gz)
    Source code(zip)
  • v20.10.10(Oct 25, 2021)

    20.10.10

    IMPORTANT

    Due to net/http changes in Go 1.16, HTTP proxies configured through the $HTTP_PROXY environment variable are no longer used for TLS (https://) connections. Make sure you also set an $HTTPS_PROXY environment variable for handling requests to https:// URLs.

    Refer to the HTTP/HTTPS proxy section to learn how to configure the Docker Daemon to use a proxy server.

    Builder

• Fix platform-matching logic that caused docker build to not find images in the local image cache on Arm machines when using BuildKit moby/moby#42954

    Runtime

• Add support for the clone3 syscall in the default seccomp policy to support running containers based on recent versions of Fedora and Ubuntu. moby/moby#42836.
    • Windows: update hcsshim library to fix a bug in sparse file handling in container layers, which was exposed by recent changes in Windows moby/moby#42944.
    • Fix some situations where docker stop could hang forever moby/moby#42956.

    Swarm

    • Fix an issue where updating a service did not roll back on failure moby/moby#42875.

    Packaging

    • Add packages for Ubuntu 21.10 "Impish Indri" and Fedora 35.
    • Update docker scan to v0.9.0
    • Update Golang runtime to Go 1.16.9.
    Source code(tar.gz)
    Source code(zip)
  • v20.10.9(Oct 4, 2021)

    This release is a security release with security fixes in the CLI, runtime, as well as updated versions of the containerd.io package and the Go runtime.

    Client

    • CVE-2021-41092 Ensure default auth config has address field set, to prevent credentials being sent to the default registry.

    Runtime

    • CVE-2021-41089 Create parent directories inside a chroot during docker cp to prevent a specially crafted container from changing permissions of existing files in the host’s filesystem.
    • CVE-2021-41091 Lock down file permissions to prevent unprivileged users from discovering and executing programs in /var/lib/docker.

    Packaging

    • Update Golang runtime to Go 1.16.8, which contains fixes for CVE-2021-36221 and CVE-2021-39293
    • Update static binaries and containerd.io rpm and deb packages to containerd v1.4.11 and runc v1.0.2 to address CVE-2021-41103.
    • Update the bundled buildx version to v0.6.3 for rpm and deb packages.
    Source code(tar.gz)
    Source code(zip)
  • v20.10.8(Aug 4, 2021)

    20.10.8

    IMPORTANT

    Due to net/http changes in Go 1.16, HTTP proxies configured through the $HTTP_PROXY environment variable are no longer used for TLS (https://) connections. Make sure you also set an $HTTPS_PROXY environment variable for handling requests to https:// URLs. Refer to the HTTP/HTTPS proxy section in the documentation to learn how to configure the Docker Daemon to use a proxy server.

    Deprecation

    • Deprecate support for encrypted TLS private keys. Legacy PEM encryption as specified in RFC 1423 is insecure by design. Because it does not authenticate the ciphertext, it is vulnerable to padding oracle attacks that can let an attacker recover the plaintext. Support for encrypted TLS private keys is now marked as deprecated, and will be removed in an upcoming release. docker/cli#3219
    • Deprecate Kubernetes stack support. Following the deprecation of Compose on Kubernetes, support for Kubernetes in the stack and context commands in the Docker CLI is now marked as deprecated, and will be removed in an upcoming release docker/cli#3174.
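For the encrypted-key deprecation above, the migration path is to store the TLS key unencrypted and protect it with file permissions instead. A sketch with openssl (the key file names are examples; a throwaway encrypted key is generated purely for demonstration):

```shell
# Generate a legacy-encrypted RSA key (demonstration only), then write out
# an unencrypted copy for use with the Docker CLI.
openssl genrsa -aes256 -passout pass:demo -out key-encrypted.pem 2048
openssl rsa -in key-encrypted.pem -passin pass:demo -out key.pem
# Rely on filesystem permissions rather than PEM encryption:
chmod 0400 key.pem
```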

    Client

    • Fix Invalid standard handle identifier errors on Windows docker/cli#3132.

    Rootless

    • Avoid can't open lock file /run/xtables.lock: Permission denied error on SELinux hosts moby/moby#42462.
    • Disable overlay2 when running with SELinux to prevent permission denied errors moby/moby#42462.
    • Fix x509: certificate signed by unknown authority error on openSUSE Tumbleweed moby/moby#42462.

    Runtime

    • Print a warning when using the --platform option to pull a single-arch image that does not match the specified architecture moby/moby#42633.
    • Fix incorrect Your kernel does not support swap memory limit warning when running with cgroups v2 moby/moby#42479.
    • Windows: Fix a situation where containers were not stopped if HcsShutdownComputeSystem returned an ERROR_PROC_NOT_FOUND error moby/moby#42613

    Swarm

    • Fix an issue where overlapping IP addresses could exist because a node failed to clean up its old load balancer IPs moby/moby#42538
    • Fix a deadlock in the log broker ("dispatcher is stopped") moby/moby#42537

    Packaging

    Known issue

    The ctr binary shipping with the static packages of this release is not statically linked, and will not run in Docker images using alpine as a base image. Users can install the libc6-compat package, or download a previous version of the ctr binary as a workaround. Refer to the containerd ticket related to this issue for more details: containerd/containerd#5824.

  • v20.10.7(Jun 2, 2021)

    20.10.7

    Client

    • Suppress warnings for deprecated cgroups docker/cli#3099.
    • Prevent sending SIGURG signals to container on Linux and macOS. The Go runtime (starting with Go 1.14) uses SIGURG signals internally as an interrupt to support preemptable syscalls. In situations where the Docker CLI was attached to a container, these interrupts were forwarded to the container. This fix changes the Docker CLI to ignore SIGURG signals docker/cli#3107, moby/moby#42421.

    Builder

    • Update BuildKit to version v0.8.3-3-g244e8cde moby/moby#42448:
      • Transform relative mountpoints for exec mounts in the executor to work around a breaking change in runc v1.0.0-rc94 and up. moby/buildkit#2137.
      • Add retry on image push 5xx errors. moby/buildkit#2043.
      • Fix build-cache not being invalidated when renaming a file that is copied using a COPY command with a wildcard. Note that this change invalidates existing build caches for copy commands that use a wildcard. moby/buildkit#2018.
      • Fix build-cache not being invalidated when using mounts moby/buildkit#2076.
    • Fix build failures when FROM image is not cached when using legacy schema 1 images moby/moby#42382.

    Logging

    • Update the hcsshim SDK to make daemon logs on Windows less verbose moby/moby#42292.

    Rootless

    • Fix capabilities not being honored when an image was built on a daemon with user-namespaces enabled moby/moby#42352.

    Networking

    • Update libnetwork to fix publishing ports on environments with kernel boot parameter ipv6.disable=1, and to fix a deadlock causing internal DNS lookups to fail moby/moby#42413.

    Contrib

    • Update rootlesskit to v0.14.2 to fix a timeout when starting the userland proxy with the slirp4netns port driver moby/moby#42294.
    • Fix "Device or resource busy" errors when running docker-in-docker on a rootless daemon moby/moby#42342.

  • v20.10.6(Apr 14, 2021)

  • v20.10.5(Mar 3, 2021)

  • v20.10.4(Feb 28, 2021)

    Release notes: https://docs.docker.com/engine/release-notes/#20104

    20.10.4

    Builder

    • Fix incorrect cache match for inline cache import with empty layers moby/moby#42061
    • Update BuildKit to v0.8.2 moby/moby#42061
      • resolver: avoid error caching on token fetch
      • fileop: fix checksum to contain indexes of inputs preventing certain cache misses
      • Fix reference count issues on typed errors with mount references (fixing invalid mutable ref errors)
      • git: set token only for main remote access allowing cloning submodules with different credentials
    • Ensure blobs get deleted in /var/lib/docker/buildkit/content/blobs/sha256 after pull. To clean up old state, run docker builder prune moby/moby#42065
    • Fix parallel pull synchronization regression moby/moby#42049
    • Ensure libnetwork state files do not leak moby/moby#41972

    Client

    • Fix a panic on docker login if no config file is present docker/cli#2959
    • Fix WARNING: Error loading config file: .dockercfg: $HOME is not defined docker/cli#2958

    Logger

    • Honor labels-regex config even if labels is not set moby/moby#42046
    • Handle long log messages correctly, preventing awslogs in non-blocking mode from splitting events bigger than 16kB moby/moby#41975

    Swarm

    • Fix issue with heartbeat not persisting upon restart moby/moby#42060
    • Fix potential stalled tasks moby/moby#42060
    • Fix --update-order and --rollback-order flags when only --update-order or --rollback-order is provided docker/cli#2963
    • Fix docker service rollback returning a non-zero exit code in some situations docker/cli#2964
    • Fix inconsistent progress-bar direction on docker service rollback docker/cli#2964
  • v20.10.3(Feb 2, 2021)

    Release notes: https://docs.docker.com/engine/release-notes/#20103

    20.10.3

    Security

    • CVE-2021-21285 Prevent an invalid image from crashing docker daemon
    • CVE-2021-21284 Lock down file permissions to prevent remapped root from accessing docker state
    • Ensure AppArmor and SELinux profiles are applied when building with BuildKit

    Client

    • Check contexts before importing them to reduce risk of extracted files escaping context store
    • Windows: prevent executing certain binaries from current directory docker/cli#2950
  • v19.03.15(Feb 2, 2021)

    Release notes: https://docs.docker.com/engine/release-notes/19.03/#190315

    Security

    • CVE-2021-21285 Prevent an invalid image from crashing docker daemon
    • CVE-2021-21284 Lock down file permissions to prevent remapped root from accessing docker state
    • Ensure AppArmor and SELinux profiles are applied when building with BuildKit

    Client

    • Check contexts before importing them to reduce risk of extracted files escaping context store
  • v20.10.2(Jan 5, 2021)

  • v20.10.1(Dec 15, 2020)

  • v20.10.0(Dec 9, 2020)

  • v19.03.14(Dec 2, 2020)

    For official release notes for Docker Engine CE and Docker Engine EE, visit the release notes page.

    Security

    • CVE-2020-15257: Update bundled static binaries of containerd to v1.3.9 moby/moby#41731. Package managers should update the containerd.io package.

    Builder

    • Beta versions of AppArmor are now parsed correctly, preventing build failures moby/moby#41542

    Rootless

    • Lock the state directory to prevent automatic clean-up by systemd-tmpfiles moby/moby#41635
    • dockerd-rootless.sh: support new containerd shim socket path convention moby/moby#41557

  • v19.03.13(Sep 17, 2020)

  • v19.03.12(Jun 30, 2020)
