Convenience of containers, security of virtual machines

Overview

With firebuild, you can build and deploy secure VMs directly from Dockerfiles and Docker images in just a few minutes.

The concept of firebuild is to leverage as much of the existing Docker ecosystem as possible. There are thousands of Docker images out there. Docker images are awesome because they encapsulate the software we want to run in our workloads together with its dependencies. Dockerfiles are what Docker images are built from; they are the blueprints of modern infrastructure. There are thousands of them for almost anything one can imagine, and new ones are very easy to write.

With firebuild it is possible to:

  • build root file systems directly from Dockerfiles
  • tag and version root file systems
  • run and manage microvms on a single host
  • define run profiles

High level example

Build and start HashiCorp Consul 1.9.4 on Firecracker in three simple steps:

  • build a base operating system image
  • build Consul image
  • start the application
sudo $GOPATH/bin/firebuild baseos \
    --profile=standard \
    --dockerfile $(pwd)/baseos/_/alpine/3.12/Dockerfile
sudo $GOPATH/bin/firebuild rootfs \
    --profile=standard \
    --dockerfile=git+https://github.com/hashicorp/docker-consul.git:/0.X/Dockerfile \
    --cni-network-name=machine-builds \
    --ssh-user=alpine \
    --vmlinux-id=vmlinux-v5.8 \
    --tag=combust-labs/consul:1.9.4
sudo $GOPATH/bin/firebuild run \
    --profile=standard \
    --name=consul1 \
    --from=combust-labs/consul:1.9.4 \
    --cni-network-name=machines \
    --vmlinux-id=vmlinux-v5.8

Find the IP of the consul1 VM and query Consul:

VMIP=$(sudo $GOPATH/bin/firebuild inspect \
    --profile=standard \
    --vmm-id=consul1 | jq '.NetworkInterfaces[0].StaticConfiguration.IPConfiguration.IP' -r)
$ curl http://${VMIP}:8500/v1/status/leader
"127.0.0.1:8300"

But how?

clone and build from sources

mkdir -p $GOPATH/src/github.com/combust-labs/firebuild
cd $GOPATH/src/github.com/combust-labs/firebuild
git clone https://github.com/combust-labs/firebuild.git .
go install

The binary will be placed in $GOPATH/bin/firebuild.

create a profile

# create required directories, these need to exist before the profile can be created:
sudo mkdir -p /firecracker/rootfs
sudo mkdir -p /firecracker/vmlinux
sudo mkdir -p /srv/jailer
sudo mkdir -p /var/lib/firebuild
# create a profile:
sudo $GOPATH/bin/firebuild profile-create \
	--profile=standard \
	--binary-firecracker=$(readlink /usr/bin/firecracker) \
	--binary-jailer=$(readlink /usr/bin/jailer) \
	--chroot-base=/srv/jailer \
	--run-cache=/var/lib/firebuild \
	--storage-provider=directory \
	--storage-provider-property-string="rootfs-storage-root=/firecracker/rootfs" \
	--storage-provider-property-string="kernel-storage-root=/firecracker/vmlinux" \
	--tracing-enable

Kernel images will be stored in /firecracker/vmlinux, root file systems will be stored in /firecracker/rootfs.

build the kernel

The examples use the 5.8 Linux kernel image which is built using the configuration from the baseos/kernel/5.8.config file in this repository. To build the kernel:

export KERNEL_VERSION=v5.8
mkdir -p /tmp/linux && cd /tmp/linux
git clone https://github.com/torvalds/linux.git .
git checkout ${KERNEL_VERSION}
wget -O .config https://raw.githubusercontent.com/combust-labs/firebuild/master/baseos/kernel/5.8.config
make vmlinux -j32 # adapt to the number of cores you have

Once built, copy the kernel to the storage:

mv /tmp/linux/vmlinux /firecracker/vmlinux/vmlinux-${KERNEL_VERSION}

setup CNI

firebuild assumes CNI availability. Installing the plugins is very straightforward. Create /opt/cni/bin/ directory and download the plugins:

mkdir -p /opt/cni/bin
curl -O -L https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
tar -C /opt/cni/bin -xzf cni-plugins-linux-amd64-v0.9.1.tgz

Firecracker also requires the tc-redirect-tap plugin. Unfortunately, this one does not offer downloadable binaries and has to be built from sources.

mkdir -p $GOPATH/src/github.com/awslabs/tc-redirect-tap
cd $GOPATH/src/github.com/awslabs/tc-redirect-tap
git clone https://github.com/awslabs/tc-redirect-tap.git .
make install

create a dedicated CNI network for the builds

Feel free to change the ipam.subnet or set multiple subnets; see the host-local IPAM CNI plugin documentation for details.

cat <<EOF > /etc/cni/conf.d/machine-builds.conflist
{
    "name": "machine-builds",
    "cniVersion": "0.4.0",
    "plugins": [
        {
            "type": "bridge",
            "name": "builds-bridge",
            "bridge": "builds0",
            "isDefaultGateway": true,
            "ipMasq": true,
            "hairpinMode": true,
            "ipam": {
                "type": "host-local",
                "subnet": "192.168.128.0/24",
                "resolvConf": "/etc/resolv.conf"
            }
        },
        {
            "type": "firewall"
        },
        {
            "type": "tc-redirect-tap"
        }
    ]
}
EOF

caution

The maximum socket path length in the Linux kernel is 107 characters plus \0:

struct sockaddr_un {
	__kernel_sa_family_t sun_family; /* AF_UNIX */
	char sun_path[UNIX_PATH_MAX];	/* pathname */
};

The --chroot-base value must have a maximum length of 31 characters. The constant jailer path suffix used by firebuild is 76 characters:

  • constant /firecracker-v0.22.4-x86_64/ (automatically generated by the jailer)
  • VM ID is always 20 characters long
  • constant /root/run/firecracker.socket assumed by the jailer

Example: /firecracker-v0.22.4-x86_64/sifuqm4rq2runxparjcx/root/run/firecracker.socket.

Using more than 31 characters for the --chroot-base value, whether set in the profile or via the --chroot-base command flag, will lead to a very obscure error. Firecracker will report an error similar to:

INFO[0006] Called startVMM(), setting up a VMM on /mnt/sdd1/firebuild/jailer/firecracker-v0.22.4-x86_64/6b41ecc3783c4f38a743c9c8af4bbe0f/root/run/firecracker.socket
WARN[0009] Failed handler "fcinit.StartVMM": Firecracker did not create API socket /mnt/sdd1/firebuild/jailer/firecracker-v0.22.4-x86_64/6b41ecc3783c4f38a743c9c8af4bbe0f/root/run/firecracker.socket: context deadline exceeded
{"@level":"error","@message":"Firecracker VMM did not start, build failed","@module":"rootfs","@timestamp":"2021-03-14T19:20:49.856228Z","reason":"Failed to start machine: Firecracker did not create API socket /mnt/sdd1/firebuild/jailer/firecracker-v0.22.4-x86_64/6b41ecc3783c4f38a743c9c8af4bbe0f/root/run/firecracker.socket: context deadline exceeded","veth-name":"vethHvfZiskhLkQ","vmm-id":"6b41ecc3783c4f38a743c9c8af4bbe0f"}
{"@level":"info","@message":"cleaning up jail directory","@module":"rootfs","@timestamp":"2021-03-14T19:20:49.856407Z","veth-name":"vethHvfZiskhLkQ","vmm-id":"6b41ecc3783c4f38a743c9c8af4bbe0f"}
{"@level":"info","@message":"cleaning up temp build directory","@module":"rootfs","@timestamp":"2021-03-14T19:20:49.856458Z"}
WARN[0010] firecracker exited: signal: killed

In the above example, the path is 114 characters long. Changing the chroot to /mnt/sdd1/fc/jail would solve the problem.
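The arithmetic can be checked with a couple of lines of shell; a minimal sketch, with the 76-character jailer suffix hard-coded:

```shell
# Check whether a candidate --chroot-base keeps the final socket path
# within the 107-character AF_UNIX limit. SUFFIX_LEN is the constant
# jailer suffix described above: 28 + 20 (VM ID) + 28 = 76 characters.
CHROOT_BASE="/mnt/sdd1/fc/jail"
SUFFIX_LEN=76
TOTAL=$(( ${#CHROOT_BASE} + SUFFIX_LEN ))
if [ "${TOTAL}" -le 107 ]; then
  echo "${CHROOT_BASE}: ${TOTAL} characters, OK"
else
  echo "${CHROOT_BASE}: ${TOTAL} characters, too long"
fi
```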

build the base operating system root file system

firebuild uses the Docker metaphor. An image of an application is built FROM a base. An application image can be built FROM alpine:3.13, for example, or FROM debian:buster-slim, or FROM registry.access.redhat.com/ubi8/ubi-minimal:8.3, and dozens of others.

In order to fulfill those semantics, a base operating system image must be built before the application root file system can be created.

Build a base Debian Buster slim:

sudo $GOPATH/bin/firebuild baseos \
    --profile=standard \
    --dockerfile $(pwd)/baseos/_/debian/buster-slim/Dockerfile

Because the baseos root file system is built completely with Docker, there is no need to configure the kernel storage.

It's possible to tag the baseos output using the --tag= argument, for example:

sudo $GOPATH/bin/firebuild baseos \
    --profile=standard \
    --dockerfile $(pwd)/baseos/_/debian/buster-slim/Dockerfile \
    --tag=custom/os:latest

create a Postgres 13 VM rootfs directly from the upstream Dockerfile

The upstream Dockerfile is built FROM debian:buster-slim, which is the baseos built in the previous step:

sudo $GOPATH/bin/firebuild rootfs \
    --profile=standard \
    --dockerfile=git+https://github.com/docker-library/postgres.git:/13/Dockerfile \
    --cni-network-name=machine-builds \
    --vmlinux-id=vmlinux-v5.8 \
    --mem=512 \
    --tag=combust-labs/postgres:13

create a separate CNI network for running VMs

For example:

cat <<EOF > /etc/cni/conf.d/machines.conflist
{
    "name": "machines",
    "cniVersion": "0.4.0",
    "plugins": [
        {
            "type": "bridge",
            "name": "machines-bridge",
            "bridge": "machines0",
            "isDefaultGateway": true,
            "ipMasq": true,
            "hairpinMode": true,
            "ipam": {
                "type": "host-local",
                "subnet": "192.168.127.0/24",
                "resolvConf": "/etc/resolv.conf"
            }
        },
        {
            "type": "firewall"
        },
        {
            "type": "tc-redirect-tap"
        }
    ]
}
EOF

run the VM from the resulting tag

Once the root file system is built, start the VM:

sudo $GOPATH/bin/firebuild run \
    --profile=standard \
    --name=postgres1 \
    --from=combust-labs/postgres:13 \
    --cni-network-name=machines \
    --vmlinux-id=vmlinux-v5.8 \
    --mem=512 \
    --env="POSTGRES_PASSWORD=some-password"

To avoid passing the password on the command line, you can use the --env-file flag instead.

To verify that the database is running, find the IP address of the Postgres VM:

VMIP=$(sudo $GOPATH/bin/firebuild inspect \
    --profile=standard \
    --vmm-id=postgres1 | jq '.NetworkInterfaces[0].StaticConfiguration.IPConfiguration.IP' -r)
$ nc -zv ${VMIP} 5432
Connection to 192.168.127.94 5432 port [tcp/postgresql] succeeded!
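As noted above, the password can also be kept in an env file instead of being passed on the command line; a minimal sketch, where the path /tmp/postgres.env is an arbitrary example:

```shell
# Keep the secret in an env file; the file location and name are
# arbitrary examples, not a firebuild convention.
cat > /tmp/postgres.env <<'EOF'
POSTGRES_PASSWORD=some-password
EOF
# the run command would then take: --env-file=/tmp/postgres.env
```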

If SSH access to the VM is required, this command can be used instead:

sudo $GOPATH/bin/firebuild run \
    --profile=standard \
    --name=postgres2 \
    --from=combust-labs/postgres:13 \
    --cni-network-name=machines \
    --vmlinux-id=vmlinux-v5.8 \
    --mem=512 \
    --env="POSTGRES_PASSWORD=some-password" \
    --ssh-user=debian \
    --identity-file=path/to/the/identity.pub

additional run flags

  • --daemonize: when specified, runs the VM in daemonized mode
  • --env-file: full path to an environment file, multiple OK
  • --env: environment variable to configure the VM with, multiple OK, format --env=VAR_NAME=value
  • --hostname: hostname to apply to the VM, which the VM uses to resolve itself
  • --name: name of the virtual machine; if empty, a random string will be used; maximum 20 characters, only the a-zA-Z0-9 ranges are allowed
  • --ssh-user: username for SSH access to the VM; these are defined in the baseos Dockerfiles and follow the EC2 pattern: alpine for Alpine images and debian for Debian images; together with --identity-file this allows access to the running VM via SSH
  • --identity-file: full path to the public SSH key to deploy to the running VM

environment merging

The final environment variables are written to the /etc/profile.d/run-env.sh file. All files specified with --env-file are merged first, in order of occurrence; variables specified with --env are merged last.
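The merge order can be illustrated with plain shell; a sketch only, not firebuild internals (file paths and variable names are made up):

```shell
# Two example env files; the later one overrides the earlier one,
# and --env values (simulated by the plain assignment) merge last.
cat > /tmp/env-a <<'EOF'
DB_HOST=from-file-a
DB_PORT=5432
EOF
cat > /tmp/env-b <<'EOF'
DB_HOST=from-file-b
EOF
set -a
. /tmp/env-a
. /tmp/env-b        # merged second: DB_HOST becomes from-file-b
set +a
DB_HOST=from-cli    # --env equivalent: merged last, wins
echo "${DB_HOST}:${DB_PORT}"
```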

build directly from a Docker image

Sometimes having just the Dockerfile is not sufficient to execute a rootfs build. A good example is the Jaeger all-in-one Dockerfile, which depends on a binary artifact built via Makefile prior to the Docker build. In such cases, it's possible to build the VM rootfs directly from the Docker image:

sudo $GOPATH/bin/firebuild rootfs \
    --profile=standard \
    --docker-image=jaegertracing/all-in-one:1.22 \
    --docker-image-base=alpine:3.13 \
    --cni-network-name=machine-builds \
    --vmlinux-id=vmlinux-v5.8 \
    --mem=512 \
    --tag=combust-labs/jaeger-all-in-one:1.22

The --docker-image-base is required because the underlying operating system the image was built from cannot be established from the Docker manifest.

To access the Jaeger Query UI via the host:

sudo iptables -t filter -A FORWARD \
    -m comment --comment "jaeger:1.22" \
    -p tcp -d 192.168.127.100 --dport 16686 \
    -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
sudo iptables -t nat -A PREROUTING \
    -m comment --comment "jaeger:1.22" \
    -p tcp -i eno1 --dport 16686 \
    -j DNAT \
    --to-destination 192.168.127.100:16686

The exact IP address can be obtained using the firebuild inspect --profile=... --vmm-id=... command. The destination IP and interface depend on your configuration; use ip link to find the broadcast interfaces that are up and the relevant IP addresses. Tool integration will be added at a later stage.

how does it work

The builder pulls the requested Docker image with Docker. It then opens the image via the docker save command and looks up manifest.json and the Docker image config JSON explicitly stated in the manifest. Once the config is fetched, a temporary Dockerfile is built from the Docker config history. Any ADD and COPY commands for resources other than the first / are used to extract files from the saved source image. When the resources are exported, the build continues exactly the same way as in the case of a Dockerfile build.
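The manifest lookup step can be illustrated with a few lines of shell; the manifest content below is fabricated for illustration, and sed stands in for the builder's actual JSON handling:

```shell
# A docker-save tarball contains manifest.json pointing at the image
# config; this fabricated manifest illustrates the lookup step only.
mkdir -p /tmp/img-extract
cat > /tmp/img-extract/manifest.json <<'EOF'
[{"Config":"abc123.json","RepoTags":["alpine:3.13"],"Layers":["l1/layer.tar"]}]
EOF
CONFIG=$(sed -n 's/.*"Config":"\([^"]*\)".*/\1/p' /tmp/img-extract/manifest.json)
echo "image config: ${CONFIG}"
```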

terminating a daemonized VM

A VM started with the --daemonize flag can be stopped in three ways:

  • by executing the kill tool command; this is a clean stop which takes care of all the necessary clean up
  • by executing reboot from inside an SSH connection to the VM; unclean stop, a manual purge of the CNI cache, jailer directory, run cache and the veth link is needed
  • by issuing an HTTP call (for example with curl) against the VM socket file; unclean stop, a manual purge of the CNI cache, jailer directory, run cache and the veth link is needed

VM kill command

To get the VM ID, look closely at the output of the run ... --daemonize command:

{
    "@level":"info",
    "@message":"VMM running as a daemon",
    "@module":"run",
    "@timestamp":"2021-03-09T19:55:41.684488Z",
    "cache-dir":"/var/lib/firebuild/831b7068f7924584b384260e8d262834",
    "ip-address":"192.168.127.3",
    "ip-net":"192.168.127.3/24",
    "jailer-dir":"/srv/jailer/firecracker-v0.22.4-x86_64/831b7068f7924584b384260e8d262834",
    "pid":17904,
    "veth-name":"vethydMSApKfoDu",
    "vmm-id":"831b7068f7924584b384260e8d262834"
}
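The vmm-id can also be captured programmatically from that JSON output; a sketch using sed, with a shortened, fabricated log line standing in for the real output:

```shell
# Extract the vmm-id field from the daemonized run output shown above
# (the log line here is a truncated example, not real output).
LOGLINE='{"@message":"VMM running as a daemon","vmm-id":"831b7068f7924584b384260e8d262834"}'
VMMID=$(printf '%s' "${LOGLINE}" | sed -n 's/.*"vmm-id":"\([^"]*\)".*/\1/p')
echo "${VMMID}"
```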

Copy the VM ID from the output and run:

sudo $GOPATH/bin/firebuild kill --profile=standard --vmm-id=${VMMID}

purging the remains of the VMs stopped without the kill command

If a VM exits in any way other than via the kill command, the following data remains on the host:

  • jail directory with all contents
  • run cache directory with all contents
  • CNI interface with CNI cache directory

To remove this data, run the purge command.

sudo $GOPATH/bin/firebuild purge --profile=standard

list VMs

sudo $GOPATH/bin/firebuild ls --profile=standard

Example output:

2021-03-12T01:46:21.752Z [INFO]  ls: vmm: id=df45b6e14538456286e4a4bc1f9bf6e2 running=true pid=20658 image=tests/postgres:13 started="2021-03-12 01:46:11 +0000 UTC" ip-address=192.168.127.9

Dockerfile git+http(s):// URL

It's possible to reference a Dockerfile residing in a git repository available under an HTTP(S) URL. Here's an example:

sudo $GOPATH/bin/firebuild rootfs \
    --profile=standard \
    --dockerfile=git+https://github.com/hashicorp/docker-consul.git:/0.X/Dockerfile#master \
    --cni-network-name=machine-builds \
    --vmlinux-id=vmlinux-v5.8 \
    --tag=combust-labs/consul:1.9.4

The URL format is:

git+http(s)://host:port/path/to/repo.git:/path/to/Dockerfile[#<commit-hash | branch-name | tag-name>]

And will be processed as:

  • the path /path/to/repo.git:/path/to/Dockerfile will be split on : and must contain both sides
    • /path/to/repo.git is the git repository path
    • /path/to/Dockerfile is the path to the Dockerfile in the repository and must point to a file after clone and checkout
  • the optional #fragment may be a commit hash, a branch name or a tag name
    • if no #fragment is given, the program uses the default cloned branch; check the remote to find out what it is
  • the cloned repository will have a single remote, and that remote will be used
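The split described above can be sketched with shell parameter expansion; this is illustrative only, not firebuild's actual parser:

```shell
# Split a git+https Dockerfile URL into repository, Dockerfile path
# and optional fragment, following the rules above.
URL="git+https://github.com/hashicorp/docker-consul.git:/0.X/Dockerfile#master"
FRAGMENT="${URL#*#}"
[ "${FRAGMENT}" = "${URL}" ] && FRAGMENT=""   # no #fragment present
BASE="${URL%%#*}"
BASE="${BASE#git+}"            # strip the git+ scheme prefix
REPO="${BASE%:*}"              # everything before the last ':'
DOCKERFILE="${BASE##*:}"       # everything after the last ':'
echo "${REPO} ${DOCKERFILE} ${FRAGMENT}"
```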

supported Dockerfile URL formats

  • http:// and https:// for direct paths to the Dockerfile; these handle a single file only and do not attempt to load any resources referenced by ADD / COPY commands; the server must be capable of responding to HEAD and GET HTTP requests; more details in caveats when building from the URL further in this document
  • special git+http:// and git+https://, documented above
  • standard ssh://, git:// and git+ssh:// URL formats with the expectation that the path meets the criteria from the git+http(s):// URL section above

caveats when building from the URL

The build command will resolve the resources referenced in ADD and COPY commands even when loading the Dockerfile via the URL. The context root in this case will be established by removing the file name from the URL. An example:

  • consider the URL https://raw.githubusercontent.com/hashicorp/docker-consul/master/0.X/Dockerfile
  • the Dockerfile name will be removed from the URL and the context is https://raw.githubusercontent.com/hashicorp/docker-consul/master/0.X
  • assuming that the Dockerfile contains ADD ./docker-entrypoint.sh ..., the resolver will try loading https://raw.githubusercontent.com/hashicorp/docker-consul/master/0.X/docker-entrypoint.sh
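Deriving the context is just a matter of stripping the file name from the URL; a sketch:

```shell
# Derive the build context from a direct Dockerfile URL and resolve
# a relative ADD/COPY source against it.
URL="https://raw.githubusercontent.com/hashicorp/docker-consul/master/0.X/Dockerfile"
CONTEXT="${URL%/*}"              # drop the trailing file name
RESOURCE="docker-entrypoint.sh"  # from: ADD ./docker-entrypoint.sh ...
RESOLVED="${CONTEXT}/${RESOURCE}"
echo "${RESOLVED}"
```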

The following limitations apply when loading resources via URL:

  • if the ADD or COPY source points to a directory, the command will fail: there is no unified way of listing directories via HTTP, so the resolver does not even attempt it and will most likely fail on the HTTP GET request
  • file permissions will not be carried over because there is no way to infer the file mode from an HTTP response

unsupported Dockerfile features

The build program does not support:

  • ONBUILD commands
  • HEALTHCHECK commands
  • STOPSIGNAL commands

multi-stage Dockerfile builds

firebuild supports multi-stage Dockerfile builds. Here's an example using the grepplabs Kafka Proxy.

Build v0.2.8 using git repository link:

sudo $GOPATH/bin/firebuild rootfs \
    --profile=standard \
    --dockerfile=git+https://github.com/grepplabs/kafka-proxy.git:/Dockerfile#v0.2.8 \
    --cni-network-name=machine-builds \
    --vmlinux-id=vmlinux-v5.8 \
    --tag=combust-labs/kafka-proxy:0.2.8

tracing

TODO: eat your own dog food, start with firebuild.

Start Jaeger, for example:

docker run --rm -ti \
    -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
    -p 5775:5775/udp \
    -p 6831:6831/udp \
    -p 6832:6832/udp \
    -p 5778:5778 \
    -p 16686:16686 \
    -p 14268:14268 \
    -p 14250:14250 \
    -p 9411:9411 \
    jaegertracing/all-in-one:1.22

And configure respective commands with:

... --tracing-enable \
--tracing-collector-host-port=... \

The default value of --tracing-collector-host-port is 127.0.0.1:6831. To enable tracer log output, set the --tracing-log-enable flag.

license

Unless explicitly stated: AGPL-3.0 License.
