Feels like Cloud Foundry. Runs on Kubernetes.

Overview

Migrate Cloud Foundry applications to Kubernetes using Kf

As your teams standardize on Kubernetes, migrating applications from existing platforms like Cloud Foundry is often one of the biggest challenges. Kf, which is now fully supported by Google Cloud Platform, was designed to help your teams minimize any disruption to developer workflows during the migration to Google Kubernetes Engine and Anthos.

Kf offers developers the Cloud Foundry experience while empowering operators to adopt declarative Kubernetes practices. It makes migrating Cloud Foundry workloads to Kubernetes straightforward and, most importantly, avoids major changes to developer workflows. You can also eliminate commercial Cloud Foundry licensing costs and take advantage of the config and policy features of Anthos for governance.

Developers use the Kf CLI, while operators can use kubectl to interact with Kf's underlying components.

Additional resources

To get started with Kf, check out these resources:

Comments
  • Generate all clients

    Fixes #

    Proposed Changes

    • Update knative.dev/pkg to include their informers for resourcequota and limitrange
    • update-codegen.sh deletes all code in pkg/client before executing (when running with "all")

    Release Notes

    
    
    cla: yes ok-to-test 
    opened by mattysweeps 8
  • pkg/kf/commands/spaces: configure BuildServiceAccount

    Fixes #785

    Proposed Changes

    • Adds a way to configure the BuildServiceAccount for spaces

    Release Notes

    Adds a way to configure a space's BuildServiceAccount property
    
    cla: yes ok-to-test 
    opened by poy 8
  • Improvements to the UAA installation process for a Kf cluster

    Proposed Changes

    • Introduces the project name as a variable in the UAA CloudSQL deployment
    • Fixes a bug in the UAA deployment manifest; the UAA endpoint was not reachable with the uaac tool
    • Improves the UAA Docker image build by adding a builder for the UAA WAR to the Dockerfile
    cla: no 
    opened by lasdolphin 6
  • Add container and buildpack start commands to app status

    Proposed Changes

    • Add a new start command struct to the app status and update the OpenAPI spec
    • Populate start command fields in app reconciler
    • Update Tekton pipeline to output app start command
    opened by lildann 5
  • docs: Improve GKE instructions

    This change improves GKE cluster creation instructions by adding a custom service account and new VPC network. It also integrates those instructions into the overall install instructions.

    cla: yes 
    opened by evandbrown 4
  • Add Concourse pipeline for integration tests

    Add a Concourse template.

    1. The initial pipeline listens for PRs and creates a new pipeline for each PR.
    2. The PR pipeline runs all tests, including integration tests.

    Screenshots of the initial pipeline and the PR pipeline (images not included).

    cla: yes 
    opened by mattysweeps 4
  • pkg/kf/internal/tools: adds command-generator

    command-generator generates cobra.Commands and markdown from YAML. This helps standardize the commands while also generating docs for them.

    This CL only implements the version, target, and debug commands.
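
    As a rough sketch of that approach (the CommandSpec fields and YAML layout here are hypothetical, not Kf's actual schema), a generator can unmarshal a YAML spec and build a cobra.Command from it:

    package main

    import (
        "fmt"
        "log"

        "github.com/spf13/cobra"
        "gopkg.in/yaml.v2"
    )

    // CommandSpec is a hypothetical schema for declaring a command in YAML;
    // Kf's real generator is more elaborate.
    type CommandSpec struct {
        Name  string `yaml:"name"`
        Short string `yaml:"short"`
    }

    const spec = `
    name: version
    short: Print the version of the kf CLI
    `

    // buildCommand turns the declarative spec into a runnable cobra.Command.
    // The same spec could be rendered through a text template to produce
    // markdown docs, which is the generator's second output.
    func buildCommand(s CommandSpec) *cobra.Command {
        return &cobra.Command{
            Use:   s.Name,
            Short: s.Short,
            RunE: func(cmd *cobra.Command, args []string) error {
                fmt.Fprintln(cmd.OutOrStdout(), "v0.0.0-sketch")
                return nil
            },
        }
    }

    func main() {
        var s CommandSpec
        if err := yaml.Unmarshal([]byte(spec), &s); err != nil {
            log.Fatal(err)
        }
        if err := buildCommand(s).Execute(); err != nil {
            log.Fatal(err)
        }
    }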

    fixes #337

    Proposed Changes

    • Add generator for commands and markdown

    Release Notes

    Added generated docs for `debug`, `target` and `version` commands.
    
    cla: yes 
    opened by poy 3
  • docs: Docsy is a submodule

    Previously, Docsy was a submodule, but its own embedded submodules failed because they were in a vendor/ directory that was excluded by our .gitignore file. This change modifies the .gitignore to allow vendor/ dirs in the docs/ folder and puts Docsy back as a submodule.

    cla: yes 
    opened by evandbrown 3
  • Sources now run once

    Fixes #460

    Proposed Changes

    • Sources only schedule a build if they aren't completed.
    • Sources receive an image destination rather than a registry so the names will actually stack in the container registry and not include random parts.
    • Users are not allowed to set Images, BuildpackBuilders, or ServiceAccounts on their builds anymore; these values are taken from the space.

    Release Notes

    • Fixed: Sources only schedule a build if they aren't completed.
    • Changed (BREAKING): Sources now require an output image rather than just a registry.
    • Changed: Derived image names are now `<registry>/app-<ns>-<app>:<base36 timestamp>` (see the sketch below).
    • Security (BREAKING): Users are not allowed to set Images, BuildpackBuilders, or ServiceAccounts on their builds anymore; these values are taken from the space.
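
    Given that format, a derived name could be produced along these lines (an illustrative helper, not Kf's actual code):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // derivedImageName follows the `<registry>/app-<ns>-<app>:<base36 timestamp>`
    // format from the release notes above.
    func derivedImageName(registry, namespace, app string) string {
        tag := strconv.FormatInt(time.Now().UTC().Unix(), 36) // base36-encoded Unix time
        return fmt.Sprintf("%s/app-%s-%s:%s", registry, namespace, app, tag)
    }

    func main() {
        // Prints something like: gcr.io/my-project/app-dev-frontend:rkq8xg
        fmt.Println(derivedImageName("gcr.io/my-project", "dev", "frontend"))
    }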
    
    bug cla: yes 
    opened by josephlewis42 3
  • Remove options string retroactively. Fixes #211.

    The template package we use does not let us adjust the internal formatting. I was unable to find a way to disable options via the exposed settings. However, we can capture cobra command output...and remove the troubled text.

    In the future, we should move away from using this library altogether, especially since it no longer exists in kubectl master.
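
    A minimal sketch of that capture-and-rewrite trick; the " [flags]" text being stripped is illustrative:

    package main

    import (
        "bytes"
        "fmt"
        "strings"

        "github.com/spf13/cobra"
    )

    func main() {
        cmd := &cobra.Command{
            Use:   "demo",
            Short: "Demonstrates post-processing cobra's rendered help output",
            Run:   func(cmd *cobra.Command, args []string) {},
        }

        // Capture the command's output in a buffer instead of writing to stdout.
        var buf bytes.Buffer
        cmd.SetOut(&buf)
        cmd.SetErr(&buf)
        cmd.SetArgs([]string{"--help"})
        if err := cmd.Execute(); err != nil {
            panic(err)
        }

        // Remove the text the template package won't let us disable, then emit it.
        fmt.Print(strings.ReplaceAll(buf.String(), " [flags]", ""))
    }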

    cla: yes 
    opened by mattysweeps 3
  • pkg/kf/commands/apps: adds start and stop commands

    The commands simply toggle the app.Spec.Instances.Stopped value. The controller has been updated to delete the Knative service if the app is stopped. This is required because Knative would otherwise delete the pods and then bring back a single pod for a while (even if scaling is set to 0): https://github.com/knative/serving/issues/4098
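
    A minimal sketch of the toggle itself, using simplified stand-ins for Kf's App types:

    package main

    import "fmt"

    // Simplified stand-ins for Kf's App API types.
    type InstancesSpec struct{ Stopped bool }
    type AppSpec struct{ Instances InstancesSpec }
    type App struct {
        Name string
        Spec AppSpec
    }

    // stop flips the flag the controller watches; the reconciler then deletes
    // the underlying knative service so no stray pod comes back.
    func stop(app *App)  { app.Spec.Instances.Stopped = true }
    func start(app *App) { app.Spec.Instances.Stopped = false }

    func main() {
        app := &App{Name: "my-app"}
        stop(app)
        fmt.Printf("%s stopped: %v\n", app.Name, app.Spec.Instances.Stopped)
    }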

    fixes #282

    cla: yes 
    opened by poy 3
  • Allow requests with a Host header to be routed by ignoring any specified port in the header value

    In cf, an HTTP request with a Host header containing a port number (e.g. example.com:443) is routed by ignoring the port number (e.g. example.com:443 matches example.com).

    Currently, if a port number is included in the Host header on an HTTP request to kf, it will not match any virtual service. For parity with cf, kf should ignore port numbers during matching by using a regexp instead of an exact match.

    For now, the ability to ignore a port specified in the Host header will be implemented behind the routeHostIgnoringPort flag.
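
    A minimal sketch of port-insensitive host matching with a regexp; the pattern is an assumption about the approach, not the actual VirtualService wiring:

    package main

    import (
        "fmt"
        "regexp"
    )

    // hostPattern matches a route's hostname with or without a :port suffix,
    // so "example.com:443" routes the same way as "example.com".
    func hostPattern(host string) *regexp.Regexp {
        return regexp.MustCompile(`^` + regexp.QuoteMeta(host) + `(:\d+)?$`)
    }

    func main() {
        re := hostPattern("example.com")
        fmt.Println(re.MatchString("example.com"))      // true
        fmt.Println(re.MatchString("example.com:443"))  // true
        fmt.Println(re.MatchString("example.com.evil")) // false
    }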

    opened by kenneth-rosario 0
Releases (v2.11.13)
  • v2.11.13 (Nov 22, 2022)

    Changelog

    • Added: Deployment viewer permission to developers and auditors.
    • Added: Start commands can be shown for apps.
    • Added: Ability to disable VirtualService retries.
    • Added: The ability for Tasks to work with NFS.
    • Added: Examples of using set-env with arguments that look like flags.
    • Added: Documents explaining how to debug Kf resources using kubectl.
    • Added: Annotation to Apps that allows kubectl to pick the user container by default for logs, exec, cp, etc.
    • Added: Documentation explaining what is tested during Kf releases.
    • Added: A kf fix-orphaned-bindings command to fix bindings without owners; it runs in dry-run mode by default.
    • Fixed: Deployment Manager example creating Kubernetes with an unsupported base image.

    Risks and mitigations

    • kf fix-orphaned-bindings may stop partway through if it encounters an error. If that happens, fix the broken binding, or use the dry-run mode (default) and run the kubectl commands it recommends manually.
    • If NFS breaks Tasks, disable the feature.
    • If the App start command feature adds too much latency to reconciliation, disable it.

    Known issues

    • Routing retries won't be applied immediately. Instead, you'll have to wait for a periodic reconciliation or force one by adding/removing a route or deleting the VirtualService.
    Source code(tar.gz)
    Source code(zip)
  • v2.11.10 (Oct 20, 2022)

    Release Notes

    This is a patch of the v2.11.9 release and doesn't include all changes on the main branch.

    • Fixed RouteSpec ordering when hostnames are the same and paths are the same length.
    • Changed Route reconciliation to not track the status of the VirtualService, which caused too many unnecessary updates to the VirtualService.

    Risks and Mitigations

    This release changes the interaction between VirtualServices and Routes. Route statuses reflect when a VirtualService has been updated, but may indicate they're ready before the VirtualService is ready. Most route changes should take effect nearly instantly, but in rare cases may take longer. This means that Apps may also show ready before their routes are fully propagated.

    To return to the old behavior, set routeTrackVirtualService to true:

    kubectl patch \
        kfsystem kfsystem \
        --type='json' \
        -p='[{"op":"add","path":"/spec/kf/config/routeTrackVirtualService","value":true}]'
    

    The new default (not tracking the VirtualService) reduces the number of conflicting updates in spaces where hundreds of routes are mapped to the same domain.

    Source code(tar.gz)
    Source code(zip)
  • v2.11.9 (Oct 14, 2022)

    This is a patch of the v2.11.8 release and doesn't include all changes on the main branch.

    Release Notes

    • Changed: Reconciliation method has been changed on ClusterRoles for the Kf control plane because it was causing thrashing with the Kf operator.
    • Fixed: CPU limits are no longer set if an application defines cpu in the manifest; the behavior reverts to Kf's original behavior, where cpu indicates a request rather than a limit (see the sketch after this list).
    • Added: A new manifest property cpu-limit which sets an upper bound on CPUs. See: https://kf.dev/docs/v2.11/developer/build-and-deploy/manifest/
    • Added: A new document describing resource management and best practices in Kf: https://kf.dev/docs/v2.11/developer/scaling/resource-management/
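
    Expressed with the standard Kubernetes API types, the request-versus-limit distinction looks like this (values are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        resources := corev1.ResourceRequirements{
            // Manifest `cpu` behaves as a request: guaranteed scheduling
            // capacity, with no upper bound on burst usage.
            Requests: corev1.ResourceList{
                corev1.ResourceCPU: resource.MustParse("500m"),
            },
            // Manifest `cpu-limit` sets the upper bound; omit it and the
            // app's CPU usage is unbounded.
            Limits: corev1.ResourceList{
                corev1.ResourceCPU: resource.MustParse("1"),
            },
        }
        fmt.Printf("request=%v limit=%v\n",
            resources.Requests.Cpu(), resources.Limits.Cpu())
    }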

    Risks and Mitigations

    • CPU-intensive apps will not get an upper bound unless one is explicitly set in the manifest. Operators should monitor for intensive applications and ask their teams to set cpu-limit.
    • App teams should be asked to validate that their CPU requirements are correct in Kf; teams that increased CPU to account for bursts during app startup may be able to reduce their requests.
    Source code(tar.gz)
    Source code(zip)
  • v2.11.8 (Sep 21, 2022)

    Changelog

    • Changed: Apps now get a startup probe and default liveness and readiness probes that more closely match CF's Diego runtime.
    • Added: Developers can set startupProbe, livenessProbe, and readinessProbe in their manifests, mapping to the Kubernetes concepts of the same name (see the sketch after this list).
    • Added: Kf manifests now support health-check-invocation-timeout which was introduced in CAPIv3.
    • Added: New validation for health check fields in manifests to catch errors earlier and provide better messages.
    • Breaking: Kf now requires a Kubernetes version that supports startupProbe in Pods (v1.20 or above).
    • Changed: Apps pushed with container images now support overriding the entrypoint so they can be used with NFS UID/GID mapping.
    • Added: Platform operators can now configure default progress deadlines and termination grace periods for apps.
    • Added: Platform operators can now pin buildpacks using git tags.
    • Added: Platform operators can now configure task retention policies.
    • Fixed: Space auditors now get the same set of permissions as developers (except for secrets).
    • Fixed: Apps with bindings in their manifest will now delete the bindings when the app is deleted.
    • Fixed: kf.dev "edit this page" links now point to GitHub.
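
    As a rough sketch, these probes map onto the Kubernetes API objects of the same name; ports and thresholds here are illustrative, and ProbeHandler is the field name in recent k8s.io/api releases:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // startupProbe: give a slow-starting app up to 30 x 10s to come up
        // before liveness checks begin (requires Kubernetes v1.20+).
        startup := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
            },
            FailureThreshold: 30,
            PeriodSeconds:    10,
        }
        // livenessProbe: restart the container if it stops responding;
        // a readinessProbe would be declared the same way.
        liveness := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
            },
            PeriodSeconds: 10,
        }
        fmt.Println(startup.FailureThreshold, liveness.PeriodSeconds)
    }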
    Source code(tar.gz)
    Source code(zip)
  • v2.11.7 (Sep 21, 2022)

Owner
Google
vcluster - Create fully functional virtual Kubernetes clusters - Each cluster runs inside a Kubernetes namespace and can be started within seconds

Loft Labs 2.3k Jan 4, 2023
Kubernetes OS Server - Kubernetes Extension API server exposing OS configuration like sysctl via Kubernetes API

KOSS is an Extension API Server which exposes OS properties and functionality through the Kubernetes API, so they can be accessed using e.g. kubectl. At the moment this is highly experimental and only managing sysctl is supported. To make things actually usable, you must run the KOSS binary as root on the machine you will be managing.

Mateusz Gozdek 3 May 19, 2021
Opinionated platform that runs on Kubernetes and takes you from App to URL in one step.

Julien ADAMEK 2 Nov 13, 2022
Cloud-Z gathers information and performs benchmarks on cloud instances across multiple cloud providers.

CloudSnorkel 16 Jun 8, 2022
cloud-on-k8s - Elastic Cloud on Kubernetes (ECK)

null 1 Jan 29, 2022
The OCI Service Operator for Kubernetes (OSOK) makes it easy to connect and manage OCI services from a cloud native application running in a Kubernetes environment.

Oracle 24 Sep 27, 2022
rld is a tiny tool that runs a Go program and watches for changes to it.

Francis Sunday 10 Jun 13, 2022
Runwasi - A containerd shim which runs wasm workloads in wasmtime

Brian Goff 329 Dec 28, 2022
GoScanPlayers - Hypixel online player tracker. Runs as an executable and can notify a Discord Webhook

null 2 Oct 16, 2022
A plugin for argo which behaves like I'd like

null 120 Dec 27, 2022
The GCP Enterprise Cloud Cost Optimiser, or gecco for short, helps teams optimise their cloud project costs.

aeihr. 2 Jan 9, 2022
An open-source, distributed, cloud-native CD (Continuous Delivery) product designed for developers

null 0 Oct 19, 2021
Cloud-gaming-operator - The one that manages VMs for cloud gaming built on GCE

Naoki Kishi 1 Jan 22, 2022
Kubernetes Operator for a Cloud-Native OpenVPN Deployment.

Meerkat is a Kubernetes Operator that facilitates the deployment of OpenVPN in a Kubernetes cluster. By leveraging Hashicorp Vault, Meerkat securely manages the underlying PKI.

Oliver Borchert 32 Jan 4, 2023
Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes (k8s + wg = kg)

Lucas Servén Marín 1.6k Jan 1, 2023
Kubernetes operator to autoscale Google's Cloud Bigtable clusters

RD Station 22 Nov 5, 2021
Cloud Native Configurations for Kubernetes

Tal Liron 5 Nov 4, 2021
A Kubernetes Operator that helps DevOps teams accelerate their journey into the cloud and K8s.

OAM operator scaffolds all of the code required to create resources across various cloud providers, including both K8s and non-K8s resources.

Pavan Kumar 2 Nov 30, 2021
Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration

null 3k Dec 30, 2022