

Kubernetes Community

Welcome to the Kubernetes community!

This is the starting point for joining and contributing to the Kubernetes community - improving docs, improving code, giving talks, etc.

To learn more about the project structure and organization, please refer to Project Governance information.


The communication page lists communication channels like chat, issues, mailing lists, conferences, etc.

For more specific topics, try a SIG.


Kubernetes has the following types of groups that are officially supported:

  • Committees are named sets of people that are chartered to take on sensitive topics. This group is encouraged to be as open as possible while achieving its mission but, because of the nature of the topics discussed, private communications are allowed. Examples of committees include the steering committee and things like security or code of conduct.
  • Special Interest Groups (SIGs) are persistent open groups that focus on a part of the project. SIGs must have open and transparent proceedings. Anyone is welcome to participate and contribute provided they follow the Kubernetes Code of Conduct. The purpose of a SIG is to own and develop a set of subprojects.
    • Subprojects: Each SIG can have a set of subprojects. These are smaller groups that can work independently. Some subprojects will be part of the main Kubernetes deliverables, while others will be more speculative and live in the kubernetes-sigs GitHub org.
  • Working Groups are temporary groups that are formed to address issues that cross SIG boundaries. Working groups do not own any code or other long term artifacts. Working groups can report back and act through involved SIGs.
  • User Groups are groups for facilitating communication and discovery of information related to topics that have long term relevance to large groups of Kubernetes users. They do not have ownership of parts of the Kubernetes code base.

See the full governance doc for more details on these groups.

A SIG can have its own policy for contribution, described in a README or CONTRIBUTING file in the SIG folder in this repo (e.g. sig-cli/CONTRIBUTING.md), and its own mailing list, Slack channel, etc.

If you want to edit details about a SIG (e.g. its weekly meeting time or its leads), please follow these instructions that detail how our docs are auto-generated.
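As a rough illustration of what the generator consumes, a SIG entry in the data file might look something like the sketch below (field names and all values here are approximations for illustration only — consult the real sigs.yaml in this repo for the authoritative schema):

```yaml
# Illustrative sketch of a SIG entry (values hypothetical)
sigs:
  - name: CLI
    dir: sig-cli
    mission_statement: >
      Covers kubectl and related tooling.
    label: cli
    leadership:
      chairs:
        - github: example-user      # hypothetical GitHub handle
          name: Example User
          company: Example Co
    meetings:
      - description: Regular SIG Meeting
        day: Wednesday
        time: "09:00"
        tz: PT (Pacific Time)
        frequency: biweekly
```

Editing the data file and re-running the generator keeps the per-SIG README files consistent, which is why hand-edits to the generated docs are discouraged.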

Learn to Build

Links in contributors/devel/README.md lead to many relevant technical topics.


A first step to contributing is to pick from the list of Kubernetes SIGs. Start attending SIG meetings, join the Slack channel, and subscribe to the mailing list. SIGs will often have a set of "help wanted" issues that can help new contributors get involved.

The Contributor Guide provides detailed instructions on how to get your ideas and bug fixes seen and accepted, including:

  1. How to file an issue
  2. How to find something to work on
  3. How to open a pull request


We encourage all contributors to become members. We aim to grow an active, healthy community of contributors, reviewers, and code owners. Learn more about requirements and responsibilities of membership in our Community Membership page.

  • Tracking issue: Unconscious Bias Training for current Chairs and Technical Leads


    Per #4940, this is a list of all current chairs and technical leads for any community group to track completion of the required unconscious bias training:

    opened by cblecker 120
  • [SIG Service Catalog] Persist cluster-wide data in configmap


    This is an attempt to introduce a way to persist cluster-wide data. We have a need to identify different clusters accessing our API server. With this design, a cluster can be identified through the new cluster-id in the ConfigMap.

    Note: we just need a simple unique ID to identify one cluster from another; the cluster-id will not tell us how to talk back to the cluster, and that is out of the scope of this proposal.
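As a rough sketch of the idea (the ConfigMap name, namespace, and ID value here are hypothetical, not taken from the proposal), the cluster-id could be persisted like this:

```yaml
# Hypothetical ConfigMap carrying a cluster-wide unique ID
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-info          # hypothetical name
  namespace: kube-system
data:
  cluster-id: "3fa85f64-5717-4562-b3fc-2c963f66afa6"  # example UUID
```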

    An implementation has been submitted for this design proposal: #46384

    /cc @duglin

    cncf-cla: yes kind/design sig/api-machinery sig/architecture sig/auth sig/cluster-lifecycle sig/multicluster size/M 
    opened by simonleung8 106
  • kep-sidecar-containers


    This is my initial thoughts on the concept of the sidecar container to address https://github.com/kubernetes/kubernetes/issues/25908 Hopefully this can prompt some discussion around the issue as I think it's something that a fair amount of people would like to be addressed.

    lgtm cncf-cla: yes approved kind/design kind/feature sig/apps sig/architecture sig/node sig/testing size/L 
    opened by Joseph-Irving 83
  • Form a Batch Working Group


    Kubernetes historically focused on service-type workloads; support for load balancing, traffic splitting, rolling updates, spreading, autoscaling, and topology-aware routing are a few examples of features the community built for service workloads. Stateful workloads are also getting more support with the introduction of CSI, topology-aware volume provisioning, and storage capacity tracking, to mention a few.

    However, support for batch lagged in Kubernetes core, making the migration of batch workloads to Kubernetes challenging. Multiple past efforts tried to improve this, but those efforts lacked continuity, in some cases leading to forked projects outside k8s (including a forked scheduler).

    Recently, there has been momentum to improve core k8s support for batch. Examples:

    1. Job API enhancements (indexed job, suspend jobs, pod deletion cost, accurate job tracking, ready pods tracking in jobs, ttl after finish to GA and CronJob to GA)
    2. NUMA aware scheduling: https://github.com/kubernetes/enhancements/pull/2787
    3. co-scheduling scheduler plugin (https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/pkg/coscheduling)
    4. proposal for job-level management: bit.ly/k8s-job-management
    5. Incubating scheduler plugins for co-scheduling and capacity-scheduling.
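To illustrate one of the Job API enhancements listed above: an indexed Job gives each Pod a distinct completion index, exposed as the JOB_COMPLETION_INDEX environment variable. A minimal manifest (image and names illustrative) might look like:

```yaml
# Minimal indexed Job sketch (names and image illustrative)
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo
spec:
  completions: 3
  parallelism: 3
  completionMode: Indexed          # Pods receive indexes 0, 1, 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox
          command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]
```

Each Pod can use its index to pick a static shard of the work, which is a common pattern for embarrassingly parallel batch jobs.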

    To keep the momentum and coordinate efforts, we would like to explore forming a batch SIG or WG.

    /sig scheduling /sig apps

    sig/apps sig/autoscaling sig/scheduling lifecycle/stale 
    opened by ahg-g 80
  • CSI: support for in-line volumes in pods.


    Current CSI implementation supports only PersistentVolumes. This proposal adds support for in-line CSI volumes in pods.
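As a sketch of what such an in-line CSI volume could look like in a Pod spec (the driver name and attributes here are hypothetical, not from the proposal):

```yaml
# Hypothetical Pod using an in-line CSI volume
apiVersion: v1
kind: Pod
metadata:
  name: inline-csi-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /data
  volumes:
    - name: scratch
      csi:
        driver: example.csi.vendor.io   # hypothetical CSI driver
        volumeAttributes:
          size: 1Gi
```

The point of the proposal is that the volume is declared directly in the Pod, following the driver's lifecycle, instead of going through a PersistentVolume and PersistentVolumeClaim.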

    /sig storage

    /assign @davidz627 @saad-ali @vladimirvivien @liggitt

    cncf-cla: yes kind/design sig/storage size/L lifecycle/rotten 
    opened by jsafrane 72
  • design: reduce scope of node on node object


    Super mini design doc about centralizing reporting of some sensitive kubelet attributes.

    @kubernetes/sig-auth-misc @roberthbailey @kubernetes/sig-cluster-lifecycle-misc as it relates to registration

    lgtm cncf-cla: yes approved kind/design kind/feature sig/architecture sig/auth sig/cluster-lifecycle sig/node sig/scheduling size/L lifecycle/frozen 
    opened by mikedanese 61
  • [Proposal] Design doc for priority in resource quota


    This is a design doc for priority in resource quota.

    Ref: https://github.com/kubernetes/kubernetes/issues/48648

    /assign @bsalamat to make sure everything is on the right track.

    cc @kubernetes/sig-scheduling-pr-reviews

    lgtm cncf-cla: yes approved kind/design sig/scheduling size/L 
    opened by resouer 60
  • Project Governance Umbrella issue


    There have been a number of discussions about project governance.

    Relevant issues:

    • Elders
      • https://github.com/kubernetes/community/issues/28
      • https://github.com/kubernetes/community/pull/267
    • Governance.md
      • https://github.com/kubernetes/community/issues/277
      • https://github.com/kubernetes/community/pull/286
      • https://docs.google.com/document/d/1UKfV4Rdqi8JcrDYOYw9epRcXY17P2FDc2MENkJjMcas/edit
    • Three branches
      • https://github.com/kubernetes/community/issues/295
    • Bylaws
      • https://github.com/kubernetes/community/issues/351
    • Community norms
      • https://github.com/kubernetes/community/issues/350

    One thing that is clear is that people are trying to solve different problems. The purpose of this issue is to surface those problems and (ideally) to come to agreement about which problems we're going to tackle first. Then we can move on to proposals for how we're going to solve them.

    Some problems that have been mentioned, culled from the above, in no particular order:

    • The structure of the project is opaque to newcomers.
    • There is no clear technical escalation path / procedure.
    • There aren't consistent / official decision-making procedures for ~anything: consensus, lazy consensus, consensus-seeking, CIVS voting, etc.
    • There are no official processes for adding org members, reviewers, approvers, maintainers, project leaders, etc.
    • There is no official / regularly meeting body to drive overall technical vision of the project.
    • We don't agree on the right level/types of engagement for leaders. Some feel that leaders should be recused from responsibilities such as SIG leadership, while others feel they need to be deeply involved in releases, etc.
    • There aren't official technical leads for most subareas of the project.
    • There is no centralized / authoritative means of resolving non-technical problems on the project, including staffing gaps (engineering, docs, test, release, ...), effort gaps (tragedy of the commons), expertise mismatches, priority conflicts, personnel conflicts, etc.
    • In particular, there is insufficient effort on contributor experience (e.g., github tooling, project metrics), code organization and build processes, documentation, test infrastructure, and test health. Some on the project have argued that there is insufficient backpressure and/or incentives for people employed to deliver customer-facing features to spend time on issues important to overall project health.
    • A related issue is counterbalancing technical- and product-based decision-making.
    • Visibility across the entire project is lacking.
    • Metrics, metrics, metrics, metrics. We're flying blind.
    • There is no documented proposal process.
    • There isn't a documented process for advancing APIs through alpha, beta, stable stages of development.
    • Project technical leaders (de facto or otherwise) are not available via office hours.
    • We don't have processes or documentation for onboarding new contributors.
    • There are no official safeguards to prevent control over the project by a single company.
    • Project leadership lacks diversity.
    • There is no conflict of interest policy regarding leading/directing both the open-source project and commercialization efforts around the project.
    • There is no consistent, documented process for rolling out new processes and major project changes (e.g., requiring two-factor auth, adding the approvers mechanism, moving code between repositories).
    • Nobody has taken responsibility to think about and improve the structure of the project, processes, values, etc. A few people have been working on this part time, but it needs more and more consistent attention given the rate of growth of the project.
    • We're also lacking people to drive, implement, communicate, and roll out improvements (and test, measure, rollback, etc.).
    • There isn't a sufficiently strong feedback loop between technical contributors/leadership and the PM group.
    • Nobody has taken responsibility for legal issues, license validation, trademark enforcement, etc.
    • Technical conventions/principles are not sufficiently documented.
    • Development practices and conventions across our repositories are not consistent.
    • Our communication media are highly fragmented, which makes it hard to understand past decisions.

    What other problems do people think we need to solve? Let's brainstorm first, then prioritize and cull.

    cc @sarahnovotny @brendandburns @countspongebob @sebgoa @pmorie @jbeda @smarterclayton @thockin @idvoretskyi @calebamiles @philips @shtatfeld @craigmcl

    committee/steering lifecycle/stale 
    opened by bgrant0607 60
  • Update README.md


    What is this PR for? It removes the kubernetes/kubernetes/cmd/controller-manager bullet point in the README file.

    Which issue(s) this PR fixes: Fixes #5843

    lgtm cncf-cla: yes approved sig/api-machinery sig/contributor-experience size/XS ok-to-test 
    opened by verma-kunal 56
  • There should be a kubernetes contributor guide


    I've been working on the service-catalog incubator repo for around the last 11 months. The service-catalog is an extension project shaped similarly to kubernetes from an architectural perspective:

    • It has a distinct API server that is used via the aggregator
    • The API server is backed by a controller that has the same state-reconciler architecture as the controllers in the core
    • It uses the kubernetes apiserver and apimachinery projects as dependencies

    The people participating in the SIG were initially largely unfamiliar with our architectural patterns and conventions. This was a bit jarring for me as a Kubernetes OG - as has been written elsewhere, we have a lot of knowledge transmitted as oral history that isn't written down. One big takeaway from this experience is that it would be extremely useful to have a developer guide for kubernetes and projects like the service-catalog that use the same patterns and have a goal of becoming first class projects under the kubernetes github org.

    We have fragments of this knowledge written down already, mostly living in the contributors/devel directory in this repo:

    ... amongst others. However, these pieces are sprinkled in with a bunch of other stuff that lives in the same directory which is very specific to certain niche topics.

    These pieces also don't tell all of the story. For example, the API conventions document doesn't address certain aspects of API design doctrine like how to differentiate behavioral modes for resources (see #859).

    In our experience so far in the catalog incubator, we have tended to spend a lot of time discussing these topics which have been resolved already in the minds of many senior core contributors.

    It would be great to have these pieces brought together into a coherent whole that is maintained as we evolve conventions. Ideally, someone new to the community would be able to read this guide and have a more than minimal grounding in the concepts necessary to understand existing code and a good idea of how to get started thinking 'the kubernetes way'. In addition to making it easier to onboard new community members, it would help conserve time for SIGs forming around new concepts in the community.

    I think this cuts across a number of SIGs:

    • API machinery
    • Architecture
    • Contributor experience

    If we have broad interest in creating a guide like this, it might be effective to create a short-term working group to handle bootstrapping the effort of creating the skeleton of the guide from the existing pieces, identifying gaps, and working with the people knowledgeable about those gaps to close them.

    Of course, such a document is only useful in the long-term if it is kept up to date, so I think there will be a long-term investment and culture-shift necessary to maintain the dev guide.

    Some examples of similar documentation from other projects:

    kind/design kind/feature sig/api-machinery sig/contributor-experience lifecycle/frozen 
    opened by pmorie 56
  • Steering Committee Nomination: Paris Pittman (@parispittman)


    I'd like to self-nominate for the 2021 Steering Committee election. ✨

    Serving the upstream Kubernetes community in various roles, including the last two years on the Steering Committee, has been my greatest honor and pleasure. I'd love to continue to represent you all in what would be my last and final two year term. More soon on my bio.

    /committee steering

    opened by parispittman 55
  • RFC: writing-good-e2e-tests.md: polling, contexts and failure messages


    This reflects the recent changes that came with Ginkgo v2. The goal is to get this reviewed and then send out an email to Kubernetes-Dev to make developers aware of these changes.

    /sig testing /cc @aojea @onsi

    cncf-cla: yes approved sig/testing size/L area/developer-guide 
    opened by pohly 4
  • Update CLA.md


    Signed-off-by: Abhilipsa Sahoo [email protected]


    Enhanced readability of CLA.md file to make it more comprehensible for new contributors.

    cncf-cla: yes size/S tide/merge-method-squash ok-to-test 
    opened by abhilipsasahoo03 10
  • Add Slack Channel - vSphere CSI Driver assessment


    Adding this Slack channel for coordinating and performing the vSphere CSI Driver self-assessment!

    Which issue(s) this PR fixes:

    Fixes #

    lgtm cncf-cla: yes sig/contributor-experience size/XS area/community-management area/slack-management sig/security 
    opened by aladewberry 3
  • Label for OWNERS file changes


    Describe the issue

    To help with triage and improve reporting on OWNERS file changes, a label could be applied any time an OWNERS / OWNERS_ALIASES file is touched.

    Prow already tracks when OWNERS files are changed and responds with an emoji, so this request shouldn't require too much additional work (just applying the label).

    /sig contributor-experience /committee steering /area github-management /milestone v1.27

    sig/contributor-experience committee/steering area/github-management 
    opened by mrbobbytables 1
  • Simplifying Contribution Docs for SIG CLI


    Signed-off-by: Noah Ispas (iamNoah1) [email protected]

    This PR is trying to simplify and update the contribution docs as apparently a lot of the content is out of date and does not reflect how the team is working as of today.

    FYI: I have removed and updated sections that I thought could be removed or updated. For a lot of things, I didn't just modify them without involving the team; I added some comments here and there with questions and suggestions.


    • Do we still see two tracks of contributions? Guided and Self Service?
    • As there are no issues with labels 'type/*' I would remove the Picking the right kind of issue section
    • Is the feature request lifecycle up to date?
    • Is the Design Proposal up to date?
    • The Merge state meanings section does not seem very beneficial to me, except for the information about when to revisit a PR (6 months for unmerged PRs). That seems not to be the case anymore, or is it?
    • Are all those mentioning aliases actually used? It doesn't look like it to me.
    cncf-cla: yes sig/cli size/L 
    opened by iamNoah1 1
Source code and slides for Kubernetes Community Days - Bangalore.

kcdctl This is the source code for the demo done as part of the talk "Imperative, Declarative and Kubernetes" at the Kubernetes Community Days, Bengal

Madhav Jivrajani 15 Sep 19, 2021
Kubernetes OS Server - Kubernetes Extension API server exposing OS configuration like sysctl via Kubernetes API

KOSS is an Extension API Server which exposes OS properties and functionality using the Kubernetes API, so it can be accessed using e.g. kubectl. At the moment this is highly experimental and only managing sysctl is supported. To make things actually usable, you must run the KOSS binary as root on the machine you will be managing.

Mateusz Gozdek 3 May 19, 2021
Woodpecker is a community fork of the Drone CI system.


Woodpecker CI 2k Jan 5, 2023
The community-supported Golang library for Vonage

Vonage Go SDK This is the community-supported Golang library for Vonage. It has support for most of our APIs, but is still under active development. I

null 1 Dec 1, 2021
Litmus helps Kubernetes SREs and developers practice chaos engineering in a Kubernetes native way.

Litmus Cloud-Native Chaos Engineering Read this in other languages. Overview Litmus is a toolset to do cloud-native chaos engineer

Litmus Chaos 3.4k Jan 1, 2023
KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes

Kubernetes-based Event Driven Autoscaling KEDA allows for fine-grained autoscaling (including to/from zero) for event driven Kubernetes workloads. KED

KEDA 5.9k Jan 7, 2023
vcluster - Create fully functional virtual Kubernetes clusters - Each cluster runs inside a Kubernetes namespace and can be started within seconds

Website • Quickstart • Documentation • Blog • Twitter • Slack vcluster - Virtual Clusters For Kubernetes Lightweight & Low-Overhead - Based on k3s, bu

Loft Labs 2.3k Jan 4, 2023
network-node-manager is a kubernetes controller that controls the network configuration of a node to resolve network issues of kubernetes.

Network Node Manager network-node-manager is a kubernetes controller that controls the network configuration of a node to resolve network issues of ku

kakao 102 Dec 18, 2022
A k8s vault webhook is a Kubernetes webhook that can inject secrets into Kubernetes resources by connecting to multiple secret managers

k8s-vault-webhook is a Kubernetes admission webhook which listen for the events related to Kubernetes resources for injecting secret directly from sec

Opstree Container Kit 111 Oct 15, 2022
Carrier is a Kubernetes controller for running and scaling game servers on Kubernetes.

Carrier is a Kubernetes controller for running and scaling game servers on Kubernetes. This project is inspired by agones. Introduction Genera

Open Cloud-native Game-application Initiative 31 Nov 25, 2022
Kubei is a flexible Kubernetes runtime scanner, scanning images of worker and Kubernetes nodes providing accurate vulnerabilities assessment, for more information checkout:

Kubei is a vulnerabilities scanning and CIS Docker benchmark tool that allows users to get an accurate and immediate risk assessment of their kubernet

Portshift 832 Dec 30, 2022
The OCI Service Operator for Kubernetes (OSOK) makes it easy to connect and manage OCI services from a cloud native application running in a Kubernetes environment.

OCI Service Operator for Kubernetes Introduction The OCI Service Operator for Kubernetes (OSOK) makes it easy to create, manage, and connect to Oracle

Oracle 24 Sep 27, 2022
Kubernetes IN Docker - local clusters for testing Kubernetes

kind is a tool for running local Kubernetes clusters using Docker container "nodes".

Kubernetes SIGs 11k Jan 5, 2023
An Easy to use Go framework for Kubernetes based on kubernetes/client-go

k8devel An Easy to use Go framework for Kubernetes based on kubernetes/client-go, see examples dir for a quick start. How to test it ? Download the mo

null 10 Mar 25, 2022
PolarDB-X Operator is a Kubernetes extension that aims to create and manage PolarDB-X cluster on Kubernetes.

GalaxyKube -- PolarDB-X Operator PolarDB-X Operator is a Kubernetes extension that aims to create and manage PolarDB-X cluster on Kubernetes. It follo

null 64 Dec 19, 2022
provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters

provider-kubernetes provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters ty

International Business Machines 2 Dec 14, 2022
Kubernetes Operator to sync secrets between different secret backends and Kubernetes

Vals-Operator Here at Digitalis we love vals, it's a tool we use daily to keep secrets stored securely. We also use secrets-manager on the Kubernetes

digitalis.io 86 Nov 13, 2022
Crossplane provider to provision and manage Kubernetes objects on (remote) Kubernetes clusters.

provider-kubernetes provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters ty

Crossplane Contrib 69 Jan 3, 2023
kitex running in kubernetes cluster and discover each other in kubernetes Service way

Using kitex in kubernetes Kitex [kaɪt'eks] is a high-performance and strong-extensibility Golang RPC framework. This go module helps you to build mult

adolli 1 Feb 21, 2022