NATS on Kubernetes ☸️

Overview

Running NATS on K8S

In this repository you can find several examples of how to deploy NATS, NATS Streaming and other tools from the NATS ecosystem on Kubernetes.

Getting started with NATS using Helm

In this repo you can find Helm 3-based charts to install NATS and NATS Streaming (STAN).

> helm repo add nats https://nats-io.github.io/k8s/helm/charts/
> helm repo update

> helm repo list
NAME          	URL 
nats          	https://nats-io.github.io/k8s/helm/charts/

> helm install my-nats nats/nats
> helm install my-stan nats/stan --set stan.nats.url=nats://my-nats:4222
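
To verify the releases, a quick sketch (the pod labels, the my-nats-box deployment name and the nats-sub/nats-pub utilities are assumptions based on the chart defaults and the release names above):

# List the pods created by the NATS and STAN releases
kubectl get pods -l app.kubernetes.io/name=nats
kubectl get pods -l app.kubernetes.io/name=stan

# Open a shell in the nats-box utility pod that ships with the NATS chart
kubectl exec -it deployment/my-nats-box -- /bin/sh -l

# Inside nats-box, exercise pub/sub against the my-nats service
nats-sub test &
nats-pub test "hello"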

Quick start using the one-line installer

Another way to quickly bootstrap a NATS cluster is to use the following command:

curl -sSL https://nats-io.github.io/k8s/setup.sh | sh

In case you don't have a Kubernetes cluster already, you can find some notes on how to create a small cluster using one of the hosted Kubernetes providers here. You can find more info about running NATS on Kubernetes in the docs.

This will run a nats-setup container with the required policy and deploy a NATS cluster on Kubernetes with external access, TLS and decentralized authorization.

By default, the installer will deploy the Prometheus Operator and Cert Manager for metrics and TLS support, and the NATS instances will also bind host port 4222 for external access.
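
A rough way to confirm the components came up (the namespaces vary by installer version, so this simply greps across all of them):

# Check that the NATS, cert-manager and Prometheus Operator pods are running
kubectl get pods --all-namespaces | grep -E 'nats|cert-manager|prometheus'

# Confirm the client service is exposed
kubectl get svc --all-namespaces | grep nats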

You can customize the installer to run without TLS or without auth for a simpler setup, as follows:

# Disable TLS
curl -sSL https://nats-io.github.io/k8s/setup.sh | sh -s -- --without-tls

# Disable Auth and TLS (also disables NATS surveyor and NATS Streaming)
curl -sSL https://nats-io.github.io/k8s/setup.sh | sh -s -- --without-tls --without-auth

Note: Since NATS Streaming runs as a leafnode to NATS (under the STAN account) and NATS Surveyor requires the system account to monitor events, disabling auth also means that NATS Streaming and NATS Surveyor based monitoring will be disabled.

The monitoring dashboard set up with NATS Surveyor can be accessed via port-forward:

kubectl port-forward deployments/nats-surveyor-grafana 3000:3000

Next, open the following URL in your browser:

http://127.0.0.1:3000/d/nats/nats-surveyor?refresh=5s&orgId=1

To clean everything up, you can run:

curl -sSL https://nats-io.github.io/k8s/destroy.sh | sh

License

Unless otherwise noted, the NATS source files are distributed under the Apache License, Version 2.0, found in the LICENSE file.

Issues
  • [helm] Add support for using NATS Account Server as the resolver

    This adds a Helm chart for the account server and support for the NATS Helm chart to use the same secrets. The flow to use both charts would look like this:

    • First, create an NSC setup locally
    export NKEYS_PATH=$(pwd)/nsc/nkeys
    export NSC_HOME=$(pwd)/nsc/accounts
    curl -sSL https://nats-io.github.io/k8s/setup/nsc-setup.sh | sh
    nsc edit operator --account-jwt-server-url http://nats-account-server:9090/jwt/v1/ --service-url nats://nats:4222
    
    • Upload the shared secrets and configmaps
    kubectl create configmap operator-jwt --from-file ./nsc/accounts/nats/KO/KO.jwt
    kubectl create configmap nats-sys-jwt --from-file ./nsc/accounts/nats/KO/accounts/SYS/SYS.jwt
    kubectl create secret generic nats-sys-creds --from-file ./nsc/nkeys/creds/KO/SYS/sys.creds
    
    • Create the Helm deploy yaml for the NATS Server
    echo '
    # NATS Server connection settings.
    nats:
      # NATS Service to which we can connect.
      url: "nats://nats:4222"
    
      # Credentials to connect to the NATS Server.
      credentials:
        secret:
          name: nats-sys-creds
          key: sys.creds
    
    # Trusted Operator mode settings.
    operator:
      # Reference to the system account jwt.
      systemaccountjwt:
        configMap:
          name: nats-sys-jwt
          key: SYS.jwt
    
      # Reference to the Operator JWT.
      operatorjwt:
        configMap:
          name: operator-jwt
          key: KO.jwt
    ' > deploy-account-server.yaml
    
    • Create the NATS Account Server with file storage and a persistent volume
    helm install nats-account-server -f deploy-account-server.yaml ./helm/charts/nats-account-server/
    
    • Now we need to feed the current set of accounts into the NATS Account Server. NOTE: This requires editing /etc/hosts (sudo vi /etc/hosts) to temporarily map the nats-account-server K8S service name to 127.0.0.1:
    # 
    # 127.0.0.1 nats-account-server
    # 
    
    • Open the port for the nats-account-server so that we can push the accounts locally
    kubectl port-forward nats-account-server-0 9090:9090 &
    nsc push -A
    
    • Get the public key of the system account using nsc
    nsc list accounts
    ╭─────────────────────────────────────────────────────────────────╮
    │                            Accounts                             │
    ├──────┬──────────────────────────────────────────────────────────┤
    │ Name │ Public Key                                               │
    ├──────┼──────────────────────────────────────────────────────────┤
    │ A    │ ADR5DUK5TCFGAUHQ76QP4V7GXIEBYTQ26STXCRTP4X2BJDMUMSBXWPWS │
    │ B    │ ABIDLGSR6YZBDGH3C4FK6WI6MMVR7U7M6MR3RI3GKEUCJD27UFHMOKY5 │
    │ STAN │ AAF5E33YN7DIKFQVR3R52XOM5GOM7SIILPITZXW3DFVU743QICIN3Z3J │
    │ SYS  │ ADHUAQZ7UEXIVAXNI6B4VUU4IBV2ZUXMQMSIYDFQQ5BFTSMWTKK2NWJN │ < this one
    ╰──────┴──────────────────────────────────────────────────────────╯
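
    To confirm that the push worked, you can fetch an account JWT back from the account server through the same port-forward, e.g. using the SYS public key from the table above:

    # Fetch the system account JWT back from the account server
    curl -s http://127.0.0.1:9090/jwt/v1/accounts/ADHUAQZ7UEXIVAXNI6B4VUU4IBV2ZUXMQMSIYDFQQ5BFTSMWTKK2NWJN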
    
    # Helm config for NATS
    echo '
    # Authentication setup
    auth:
      enabled: true
    
      # Reference to the Operator JWT which will be mounted as a volume,
      # shared with the account server in this case.
      operatorjwt:
        configMap:
          name: operator-jwt
          key: KO.jwt
    
      # Public key of the System Account
      systemAccount: ADHUAQZ7UEXIVAXNI6B4VUU4IBV2ZUXMQMSIYDFQQ5BFTSMWTKK2NWJN
    
      resolver:
        type: URL
        url: "http://nats-account-server:9090/jwt/v1/accounts/"
    
    # Add system credentials to the nats-box instance for example
    natsbox:
      enabled: true
    
      credentials:
        secret:
          name: nats-sys-creds
          key: sys.creds
    ' > deploy-nats.yaml
    
    • Now deploy the NATS Server with the URL resolver configuration against the NATS Account Server running in the same cluster
    helm install nats -f deploy-nats.yaml ./helm/charts/nats/
    
    • nats-box can now use the injected system account credentials as well:
    kubectl exec -n default -it nats-box -- /bin/sh -l
    nats-sub -creds /etc/nats-config/syscreds/sys.creds '>'
    nats-sub '>' # also works since $USER_CREDS env var is set automatically
    

    Signed-off-by: Waldemar Quevedo [email protected]

    opened by wallyqs 14
  • Helm: store.file.path does not seem to be working on NFS

    Hi,

    I have attached an NFS to our k8s cluster at path /var/mnt

    I have installed nats + stan using the latest Helm charts (as of this writing) like this:

    helm install nats-server nats/nats --namespace=nats 
    
    helm install stan-server nats/stan --namespace=nats \
      --set stan.nats.url=nats://nats-server:4222 \
      --set store.type=file \
      --set store.file.path=/var/mnt/nats \
      --set store.file.storageSize=1Gi
    

    The startup log from stan shows the NFS path being set

    [1] 2020/05/13 04:03:38.932852 [INF] STREAM: Starting nats-streaming-server[stan-server] version 0.17.0
    [1] 2020/05/13 04:03:38.933013 [INF] STREAM: ServerID: xDec25eoUVPr0wxKKaj7xS
    [1] 2020/05/13 04:03:38.933029 [INF] STREAM: Go version: go1.13.7
    [1] 2020/05/13 04:03:38.933041 [INF] STREAM: Git commit: [f4b7190]
    [1] 2020/05/13 04:03:38.943212 [INF] STREAM: Recovering the state...
    [1] 2020/05/13 04:03:38.945227 [INF] STREAM: No recovered state
    [1] 2020/05/13 04:03:39.196686 [INF] STREAM: Message store is FILE
    [1] 2020/05/13 04:03:39.196697 [INF] STREAM: Store location: /var/mnt/nats
    [1] 2020/05/13 04:03:39.196721 [INF] STREAM: ---------- Store Limits ----------
    [1] 2020/05/13 04:03:39.196723 [INF] STREAM: Channels:                  100 *
    [1] 2020/05/13 04:03:39.196725 [INF] STREAM: --------- Channels Limits --------
    [1] 2020/05/13 04:03:39.196726 [INF] STREAM:   Subscriptions:          1000 *
    [1] 2020/05/13 04:03:39.196727 [INF] STREAM:   Messages     :       1000000 *
    [1] 2020/05/13 04:03:39.196728 [INF] STREAM:   Bytes        :     976.56 MB *
    [1] 2020/05/13 04:03:39.196730 [INF] STREAM:   Age          :     unlimited *
    [1] 2020/05/13 04:03:39.196731 [INF] STREAM:   Inactivity   :     unlimited *
    [1] 2020/05/13 04:03:39.196732 [INF] STREAM: ----------------------------------
    [1] 2020/05/13 04:03:39.196734 [INF] STREAM: Streaming Server is ready
    

    However, when I look at my NFS, clients.dat, server.dat and the subject directories are being created at /var/mnt rather than /var/mnt/nats.

    I had even created the folder at /var/mnt/nats beforehand, but that didn't seem to help. Setting securityContext to null didn't work either.
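
    One way to narrow this down is to compare the configured store path with how the volume is actually mounted into the pod (the pod name stan-server-0 is an assumption based on the release name above):

    # See how the data volume is mounted into the streaming server pod
    kubectl describe pod -n nats stan-server-0 | grep -A3 -iE 'mounts|volumes'

    # And check the PVC the chart created for the store
    kubectl get pvc -n nats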

    Is there something I am doing wrong?

    Thanks.

    opened by samstride 13
  • Allow overriding of service name via values.yaml

    Can we modify the name of the service object so that it can be customized via values.yaml? i.e. change it from this:

     {{- default .Release.Name -}}
    

    To something like so:

     {{- default .Release.Name .Values.nameOverride | trunc 63 | trimSuffix "-" | trimSuffix "." -}}
    

    I have a top-level chart that pulls in nats-operator alongside other charts, and there are cases where a top-level chart has two third-party sub-charts that use {{- default .Release.Name -}} for the name of a service object (resulting in a naming collision). It'd be great to be able to override this outright.

    opened by IAXES 11
  • JetStream cluster no metadata leader

    I tried to use the Helm chart to deploy NATS with JetStream; however, I ran into the error below stating that no metadata leader is elected for JetStream.

    [6] 2022/02/23 00:31:43.526170 [INF] JetStream cluster no metadata leader
    [6] 2022/02/23 00:31:51.329692 [INF] JetStream cluster no metadata leader
    

    Below is my setup, which I think is related to this.

    Service to allow external access:

    apiVersion: v1
    kind: Service
    metadata:
      name: nats-lb
      namespace: nats-test
    spec:
      type: ClusterIP
      externalIPs: 
        - 192.168.1.1
      selector:
        app.kubernetes.io/name: nats
      ports:
        - protocol: TCP
          port: 4222
          targetPort: 4222
          name: nats
        - protocol: TCP
          port: 7422
          targetPort: 7422
          name: leafnodes
        - protocol: TCP
          port: 7522
          targetPort: 7522
          name: gateways
    

    Helm chart values:

    auth:
      enabled: true
    
      basic:
        users: 
          - foo
          
        accounts:
          foo:
            users:
            - user: hello
              pass: world
          js:
            jetstream: true
            users:
            - user: foo
    
    
      jetstream:
        enabled: true
    
        domain:
    
        encryption:
    
        #############################
        #                           #
        #  Jetstream Memory Storage #
        #                           #
        #############################
        memStorage:
          enabled: true
          size: 1Gi
    
        ############################
        #                          #
        #  Jetstream File Storage  #
        #                          #
        ############################
        fileStorage:
          enabled: true
          storageDirectory: /data
    
          # Use below block to create new persistent volume
          # only used if existingClaim is not specified
          size: 1Gi
          storageClassName: "rook-ceph-block-fast"
          accessModes:
            - ReadWriteOnce
          annotations:
          # key: "value"
    

    Please note that I am using the external IP of a K8s Service object to provide external access instead of a LoadBalancer. Currently I can publish and subscribe to messages in NATS, but JetStream simply does not bootstrap successfully. The account info is shown below:

    Connection Information:
    
                   Client ID: 11
                   Client IP: 172.24.4.123
                         RTT: 102.822502ms
           Headers Supported: true
             Maximum Payload: 1.0 MiB
           Connected Cluster: nats
               Connected URL: nats://192.168.1.1:4222
           Connected Address: 192.168.1.1:4222
         Connected Server ID: <...>
       Connected Server Name: my-nats-1
    
    JetStream Account Information:
    
       Could not obtain account information: JetStream system temporarily unavailable (10008)
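
    As a rough diagnostic (pod and container names assumed from the my-nats release shown above), it helps to confirm that the cluster routes between the pods are up and, with system account credentials, to query the JetStream meta-cluster:

    # Check that each NATS pod has established routes to its peers
    kubectl -n nats-test logs my-nats-0 -c nats | grep -i route

    # With system account credentials, ask the servers for their JetStream state
    nats server report jetstream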
    
    opened by skyrain 10
  • nats-server-config-reloader is a 2 years old image

    Hello, the config reloader uses a two-year-old image which should be rebuilt; it's risky to rely on such an old image. In addition, it only supports amd64, while nats and nats-streaming have arm64 support.

    opened by kamilgregorczyk 10
  • [helm] Upgrade from 2.3.4 to 2.4.0 is not stable

    Just tried to upgrade from version 2.3.4 to 2.4.0 via the Helm chart.

      nats:
        image: nats:2.4.0-alpine
        resources:
          requests:
            cpu: 1
            memory: 2Gi
          limits:
            cpu: 2
            memory: 2Gi
    
        limits:
          maxConnections:
          maxSubscriptions:
          maxControlLine:
          maxPayload:
          writeDeadline:
          maxPending:
          maxPings:
          lameDuckDuration:
    
        terminationGracePeriodSeconds: 60
    
        logging:
          debug:
          trace:
          logtime:
          connectErrorReports:
          reconnectErrorReports:
    
        jetstream:
          enabled: true
    
          memStorage:
            enabled: true
    
          fileStorage:
            enabled: true
            storageDirectory: /data
            storageClassName: standard
            size: 30Gi
            accessModes:
              - ReadWriteOnce
    
      cluster:
        enabled: true
        replicas: 3
        noAdvertise: false
    
      auth:
        enabled: true
        systemAccount:
        resolver:
          type: memory
          configMap:
            name: nats-accounts
            key: resolver.conf
    
      natsbox:
        enabled: false
    
      exporter:
        enabled: true   
    

    The upgrade failed completely and streams stopped responding. Errors:

    [ERR] RAFT [vURD0sM3 - S-R3F-pU5w3rId] Error sending snapshot to follower [yLCaJyhQ]: raft: no snapshot available
    
    nats: no response from stream
    

    No leader on stream. Also saw duplicate streams.

    nats-js-0!, nats-js-1!, nats-js-2!
    

    I also tried to purge some problematic streams or change the leader (e.g. nats stream cluster step-down agent), but only received errors or timeouts.
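
    For reference, a couple of commands that help capture the cluster state when this happens (they require system account or admin credentials; a sketch, not a fix):

    # Meta-cluster and per-server JetStream overview
    nats server report jetstream

    # Per-stream replica and leader state
    nats stream report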

    opened by anjmao 8
  • 0.14.0 breaks generated YAML

    Relocating the data part of helm/charts/nats/templates/configmap.yaml to helm/charts/nats/files/configmap/nats.conf seems to have introduced an error (formatting/line-break).

    If you run helm template nats/nats, you get:

    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nats-core-config
      namespace: nats-ns
      labels:
        helm.sh/chart: nats-0.14.0
        app.kubernetes.io/name: nats
        app.kubernetes.io/instance: nats-core
        app.kubernetes.io/version: "2.7.3"
        app.kubernetes.io/managed-by: Helm
    data:
      nats.conf: |
        <... snip ...>
        #####################
        #                   #
        # TLS Configuration #
        #                   #
        #####################tls {
          cert_file: /etc/nats-certs/clients/nats-server-tls/tls.crt
          key_file:  /etc/nats-certs/clients/nats-server-tls/tls.key
          ca_file: /etc/nats-certs/clients/nats-server-tls/ca.crt
      }
    
    

    The rendering seems to have trouble with the "tls" subtree.

    I think this is the reason why I get "Error: INSTALLATION FAILED: YAML parse error on nats/templates/configmap.yaml: error converting YAML to JSON: yaml: line 34: did not find expected key" returned by helm install.
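
    A quick way to check whether the rendered manifest parses without installing anything (the chart version is pinned and my-values.yaml stands in for your TLS values file):

    # Render the chart and run the output through a client-side dry run
    helm template nats nats/nats --version 0.14.0 -f my-values.yaml \
      | kubectl apply --dry-run=client -f -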

    opened by burner-account 7
  • Allow to specify the `websocket.handshake_timeout` for NATS chart

    Allow specifying the websocket.handshake_timeout for the NATS chart.

    We're currently experiencing an authorization error with NATS WS clients and the default 2-second timeout on the server side.

    NatsError: "Authentication Timeout"
    
    opened by manifest 7
  • Sidecar leafnode certificates

    This PR makes it possible to:

    • provide additional containers, volumes, and volume mounts to the statefulset
    • provide custom leafnodes TLS configuration options
    • provide extra configuration to the reloader sidecar
    opened by chris13524 7
  • stan ft-mode requires shared filestore, but k8s-demo yaml creates pvc for each replica

    Hi, to my understanding, in FT mode each replica will try to acquire an exclusive lock on the shared filestore. If k8s creates one PVC for each replica, how will this be achieved? What's more, if each replica uses an independent PVC, the data is not shared.

    opened by umialpha 7
  • nats/nats Helm chart not deployable on ARMv8/64

    Hello, I noticed that it is currently not possible to deploy NATS on the ARMv8 architecture using the Helm chart with default settings. Since I was able to compile the code and deploy the images successfully, it seems there is just a simple issue with the Docker images. Namely, amd64 images are pulled when deploying on ARMv8 devices:

    • synadia/nats-box:0.4.0
    • connecteverything/nats-server-config-reloader:0.6.0
    • synadia/prometheus-nats-exporter:0.5.0
    • connecteverything/nats-boot-config:0.5.2

    In case you want to reproduce: I'm deploying on a K3s cluster with four Raspberry Pi 4s; the K3s master is a virtual machine based on amd64.

    Would be great if you could update the images, thank you!

    PS: if someone encounters the same problem, simply edit the values.yaml and change each of the four image names to michalkeit/imagename:tag.

    opened by michalke-it 6
  • helm: make it possible to override `-prefix=nats` for the exporter

    -prefix=nats is hard-coded here:

    https://github.com/nats-io/k8s/blob/88dcf6eefbc6a7cd41fe1ba4374f977a52e05cc5/helm/charts/nats/templates/statefulset.yaml#L542

    but it seems that gnatsd is normally the default and is used in public dashboards:

    https://github.com/nats-io/prometheus-nats-exporter/blob/9468d46771d476bb896f8be95d3101558fa9f466/walkthrough/grafana-nats-dash.json#L246-L249

    https://github.com/kubenav/deploy/blob/ed66ab3bf470fe7a43c6990e39d158b249f5ff7c/dashboards/nats-dashboard.yaml#L23
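
    A quick way to see which prefix the exporter is actually emitting (the pod name and the exporter's default port 7777 are assumptions):

    # Forward the metrics port from one of the NATS pods
    kubectl port-forward nats-0 7777:7777 &

    # Inspect the metric names; with the hard-coded flag they start with nats_
    curl -s http://127.0.0.1:7777/metrics | grep -m 5 -E '^(nats|gnatsd)_'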

    opened by bbigras 1
  • Labels in request-reply test pod in nats chart clash with service selector

    When the nats chart is rendered and deployed, we noticed the service would resolve to one more DNS address than there were running nats replicas. The extra DNS address would not accept traffic on port 4222, though the others did.

    This IP address was tracked down to the nats-test-request-reply pod that runs on chart installation. Even though it has run to completion, the headless service includes it as a matching DNS entry for the main nats service address, since the selectors match.

    The request-reply test selector is: {{- include "nats.labels" . | nindent 4 }}

    The service selector is: {{- include "nats.selectorLabels" . | nindent 4 }}

    The nats.labels includes selectorLabels, so however you set selectorLabels, the service selector will always include the test pod.

    This doesn't usually cause a permanent error; I'm assuming the client will try other DNS addresses until one works. It does, however, seem to cause some instability in some of our local dev environments (the nats server becomes inaccessible). Removing the completed test pod resolves the problem, so while clients should cope with the incorrect host in DNS, it is the proximate cause.

    The fix should be to set a different set of labels for the test pod (like the nats-box does).
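
    A quick way to see the extra endpoint and the clashing labels (the service and label names are assumed to match the chart defaults):

    # Show which pods the headless service currently resolves to
    kubectl get endpoints nats -o wide

    # Compare the labels on the server pods and the completed test pod
    kubectl get pods --show-labels | grep -E 'nats|request-reply'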

    opened by domg123 1
  • Add tlsConfig configuration to Prometheus ServiceMonitor

    To be able to configure the Prometheus ServiceMonitor with a custom TLS config, support tlsConfig in the Helm chart.

    Extension for the exporter configuration:

    exporter:
      enabled: true
      #...
      serviceMonitor:
        enabled: true
        # ...
        # new configuration for tls configuration    
        tlsConfig:
          caFile: /etc/istio-certs/root-cert.pem
          certFile: /etc/istio-certs/cert-chain.pem
          insecureSkipVerify: true
          keyFile: /etc/istio-certs/key.pem
    

    This needs to be evaluated in serviceMonitor.yaml:

      tlsConfig:
        {{- toYaml .Values.exporter.serviceMonitor.tlsConfig | nindent  6 }}
    
    opened by in-cloud-opensource 0
  • nats-box template - add env variables for credentials and certs

    Hi, this is my very first issue, so I hope I'm doing this right. When using nats-box, it would be great if the environment variables for credentials (NATS_CREDS) and the TLS cert (NATS_CERT, NATS_KEY, and NATS_CA) were set automatically. The template for nats-box already sets an env variable for the credentials (USER_CREDS), but unfortunately it does not match the variable the nats CLI expects.

    I would simply add/edit here:

    env:
        - name: NATS_URL
          value: {{ template "nats.fullname" . }}
        {{- if .Values.natsbox.credentials }}
        - name: USER_CREDS
          value: /etc/nats-config/creds/{{ .Values.natsbox.credentials.secret.key }}
        - name: USER2_CREDS
          value: /etc/nats-config/creds/{{ .Values.natsbox.credentials.secret.key }}
        {{- end }}
    

    As I think this would be an easy and quick fix, I'd offer to work on this.
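
    As an interim workaround (a sketch; it assumes the nats CLI inside nats-box honors the NATS_CREDS environment variable), the variable can be exported manually in the nats-box shell:

    # Enter nats-box and map the existing USER_CREDS value onto NATS_CREDS
    kubectl exec -it nats-box -- /bin/sh -l
    export NATS_CREDS="$USER_CREDS"
    nats sub '>'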

    opened by HenryTheMoose 1
  • auth.basic and auth.nkey configure the same underlying keys

    auth.basic and auth.nkey both configure NATS authorizations.users.

    Also, auth.basic.accounts is supported and maps to NATS accounts, but auth.nkey.accounts is not supported.

    These should be streamlined to map more closely to the NATS configuration.

    opened by caleblloyd 0