NATS on Kubernetes ☸️


Running NATS on K8S

In this repository you can find several examples of how to deploy NATS, NATS Streaming and other tools from the NATS ecosystem on Kubernetes.

Getting started with NATS using Helm

In this repo you can find the Helm 3 based charts to install NATS and NATS Streaming (STAN).

> helm repo add nats https://nats-io.github.io/k8s/helm/charts/
> helm repo update

> helm repo list
NAME          	URL 
nats          	https://nats-io.github.io/k8s/helm/charts/

> helm install my-nats nats/nats
> helm install my-stan nats/stan --set stan.nats.url=nats://my-nats:4222
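
If the installs succeed, both releases show up in helm list and their pods reach Ready within a minute or two; a quick check (the service names below follow the release names used above and assume the charts' defaults):

> helm list
> kubectl get pods
> kubectl get svc my-nats my-stan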

Quick start using the one-line installer

Another way to quickly bootstrap a NATS cluster is to use the following command:

curl -sSL https://nats-io.github.io/k8s/setup.sh | sh

In case you don't have a Kubernetes cluster already, you can find some notes on how to create a small cluster using one of the hosted Kubernetes providers here. You can find more info about running NATS on Kubernetes in the docs.

This will run a nats-setup container with the required policy and deploy a NATS cluster on Kubernetes with external access, TLS and decentralized authorization.


By default, the installer will deploy the Prometheus Operator and the Cert Manager for metrics and TLS support, and the NATS instances will also bind the 4222 host port for external access.
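
To confirm what the installer deployed, a rough check (namespaces vary by installer version) is to look for the cert-manager and Prometheus Operator CRDs and the running pods:

kubectl get crd | grep -E 'cert-manager\.io|monitoring\.coreos\.com'
kubectl get pods -A | grep -E 'nats|cert-manager|prometheus'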

You can customize the installer to run without TLS or without auth for a simpler setup, as follows:

# Disable TLS
curl -sSL https://nats-io.github.io/k8s/setup.sh | sh -s -- --without-tls

# Disable Auth and TLS (also disables NATS surveyor and NATS Streaming)
curl -sSL https://nats-io.github.io/k8s/setup.sh | sh -s -- --without-tls --without-auth

Note: Since NATS Streaming runs as a leafnode to NATS (under the STAN account) and NATS Surveyor requires the system account to monitor events, disabling auth also means that NATS Streaming and NATS Surveyor based monitoring will be disabled.

The monitoring dashboard set up with NATS Surveyor can be accessed using port-forward:

kubectl port-forward deployments/nats-surveyor-grafana 3000:3000

Next, open the following URL in your browser:

http://127.0.0.1:3000/d/nats/nats-surveyor?refresh=5s&orgId=1


To clean up the deployment, you can run:

curl -sSL https://nats-io.github.io/k8s/destroy.sh | sh

License

Unless otherwise noted, the NATS source files are distributed under the Apache Version 2.0 license found in the LICENSE file.

Comments
  • [helm] Add support for using NATS Account Server as the resolver


    This adds a Helm chart for the account server and support for the NATS Helm chart to use the same secrets. The flow to use both charts would look like this:

    • First, create an NSC setup locally
    export NKEYS_PATH=$(pwd)/nsc/nkeys
    export NSC_HOME=$(pwd)/nsc/accounts
    curl -sSL https://nats-io.github.io/k8s/setup/nsc-setup.sh | sh
    nsc edit operator --account-jwt-server-url http://nats-account-server:9090/jwt/v1/ --service-url nats://nats:4222
    
    • Upload the shared secrets and configmaps
    kubectl create configmap operator-jwt --from-file ./nsc/accounts/nats/KO/KO.jwt
    kubectl create configmap nats-sys-jwt --from-file ./nsc/accounts/nats/KO/accounts/SYS/SYS.jwt
    kubectl create secret generic nats-sys-creds --from-file ./nsc/nkeys/creds/KO/SYS/sys.creds
    
    • Create the Helm deploy yaml for the NATS Server
    echo '
    # NATS Server connection settings.
    nats:
      # NATS Service to which we can connect.
      url: "nats://nats:4222"
    
      # Credentials to connect to the NATS Server.
      credentials:
        secret:
          name: nats-sys-creds
          key: sys.creds
    
    # Trusted Operator mode settings.
    operator:
      # Reference to the system account jwt.
      systemaccountjwt:
        configMap:
          name: nats-sys-jwt
          key: SYS.jwt
    
      # Reference to the Operator JWT.
      operatorjwt:
        configMap:
          name: operator-jwt
          key: KO.jwt
    ' > deploy-account-server.yaml
    
    • Create the NATS Account Server with file storage and a persistent volume
    helm install nats-account-server -f deploy-account-server.yaml ./helm/charts/nats-account-server/
    
    • Now we first need to feed the current set of accounts into the NATS Account Server. NOTE: this requires a sudo vi /etc/hosts to temporarily add an entry so that the K8S service name nats-account-server resolves to 127.0.0.1:
    # 
    # 127.0.0.1 nats-account-server
    # 
    
    • Open the port for the nats-account-server so that we can push the accounts locally
    kubectl port-forward nats-account-server-0 9090:9090 &
    nsc push -A
    
    • Get the public key of the system account:
    nsc list accounts
    ╭─────────────────────────────────────────────────────────────────╮
    │                            Accounts                             │
    ├──────┬──────────────────────────────────────────────────────────┤
    │ Name │ Public Key                                               │
    ├──────┼──────────────────────────────────────────────────────────┤
    │ A    │ ADR5DUK5TCFGAUHQ76QP4V7GXIEBYTQ26STXCRTP4X2BJDMUMSBXWPWS │
    │ B    │ ABIDLGSR6YZBDGH3C4FK6WI6MMVR7U7M6MR3RI3GKEUCJD27UFHMOKY5 │
    │ STAN │ AAF5E33YN7DIKFQVR3R52XOM5GOM7SIILPITZXW3DFVU743QICIN3Z3J │
    │ SYS  │ ADHUAQZ7UEXIVAXNI6B4VUU4IBV2ZUXMQMSIYDFQQ5BFTSMWTKK2NWJN │ < this one
    ╰──────┴──────────────────────────────────────────────────────────╯
    
    # Helm config for NATS
    echo '
    # Authentication setup
    auth:
      enabled: true
    
      # Reference to the Operator JWT which will be mounted as a volume,
      # shared with the account server in this case.
      operatorjwt:
        configMap:
          name: operator-jwt
          key: KO.jwt
    
      # Public key of the System Account
      systemAccount: ADHUAQZ7UEXIVAXNI6B4VUU4IBV2ZUXMQMSIYDFQQ5BFTSMWTKK2NWJN
    
      resolver:
        type: URL
        url: "http://nats-account-server:9090/jwt/v1/accounts/"
    
    # Add system credentials to the nats-box instance for example
    natsbox:
      enabled: true
    
      credentials:
        secret:
          name: nats-sys-creds
          key: sys.creds
    ' > deploy-nats.yaml
    
    • Now deploy the NATS Server with the URL resolver configuration against the NATS Account Server running in the same cluster
    helm install nats -f deploy-nats.yaml ./helm/charts/nats/
    
    • The nats-box instance can now also be used with the injected system credentials:
    kubectl exec -n default -it nats-box -- /bin/sh -l
    nats-sub -creds /etc/nats-config/syscreds/sys.creds '>'
    nats-sub '>' # also works since $USER_CREDS env var is set automatically
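
    • As an optional sanity check (with the port-forward from the earlier step still running), the resolver URL can be queried directly for the SYS account JWT using the public key listed above:
    curl -s http://127.0.0.1:9090/jwt/v1/accounts/ADHUAQZ7UEXIVAXNI6B4VUU4IBV2ZUXMQMSIYDFQQ5BFTSMWTKK2NWJN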
    

    Signed-off-by: Waldemar Quevedo [email protected]

    opened by wallyqs 14
  • Helm: store.file.path does not seem to be working on NFS


    Hi,

    I have attached an NFS to our k8s cluster at path /var/mnt

    I have installed nats + stan using the latest helm charts (as of this writing) like this

    helm install nats-server nats/nats --namespace=nats 
    
    helm install stan-server nats/stan --namespace=nats \
      --set stan.nats.url=nats://nats-server:4222 \
      --set store.type=file \
      --set store.file.path=/var/mnt/nats \
      --set store.file.storageSize=1Gi
    

    The startup log from stan shows the NFS path being set

    [1] 2020/05/13 04:03:38.932852 [INF] STREAM: Starting nats-streaming-server[stan-server] version 0.17.0
    [1] 2020/05/13 04:03:38.933013 [INF] STREAM: ServerID: xDec25eoUVPr0wxKKaj7xS
    [1] 2020/05/13 04:03:38.933029 [INF] STREAM: Go version: go1.13.7
    [1] 2020/05/13 04:03:38.933041 [INF] STREAM: Git commit: [f4b7190]
    [1] 2020/05/13 04:03:38.943212 [INF] STREAM: Recovering the state...
    [1] 2020/05/13 04:03:38.945227 [INF] STREAM: No recovered state
    [1] 2020/05/13 04:03:39.196686 [INF] STREAM: Message store is FILE
    [1] 2020/05/13 04:03:39.196697 [INF] STREAM: Store location: /var/mnt/nats
    [1] 2020/05/13 04:03:39.196721 [INF] STREAM: ---------- Store Limits ----------
    [1] 2020/05/13 04:03:39.196723 [INF] STREAM: Channels:                  100 *
    [1] 2020/05/13 04:03:39.196725 [INF] STREAM: --------- Channels Limits --------
    [1] 2020/05/13 04:03:39.196726 [INF] STREAM:   Subscriptions:          1000 *
    [1] 2020/05/13 04:03:39.196727 [INF] STREAM:   Messages     :       1000000 *
    [1] 2020/05/13 04:03:39.196728 [INF] STREAM:   Bytes        :     976.56 MB *
    [1] 2020/05/13 04:03:39.196730 [INF] STREAM:   Age          :     unlimited *
    [1] 2020/05/13 04:03:39.196731 [INF] STREAM:   Inactivity   :     unlimited *
    [1] 2020/05/13 04:03:39.196732 [INF] STREAM: ----------------------------------
    [1] 2020/05/13 04:03:39.196734 [INF] STREAM: Streaming Server is ready
    

    However, when I look at my NFS, clients.dat, server.dat, and the subject directories are being created at /var/mnt instead.

    I created the folder at /var/mnt/nats beforehand, but that didn't seem to help. I also tried setting securityContext to null, but that didn't work either.
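
    For reference, one way to confirm where the server process is actually writing (pod name assumed to be stan-server-0, the default StatefulSet name for the release above):

    kubectl -n nats exec stan-server-0 -- ls -la /var/mnt
    kubectl -n nats exec stan-server-0 -- ls -la /var/mnt/nats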

    Is there something I am doing wrong?

    Thanks.

    opened by samstride 13
  • JetStream cluster no metadata leader


    I tried to use the Helm chart to deploy NATS with JetStream enabled, but I ran into the error below stating that the metadata leader is not created in JetStream.

    [6] 2022/02/23 00:31:43.526170 [INF] JetStream cluster no metadata leader
    [6] 2022/02/23 00:31:51.329692 [INF] JetStream cluster no metadata leader
    

    Below is the part of my setup that I think is related to this.

    Service to allow external access:

    apiVersion: v1
    kind: Service
    metadata:
      name: nats-lb
      namespace: nats-test
    spec:
      type: ClusterIP
      externalIPs: 
        - 192.168.1.1
      selector:
        app.kubernetes.io/name: nats
      ports:
        - protocol: TCP
          port: 4222
          targetPort: 4222
          name: nats
        - protocol: TCP
          port: 7422
          targetPort: 7422
          name: leafnodes
        - protocol: TCP
          port: 7522
          targetPort: 7522
          name: gateways
    

    Helm chart values:

    auth:
      enabled: true
    
      basic:
        users: 
          - foo
          
        accounts:
          foo:
            users:
            - user: hello
              pass: world
          js:
            jetstream: true
            users:
            - user: foo
    
    
      jetstream:
        enabled: true
    
        domain:
    
        encryption:
    
        #############################
        #                           #
        #  Jetstream Memory Storage #
        #                           #
        #############################
        memStorage:
          enabled: true
          size: 1Gi
    
        ############################
        #                          #
        #  Jetstream File Storage  #
        #                          #
        ############################
        fileStorage:
          enabled: true
          storageDirectory: /data
    
          # Use below block to create new persistent volume
          # only used if existingClaim is not specified
          size: 1Gi
          storageClassName: "rook-ceph-block-fast"
          accessModes:
            - ReadWriteOnce
          annotations:
          # key: "value"
    

    Please note that I am using the external IP of a K8s Service object to provide external access instead of a LoadBalancer. My current situation is that I can publish and subscribe to messages in NATS, but JetStream simply does not bootstrap successfully. The account info is as below:

    Connection Information:
    
                   Client ID: 11
                   Client IP: 172.24.4.123
                         RTT: 102.822502ms
           Headers Supported: true
             Maximum Payload: 1.0 MiB
           Connected Cluster: nats
               Connected URL: nats://192.168.1.1:4222
           Connected Address: 192.168.1.1:4222
         Connected Server ID: <...>
       Connected Server Name: my-nats-1
    
    JetStream Account Information:
    
       Could not obtain account information: JetStream system temporarily unavailable (10008)
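
    One way to look at this from inside the cluster (pod and namespace names assumed from the output above) is to port-forward the monitoring port of one pod and inspect the route and JetStream endpoints:

    kubectl -n nats-test port-forward my-nats-0 8222:8222 &
    curl -s http://127.0.0.1:8222/routez | grep num_routes
    curl -s http://127.0.0.1:8222/jsz | grep -E '"meta_cluster"|"streams"'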
    
    opened by skyrain 12
  • Allow overriding of service name via values.yaml


    Can we modify the name of the service object so that it can be customized via values.yaml? i.e. change it from this:

     {{- default .Release.Name -}}
    

    To something like so:

     {{- default .Release.Name .Values.nameOverride | trunc 63 | trimSuffix "-" | trimSuffix "." -}}
    

    I have a top-level chart that pulls in nats-operator alongside other charts, and there are cases where a top-level chart has two third-party sub-charts that use {{- default .Release.Name -}} for the name of a service object (resulting in a naming collision). It'd be great to be able to override this outright.

    opened by IAXES 11
  • nats-server-config-reloader is a 2 years old image


    Hello, the config reloader uses a 2-year-old image which should be rebuilt; it's risky to use such an old image. In addition, it only supports amd64, while nats and nats-streaming have arm64 support.

    opened by kamilgregorczyk 10
  • could not pick a Stream to operate on: context deadline exceeded


    Setup

    • NATS was deployed on 6 nodes k8s cluster with JetStream enabled in Production.
    • NATS Image: 2.8.2-alpine
    • NATS cluster enabled with 3 nodes
    • JetStream Stream was created with 3 replicas

    Actual behaviour

    NATS was deployed on a k8s cluster with JetStream enabled and the application was working. After some time (about 1 day), the JetStream server seemed to be unavailable, even though the consumers were still receiving events slowly. The NATS server was not returning any data for JetStream: nats account info returned No response from JetStream server, and nats stream info returned context deadline exceeded.

    We deleted two of the NATS pods, which provisioned new pods, and then the issue was resolved.

    The point is that the JetStream server was not healthy, but this status was not reported anywhere, and the readiness checks were still passing.

    Expected behaviour

    If the JetStream server was unhealthy, this should have been reflected in the readiness checks so that k8s could have restarted the NATS pods and the issue would have been resolved automatically.
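
    For example, a JetStream-aware readiness check could target the server's health endpoint; a rough sketch (assuming a server version that supports the js-server-only query parameter on /healthz, and the default monitoring port 8222):

    readinessProbe:
      httpGet:
        # only report ready when the JetStream subsystem itself is healthy (assumed parameter support)
        path: /healthz?js-server-only=true
        port: 8222
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3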

    NATS state

    $ nats stream info
    nats: error: could not pick a Stream to operate on: context deadline exceeded
    
    $ nats stream ls
    nats: error: could not list streams: context deadline exceeded, try --help
    
    $ nats account info
    Connection Information:
    
                   Client ID: 206
                   Client IP: 127.0.0.1
                         RTT: 21.682764ms
           Headers Supported: true
             Maximum Payload: 1.0 MiB
           Connected Cluster: xxx-nats
               Connected URL: nats://127.0.0.1:4222
           Connected Address: 127.0.0.1:4222
         Connected Server ID: xxxxx
       Connected Server Name: xxx-nats-0
    
    JetStream Account Information:
    
       No response from JetStream server
    

    NATS config:

    apiVersion: v1
    data:
      nats.conf: |
        # PID file shared with configuration reloader.
        pid_file: "/var/run/nats/nats.pid"
        ###############
        #             #
        # Monitoring  #
        #             #
        ###############
        http: 8222
        server_name: $POD_NAME
        ###################################
        #                                 #
        # NATS JetStream                  #
        #                                 #
        ###################################
        jetstream {
          max_mem: 1Gi
          store_dir: /data
          max_file:1Gi
        }
        ###################################
        #                                 #
        # NATS Full Mesh Clustering Setup #
        #                                 #
        ###################################
        cluster {
          port: 6222
          name: xxx-nats
          routes = [
            nats://xxx-nats-0.xxx-nats.xxx-system.svc.cluster.local:6222,nats://xxx-nats-1.xxx-nats.xxx-system.svc.cluster.local:6222,nats://xxx-nats-2.xxx-nats.xxx-system.svc.cluster.local:6222,
          ]
          cluster_advertise: $CLUSTER_ADVERTISE
          connect_retries: 120
        }
        lame_duck_duration: 120s
    kind: ConfigMap
    metadata:
      labels:
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: nats
        helm.sh/chart: nats-0.9.0
      name: xxx-nats-config
      namespace: xxx-system
      resourceVersion: "253626407"
      uid: xxx-09ce-4674-b47b-d13056ad6e6c
    

    NATS Pod logs:

    xxx-nats-1 logs:

    [30] 2022/07/27 16:16:18.361482 [INF] 100.96.4.73:46618 - rid:211 - Router connection closed: Stale Connection
    [30] 2022/07/27 16:16:19.418317 [INF] 100.96.4.73:6222 - rid:214 - Route connection created
    [30] 2022/07/27 16:16:19.449028 [INF] 100.96.4.73:53162 - rid:215 - Route connection created
    [30] 2022/07/27 16:16:19.449455 [INF] 100.96.4.73:53162 - rid:215 - Router connection closed: Duplicate Route
    [30] 2022/07/27 16:20:20.614364 [INF] 100.96.4.73:6222 - rid:214 - Router connection closed: Stale Connection
    [30] 2022/07/27 16:20:21.680957 [INF] 100.96.4.73:6222 - rid:216 - Route connection created
    [30] 2022/07/27 16:20:21.693075 [INF] 100.96.4.73:40032 - rid:217 - Route connection created
    [30] 2022/07/27 16:20:21.693375 [INF] 100.96.4.73:40032 - rid:217 - Router connection closed: Duplicate Route
    [30] 2022/07/27 16:24:22.832827 [INF] 100.96.4.73:6222 - rid:216 - Router connection closed: Stale Connection
    [30] 2022/07/27 16:24:22.832943 [ERR] 100.96.4.73:6222 - rid:216 - Error flushing: writev tcp 100.96.0.31:54260->100.96.4.73:6222: writev: broken pipe
    [30] 2022/07/27 16:24:23.844558 [INF] 100.96.4.73:54816 - rid:219 - Route connection created
    [30] 2022/07/27 16:24:23.889044 [INF] 100.96.4.73:6222 - rid:220 - Route connection created
    [30] 2022/07/27 16:24:23.889766 [INF] 100.96.4.73:6222 - rid:220 - Router connection closed: Duplicate Route
    [30] 2022/07/27 16:28:25.023357 [ERR] 100.96.4.73:54816 - rid:219 - Error flushing: writev tcp 100.96.0.31:6222->100.96.4.73:54816: writev: broken pipe
    [30] 2022/07/27 16:28:25.023389 [INF] 100.96.4.73:54816 - rid:219 - Router connection closed: Write Error
    [30] 2022/07/27 16:28:25.915414 [INF] 100.96.4.73:40108 - rid:221 - Route connection created
    [30] 2022/07/27 16:28:26.055999 [INF] 100.96.4.73:6222 - rid:222 - Route connection created
    [30] 2022/07/27 16:28:26.056789 [INF] 100.96.4.73:6222 - rid:222 - Router connection closed: Duplicate Route
    [30] 2022/07/27 16:30:08.367559 [ERR] 100.96.4.73:40108 - rid:221 - Error flushing: writev tcp 100.96.0.31:6222->100.96.4.73:40108: writev: broken pipe
    [30] 2022/07/27 16:30:08.367587 [INF] 100.96.4.73:40108 - rid:221 - Router connection closed: Write Error
    

    xxx-nats-2 logs:

    [30] 2022/07/27 15:19:50.103675 [INF] JetStream cluster new consumer leader for '$G > sap > b7f0696f20db40a892dcff93674f1449'
    [30] 2022/07/27 15:19:56.623791 [INF] JetStream cluster new consumer leader for '$G > sap > b7f0696f20db40a892dcff93674f1449'
    [30] 2022/07/27 15:23:50.936276 [INF] 100.96.0.31:55852 - rid:165 - Router connection closed: Stale Connection
    [30] 2022/07/27 15:23:51.967280 [INF] 100.96.0.31:6222 - rid:167 - Route connection created
    [30] 2022/07/27 15:23:52.095311 [INF] 100.96.0.31:34972 - rid:168 - Route connection created
    [30] 2022/07/27 15:23:52.095735 [INF] 100.96.0.31:34972 - rid:168 - Router connection closed: Duplicate Route
    [30] 2022/07/27 15:27:52.974935 [INF] 100.96.0.31:6222 - rid:167 - Router connection closed: Stale Connection
    [30] 2022/07/27 15:27:53.986202 [INF] 100.96.0.31:6222 - rid:169 - Route connection created
    [30] 2022/07/27 15:27:54.112633 [INF] 100.96.0.31:42390 - rid:170 - Route connection created
    [30] 2022/07/27 15:27:54.112833 [INF] 100.96.0.31:42390 - rid:170 - Router connection closed: Duplicate Route
    [30] 2022/07/27 15:31:55.064012 [INF] 100.96.0.31:6222 - rid:169 - Router connection closed: Stale Connection
    [30] 2022/07/27 15:31:56.096922 [INF] 100.96.0.31:6222 - rid:171 - Route connection created
    [30] 2022/07/27 15:31:56.203202 [INF] 100.96.0.31:49678 - rid:172 - Route connection created
    [30] 2022/07/27 15:31:56.203592 [INF] 100.96.0.31:49678 - rid:172 - Router connection closed: Duplicate Route
    
    opened by mfaizanse 8
  • Hard coded static routes not respecting namespace


    I'm one of the few users that can't install NATS using Helm, hence my recent changes to the install scripts/bootstrapping images to support namespacing. Using the install scripts, I'm installing NATS in the nats namespace. Unfortunately, the default configuration for the nats-config ConfigMap hard-codes some routes that aren't valid if you're running in any namespace other than default. I believe the relevant version of the config map is this one, though I'm not 100% sure, as the ConfigMaps themselves don't contain a reference to their original filename.

    https://github.com/nats-io/k8s/blob/main/nats-server/simple-nats.yml

    I'm not sure how this should be handled, though I'm more than happy to actually do the implementation myself if it'll take some legwork.

    The immediate fix I've implemented is that I manually edited the configmap to replace

            nats://nats-0.nats.default.svc:6222
            nats://nats-1.nats.default.svc:6222
            nats://nats-2.nats.default.svc:6222
    

    with

            nats://nats-0.nats.nats.svc:6222
            nats://nats-1.nats.nats.svc:6222
            nats://nats-2.nats.nats.svc:6222
    

    Which I've tested and it works just fine, but I'd prefer to fix it for everyone using the install script, and not just me :) Any ideas/guidance?
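
    A scripted version of the same edit (assuming the ConfigMap is named nats-config and lives in the nats namespace) could look like:

    # a rolling restart (or the config reloader) is still needed for running pods to pick this up
    kubectl -n nats get configmap nats-config -o yaml \
      | sed 's/\.default\.svc:/.nats.svc:/g' \
      | kubectl apply -f -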

    opened by jbury 8
  • [helm] Upgrade from 2.3.4 to 2.4.0 is not stable


    Just tried to upgrade from 2.3.4 to 2.4.0 version via helm chart.

      nats:
        image: nats:2.4.0-alpine
        resources:
          requests:
            cpu: 1
            memory: 2Gi
          limits:
            cpu: 2
            memory: 2Gi
    
        limits:
          maxConnections:
          maxSubscriptions:
          maxControlLine:
          maxPayload:
          writeDeadline:
          maxPending:
          maxPings:
          lameDuckDuration:
    
        terminationGracePeriodSeconds: 60
    
        logging:
          debug:
          trace:
          logtime:
          connectErrorReports:
          reconnectErrorReports:
    
        jetstream:
          enabled: true
    
          memStorage:
            enabled: true
    
          fileStorage:
            enabled: true
            storageDirectory: /data
            storageClassName: standard
            size: 30Gi
            accessModes:
              - ReadWriteOnce
    
      cluster:
        enabled: true
        replicas: 3
        noAdvertise: false
    
      auth:
        enabled: true
        systemAccount:
        resolver:
          type: memory
          configMap:
            name: nats-accounts
            key: resolver.conf
    
      natsbox:
        enabled: false
    
      exporter:
        enabled: true   
    

    Upgrade completely failed and streams stopped responding. Issues:

    [ERR] RAFT [vURD0sM3 - S-R3F-pU5w3rId] Error sending snapshot to follower [yLCaJyhQ]: raft: no snapshot available
    
    nats: no response from stream
    

    No leader on stream. Also saw duplicate streams.

    nats-js-0!, nats-js-1!, nats-js-2!
    

    I also tried to purge some problematic streams or change the leader (e.g. nats stream cluster step-down agent), but I only received errors or timeouts.
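
    For what it's worth, the meta-cluster state can also be inspected with the system account via the NATS CLI (the credentials below are placeholders for whatever system-account credentials the memory resolver defines):

    nats server report jetstream --server nats://localhost:4222 --user sys --password sys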

    opened by anjmao 8
  • 0.14.0 breaks generated YAML


    Relocating the data part of helm/charts/nats/templates/configmap.yaml to helm/charts/nats/files/configmap/nats.conf seems to have introduced an error (formatting/line-break).

    If you run helm template nats/nats, you get:

    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nats-core-config
      namespace: nats-ns
      labels:
        helm.sh/chart: nats-0.14.0
        app.kubernetes.io/name: nats
        app.kubernetes.io/instance: nats-core
        app.kubernetes.io/version: "2.7.3"
        app.kubernetes.io/managed-by: Helm
    data:
      nats.conf: |
        <... snip ...>
        #####################
        #                   #
        # TLS Configuration #
        #                   #
        #####################tls {
          cert_file: /etc/nats-certs/clients/nats-server-tls/tls.crt
          key_file:  /etc/nats-certs/clients/nats-server-tls/tls.key
          ca_file: /etc/nats-certs/clients/nats-server-tls/ca.crt
      }
    
    

    This seems to point at the "tls" subtree: the tls { block is appended directly after the comment banner instead of starting on its own line.

    I think this is the reason why I get Error: INSTALLATION FAILED: YAML parse error on nats/templates/configmap.yaml: error converting YAML to JSON: yaml: line 34: did not find expected key from helm install.
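
    A quick way to reproduce the rendering problem without installing anything (using the nats repo alias from the README above; my-values.yaml stands in for the values file that enables TLS):

    helm template nats-core nats/nats --version 0.14.0 -f my-values.yaml > rendered.yaml
    kubectl apply --dry-run=client -f rendered.yaml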

    opened by burner-account 7
  • Allow to specify the `websocket.handshake_timeout` for NATS chart


    We're currently experiencing an authorization error with NATS WS clients and the default 2-second timeout on the server side.

    NatsError: "Authentication Timeout"
    
    opened by manifest 7
  • Sidecar leafnode certificates


    This PR makes it possible to:

    • provide additional containers, volumes, and volume mounts to the statefulset
    • provide custom leafnodes TLS configuration options
    • provide extra configuration to the reloader sidecar
    opened by chris13524 7
  • Add resource specification for test-request-reply pod


    Hello team, could you please add an option to specify resources for the nats-box container in the test-request-reply pod? Other resources have it; only this one is missing.

    Thank you in advance.

    opened by ypoplavs 0
  • warning logged on upgrade


    With something like:

    nats:
      image: nats:alpine
    
      jetstream:
        enabled: true
    
        fileStorage:
          enabled: true
          size: "1Gi"
          storageDirectory: /data/
    
    cluster:
      enabled: true
      replicas: 3
    
    auth:
      enabled: true
      systemAccount: "$SYS"
      timeout: 5s
      basic:
        accounts:
          $SYS:
            users:
            - user: sys
              password: sys
    
    helm upgrade issue3722 ./helm/charts/nats  -f examples/b.yaml 
    coalesce.go:220: warning: cannot overwrite table with non table for nats.nats.image (map[pullPolicy:IfNotPresent repository:nats tag:2.9.9-alpine])
    
    opened by wallyqs 1
  • Nats box service account issue in Openshift environment


    Basic info: NATS Helm chart version 0.17.4; test environment: OpenShift cluster.

    I am passing some parameters to the NATS Helm chart as follows: (screenshot)

    I wish for a new serviceAccount not to be created, and to just use the already present serviceAccount "smo-ric-common-infra-sa". But in the OpenShift environment, this service account is not assigned to the nats-box pod and the "default" serviceAccount is assigned to nats-box instead. So this is an issue in the nats-box.yaml file.

    I had to add a few lines to the templates/nats-box.yaml file (screenshot), and then it works fine.

    I verified in the pod spec that my desired serviceAccount name is present. (screenshot)
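
    For reference, the change being described roughly amounts to threading a serviceAccountName into the nats-box pod spec; a hypothetical sketch (the value and template paths here are illustrative, not the chart's actual names):

    # templates/nats-box.yaml (hypothetical excerpt)
    spec:
      {{- with .Values.natsbox.serviceAccountName }}
      serviceAccountName: {{ . }}
      {{- end }}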

    opened by bhati-github 0
  • Prometheus NATS Exporter configuration add switch on/off part of metrics


    Could you add the ability to switch on/off the following metrics parameters in the NATS Exporter configuration?

    Example:

    metrics:
      channelz: true
      connz: true
      jsz: true
      gatewayz: true
      leafz: true
      routez: true
      serverz: true
      subz: true
      varz: true
    

    from https://github.com/nats-io/k8s/blob/d5e26bc18b143f00e8903356677def0b32e6be13/helm/charts/nats/templates/statefulset.yaml#L556

    
     args:
            - -connz
            - -routez
            - -subz
            - -varz
    
    to 
              args:
                {{- if .Values.config.metrics.varz }}
                - "-varz"
                {{- end }}
                {{- if .Values.config.metrics.channelz }}
                - "-channelz"
                {{- end }}
                {{- if .Values.config.metrics.connz }}
                - "-connz"
                {{- end }}
                {{- if .Values.config.metrics.routez }}
                - "-routez"
                {{- end }}
                {{- if .Values.config.metrics.serverz }}
                - "-serverz"
                {{- end }}
                {{- if .Values.config.metrics.subz }}
                - "-subz"
                {{- end }}
                {{- if .Values.config.metrics.gatewayz }}
                - "-gatewayz"
                {{- end }}
                {{- if .Values.config.metrics.jsz }}
                - "-jsz=all"
                {{- end }}
                {{- if .Values.config.metrics.leafz }}
                - "-leafz"
                {{- end }}
    
    opened by vvkovtun 0
  • Support 2-phase rolling of cluster authorization


    Add an option to cluster to support rolling the old authorization with minimal downtime, such as:

    cluster:
      authorizationOld:
        user: foo
        password: pwdOld
    

    This would be used during a 2-phase rolling of cluster auth https://github.com/nats-io/nats-server/issues/3490#issuecomment-1255587709

    opened by caleblloyd 0
Releases: nats-0.19.4
Owner
NATS - The Cloud Native Messaging System
NATS is a simple, secure and performant communications system for digital systems, services and devices.