The Ultimate Engineer Toolbox YouTube 🔨 🔧

Overview

A collection of tools and hands-on walkthroughs with source code.
The ultimate Swiss Army knife for DevOps, Developers and Platform Engineers.


Steps                            Playlist 📺        Source :octocat:
Learn Kubernetes ❄️              Kubernetes Guide   source
Learn about CI/CD tools 🐳       CI/CD Guide
Deploy Kubernetes to the cloud   Cloud Guide        source
Monitoring Kubernetes 🔍         Cloud Guide        source
Guide to Logging 📃              Cloud Guide        source
Guide to ServiceMesh 🌐          Cloud Guide        source

Docker Development Basics

Step ✔️                                                   Video 🎥   Source Code :octocat:
Working with Dockerfiles (.NET, Golang, Python, NodeJS)   Docker 1   source
Working with code (.NET, Golang, Python, NodeJS)          Docker 1   source
Docker Multistage explained                               Docker 1   source
Debugging Go in Docker                                    Docker 1   source
Debugging .NET in Docker                                  Docker 1   source
Debugging Python in Docker                                Docker 1   source
Debugging NodeJS in Docker                                Docker 1   source
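
Since multistage builds feature in the table above, here is a minimal sketch of the idea for a Go service (an illustration under assumed file names, not the repo's exact Dockerfile): compile in a full SDK image, then ship only the binary in a small runtime image.

    # build stage: full Go toolchain
    FROM golang:1.19-alpine AS build
    WORKDIR /src
    COPY . .                      # assumes a Go module in the build context
    RUN go build -o /out/app .

    # runtime stage: just the compiled binary
    FROM alpine:3.16
    COPY --from=build /out/app /usr/local/bin/app
    ENTRYPOINT ["/usr/local/bin/app"]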

Engineering Toolbox 🔨 🔧

Check out the toolbox website.


Issues
  • Fatal config file error for sentinel

    Hey @marcel-dempers, I copied your script exactly as it is for the sentinel and I'm getting the following output:

    Wed, Dec 1 2021 8:47:24 am | *** FATAL CONFIG FILE ERROR (Redis 6.2.3) ***
    Wed, Dec 1 2021 8:47:24 am | Reading the configuration file, at line 4
    Wed, Dec 1 2021 8:47:24 am | >>> 'sentinel monitor mymaster 6379 2'
    Wed, Dec 1 2021 8:47:24 am | Unrecognized sentinel configuration statement.

    Any ideas? I'm parsing through the docs right now but not seeing anything obviously wrong.

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: sentinel
    spec:
      serviceName: sentinel
      replicas: 3
      selector:
        matchLabels:
          app: sentinel
      template:
        metadata:
          labels:
            app: sentinel
        spec:
          initContainers:
          - name: config
            image: redis:6.2.3-alpine
            command: [ "sh", "-c" ]
            args:
              - |
                REDIS_PASSWORD=a-very-complex-password-here
                nodes=redis-0.redis.redis.svc.cluster.local,redis-1.redis.redis.svc.cluster.local,redis-2.redis.redis.svc.cluster.local
    
                for i in ${nodes//,/ }
                do
                    echo "finding master at $i"
                    MASTER=$(redis-cli --no-auth-warning --raw -h $i -a $REDIS_PASSWORD info replication | awk '{print $1}' | grep master_host: | cut -d ":" -f2)
                    if [ "$MASTER" == "" ]; then
                        echo "no master found"
                        MASTER=
                    else
                        echo "found $MASTER"
                        break
                    fi
                done
                echo "sentinel monitor mymaster $MASTER 6379 2" >> /tmp/master
                echo "port 5000
                sentinel resolve-hostnames yes
                sentinel announce-hostnames yes
                $(cat /tmp/master)
                sentinel down-after-milliseconds mymaster 5000
                sentinel failover-timeout mymaster 60000
                sentinel parallel-syncs mymaster 1
                sentinel auth-pass mymaster $REDIS_PASSWORD
                " > /etc/redis/sentinel.conf
                cat /etc/redis/sentinel.conf
            volumeMounts:
            - name: redis-config
              mountPath: /etc/redis/
          containers:
          - name: sentinel
            image: redis:6.2.3-alpine
            command: ["redis-sentinel"]
            args: ["/etc/redis/sentinel.conf"]
            ports:
            - containerPort: 5000
              name: sentinel
            volumeMounts:
            - name: redis-config
              mountPath: /etc/redis/
            - name: data
              mountPath: /data
          volumes:
          - name: redis-config
            emptyDir: {}
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "longhorn"
          resources:
            requests:
              storage: 50Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sentinel
    spec:
      clusterIP: None
      ports:
      - port: 5000
        targetPort: 5000
        name: sentinel
      selector:
        app: sentinel
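
    A note on the rejected config line: Sentinel expects sentinel monitor <name> <ip> <port> <quorum>, so 'sentinel monitor mymaster 6379 2' means $MASTER expanded to an empty string and the host argument vanished, most likely because none of the redis-* lookups succeeded. A minimal guard for the init script (a sketch, not the repo's fix) fails fast instead of writing a broken file:

    if [ -z "$MASTER" ]; then
        echo "no master found, refusing to write sentinel.conf"
        exit 1    # let Kubernetes restart the init container and retry discovery
    fi
    echo "sentinel monitor mymaster $MASTER 6379 2" >> /tmp/master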
    
    opened by funkel1989 13
  • errors while running in k8s cluster in GKE

    ➜ POC-admission-controller git:(master) ✗ kubectl logs -f example-webhook-78c8bc67b7-p95gd -n k8s-controller
    panic: pods is forbidden: User "system:serviceaccount:k8s-controller:example-webhook" cannot list resource "pods" in API group "" at the cluster scope

    goroutine 1 [running]:
    main.test()
        /app/test.go:14 +0x1a8
    main.main()
        /app/main.go:93 +0x392
    ➜ POC-admission-controller git:(master) ✗

    @marcel-dempers
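
    The panic is an RBAC denial: the webhook's service account has no cluster-wide permission to list pods. A hedged sketch of the missing grant (the ClusterRole/ClusterRoleBinding names are hypothetical; the subject comes straight from the error message):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: example-webhook    # hypothetical name
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: example-webhook    # hypothetical name
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: example-webhook
    subjects:
    - kind: ServiceAccount
      name: example-webhook        # from the error message
      namespace: k8s-controller    # from the error message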

    opened by mrrobothack1 10
  • Deployment failing in Kubernetes Deployments for Beginners

    Hey Marcel,

    Your videos are awesome.

    I was trying to follow the steps from your deployments video in my local environment but had a few issues. It looks like the deployments.yaml in your repo has changed a bit from the one shown in the deployments video.

    The current deployments.yaml in your master branch has a couple of issues:

    • the configmap & secrets volumes don't exist
    • the aimvector/golang:1.0.0 image fails due to missing /configs/config.json file

    Cheers,

    Tim
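
    For anyone hitting the same wall: both symptoms point at the ConfigMap and Secret objects that the Deployment mounts not existing yet; the missing /configs/config.json follows from the missing ConfigMap mount. A hedged sketch of creating them before applying the deployment (the resource and file names here are assumptions, not taken from the repo):

    kubectl create configmap example-config --from-file=config.json=./golang/configs/config.json
    kubectl create secret generic example-secret --from-literal=some-key=some-value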

    opened by wonboyn 6
  • docker build is failing on slave

    Hello,

    I followed your tutorial, but when I try to build a new job in the pipeline, it fails with an error:

    Started by user jenkins
    Obtained jenkins/JenkinsFile from git https://github.com/marcel-dempers/docker-development-youtube-series.git
    Running in Durability level: MAX_SURVIVABILITY
    [Pipeline] Start of Pipeline
    [Pipeline] node
    Still waiting to schedule task
    ‘jenkins-slave-mbm4d’ is offline
    Agent jenkins-slave-mbm4d is provisioned from template Kubernetes Pod Template
    ---
    apiVersion: "v1"
    kind: "Pod"
    metadata:
      annotations: {}
      labels:
        jenkins: "slave"
        jenkins/label: "jenkins-slave"
      name: "jenkins-slave-mbm4d"
    spec:
      containers:
      - env:
        - name: "JENKINS_SECRET"
          value: "********"
        - name: "JENKINS_TUNNEL"
          value: "jenkins:50000"
        - name: "JENKINS_AGENT_NAME"
          value: "jenkins-slave-mbm4d"
        - name: "JENKINS_NAME"
          value: "jenkins-slave-mbm4d"
        - name: "JENKINS_AGENT_WORKDIR"
          value: "/home/jenkins/agent"
        - name: "JENKINS_URL"
          value: "http://jenkins/"
        image: "aimvector/jenkins-slave"
        imagePullPolicy: "IfNotPresent"
        name: "jnlp"
        resources:
          limits: {}
          requests: {}
        securityContext:
          privileged: false
        tty: true
        volumeMounts:
        - mountPath: "/var/run/docker.sock"
          name: "volume-0"
          readOnly: false
        - mountPath: "/home/jenkins/agent"
          name: "workspace-volume"
          readOnly: false
        workingDir: "/home/jenkins/agent"
      hostNetwork: false
      nodeSelector:
        beta.kubernetes.io/os: "linux"
      restartPolicy: "Never"
      securityContext: {}
      volumes:
      - hostPath:
          path: "/var/run/docker.sock"
        name: "volume-0"
      - emptyDir:
          medium: ""
        name: "workspace-volume"
    
    Running on jenkins-slave-mbm4d in /home/jenkins/agent/workspace/test
    [Pipeline] {
    [Pipeline] stage
    [Pipeline] { (test pipeline)
    [Pipeline] sh
    + echo hello
    hello
    + git clone https://github.com/marcel-dempers/docker-development-youtube-series.git
    Cloning into 'docker-development-youtube-series'...
    + cd ./docker-development-youtube-series/golang
    + docker build . -t test
    time="2019-12-24T02:08:35Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: permission denied"
    context canceled
    [Pipeline] }
    [Pipeline] // stage
    [Pipeline] }
    [Pipeline] // node
    [Pipeline] End of Pipeline
    ERROR: script returned exit code 1
    Finished: FAILURE
    

    It is failing at

    time="2019-12-24T02:08:35Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: permission denied" context canceled
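
    The permission denied on /var/run/docker.sock means the user inside the jnlp container may not access the host's Docker socket. Two common workarounds (sketches, not from this tutorial): run the agent container as root via the pod template's securityContext, or rebuild aimvector/jenkins-slave so its user belongs to a group whose GID matches the group that owns the socket on the host. The first looks like this in the container spec:

    securityContext:
      runAsUser: 0    # blunt but effective: root can open /var/run/docker.sock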

    opened by vvavepacket 6
  • Debugging containerized python flask app with non-standard code organization with VS Code

    I am trying to debug my Python 3 Flask app using VS Code. I have the Docker and Python extensions installed for the Remote WSL mode of VS Code. The relevant part of the docker-compose.yml is:

      web:
        build: 
          context: .
        image: web
        container_name: web
        ports:
          - 5004:5000
        command: python manage.py run -h 0.0.0.0
        volumes:
          - .:/usr/src/app
        environment:
          - FLASK_DEBUG=1
          - APP_SETTINGS=project.server.config.DevelopmentConfig
        networks:
          - webnet
    

    When I follow the instructions as per your video, I end up commenting out the command statement in my docker-compose file and adding an additional stage to my Dockerfile like so:

    ENV FLASK_APP=manage.py
    
    # ###########START NEW IMAGE : DEBUGGER ###################
    FROM base as debug
    RUN pip install ptvsd
    
    WORKDIR /usr/src/app
    CMD python -m ptvsd --host 0.0.0.0 --port 5678 --wait --multiprocess -m flask run -h 0.0.0.0 -p 5000
    

    and a launch.json file like:

    // .vscode/launch.json
    {
      "configurations": [
        {
          "name": "Python Attach",
          "type": "python",
          "request": "attach",
          "pathMappings": [
            {
              "localRoot": "${workspaceFolder}",
              "remoteRoot": "/usr/src/app"
            }
          ], 
          "port": 5678, 
          "host": "127.0.0.1"
        }
      ]
    }
    

    Now here comes the kicker. My app is composed like so:

    manage.py          (has the statement app = create_app())
    project/
      server/
        __init__.py    (this is where create_app() is defined using Flask() with appropriate config)
        config.py
        main/          (rest of the app)

    When I try to debug this, I can apply breakpoints in manage.py and they trigger fine on app start, but my breakpoints in the views, which are located in .\project\server\main\views.py, do not get triggered. I am guessing this has to do either with how I am initiating the debug sub-process, or with my pathMappings in launch.json.

    Any suggestions to debug this are appreciated. Sorry about the long post.
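
    One thing that might be worth ruling out (an assumption on my part, not a confirmed fix): with FLASK_DEBUG=1, Flask's reloader runs the app in a child process, and breakpoints can end up bound only in the parent. Disabling the reloader keeps a single process for the debugger to attach to, so --multiprocess is no longer needed:

    CMD python -m ptvsd --host 0.0.0.0 --port 5678 --wait -m flask run -h 0.0.0.0 -p 5000 --no-reload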

    opened by chintanp 5
  • security issue on redis/kubernetes

    https://github.com/marcel-dempers/docker-development-youtube-series/blob/master/storage/redis/kubernetes/sentinel/sentinel-statefulset.yaml#L22

    I think it's a security issue, but maybe not; I'm new to Kubernetes. This line seems wrong: I guess the password should come from a Secret or a ConfigMap.
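
    For reference, the usual pattern (a sketch; the Secret name below is hypothetical) is to keep the password out of the manifest and inject it into the container's environment from a Secret:

    env:
    - name: REDIS_PASSWORD
      valueFrom:
        secretKeyRef:
          name: redis-credentials    # hypothetical Secret holding the password
          key: password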

    good first issue 
    opened by portlek 3
  • standalone-prometheus is unable to scrape example-app

    Hello,

    Following your prometheus-operator example under 1.14.8 because there isn't one in 1.18.4. When I add the standalone-prometheus instance, the example-app target shows the error http://:5000/metrics. See attached screenshot. The code is straight out of your repo. I have double-checked all of the labels and they all look correct. Is this because the app is not serving the /metrics endpoint? This is the example from kubernetes/deployments, services, secrets, configmaps, right? Did something change here? Is there another example application to try?

    I tried using your python-application on port 80 and received a different error, shown in the second screenshot. Is there any way to update the example so that it works?

    Screen Shot 2021-08-19 at 5 13 11 PM

    Screen Shot 2021-08-19 at 5 43 58 PM
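
    One observation on the first error (hedged, since the screenshots aren't visible here): a target of http://:5000/metrics has a port but no host, which usually means service discovery found no endpoint address to scrape, for example because the Service selector matches no pods or the port names don't line up between the Service and the scrape config. A quick check, assuming the example-app Service name from the repo:

    kubectl get endpoints example-app -o wide    # empty ENDPOINTS means the selector matches no ready pods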

    opened by darnone 3
  • Could not connect to Redis at redis-06379: Name does not resolve

    Hi,

    While deploying sentinel I am getting the error below:

    finding master at redis-0.redis.redis.svc.cluster.local
    Could not connect to Redis at redis-0.redis.redis.svc.cluster.local:6379: Name does not resolve
    no master found
    finding master at redis-1.redis.redis.svc.cluster.local
    Could not connect to Redis at redis-1.redis.redis.svc.cluster.local:6379: Name does not resolve
    no master found
    finding master at redis-2.redis.redis.svc.cluster.local
    Could not connect to Redis at redis-2.redis.redis.svc.cluster.local:6379: Name does not resolve
    no master found
    port 5000
    sentinel monitor mymaster 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000
    sentinel parallel-syncs mymaster 1
    sentinel auth-pass mymaster a-very-complex-password-here

    Here are my nodes:

    kubectl get nodes
    NAME        STATUS   ROLES                      AGE    VERSION
    rbqn01h02   Ready    controlplane,etcd,worker   295d   v1.17.5
    rbqn01h03   Ready    controlplane,etcd,worker   295d   v1.17.5
    rbqn01h04   Ready    controlplane,etcd,worker   295d   v1.17.5
    rbqn04h02   Ready    controlplane,etcd,worker   295d   v1.17.5

    Here are my redis pods:

    kubectl get pods -n redis-cluster1
    NAME      READY   STATUS    RESTARTS   AGE
    redis-0   1/1     Running   0          22h
    redis-1   1/1     Running   0          22h
    redis-2   1/1     Running   0          22h

    I even tried the sentinel deployment after adding the above-mentioned nodes to the init container startup script, here:

    REDIS_PASSWORD=a-very-complex-password-here nodes=

    Please suggest what I need to add in nodes so that sentinel can connect to the redis cluster.
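
    A hedged observation from the output above: the pods live in the redis-cluster1 namespace, but the script resolves redis-0.redis.redis.svc.cluster.local, where the third label is the namespace redis. StatefulSet pod DNS follows <pod>.<headless-service>.<namespace>.svc.cluster.local, so if the headless Service is named redis in redis-cluster1 (an assumption), the list would be:

    nodes=redis-0.redis.redis-cluster1.svc.cluster.local,redis-1.redis.redis-cluster1.svc.cluster.local,redis-2.redis.redis-cluster1.svc.cluster.local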

    opened by nishit93-hub 3
  • TLS error in basic secret injection video

    Hey Marcel,

    I was trying to replicate the setup from the video "Basic secret injection for microservices on Kubernetes using Vault". I got to the point of starting the example app deployment and found that the pod starts but stays in the "Init:0/1" status.

    The vault injector pod logs show that it received the mutating webhook call:

    kubectl -n vault-example logs vault-example-agent-injector-7cdd648787-tv4lb
    2020-08-12T22:55:14.523Z [INFO]  handler: Starting handler..
    Listening on ":8080"...
    Updated certificate bundle received. Updating certs...
    2020-08-12T23:08:00.894Z [INFO]  handler: Request received: Method=POST URL=/mutate?timeout=30s

    The logs from the vault pod show a TLS error:

    kubectl -n vault-example logs vault-example-0
    ==> Vault server configuration:

             Api Address: https://10.244.0.6:8200
                     Cgo: disabled
         Cluster Address: https://10.244.0.6:8201
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "[::]:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "enabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: file
                 Version: Vault v1.3.1
    

    2020-08-12T22:50:10.226Z [INFO]  proxy environment: http_proxy= https_proxy= no_proxy=

    ==> Vault server started! Log data will stream in below:

    2020-08-12T22:50:50.416Z [INFO]  core.cluster-listener: starting listener: listener_address=[::]:8201
    2020-08-12T22:50:50.416Z [INFO]  core.cluster-listener: serving cluster requests: cluster_listen_address=[::]:8201
    2020-08-12T22:50:50.416Z [INFO]  core: post-unseal setup starting
    2020-08-12T22:50:50.417Z [INFO]  core: loaded wrapping token key
    2020-08-12T22:50:50.417Z [INFO]  core: successfully setup plugin catalog: plugin-directory=
    2020-08-12T22:50:50.418Z [INFO]  core: successfully mounted backend: type=system path=sys/
    2020-08-12T22:50:50.418Z [INFO]  core: successfully mounted backend: type=identity path=identity/
    2020-08-12T22:50:50.419Z [INFO]  core: successfully mounted backend: type=cubbyhole path=cubbyhole/
    2020-08-12T22:50:50.421Z [INFO]  core: successfully enabled credential backend: type=token path=token/
    2020-08-12T22:50:50.421Z [INFO]  core: restoring leases
    2020-08-12T22:50:50.421Z [INFO]  rollback: starting rollback manager
    2020-08-12T22:50:50.422Z [INFO]  identity: entities restored
    2020-08-12T22:50:50.422Z [INFO]  expiration: lease restore complete
    2020-08-12T22:50:50.422Z [INFO]  identity: groups restored
    2020-08-12T22:50:50.422Z [INFO]  core: post-unseal setup complete
    2020-08-12T22:50:50.423Z [INFO]  core: vault is unsealed
    2020-08-12T23:01:10.547Z [INFO]  core: enabled credential backend: path=kubernetes/ type=kubernetes
    2020-08-12T23:05:51.876Z [INFO]  core: successful mount: namespace= path=secret/ type=kv
    2020-08-12T23:06:38.902Z [INFO]  http: TLS handshake error from 127.0.0.1:52998: remote error: tls: unknown certificate

    And the logs from the init container show an error trying to authenticate with vault:

    kubectl -n vault-example logs basic-secret-74b4fdbcdc-2zmtl -c vault-agent-init
    ==> Vault server started! Log data will stream in below:

    ==> Vault agent configuration:
                 Cgo: disabled
           Log Level: info
             Version: Vault v1.3.1

    2020-08-12T23:08:01.568Z [INFO]  sink.file: creating file sink
    2020-08-12T23:08:01.568Z [INFO]  sink.file: file sink configured: path=/home/vault/.token mode=-rw-r-----
    2020-08-12T23:08:01.568Z [INFO]  auth.handler: starting auth handler
    2020-08-12T23:08:01.568Z [INFO]  auth.handler: authenticating
    2020-08-12T23:08:01.568Z [INFO]  sink.server: starting sink server
    2020-08-12T23:08:01.568Z [INFO]  template.server: starting template server
    2020/08/12 23:08:01.569034 [INFO] (runner) creating new runner (dry: false, once: false)
    2020/08/12 23:08:01.569618 [WARN] (clients) disabling vault SSL verification
    2020/08/12 23:08:01.569658 [INFO] (runner) creating watcher
    2020-08-12T23:08:11.580Z [ERROR] auth.handler: error authenticating: error="Put https://vault-example.vault-example.svc:8200/v1/auth/kubernetes/login: dial tcp: lookup vault-example.vault-example.svc on 10.96.0.10:53: read udp 10.244.0.8:50821->10.96.0.10:53: read: connection refused" backoff=2.156164762
    2020-08-12T23:08:13.703Z [INFO]  auth.handler: authenticating
    2020-08-12T23:08:23.712Z [ERROR] auth.handler: error authenticating: error="Put https://vault-example.vault-example.svc:8200/v1/auth/kubernetes/login: dial tcp: lookup vault-example.vault-example.svc on 10.96.0.10:53: read udp 10.244.0.8:41477->10.96.0.10:53: i/o timeout" backoff=2.29257713

    In terms of TLS - I used the exact TLS config/process indicated in your ssl_generate_self_signed.txt file.

    Any suggestions would be greatly appreciated.

    Thanks

    Tim
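
    A note for anyone comparing symptoms (an observation, not a confirmed diagnosis): the init container's errors are DNS failures, not TLS ones. Lookups of vault-example.vault-example.svc against 10.96.0.10:53 are refused or time out, which points at cluster DNS (CoreDNS/kube-dns) rather than the certificate setup. The lone TLS handshake error from 127.0.0.1 in the server log is consistent with a local client (e.g. a port-forwarded CLI) that doesn't trust the self-signed CA. A quick DNS check, assuming standard labels:

    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl -n vault-example run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup vault-example.vault-example.svc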

    opened by wonboyn 3
  • ERROR: Could not find specified Maven installation 'maven_3_5_0'

    I am trying a Jenkins Maven pipeline. My sample Jenkinsfile is below:

    node('jenkins-slave') {
        stage ('Compile Stage') {
            withMaven(maven : 'maven_3_5_0') {
                sh 'mvn clean compile'
            }
        }
    
        stage ('Testing Stage') {
            withMaven(maven : 'maven_3_5_0') {
                sh 'mvn test'
            }
        }
        
        stage ('Deployment Stage') {
            withMaven(maven : 'maven_3_5_0') {
                sh 'mvn deploy'
            }
        }
    }
    

    And when I build it, running on jenkins-slave, I get the error below:

    Running on jenkins-slave-3tc8l in /home/jenkins/agent/workspace/ins-multi-branch-pipeline_master
    [Pipeline] {
    [Pipeline] stage
    [Pipeline] { (Compile Stage)
    [Pipeline] withMaven
    [withMaven] Options: []
    [withMaven] Available options: 
    [withMaven] using JDK installation provided by the build agent
    [Pipeline] // withMaven
    [Pipeline] }
    [Pipeline] // stage
    [Pipeline] }
    [Pipeline] // node
    [Pipeline] End of Pipeline
    ERROR: Could not find specified Maven installation 'maven_3_5_0'.
    Finished: FAILURE
    

    What is the recommended way to fix this?

    Based on the error, there is no Maven installation on the jenkins-slave. So do we have to edit the Dockerfile of the jenkins-slave, install Maven there, and rebuild and push it again? Is this the best approach, or what other alternatives do we have?
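
    For anyone landing here: withMaven(maven: 'maven_3_5_0') resolves the name against Manage Jenkins > Global Tool Configuration > Maven installations, so the error just means no installation is registered under that name. Registering one with "Install automatically" ticked lets Jenkins provision Maven on the agent at build time, with no image rebuild. A sketch of the equivalent declarative pipeline, assuming the tool has been registered under that exact name:

    pipeline {
        agent { label 'jenkins-slave' }
        tools {
            maven 'maven_3_5_0'    // must match a configured Maven installation name
        }
        stages {
            stage('Compile') {
                steps { sh 'mvn clean compile' }
            }
        }
    }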

    opened by vvavepacket 3
  • MountVolume.SetUp failed for volume "jenkins" - role is not authorized to perform: elasticfilesystem:DescribeMountTargets

    Hello

    I'm facing an issue during the Jenkins deployment in the pod. Apparently, the role is not allowed to perform elasticfilesystem:DescribeMountTargets. Has anyone faced the same issue?

    Events:
      Type     Reason       Age                   From               Message
      ----     ------       ----                  ----               -------
      Normal   Scheduled    7m18s                 default-scheduler  Successfully assigned jenkins/jenkins-755df69664-279s6 to ip-192-168-83-52.ca-central-1.compute.internal
      Warning  FailedMount  59s (x11 over 7m17s)  kubelet            MountVolume.SetUp failed for volume "jenkins" : rpc error: code = Internal desc = Could not mount "fs-022497d2da99cd928:/" at "/var/lib/kubelet/pods/aa09bb27-3be5-41dd-86a8-4ac5484b0e01/volumes/kubernetes.io~csi/jenkins/mount": mount failed: exit status 1
        Mounting command: mount
        Mounting arguments: -t efs -o tls fs-022497d2da99cd928:/ /var/lib/kubelet/pods/aa09bb27-3be5-41dd-86a8-4ac5484b0e01/volumes/kubernetes.io~csi/jenkins/mount
        Output: Failed to resolve "fs-022497d2da99cd9277.efs.ca-central-1.amazonaws.com". The file system mount target ip address cannot be found, please pass mount target ip address via mount options.
        User: arn:aws:sts::054550991362:assumed-role/eksctl-reelcruit-eks-cluster-node-NodeInstanceRole-1AEPZZE5D06NE/i-0f83f000ca47c460a is not authorized to perform: elasticfilesystem:DescribeMountTargets on the specified resource
      Warning  FailedMount  43s (x3 over 5m15s)   kubelet            Unable to attach or mount volumes: unmounted volumes=[jenkins], unattached volumes=[jenkins kube-api-access-hhp87]: timed out waiting for the condition
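
    The last lines of the event narrow it down: the node instance role is missing EFS permissions, so the CSI driver cannot look up mount targets. A hedged sketch of a policy statement that would cover it (attach it to the node role or the CSI driver's IRSA role; scope Resource down in real use):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "elasticfilesystem:DescribeMountTargets",
            "elasticfilesystem:ClientMount",
            "elasticfilesystem:ClientWrite"
          ],
          "Resource": "*"
        }
      ]
    }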

    opened by jbpadilha 2
  • Bump fluentd and fluent-plugin-kubernetes_metadata_filter in /monitoring/logging/fluentd/kubernetes/dockerfiles

    Bumps fluentd and fluent-plugin-kubernetes_metadata_filter. These dependencies needed to be updated together. Updates fluentd from 1.11.5 to 1.15.1

    Release notes

    Sourced from fluentd's releases.

    Fluentd v1.15.1

    Bug Fix

    • #3808 Add support for concurrent append in out_file

    Misc

    • #3829 in_tail: Show more information on skipping update_watcher

    Contributors to this release (Alphabetical order)

    • Fujimoto Seiji
    • Takuro Ashie

    Fluentd v1.15.0 comes with 4 enhancements and 6 bug fixes. A huge thank you to the 4 contributors who made this release possible.

    Enhancement

    • #3535 #3771 in_tail: Add log throttling in files based on group rules
    • #3680 Add dump command to fluent-ctl
    • #3712 Handle YAML configuration format on configuration file
    • #3768 Add restart_worker_interval parameter in <system> directive to set the interval to restart workers that have stopped for some reason.

    Bug fixes

    • #3711 out_forward: Fix to update timeout of cached sockets
    • #3754 in_tail: Fix a possible crash on file rotation when follow_inodes true
    • #3755 output: Fix a possible crash of flush thread
    • #3766 in_tail: Fix crash bugs on Ruby 3.1 on Windows
    • #3774 in_tail: Fix a bug that in_tail cannot open non-ascii path on Windows
    • #3782 Fix a bug that fluentd doesn't release its own log file even after rotated by external tools

    Misc

    • #3489 in_tail: Simplify TargetInfo related code
    • #3700 Fix a wrong issue number in CHANGELOG
    • #3701 server helper: Add comments to linger_timeout behavior about Windows
    • #3724 service_discovery: Fix typo
    • #3745 #3753 #3767 #3783 #3784 #3785 #3787 test: Fix unstable tests and warnings

    Contributors to this release (Alphabetical order)

    • Daijiro Fukuda
    • Hiroshi Hatake
    • JiHyunSong
    • Takuro Ashie

    Fluentd v1.14.6 comes with 2 enhancements and 4 bug fixes. A huge thank you to the 7 contributors who made this release possible.

    ... (truncated)

    Changelog

    Sourced from fluentd's changelog.

    Release v1.15.1 - 2022/07/27

    Bug Fix

    Misc

    Release v1.15.0 - 2022/06/29

    Enhancement

    Bug fixes

    Misc

    • in_tail: Simplify TargetInfo related code fluent/fluentd#3489
    • Fix a wrong issue number in CHANGELOG fluent/fluentd#3700
    • server helper: Add comments to linger_timeout behavior about Windows

    ... (truncated)

    Commits
    • cf241f0 v1.15.1
    • a83d6b8 Merge pull request #3829 from fluent/issue3614-add-log
    • 260a08a in_tail: Show more information on skipping update_watcher
    • caec5b7 Tweak the new inter-worker lock support
    • 49970e4 Merge pull request #3808 from fujimotos/sf/multi-workers-lock
    • 2c9f3f3 Add more tests for inter-worker locking
    • f3838cb Fix variable name
    • d9c793e Apply feeback from kou on GitHub#3808
    • d31631e Add testcase for worker lock functions
    • 0a7a590 Apply feedback from ashie on GitHub#3808
    • Additional commits viewable in compare view

    Updates fluent-plugin-kubernetes_metadata_filter from 2.5.2 to 3.0.1

    Changelog

    Sourced from fluent-plugin-kubernetes_metadata_filter's changelog.

    Release Notes

    2.9.4

    As of this release, the 'de_dot' functionality is deprecated and will be removed in future releases. Ref: fabric8io/fluent-plugin-kubernetes_metadata_filter#320

    v2.1.4

    The use of use_journal is DEPRECATED. If this setting is not present, the plugin will attempt to figure out the source of the metadata fields from the following:

    • If lookup_from_k8s_field true (the default) and the following fields are present in the record: docker.container_id, kubernetes.namespace_name, kubernetes.pod_name, kubernetes.container_name, then the plugin will use those values as the source to use to lookup the metadata
    • If use_journal true, or use_journal is unset, and the fields CONTAINER_NAME and CONTAINER_ID_FULL are present in the record, then the plugin will parse those values using container_name_to_kubernetes_regexp and use those as the source to lookup the metadata
    • Otherwise, if the tag matches tag_to_kubernetes_name_regexp, the plugin will parse the tag and use those values to look up the metadata

    v2.1.x

    As of the release 2.1.x of this plugin, it no longer supports parsing the source message into JSON and attaching it to the payload. The following configuration options are removed:

    • merge_json_log
    • preserve_json_log

    One way of preserving JSON logs is through the parser plugin. They can be parsed like this:

    <filter kubernetes.**>
      @type parser
      key_name log
      <parse>
        @type json
        json_parser json
      </parse>
      replace_invalid_sequence true
      reserve_data true # this preserves unparsable log lines
      emit_invalid_record_to_error false # In case of unparsable log lines keep the error log clean
      reserve_time # the time was already parsed in the source, we don't want to overwrite it with current time.
    </filter>
    
    Commits

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump yajl-ruby from 1.4.1 to 1.4.3 in /monitoring/logging/fluentd/kubernetes/dockerfiles

    Bumps yajl-ruby from 1.4.1 to 1.4.3.

    dependencies 
    opened by dependabot[bot] 0
  • selfsigned/issuer.yaml - error

    I am following the steps from docker-development-youtube-series/kubernetes/cert-manager/Readme.md.

    I get an error while doing kubectl apply -f ./selfsigned/issuer.yaml:

    Error from server (InternalError): error when creating "./selfsigned/issuer.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority
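
    A common cause of exactly this x509 error (hedged; not confirmed for this report) is applying an Issuer before the cert-manager webhook's serving certificate has been issued and its CA injected. Waiting for the webhook deployment and retrying often clears it:

    kubectl -n cert-manager wait --for=condition=Available deployment/cert-manager-webhook --timeout=120s
    kubectl apply -f ./selfsigned/issuer.yaml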

    opened by forvaidya 0
  • nginx ingress deployment issue

    I've followed the YouTube video and deployed using the yaml files, but I had to modify some v1beta1 to v1.

    My services don't get an external IP and the nginx-ingress-controller pods get status CrashLoopBackOff. I can see that the health check fails.

    I'm not running this on DockerDesktop. Instead I've installed four Ubuntu 22.04 VMs on a proxmox server.

    Events:
      Type     Reason     Age                From               Message
      ----     ------     ----               ----               -------
      Normal   Scheduled  58s                default-scheduler  Successfully assigned ingress-nginx/nginx-ingress-controller-9776fbf95-7lsr6 to k8s-worker1
      Normal   Pulling    57s                kubelet            Pulling image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0"
      Normal   Pulled     36s                kubelet            Successfully pulled image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0" in 20.653688932s
      Warning  Unhealthy  36s                kubelet            Readiness probe failed: Get "http://10.244.1.6:10254/healthz": dial tcp 10.244.1.6:10254: connect: connection refused
      Normal   Created    16s (x3 over 36s)  kubelet            Created container nginx-ingress-controller
      Normal   Started    16s (x3 over 36s)  kubelet            Started container nginx-ingress-controller
      Normal   Pulled     16s (x2 over 35s)  kubelet            Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0" already present on machine
      Warning  BackOff    7s (x7 over 34s)   kubelet            Back-off restarting failed container
    

    This is a list of everything in my cluster:

    NAMESPACE              NAME                                             READY   STATUS             RESTARTS        AGE
    default                pod/nginx-deployment-6595874d85-9lqkl            1/1     Running            0               160m
    default                pod/nginx-deployment-6595874d85-j8qj4            1/1     Running            0               160m
    default                pod/nginx-deployment-6595874d85-rd5gv            1/1     Running            0               160m
    default                pod/nginx-server                                 1/1     Running            0               160m
    example-app            pod/example-deploy-7988897cb8-jdbxg              1/1     Running            0               88m
    example-app            pod/example-deploy-7988897cb8-xrgf9              1/1     Running            0               88m
    ingress-nginx          pod/nginx-ingress-controller-9776fbf95-7lsr6     0/1     CrashLoopBackOff   6 (3m40s ago)   9m39s
    ingress-nginx          pod/nginx-ingress-controller-9776fbf95-lgbpx     0/1     CrashLoopBackOff   6 (3m32s ago)   9m39s
    kube-system            pod/coredns-6d4b75cb6d-jzkbt                     1/1     Running            0               3h30m
    kube-system            pod/coredns-6d4b75cb6d-z68cf                     1/1     Running            0               3h30m
    kube-system            pod/etcd-k8s-master                              1/1     Running            3               3h30m
    kube-system            pod/kube-apiserver-k8s-master                    1/1     Running            3               3h30m
    kube-system            pod/kube-controller-manager-k8s-master           1/1     Running            0               3h30m
    kube-system            pod/kube-flannel-ds-6kx27                        1/1     Running            0               3h26m
    kube-system            pod/kube-flannel-ds-qr6bp                        1/1     Running            0               164m
    kube-system            pod/kube-flannel-ds-rk4m2                        1/1     Running            0               174m
    kube-system            pod/kube-flannel-ds-tdjhs                        1/1     Running            0               3h2m
    kube-system            pod/kube-proxy-6t8g8                             1/1     Running            0               164m
    kube-system            pod/kube-proxy-984kj                             1/1     Running            0               3h30m
    kube-system            pod/kube-proxy-ddnlc                             1/1     Running            0               3h2m
    kube-system            pod/kube-proxy-ld844                             1/1     Running            0               174m
    kube-system            pod/kube-scheduler-k8s-master                    1/1     Running            3               3h30m
    kubernetes-dashboard   pod/dashboard-metrics-scraper-7bfdf779ff-sszdv   1/1     Running            0               157m
    kubernetes-dashboard   pod/kubernetes-dashboard-6cdd697d84-7jht4        1/1     Running            0               157m
    
    NAMESPACE              NAME                                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
    default                service/kubernetes                  ClusterIP      10.96.0.1        <none>        443/TCP                      3h30m
    default                service/nginx-http                  ClusterIP      10.104.87.158    <none>        80/TCP                       159m
    example-app            service/example-service             LoadBalancer   10.98.61.110     <pending>     80:31537/TCP                 84m
    ingress-nginx          service/ingress-nginx               LoadBalancer   10.96.36.79      <pending>     80:32451/TCP,443:30304/TCP   50m
    kube-system            service/kube-dns                    ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       3h30m
    kubernetes-dashboard   service/dashboard-metrics-scraper   ClusterIP      10.104.144.188   <none>        8000/TCP                     157m
    kubernetes-dashboard   service/kubernetes-dashboard        ClusterIP      10.108.35.49     <none>        443/TCP                      157m
    
    NAMESPACE     NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
    kube-system   daemonset.apps/kube-flannel-ds   4         4         4       4            4           <none>                   3h26m
    kube-system   daemonset.apps/kube-proxy        4         4         4       4            4           kubernetes.io/os=linux   3h30m
    
    NAMESPACE              NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
    default                deployment.apps/nginx-deployment            3/3     3            3           160m
    example-app            deployment.apps/example-deploy              2/2     2            2           88m
    ingress-nginx          deployment.apps/nginx-ingress-controller    0/2     2            0           9m39s
    kube-system            deployment.apps/coredns                     2/2     2            2           3h30m
    kubernetes-dashboard   deployment.apps/dashboard-metrics-scraper   1/1     1            1           157m
    kubernetes-dashboard   deployment.apps/kubernetes-dashboard        1/1     1            1           157m
    
    NAMESPACE              NAME                                                   DESIRED   CURRENT   READY   AGE
    default                replicaset.apps/nginx-deployment-6595874d85            3         3         3       160m
    example-app            replicaset.apps/example-deploy-7988897cb8              2         2         2       88m
    ingress-nginx          replicaset.apps/nginx-ingress-controller-9776fbf95     2         2         0       9m39s
    kube-system            replicaset.apps/coredns-6d4b75cb6d                     2         2         2       3h30m
    kubernetes-dashboard   replicaset.apps/dashboard-metrics-scraper-7bfdf779ff   1         1         1       157m
    kubernetes-dashboard   replicaset.apps/kubernetes-dashboard-6cdd697d84        1         1         1       157m
    

    Can you guess what's most likely the issue here?
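
    Not an official answer, but the versions make the likely culprit guessable: controller image 0.33.0 predates the networking.k8s.io/v1 Ingress API and watches only the v1beta1 resources, which were removed in Kubernetes 1.22, so on a current kubeadm cluster it crashes on startup (hence the CrashLoopBackOff right after the probe failure). A controller release that speaks the v1 API should help, e.g. (URL assumed from the upstream repo layout):

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/baremetal/deploy.yaml

    The <pending> EXTERNAL-IP on the LoadBalancer services is a separate, expected symptom: a bare-metal/Proxmox cluster has no cloud load balancer, so you would use NodePort or something like MetalLB instead.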

    opened by eloekset 2
Releases
  • kubernetes-monitoring-1 (Feb 9, 2020)

    Monitoring cluster components on Kubernetes using Prometheus.

    This is a release tag of the source code covered in the video below. Video: https://youtu.be/TrGgvkJfslw

  • prometheus-node-exporter-1 (Feb 4, 2020)

    Node Exporter for Prometheus on Kubernetes

    This is a release tag of the source code covered in the video below. Video: https://youtu.be/1-tRiThpFrY

  • prometheus-operator-1 (Jan 18, 2020)

  • debugging-python (Sep 30, 2019)

  • part5 (Aug 4, 2019)

    This is a snapshot of the repo covering part 5. Once you have completed this, you can move on to part 6. Refer to the readme for all parts (tagged)

  • part4 (Jul 18, 2019)

    This is a snapshot of the repo covering part 4. Once you have completed this, you can move on to part 5. Refer to the readme for all parts (tagged)

  • part3 (Jun 30, 2019)

    This is a snapshot of the repo covering part 3. Once you have completed this, you can move on to part 4. Refer to the readme for all parts (tagged)

  • part2 (Jun 20, 2019)

    This is a snapshot of the repo covering part 2. Once you have completed this, you can move on to part 3. Refer to the readme for all parts (tagged)

  • part1 (Jun 16, 2019)

    This is a snapshot of the repo covering part 1. Once you have completed this, you can move on to part 2. Refer to the readme for all parts (tagged)

Owner
Marcel Dempers