A lightweight, cloud-native data transfer agent and aggregator

Overview


English | 中文

Loggie is a lightweight, high-performance, cloud-native log agent and aggregator written in Go. It supports multiple pipelines and pluggable components (see the configuration sketch below):

  • One-stack logging solution: supports data transfer, filtering, parsing, alerting, and more
  • Cloud-native: uses native Kubernetes CRDs
  • Production-grade: a full range of observability, automated operation, and reliability capabilities
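As a minimal sketch of a standalone pipeline (the field names are assumed from the source/sink examples further down this page, not an authoritative reference; consult the Loggie docs for the exact schema), a file source wired to a dev sink might look like:

    pipelines:
      - name: demo
        sources:
          - type: file
            name: access
            paths:
              - /var/log/app/*.log
        sink:
          type: dev
          printEvents: true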

Architecture

Documentation

Setup

User Guide

Reference

License

Apache-2.0

Comments
  • How are logs inside containers collected?

    Do the pod YAMLs need specific emptyDir or hostPath volumes before the logs can be collected? Below is one of our demo YAMLs:

    volumeMounts:
      - mountPath: /home/admin/logs
        name: 01606d0d6354456aa546dfbb36d8a764-datadir
        subPath: logs

    volumes:
      - name: 01606d0d6354456aa546dfbb36d8a764-datadir
        persistentVolumeClaim:
          claimName: >-
            01606d0d6354456aa54
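    A hedged sketch of a LogConfig that could collect from that mounted path (the name, label selector, and sink reference below are hypothetical, and note that collecting log files from a pod's persistent volume is only listed as supported from v1.3.0-rc.0, #341, in the releases below):

    apiVersion: loggie.io/v1beta1
    kind: LogConfig
    metadata:
      name: demo-app              # hypothetical name
      namespace: default
    spec:
      selector:
        type: pod
        labelSelector:
          app: demo-app           # hypothetical label; match your pod
      pipeline:
        sources: |
          - type: file
            name: applog
            paths:
              - /home/admin/logs/*.log   # the in-container mountPath above
        sinkRef: dev              # hypothetical Sink CR name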
    
    area/discovery status/Todo 
    opened by liguozhong 8
  • logconfig elasticsearch error

    {"level":"warn","time":"2022-03-21T17:26:04Z","caller":"/go/src/loggie.io/loggie/pkg/interceptor/retry/interceptor.go:175","message":"interceptor/retry retry buffer size(2) too large"} {"level":"error","time":"2022-03-21T17:26:04Z","caller":"/go/src/loggie.io/loggie/pkg/pipeline/pipeline.go:267","message":"consumer batch fail,err: elasticsearch client not initialized yet"}

    ES version: 7.x

    opened by EverCurse 7
  • goccy/go-yaml handles blank strings incorrectly

    goccy/go-yaml handles blank strings incorrectly; this was introduced by #242. I ran into the problem while parsing default containerd logs.

    CODE

    package main
    
    import (
    	"fmt"
    
    	errYaml "github.com/goccy/go-yaml"
    	okYaml "gopkg.in/yaml.v2"
    )
    
    func main() {
    	// v holds a value that is a single blank space.
    	v := struct {
    		Key string
    	}{
    		Key: " ",
    	}
    
    	// gopkg.in/yaml.v2 quotes the blank string correctly.
    	d1, _ := okYaml.Marshal(v)
    	fmt.Printf("%s\n%s\n", "YES", string(d1))
    
    	// github.com/goccy/go-yaml emits an empty value instead.
    	d2, _ := errYaml.Marshal(v)
    	fmt.Printf("%s\n%s\n", "NO", string(d2))
    }
    

    OUTPUT

    YES
    key: ' '
    
    NO
    key:
    

    In what area(s)?

    /area interceptor

    What version of Loggie?

    v1.2.0+

    Expected Behavior

    Load interceptor successfully

    Actual Behavior

    Got a warning log: get processor error: Key: 'SplitConfig.Separator' Error: Field validation for 'Separator' failed on the 'required' tag.

    Steps to Reproduce the Problem

    1. Configure the Interceptor CRD:
        - type: normalize
          processors:
            - split:
                separator: ' '
                max: 4
                keys: [ "time", "stream", "F", "message" ]
            - drop:
                targets: [ "F", "body" ]
            - rename:
                convert:
                  - from: "message"
                    to: "body"
            - underRoot:
                keys:
                  - kubernetes
    
    2. kubectl delete pod, then kubectl logs pod
    bug 
    opened by polym 6
  • The interceptors cannot remove fields such as stream and time

    Environment: Kubernetes 1.21 + Docker

    LogConfig:

    apiVersion: loggie.io/v1beta1
    kind: LogConfig
    metadata:
      name: nginx
      namespace: default
    spec:
      pipeline:
        interceptorsRef: nginx-interce
        sinkRef: nginx-sink
        sources: |
          - type: file
            name: mylog
            containerName: nginx
            fields:
              topic: "nginx-access"
            matchFields:
              labelKey: [app]
            paths:
              - stdout
      selector:
        labelSelector:
          app: nginx
        type: pod

    Interceptor:

    apiVersion: loggie.io/v1beta1
    kind: Interceptor
    metadata:
      name: nginx-interce
    spec:
      interceptors: |
        - type: normalize
          name: stdproc
          belongTo: ["mylog"]
          processors:
            - jsonDecode:
                target: body
            - drop:
                targets: ["stream", "time", "body"]
            - rename:
                convert:
                  - from: "log"
                    to: "message"

    Sink:

    apiVersion: loggie.io/v1beta1
    kind: Sink
    metadata:
      name: nginx-sink
    spec:
      sink: |
        type: dev
        printEvents: true

    After deployment, a request was sent to nginx; the Loggie pod log shows:

    { "fields": { "namespace": "default", "nodename": "10.0.20.28", "podname": "nginx-6799fc88d8-td4sc", "containername": "nginx", "logconfig": "nginx", "topic": "nginx-access" }, "body": "{\"log\":\"10.203.2.0 - - [21/Mar/2022:14:47:44 +0000] \\\"GET / HTTP/1.1\\\" 200 615 \\\"-\\\" \\\"curl/7.29.0\\\" \\\"-\\\"\\n\",\"stream\":\"stdout\",\"time\":\"2022-03-21T14:47:44.246358969Z\"}" }

    In the output, the log field was not renamed, and the stream and time fields were not dropped.

    opened by xiaojiayu404 6
  • Loggie fails to write to Kafka

    2022-08-08 19:52:01 ERR pkg/pipeline/pipeline.go:341 > consumer batch failed: write to kafka: kafka write errors (2048/2048)

    Ask your question here:


    question 
    opened by bruse-peng 5
  • Loggie cannot collect kube-event data

    Ask your question here:

    Version: loggie-v1.3.0-rc.0, using inClusterConfig to read the configuration file.

    The configuration is as follows:


    apiVersion: loggie.io/v1beta1
    kind: Interceptor
    metadata:
      name: jsondecode
    spec:
      interceptors: |
        - type: normalize
          name: json
          processors:
            - jsonDecode: ~
            - drop:
                targets: ["body"]

    apiVersion: loggie.io/v1beta1
    kind: ClusterLogConfig
    metadata:
      name: kubeevent
    spec:
      selector:
        type: cluster
        cluster: aggregator
      pipeline:
        sources: |
          - type: kubeEvent
            name: event
        interceptorRef: jsondecode
        sinkRef: k8sbgy-kube-eventer

    The kube-event data is not collected.

    question 
    opened by Michael754267513 4
  • v1.3.0-rc.0: error when multiline log collection and Elasticsearch sink dynamic index names are configured together

    What version of Loggie?

    v1.3.0-rc.0

    Expected Behavior

    1. Logs are collected normally, with multiple lines merged by the given regex and written to the specified Elasticsearch index.
    2. No error logs from the Loggie backend.

    Actual Behavior

    1. The expected index is not created in Elasticsearch.
    2. The Loggie backend logs report errors:

    2022-09-03 17:03:28 INF pkg/eventbus/export/logger/logger.go:141 > [metric]: {"filesource":{},"queue":{"public/catalog-channel":{"capacity":2048,"fillPercentage":0,"pipeline":"public/catalog","queueType":"channel","size":0}},"reload":{"ReloadTotal":6},"sink":{}}
    2022-09-03 17:03:39 ERR pkg/pipeline/pipeline.go:353 > consumer batch failed: send events to elasticsearch: request to elasticsearch response error: {"errors":true,"items":[{"index":{"_index":"--2022-09","type":"doc","status":400,"error":{"type":"invalid_index_name_exception","reason":"Invalid index name [--2022-09], must not start with '', '-', or '+'","index":"--2022-09"}}}]}
    2022-09-03 17:03:45 WRN pkg/interceptor/retry/interceptor.go:191 > interceptor/retry retry buffer size(2) too large
    2022-09-03 17:03:45 INF pkg/interceptor/retry/interceptor.go:214 > next retry duration: 690ms
    2022-09-03 17:03:45 ERR pkg/pipeline/pipeline.go:353 > consumer batch failed: send events to elasticsearch: request to elasticsearch response error: {"errors":true,"items":[{"index":{"_index":"--2022-09","type":"doc","status":400,"error":{"type":"invalid_index_name_exception","reason":"Invalid index name [--2022-09], must not start with '', '-', or '+'","index":"--2022-09"}}}]}

    Steps to Reproduce the Problem

    1. When deploying Loggie, add discovery.kubernetes.typePodFields:

       typePodFields:
         logconfig: "${_k8s.logconfig}"
         namespace: "${_k8s.pod.namespace}"
         workloadkind: "${_k8s.workload.kind}"
         workloadname: "${_k8s.workload.name}"
         nodename: "${_k8s.node.name}"
         nodeip: "${_k8s.node.ip}"
         poduid: "${_k8s.pod.uid}"
         podname: "${_k8s.pod.name}"
         podip: "${_k8s.pod.ip}"
         containerid: "${_k8s.pod.container.id}"
         containername: "${_k8s.pod.container.name}"
         containerimage: "${_k8s.pod.container.image}"

    2. Create a LogConfig whose source enables multiline collection and whose sink uses a dynamic index name:

       apiVersion: loggie.io/v1beta1
       kind: LogConfig
       metadata:
         name: catalog
         namespace: public
       spec:
         selector:
           type: pod
           labelSelector:
             app: catalog
             release: catalog
         pipeline:
           sources: |
             - type: file
               name: logfile
               addonMeta: true
               paths:
                 - /catalog/logs/*.log
               multi:
                 active: true
                 pattern: "^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}"
           sink: |
             type: elasticsearch
             hosts: ["192.168.0.1:9200"]
             index: "${fields.workloadname}-${fields.namespace}-${+YYYY-MM}"
             etype: "doc"
             codec:
               type: json
               beatsFormat: true

    bug 
    opened by qiuzhj 4
  • After pods are recreated in batches, logs from the new pods are not collected

    What version of Loggie?

    1.2.0

    Expected Behavior

    A LogConfig matches pods labeled workload.user.cattle.io/workloadselector=statefulSet-game-s6-game-core-v4-cluster.

    Logs were collected normally before, but the pods were recreated. After the recreation, the original pod paths no longer exist, and Loggie logs the corresponding errors.

    Inside the Loggie container, the new pods are visible under /var/log/pods, yet their logs are not collected either.

    describe logconfig shows events as none, but the labelSelector is correct: it worked before, was never changed, and get pod still matches the new pods.

    Actual Behavior

    Logs from the newly created pods are not collected.

    Steps to Reproduce the Problem

    bug 
    opened by guanbear 4
  • stdout log collection is interrupted

    What version of Loggie?

    v1.2.0

    Expected Behavior

    stdout logs are collected completely

    Actual Behavior

    Log collection stopped around 14:13.

    Errors appeared at the same time:

    2022-07-05 14:19:55 ERR pkg/eventbus/listener/pipeline/listener.go:148 > pipeline(default/stdout) stop timeout, because no component event
    2022-07-05 14:20:25 ERR pkg/eventbus/listener/pipeline/listener.go:148 > pipeline(default/stdout) stop timeout, because no component event
    ... (the same stop-timeout error repeats every 30s through 14:32:25) ...
    2022-07-05 14:23:43 WRN pkg/source/file/watch.go:172 > remove file(/var/log/pods/default_lxcfs-9smvp_b4abc637-282f-47ba-8ee0-3f98805e7bb8/lxcfs/15413.log) os notify fail: can't remove non-existent inotify watch for: /var/log/pods/default_lxcfs-9smvp_b4abc637-282f-47ba-8ee0-3f98805e7bb8/lxcfs/15413.log
    
    

    Steps to Reproduce the Problem

    apiVersion: loggie.io/v1beta1
    kind: LogConfig
    metadata:
      name: stdout
      namespace: default
    spec:
      pipeline:
        sinkRef: kafka
        sources: |
          - type: file
            name: stdout
            paths:
              - stdout
            addonMeta: true
            ignoreOlder: 12h
            workerCount: 128
            fields:
              cluster: dev
              topic: terminal
      selector:
        type: pod
    
    --- 
    apiVersion: loggie.io/v1beta1
    kind: Sink
    metadata:
      name: kafka
    spec:
      sink: |
        type: kafka
        brokers: ["10.10.xx.xx:9092"]
        topic: "dev_log_${fields.topic}"
        isYK: true
    
    ---
    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: loggie-config-falcon-loggie
      namespace: default
      labels:
        app.kubernetes.io/managed-by: Helm
    data:
      loggie.yml: |-
        loggie:
          discovery:
            enabled: true
            kubernetes:
              containerRuntime: docker
              k8sFields:
                agent: loggie
                containerName: ${_k8s.pod.container.name}
                logConfig: ${_k8s.logconfig}
                namespace: ${_k8s.pod.namespace}
                nodeIp: ${_k8s.node.ip}
                nodeName: ${_k8s.node.name}
                podName: ${_k8s.pod.name}
              parseStdout: true
              rootFsCollectionEnabled: true
          http:
            enabled: true
            port: 9196
          monitor:
            listeners:
              filesource: null
              filewatcher: null
              pipeline: null
              queue: null
              reload: null
              sink: null
            logger:
              enabled: true
              period: 30s
          reload:
            enabled: true
            period: 10s
    
    bug 
    opened by zanderme 3
  • panic: sync: negative WaitGroup counter

    What version of Loggie?

    v1.2.0

    Expected Behavior

    No panic occurs.

    Actual Behavior

    A panic occurs and causes the Loggie pod to restart:

    goroutine 90 [running]:
    sync.(*WaitGroup).Add(0xc000aed000, 0xc0065abf00)
    	/usr/local/go/src/sync/waitgroup.go:74 +0x105
    sync.(*WaitGroup).Done(...)
    	/usr/local/go/src/sync/waitgroup.go:99
    github.com/loggie-io/loggie/pkg/source/file.(*Watcher).checkWaitForStopTask(0xc000c8c480)
    	/pkg/source/file/watch.go:759 +0x25d
    github.com/loggie-io/loggie/pkg/source/file.(*Watcher).maintenance(0xc005097f50)
    	/pkg/source/file/watch.go:737 +0x28
    github.com/loggie-io/loggie/pkg/source/file.(*Watcher).run(0xc000c8c480)
    	/pkg/source/file/watch.go:628 +0x2aa
    created by github.com/loggie-io/loggie/pkg/source/file.newWatcher
    	/pkg/source/file/watch.go:80 +0x291
    

    Steps to Reproduce the Problem

    Configuration:

    apiVersion: loggie.io/v1beta1
    kind: Sink
    metadata:
      name: kafka
    spec:
      sink: |
        type: kafka
        brokers: ["10.10.xxx.xxx:9092"]
        topic: "dev_log_${fields.topic}"
    
    ---
    
    apiVersion: loggie.io/v1beta1
    kind: LogConfig
    metadata:
      name: error
      namespace: default
    spec:
      pipeline:
        sinkRef: kafka
        sources: |
          - type: file
            name: error
            paths:
              - /var/log/service/err/*/log.*
            addonMeta: true
            ignoreOlder: 12h
            workerCount: 128
            fields:
              cluster: dev
              topic: error
      selector:
        type: pod
    
    ---
    
    apiVersion: loggie.io/v1beta1
    kind: LogConfig
    metadata:
      name: business
      namespace: default
    spec:
      pipeline:
        sinkRef: kafka
        sources: |
          - type: file
            name: business
            paths:
              - /var/log/service/business/*/log.*
            addonMeta: true
            ignoreOlder: 12h
            workerCount: 128
            fields:
              cluster: dev
              topic: business
      selector:
        type: pod
    
    ---
    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: loggie-config-falcon-loggie
      namespace: default
      labels:
        app.kubernetes.io/managed-by: Helm
    data:
      loggie.yml: |-
        loggie:
          discovery:
            enabled: true
            kubernetes:
              containerRuntime: docker
              k8sFields:
                agent: loggie
                containerName: ${_k8s.pod.container.name}
                logConfig: ${_k8s.logconfig}
                namespace: ${_k8s.pod.namespace}
                nodeIp: ${_k8s.node.ip}
                nodeName: ${_k8s.node.name}
                podName: ${_k8s.pod.name}
              parseStdout: true
              rootFsCollectionEnabled: true
          http:
            enabled: true
            port: 9196
          monitor:
            listeners:
              filesource: null
              filewatcher: null
              pipeline: null
              queue: null
              reload: null
              sink: null
            logger:
              enabled: true
              period: 30s
          reload:
            enabled: true
            period: 10s
    
    
    bug 
    opened by zanderme 3
  • Lost "leases" resource in loggie-rbac.yaml

    What version of Loggie?

    1.1.0

    Expected Behavior

    loggie-aggregator runs normally

    Actual Behavior

    get error: error retrieving resource lock kube-system/loggie-leader-election-key: leases.coordination.k8s.io "loggie-leader-election-key" is forbidden: User "system:serviceaccount:loggie-aggregator:loggie-aggregator" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"


    Steps to Reproduce the Problem

    Add the missing RBAC permissions to allow loggie-aggregator to perform its leader election.

    bug 
    opened by guanbear 3
  • Can LogConfig spec.selector support OR expressions?

    opened by Mercurius-lei 0
  • Sink: support HTTP

    Send data to an http(s) endpoint via POST or PUT, in JSON format. Users need to configure:

    • the endpoint or URI
    • the auth type, basic or bearer, with username/password or an API token
    • the batch size: each log event is a JSON object and a batch is a JSON array (or another form)
    • the compression type, e.g. gzip
    • HTTPS/TLS settings, etc.

    Reference: https://vector.dev/docs/reference/configuration/sinks/http/
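    A hypothetical sketch of what such a sink's configuration could look like (every option name below is illustrative, since this sink does not exist yet):

    sink:
      type: http                        # requested sink type (illustrative)
      endpoint: "https://logs.example.com/ingest"
      method: POST
      auth:
        type: bearer                    # or basic with username/password
        token: "my-api-token"           # hypothetical placeholder
      batchSize: 1000                   # events per JSON-array batch
      compression: gzip
      tls:
        caCertPath: /etc/ssl/ca.pem     # HTTPS/TLS settings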

    enhancement 
    opened by dongbin86 0
  • Kafka sink: topic creation question

    Ask your question here:

    With the sink configured as kafka and the Kafka cluster set to auto-create topics, Loggie does not seem to create the topic automatically when writing; writes only succeed after the topic is created manually. Is that expected? The configuration is as follows:

    apiVersion: loggie.io/v1beta1
    kind: Sink
    metadata:
      creationTimestamp: "2022-11-07T09:23:38Z"
      generation: 5
      name: kafka
      resourceVersion: "49137555"
      uid: 86085072-d28e-484c-974f-71371a8a8594
    spec:
      sink: |
        type: kafka
        topic: "huawei_prod_log_${fields._topic_}"
        brokers:
          - kafka-0.kafka-headless.default:9092
          - kafka-1.kafka-headless.default:9092
          - kafka-2.kafka-headless.default:9092
    
    question 
    opened by carmenlee52 2
  • Feat: alert frame and rules

    Proposed Changes:

    • customized alert and webhook
    • more customized logAlert interceptor rules
    • adapt to multi-line log alert

    Which issue(s) this PR fixes:

    Fixes #

    Additional documentation:

    
    
    opened by ziyu-zhao 0
Releases (v1.3.0)
  • v1.3.0-rc.0(Sep 1, 2022)

    Release v1.3.0-rc.0

    :star2: Features

    • Add transformer interceptor (#267)
    • Add api /api/v1/helper for debugging (#291)
    • Monitoring listener for normalize interceptor (#224)
    • Add typeNodeFields in kubernetes discovery, change k8sFields to typePodFields, add pod.uid, pod.container.id, pod.container.image to typePodFields (#348)
    • File source support any lineDelimiter (#261)
    • Logconfig/Clusterlogconfig labelSelector support wildcard * (#269)
    • Add override annotation in logConfig/clusterLogConfig (#269)
    • Add go test in makefile and github action (#282)
    • Support total events in dev source (#307)
    • Change the file sink's default file permissions from 600 to 644 (#304)
    • Change goccy/go-yaml to gopkg.in/yaml.v2 (#317)
    • Add schema interceptor (#340)
    • Support collect log files from pod pv (#341)
    • Support dynamic container logs (#334)
    • Put the encoding conversion module in the source (#344)
    • Support the [a.b] syntax when a fields key is literally a.b rather than a nested field (#301) (#297); see the sketch after this list
    • Loki sink replaces '.', '/' and '-' characters in field names with '_' (#347)
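    A minimal sketch of the distinction named in the [a.b] entry above (the field names and the drop processor are illustrative, not taken from that change):

      # nested field: a.b addresses {"a": {"b": ...}}
      - drop:
          targets: ["a.b"]
      # literal dotted key: [a.b] addresses the single key {"a.b": ...}
      - drop:
          targets: ["[a.b]"]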

    :bug: Bug Fixes

    • Set beatsFormat timestamp to UTC in sink codec (#268)
    • Fix pipeline stop block (#280) (#284) (#289) (#305) (#309) (#335)
    • Fix addonMeta panic (#285)
    • Move private fields from header to meta in grpc source (#286)
    • Fix clusterlogconfig npe when deleting (#306)
    • Fix prometheus export topic (#314)
    • Fix watchTask stop panic; fix big line collect block (#315)
    • Fix type-assert panic when pod deletion receives DeletedFinalStateUnknown objects (#323)
    • Fix maxbytes order (#326)
    • Check if the pod is ready when handling logconfig events (#328)
    • Stop pipeline when config is empty (#330)
    • Change leaderElection Identity in kubeEvent source (#349)
  • v1.2.0(Jun 16, 2022)

    Release v1.2.0

    :star2: Features

    • Support container root filesystem log collection, so Loggie can collect log files inside a container even if the pod does not mount any volumes. #209
    • Add sls sink for aliyun. #172
    • Support specifying more time intervals (like 1d, 1w, 1y) for the "ignoreOlder" config of file source. #153 (by @iamyeka)
    • Add ignoreError in normalize interceptor. #205
    • Add system listener, which can export CPU/Memory metrics of Loggie. #88
    • Set log.jsonFormat default to false, so Loggie no longer prints JSON-format logs by default. #186
    • Add callerSkipCount and noColor logger flag. #185
    • file sink support codec. #215 (by @lyp256)
    • Support linux arm64. #229 (by @ziyu-zhao)
    • addk8smeta support adding workload metadata. #221 (by @lyp256)
    • Support extra labels in prometheusExporter source. #233
    • Logconfig matchFields labels, annotations and env support wildcards (*). #235 (by @LeiZhang-Hunter)
    • Add source codec for file source. #239
    • Some fields patterns now support ${_env.XX}, ${+date}, ${object}, ${_k8s.XX}; see the sketch after this list. #248
    • Support renaming the body of events in the normalize interceptor. #238
    • Use local time format in pattern fields. #234
    • Refactor reader process chain. #166
    • Optimize merge strategy of default interceptors. #174 (by @iamyeka)
    • Merge defaults configurations support recursion. #212
    • Cache env for optimizing performance when using fieldsFromEnv. #178 (by @iamyeka)
    • Update yaml lib to goccy/go-yaml. #242
    • Omit empty pipeline configurations. #244
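    As a sketch of the fields-pattern support above (#248), a sink topic can interpolate several kinds of values; the exact keys are illustrative, though ${fields.topic} and ${+YYYY-MM} also appear in configurations elsewhere on this page:

      sink:
        type: kafka
        brokers: ["kafka:9092"]
        # ${fields.X}: per-source fields, ${_env.X}: environment variables,
        # ${+YYYY-MM}: date pattern, ${_k8s.X}: kubernetes discovery metadata
        topic: "log_${fields.topic}_${+YYYY-MM}"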

    :bug: Bug Fixes

    • Fix json error with go 1.18. #164
    • Fix list logconfigs by namespace when handling pod events. #173
    • Fix watcher stop timeout in file source. #188
    • Fix MultiFileWriter in file sink. #202 (by @lyp256)
    • Fix pipeline stop may block. #207
    • Fix unstable interceptor sort. #216
    • Fix grpc source ack panic. #218
    • Fix sub path in kubernetes discovery. #236 (by @yuzhichang)
  • v1.1.0(Apr 15, 2022)

    Loggie release v1.1.0

    :star2: Features

    • Loki sink supported, so you can send messages to Loki now. #119

      eg:

      sink:
        type: loki
        url: "http://localhost:3100/loki/api/v1/push"
      
    • Add spec.interceptors and spec.sink fields in ClusterLogConfig/LogConfig CRD #120

      eg:

      apiVersion: loggie.io/v1beta1
      kind: LogConfig
      metadata:
        name: tomcat
        namespace: default
      spec:
        selector:
          type: pod
          labelSelector:
            app: tomcat
        pipeline:
          sources: |
            - type: file
              name: common
              paths:
                - stdout
          sink: |
            type: dev
            printEvents: false
          interceptors: |
            - type: rateLimit
              qps: 90000
      
    • A new codec type raw is added. #137

      eg:

          sink:
            type: dev
            codec:
              type: raw
      
    • A new approach to external configurations, -config.from=env, allows Loggie to read configs from env. #132 (by @chaunceyjiang)

    • Normalize interceptor now supports the fmt processor, which can reformat fields by patterns. #141

      eg:

      interceptors:
      - type: normalize
        processors:
        - fmt:
            fields:
              d: new-${a.b}-${c}
      
    • Adding enabled fields in source/interceptor. #140

    • pipelines.yml is now flushed immediately when a Sink/Interceptor CR is updated. #139

    • Upgraded client-go version to 0.22.2 #99 (by @fengshunli)

    • Sniffing is disabled in the elasticsearch sink by default. #138

    • Updated kubeEvent source #144

      • leader election support.

      • blacklist of namespaces support.

      • adding the watchLatestEvents field, which allows Loggie to watch only the latest events from K8s.

      eg:

            - type: kubeEvent
              name: eventer
              kubeconfig: ~/.kube/config
              watchLatestEvents: true
              blackListNamespaces: ["default"]
      
    • Brace expansion {alt1,...} support: matches a sequence of characters if one of the comma-separated alternatives matches. #143

      eg:

      log files following:

      1. /tmp/loggie/service/order/access.log
      2. /tmp/loggie/service/order/access.log.2022-04-11
      3. /tmp/loggie/service/pay/access.log
      4. /tmp/loggie/service/pay/access.log.2022-04-11
      

      then the path field in filesource could be: /tmp/loggie/**/access.log{,.[2-9][0-9][0-9][0-9]-[01][0-9]-[0123][0-9]}

    • Adding local in the normalize interceptor, for parsing timestamps in local time. #145

    • A simple way to add file state when collecting log files: enable addonMeta, and Loggie will add more file metadata to events. #148

      eg:

      {
        "body": "this is test",
        "state": {
          "pipeline": "local",
          "source": "demo",
          "filename": "/var/log/a.log",
          "timestamp": "2006-01-02T15:04:05.000Z",
          "offset": 1024,
          "bytes": 4096,
          "hostname": "node-1"
        }
      }
      
    • Adding pod.ip and node.ip in K8s discovery. #149

    • Customized documentId supported in elasticsearch sink. #146

    :bug: Bug Fixes

    • Fix read chan size error in filesource #102
    • Update the default value of maxAcceptedBytes in the kafka source, which could cause a panic at component start. #112
  • v1.0.0(Mar 9, 2022)

    Loggie release v1.0.0

    First Loggie Release! :sparkler:

    :star2: Features

    add components:

    • sources:

      • file
      • kafka
      • grpc
      • unix
      • kubeEvent
      • prometheusExporter
      • dev
    • interceptors:

      • normalize
      • rateLimit
      • addK8sMeta
      • logAlert
      • metric
      • retry
      • maxbytes
    • sink:

      • elasticsearch
      • kafka
      • grpc
      • file
      • dev
    • discovery:

      • Kubernetes discovery, introduced CRD: ClusterLogConfig/LogConfig/Interceptor/Sink
    • monitor:

      • filesource
      • filewatcher
      • logAlert
      • queue
      • sink
      • reload

    :bug: Bug Fixes
