Describe the bug
Collected data fails to be reported to Kafka and an error is thrown. Kafka itself is healthy: Logstash can consume test data from the same cluster, so the cluster is not the problem.

iLogtail Running Environment
Please provide the following information:
- ilogtail version: the latest image (`ilogtail:latest`)
- Yaml configuration:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ilogtail-user-cm
  namespace: default
data:
  cloud_stdout.yaml: |
    enable: true
    inputs:
      - Type: service_docker_stdout
        Stderr: false
        Stdout: true              # only collect stdout
        IncludeK8sLabel:
          app: nginx              # choose containers with this label
    flushers:
      - Type: flusher_kafka
        Brokers:
          - kafka-cluster:9092
        Topic: access-log
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ilogtail-ds
  namespace: default
  labels:
    k8s-app: logtail-ds
spec:
  selector:
    matchLabels:
      k8s-app: logtail-ds
  template:
    metadata:
      labels:
        k8s-app: logtail-ds
    spec:
      tolerations:
        - operator: Exists        # deploy on all nodes
      containers:
        - name: logtail
          env:
            - name: ALIYUN_LOG_ENV_TAGS   # add log tags from env
              value: _node_name_|_node_ip_
            - name: _node_name_
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: _node_ip_
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP
            - name: cpu_usage_limit       # iLogtail's self monitor cpu limit
              value: "1"
            - name: mem_usage_limit       # iLogtail's self monitor mem limit
              value: "512"
          image: >-
            sls-opensource-registry.cn-shanghai.cr.aliyuncs.com/ilogtail-community-edition/ilogtail:latest
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 1000m
              memory: 1Gi
            requests:
              cpu: 400m
              memory: 384Mi
          volumeMounts:
            - mountPath: /var/run         # for container runtime socket
              name: run
            - mountPath: /logtail_host    # for log access on the node
              mountPropagation: HostToContainer
              name: root
              readOnly: true
            - mountPath: /usr/local/ilogtail/checkpoint   # for checkpoint between container restart
              name: checkpoint
            - mountPath: /usr/local/ilogtail/user_yaml_config.d   # mount config dir
              name: user-config
              readOnly: true
      dnsPolicy: ClusterFirst
      hostNetwork: true
      volumes:
        - hostPath:
            path: /var/run
            type: Directory
          name: run
        - hostPath:
            path: /
            type: Directory
          name: root
        - hostPath:
            path: /etc/ilogtail-ilogtail-ds/checkpoint
            type: DirectoryOrCreate
          name: checkpoint
        - configMap:
            defaultMode: 420
            name: ilogtail-user-cm
          name: user-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: 'nginx:latest'
          name: nginx
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
```
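One detail in the DaemonSet above may matter for the flusher failure: it combines `hostNetwork: true` with `dnsPolicy: ClusterFirst`. For hostNetwork pods, Kubernetes treats `ClusterFirst` as the node's default resolver, so a cluster-internal Service name like `kafka-cluster` will typically not resolve inside the pod. A hedged variant worth trying (only the two affected fields of the pod spec shown):

```yaml
# With hostNetwork: true, using cluster DNS requires ClusterFirstWithHostNet;
# plain ClusterFirst makes the pod inherit the node's /etc/resolv.conf instead.
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
```

Alternatively, keep the DNS policy as-is and put a node-resolvable address (broker IP or external hostname) in `Brokers`.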
```
[2022-12-05 07:55:32.433230] [info] [000011] /src/core/app_config/AppConfigBase.cpp:150 AppConfigBase AppConfigBase:success
[2022-12-05 07:55:32.433250] [info] [000011] /src/core/app_config/AppConfigBase.cpp:314 load env tags from env key:node_name|node_ip
[2022-12-05 07:55:32.433281] [info] [000011] /src/core/app_config/AppConfigBase.cpp:322 load env, key:node_name value:k8s-master
[2022-12-05 07:55:32.433284] [info] [000011] /src/core/app_config/AppConfigBase.cpp:322 load env, key:node_ip value:10.3.0.14
[2022-12-05 07:55:32.433292] [info] [000011] /src/core/app_config/AppConfigBase.cpp:379 purage container mode:true
[2022-12-05 07:55:32.433310] [info] [000011] /src/core/logtail.cpp:156 change working dir:/usr/local/ilogtail/1.3.0 fail, reason:No such file or directory
[2022-12-05 07:55:32.433326] [info] [000011] /src/core/logtail.cpp:158 change working dir:/usr/local/ilogtail/ result:0
[2022-12-05 07:55:32.433407] [info] [000011] /src/core/app_config/AppConfigBase.cpp:1117 set logtail sys conf dir:/usr/local/ilogtail/./ user local config path:/usr/local/ilogtail/./user_local_config.json user local config dir path:/usr/local/ilogtail/./user_config.d/ user local yaml config dir path:/usr/local/ilogtail/./user_yaml_config.d/
[2022-12-05 07:55:32.433446] [info] [000011] /src/core/app_config/AppConfigBase.cpp:283 load logtail config file, path:ilogtail_config.json
[2022-12-05 07:55:32.433449] [info] [000011] /src/core/app_config/AppConfigBase.cpp:284 load logtail config file, detail:{}
[2022-12-05 07:55:32.433622] [info] [000011] /src/core/app_config/AppConfigBase.cpp:592 logtail checkpoint path:/usr/local/ilogtail/checkpoint/logtail_check_point
[2022-12-05 07:55:32.433634] [info] [000011] /src/core/common/JsonUtil.cpp:138 load config from env:docker_file_cache_path value:checkpoint/docker_path_config.json
[2022-12-05 07:55:32.433637] [info] [000011] /src/core/common/JsonUtil.cpp:171 load parameter from env:docker_file_cache_path value:checkpoint/docker_path_config.json
[2022-12-05 07:55:32.433663] [info] [000011] /src/core/app_config/AppConfigBase.cpp:340 set cpu_usage_limit from env, value:1
[2022-12-05 07:55:32.433672] [info] [000011] /src/core/app_config/AppConfigBase.cpp:340 set mem_usage_limit from env, value:512
[2022-12-05 07:55:32.433678] [info] [000011] /src/core/app_config/AppConfigBase.cpp:1043 default cache size, polling total max stat:250000 per dir max stat:25000 per config max stat:25000 cache config max size:250000 modify max:25000 watch dir count max:25000 max open files limit:32765 max reader open files limit:25000
[2022-12-05 07:55:32.433681] [info] [000011] /src/core/app_config/AppConfigBase.cpp:1047 send byte per second limit:26214400 batch send interval:3 batch send size:262144
[2022-12-05 07:55:32.436558] [info] [000011] /src/core/logtail.cpp:183 IP is still empty, specified interface: try to get any available IP:10.3.0.14
[2022-12-05 07:55:32.436643] [info] [000011] /src/core/common/TimeUtil.cpp:232 get system boot time from /proc/uptime:1668743992
[2022-12-05 07:55:32.436710] [info] [000011] /src/core/config_manager/ConfigManagerBase.cpp:2138 invalid aliuid conf dir:/usr/local/ilogtail/./users error:No such file or directory
[2022-12-05 07:55:32.436759] [info] [000011] /src/core/config_manager/ConfigManagerBase.cpp:2145 recreate aliuid conf dir success:/usr/local/ilogtail/./users
[2022-12-05 07:55:32.447413] [info] [000011] /src/core/config_manager/ConfigManagerBase.cpp:2398 user yaml config file loaded:/usr/local/ilogtail/./user_yaml_config.d/cloud_stdout.yaml
[2022-12-05 07:55:32.447427] [info] [000011] /src/core/config_manager/ConfigManagerBase.cpp:2407 user yaml config removed or added, last:0 now:1
[2022-12-05 07:55:32.447448] [info] [000011] /src/core/config_manager/ConfigManagerBase.cpp:2013 load user defined id from env:default
[2022-12-05 07:55:32.447688] [info] [000011] /src/core/config_manager/ConfigYamlToJson.cpp:245 Trans yaml to json:success config_name:/usr/local/ilogtail/./user_yaml_config.d/cloud_stdout.yaml is_file_mode:false input_plugin_type:service_docker_stdout has_accelerate_processor:false accelerate_processor_plugin_type: log_split_processor: logtype:plugin user_local_json_config:{
        "metrics" :
        {
                "config#/usr/local/ilogtail/./user_yaml_config.d/cloud_stdout.yaml" :
                {
                        "enable" : true,
                        "log_type" : "plugin",
                        "plugin" :
                        {
                                "flushers" :
                                [
                                        {
                                                "detail" :
                                                {
                                                        "Brokers" :
                                                        [
                                                                "kafka-cluster:9092"
                                                        ],
                                                        "Topic" : "access-log"
                                                },
                                                "type" : "flusher_kafka"
                                        }
                                ],
                                "inputs" :
                                [
                                        {
                                                "detail" :
                                                {
                                                        "IncludeK8sLabel" :
                                                        {
                                                                "app" : "nginx"
                                                        },
                                                        "Stderr" : false,
                                                        "Stdout" : true
                                                },
                                                "type" : "service_docker_stdout"
                                        }
                                ]
                        }
                }
        }
}
[2022-12-05 07:55:32.447930] [info] [000011] /src/core/config_manager/ConfigManagerBase.cpp:1070 load user_yaml_config.d config:true file:/usr/local/ilogtail/./user_yaml_config.d/cloud_stdout.yaml now config count:1
[2022-12-05 07:55:32.448615] [info] [000011] /src/core/monitor/Monitor.cpp:537 machine cpu cores:64
[2022-12-05 07:55:32.449196] [info] [000011] /src/core/sls_control/SLSControl.cpp:65 user agent:ilogtail/1.3.0 (Linux; 4.15.0; x86_64) ip/10.3.0.14 env/K8S-Daemonset
[2022-12-05 07:55:32.449205] [info] [000011] /src/core/sender/Sender.cpp:453 Set sender queue param depend value:10
[2022-12-05 07:55:32.449450] [info] [000019] /src/core/sender/Sender.cpp:1264 SendThread:start
[2022-12-05 07:55:32.449511] [info] [000011] /src/core/plugin/LogtailPlugin.cpp:307 load plugin base, config count:1 docker env config:false dl file:/usr/local/ilogtail/libPluginAdapter.so
[2022-12-05 07:55:32.449620] [info] [000011] /src/core/plugin/LogtailPlugin.cpp:328 check plugin adapter version success, version:300
[2022-12-05 07:55:32.593219] [info] [000011] /src/core/plugin/LogtailPlugin.cpp:439 init plugin base:success
[2022-12-05 07:55:33.369049] [warning] [000011] /src/core/plugin/LogtailPlugin.cpp:84 msg:load plugin error project: logstore: config:config#/usr/local/ilogtail/./user_yaml_config.d/cloud_stdout.yaml content:{
        "flushers" :
        [
                {
                        "detail" :
                        {
                                "Brokers" :
                                [
                                        "kafka-cluster:9092"
                                ],
                                "Topic" : "access-log"
                        },
                        "type" : "flusher_kafka"
                }
        ],
        "inputs" :
        [
                {
                        "detail" :
                        {
                                "IncludeK8sLabel" :
                                {
                                        "app" : "nginx"
                                },
                                "Stderr" : false,
                                "Stdout" : true
                        },
                        "type" : "service_docker_stdout"
                }
        ]
}
 result:1
[2022-12-05 07:55:33.369252] [info] [000011] /src/core/plugin/LogtailPlugin.cpp:116 logtail plugin Resume:start
[2022-12-05 07:55:33.369358] [info] [000011] /src/core/plugin/LogtailPlugin.cpp:118 logtail plugin Resume:success
[2022-12-05 07:55:33.369416] [info] [000011] /src/core/common/DynamicLibHelper.cpp:104 load glibc dynamic library:begin
[2022-12-05 07:55:33.369454] [info] [000011] /src/core/common/DynamicLibHelper.cpp:118 load glibc dynamic library:success
[2022-12-05 07:55:33.369579] [info] [000011] /src/core/checkpoint/CheckPointManager.cpp:154 load checkpoint, version:200 file check point:0 dir check point:0
[2022-12-05 07:55:33.369639] [info] [000011] /src/core/profiler/LogIntegrity.cpp:507 no integrity file to load:/usr/local/ilogtail/logtail_integrity_snapshot.json
[2022-12-05 07:55:33.369669] [info] [000011] /src/core/profiler/LogLineCount.cpp:263 no line count file to load:/usr/local/ilogtail/logtail_line_count_snapshot.json
[2022-12-05 07:55:33.370027] [info] [000011] /src/core/logtail.cpp:282 Logtail started, appInfo:{
        "UUID" : "3186E44C-7472-11ED-A315-AE84100FC2D2",
        "build_date" : "20221202",
        "compiler" : "GCC 4.8.5",
        "git_hash" : "f900f5a7c94d9c531b9259143e40675877b59808",
        "hostname" : "k8s-master",
        "instance_id" : "3186DA9C-7472-11ED-802E-AE84100FC2D2_10.3.0.14_1670226932",
        "ip" : "10.3.0.14",
        "logtail_version" : "1.3.0 Community Edition",
        "os" : "Linux; 4.15.0-197-generic; #208-Ubuntu SMP Tue Nov 1 17:23:37 UTC 2022; x86_64",
        "update_time" : "2022-12-05 07:55:33"
}
[2022-12-05 07:55:33.370135] [info] [000011] /src/core/controller/EventDispatcherBase.cpp:588 start add existed check point events, size:0
[2022-12-05 07:55:33.370143] [info] [000011] /src/core/controller/EventDispatcherBase.cpp:602 add existed checkpoint events, size:0 cache size:0 event size:0 delete size:0
[2022-12-05 07:55:33.370265] [info] [000052] /src/core/polling/PollingModify.cpp:224 PollingModify::Polling:start
[2022-12-05 07:55:33.370491] [info] [000055] /src/core/processor/LogProcess.cpp:205 local timezone offset second:0
```
```
2022-12-05 07:55:32 [INF] [plugin_export.go:250] [setGCPercentForSlowStart] set startup GC percent from 30 to 20
2022-12-05 07:55:32 [INF] [plugin_export.go:234] [func1] init plugin base, version:0.1.0
2022-12-05 07:55:32 [INF] [global_config.go:55] [func1] load global config:{
        "HostIP" : "10.3.0.14",
        "Hostname" : "k8s-master",
        "LogtailSysConfDir" : "/usr/local/ilogtail/./"
}
2022-12-05 07:55:32 [INF] [plugin_manager.go:96] [Init] init plugin, local env tags:[node_name k8s-master node_ip 10.3.0.14]
2022-12-05 07:55:32 [INF] [checkpoint_manager.go:96] [Init] init checkpoint:
2022-12-05 07:55:32 [INF] [logstore_config.go:799] [loadBuiltinConfig] load built-in config statistics, config name: shennong_log_profile, logstore: logtail_plugin_profile
2022-12-05 07:55:32 [INF] [logstore_config.go:799] [loadBuiltinConfig] load built-in config alarm, config name: logtail_alarm, logstore: logtail_alarm
2022-12-05 07:55:32 [INF] [logstore_config.go:788] [LoadLogstoreConfig] load config:config#/usr/local/ilogtail/./user_yaml_config.d/cloud_stdout.yaml logstore:
2022-12-05 07:55:32 [WRN] [flusher_kafka.go:65] [Init] [config#/usr/local/ilogtail/./user_yaml_config.d/cloud_stdout.yaml,] AlarmType:FLUSHER_INIT_ALARM SASL information is not set, access Kafka server without authentication:logstore :config config#/usr/local/ilogtail/./user_yaml_config.d/cloud_stdout.yaml:
2022-12-05 07:55:33 [ERR] [flusher_kafka.go:90] [Init] [config#/usr/local/ilogtail/./user_yaml_config.d/cloud_stdout.yaml,] AlarmType:FLUSHER_INIT_ALARM init kafka flusher fail, error:kafka: client has run out of available brokers to talk to (Is your cluster reachable?) logstore: config:config#/usr/local/ilogtail/./user_yaml_config.d/cloud_stdout.yaml
2022-12-05 07:55:33 [ERR] [plugin_export.go:92] [LoadConfig] AlarmType:CONFIG_LOAD_ALARM load config error, project: logstore: config:config#/usr/local/ilogtail/./user_yaml_config.d/cloud_stdout.yaml error:kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
2022-12-05 07:55:33 [INF] [plugin_export.go:154] [Resume] Resume:start
2022-12-05 07:55:33 [INF] [logstore_config.go:146] [Start] [shennong_log_profile,logtail_plugin_profile] config start:begin
2022-12-05 07:55:33 [INF] [logstore_config.go:184] [Start] [shennong_log_profile,logtail_plugin_profile] config start:success
2022-12-05 07:55:33 [INF] [logstore_config.go:146] [Start] [logtail_alarm,logtail_alarm] config start:begin
2022-12-05 07:55:33 [INF] [logstore_config.go:184] [Start] [logtail_alarm,logtail_alarm] config start:success
2022-12-05 07:55:33 [INF] [checkpoint_manager.go:115] [Resume] checkpoint:Resume
2022-12-05 07:55:33 [INF] [plugin_export.go:162] [Resume] Resume:success
2022-12-05 07:55:33 [INF] [metric_wrapper.go:42] [Run] [shennong_log_profile,logtail_plugin_profile] start run metric :&{0xc0007cae40}
2022-12-05 07:55:33 [INF] [metric_wrapper.go:42] [Run] [logtail_alarm,logtail_alarm] start run metric :&{0xc0007cafc0}
```
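The decisive line is the `flusher_kafka` init error: `kafka: client has run out of available brokers to talk to (Is your cluster reachable?)` is the message the underlying Go Kafka client emits when it cannot establish a connection to any address listed in `Brokers`, so the failure is at broker bootstrap, not during message production. A minimal diagnostic sketch to run inside the ilogtail container, separating a DNS failure from a TCP failure (`check_broker` is a hypothetical helper written for this issue, not part of iLogtail):

```python
import socket

def check_broker(host: str, port: int, timeout: float = 3.0) -> str:
    """Classify why a broker endpoint is unreachable: 'dns', 'tcp', or 'ok'."""
    try:
        # DNS step: fails for cluster-only Service names when the pod
        # is using the node's resolver (hostNetwork + ClusterFirst).
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        return "dns"
    try:
        # TCP step: name resolved, try the actual broker port.
        with socket.create_connection((ip, port), timeout=timeout):
            return "ok"
    except OSError:
        return "tcp"

# Usage inside the ilogtail pod, e.g.:
#   check_broker("kafka-cluster", 9092)
# "dns" here would point at the resolver, not at Kafka itself.
```

If this returns `"dns"` inside the pod while `kafka-cluster` resolves fine from an ordinary pod, the problem is name resolution under `hostNetwork: true` rather than the Kafka cluster.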