log4j2
https://github.com/mp911de/lo…
log4j2 can ship logs to logstash in several ways. One is the socket appender, whose drawback is that it does not reconnect automatically after the connection drops. The second way is to use logstash-gelf:
<dependency>
    <groupId>biz.paluch.logging</groupId>
    <artifactId>logstash-gelf</artifactId>
    <version>1.11.1</version>
</dependency>
log4j2 configuration
<Gelf name="gelf" host="udp:192.168.77.205" port="12201" version="1.1" extractStackTrace="true"
      filterStackTrace="true" mdcProfiling="true" includeFullMdc="true" maximumMessageSize="8192"
      originHost="%host{fqdn}" additionalFieldTypes="fieldName1=String,fieldName2=Double,fieldName3=Long">
    <Field name="timestamp" pattern="%d{dd MMM yyyy HH:mm:ss,SSS}" />
    <Field name="level" pattern="%level" />
    <Field name="simpleClassName" pattern="%C{1}" />
    <Field name="className" pattern="%C" />
    <Field name="server" pattern="%host" />
    <Field name="server.fqdn" pattern="%host{fqdn}" />
    <!-- This is a static field -->
    <Field name="fieldName2" literal="fieldValue2" />
    <!-- This is a field using MDC -->
    <Field name="mdcField2" mdc="mdcField2" />
    <DynamicMdcFields regex="mdc.*" />
    <DynamicMdcFields regex="(mdc|MDC)fields" />
</Gelf>
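The <Gelf> element above is only the appender definition; it still has to be referenced from a logger before any events are shipped. A minimal log4j2.xml sketch wrapping it (the root logger level here is an assumption):

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <!-- the <Gelf name="gelf" ...> appender from above goes here -->
    </Appenders>
    <Loggers>
        <!-- send everything at info and above to the GELF appender -->
        <Root level="info">
            <AppenderRef ref="gelf"/>
        </Root>
    </Loggers>
</Configuration>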
logstash
The configuration is shown below. A few points:

- The server field has to be removed, otherwise importing the events into Elasticsearch runs into problems.
- The message field is unpacked as JSON.
- The Elasticsearch index name can use variables.
input {
  gelf {
    port => 12201
  }
}
filter {
  json {
    source => "message"
    add_field => { "type" => "%{dtype}" }
    # remove fields that cause problems when importing into Elasticsearch
    remove_field => [ "server", "server.fqdn", "timestamp" ]
  }
}
output {
  stdout { codec => rubydebug }
  # only index events whose message parsed as valid JSON
  if "_jsonparsefailure" not in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash-%{dtype}-%{action}"
    }
  }
}
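For the index name logstash-%{dtype}-%{action} to resolve, the logged message must be a JSON string that actually contains dtype and action fields; otherwise logstash leaves the literal %{dtype} in the index name. A minimal Java sketch, assuming hypothetical dtype/action field names and an mdcField2 MDC entry matching the appender configuration above:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class GelfLoggingExample {

    private static final Logger LOGGER = LogManager.getLogger(GelfLoggingExample.class);

    public static void main(String[] args) {
        // MDC entries matching the DynamicMdcFields regex "mdc.*" above are
        // forwarded as extra GELF fields.
        ThreadContext.put("mdcField2", "some-value");

        // The logstash json filter parses the GELF message as JSON, so the
        // application logs a JSON string; "dtype" and "action" are assumed
        // field names that feed the index pattern logstash-%{dtype}-%{action}.
        LOGGER.info("{\"dtype\":\"order\",\"action\":\"create\",\"orderId\":12345}");

        ThreadContext.clearAll();
    }
}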