Note a pitfall when starting Logstash: if you point it at a directory, every file in that directory must be a pipeline configuration file, because Logstash will load all of them as configs.
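For example (paths below are illustrative, not from this setup):

```shell
# Point -f at a single pipeline file:
bin/logstash -f /path/to/logstash.conf

# If you pass a directory, EVERY file in it is loaded as pipeline config,
# so keep backups, READMEs, etc. out of this directory:
bin/logstash -f /path/to/conf.d/
```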
Configure the pom file
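A minimal sketch of the Maven dependency this setup needs; `logstash-logback-encoder` provides both `LogstashTcpSocketAppender` and `LogstashEncoder` used below (the version number is an assumption, check Maven Central for the latest):

```xml
<!-- JSON encoder and TCP appender for shipping logback events to logstash -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.6</version> <!-- illustrative version -->
</dependency>
```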
Configure logback
```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration debug="false">
    <springProperty scope="context" name="logPath" source="joy.logback.path"/>
    <springProperty scope="context" name="logName" source="spring.application.name"/>
    <property name="LOG_HOME" value="${logPath}/logs"/>
    <property name="LOG_NAME" value="${logName}"/>
    <!-- Console output; a RollingFileAppender would also need <file> and a rolling policy -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
        </encoder>
    </appender>
    <!-- Ship events to logstash over TCP as JSON -->
    <appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>172.16.16.13:4568</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="stash"/>
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
```
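With `LogstashEncoder`, each log event arrives on port 4568 as a single JSON line, roughly like the following (field values are illustrative):

```json
{"@timestamp":"2020-05-01T12:34:56.789+08:00","@version":"1","message":"application started","logger_name":"com.example.Demo","thread_name":"main","level":"INFO","level_value":20000}
```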
The `level` defined on `<root>` refers to the application's log level: events at INFO and above are sent to logstash (and to STDOUT).
Configure logstash
```
input {
  tcp {
    add_field => { "serverType" => "collect" }
    mode => "server"
    host => "0.0.0.0"
    port => 4568
    codec => json { charset => "UTF-8" }
  }
}

filter {
  grok {
    match => { "message" => "(?<datetime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{3}) INFO %{NUMBER:thread} --- %{SYSLOG5424SD:task} %{JAVACLASS:javaclass}\s*: %{SYSLOG5424SD:module}\s*%{GREEDYDATA:msg}" }
  }
  mutate {
    convert => ["[geoip][coordinates]", "float"]
    remove_field => ["tags", "offset", "host", "beat"]
  }
}

output {
  if [serverType] == "collect" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "collect-%{+YYYY.MM.dd}"
    }
  }
}
```
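The grok pattern above uses an Oniguruma-style named capture (`(?<datetime>...)`) for the leading timestamp. As a quick sanity check, the equivalent capture can be sketched in Python, which spells named groups `(?P<name>...)` (the sample log line is made up, not from this article):

```python
import re

# Python equivalent of the grok capture (?<datetime>\d{4}-\d{2}-\d{2}\s...)
timestamp_re = re.compile(r"(?P<datetime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{3})")

# An illustrative Spring Boot-style log line.
line = "2020-05-01 12:34:56.789  INFO 1234 --- [main] com.example.Demo : [module] started"

m = timestamp_re.search(line)
datetime_field = m.group("datetime")  # the field logstash would extract
print(datetime_field)  # → 2020-05-01 12:34:56.789
```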