
Logstash startup error

Alibaba Cloud server: CentOS Linux release 7.4.1708 (Core)

Java(TM) SE Runtime Environment (build 1.8.0_151-b12)

logstash-6.5.0

-------------------------------------------------------------------------------------------

With stdin input and stdout output, Logstash starts normally with no problems.

With a filebeat input and stdout output, an error appears in the log at startup, although the content shipped by filebeat is still received correctly. The config file is as follows:

input {
    beats {
        port => 9011
    }
}

filter {
}

output {
    stdout {
        codec => rubydebug
    }
}

The listening port is set to 9011.

Start Logstash:

./bin/logstash -f config/filebeat_std.conf

The log after startup:

Sending Logstash logs to /usr/share/kkitsupadm/elk/logstash-6.4.3/logs which is now configured via log4j2.properties

[2018-11-16T16:17:24,741][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified

[2018-11-16T16:17:25,664][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.3"}

[2018-11-16T16:17:29,219][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}

[2018-11-16T16:17:29,830][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:9011"}

[2018-11-16T16:17:29,866][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5aeab466 run>"}

[2018-11-16T16:17:29,971][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

[2018-11-16T16:17:30,055][INFO ][org.logstash.beats.Server] Starting server on port: 9011

[2018-11-16T16:17:30,619][INFO ][org.logstash.beats.BeatsHandler] [local: 0.0.0.0:9011, remote: undefined] Handling exception: Connection reset by peer

[2018-11-16T16:17:30,628][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.

java.io.IOException: Connection reset by peer

at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0_151]

at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[?:1.8.0_151]

at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[?:1.8.0_151]

at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[?:1.8.0_151]

at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[?:1.8.0_151]

at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]

at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1108) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]

at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:345) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]

at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:126) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]

at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]

at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]

at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]

at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [netty-all-4.1.18.Final.jar:4.1.18.Final]

at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-all-4.1.18.Final.jar:4.1.18.Final]

at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.18.Final.jar:4.1.18.Final]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]

[2018-11-16T16:17:30,637][INFO ][org.logstash.beats.BeatsHandler] [local: 0.0.0.0:9011, remote: undefined] Handling exception: Connection reset by peer

[2018-11-16T16:17:30,638][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
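The repeated "Connection reset by peer" entries above are typical of something opening a bare TCP connection to the beats port and then dropping it, for example a load-balancer health check. The beats input expects the Lumberjack protocol, so a plain connect-and-close gets logged as an exception even though no filebeat data is lost. A minimal sketch of such a probe (hypothetical helper, Python standard library only):

```python
import socket

def probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Open and immediately close a bare TCP connection, the way a
    load-balancer health check does. Returns True if the TCP handshake
    succeeded. Against a Logstash beats input (which expects the
    Lumberjack protocol), each such probe shows up in the Logstash log
    as "Connection reset by peer" even though nothing is wrong."""
    try:
        # create_connection performs the TCP handshake; the `with`
        # block closes the socket right away without sending any data.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or otherwise unreachable.
        return False
```

Running `probe("127.0.0.1", 9011)` against the Logstash host while tailing its log should reproduce the exception if this theory is right.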


But when I start it on another server that we set up ourselves, it starts normally. The log:

Sending Logstash logs to /opt/elk/logstash-6.4.3/logs which is now configured via log4j2.properties

[2018-11-16T16:11:54,903][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified

[2018-11-16T16:11:57,971][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.3"}

[2018-11-16T16:12:12,528][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}

[2018-11-16T16:12:13,325][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:9011"}

[2018-11-16T16:12:13,439][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x186a652 run>"}

[2018-11-16T16:12:13,836][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

[2018-11-16T16:12:14,071][INFO ][org.logstash.beats.Server] Starting server on port: 9011

[2018-11-16T16:12:15,395][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}        


4 Answers

You'd have to look into that; beats normally defaults to port 5044.

#1

superychen (asker)

I don't think it's port-related. The server has no dedicated public IP and sits behind a load balancer; not sure whether that matters. Sigh.
2018-11-19
#2

superychen (asker)

Even after port 9012 was opened in the console it stopped working too. I suspect the load balancer...
2018-11-19
#3

rockybean replying to superychen (asker)

Then just test by connecting directly to the load balancer and directly to the Logstash port; that would tell you.
2018-11-19

Coincidentally, I'm also using Alibaba Cloud's load balancer and hit the same problem. A setup built locally doesn't have this issue.


May I ask how this was solved?



After switching the listening port to 9012, the error stopped. I suspect something was sending data to port 9011 that Logstash couldn't parse, but I don't know the exact cause yet.
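The workaround above is a one-line change in the beats input block; a sketch of the adjusted config (port 9012, per the comment, everything else unchanged):

```
input {
    beats {
        port => 9012
    }
}
```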

