Contents
Preface
1. Official download links: Kibana, Elasticsearch, Logstash
2. Edit the configuration files
3. Startup
Preface
This post sets up an ELK stack for log search, using version 7.0.0. The setup is quick and straightforward. A few notes: build it in a virtual machine rather than a Docker container, and make sure you have root on the server — having root shortens troubleshooting time considerably. In this version Kibana can still be run directly as root; you only need to create a dedicated es user for Elasticsearch. The overall idea: Logstash reads data from files and from a Kafka topic into Elasticsearch, and Kibana provides the graphical search front end.
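Elasticsearch refuses to start as root, which is why the es user mentioned above is needed. A minimal sketch of creating it (the install path /opt/elasticsearch-7.0.0 is an assumption — use wherever you unpacked the tarball):

```shell
# Create a dedicated user for Elasticsearch
# (Kibana and Logstash can still run as root in 7.0.0)
useradd es
# Hand the unpacked Elasticsearch directory over to that user,
# otherwise it cannot write logs/ and data/
chown -R es:es /opt/elasticsearch-7.0.0
```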
2. Edit the configuration files
Change the following lines in elasticsearch.yml:
network.host: 10.18.11.148
#
# Set a custom port for HTTP:
#
http.port: 9200
cluster.initial_master_nodes: ["node-1"]
http.cors.enabled: true        # add this line
http.cors.allow-origin: "*"    # add this line
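After restarting Elasticsearch, you can check that it is listening and that the CORS lines took effect (IP and port taken from the config above):

```shell
# Should return a JSON banner with the cluster name and version number
curl http://10.18.11.148:9200
# Simulate a cross-origin request; because of the two CORS lines added above,
# the response headers should include Access-Control-Allow-Origin
curl -i -H "Origin: http://example.com" http://10.18.11.148:9200
```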
Change the following lines in kibana.yml:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "10.18.11.148"
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://10.18.11.148:9200"]
Logstash (create a new logstash.conf; you will point to this file when starting Logstash):
input {
  kafka {
    bootstrap_servers => "10.18.14.170:9092" # can also be a Kafka cluster, e.g. "192.168.149.101:9092,192.168.149.102:9092,192.168.149.103:9092"
    group_id => "host_log"
    # client_id => "logstash1" # note: when several Logstash instances consume the same topics, each needs a distinct client_id
    # auto_offset_reset => "latest"
    topics => ["yourtopic"]
    # add_field => {"logs_type" => "host"} # value type is a hash; optional
    # codec => json { charset => "UTF-8" }
  }
  file {
    path => "/var/log/messages-20210802"
    start_position => "beginning"
    type => "nginx_access_log"
  }
}
output {
elasticsearch {
hosts => ["http://10.18.11.148:9200"]
index => "test-%{+YYYY.MM.dd}"
#user => "elastic"
#password => "changeme"
}
stdout {
codec => rubydebug
}
}
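Before starting Logstash for real, the config above can be syntax-checked with the --config.test_and_exit flag:

```shell
# Parses logstash.conf and exits without starting the pipeline;
# prints "Configuration OK" if the file is valid
bin/logstash -f config/logstash.conf --config.test_and_exit
```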
3. Startup
bin/elasticsearch (run as the es user, not root)
bin/kibana
bin/logstash -f config/logstash.conf
Then open http://<IP of the machine running Kibana>:5601 in a browser.
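The three bin/ commands above all run in the foreground. To keep the services alive after the shell closes, one common sketch (paths assume you are inside each product's install directory):

```shell
# Elasticsearch has a native daemon mode; run it as the es user
su es -c "bin/elasticsearch -d"
# Kibana and Logstash have no daemon flag in 7.0.0, so background them with nohup
nohup bin/kibana > kibana.log 2>&1 &
nohup bin/logstash -f config/logstash.conf > logstash.log 2>&1 &
```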