ELK Log Management
ELK is a complete log collection and visualization solution from Elastic; the name is an acronym for three open-source products: Elasticsearch, Logstash, and Kibana.
Problems it addresses: log volumes are large and slow to search, and in a clustered deployment, troubleshooting through logs means logging in to each server and inspecting files one by one.
A simple ELK architecture.
Deployment environment: CentOS 7.6
1. Elasticsearch (7.16.2)
Introduction: Elasticsearch is a Lucene-based search server. It provides a distributed, multi-tenant full-text search engine with a RESTful web interface. Elasticsearch is developed in Java and released as open source under the Apache License, and is a popular enterprise-grade search engine. Version: 7.16.2 Download: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.2-linux-x86_64.tar.gz
1.1. Installation and startup:
Elasticsearch cannot be started as root, so first create a dedicated user: adduser es. A Java environment is required. Create the data and log directories and grant ownership to the es user:
mkdir -p /data/elasticsearch/data
mkdir -p /data/elasticsearch/logs
chown -R es:es ${elasticsearchHome}
chown -R es:es /data/elasticsearch/data
chown -R es:es /data/elasticsearch/logs
Edit the configuration file: vim ${elasticsearchHome}/config/elasticsearch.yml
cluster.name: my-application
node.name: node-1
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200
Set passwords. The built-in accounts are: elastic, apm_system, kibana, kibana_system, logstash_system, beats_system, remote_monitoring_user. Edit the configuration file: vim ${elasticsearchHome}/config/elasticsearch.yml
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
Restart Elasticsearch, then set the passwords interactively:
./elasticsearch-setup-passwords interactive
Note: once a password is set on Elasticsearch, Kibana must be configured with an account and password as well, otherwise it cannot log in.
Start: sudo -u es /usr/package/elasticsearch-7.16.2/bin/elasticsearch -d
Tail the startup log: tail -n 200 -f /data/elasticsearch/logs/my-application.log
Verify: open http://ip:9200/
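If security is enabled, the verification request also needs credentials; a minimal check from the shell (the password placeholder is whatever was chosen during setup-passwords):
curl -u elastic:<your-password> http://127.0.0.1:9200/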
1.2. Tuning:
JVM heap size: by default, Elasticsearch automatically sets the JVM heap size based on a node's roles and total memory. The default sizing is recommended for most production environments.
To override it, edit ${elasticsearchHome}/config/jvm.options:
-Xms2g
-Xmx2g
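On 7.x it is generally safer to put heap overrides in a dedicated file under config/jvm.options.d/ rather than editing jvm.options itself, so upgrades do not overwrite them; a minimal sketch (the file name heap.options is arbitrary):
cat > ${elasticsearchHome}/config/jvm.options.d/heap.options <<'EOF'
-Xms2g
-Xmx2g
EOF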
1.3. Common issues:
1. max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144] (the limit on how many VMAs, virtual memory areas, a single process may own). Fix: vim /etc/sysctl.conf and append at the end: vm.max_map_count=262144
Then apply it with sysctl -p.
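To confirm the kernel picked up the new value:
sysctl vm.max_map_count
The expected output is vm.max_map_count = 262144.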
2. bootstrap check failure [1] of [1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535] (the per-process limit on simultaneously open files is too low; the current hard and soft limits can be checked with the two commands below):
ulimit -Hn
ulimit -Sn
Fix: vim /etc/security/limits.conf and append:
* soft nofile 65536
* hard nofile 65536
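The new limits only apply to sessions started after the change; verify from a fresh shell for the service user (using sudo -u es here is an assumption about how that user is reached):
sudo -u es bash -c 'ulimit -Hn; ulimit -Sn'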
2. Kibana (7.16.2)
Introduction: Kibana is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. You can do anything from tracking query load to understanding how requests flow through your applications.
It can also be used for monitoring server resources, similar in function to Grafana; combined with Metricbeat it can monitor host metrics. Version: 7.16.2
Download: https://www.elastic.co/cn/downloads/kibana
2.1. Installation
Extract the tar.gz archive.
Edit the configuration (${kibanaHome}/config/kibana.yml):
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://127.0.0.1:9200"]
elasticsearch.username: "kibana_system" # comment out if no credentials are configured
elasticsearch.password: "123456"
i18n.locale: "zh-CN" # Chinese admin UI; the default is English
Grant ownership: chown -R es:es ${kibanaHome}
Create the log directory: mkdir -p /data/kibana/log
Set the account and password (can be skipped if Elasticsearch has no credentials); install the plugin:
Start: sudo -u es /usr/package/kibana-7.16.2-linux-x86_64/bin/kibana --allow-root > /data/kibana/log/kibana.log &
Stop:
kill -9 `lsof -i :5601 -t`
Verify: open http://ip:5601/
2.2. Using Kibana
Log search: first ship the logs with a collection tool (Logstash, Filebeat, etc.), then create an index pattern in the Kibana web UI under Management — Stack Management — Kibana — Index Patterns.
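Before creating the index pattern, it helps to confirm that the target indices actually exist; for the daily indices produced by the Logstash configurations later in this document (the qgzhdc-* names come from those configurations):
curl 'localhost:9200/_cat/indices/qgzhdc-*?v'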
2.3. Common issues
1. Startup error: TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block. Likely cause: the server hosting Elasticsearch is low on disk space; adding capacity resolves it.
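After disk space is freed, the read-only block may still need to be cleared manually; a sketch using the standard index-settings API (add -u elastic:<password> if security is enabled):
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_all/_settings' -d '{"index.blocks.read_only_allow_delete": null}'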
3. Logstash (7.16.2)
Introduction: Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash".
Logstash event processing has three stages: inputs → filters → outputs. It is a tool that receives, processes, and forwards logs. It supports system logs, web server logs, error logs, application logs — in short, any log type that can be emitted.
Version: 7.16.2 Download: https://www.elastic.co/cn/downloads/logstash Documentation: https://www.elastic.co/guide/en/logstash/7.16/getting-started-with-logstash.html
3.1. Installation
Edit the configuration file: vim ${LogstashHome}/config/logstash.yml
The pipeline file ${LogstashHome}/config/logstash.conf must also exist, otherwise startup fails. Reference configurations follow.
Configuration for collecting server log files directly with Logstash (note: start_position => "beginning" only applies the first time a file is seen; on later runs the file input resumes from its sincedb position):
input {
  file {
    path => "/root/logs/qgzhdc-px-data-node/common-error.log"
    type => "qgzhdc-common-error-data"
    start_position => "beginning"
    # start_interval => "2"
  }
  file {
    path => "/root/logs/qgzhdc-statistics-node/common-error.log"
    type => "qgzhdc-common-error-statistics"
    start_position => "beginning"
  }
}
output {
  if [type] == "qgzhdc-common-error-data" {
    elasticsearch {
      hosts => ["172.16.100.156:9200"]
      index => "qgzhdc-common-error-data-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "qgzhdc-common-error-statistics" {
    elasticsearch {
      hosts => ["172.16.100.156:9200"]
      index => "qgzhdc-common-error-statistics-%{+YYYY.MM.dd}"
    }
  }
}
Collecting logs with Filebeat and processing them in Logstash:
input {
  beats {
    port => 5044
    type => "filebeat"
    client_inactivity_timeout => 36000
  }
}
filter {
  # extract the timestamp embedded in the log line
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:qgzhdc_log_create_time}" }
  }
}
output {
  if [type] == "filebeat" {
    elasticsearch {
      user => "elastic"
      password => "123456"
      hosts => ["172.16.100.156:9200"]
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
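Either pipeline file can be validated before starting; --config.test_and_exit is a stock Logstash flag that parses the configuration and exits:
/usr/package/logstash-7.16.2/bin/logstash -f /usr/package/logstash-7.16.2/config/logstash.conf --config.test_and_exit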
3.2. Startup
Default port: 9600. Start:
/usr/package/logstash-7.16.2/bin/logstash -f /usr/package/logstash-7.16.2/config/logstash.conf &
Stop: ps -ef | grep logstash, then kill -9 <PID> with the process ID found.
Alternatively, install Logstash as a service and manage it with systemctl:
- Edit /usr/package/logstash-7.16.2/config/startup.options
# Set a home directory
LS_HOME=/usr/package/logstash-7.16.2
# logstash settings directory, the path which contains logstash.yml
LS_SETTINGS_DIR=/usr/package/logstash-7.16.2/config/
# Arguments to pass to logstash
LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"
# Arguments to pass to java
LS_JAVA_OPTS=""
# pidfiles aren't used the same way for upstart and systemd; this is for sysv users.
LS_PIDFILE=/var/run/logstash.pid
# user and group id to be invoked as
LS_USER=root
LS_GROUP=root
# Enable GC logging by uncommenting the appropriate lines in the GC logging
# section in jvm.options
LS_GC_LOG_FILE=/var/log/logstash/gc.log
# Open file limit
LS_OPEN_FILES=16384
# Nice level
LS_NICE=19
# Change these to have the init script named and described differently
# This is useful when running multiple instances of Logstash on the same
# physical box or vm
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"
# If you need to run a command or script before launching Logstash, put it
# between the lines beginning with `read` and `EOM`, and uncomment those lines.
###
## read -r -d '' PRESTART << EOM
## EOM
Run ${logstashHome}/bin/system-install
Manage the service with systemctl:
Start: systemctl start logstash
Stop: systemctl stop logstash
Status: systemctl status logstash -l
Note: there is currently an issue where the systemctl service does not use the intended pipeline file; fix it by editing the ExecStart line in /etc/systemd/system/logstash.service (vim /etc/systemd/system/logstash.service). The complete unit file follows.
[Unit]
Description=logstash
[Service]
Type=simple
User=root
Group=root
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/package/logstash-7.16.2/bin/logstash "--path.settings" "/usr/package/logstash-7.16.2/config/" -f "/usr/package/logstash-7.16.2/config/logstash.conf"
Restart=always
WorkingDirectory=/usr/package/logstash-7.16.2
Nice=19
LimitNOFILE=16384
# When stopping, how long to wait before giving up and sending SIGKILL?
# Keep in mind that SIGKILL on a process can cause data loss.
TimeoutStopSec=infinity
[Install]
WantedBy=multi-user.target
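After editing the unit file, reload systemd so the change takes effect:
systemctl daemon-reload
systemctl restart logstash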
4. Filebeat (7.16.2)
Introduction: Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing.
4.1. Installation
Version: 7.16.2 Download:
Configuration (filebeat.yml):
# ============================== Filebeat inputs ===============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /root/logs/qgzhdc-px-data-node/qgzhdc-px-data-node-web.log.2022-02-28

  # fields: custom fields and values attached to every event
  fields:
    qgzhdc_project_name: qgzhdc-px-data-node
    qgzhdc_hostip: myiptest

- type: filestream
  enabled: true
  paths:
    - /root/logs/qgzhdc-px-import-node/common-error.log.2022-03-01
  fields:
    qgzhdc_project_name: qgzhdc-px-import-node
    qgzhdc_hostip: myiptest

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d
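Note that the sample above has no output section; to feed the Logstash beats input on port 5044 configured earlier, a stanza such as the following is needed (the host and port are taken from this document's Logstash config; only one output may be enabled at a time):
output.logstash:
  # assumption: ship to the Logstash beats input configured in section 3
  hosts: ["172.16.100.156:5044"]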
4.2. Startup
./filebeat -e
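The configuration and the connection to the configured output can be checked first; test config and test output are stock Filebeat subcommands:
./filebeat test config
./filebeat test output
The -e flag logs to stderr and keeps Filebeat in the foreground; to keep it running after logout, a plain nohup works:
nohup ./filebeat -e > /dev/null 2>&1 &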
Appendix:
elasticsearch-head, an Elasticsearch visualization tool: http://mobz.github.io/elasticsearch-head
elasticsearch-head: plugin installation
Versions 0.x, 1.x, and 2.x support installing it as an Elasticsearch plugin:
./elasticsearch-plugin install -h lists the plugins that can be installed.
1. elasticsearch/bin/plugin -install mobz/elasticsearch-head
2. Start Elasticsearch.
3. Open http://localhost:9200/_plugin/head/
From 5.x on, plugin installation is no longer supported; head has to run as a separate service:
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start
Open http://localhost:9100/
Common Elasticsearch operations:
Check cluster health: curl 'localhost:9200/_cat/health?v'
List all nodes: curl 'localhost:9200/_cat/nodes?v'
List all indices: curl 'localhost:9200/_cat/indices?v'
Create an index: curl -XPUT 'localhost:9200/yangzhtest_index?pretty'
Insert a document, here {"name":"yangzh"}: curl -XPUT 'localhost:9200/yangzhtest_index/external/1?pretty' -d '{"name":"yangzh"}'
The Content-Type request header is required, otherwise the call fails:
curl -H "Content-Type:application/json" -XPUT 'localhost:9200/yangzhtest_index/external/1?pretty' -d '{"name":"yangzh"}'
Get a document: curl -XGET 'http://localhost:9200/yangzhtest_index/external/1?pretty'
Delete a document: curl -XDELETE 'http://localhost:9200/yangzhtest_index/external/1?pretty'
Delete indices: curl -XDELETE http://localhost:9200/qgzhdc-* …