
ELK Deployment and Usage

ELK Log Management

ELK is a complete log collection and visualization solution from Elastic. The name is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana.

Problems it solves:
Log volumes are large and queries against raw files are slow.
In a clustered deployment, troubleshooting via logs otherwise means logging in to each server and checking the files one by one.

Figure: a simple ELK architecture.

Deployment environment: CentOS 7.6

Elasticsearch (7.16.2)

Introduction: Elasticsearch is a search server built on Lucene. It provides a distributed, multi-tenant full-text search engine with a RESTful web interface. It is written in Java, released as open source under the Apache license, and is a popular enterprise search engine.
Version: 7.16.2
Download:
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.2-linux-x86_64.tar.gz
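
A minimal download-and-extract sketch, assuming the package is unpacked under /usr/package (the directory used by the startup commands later in this article):

# download and unpack Elasticsearch 7.16.2
mkdir -p /usr/package && cd /usr/package
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.2-linux-x86_64.tar.gz
tar -xzf elasticsearch-7.16.2-linux-x86_64.tar.gz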

1.1. Installation and startup:

Elasticsearch must not be started as root, so first create a dedicated user: adduser es
A Java runtime must also be installed.
Create the data and log directories and grant the es user ownership of them.

mkdir -p /data/elasticsearch/data
mkdir -p /data/elasticsearch/logs
chown -R es:es ${elasticsearchHome}
chown -R es:es /data/elasticsearch/data
chown -R es:es /data/elasticsearch/logs

Edit the configuration file:
vim ${elasticsearchHome}/config/elasticsearch.yml

cluster.name: my-application
node.name: node-1
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200
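
Note: with network.host bound to a non-loopback address, Elasticsearch 7.x enforces its production bootstrap checks and expects discovery settings. For the single-node setup described here, a minimal addition to elasticsearch.yml would be (assuming you really do run only one node; for a multi-node cluster use discovery.seed_hosts and cluster.initial_master_nodes instead):

discovery.type: single-node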

Set passwords:
Built-in accounts: elastic, apm_system, kibana, kibana_system, logstash_system, beats_system, remote_monitoring_user
Edit the configuration file: vim ${elasticsearchHome}/config/elasticsearch.yml

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true

./elasticsearch-setup-passwords interactive

Note: once a password has been configured for Elasticsearch, Kibana needs the plugin installed and an account and password set, otherwise you will not be able to log in.

Start command:
sudo -u es /usr/package/elasticsearch-7.16.2/bin/elasticsearch -d
Follow the startup log in real time:
tail -n 200 -f /data/elasticsearch/logs/my-application.log
Verify:
Access http://ip:9200/ directly.
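
A quick command-line check (a sketch; the password is whatever you set with elasticsearch-setup-passwords):

# without security enabled
curl http://localhost:9200/
# with security enabled, pass the elastic user's credentials
curl -u elastic:yourpassword http://localhost:9200/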

1.2. Tuning:

JVM heap size:
By default, Elasticsearch sets the JVM heap size automatically based on the node's roles and total memory, and the defaults are recommended for most production environments.

If you do need to change it, edit the following (keeping -Xms and -Xmx equal):
${elasticsearchHome}/config/jvm.options
-Xms2g
-Xmx2g
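
To confirm the heap actually in effect after a restart, the nodes info API can be queried (add -u elastic:... if security is enabled):

# heap_init_in_bytes / heap_max_in_bytes appear under jvm.mem for each node
curl 'http://localhost:9200/_nodes/jvm?pretty'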

1.3. Common problems:

  1. max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
    (Raise the number of VMAs (virtual memory areas) a single process may own.)
    Fix:
    vim /etc/sysctl.conf and append:
    vm.max_map_count=262144

Apply it with sysctl -p
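
To confirm the new value is active:

sysctl vm.max_map_count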

2. bootstrap check failure [1] of [1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
(The per-process open-file limit is too small; the current hard and soft limits can be checked with the two commands below.)

ulimit -Hn
ulimit -Sn

Fix:
vim /etc/security/limits.conf

* soft nofile 65536
* hard nofile 65536

(The first column is the user the limit applies to; * means all users, or restrict it to the es user. Log out and back in for the new limits to take effect.)

Kibana (7.16.2)

Introduction: Kibana is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. You can do anything from tracking query load to understanding how requests flow through your applications.

It can also be used to monitor server resources, similar to Grafana; combined with Metricbeat it can track host metrics.
Version: 7.16.2

Download: https://www.elastic.co/cn/downloads/kibana

2.1. Installation

Extract the tar.gz archive.

Edit the configuration (config/kibana.yml):

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://127.0.0.1:9200"]
elasticsearch.username: "kibana_system"   # comment out if no account is configured
elasticsearch.password: "123456"
i18n.locale: "zh-CN"   # Chinese UI; the default is English

Grant permissions:
chown -R es:es ${kibanaHome}

Create the log directory:
mkdir -p /data/kibana/log

Set the account and password (skip this if Elasticsearch has no credentials configured):
Install the plugin:

Start: sudo -u es /usr/package/kibana-7.16.2-linux-x86_64/bin/kibana --allow-root > /data/kibana/log/kibana.log &
Stop:

kill -9 `lsof -i :5601 -t` 

Access it directly at http://ip:5601/

2.2. Using Kibana

Log queries:
First collect the logs with a shipper (Logstash, Filebeat, etc.). Then, in the Kibana web UI, go to Management → Stack Management → Kibana → Index Patterns and create an index pattern.
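
Before creating the index pattern, it can help to confirm the expected indices actually exist (the qgzhdc-* prefix matches the Logstash outputs configured later; add credentials if security is enabled):

curl 'http://localhost:9200/_cat/indices/qgzhdc-*?v'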

2.3. Common problems

1. Error on startup:
TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block
Likely cause: the server running Elasticsearch is low on disk space; freeing or adding disk space resolves it.

Logstash (7.16.2)

Introduction:
Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash".

Logstash event processing has three stages: inputs → filters → outputs.
It is a tool for receiving, processing, and forwarding logs. It supports system logs, web-server logs, error logs, application logs, and in general any log type that can be emitted.

Version: 7.16.2
Download: https://www.elastic.co/cn/downloads/logstash
Documentation: https://www.elastic.co/guide/en/logstash/7.16/getting-started-with-logstash.html

3.1. Installation

Edit the configuration files:
vim ${LogstashHome}/config/logstash.yml

vim ${LogstashHome}/config/logstash.conf
The pipeline configuration file must exist, otherwise startup fails.
Reference configurations:

Collecting server log files directly with Logstash:

input {
  file {
    path => "/root/logs/qgzhdc-px-data-node/common-error.log"
    type => "qgzhdc-common-error-data"
    start_position =>"beginning"
     # start_interval =>"2"
  }
  file {
    path => "/root/logs/qgzhdc-statistics-node/common-error.log"
    type => "qgzhdc-common-error-statistics"
    start_position =>"beginning"
  }
}
output {
  if [type] == "qgzhdc-common-error-data"{
    elasticsearch {
      hosts => ["172.16.100.156:9200"]
      index => "qgzhdc-common-error-data-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "qgzhdc-common-error-statistics"{
    elasticsearch {
      hosts => ["172.16.100.156:9200"]
      index => "qgzhdc-common-error-statistics-%{+YYYY.MM.dd}"
    }
  }
}

Collecting logs with Filebeat and processing them with Logstash:

input {
 beats {
    port => 5044
    type => "filebeat"
    client_inactivity_timeout => 36000
  }
}
filter {
  # pick the timestamp out of the log message
  grok{
      match =>{"message"=>"%{TIMESTAMP_ISO8601:qgzhdc_log_create_time}"}
     }
}
output {
 if [type] == "filebeat"{
    elasticsearch {
      user => "elastic" 
      password => "123456"
      hosts => ["172.16.100.156:9200"]
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
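
Before starting, the pipeline configuration can be validated with Logstash's --config.test_and_exit flag (adjust the paths to your installation):

/usr/package/logstash-7.16.2/bin/logstash -f /usr/package/logstash-7.16.2/config/logstash.conf --config.test_and_exit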

3.2. Startup

Default port: 9600

/usr/package/logstash-7.16.2/bin/logstash -f /usr/package/logstash-7.16.2/config/logstash.conf &

Stop: run ps -ef | grep logstash, then kill -9 [the PID found]

Alternatively, install Logstash as a service and manage it with systemctl:

  1. Edit /usr/package/logstash-7.16.2/config/startup.conf
# Set a home directory
LS_HOME=/usr/package/logstash-7.16.2

# logstash settings directory, the path which contains logstash.yml
LS_SETTINGS_DIR=/usr/package/logstash-7.16.2/config/

# Arguments to pass to logstash
LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"

# Arguments to pass to java
LS_JAVA_OPTS=""

# pidfiles aren't used the same way for upstart and systemd; this is for sysv users.
LS_PIDFILE=/var/run/logstash.pid

# user and group id to be invoked as
LS_USER=root
LS_GROUP=root

# Enable GC logging by uncommenting the appropriate lines in the GC logging
# section in jvm.options
LS_GC_LOG_FILE=/var/log/logstash/gc.log

# Open file limit
LS_OPEN_FILES=16384

# Nice level
LS_NICE=19

# Change these to have the init script named and described differently
# This is useful when running multiple instances of Logstash on the same
# physical box or vm
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"

# If you need to run a command or script before launching Logstash, put it
# between the lines beginning with `read` and `EOM`, and uncomment those lines.
###
## read -r -d '' PRESTART << EOM
## EOM

Run ${logstashHome}/bin/system-install

Manage the service with systemctl:
Start: systemctl start logstash
Stop: systemctl stop logstash
Check status: systemctl status logstash -l

Note:
As generated, the systemd unit does not use the intended pipeline configuration file. Fix this by editing the ExecStart line in
vim /etc/systemd/system/logstash.service
A complete unit file follows.

[Unit]
Description=logstash

[Service]
Type=simple
User=root
Group=root
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/usr/package/logstash-7.16.2
EnvironmentFile=-/usr/package/logstash-7.16.2
ExecStart=/usr/package/logstash-7.16.2/bin/logstash "--path.settings" "/usr/package/logstash-7.16.2/config/" -f "/usr/package/logstash-7.16.2/config/logstash.conf"
Restart=always
WorkingDirectory=/usr/package/logstash-7.16.2
Nice=19
LimitNOFILE=16384

# When stopping, how long to wait before giving up and sending SIGKILL?
# Keep in mind that SIGKILL on a process can cause data loss.
TimeoutStopSec=infinity

[Install]
WantedBy=multi-user.target
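
After editing the unit file, reload systemd and restart the service:

systemctl daemon-reload
systemctl restart logstash
systemctl status logstash -l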

Filebeat (7.16.2)

Introduction: Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing.

4.1. Installation

Version: 7.16.2
Download:

Configuration (filebeat.yml):


# ============================== Filebeat inputs ===============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /root/logs/qgzhdc-px-data-node/qgzhdc-px-data-node-web.log.2022-02-28
  # fields: custom fields and their values
  fields:
    qgzhdc_project_name: qgzhdc-px-data-node
    qgzhdc_hostip: myiptest
   
- type: filestream
  enabled: true
  paths:
    - /root/logs/qgzhdc-px-import-node/common-error.log.2022-03-01
  fields:
    qgzhdc_project_name: qgzhdc-px-import-node
    qgzhdc_hostip: myiptest
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
#prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
# hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.16.100.156:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

 # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true


4.2. Startup

./filebeat -e
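
A few common ways to run it (a sketch; test config, test output, -c, and -e are standard Filebeat options, while the /data/filebeat log path is only an assumption):

# validate the configuration and the connection to the configured output
./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml

# run in the foreground, logging to stderr
./filebeat -e -c filebeat.yml

# or run in the background, capturing output to a file
nohup ./filebeat -e -c filebeat.yml > /data/filebeat/filebeat.log 2>&1 &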

Appendix:

Elasticsearch:
Elasticsearch visualization tool (elasticsearch-head):
http://mobz.github.io/elasticsearch-head

elasticsearch-head plugin installation:

For Elasticsearch 0.x, 1.x, and 2.x it can be installed directly as a plugin:
./elasticsearch-plugin install -h   # lists the plugins that can be installed

1. elasticsearch/bin/plugin -install mobz/elasticsearch-head
2. Start Elasticsearch.
3. Open http://localhost:9200/_plugin/head/

From version 5.x onward it cannot be installed as a plugin and must be started as a standalone service:
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start
open http://localhost:9100/

Common Elasticsearch operations:
Check cluster health:
curl 'localhost:9200/_cat/health?v'
List all nodes:
curl 'localhost:9200/_cat/nodes?v'
List all indices:
curl 'localhost:9200/_cat/indices?v'
Create an index:
curl -XPUT 'localhost:9200/yangzhtest_index?pretty'
Insert a document:
The document to insert is {"name":"yangzh"}
curl -XPUT 'localhost:9200/yangzhtest_index/external/1?pretty' -d '{"name":"yangzh"}'
A Content-Type header must be added, otherwise the request is rejected:
curl -H "Content-Type:application/json" -XPUT 'localhost:9200/yangzhtest_index/external/1?pretty' -d '{"name":"yangzh"}'
Get a document:
curl -XGET 'http://localhost:9200/yangzhtest_index/external/1?pretty'
Delete a document:
curl -XDELETE 'http://localhost:9200/yangzhtest_index/external/1?pretty'
Delete indices:
curl -XDELETE http://localhost:9200/qgzhdc-*
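
A simple search over the inserted document, as a sketch (the _search endpoint with a q query-string parameter is standard; add credentials if security is enabled):

curl -XGET 'http://localhost:9200/yangzhtest_index/_search?q=name:yangzh&pretty'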
