Node information
hostname | IP | Software
---|---|---
elk-web | 10.0.30.30 | kibana-7.9.1, kafka-eagle-2.0.5, cerebro-0.9.4, mariadb-10.2.15
elk-kafka01 | 10.0.30.31 | zookeeper-3.4.13, kafka_2.12-1.0.0
elk-kafka02 | 10.0.30.32 | zookeeper-3.4.13, kafka_2.12-1.0.0
elk-kafka03 | 10.0.30.33 | zookeeper-3.4.13, kafka_2.12-1.0.0
elk-elasticsearch01 | 10.0.30.34 | elasticsearch-7.9.1
elk-elasticsearch02 | 10.0.30.35 | elasticsearch-7.9.1
elk-elasticsearch03 | 10.0.30.36 | elasticsearch-7.9.1
elk-logstash | 10.0.30.37 | logstash-7.9.1
·
Architecture diagram
(Architecture diagram image omitted.)
Problems encountered during setup
All servers have dual NICs: the first (eth0) carries the default route, and the second (eth1) is configured with static routes (routing-table screenshot omitted). On the first attempt, every listen address was bound to eth1. While testing the Kafka cluster, connections to ZooKeeper and Kafka were noticeably slow and occasionally failed outright. kafka-eagle then refused to start at all, and its logs were full of ZooKeeper connection timeouts. After switching every listen address to eth0, the problem disappeared.
Open question: for traffic within the same subnet, shouldn't the gateway be bypassed entirely? Why did this happen? Could it be related to the Xshell client sitting on the 172.168.20.0/24 network?
Lacking the networking background, I did not dig further into the root cause or a proper fix; if you know the answer, please leave a comment. Thanks!
·
Deploying the Kafka cluster
Official site: http://kafka.apache.org/. Kafka depends on ZooKeeper, so deploy a ZooKeeper cluster first.
·
Deploying the ZooKeeper cluster
Official site: https://zookeeper.apache.org/ Download: https://archive.apache.org/dist/zookeeper/
·
PS: in a ZooKeeper cluster, every setting except myid (which must be unique per node) can be identical across nodes.
·
1 Configure the Java environment
tar -xvf jdk-8u281-linux-x64.tar.gz -C /usr/local/
ln -vs /usr/local/jdk1.8.0_281/bin/java* /usr/bin/
·
2 Download & extract ZooKeeper
tar -xvf zookeeper-3.4.13.tar.gz -C /usr/local/
·
3 Set the JVM heap size
cd /usr/local/zookeeper-3.4.13
vim conf/java.env
export JVMFLAGS="-Xms2g -Xmx2g"
·
4 Create the configuration file (conf/zoo.cfg is what gets loaded by default at startup)
vim conf/zoo.cfg
tickTime=2000
initLimit=5
syncLimit=2
dataDir=/data/zookeeper/zkdata/
dataLogDir=/data/zookeeper/zklog/
clientPort=2181
autopurge.purgeInterval=168
server.31=10.0.30.31:2888:3888
server.32=10.0.30.32:2888:3888
server.33=10.0.30.33:2888:3888
·
5 Create the required directories
mkdir -p /data/zookeeper/zkdata/ /data/zookeeper/zklog/
·
6 Write the myid file
Note: the myid file lives in each server's data directory (dataDir) and consists of a single line containing only that machine's id. server.1's myid would contain just the text 1 and nothing else. The id must be unique within the ensemble and should be between 1 and 255.
echo 31 > /data/zookeeper/zkdata/myid    # on 10.0.30.31
echo 32 > /data/zookeeper/zkdata/myid    # on 10.0.30.32
echo 33 > /data/zookeeper/zkdata/myid    # on 10.0.30.33
·
7 Start ZooKeeper
cd /usr/local/zookeeper-3.4.13
./bin/zkServer.sh start
·
8 Verify the ZooKeeper cluster
./bin/zkServer.sh status
PS: with only the first node (myid 31) running, zkServer.sh status reports "Error contacting service. It is probably not running." because no quorum exists yet. Once the second node (myid 32) starts, it becomes the leader, and checking the first node again shows it as a follower; the third node (myid 33) then joins as a follower. If the leader later fails, the third node takes over as the new leader.
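For a quick connectivity check beyond zkServer.sh status, the 3.4 series answers the classic four-letter-word commands; a minimal sketch, assuming nc is installed and the commands have not been whitelisted away:
echo ruok | nc 10.0.30.31 2181    # a healthy server answers "imok"
echo stat | nc 10.0.30.31 2181    # shows the node's mode (leader/follower) and connected clients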
·
Installing Kafka
PS: in the Kafka cluster configuration, everything except broker.id and the listen address (listeners) can be identical across nodes; see the sketch after the config listing below.
·
1 Configure the Java environment
Steps omitted; already configured when deploying the ZooKeeper cluster.
·
2 Download & extract Kafka
tar -xvf kafka_2.12-1.0.0.tgz -C /usr/local/
·
3 Edit the configuration file
cd /usr/local/kafka_2.12-1.0.0
vim config/server.properties
broker.id=31
listeners=PLAINTEXT://10.0.30.31:9092
log.dirs=/data/kafka/kkdata
num.partitions=3
zookeeper.connect=10.0.30.31:2181,10.0.30.32:2181,10.0.30.33:2181
auto.create.topics.enable=true
delete.topic.enable=true
min.insync.replicas=1
queued.max.requests=500
default.replication.factor=2
replica.lag.time.max.ms=10000
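Since only broker.id and listeners differ per node, one way to roll the shared file out is a per-node sed. A sketch for the second broker, assuming the same paths on 10.0.30.32:
sed -i -e 's/^broker.id=.*/broker.id=32/' \
    -e 's#^listeners=.*#listeners=PLAINTEXT://10.0.30.32:9092#' \
    config/server.properties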
·
4 JVM tuning
The official recommendation is to run the latest released JDK 1.8, since older freely available versions have disclosed security vulnerabilities. If you use the G1 collector (the current default) while still on JDK 1.7, make sure you are on u51 or newer.
vim bin/kafka-server-start.sh
# Original:
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi
# Changed to:
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx4g -Xms4g -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80"
    export JMX_PORT="8888"    # kafka-eagle later pulls broker metrics over JMX
fi
·
5 Create the required directory
mkdir -p /data/kafka/kkdata
·
6 Start Kafka
./bin/kafka-server-start.sh -daemon config/server.properties
·
7 Verify the Kafka cluster
# Create a test topic
bin/kafka-topics.sh --zookeeper 10.0.30.31:2181,10.0.30.32:2181,10.0.30.33:2181 \
--create \
--partitions 3 \
--replication-factor 2 \
--topic test
# Describe it; each partition should have a leader and two in-sync replicas
bin/kafka-topics.sh --describe \
--zookeeper 10.0.30.31:2181,10.0.30.32:2181,10.0.30.33:2181 \
--topic test
Topic:test PartitionCount:3 ReplicationFactor:2 Configs:
Topic: test Partition: 0 Leader: 31 Replicas: 31,32 Isr: 31,32
Topic: test Partition: 1 Leader: 32 Replicas: 32,33 Isr: 32,33
Topic: test Partition: 2 Leader: 33 Replicas: 33,31 Isr: 33,31
# Produce a few messages
bin/kafka-console-producer.sh --broker-list 10.0.30.31:9092,10.0.30.32:9092,10.0.30.33:9092 \
--topic test
>123
>456
>789
>bye
# Consume with the old ZooKeeper-based consumer
bin/kafka-console-consumer.sh --zookeeper 10.0.30.31:2181,10.0.30.32:2181,10.0.30.33:2181 \
--from-beginning \
--topic test
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
789
456
123
bye
# Consume with the new consumer; note that messages are ordered per partition, not globally
bin/kafka-console-consumer.sh --bootstrap-server 10.0.30.31:9092,10.0.30.32:9092,10.0.30.33:9092 \
--from-beginning \
--topic test
456
789
123
bye
# Clean up
bin/kafka-topics.sh --zookeeper 10.0.30.31:2181,10.0.30.32:2181,10.0.30.33:2181 \
--delete \
--topic test
bin/kafka-topics.sh --zookeeper 10.0.30.31:2181,10.0.30.32:2181,10.0.30.33:2181 \
--list
·
Deploying the Elasticsearch cluster
See the official documentation for details.
·
Operating system tuning
·
1 Raise the open-file and thread limits
vim /etc/security/limits.conf
* soft nproc 65536
* hard nproc 65536
* soft nofile 65536
* hard nofile 65536
* - as unlimited
* - fsize unlimited
* - memlock unlimited
·
2 Disable swap
swapoff -a && sed -i '/swap/s/^.*$/#&/' /etc/fstab
·
3 Set the virtual memory map limit and TCP retransmission count
vim /etc/sysctl.conf
vm.max_map_count=262144
net.ipv4.tcp_retries2=5
sysctl -p
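A quick sanity check, as a sketch (the limits.conf changes only apply to new login sessions):
sysctl vm.max_map_count net.ipv4.tcp_retries2    # should print the values set above
ulimit -n                                        # in a fresh shell, should report 65536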
·
Installing Elasticsearch
PS: in the Elasticsearch cluster configuration, everything except node.name and the listen address (network.host) can be identical across nodes; see the sketch after the config listing below.
·
1 Configure hostname resolution
cat << "EOF" >> /etc/hosts
10.0.30.34 elk-elasticsearch01
10.0.30.35 elk-elasticsearch02
10.0.30.36 elk-elasticsearch03
EOF
·
2 Download & extract Elasticsearch
tar -xvf elasticsearch-7.9.1-linux-x86_64.tar.gz -C /usr/local/
·
3 Create the esuser user
useradd -u 9200 esuser
echo "111111" | passwd --stdin esuser
chown -R esuser:esuser /usr/local/elasticsearch-7.9.1
·
4 Generate the certificate and private key for X-Pack (run on one node only)
cd /usr/local/elasticsearch-7.9.1
./bin/elasticsearch-certutil ca
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
mkdir ./config/certs/
mv elastic-certificates.p12 ./config/certs/
scp -r ./config/certs/ esuser@elk-elasticsearch02:/usr/local/elasticsearch-7.9.1/config
scp -r ./config/certs/ esuser@elk-elasticsearch03:/usr/local/elasticsearch-7.9.1/config
·
5 Edit the configuration file
vim config/elasticsearch.yml
cluster.name: vlan30-log-collection-system
node.name: elk-elasticsearch01
path.data: /data/elasticsearch/es_data
path.logs: /data/elasticsearch/logs
network.host: 10.0.30.34
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["10.0.30.34:9300", "10.0.30.35:9300", "10.0.30.36:9300"]
cluster.initial_master_nodes: ["10.0.30.34:9300"]
bootstrap.memory_lock: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
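Since only node.name and network.host differ, the same sed trick works here. A sketch for the second node, assuming the identical layout on 10.0.30.35:
sed -i -e 's/^node.name:.*/node.name: elk-elasticsearch02/' \
    -e 's/^network.host:.*/network.host: 10.0.30.35/' \
    config/elasticsearch.yml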
·
6 JVM tuning
vim config/jvm.options
-Xms4g
-Xmx4g
-XX:HeapDumpPath=/data/elasticsearch/jvm/heapdump
-XX:ErrorFile=/data/elasticsearch/logs/hs_err_pid%p.log
-Xloggc:/data/elasticsearch/logs/gc.log
·
7 Declare the temp directory variable
echo "export ES_TMPDIR=/data/elasticsearch/temp/" >> /etc/profile
source /etc/profile
·
8 Create the required paths and set ownership
mkdir -p /data/elasticsearch/{es_data,logs,temp}
mkdir -p /data/elasticsearch/jvm/heapdump
chown -R esuser:esuser /data/elasticsearch
mkdir /usr/local/elasticsearch-7.9.1/temp
chown -R esuser:esuser /usr/local/elasticsearch-7.9.1/
·
9 Start Elasticsearch
su - esuser
cd /usr/local/elasticsearch-7.9.1
./bin/elasticsearch -d -p temp/elasticsearch.pid
·
10 Once all nodes are up, set the built-in user passwords on the master node
./bin/elasticsearch-setup-passwords interactive
future versions of Elasticsearch will require Java 11; your Java version from [/usr/local/jdk1.8.0_281/jre] does not meet this requirement
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
·
11 Verify the cluster
PS: with X-Pack enabled, requests require authentication; use the elastic user.
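A quick check from the command line, as a sketch (curl prompts for the password set above):
curl -u elastic 'http://10.0.30.34:9200/_cluster/health?pretty'    # status should be "green"
curl -u elastic 'http://10.0.30.34:9200/_cat/nodes?v'              # all three nodes should be listed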



·
Deploying Kibana
See the official documentation for details.
·
1 Download & extract Kibana
tar -xvf kibana-7.9.1-linux-x86_64.tar.gz -C /usr/local
·
2 Edit the configuration file
cd /usr/local/kibana-7.9.1-linux-x86_64
vim config/kibana.yml
server.port: 5601
server.host: "172.168.30.30"
elasticsearch.hosts: ["http://10.0.30.34:9200","http://10.0.30.35:9200","http://10.0.30.36:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "111111"
pid.file: /usr/local/kibana-7.9.1-linux-x86_64/temp/kibana.pid
i18n.locale: "zh-CN"
·
3 Create the kibana user and required directory, and set ownership
useradd -u 5601 kibana
mkdir /usr/local/kibana-7.9.1-linux-x86_64/temp
chown -R kibana:kibana /usr/local/kibana-7.9.1-linux-x86_64
·
4 Start Kibana
su - kibana
cd /usr/local/kibana-7.9.1-linux-x86_64
./bin/kibana
PS: if startup reports the error "The Reporting plugin encountered issues launching Chromium in a self-test. You may have trouble generating reports.", install the dependency packages per the official documentation:
yum -y install ipa-gothic-fonts xorg-x11-fonts-100dpi xorg-x11-fonts-75dpi xorg-x11-utils xorg-x11-fonts-cyrillic xorg-x11-fonts-Type1 xorg-x11-fonts-misc fontconfig freetype
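./bin/kibana runs in the foreground. One minimal way to keep it alive after logout, as a sketch (systemd or supervisor, used elsewhere in this article, work just as well):
nohup ./bin/kibana >> temp/kibana.out 2>&1 &    # run as the kibana user from the install directory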
·
5 Verify the setup
URL: http://IP:5601 PS: with X-Pack enabled, authentication is required; log in as the elastic user.
·
Deploying Cerebro
GitHub: https://github.com/lmenezes/cerebro
Cerebro is an open-source Elasticsearch web administration tool built with Scala, the Play framework, AngularJS, and Bootstrap.
·
1 Download & extract Cerebro
tar -xvf cerebro-0.9.4.tgz -C /usr/local/
·
2 Configure the Java environment
tar -xvf jdk-8u281-linux-x64.tar.gz -C /usr/local/
ln -vs /usr/local/jdk1.8.0_281/bin/java* /usr/bin/
·
3 Edit the configuration file
cd /usr/local/cerebro-0.9.4
vim conf/application.conf
pidfile.path = /usr/local/cerebro-0.9.4/temp/cerebro.pid
rest.history.size = 100
hosts = [
  {
    host = "http://10.0.30.34:9200"
    name = "vlan30-log-collection-system"
    auth = {
      username = "elastic"
      password = "111111"
    }
    headers-whitelist = [ "x-proxy-user", "x-proxy-roles", "X-Forwarded-For" ]
  }
]
·
4 Create the required directory
mkdir /usr/local/cerebro-0.9.4/temp
·
5 Write a systemd unit
cat << "EOF" > /usr/lib/systemd/system/cerebro.service
[Unit]
Description=cerebro
Documentation=https://github.com/lmenezes/cerebro
After=network-online.target
Wants=network-online.target
[Service]
# application.conf is HOCON, not the KEY=VALUE format systemd expects in an
# EnvironmentFile; bin/cerebro loads conf/application.conf from its install
# directory on its own, so no EnvironmentFile is needed here.
ExecStart=/usr/local/cerebro-0.9.4/bin/cerebro
KillMode=process
Restart=on-failure
RestartSec=42s
[Install]
WantedBy=multi-user.target
EOF
·
6 Start Cerebro
systemctl daemon-reload
systemctl start cerebro && systemctl enable cerebro
·
7 Access Cerebro & connect to Elasticsearch
URL: http://IP:9000
·
Deploying Kafka-eagle
Official site: https://www.kafka-eagle.org/ Download: http://download.kafka-eagle.org/ GitHub: https://github.com/smartloli/kafka-eagle
·
Environment setup
1 Install MySQL
Steps omitted; if needed, see the separate article on installing MariaDB from binary packages.
·
2 Create the ke database
CREATE DATABASE ke DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
·
3 Create and grant the kafka_eagle user
grant all on ke.* to kafka_eagle@"10.0.30.%" identified by "123456";
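Optionally verify the account from one of the 10.0.30.x hosts, as a sketch (assumes the mysql client is installed):
mysql -h 10.0.30.30 -u kafka_eagle -p123456 -e 'SHOW DATABASES LIKE "ke";'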
·
4 Kafka's JMX port must be open, otherwise no monitoring data can be collected
PS: already enabled when deploying the Kafka cluster (export JMX_PORT="8888" in kafka-server-start.sh).
·
Installing Kafka-eagle
·
1 Configure the Java environment
Steps omitted; already configured when installing Cerebro.
·
2 Download & extract Kafka-eagle
tar -xvf kafka-eagle-bin-2.0.5.tar.gz
cd kafka-eagle-bin-2.0.5
tar -xvf kafka-eagle-web-2.0.5-bin.tar.gz -C /usr/local/
cd /usr/local/
mv kafka-eagle-web-2.0.5 kafka-eagle
·
3 Configure the Kafka-eagle environment
cat << "EOF" > /etc/profile.d/kafka-eagle.sh
export KE_HOME=/usr/local/kafka-eagle
export PATH=$PATH:$KE_HOME/bin
EOF
source /etc/profile
·
4 Edit the Kafka-eagle configuration file
cd /usr/local/kafka-eagle/
vim conf/system-config.properties
kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=10.0.30.31:2181,10.0.30.32:2181,10.0.30.33:2181
cluster1.kafka.eagle.broker.size=20
kafka.zk.limit.size=32
kafka.eagle.webui.port=8048
cluster1.kafka.eagle.offset.storage=kafka
kafka.eagle.metrics.charts=true
kafka.eagle.metrics.retain=15
kafka.eagle.topic.token=keadmin
kafka.eagle.driver=com.mysql.cj.jdbc.Driver
kafka.eagle.url=jdbc:mysql://10.0.30.30:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
kafka.eagle.username=kafka_eagle
kafka.eagle.password=123456
·
5 Start Kafka-eagle
cd /usr/local/kafka-eagle
chmod +x bin/ke.sh
./bin/ke.sh start

·
6 Access Kafka-eagle
URL: http://IP:8048/
·
Deploying Filebeat
See the official documentation for details.
·
1 Download & extract Filebeat
tar -xvf filebeat-7.9.1-linux-x86_64.tar.gz -C /usr/local/
·
2 Write the configuration file (any filename works)
PS: the logs to collect live on multiple nodes under /app/node1-8080/logs/*.log and /app/node2-8090/logs/*.log, so each input carries a fields block with project and label: project is used in the output to route by project, and label is used in Kibana to tell the backend nodes apart, which makes errors easier to pinpoint.
cd /usr/local/filebeat-7.9.1-linux-x86_64
vi winstar-vehicle-filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /app/node1-8080/logs/winstar-vehicle-*.log
  fields:
    project: winstar-vehicle
    label: 71-8080
  # fields stay under the "fields" key (fields_under_root defaults to false);
  # the Kafka topic below and the Logstash conditionals rely on that
  multiline.pattern: '^202'
  multiline.negate: true
  multiline.match: after
- type: log
  enabled: true
  paths:
    - /app/node2-8090/logs/winstar-vehicle-*.log
  fields:
    project: winstar-vehicle
    label: 71-8090
  multiline.pattern: '^202'
  multiline.negate: true
  multiline.match: after

output.kafka:
  enabled: true
  hosts: ["10.0.30.31:9092", "10.0.30.32:9092", "10.0.30.33:9092"]
  topic: '%{[fields.project]}-topic'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  compression_level: 4
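Before starting, the file and the Kafka connectivity can be validated with Filebeat's built-in checks:
./filebeat test config -c winstar-vehicle-filebeat.yml    # syntax check
./filebeat test output -c winstar-vehicle-filebeat.yml    # probes each Kafka broker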
·
3 Start Filebeat
./filebeat -e -c winstar-vehicle-filebeat.yml
PS: during testing, if you want Filebeat to re-read the logs from the beginning repeatedly, look in the data/registry directory under your Filebeat path; it records the read offset for each file. It contains two files, log.json and meta.json; deleting log.json (with Filebeat stopped) resets the offsets.
·
4 Check Kafka-eagle to confirm the topic is receiving data.

5 Manage the Filebeat process with supervisor
yum -y install supervisor
vim /etc/supervisord.d/filebeat.ini
[program:filebeat]
command=/usr/local/filebeat-7.9.1-linux-x86_64/filebeat -e -c /usr/local/filebeat-7.9.1-linux-x86_64/winstar-vehicle-filebeat.yml
directory=/usr/local/filebeat-7.9.1-linux-x86_64/
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/filebeat.log
redirect_stderr=true
user=root
systemctl start supervisord && systemctl enable supervisord
supervisorctl status
·
Deploying Logstash
See the official documentation for details.
·
1 Download & extract Logstash
tar -xvf logstash-7.9.1.tar.gz -C /usr/local/
·
2 Configure the Java environment
tar -xvf jdk-8u281-linux-x64.tar.gz -C /usr/local/
ln -vs /usr/local/jdk1.8.0_281/bin/java* /usr/bin/
·
3 Write the configuration file (any filename works)
cd /usr/local/logstash-7.9.1/
vim config/winstar-vehicle-logstash.conf
input {
  kafka {
    bootstrap_servers => "10.0.30.31:9092,10.0.30.32:9092,10.0.30.33:9092"
    topics => "winstar-vehicle-topic"
    group_id => "vlan30-logstash"
    decorate_events => true
    consumer_threads => 3
    auto_offset_reset => "earliest"
    codec => "json"
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}
·
4 Start Logstash
cd /usr/local/logstash-7.9.1
./bin/logstash -f config/winstar-vehicle-logstash.conf
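Optionally syntax-check the pipeline before launching it:
./bin/logstash -f config/winstar-vehicle-logstash.conf --config.test_and_exit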
·
5 Once the output looks right, change the config to write to Elasticsearch
PS: these Java logs are only used by developers for troubleshooting, a simple use case, so no filter block is needed.
input {
  kafka {
    bootstrap_servers => "10.0.30.31:9092,10.0.30.32:9092,10.0.30.33:9092"
    topics => "winstar-vehicle-topic"
    group_id => "vlan30-logstash"
    decorate_events => true
    consumer_threads => 3
    auto_offset_reset => "earliest"
    codec => "json"
  }
}
output {
  if [fields][project] == "winstar-vehicle" {
    elasticsearch {
      hosts => ["10.0.30.34:9200","10.0.30.35:9200","10.0.30.36:9200"]
      index => "log-winstar-vehicle-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "vlan30-elastic"
    }
  }
}
·
Extra: a custom index template
PS: this template is a copy of the default logstash template with only the shard and replica counts changed.
GET _template/logstash
...
PUT _template/logstash-new
{
  "order" : 0,
  "version" : 60001,
  "index_patterns" : [
    "log-*"
  ],
  "settings" : {
    "index" : {
      "number_of_shards" : "3",
      "refresh_interval" : "5s",
      "number_of_replicas" : "0"
    }
  },
  "mappings" : {
  ......
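To confirm the new template is registered and matches the log-* pattern, a quick check (sketch):
curl -u elastic 'http://10.0.30.34:9200/_template/logstash-new?pretty'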
·
6 Check Cerebro to confirm the index was created

7 Finally, view the logs in Kibana

8 Manage the Logstash process with supervisor
yum -y install supervisor
vim /etc/supervisord.d/logstash.ini
[program:logstash]
command=/usr/local/logstash-7.9.1/bin/logstash -f /usr/local/logstash-7.9.1/config/winstar-vehicle-logstash.conf
directory=/usr/local/logstash-7.9.1
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/logstash.log
redirect_stderr=true
user=root
systemctl start supervisord && systemctl enable supervisord
supervisorctl status
With that, the ELK + Kafka deployment is complete.
·
Extra: configuration files for collecting Nginx logs
·
Configure Nginx to log in JSON format
......
http {
    map $http_x_forwarded_for $clientRealIp {
        "" $remote_addr;
        ~^(?P<firstAddr>[0-9\.]+),?.*$ $firstAddr;
    }
    ......
    log_format main_json '{"accessip_list":"$proxy_add_x_forwarded_for","client_ip":"$clientRealIp","http_host":"$host","@timestamp":"$time_iso8601","method":"$request_method","url":"$request_uri","status":"$status","http_referer":"$http_referer","body_bytes_sent":"$body_bytes_sent","request_time":"$request_time","http_user_agent":"$http_user_agent","total_bytes_sent":"$bytes_sent","server_ip":"$server_addr"}';
    ......
    location /test1 {
        root html;
        index index.html;
        access_log /var/log/nginx/test1-access.log main_json;
    }
    location /test2 {
        root html;
        index index.html;
        access_log /var/log/nginx/test2-access.log main_json;
    }
    ......
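After editing, validate the configuration and reload Nginx:
nginx -t && nginx -s reload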
·
A sample log entry
{
  "accessip_list": "172.168.20.253",
  "client_ip": "172.168.20.253",
  "http_host": "172.168.30.126",
  "@timestamp": "2021-06-17T17:15:39+08:00",
  "method": "GET",
  "url": "/test1/",
  "status": "200",
  "http_referer": "-",
  "body_bytes_sent": "16",
  "request_time": "0.000",
  "http_user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.106 Safari/537.36",
  "total_bytes_sent": "252",
  "server_ip": "172.168.30.126"
}
·
Filebeat configuration file
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/test1-access.log
  fields:
    service: nginx
    project: test1
  # as before, fields stay under the "fields" key (fields_under_root defaults
  # to false) so the topic reference and the Logstash conditionals work
- type: log
  enabled: true
  paths:
    - /var/log/nginx/test2-access.log
  fields:
    service: nginx
    project: test2

output.kafka:
  enabled: true
  hosts: ["10.0.30.31:9092", "10.0.30.32:9092", "10.0.30.33:9092"]
  topic: '%{[fields.service]}-topic'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  compression_level: 4
·
Logstash configuration file
input {
  kafka {
    bootstrap_servers => "10.0.30.31:9092,10.0.30.32:9092,10.0.30.33:9092"
    topics => "nginx-topic"
    group_id => "logstash"
    decorate_events => true
    consumer_threads => 3
    auto_offset_reset => "earliest"
    codec => "json"
  }
}
filter {
  json {
    source => "message"
    remove_field => "message"
  }
}
output {
  if [fields][project] == "test1" {
    elasticsearch {
      hosts => ["10.0.30.34:9200","10.0.30.35:9200","10.0.30.36:9200"]
      index => "nginx-test1-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "111111"
    }
  }
  if [fields][project] == "test2" {
    elasticsearch {
      hosts => ["10.0.30.34:9200","10.0.30.35:9200","10.0.30.36:9200"]
      index => "nginx-test2-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "111111"
    }
  }
}