[Big Data] ELK Deployment
Prepare three servers, A, B, and C (their clocks must be in sync).
Host A: jdk1.8, kafka, zookeeper, logstash
Host B: jdk1.8, kafka, zookeeper, es
Host C: jdk1.8, kafka, zookeeper, kibana
Managed host: filebeat

1. Disable the firewall and SELinux on all three servers
    systemctl stop firewalld
    systemctl disable firewalld
    setenforce 0
    sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
2. Update the hostname mapping on all three servers
    vi /etc/hosts
    Reboot all three servers:
    reboot
3. Install the JDK on all three servers
    rpm -qa | grep jdk
    rpm -e <name-of-old-version>
    rpm -ivh jdk-8u121-linux-x64.rpm
    java -version
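The requirement that all three clocks match is easy to miss, and the text above does not say how to enforce it. A minimal sketch, assuming CentOS hosts with yum and the chrony package available (neither is stated in the original):

    yum install -y chrony              # assumption: yum-based system with chrony in the repos
    systemctl enable --now chronyd     # start the time-sync daemon and enable it at boot
    chronyc tracking                   # confirm the clock is synchronized
    date                               # run on A, B and C and compare the output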
1. Unpack the zookeeper archive
    tar -zxvf apache-zookeeper-3.7.0.tar.gz -C /usr/local/
    cd /usr/local/
    mv apache-zookeeper-3.7.0/ zookeeper
    cd /usr/local/zookeeper/conf/
2. Edit the configuration file
    mv zoo_sample.cfg zoo.cfg
    vim zoo.cfg
    tickTime=2000                          # heartbeat interval between servers
    initLimit=10                           # max time (in ticks) allowed for initial connection to the leader
    syncLimit=5                            # max time (in ticks) for leader/follower synchronization
    dataDir=/usr/local/zookeeper/data      # zk data directory
    clientPort=2181                        # client listening port
    server.1=192.168.160.100:2888:3888     # server id, IP, cluster-communication port, leader-election port
    server.2=192.168.160.101:2888:3888
    server.3=192.168.160.102:2888:3888
3. Set the myid (create the data directory first)
    mkdir -p /usr/local/zookeeper/data
    Host A:
    echo '1' > /usr/local/zookeeper/data/myid
    Host B:
    echo '2' > /usr/local/zookeeper/data/myid
    Host C:
    echo '3' > /usr/local/zookeeper/data/myid
4. Start zookeeper on all three servers
    /usr/local/zookeeper/bin/zkServer.sh start
Mode: leader marks the leader node and Mode: follower a follower. A zk cluster normally has exactly one leader and several followers: the leader handles client read/write requests while the followers replicate data from it, and if the leader dies the followers elect a new one.
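To confirm the ensemble actually formed, query each node's role after starting it; this uses only the zkServer.sh script installed above:

    /usr/local/zookeeper/bin/zkServer.sh status    # prints Mode: leader or Mode: follower
    # run on A, B and C; exactly one of the three should report leader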
1. Install kafka on all three hosts
    tar xvf kafka_2.12-2.8.0.tar -C /usr/local
    cd /usr/local/
    mv kafka_2.12-2.8.0 kafka
2. Edit the configuration file
    cd /usr/local/kafka/config/
    vi server.properties
    Host A:
    broker.id=0        # unique broker id, analogous to zookeeper's myid
    listeners=PLAINTEXT://192.168.160.100:9092
    advertised.listeners=PLAINTEXT://192.168.160.100:9092
    zookeeper.connect=192.168.160.100:2181,192.168.160.101:2181,192.168.160.102:2181    # IP of every cluster node plus the clientPort set in the zookeeper config
    Host B:
    broker.id=1        # unique broker id, analogous to zookeeper's myid
    advertised.listeners=PLAINTEXT://192.168.160.101:9092
    zookeeper.connect=192.168.160.100:2181,192.168.160.101:2181,192.168.160.102:2181    # same zookeeper connection string on every broker
    Host C:
    broker.id=2        # unique broker id, analogous to zookeeper's myid
    advertised.listeners=PLAINTEXT://192.168.160.102:9092
    zookeeper.connect=192.168.160.100:2181,192.168.160.101:2181,192.168.160.102:2181    # same zookeeper connection string on every broker
3. Start kafka
    /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
4. Test the kafka cluster
    On host A, create a topic, list topics, and act as a producer:
    # --replication-factor 2 keeps 2 replicas
    /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.160.100:2181 --replication-factor 2 --partitions 3 --topic msg    # create the topic
    Created topic msg.
    /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.160.100:2181    # list topics
    msg
    /usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.160.100:9092 --topic msg    # act as a producer
    >test
    # On host B, act as a consumer and check host A's topic
    /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.160.101:9092 --topic msg --from-beginning    # read the consumed messages
    test
    # Check cluster state for a topic
    /usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.160.100:2181 --topic nginx
    # List topics
    /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.153.179:2181
    # Show a topic's policy
    ./kafka-configs.sh --zookeeper 192.168.201.100:2181,192.168.201.101:2181,192.168.201.102:2181 --describe --entity-type topics --entity-name nginx001
    # Delete a topic
    ./kafka-topics.sh --delete --topic nginx001 --zookeeper 192.168.201.100:2181,192.168.201.101:2181,192.168.201.102:2181
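A quick way to double-check that all three brokers registered themselves is to look them up in zookeeper with the zkCli.sh shell that ships with the zookeeper install above:

    /usr/local/zookeeper/bin/zkCli.sh -server 192.168.160.100:2181    # open a zk shell
    ls /brokers/ids        # should print [0, 1, 2], one id per broker
    quit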
1. Install filebeat on all three hosts
    tar -zxvf filebeat-6.5.2-linux-x86_64.tar.gz
    mv filebeat-6.5.2-linux-x86_64 /usr/local/filebeat
2. Edit the configuration file
    vi /usr/local/filebeat/filebeat.yml
    Host A:
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /usr/local/filebeat/log/*.log
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    setup.template.settings:
      index.number_of_shards: 3
    setup.kibana:
    output.kafka:
      enabled: true
      hosts: ["192.168.160.100:9092","192.168.160.101:9092","192.168.160.102:9092"]
      topic: msg
    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~
    Hosts B and C use the same configuration.
3. Start filebeat on all three hosts
    /usr/local/filebeat/filebeat &
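Before wiring in logstash, it is worth checking that filebeat events actually reach kafka. Appending a line to a watched file on host A and reading the msg topic back uses only paths and addresses configured above:

    mkdir -p /usr/local/filebeat/log                                   # the watched directory may not exist yet
    echo "filebeat test $(date)" >> /usr/local/filebeat/log/test.log   # produce one log line
    /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.160.100:9092 --topic msg --from-beginning
    # a JSON event containing the test line should appear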
1. Install logstash on host A
    tar -zxvf logstash-6.5.2.tar.gz
    mv logstash-6.5.2 /usr/local/logstash
2. Edit the configuration file
    vi /usr/local/logstash/config/logstash-sample.conf
    input {
        kafka {
            bootstrap_servers => ["192.168.4.124:9092,192.168.4.125:9092,192.168.4.125:9099"]
            group_id => "logstash"
            topics => ["nginx75-access","nginx75-error","server76-spring","server76-druid","server76-access"]
            decorate_events => true
            consumer_threads => 5
            codec => "json"
            auto_offset_reset => "latest"
        }
    }
    filter {
        json {
            source => "message"
        }
        mutate {
            remove_field => ["host","prospector","fields","input","log"]
        }
        grok {
            match => { "message" => "%{HTTPDATE:logtime}" }
            match => { "message" => "(?<logtime>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}.\d{3}).*\] %{LOGLEVEL:loglevel} " }
        }
        mutate {
            gsub => [ "message", "\\/", "/" ]    # unescape \/ back to /
        }
        mutate {
            convert => {
                "usdCnyRate" => "float"
                "futureIndex" => "float"
            }
        }
        date {
            match => [ "logtime", "YYYY-MM-dd HH:mm:ss.SSS", "dd/MMM/yyyy:HH:mm:ss Z" ]
            target => "@timestamp"
        }
    }
    output {
        elasticsearch {
            hosts => "192.168.4.124:9200"
            index => "%{[@metadata][topic]}-%{+YYYY-MM-dd}"
        }
    }
    # Add a custom nginx grok pattern
    vi /usr/local/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/nginx
    NGX %{IPORHOST:client_ip} (%{USER:ident}|- ) (%{USER:auth}|-) \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} (%{NOTSPACE:request}|-)(?: HTTP/%{NUMBER:http_version})?|-)" %{NUMBER:status} (?:%{NUMBER:bytes}|-) "(?:%{URI:referrer}|-)" "%{GREEDYDATA:agent}"
3. Start logstash
    /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash-sample.conf &
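Before launching the pipeline in the background, logstash's built-in syntax check can save a restart cycle; --config.test_and_exit parses the file and exits without processing any events:

    /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash-sample.conf --config.test_and_exit
    # prints "Configuration OK" when the pipeline file parses cleanly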
1. Install elasticsearch on host B
    tar -zxvf elasticsearch-6.5.2.tar.gz -C /usr/local/
    cd /usr/local/
    mv elasticsearch-6.5.2/ elasticsearch
2. Edit the configuration files
    vi /usr/local/elasticsearch/config/elasticsearch.yml
    cluster.name: test1
    node.name: node-1
    path.data: /usr/local/elasticsearch/data
    path.logs: /usr/local/elasticsearch/logs
    network.host: 192.168.160.101
    http.port: 9200
    vi /etc/sysctl.conf
    vm.max_map_count=655360
    sysctl -p
    vi /etc/security/limits.conf
    * soft nofile 65536
    * hard nofile 131072
    * soft nproc 2048
    * hard nproc 4096
3. Create the es user (elasticsearch refuses to run as root)
    useradd es
    passwd es
    chown -R es:es /usr/local/elasticsearch/
4. Start es as the es user
    su - es
    cd /usr/local/elasticsearch
    ./bin/elasticsearch &
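Once the node is up, a plain HTTP request against the address configured above confirms it is serving:

    curl http://192.168.160.101:9200/                         # basic node and version info
    curl http://192.168.160.101:9200/_cluster/health?pretty   # status should be green or yellow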
1. Install kibana on host C
    tar -zxvf kibana-6.5.2-linux-x86_64.tar.gz -C /usr/local/
    cd /usr/local/
    mv kibana-6.5.2-linux-x86_64/ kibana
2. Edit the configuration file
    vi /usr/local/kibana/config/kibana.yml
    server.port: 5601
    server.host: "192.168.160.102"
    elasticsearch.url: "http://192.168.160.101:9200"
3. Start kibana
    /usr/local/kibana/bin/kibana
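To verify kibana is serving, hit its status endpoint (present in kibana 6.x) or open the UI in a browser:

    curl -I http://192.168.160.102:5601/api/status    # HTTP 200 means kibana is up
    # or browse to http://192.168.160.102:5601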
On the managed host (clock in sync with the cluster), update the hosts file:
    vi /etc/hosts
1. Install nginx (this package was prepared in advance; substitute your own)
    unzip nginx.zip
    mv nginx /usr/local/nginx
2. Start nginx
    /usr/local/nginx/sbin/nginx
3. Install filebeat
    tar -zxvf filebeat-6.5.2-linux-x86_64.tar.gz -C /usr/local
    mv /usr/local/filebeat-6.5.2-linux-x86_64 /usr/local/filebeat
4. Edit the filebeat configuration file
    vi /usr/local/filebeat/filebeat.yml
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /usr/local/nginx/logs/access.log
      fields:
        log_topics: nginx002
    output.kafka:
      enabled: true
      hosts: ["192.168.201.100:9092","192.168.201.101:9092","192.168.201.102:9092"]
      topic: nginx002
    filebeat.config.modules:
      path: /usr/local/filebeat/modules.d/*.yml
      reload.enabled: true
5. Start filebeat
    /usr/local/filebeat/filebeat &
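As a final end-to-end check, generate some nginx traffic and confirm the events flow into the topic. This assumes nginx listens on the default port 80, which the steps above do not state:

    curl http://localhost/    # assumption: nginx on port 80; writes one line to access.log
    /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.201.100:9092 --topic nginx002 --from-beginning
    # the access-log event should appear; note that for it to reach elasticsearch,
    # nginx002 would also need to be added to the topics list in logstash-sample.conf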