1. Introduction
2. Architecture
3. Installation
| Hostname | hadoop100 | hadoop101 | hadoop102 |
| --- | --- | --- | --- |
| IP | 192.168.100.100 | 192.168.100.101 | 192.168.100.102 |
| Services | jdk8 | jdk8 | jdk8 |
| Services | zk Server | zk Server | zk Server |
| Services | kafka | kafka | kafka |
Extract the archive
tar -zxvf kafka_2.12-2.8.0.tgz -C /opt/software
Rename the directory
cd /opt/software
mv kafka_2.12-2.8.0 kafka
Create the broker configuration file (note: either create a new one as below, or edit the existing config/server.properties with vim)
touch /opt/software/kafka/config/kafka.properties
Create the data directory
mkdir -p /opt/software/kafka/data
vim /opt/software/kafka/config/kafka.properties
Configuration contents:
# Listener address of this broker (legacy form; see note below)
port=9092
host.name=hadoop100
# Must be unique per broker: hadoop101 -> 101, hadoop102 -> 102
broker.id=100
# Allow topics to actually be deleted via the CLI
delete.topic.enable=true
# Thread pools for network requests and disk I/O
num.network.threads=3
num.io.threads=8
# Socket buffer sizes and maximum request size
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
# Where message log segments are stored
log.dirs=/opt/software/kafka/data
# Defaults for new topics and for log recovery at startup
num.partitions=1
num.recovery.threads.per.data.dir=1
# Retain messages for 7 days
log.retention.hours=168
# ZooKeeper ensemble; the /kafka chroot keeps Kafka's znodes in one subtree
zookeeper.connect=hadoop100:2181,hadoop101:2181,hadoop102:2181/kafka
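`host.name` and `port` are legacy properties; on recent Kafka releases the preferred way to express the same thing is the `listeners` setting (shown here for hadoop100, to be adjusted per host):

```properties
# Equivalent modern form of the host.name/port pair above
listeners=PLAINTEXT://hadoop100:9092
```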
Distribute to the other two machines, then update host.name and broker.id on each
scp -r /opt/software/kafka hadoop101:/opt/software/
scp -r /opt/software/kafka hadoop102:/opt/software/
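The per-host edits can be scripted with sed instead of done by hand. A sketch, where `set_broker_identity` is a hypothetical helper and the hostname-to-id convention (hadoopNNN gets broker.id NNN) follows the table above:

```shell
# set_broker_identity FILE HOST: rewrite host.name and broker.id in a
# kafka.properties file so that hadoopNNN gets broker.id NNN.
set_broker_identity() {
    local file=$1 host=$2
    local id=${host#hadoop}            # hadoop101 -> 101
    sed -i \
        -e "s/^host\.name=.*/host.name=$host/" \
        -e "s/^broker\.id=.*/broker.id=$id/" \
        "$file"
}

# On the real cluster, run it remotely over ssh (passwordless ssh is
# assumed, as in the start/stop script below), e.g.:
#   ssh hadoop101 "$(declare -f set_broker_identity); \
#     set_broker_identity /opt/software/kafka/config/kafka.properties hadoop101"
```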
(All machines) Add environment variables: vim /etc/profile
export KAFKA_HOME=/opt/software/kafka
export PATH=$PATH:$KAFKA_HOME/bin
Sync the environment file to the other machines (rsync needs a destination)
rsync -av /etc/profile hadoop101:/etc/profile
rsync -av /etc/profile hadoop102:/etc/profile
Apply the environment variables (note: sourcing over ssh only affects that shell; any new login session reads /etc/profile automatically)
/opt/script/all.sh "source /etc/profile"
Make sure the ZooKeeper cluster is already running first; see the ZooKeeper cluster setup guide
Create a start/stop script: vim /opt/script/kafka.sh
#!/bin/bash
# Start/stop all Kafka brokers in the cluster over ssh.

kafka_start(){
    for i in hadoop100 hadoop101 hadoop102
    do
        echo "****************** $i start *********************"
        ssh $i "source /etc/profile && /opt/software/kafka/bin/kafka-server-start.sh -daemon /opt/software/kafka/config/kafka.properties"
    done
}

kafka_stop(){
    for i in hadoop100 hadoop101 hadoop102
    do
        echo "========== $i stop =========="
        ssh $i "/opt/software/kafka/bin/kafka-server-stop.sh"
    done
}

case "$1" in
start)
    kafka_start
    ;;
stop)
    kafka_stop
    ;;
restart)
    kafka_stop
    sleep 1
    kafka_start
    ;;
*)
    echo "[ERROR - invalid argument]: usage: $0 start|stop|restart"
    ;;
esac
Make the script executable (777 grants more than needed; execute permission is enough)
chmod +x /opt/script/kafka.sh
Verify
Run jps; all three machines should show a Kafka process
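Beyond checking jps, a quick end-to-end smoke test can exercise the brokers. A sketch, assuming the cluster is up and KAFKA_HOME/bin is on PATH; the topic name `smoke-test` is arbitrary (Kafka 2.8's CLI tools accept --bootstrap-server):

```shell
# kafka_smoke_test: create a fully replicated topic, produce one
# message, and consume it back. Run only after the cluster is started.
kafka_smoke_test() {
    local bs=hadoop100:9092

    # Topic spread and replicated across all three brokers
    kafka-topics.sh --bootstrap-server "$bs" --create \
        --topic smoke-test --partitions 3 --replication-factor 3

    # Produce a single message
    echo hello | kafka-console-producer.sh --bootstrap-server "$bs" \
        --topic smoke-test

    # Read it back (exits after one message)
    kafka-console-consumer.sh --bootstrap-server "$bs" \
        --topic smoke-test --from-beginning --max-messages 1
}
```

If the consumer prints the message back, the brokers, replication, and the ZooKeeper connection are all working.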
4. Shell Commands
5. Operating ZooKeeper from Java
6. Notes